diff --git a/.gitignore b/.gitignore
index b559977b69..1109e1a2f6 100644
--- a/.gitignore
+++ b/.gitignore
@@ -9,6 +9,7 @@ modules-dev/
 pkg/
 vendor/
 website/.vagrant
+website/.bundle
 website/build
 website/node_modules
 .vagrant/
diff --git a/.travis.yml b/.travis.yml
index c36571ca10..e600013027 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -4,7 +4,6 @@ language: go
 
 go:
   - 1.5
-  - tip
 
 install: make updatedeps
 
diff --git a/CHANGELOG.md b/CHANGELOG.md
index 467c352d26..b1eb5a7f55 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,9 +1,184 @@
-## 0.6.8 (Unreleased)
+## 0.6.10 (Unreleased)
+
+BACKWARDS INCOMPATIBILITIES:
+
+  * The `-module-depth` flag available on `plan`, `apply`, `show`, and `graph` now defaults to `-1`, causing
+    resources within modules to be expanded in command output. This is only a cosmetic change; it does not affect
+    any behavior.
+  * This release includes a bugfix for `$${}` interpolation escaping. These strings are now properly converted to `${}`
+    during interpolation. This may cause diffs on existing configurations in certain cases.
+
+FEATURES:
+
+  * **New resource: `azurerm_cdn_endpoint`** [GH-4759]
+  * **New resource: `azurerm_cdn_profile`** [GH-4740]
+  * **New resource: `azurerm_network_security_rule`** [GH-4586]
+  * **New resource: `azurerm_subnet`** [GH-4595]
+  * **New resource: `azurerm_network_interface`** [GH-4598]
+  * **New resource: `azurerm_route_table`** [GH-4602]
+  * **New resource: `azurerm_route`** [GH-4604]
+  * **New resource: `azurerm_storage_account`** [GH-4698]
+  * **New resource: `aws_lambda_alias`** [GH-4664]
+  * **New resource: `aws_redshift_cluster`** [GH-3862]
+  * **New resource: `aws_redshift_security_group`** [GH-3862]
+  * **New resource: `aws_redshift_parameter_group`** [GH-3862]
+  * **New resource: `aws_redshift_subnet_group`** [GH-3862]
+  * **New resource: `docker_network`** [GH-4483]
+  * **New resource: `docker_volume`** [GH-4483]
+  * **New resource: `google_sql_user`** [GH-4669]
+
+IMPROVEMENTS:
+
+  * core: Add `sha256()` interpolation function [GH-4704]
+  * core: Validate lifecycle keys to show helpful error messages when they are mistyped [GH-4745]
+  * core: Default `module-depth` parameter to `-1`, which expands resources within modules in command output [GH-4763]
+  * provider/aws: Add new parameters `az_mode` and `availability_zone(s)` in ElastiCache [GH-4631]
+  * provider/aws: Allow ap-northeast-2 (Seoul) as valid region [GH-4637]
+  * provider/aws: Limit SNS Topic Subscription protocols [GH-4639]
+  * provider/aws: Add support for configuring logging on `aws_s3_bucket` resources [GH-4482]
+  * provider/aws: Add AWS Classiclink for AWS VPC resource [GH-3994]
+  * provider/aws: Support new AWS Route53 HealthCheck additions [GH-4564]
+  * provider/aws: Store instance state [GH-3261]
+  * provider/aws: Add support for updating ELB availability zones and subnets [GH-4597]
+  * provider/aws: Enable specifying aws s3 redirect protocol [GH-4098]
+  * provider/aws: Added support for `encrypted` on `ebs_block_devices` in Launch Configurations [GH-4481]
+  * provider/aws: Add support for creating Managed Microsoft Active Directory
+    and Directory Connectors [GH-4388]
+  * provider/aws: Mark some `aws_db_instance` fields as optional [GH-3138]
+  * provider/digitalocean: Add support for reassigning `digitalocean_floating_ip` resources [GH-4476]
+  * provider/dme: Add support for Global Traffic Director locations on `dme_record` resources [GH-4305]
+  * provider/docker: Add support for adding host entries on `docker_container` resources [GH-3463]
+  * 
provider/docker: Add support for mounting named volumes on `docker_container` resources [GH-4480] + * provider/google: Add content field to bucket object [GH-3893] + * provider/google: Add support for `named_port` blocks on `google_compute_instance_group_manager` resources [GH-4605] + * provider/openstack: Add "personality" support to instance resource [GH-4623] + * provider/packet: Handle external state changes for Packet resources gracefully [GH-4676] + * provider/tls: `tls_private_key` now exports attributes with public key in both PEM and OpenSSH format [GH-4606] + * state/remote: Allow KMS Key Encryption to be used with S3 backend [GH-2903] + +BUG FIXES: + + * core: Fix handling of literals with escaped interpolations `$${var}` [GH-4747] + * core: Fix diff mismatch when RequiresNew field and list both change [GH-4749] + * core: Respect module target path argument on `terraform init` [GH-4753] + * core: Write planfile even on empty plans [GH-4766] + * core: Add validation error when output is missing value field [GH-4762] + * core: Fix improper handling of orphan resources when targeting [GH-4574] + * config: Detect a specific JSON edge case and show a helpful workaround [GH-4746] + * provider/openstack: Ensure valid Security Group Rule attribute combination [GH-4466] + * provider/openstack: Don't put fixed_ip in port creation request if not defined [GH-4617] + * provider/google: Clarify SQL Database Instance recent name restriction [GH-4577] + * provider/google: Split Instance network interface into two fields [GH-4265] + * provider/aws: Error with empty list item on security group [GH-4140] + * provider/aws: Trap Instance error from mismatched SG IDs and Names [GH-4240] + * provider/aws: EBS optimised to force new resource in AWS Instance [GH-4627] + * provider/aws: Wait for NACL rule to be visible [GH-4734] + * provider/aws: `default_result` on `aws_autoscaling_lifecycle_hook` resources is now computed [GH-4695] + * provider/mailgun: Handle the fact that the domain destroy API is eventually consistent [GH-4777] + * provider/template: Fix race causing sporadic crashes in template_file with count > 1 [GH-4694] + * provider/template: Add support for updating `template_cloudinit_config` resources [GH-4757] + +## 0.6.9 (January 8, 2016) + +FEATURES: + + * **New provider: `vcd` - VMware vCloud Director** [GH-3785] + * **New provider: `postgresql` - Create PostgreSQL databases and roles** [GH-3653] + * **New provider: `chef` - Create chef environments, roles, etc** [GH-3084] + * **New provider: `azurerm` - Preliminary support for Azure Resource Manager** [GH-4226] + * **New provider: `mysql` - Create MySQL databases** [GH-3122] + * **New resource: `aws_autoscaling_schedule`** [GH-4256] + * **New resource: `aws_nat_gateway`** [GH-4381] + * **New resource: `aws_network_acl_rule`** [GH-4286] + * **New resources: `aws_ecr_repository` and `aws_ecr_repository_policy`** [GH-4415] + * **New resource: `google_pubsub_topic`** [GH-3671] + * **New resource: `google_pubsub_subscription`** [GH-3671] + * **New resource: `template_cloudinit_config`** [GH-4095] + * **New resource: `tls_locally_signed_cert`** [GH-3930] + * **New remote state backend: `artifactory`** [GH-3684] + +IMPROVEMENTS: + + * core: Change set internals for performance improvements [GH-3992] + * core: Support HTTP basic auth in consul remote state [GH-4166] + * core: Improve error message on resource arity mismatch [GH-4244] + * core: Add support for unary operators + and - to the interpolation syntax [GH-3621] + * core: Add SSH agent 
support for Windows [GH-4323]
+  * core: Add `sha1()` interpolation function [GH-4450]
+  * provider/aws: Add `placement_group` as an option for `aws_autoscaling_group` [GH-3704]
+  * provider/aws: Add support for DynamoDB Table StreamSpecifications [GH-4208]
+  * provider/aws: Add `name_prefix` to Security Groups [GH-4167]
+  * provider/aws: Add support for removing nodes from `aws_elasticache_cluster` [GH-3809]
+  * provider/aws: Add support for `skip_final_snapshot` to `aws_db_instance` [GH-3853]
+  * provider/aws: Adding support for Tags to DB SecurityGroup [GH-4260]
+  * provider/aws: Adding Tag support for DB Param Groups [GH-4259]
+  * provider/aws: Fix issue with updated route ids for VPC Endpoints [GH-4264]
+  * provider/aws: Added measure_latency option to Route 53 Health Check resource [GH-3688]
+  * provider/aws: Validate IOPs for EBS Volumes [GH-4146]
+  * provider/aws: DB Subnet group arn output [GH-4261]
+  * provider/aws: Get full Kinesis streams view with pagination [GH-4368]
+  * provider/aws: Allow changing private IPs for ENIs [GH-4307]
+  * provider/aws: Retry MalformedPolicy errors due to newly created principals in S3 Buckets [GH-4315]
+  * provider/aws: Validate `name` on `db_subnet_group` against AWS requirements [GH-4340]
+  * provider/aws: Wait for ASG capacity on update [GH-3947]
+  * provider/aws: Add validation for ECR repository name [GH-4431]
+  * provider/cloudstack: Performance improvements [GH-4150]
+  * provider/docker: Add support for setting the entry point on `docker_container` resources [GH-3761]
+  * provider/docker: Add support for setting the restart policy on `docker_container` resources [GH-3761]
+  * provider/docker: Add support for setting memory, swap and CPU shares on `docker_container` resources [GH-3761]
+  * provider/docker: Add support for setting labels on `docker_container` resources [GH-3761]
+  * provider/docker: Add support for setting log driver and options on `docker_container` resources [GH-3761]
+  * provider/docker: Add support for setting network mode on `docker_container` resources [GH-4475]
+  * provider/heroku: Improve handling of Applications within an Organization [GH-4495]
+  * provider/vsphere: Add support for custom vm params on `vsphere_virtual_machine` [GH-3867]
+  * provider/vsphere: Rename vcenter_server config parameter to something clearer [GH-3718]
+  * provider/vsphere: Make allow_unverified_ssl configurable on the provider [GH-3933]
+  * provider/vsphere: Add folder handling for folder-qualified vm names [GH-3939]
+  * provider/vsphere: Change ip_address parameter for ipv6 support [GH-4035]
+  * provider/openstack: Increase instance timeout from 10 to 30 minutes [GH-4223]
+  * provider/google: Add `restart_policy` attribute to `google_managed_instance_group` [GH-3892]
+
+BUG FIXES:
+
+  * core: Skip provider input for deprecated fields [GH-4193]
+  * core: Fix issue which could cause fields that become empty to retain old values in the state [GH-3257]
+  * provider/docker: Fix an issue running with Docker Swarm by looking up containers by ID instead of name [GH-4148]
+  * provider/openstack: Better handling of load balancing resource state changes [GH-3926]
+  * provider/aws: Treat `INACTIVE` ECS cluster as deleted [GH-4364]
+  * provider/aws: Skip `source_security_group_id` determination logic for Classic ELBs [GH-4075]
+  * provider/aws: Fix issue destroying Route 53 zone/record if it no longer exists [GH-4198]
+  * provider/aws: Fix issue force destroying a versioned S3 bucket [GH-4168]
+  * provider/aws: Update DB Replica to honor storage type 
[GH-4155]
+  * provider/aws: Fix issue creating AWS RDS replicas across regions [GH-4215]
+  * provider/aws: Fix issue with Route53 and zero weighted records [GH-4427]
+  * provider/aws: Fix issue with iam_profile in aws_instance when a path is specified [GH-3663]
+  * provider/aws: Refactor AWS Authentication chain to fix issue with authentication and IAM [GH-4254]
+  * provider/aws: Fix issue with finding S3 Hosted Zone ID for eu-central-1 region [GH-4236]
+  * provider/aws: Fix missing AMI issue with Launch Configurations [GH-4242]
+  * provider/aws: Opsworks stack SSH key is write-only [GH-4241]
+  * provider/aws: Update VPC Endpoint to correctly set route table ids [GH-4392]
+  * provider/aws: Fix issue with ElasticSearch Domain `access_policies` always appearing changed [GH-4245]
+  * provider/aws: Fix issue with nil parameter group value causing panic in `aws_db_parameter_group` [GH-4318]
+  * provider/aws: Fix issue with Elastic IPs not recognizing when they have been unassigned manually [GH-4387]
+  * provider/aws: Use body or URL for all CloudFormation stack updates [GH-4370]
+  * provider/aws: Fix template_url/template_body conflict [GH-4540]
+  * provider/aws: Fix bug with changing ECS svc/ELB association [GH-4366]
+  * provider/azure: Update for [breaking change to upstream client library](https://github.com/Azure/azure-sdk-for-go/commit/68d50cb53a73edfeb7f17f5e86cdc8eb359a9528). [GH-4300]
+  * provider/digitalocean: Fix issue where a floating IP attached to a missing droplet causes a panic [GH-4214]
+  * provider/google: Prevent project metadata sshKeys from showing up and causing unnecessary diffs [GH-4512]
+  * provider/openstack: Handle volumes in "deleting" state [GH-4204]
+  * provider/rundeck: Tolerate Rundeck server not returning project name when reading a job [GH-4301]
+  * provider/vsphere: Create and attach additional disks before bootup [GH-4196]
+  * provider/openstack: Convert block_device from a Set to a List [GH-4288]
+  * provider/google: Terraform identifies deleted resources and handles them appropriately on Read [GH-3913]
+
+## 0.6.8 (December 2, 2015)
 
 FEATURES:
 
-  * **New resource: `digitalocean_floating_ip`** [GH-3748]
   * **New provider: `statuscake`** [GH-3340]
+  * **New resource: `digitalocean_floating_ip`** [GH-3748]
+  * **New resource: `aws_lambda_event_source_mapping`** [GH-4093]
 
 IMPROVEMENTS:
 
@@ -16,8 +191,11 @@ IMPROVEMENTS:
 
 BUG FIXES:
 
   * core: Fix a bug which prevented HEREDOC syntax being used in lists [GH-4078]
+  * core: Fix a bug which prevented HEREDOC syntax where the anchor ends in a number [GH-4128]
+  * core: Fix a bug which prevented HEREDOC syntax being used with Windows line endings [GH-4069]
   * provider/aws: Fix a bug which could result in a panic when reading EC2 metadata [GH-4024]
   * provider/aws: Fix issue recreating security group rule if it has been destroyed [GH-4050]
+  * provider/aws: Fix issue with some attributes in Spot Instance Requests returning as nil [GH-4132]
   * provider/aws: Fix issue where SPF records in Route 53 could show differences with no modification to the configuration [GH-4108]
   * provisioner/chef: Fix issue with path separators breaking the Chef provisioner on Windows [GH-4041]
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index f5554557f5..0b4e91c513 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -11,30 +11,98 @@ best way to contribute to the project, read on. This document will cover
 what we're looking for. By addressing all the points we're looking for,
 it raises the chances we can quickly merge or address your contributions.
 
+Specifically, we have provided checklists below for each type of issue and pull
+request that can happen on the project. These checklists represent everything
+we need to be able to review and respond quickly.
+
+## HashiCorp vs. Community Providers
+
+We separate providers out into what we call "HashiCorp Providers" and
+"Community Providers".
+
+HashiCorp providers are providers that we'll dedicate full-time resources to
+improving, supporting the latest features, and fixing bugs. These are providers
+we understand deeply and are confident we have the resources to manage
+ourselves.
+
+Community providers are providers where we depend on the community to
+contribute fixes and enhancements to improve. HashiCorp will run automated
+tests and ensure these providers continue to work, but will not dedicate
+full-time resources to add new features to these providers. These providers are
+available in official Terraform releases, but the functionality is primarily
+contributed.
+
+The current list of HashiCorp Providers is as follows:
+
+ * `aws`
+ * `azurerm`
+ * `google`
+
+Our testing standards are the same for both HashiCorp and Community providers,
+and HashiCorp runs full acceptance test suites for every provider nightly to
+ensure Terraform remains stable.
+
+We make the distinction between these two types of providers to help
+highlight the vast amount of community effort that goes into making Terraform
+great, and to help contributors better understand the role HashiCorp employees
+play in the various areas of the code base.
+
 ## Issues
 
-### Reporting an Issue
+### Issue Reporting Checklists
 
-* Make sure you test against the latest released version. It is possible
-  we already fixed the bug you're experiencing.
+We welcome issues of all kinds including feature requests, bug reports, and
+general questions. Below you'll find checklists with guidelines for well-formed
+issues of each type.
 
-* Provide steps to reproduce the issue, along with your `.tf` files,
-  with secrets removed, so we can try to reproduce it. Without this,
-  it makes it much harder to fix the issue.
+#### Bug Reports
 
-* If you experienced a panic, please create a [gist](https://gist.github.com)
-  of the *entire* generated crash log for us to look at. Double check
-  no sensitive items were in the log.
+ - [ ] __Test against latest release__: Make sure you test against the latest
+   released version. It is possible we already fixed the bug you're experiencing.
 
-* Respond as promptly as possible to any questions made by the Terraform
-  team to your issue. Stale issues will be closed.
+ - [ ] __Search for possible duplicate reports__: It's helpful to keep bug
+   reports consolidated to one thread, so do a quick search on existing bug
+   reports to check if anybody else has reported the same thing. You can scope
+   searches by the label "bug" to help narrow things down.
+
+ - [ ] __Include steps to reproduce__: Provide steps to reproduce the issue,
+   along with your `.tf` files, with secrets removed, so we can try to
+   reproduce it. Without this, it makes it much harder to fix the issue.
+
+ - [ ] __For panics, include `crash.log`__: If you experienced a panic, please
+   create a [gist](https://gist.github.com) of the *entire* generated crash log
+   for us to look at. Double check no sensitive items were in the log. 
+
+#### Feature Requests
+
+ - [ ] __Search for possible duplicate requests__: It's helpful to keep requests
+   consolidated to one thread, so do a quick search on existing requests to
+   check if anybody else has reported the same thing. You can scope searches by
+   the label "enhancement" to help narrow things down.
+
+ - [ ] __Include a use case description__: In addition to describing the
+   behavior of the feature you'd like to see added, it's helpful to also lay
+   out the reason why the feature would be important and how it would benefit
+   Terraform users.
+
+#### Questions
+
+ - [ ] __Search for answers in Terraform documentation__: We're happy to answer
+   questions in GitHub Issues, but it helps reduce issue churn and maintainer
+   workload if you work to find answers to common questions in the
+   documentation. Oftentimes Question issues result in documentation updates
+   to help future users, so if you don't find an answer, you can give us
+   pointers for where you'd expect to see it in the docs.
 
 ### Issue Lifecycle
 
 1. The issue is reported.
 
 2. The issue is verified and categorized by a Terraform collaborator.
-   Categorization is done via tags. For example, bugs are marked as "bugs".
+   Categorization is done via GitHub labels. We generally use a two-label
+   system of (1) issue/PR type, and (2) section of the codebase. Type is
+   usually "bug", "enhancement", "documentation", or "question", and section
+   can be any of the providers or provisioners or "core".
 
 3. Unless it is critical, the issue is left for a period of time (sometimes
    many weeks), giving outside contributors a chance to address the issue.
@@ -47,27 +115,401 @@ it raises the chances we can quickly merge or address your contributions.
    the issue tracker clean. The issue is still indexed and available for
    future viewers, or can be re-opened if necessary.
 
-## Setting up Go to work on Terraform
+## Pull Requests
 
-If you have never worked with Go before, you will have to complete the
-following steps in order to be able to compile and test Terraform (or
-use the Vagrantfile in this repo to stand up a dev VM).
+Thank you for contributing! Here you'll find information on what to include in
+your Pull Request to ensure it is accepted quickly.
 
-1. Install Go. Make sure the Go version is at least Go 1.4. Terraform will not work with anything less than
-   Go 1.4. On a Mac, you can `brew install go` to install Go 1.4.
+ * For pull requests that follow the guidelines, we expect to be able to review
+   and merge very quickly.
+ * Pull requests that don't follow the guidelines will be annotated with what
+   they're missing. A community or core team member may be able to swing around
+   and help finish up the work, but these PRs will generally hang out much
+   longer until they can be completed and merged.
 
-2. Set and export the `GOPATH` environment variable and update your `PATH`.
-   For example, you can add to your `.bash_profile`.
+### Pull Request Lifecycle
 
-   ```
-   export GOPATH=$HOME/Documents/golang
-   export PATH=$PATH:$GOPATH/bin
+1. You are welcome to submit your pull request for commentary or review before
+   it is fully completed. Please prefix the title of your pull request with
+   "[WIP]" to indicate this. It's also a good idea to include specific
+   questions or items you'd like feedback on.
+
+2. Once you believe your pull request is ready to be merged, you can remove any
+   "[WIP]" prefix from the title and a core team member will review. 
Follow
+   [the checklists below](#checklists-for-contribution) to help ensure that
+   your contribution will be merged quickly.
+
+3. One of Terraform's core team members will look over your contribution and
+   provide comments letting you know if there is anything left to do. We
+   do our best to provide feedback in a timely manner, but it may take some
+   time for us to respond.
+
+4. Once all outstanding comments and checklist items have been addressed, your
+   contribution will be merged! Merged PRs will be included in the next
+   Terraform release. The core team takes care of updating the CHANGELOG as
+   they merge.
+
+5. In rare cases, we might decide that a PR should be closed. We'll make sure
+   to provide clear reasoning when this happens.
+
+### Checklists for Contribution
+
+There are several different kinds of contribution, each of which has its own
+standards for a speedy review. The following sections describe guidelines for
+each type of contribution.
+
+#### Documentation Update
+
+Because [Terraform's website][website] is in the same repo as the code, it's
+easy for anybody to help us improve our docs.
+
+ - [ ] __Reasoning for docs update__: Including a quick explanation of why the
+   update is needed is helpful for reviewers.
+ - [ ] __Relevant Terraform version__: Is this update worth deploying to the
+   site immediately, or is it referencing an upcoming version of Terraform and
+   should get pushed out with the next release?
+
+#### Enhancement/Bugfix to a Resource
+
+Working on existing resources is a great way to get started as a Terraform
+contributor because you can work within existing code and tests to get a feel
+for what to do.
+
+ - [ ] __Acceptance test coverage of new behavior__: Existing resources each
+   have a set of [acceptance tests][acctests] covering their functionality.
+   These tests should exercise all the behavior of the resource. Whether you are
+   adding something or fixing a bug, the idea is to have an acceptance test that
+   fails if your code were to be removed. Sometimes it is sufficient to
+   "enhance" an existing test by adding an assertion or tweaking the config
+   that is used, but often it is better to add a new test. You can copy/paste an
+   existing test and follow the conventions you see there, modifying the test
+   to exercise the behavior of your code.
+ - [ ] __Documentation updates__: If your code makes any changes that need to
+   be documented, you should include those doc updates in the same PR. The
+   [Terraform website][website] source is in this repo and includes
+   instructions for getting a local copy of the site up and running if you'd
+   like to preview your changes.
+ - [ ] __Well-formed Code__: Do your best to follow existing conventions you
+   see in the codebase, and ensure your code is formatted with `go fmt`. (The
+   Travis CI build will fail if `go fmt` has not been run on incoming code.)
+   The PR reviewers can help out on this front, and may provide comments with
+   suggestions on how to improve the code.
+
+#### New Resource
+
+Implementing a new resource is a good way to learn more about how Terraform
+interacts with upstream APIs. There are plenty of examples to draw from in the
+existing resources, but you still get to implement something completely new.
+
+ - [ ] __Acceptance tests__: New resources should include acceptance tests
+   covering their behavior. See [Writing Acceptance
+   Tests](#writing-acceptance-tests) below for a detailed guide on how to
+   approach these. 
+ - [ ] __Documentation__: Each resource gets a page in the Terraform
+   documentation. The [Terraform website][website] source is in this
+   repo and includes instructions for getting a local copy of the site up and
+   running if you'd like to preview your changes. For a resource, you'll want
+   to add a new file in the appropriate place and add a link to the sidebar for
+   that page.
+ - [ ] __Well-formed Code__: Do your best to follow existing conventions you
+   see in the codebase, and ensure your code is formatted with `go fmt`. (The
+   Travis CI build will fail if `go fmt` has not been run on incoming code.)
+   The PR reviewers can help out on this front, and may provide comments with
+   suggestions on how to improve the code.
+
+#### New Provider
+
+Implementing a new provider gives Terraform the ability to manage resources in
+a whole new API. It's a larger undertaking, but brings major new functionality
+into Terraform.
+
+ - [ ] __Acceptance tests__: Each provider should include an acceptance test
+   suite, with tests for each resource covering its behavior. See [Writing
+   Acceptance Tests](#writing-acceptance-tests) below for a detailed guide on
+   how to approach these.
+ - [ ] __Documentation__: Each provider has a section in the Terraform
+   documentation. The [Terraform website][website] source is in this repo and
+   includes instructions for getting a local copy of the site up and running if
+   you'd like to preview your changes. For a provider, you'll want to add a new
+   index file and individual pages for each resource.
+ - [ ] __Well-formed Code__: Do your best to follow existing conventions you
+   see in the codebase, and ensure your code is formatted with `go fmt`. (The
+   Travis CI build will fail if `go fmt` has not been run on incoming code.)
+   The PR reviewers can help out on this front, and may provide comments with
+   suggestions on how to improve the code.
+
+#### Core Bugfix/Enhancement
+
+We are always happy when any developer is interested in diving into Terraform's
+core to help out! Here's what we look for in smaller Core PRs.
+
+ - [ ] __Unit tests__: Terraform's core is covered by hundreds of unit tests at
+   several different layers of abstraction. Generally the best place to start
+   is with a "Context Test". These are higher-level tests that interact
+   end-to-end with most of Terraform's core. They are divided into test files
+   for each major action (plan, apply, etc.). Getting a failing test is a great
+   way to prove out a bug report or a new enhancement; a minimal sketch of one
+   follows this checklist. With a context test in place, you can work on
+   implementation and lower level unit tests. Lower level tests are largely
+   context dependent, but the Context Tests are almost always part of core work.
+ - [ ] __Documentation updates__: If the core change involves anything that
+   needs to be reflected in our documentation, you can make those changes in
+   the same PR. The [Terraform website][website] source is in this repo and
+   includes instructions for getting a local copy of the site up and running if
+   you'd like to preview your changes.
+ - [ ] __Well-formed Code__: Do your best to follow existing conventions you
+   see in the codebase, and ensure your code is formatted with `go fmt`. (The
+   Travis CI build will fail if `go fmt` has not been run on incoming code.)
+   The PR reviewers can help out on this front, and may provide comments with
+   suggestions on how to improve the code.
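+
+For orientation, here is a minimal sketch of what a plan-action Context Test
+typically looks like. It follows the conventions used in
+`terraform/context_plan_test.go`; the fixture name and the expected-plan
+constant below are hypothetical placeholders you would adapt to your change.
+
+```go
+func TestContext2Plan_myEnhancement(t *testing.T) {
+	// Loads a configuration fixture; by convention this lives in
+	// terraform/test-fixtures/plan-my-enhancement/ (hypothetical name)
+	m := testModule(t, "plan-my-enhancement")
+	p := testProvider("aws")
+	p.DiffFn = testDiffFn
+	ctx := testContext2(t, &ContextOpts{
+		Module: m,
+		Providers: map[string]ResourceProviderFactory{
+			"aws": testProviderFuncFixed(p),
+		},
+	})
+
+	plan, err := ctx.Plan()
+	if err != nil {
+		t.Fatalf("err: %s", err)
+	}
+
+	// Assert on the rendered plan; a failing assertion here is a good way
+	// to prove out the bug or enhancement before starting implementation.
+	actual := strings.TrimSpace(plan.String())
+	expected := strings.TrimSpace(testTerraformPlanMyEnhancementStr)
+	if actual != expected {
+		t.Fatalf("bad:\n%s", actual)
+	}
+}
+```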
+
+#### Core Feature
+
+If you're interested in taking on a larger core feature, it's a good idea to
+get feedback early and often on the effort.
+
+ - [ ] __Early validation of idea and implementation plan__: Terraform's core
+   is complicated enough that there are often several ways to implement
+   something, each of which has different implications and tradeoffs. Working
+   through a plan of attack with the team before you dive into implementation
+   will help ensure that you're working in the right direction.
+ - [ ] __Unit tests__: Terraform's core is covered by hundreds of unit tests at
+   several different layers of abstraction. Generally the best place to start
+   is with a "Context Test". These are higher-level tests that interact
+   end-to-end with most of Terraform's core. They are divided into test files
+   for each major action (plan, apply, etc.). Getting a failing test is a great
+   way to prove out a bug report or a new enhancement. With a context test in
+   place, you can work on implementation and lower level unit tests. Lower
+   level tests are largely context dependent, but the Context Tests are almost
+   always part of core work.
+ - [ ] __Documentation updates__: If the core change involves anything that
+   needs to be reflected in our documentation, you can make those changes in
+   the same PR. The [Terraform website][website] source is in this repo and
+   includes instructions for getting a local copy of the site up and running if
+   you'd like to preview your changes.
+ - [ ] __Well-formed Code__: Do your best to follow existing conventions you
+   see in the codebase, and ensure your code is formatted with `go fmt`. (The
+   Travis CI build will fail if `go fmt` has not been run on incoming code.)
+   The PR reviewers can help out on this front, and may provide comments with
+   suggestions on how to improve the code.
+
+### Writing Acceptance Tests
+
+Terraform includes an acceptance test harness that does most of the repetitive
+work involved in testing a resource.
+
+#### Acceptance Tests Often Cost Money to Run
+
+Because acceptance tests create real resources, they often cost money to run.
+Because the resources only exist for a short period of time, the total amount
+of money required is usually relatively small. Nevertheless, we don't want
+financial limitations to be a barrier to contribution, so if you are unable to
+pay to run acceptance tests for your contribution, simply mention this in your
+pull request. We will happily accept "best effort" implementations of
+acceptance tests and run them for you on our side. This might mean that your PR
+takes a bit longer to merge, but it most definitely is not a blocker for
+contributions.
+
+#### Running an Acceptance Test
+
+Acceptance tests can be run using the `testacc` target in the Terraform
+`Makefile`. The individual tests to run can be controlled using a regular
+expression. Prior to running the tests, provider configuration details such as
+access keys must be made available as environment variables.
+
+For example, to run an acceptance test against the Azure Resource Manager
+provider, the following environment variables must be set:
+
+```sh
+export ARM_SUBSCRIPTION_ID=...
+export ARM_CLIENT_ID=...
+export ARM_CLIENT_SECRET=...
+export ARM_TENANT_ID=... 
+```
+
+Tests can then be run by specifying the target provider and a regular
+expression defining the tests to run:
+
+```sh
+$ make testacc TEST=./builtin/providers/azurerm TESTARGS='-run=TestAccAzureRMPublicIpStatic_update'
+==> Checking that code complies with gofmt requirements...
+go generate ./...
+TF_ACC=1 go test ./builtin/providers/azurerm -v -run=TestAccAzureRMPublicIpStatic_update -timeout 120m
+=== RUN TestAccAzureRMPublicIpStatic_update
+--- PASS: TestAccAzureRMPublicIpStatic_update (177.48s)
+PASS
+ok github.com/hashicorp/terraform/builtin/providers/azurerm 177.504s
+```
+
+Entire resource test suites can be targeted by using the naming convention to
+write the regular expression. For example, to run all tests of the
+`azurerm_public_ip` resource rather than just the update test, you can start
+testing like this:
+
+```sh
+$ make testacc TEST=./builtin/providers/azurerm TESTARGS='-run=TestAccAzureRMPublicIpStatic'
+==> Checking that code complies with gofmt requirements...
+go generate ./...
+TF_ACC=1 go test ./builtin/providers/azurerm -v -run=TestAccAzureRMPublicIpStatic -timeout 120m
+=== RUN TestAccAzureRMPublicIpStatic_basic
+--- PASS: TestAccAzureRMPublicIpStatic_basic (137.74s)
+=== RUN TestAccAzureRMPublicIpStatic_update
+--- PASS: TestAccAzureRMPublicIpStatic_update (180.63s)
+PASS
+ok github.com/hashicorp/terraform/builtin/providers/azurerm 318.392s
+```
+
+#### Writing an Acceptance Test
+
+Terraform has a framework for writing acceptance tests which minimises the
+amount of boilerplate code necessary to use common testing patterns. The entry
+point to the framework is the `resource.Test()` function.
+
+Tests are divided into `TestStep`s. Each `TestStep` proceeds by applying some
+Terraform configuration using the provider under test, and then verifying that
+results are as expected by making assertions using the provider API. It is
+common for a single test function to exercise both the creation of and updates
+to a single resource. Most tests follow a similar structure.
+
+1. Pre-flight checks are made to ensure that sufficient provider configuration
+   is available to be able to proceed - for example in an acceptance test
+   targeting AWS, `AWS_ACCESS_KEY_ID` and `AWS_SECRET_KEY` must be set prior
+   to running acceptance tests. This is common to all tests exercising a single
+   provider.
+
+Each `TestStep` is defined in the call to `resource.Test()`. Most assertion
+functions are defined out of band with the tests. This keeps the tests
+readable, and allows reuse of assertion functions across different tests of the
+same type of resource. The definition of a complete test looks like this:
+
+```go
+func TestAccAzureRMPublicIpStatic_update(t *testing.T) {
+	resource.Test(t, resource.TestCase{
+		PreCheck:     func() { testAccPreCheck(t) },
+		Providers:    testAccProviders,
+		CheckDestroy: testCheckAzureRMPublicIpDestroy,
+		Steps: []resource.TestStep{
+			resource.TestStep{
+				Config: testAccAzureRMVPublicIpStatic_basic,
+				Check: resource.ComposeTestCheckFunc(
+					testCheckAzureRMPublicIpExists("azurerm_public_ip.test"),
+				),
+			},
+		},
+	})
+}
+```
+
+When executing the test, the following steps are taken for each `TestStep`:
+
+1. The Terraform configuration required for the test is applied. This is
+   responsible for configuring the resource under test, and any dependencies it
+   may have. For example, to test the `azurerm_public_ip` resource, an
+   `azurerm_resource_group` is required. 
This results in configuration which
+   looks like this:
+
+   ```hcl
+   resource "azurerm_resource_group" "test" {
+       name     = "acceptanceTestResourceGroup1"
+       location = "West US"
+   }
+
+   resource "azurerm_public_ip" "test" {
+       name                         = "acceptanceTestPublicIp1"
+       location                     = "West US"
+       resource_group_name          = "${azurerm_resource_group.test.name}"
+       public_ip_address_allocation = "static"
+   }
    ```
 
-3. [Follow the development guide](https://github.com/hashicorp/terraform#developing-terraform)
+1. Assertions are run using the provider API. These use the provider API
+   directly rather than asserting against the resource state. For example, to
+   verify that the `azurerm_public_ip` described above was created
+   successfully, a test function like this is used:
 
-5. Make your changes to the Terraform source, being sure to run the basic
+   ```go
+   func testCheckAzureRMPublicIpExists(name string) resource.TestCheckFunc {
+       return func(s *terraform.State) error {
+           // Ensure we have enough information in state to look up in API
+           rs, ok := s.RootModule().Resources[name]
+           if !ok {
+               return fmt.Errorf("Not found: %s", name)
+           }
+
+           publicIPName := rs.Primary.Attributes["name"]
+           resourceGroup, hasResourceGroup := rs.Primary.Attributes["resource_group_name"]
+           if !hasResourceGroup {
+               return fmt.Errorf("Bad: no resource group found in state for public ip: %s", publicIPName)
+           }
+
+           conn := testAccProvider.Meta().(*ArmClient).publicIPClient
+
+           resp, err := conn.Get(resourceGroup, publicIPName, "")
+           if err != nil {
+               return fmt.Errorf("Bad: Get on publicIPClient: %s", err)
+           }
+
+           if resp.StatusCode == http.StatusNotFound {
+               return fmt.Errorf("Bad: Public IP %q (resource group: %q) does not exist", name, resourceGroup)
+           }
+
+           return nil
+       }
+   }
+   ```
+
+   Notice that the only information used from the Terraform state is the ID of
+   the resource - though in this case it is necessary to split the ID into
+   constituent parts in order to use the provider API. For computed properties,
+   we instead assert that the value saved in the Terraform state was the
+   expected value if possible. The testing framework provides helper functions
+   for several common types of check - for example:
+
+   ```go
+   resource.TestCheckResourceAttr("azurerm_public_ip.test", "domain_name_label", "mylabel01"),
+   ```
+
+1. The resources created by the test are destroyed. This step happens
+   automatically, and is the equivalent of calling `terraform destroy`.
+
+1. Assertions are made against the provider API to verify that the resources
+   have indeed been removed. If these checks fail, the test fails and reports
+   "dangling resources". 
The code to ensure that the `azurerm_public_ip` shown
+   above has been destroyed looks like this:
+
+   ```go
+   func testCheckAzureRMPublicIpDestroy(s *terraform.State) error {
+       conn := testAccProvider.Meta().(*ArmClient).publicIPClient
+
+       for _, rs := range s.RootModule().Resources {
+           if rs.Type != "azurerm_public_ip" {
+               continue
+           }
+
+           name := rs.Primary.Attributes["name"]
+           resourceGroup := rs.Primary.Attributes["resource_group_name"]
+
+           resp, err := conn.Get(resourceGroup, name, "")
+
+           if err != nil {
+               return nil
+           }
+
+           if resp.StatusCode != http.StatusNotFound {
+               return fmt.Errorf("Public IP still exists:\n%#v", resp.Properties)
+           }
+       }
+
+       return nil
+   }
+   ```
+
+   These functions usually test only for the resource directly under test: we
+   skip the check that the `azurerm_resource_group` has been destroyed when
+   testing `azurerm_public_ip`, under the assumption that
+   `azurerm_resource_group` is tested independently in its own acceptance tests.
 
-7. If everything works well and the tests pass, run `go fmt` on your code
-   before submitting a pull request.
+[website]: https://github.com/hashicorp/terraform/tree/master/website
+[acctests]: https://github.com/hashicorp/terraform#acceptance-tests
+[ml]: https://groups.google.com/group/terraform-tool
diff --git a/Makefile b/Makefile
index fea0478cdc..dc5f3cc128 100644
--- a/Makefile
+++ b/Makefile
@@ -4,12 +4,12 @@ VETARGS?=-asmdecl -atomic -bool -buildtags -copylocks -methods -nilfunc -printf
 default: test
 
 # bin generates the releaseable binaries for Terraform
-bin: generate
+bin: fmtcheck generate
 	@sh -c "'$(CURDIR)/scripts/build.sh'"
 
 # dev creates binaries for testing Terraform locally. These are put
 # into ./bin/ as well as $GOPATH/bin
-dev: generate
+dev: fmtcheck generate
 	@TF_DEV=1 sh -c "'$(CURDIR)/scripts/build.sh'"
 
 quickdev: generate
@@ -18,35 +18,35 @@ quickdev: generate
 # Shorthand for quickly building the core of Terraform. Note that some
 # changes will require a rebuild of everything, in which case the dev
 # target should be used.
-core-dev: generate
+core-dev: fmtcheck generate
 	go install github.com/hashicorp/terraform
 
+# Shorthand for quickly testing the core of Terraform (i.e. "not providers")
+core-test: generate
+	@echo "Testing core packages..." && go test $(shell go list ./... | grep -v builtin)
+
 # Shorthand for building and installing just one plugin for local testing.
 # Run as (for example): make plugin-dev PLUGIN=provider-aws
-plugin-dev: generate
+plugin-dev: fmtcheck generate
 	go install github.com/hashicorp/terraform/builtin/bins/$(PLUGIN)
 	mv $(GOPATH)/bin/$(PLUGIN) $(GOPATH)/bin/terraform-$(PLUGIN)
 
-release: updatedeps
-	gox -build-toolchain
-	@$(MAKE) bin
-
 # test runs the unit tests and vets the code
-test: generate
+test: fmtcheck generate
 	TF_ACC= go test $(TEST) $(TESTARGS) -timeout=30s -parallel=4
 	@$(MAKE) vet
 
 # testacc runs acceptance tests
-testacc: generate
+testacc: fmtcheck generate
 	@if [ "$(TEST)" = "./..." ]; then \
 		echo "ERROR: Set TEST to a specific package. For example,"; \
 		echo "  make testacc TEST=./builtin/providers/aws"; \
 		exit 1; \
 	fi
-	TF_ACC=1 go test $(TEST) -v $(TESTARGS) -timeout 90m
+	TF_ACC=1 go test $(TEST) -v $(TESTARGS) -timeout 120m
 
 # testrace runs the race checker
-testrace: generate
+testrace: fmtcheck generate
 	TF_ACC= go test -race $(TEST) $(TESTARGS)
 
 # updatedeps installs all the dependencies that Terraform needs to run
@@ -88,4 +88,10 @@ vet:
 generate:
 	go generate ./...
 
-.PHONY: bin default generate test updatedeps vet
+fmt:
+	gofmt -w . 
+
+fmtcheck:
+	@sh -c "'$(CURDIR)/scripts/gofmtcheck.sh'"
+
+.PHONY: bin default generate test updatedeps vet fmt fmtcheck
diff --git a/README.md b/README.md
index 7c811d7c78..16bc83d108 100644
--- a/README.md
+++ b/README.md
@@ -61,6 +61,18 @@ $ make test TEST=./terraform
 ...
 ```
 
+If you're working on a specific provider and only wish to rebuild that provider, you can use the `plugin-dev` target. For example, to build only the Azure provider:
+
+```sh
+$ make plugin-dev PLUGIN=provider-azure
+```
+
+If you're working on the core of Terraform, and only wish to rebuild that without rebuilding providers, you can use the `core-dev` target. It is important to note that some types of changes may require both core and providers to be rebuilt - for example work on the RPC interface. To build just the core of Terraform:
+
+```sh
+$ make core-dev
+```
+
 ### Acceptance Tests
 
 Terraform also has a comprehensive [acceptance test](http://en.wikipedia.org/wiki/Acceptance_testing) suite covering most of the major features of the built-in providers.
@@ -85,3 +97,41 @@ TF_ACC=1 go test ./builtin/providers/aws -v -run=Vpc -timeout 90m
 The `TEST` variable is required, and you should specify the folder where the provider is. The `TESTARGS` variable is recommended to filter down to a specific resource to test, since testing all of them at once can take a very long time.
 
 Acceptance tests typically require other environment variables to be set for things such as access keys. The provider itself should error early and tell you what to set, so it is not documented here.
+
+### Cross Compilation and Building for Distribution
+
+If you wish to cross-compile Terraform for another architecture, you can set the `XC_OS` and `XC_ARCH` environment variables to values representing the target operating system and architecture before calling `make`. The output is placed in the `pkg` subdirectory tree both expanded in a directory representing the OS/architecture combination and as a ZIP archive.
+
+For example, to compile 64-bit Linux binaries on Mac OS X, you can run:
+
+```sh
+$ XC_OS=linux XC_ARCH=amd64 make bin
+...
+$ file pkg/linux_amd64/terraform
+terraform: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, not stripped
+```
+
+`XC_OS` and `XC_ARCH` can be space-separated lists representing different combinations of operating system and architecture. For example, to compile for both Linux and Mac OS X, targeting both 32- and 64-bit architectures, you can run:
+
+```sh
+$ XC_OS="linux darwin" XC_ARCH="386 amd64" make bin
+...
+$ tree ./pkg/ -P "terraform|*.zip"
+./pkg/
+├── darwin_386
+│   └── terraform
+├── darwin_386.zip
+├── darwin_amd64
+│   └── terraform
+├── darwin_amd64.zip
+├── linux_386
+│   └── terraform
+├── linux_386.zip
+├── linux_amd64
+│   └── terraform
+└── linux_amd64.zip
+
+4 directories, 8 files
+```
+
+_Note: Cross-compilation uses [gox](https://github.com/mitchellh/gox), which requires toolchains to be built with versions of Go prior to 1.5. 
In order to successfully cross-compile with older versions of Go, you will need to run `gox -build-toolchain` before running the commands detailed above._ diff --git a/builtin/bins/provider-azurerm/main.go b/builtin/bins/provider-azurerm/main.go new file mode 100644 index 0000000000..f81707338f --- /dev/null +++ b/builtin/bins/provider-azurerm/main.go @@ -0,0 +1,12 @@ +package main + +import ( + "github.com/hashicorp/terraform/builtin/providers/azurerm" + "github.com/hashicorp/terraform/plugin" +) + +func main() { + plugin.Serve(&plugin.ServeOpts{ + ProviderFunc: azurerm.Provider, + }) +} diff --git a/builtin/bins/provider-chef/main.go b/builtin/bins/provider-chef/main.go new file mode 100644 index 0000000000..b1bd8b537e --- /dev/null +++ b/builtin/bins/provider-chef/main.go @@ -0,0 +1,12 @@ +package main + +import ( + "github.com/hashicorp/terraform/builtin/providers/chef" + "github.com/hashicorp/terraform/plugin" +) + +func main() { + plugin.Serve(&plugin.ServeOpts{ + ProviderFunc: chef.Provider, + }) +} diff --git a/builtin/bins/provider-mysql/main.go b/builtin/bins/provider-mysql/main.go new file mode 100644 index 0000000000..0c21be953d --- /dev/null +++ b/builtin/bins/provider-mysql/main.go @@ -0,0 +1,12 @@ +package main + +import ( + "github.com/hashicorp/terraform/builtin/providers/mysql" + "github.com/hashicorp/terraform/plugin" +) + +func main() { + plugin.Serve(&plugin.ServeOpts{ + ProviderFunc: mysql.Provider, + }) +} diff --git a/builtin/bins/provider-mysql/main_test.go b/builtin/bins/provider-mysql/main_test.go new file mode 100644 index 0000000000..06ab7d0f9a --- /dev/null +++ b/builtin/bins/provider-mysql/main_test.go @@ -0,0 +1 @@ +package main diff --git a/builtin/bins/provider-postgresql/main.go b/builtin/bins/provider-postgresql/main.go new file mode 100644 index 0000000000..860ae37f48 --- /dev/null +++ b/builtin/bins/provider-postgresql/main.go @@ -0,0 +1,12 @@ +package main + +import ( + "github.com/hashicorp/terraform/builtin/providers/postgresql" + "github.com/hashicorp/terraform/plugin" +) + +func main() { + plugin.Serve(&plugin.ServeOpts{ + ProviderFunc: postgresql.Provider, + }) +} diff --git a/builtin/bins/provider-postgresql/main_test.go b/builtin/bins/provider-postgresql/main_test.go new file mode 100644 index 0000000000..06ab7d0f9a --- /dev/null +++ b/builtin/bins/provider-postgresql/main_test.go @@ -0,0 +1 @@ +package main diff --git a/builtin/bins/provider-vcd/main.go b/builtin/bins/provider-vcd/main.go new file mode 100644 index 0000000000..7e040dd432 --- /dev/null +++ b/builtin/bins/provider-vcd/main.go @@ -0,0 +1,12 @@ +package main + +import ( + "github.com/hashicorp/terraform/builtin/providers/vcd" + "github.com/hashicorp/terraform/plugin" +) + +func main() { + plugin.Serve(&plugin.ServeOpts{ + ProviderFunc: vcd.Provider, + }) +} diff --git a/builtin/providers/aws/config.go b/builtin/providers/aws/config.go index d8a9ff862d..1c9ab296db 100644 --- a/builtin/providers/aws/config.go +++ b/builtin/providers/aws/config.go @@ -3,14 +3,19 @@ package aws import ( "fmt" "log" + "net/http" + "os" "strings" + "time" "github.com/hashicorp/go-cleanhttp" "github.com/hashicorp/go-multierror" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awserr" - "github.com/aws/aws-sdk-go/aws/credentials" + awsCredentials "github.com/aws/aws-sdk-go/aws/credentials" + "github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds" + "github.com/aws/aws-sdk-go/aws/ec2metadata" "github.com/aws/aws-sdk-go/aws/session" "github.com/aws/aws-sdk-go/service/autoscaling" 
"github.com/aws/aws-sdk-go/service/cloudformation" @@ -22,6 +27,7 @@ import ( "github.com/aws/aws-sdk-go/service/directoryservice" "github.com/aws/aws-sdk-go/service/dynamodb" "github.com/aws/aws-sdk-go/service/ec2" + "github.com/aws/aws-sdk-go/service/ecr" "github.com/aws/aws-sdk-go/service/ecs" "github.com/aws/aws-sdk-go/service/efs" "github.com/aws/aws-sdk-go/service/elasticache" @@ -34,6 +40,7 @@ import ( "github.com/aws/aws-sdk-go/service/lambda" "github.com/aws/aws-sdk-go/service/opsworks" "github.com/aws/aws-sdk-go/service/rds" + "github.com/aws/aws-sdk-go/service/redshift" "github.com/aws/aws-sdk-go/service/route53" "github.com/aws/aws-sdk-go/service/s3" "github.com/aws/aws-sdk-go/service/sns" @@ -41,11 +48,13 @@ import ( ) type Config struct { - AccessKey string - SecretKey string - Token string - Region string - MaxRetries int + AccessKey string + SecretKey string + CredsFilename string + Profile string + Token string + Region string + MaxRetries int AllowedAccountIds []interface{} ForbiddenAccountIds []interface{} @@ -62,6 +71,7 @@ type AWSClient struct { dsconn *directoryservice.DirectoryService dynamodbconn *dynamodb.DynamoDB ec2conn *ec2.EC2 + ecrconn *ecr.ECR ecsconn *ecs.ECS efsconn *efs.EFS elbconn *elb.ELB @@ -70,6 +80,7 @@ type AWSClient struct { s3conn *s3.S3 sqsconn *sqs.SQS snsconn *sns.SNS + redshiftconn *redshift.Redshift r53conn *route53.Route53 region string rdsconn *rds.RDS @@ -104,9 +115,14 @@ func (c *Config) Client() (interface{}, error) { client.region = c.Region log.Println("[INFO] Building AWS auth structure") - // We fetched all credential sources in Provider. If they are - // available, they'll already be in c. See Provider definition. - creds := credentials.NewStaticCredentials(c.AccessKey, c.SecretKey, c.Token) + creds := getCreds(c.AccessKey, c.SecretKey, c.Token, c.Profile, c.CredsFilename) + // Call Get to check for credential provider. If nothing found, we'll get an + // error, and we can present it nicely to the user + _, err = creds.Get() + if err != nil { + errs = append(errs, fmt.Errorf("Error loading credentials for AWS Provider: %s", err)) + return nil, &multierror.Error{Errors: errs} + } awsConfig := &aws.Config{ Credentials: creds, Region: aws.String(c.Region), @@ -118,7 +134,7 @@ func (c *Config) Client() (interface{}, error) { sess := session.New(awsConfig) client.iamconn = iam.New(sess) - err := c.ValidateCredentials(client.iamconn) + err = c.ValidateCredentials(client.iamconn) if err != nil { errs = append(errs, err) } @@ -179,6 +195,9 @@ func (c *Config) Client() (interface{}, error) { log.Println("[INFO] Initializing EC2 Connection") client.ec2conn = ec2.New(sess) + log.Println("[INFO] Initializing ECR Connection") + client.ecrconn = ecr.New(sess) + log.Println("[INFO] Initializing ECS Connection") client.ecsconn = ecs.New(sess) @@ -223,6 +242,10 @@ func (c *Config) Client() (interface{}, error) { log.Println("[INFO] Initializing CodeCommit SDK connection") client.codecommitconn = codecommit.New(usEast1Sess) + + log.Println("[INFO] Initializing Redshift SDK connection") + client.redshiftconn = redshift.New(sess) + } if len(errs) > 0 { @@ -235,9 +258,9 @@ func (c *Config) Client() (interface{}, error) { // ValidateRegion returns an error if the configured region is not a // valid aws region and nil otherwise. 
func (c *Config) ValidateRegion() error {
-	var regions = [11]string{"us-east-1", "us-west-2", "us-west-1", "eu-west-1",
+	var regions = [12]string{"us-east-1", "us-west-2", "us-west-1", "eu-west-1",
 		"eu-central-1", "ap-southeast-1", "ap-southeast-2", "ap-northeast-1",
-		"sa-east-1", "cn-north-1", "us-gov-west-1"}
+		"ap-northeast-2", "sa-east-1", "cn-north-1", "us-gov-west-1"}
 
 	for _, valid := range regions {
 		if c.Region == valid {
@@ -316,3 +339,56 @@ func (c *Config) ValidateAccountId(iamconn *iam.IAM) error {
 
 	return nil
 }
+
+// This function is responsible for reading credentials from the
+// environment in the case that they're not explicitly specified
+// in the Terraform configuration.
+func getCreds(key, secret, token, profile, credsfile string) *awsCredentials.Credentials {
+	// build a chain provider, lazy-evaluated by aws-sdk
+	providers := []awsCredentials.Provider{
+		&awsCredentials.StaticProvider{Value: awsCredentials.Value{
+			AccessKeyID:     key,
+			SecretAccessKey: secret,
+			SessionToken:    token,
+		}},
+		&awsCredentials.EnvProvider{},
+		&awsCredentials.SharedCredentialsProvider{
+			Filename: credsfile,
+			Profile:  profile,
+		},
+	}
+
+	// We only look in the EC2 metadata API if we can connect
+	// to the metadata service within a reasonable amount of time
+	metadataURL := os.Getenv("AWS_METADATA_URL")
+	if metadataURL == "" {
+		metadataURL = "http://169.254.169.254:80/latest"
+	}
+	c := http.Client{
+		Timeout: 100 * time.Millisecond,
+	}
+
+	r, err := c.Get(metadataURL)
+	// Flag to determine if we should add the EC2 metadata credential provider. Defaults to false
+	var useIAM bool
+	if err == nil {
+		// AWS will add a "Server: EC2ws" header value for the metadata request. We
+		// check the headers for this value to ensure something else didn't just
+		// happen to be listening on that IP:Port
+		if r.Header["Server"] != nil && strings.Contains(r.Header["Server"][0], "EC2") {
+			useIAM = true
+		}
+	}
+
+	if useIAM {
+		log.Printf("[DEBUG] EC2 Metadata service found, adding EC2 Role Credential Provider")
+		providers = append(providers, &ec2rolecreds.EC2RoleProvider{
+			Client: ec2metadata.New(session.New(&aws.Config{
+				Endpoint: aws.String(metadataURL),
+			})),
+		})
+	} else {
+		log.Printf("[DEBUG] EC2 Metadata service not found, not adding EC2 Role Credential Provider")
+	}
+	return awsCredentials.NewChainCredentials(providers)
+}
diff --git a/builtin/providers/aws/config_test.go b/builtin/providers/aws/config_test.go
new file mode 100644
index 0000000000..5c58a57290
--- /dev/null
+++ b/builtin/providers/aws/config_test.go
@@ -0,0 +1,376 @@
+package aws
+
+import (
+	"encoding/json"
+	"fmt"
+	"io/ioutil"
+	"net/http"
+	"net/http/httptest"
+	"os"
+	"testing"
+
+	"github.com/aws/aws-sdk-go/aws/awserr"
+)
+
+func TestAWSConfig_shouldError(t *testing.T) {
+	resetEnv := unsetEnv(t)
+	defer resetEnv()
+	cfg := Config{}
+
+	c := getCreds(cfg.AccessKey, cfg.SecretKey, cfg.Token, cfg.Profile, cfg.CredsFilename)
+	_, err := c.Get()
+	if awsErr, ok := err.(awserr.Error); ok {
+		if awsErr.Code() != "NoCredentialProviders" {
+			t.Fatalf("Expected NoCredentialProviders error")
+		}
+	}
+	if err == nil {
+		t.Fatalf("Expected an error with empty env, keys, and IAM in AWS Config")
+	}
+}
+
+func TestAWSConfig_shouldBeStatic(t *testing.T) {
+	simple := []struct {
+		Key, Secret, Token string
+	}{
+		{
+			Key:    "test",
+			Secret: "secret",
+		}, {
+			Key:    "test",
+			Secret: "test",
+			Token:  "test",
+		},
+	}
+
+	for _, c := range simple {
+		cfg := Config{
+			AccessKey: c.Key,
+			SecretKey: c.Secret,
+			Token:     c.Token,
+		}
+
+		creds := 
getCreds(cfg.AccessKey, cfg.SecretKey, cfg.Token, cfg.Profile, cfg.CredsFilename)
+		if creds == nil {
+			t.Fatalf("Expected a static creds provider to be returned")
+		}
+		v, err := creds.Get()
+		if err != nil {
+			t.Fatalf("Error getting creds: %s", err)
+		}
+		if v.AccessKeyID != c.Key {
+			t.Fatalf("AccessKeyID mismatch, expected: (%s), got (%s)", c.Key, v.AccessKeyID)
+		}
+		if v.SecretAccessKey != c.Secret {
+			t.Fatalf("SecretAccessKey mismatch, expected: (%s), got (%s)", c.Secret, v.SecretAccessKey)
+		}
+		if v.SessionToken != c.Token {
+			t.Fatalf("SessionToken mismatch, expected: (%s), got (%s)", c.Token, v.SessionToken)
+		}
+	}
+}
+
+// TestAWSConfig_shouldIAM is designed to test the scenario of running Terraform
+// from an EC2 instance, without environment variables or manually supplied
+// credentials.
+func TestAWSConfig_shouldIAM(t *testing.T) {
+	// clear AWS_* environment variables
+	resetEnv := unsetEnv(t)
+	defer resetEnv()
+
+	// capture the test server's close method, to call after the test returns
+	ts := awsEnv(t)
+	defer ts()
+
+	// An empty config, no key supplied
+	cfg := Config{}
+
+	creds := getCreds(cfg.AccessKey, cfg.SecretKey, cfg.Token, cfg.Profile, cfg.CredsFilename)
+	if creds == nil {
+		t.Fatalf("Expected a static creds provider to be returned")
+	}
+
+	v, err := creds.Get()
+	if err != nil {
+		t.Fatalf("Error getting creds: %s", err)
+	}
+	if v.AccessKeyID != "somekey" {
+		t.Fatalf("AccessKeyID mismatch, expected: (somekey), got (%s)", v.AccessKeyID)
+	}
+	if v.SecretAccessKey != "somesecret" {
+		t.Fatalf("SecretAccessKey mismatch, expected: (somesecret), got (%s)", v.SecretAccessKey)
+	}
+	if v.SessionToken != "sometoken" {
+		t.Fatalf("SessionToken mismatch, expected: (sometoken), got (%s)", v.SessionToken)
+	}
+}
+
+// TestAWSConfig_shouldIgnoreIAM is designed to test that explicitly supplied
+// credentials take precedence over the EC2 metadata service.
+func TestAWSConfig_shouldIgnoreIAM(t *testing.T) {
+	resetEnv := unsetEnv(t)
+	defer resetEnv()
+	// capture the test server's close method, to call after the test returns
+	ts := awsEnv(t)
+	defer ts()
+	simple := []struct {
+		Key, Secret, Token string
+	}{
+		{
+			Key:    "test",
+			Secret: "secret",
+		}, {
+			Key:    "test",
+			Secret: "test",
+			Token:  "test",
+		},
+	}
+
+	for _, c := range simple {
+		cfg := Config{
+			AccessKey: c.Key,
+			SecretKey: c.Secret,
+			Token:     c.Token,
+		}
+
+		creds := getCreds(cfg.AccessKey, cfg.SecretKey, cfg.Token, cfg.Profile, cfg.CredsFilename)
+		if creds == nil {
+			t.Fatalf("Expected a static creds provider to be returned")
+		}
+		v, err := creds.Get()
+		if err != nil {
+			t.Fatalf("Error getting creds: %s", err)
+		}
+		if v.AccessKeyID != c.Key {
+			t.Fatalf("AccessKeyID mismatch, expected: (%s), got (%s)", c.Key, v.AccessKeyID)
+		}
+		if v.SecretAccessKey != c.Secret {
+			t.Fatalf("SecretAccessKey mismatch, expected: (%s), got (%s)", c.Secret, v.SecretAccessKey)
+		}
+		if v.SessionToken != c.Token {
+			t.Fatalf("SessionToken mismatch, expected: (%s), got (%s)", c.Token, v.SessionToken)
+		}
+	}
+}
+
+var credentialsFileContents = `[myprofile]
+aws_access_key_id = accesskey
+aws_secret_access_key = secretkey
+`
+
+func TestAWSConfig_shouldBeShared(t *testing.T) {
+	file, err := ioutil.TempFile(os.TempDir(), "terraform_aws_cred")
+	if err != nil {
+		t.Fatalf("Error writing temporary credentials file: %s", err)
+	}
+	_, err = file.WriteString(credentialsFileContents)
+	if err != nil {
+		t.Fatalf("Error writing temporary credentials to file: %s", err)
+	}
+	err = file.Close()
+	if err != nil {
+		t.Fatalf("Error closing temporary credentials file: %s", err)
+	}
+
+	defer os.Remove(file.Name())
+
+	resetEnv := unsetEnv(t)
+	defer resetEnv()
+
+	if err := os.Setenv("AWS_PROFILE", "myprofile"); err != nil {
+		t.Fatalf("Error resetting env var AWS_PROFILE: %s", err)
+	}
+	if err := os.Setenv("AWS_SHARED_CREDENTIALS_FILE", file.Name()); err != nil {
+		t.Fatalf("Error resetting env var AWS_SHARED_CREDENTIALS_FILE: %s", err)
+	}
+
+	creds := getCreds("", "", "", "myprofile", file.Name())
+	if creds == nil {
+		t.Fatalf("Expected a provider chain to be returned")
+	}
+	v, err := creds.Get()
+	if err != nil {
+		t.Fatalf("Error getting creds: %s", err)
+	}
+
+	if v.AccessKeyID != "accesskey" {
+		t.Fatalf("AccessKeyID mismatch, expected (%s), got (%s)", "accesskey", v.AccessKeyID)
+	}
+
+	if v.SecretAccessKey != "secretkey" {
+		t.Fatalf("SecretAccessKey mismatch, expected (%s), got (%s)", "secretkey", v.SecretAccessKey)
+	}
+}
+
+func TestAWSConfig_shouldBeENV(t *testing.T) {
+	// need to set the environment variables to a dummy string, as we don't know
+	// what they may be at runtime without hardcoding here
+	s := "some_env"
+	resetEnv := setEnv(s, t)
+
+	defer resetEnv()
+
+	cfg := Config{}
+	creds := getCreds(cfg.AccessKey, cfg.SecretKey, cfg.Token, cfg.Profile, cfg.CredsFilename)
+	if creds == nil {
+		t.Fatalf("Expected a static creds provider to be returned")
+	}
+	v, err := creds.Get()
+	if err != nil {
+		t.Fatalf("Error getting creds: %s", err)
+	}
+	if v.AccessKeyID != s {
+		t.Fatalf("AccessKeyID mismatch, expected: (%s), got (%s)", s, v.AccessKeyID)
+	}
+	if v.SecretAccessKey != s {
+		t.Fatalf("SecretAccessKey mismatch, expected: (%s), got (%s)", s, v.SecretAccessKey)
+	}
+	if v.SessionToken != s {
+		t.Fatalf("SessionToken mismatch, expected: (%s), got (%s)", s, v.SessionToken)
+	}
+}
+
+// unsetEnv unsets environment variables for testing a "clean slate" with no
+// credentials in 
+// unsetEnv unsets environment variables for testing a "clean slate" with no +// credentials in the environment +func unsetEnv(t *testing.T) func() { + // Grab any existing AWS keys and preserve. In some tests we'll unset these, so + // we need to have them and restore them after + e := getEnv() + if err := os.Unsetenv("AWS_ACCESS_KEY_ID"); err != nil { + t.Fatalf("Error unsetting env var AWS_ACCESS_KEY_ID: %s", err) + } + if err := os.Unsetenv("AWS_SECRET_ACCESS_KEY"); err != nil { + t.Fatalf("Error unsetting env var AWS_SECRET_ACCESS_KEY: %s", err) + } + if err := os.Unsetenv("AWS_SESSION_TOKEN"); err != nil { + t.Fatalf("Error unsetting env var AWS_SESSION_TOKEN: %s", err) + } + if err := os.Unsetenv("AWS_PROFILE"); err != nil { + t.Fatalf("Error unsetting env var AWS_PROFILE: %s", err) + } + if err := os.Unsetenv("AWS_SHARED_CREDENTIALS_FILE"); err != nil { + t.Fatalf("Error unsetting env var AWS_SHARED_CREDENTIALS_FILE: %s", err) + } + + return func() { + // re-set all the envs we unset above + if err := os.Setenv("AWS_ACCESS_KEY_ID", e.Key); err != nil { + t.Fatalf("Error resetting env var AWS_ACCESS_KEY_ID: %s", err) + } + if err := os.Setenv("AWS_SECRET_ACCESS_KEY", e.Secret); err != nil { + t.Fatalf("Error resetting env var AWS_SECRET_ACCESS_KEY: %s", err) + } + if err := os.Setenv("AWS_SESSION_TOKEN", e.Token); err != nil { + t.Fatalf("Error resetting env var AWS_SESSION_TOKEN: %s", err) + } + if err := os.Setenv("AWS_PROFILE", e.Profile); err != nil { + t.Fatalf("Error resetting env var AWS_PROFILE: %s", err) + } + if err := os.Setenv("AWS_SHARED_CREDENTIALS_FILE", e.CredsFilename); err != nil { + t.Fatalf("Error resetting env var AWS_SHARED_CREDENTIALS_FILE: %s", err) + } + } +} + +func setEnv(s string, t *testing.T) func() { + e := getEnv() + // Set all the envs to a dummy value + if err := os.Setenv("AWS_ACCESS_KEY_ID", s); err != nil { + t.Fatalf("Error setting env var AWS_ACCESS_KEY_ID: %s", err) + } + if err := os.Setenv("AWS_SECRET_ACCESS_KEY", s); err != nil { + t.Fatalf("Error setting env var AWS_SECRET_ACCESS_KEY: %s", err) + } + if err := os.Setenv("AWS_SESSION_TOKEN", s); err != nil { + t.Fatalf("Error setting env var AWS_SESSION_TOKEN: %s", err) + } + if err := os.Setenv("AWS_PROFILE", s); err != nil { + t.Fatalf("Error setting env var AWS_PROFILE: %s", err) + } + if err := os.Setenv("AWS_SHARED_CREDENTIALS_FILE", s); err != nil { + t.Fatalf("Error setting env var AWS_SHARED_CREDENTIALS_FILE: %s", err) + } + + return func() { + // re-set all the envs we overrode above + if err := os.Setenv("AWS_ACCESS_KEY_ID", e.Key); err != nil { + t.Fatalf("Error resetting env var AWS_ACCESS_KEY_ID: %s", err) + } + if err := os.Setenv("AWS_SECRET_ACCESS_KEY", e.Secret); err != nil { + t.Fatalf("Error resetting env var AWS_SECRET_ACCESS_KEY: %s", err) + } + if err := os.Setenv("AWS_SESSION_TOKEN", e.Token); err != nil { + t.Fatalf("Error resetting env var AWS_SESSION_TOKEN: %s", err) + } + if err := os.Setenv("AWS_PROFILE", e.Profile); err != nil { + t.Fatalf("Error resetting env var AWS_PROFILE: %s", err) + } + if err := os.Setenv("AWS_SHARED_CREDENTIALS_FILE", e.CredsFilename); err != nil { + t.Fatalf("Error resetting env var AWS_SHARED_CREDENTIALS_FILE: %s", err) + } + } +}
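+ +// Typical call pattern in the tests above (sketch): +// resetEnv := unsetEnv(t) // snapshot and clear the AWS_* variables +// defer resetEnv() // restore the original values on test exit +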
+// awsEnv establishes a httptest server to mock out the internal AWS Metadata +// service. IAM Credentials are retrieved by the EC2RoleProvider, which makes +// API calls to this internal URL. By replacing the server with a test server, +// we can simulate an AWS environment +func awsEnv(t *testing.T) func() { + routes := routes{} + if err := json.Unmarshal([]byte(aws_routes), &routes); err != nil { + t.Fatalf("Failed to unmarshal JSON in AWS ENV test: %s", err) + } + ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + w.Header().Set("Content-Type", "text/plain") + w.Header().Add("Server", "MockEC2") + for _, e := range routes.Endpoints { + if r.RequestURI == e.Uri { + fmt.Fprintln(w, e.Body) + } + } + })) + + os.Setenv("AWS_METADATA_URL", ts.URL+"/latest") + return ts.Close +} + +func getEnv() *currentEnv { + // Grab any existing AWS keys and preserve. In some tests we'll unset these, so + // we need to have them and restore them after + return &currentEnv{ + Key: os.Getenv("AWS_ACCESS_KEY_ID"), + Secret: os.Getenv("AWS_SECRET_ACCESS_KEY"), + Token: os.Getenv("AWS_SESSION_TOKEN"), + Profile: os.Getenv("AWS_PROFILE"), + CredsFilename: os.Getenv("AWS_SHARED_CREDENTIALS_FILE"), + } +} + +// struct to preserve the current environment +type currentEnv struct { + Key, Secret, Token, Profile, CredsFilename string +} + +type routes struct { + Endpoints []*endpoint `json:"endpoints"` +} +type endpoint struct { + Uri string `json:"uri"` + Body string `json:"body"` +} + +const aws_routes = ` +{ + "endpoints": [ + { + "uri": "/latest/meta-data/iam/security-credentials", + "body": "test_role" + }, + { + "uri": "/latest/meta-data/iam/security-credentials/test_role", + "body": "{\"Code\":\"Success\",\"LastUpdated\":\"2015-12-11T17:17:25Z\",\"Type\":\"AWS-HMAC\",\"AccessKeyId\":\"somekey\",\"SecretAccessKey\":\"somesecret\",\"Token\":\"sometoken\"}" + } + ] +} +` diff --git a/builtin/providers/aws/hosted_zones.go b/builtin/providers/aws/hosted_zones.go index 7633e06349..fb95505ea1 100644 --- a/builtin/providers/aws/hosted_zones.go +++ b/builtin/providers/aws/hosted_zones.go @@ -8,10 +8,11 @@ var hostedZoneIDsMap = map[string]string{ "us-west-2": "Z3BJ6K6RIION7M", "us-west-1": "Z2F56UZL2M1ACD", "eu-west-1": "Z1BKCTXD74EZPE", - "central-1": "Z21DNDUVLTQW6Q", + "eu-central-1": "Z21DNDUVLTQW6Q", "ap-southeast-1": "Z3O0J2DXBE1FTB", "ap-southeast-2": "Z1WCIGYICN2BYD", "ap-northeast-1": "Z2M4EHUR26P7ZW", + "ap-northeast-2": "Z3W03O7B5YMIYP", "sa-east-1": "Z7KQH4QJS55SO", "us-gov-west-1": "Z31GFT0UA1I2HV", } diff --git a/builtin/providers/aws/network_acl_entry.go b/builtin/providers/aws/network_acl_entry.go index 22b909bceb..5a09746d64 100644 --- a/builtin/providers/aws/network_acl_entry.go +++ b/builtin/providers/aws/network_acl_entry.go @@ -69,6 +69,15 @@ func flattenNetworkAclEntries(list []*ec2.NetworkAclEntry) []map[string]interfac } +func protocolStrings(protocolIntegers map[string]int) map[int]string { + protocolStrings := make(map[int]string, len(protocolIntegers)) + for k, v := range protocolIntegers { + protocolStrings[v] = k + } + + return protocolStrings +} + func protocolIntegers() map[string]int { var protocolIntegers = make(map[string]int) protocolIntegers = map[string]int{ diff --git a/builtin/providers/aws/provider.go b/builtin/providers/aws/provider.go index ba627d2ec4..9829972c8b 100644 --- a/builtin/providers/aws/provider.go +++ b/builtin/providers/aws/provider.go @@ -1,19 +1,10 @@ package aws import ( - "net" - "sync" - "time" - "github.com/hashicorp/terraform/helper/hashcode" "github.com/hashicorp/terraform/helper/mutexkv" "github.com/hashicorp/terraform/helper/schema" "github.com/hashicorp/terraform/terraform" - -
"github.com/aws/aws-sdk-go/aws/credentials" - "github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds" - "github.com/aws/aws-sdk-go/aws/ec2metadata" - "github.com/aws/aws-sdk-go/aws/session" ) // Provider returns a terraform.ResourceProvider. @@ -21,95 +12,41 @@ func Provider() terraform.ResourceProvider { // TODO: Move the validation to this, requires conditional schemas // TODO: Move the configuration to this, requires validation - // These variables are closed within the `getCreds` function below. - // This function is responsible for reading credentials from the - // environment in the case that they're not explicitly specified - // in the Terraform configuration. - // - // By using the getCreds function here instead of making the default - // empty, we avoid asking for input on credentials if they're available - // in the environment. - var credVal credentials.Value - var credErr error - var once sync.Once - getCreds := func() { - // Build the list of providers to look for creds in - providers := []credentials.Provider{ - &credentials.EnvProvider{}, - &credentials.SharedCredentialsProvider{}, - } - - // We only look in the EC2 metadata API if we can connect - // to the metadata service within a reasonable amount of time - conn, err := net.DialTimeout("tcp", "169.254.169.254:80", 100*time.Millisecond) - if err == nil { - conn.Close() - providers = append(providers, &ec2rolecreds.EC2RoleProvider{Client: ec2metadata.New(session.New())}) - } - - credVal, credErr = credentials.NewChainCredentials(providers).Get() - - // If we didn't successfully find any credentials, just - // set the error to nil. - if credErr == credentials.ErrNoValidProvidersFoundInChain { - credErr = nil - } - } - - // getCredDefault is a function used by DefaultFunc below to - // get the default value for various parts of the credentials. - // This function properly handles loading the credentials, checking - // for errors, etc. 
- getCredDefault := func(def interface{}, f func() string) (interface{}, error) { - once.Do(getCreds) - - // If there was an error, that is always first - if credErr != nil { - return nil, credErr - } - - // If the value is empty string, return nil (not set) - val := f() - if val == "" { - return def, nil - } - - return val, nil - } - // The actual provider return &schema.Provider{ Schema: map[string]*schema.Schema{ "access_key": &schema.Schema{ - Type: schema.TypeString, - Required: true, - DefaultFunc: func() (interface{}, error) { - return getCredDefault(nil, func() string { - return credVal.AccessKeyID - }) - }, + Type: schema.TypeString, + Optional: true, + Default: "", Description: descriptions["access_key"], }, "secret_key": &schema.Schema{ - Type: schema.TypeString, - Required: true, - DefaultFunc: func() (interface{}, error) { - return getCredDefault(nil, func() string { - return credVal.SecretAccessKey - }) - }, + Type: schema.TypeString, + Optional: true, + Default: "", Description: descriptions["secret_key"], }, + "profile": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Default: "", + Description: descriptions["profile"], + }, + + "shared_credentials_file": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Default: "", + Description: descriptions["shared_credentials_file"], + }, + "token": &schema.Schema{ - Type: schema.TypeString, - Optional: true, - DefaultFunc: func() (interface{}, error) { - return getCredDefault("", func() string { - return credVal.SessionToken - }) - }, + Type: schema.TypeString, + Optional: true, + Default: "", Description: descriptions["token"], }, @@ -174,6 +111,7 @@ func Provider() terraform.ResourceProvider { "aws_autoscaling_group": resourceAwsAutoscalingGroup(), "aws_autoscaling_notification": resourceAwsAutoscalingNotification(), "aws_autoscaling_policy": resourceAwsAutoscalingPolicy(), + "aws_autoscaling_schedule": resourceAwsAutoscalingSchedule(), "aws_cloudformation_stack": resourceAwsCloudFormationStack(), "aws_cloudtrail": resourceAwsCloudTrail(), "aws_cloudwatch_log_group": resourceAwsCloudWatchLogGroup(), @@ -190,6 +128,8 @@ func Provider() terraform.ResourceProvider { "aws_directory_service_directory": resourceAwsDirectoryServiceDirectory(), "aws_dynamodb_table": resourceAwsDynamoDbTable(), "aws_ebs_volume": resourceAwsEbsVolume(), + "aws_ecr_repository": resourceAwsEcrRepository(), + "aws_ecr_repository_policy": resourceAwsEcrRepositoryPolicy(), "aws_ecs_cluster": resourceAwsEcsCluster(), "aws_ecs_service": resourceAwsEcsService(), "aws_ecs_task_definition": resourceAwsEcsTaskDefinition(), @@ -223,10 +163,14 @@ func Provider() terraform.ResourceProvider { "aws_kinesis_firehose_delivery_stream": resourceAwsKinesisFirehoseDeliveryStream(), "aws_kinesis_stream": resourceAwsKinesisStream(), "aws_lambda_function": resourceAwsLambdaFunction(), + "aws_lambda_event_source_mapping": resourceAwsLambdaEventSourceMapping(), + "aws_lambda_alias": resourceAwsLambdaAlias(), "aws_launch_configuration": resourceAwsLaunchConfiguration(), "aws_lb_cookie_stickiness_policy": resourceAwsLBCookieStickinessPolicy(), "aws_main_route_table_association": resourceAwsMainRouteTableAssociation(), + "aws_nat_gateway": resourceAwsNatGateway(), "aws_network_acl": resourceAwsNetworkAcl(), + "aws_network_acl_rule": resourceAwsNetworkAclRule(), "aws_network_interface": resourceAwsNetworkInterface(), "aws_opsworks_stack": resourceAwsOpsworksStack(), "aws_opsworks_java_app_layer": resourceAwsOpsworksJavaAppLayer(), @@ -243,6 +187,10 @@ func 
Provider() terraform.ResourceProvider { "aws_proxy_protocol_policy": resourceAwsProxyProtocolPolicy(), "aws_rds_cluster": resourceAwsRDSCluster(), "aws_rds_cluster_instance": resourceAwsRDSClusterInstance(), + "aws_redshift_cluster": resourceAwsRedshiftCluster(), + "aws_redshift_security_group": resourceAwsRedshiftSecurityGroup(), + "aws_redshift_parameter_group": resourceAwsRedshiftParameterGroup(), + "aws_redshift_subnet_group": resourceAwsRedshiftSubnetGroup(), "aws_route53_delegation_set": resourceAwsRoute53DelegationSet(), "aws_route53_record": resourceAwsRoute53Record(), "aws_route53_zone_association": resourceAwsRoute53ZoneAssociation(), @@ -288,6 +236,12 @@ func init() { "secret_key": "The secret key for API operations. You can retrieve this\n" + "from the 'Security & Credentials' section of the AWS console.", + "profile": "The profile for API operations. If not set, the default profile\n" + + "created with `aws configure` will be used.", + + "shared_credentials_file": "The path to the shared credentials file. If not set\n" + + "this defaults to ~/.aws/credentials.", + "token": "session token. A session token is only required if you are\n" + "using temporary security credentials.", @@ -307,6 +261,8 @@ func providerConfigure(d *schema.ResourceData) (interface{}, error) { config := Config{ AccessKey: d.Get("access_key").(string), SecretKey: d.Get("secret_key").(string), + Profile: d.Get("profile").(string), + CredsFilename: d.Get("shared_credentials_file").(string), Token: d.Get("token").(string), Region: d.Get("region").(string), MaxRetries: d.Get("max_retries").(int), diff --git a/builtin/providers/aws/resource_aws_ami_copy_test.go b/builtin/providers/aws/resource_aws_ami_copy_test.go index 0a469a8e0e..029e9a5abd 100644 --- a/builtin/providers/aws/resource_aws_ami_copy_test.go +++ b/builtin/providers/aws/resource_aws_ami_copy_test.go @@ -169,9 +169,9 @@ resource "aws_subnet" "foo" { resource "aws_instance" "test" { // This AMI has one block device mapping, so we expect to have // one snapshot in our created AMI. - // This is an Amazon Linux HVM AMI. A public HVM AMI is required + // This is an Ubuntu Linux HVM AMI. A public HVM AMI is required // because paravirtual images cannot be copied between accounts. 
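+ // AMI IDs are region-specific; this one must exist in the region the + // acceptance tests run against.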
- ami = "ami-5449393e" + ami = "ami-0f8bce65" instance_type = "t2.micro" tags { Name = "terraform-acc-ami-copy-victim" diff --git a/builtin/providers/aws/resource_aws_app_cookie_stickiness_policy_test.go b/builtin/providers/aws/resource_aws_app_cookie_stickiness_policy_test.go index ff13da2856..d1fd59f690 100644 --- a/builtin/providers/aws/resource_aws_app_cookie_stickiness_policy_test.go +++ b/builtin/providers/aws/resource_aws_app_cookie_stickiness_policy_test.go @@ -5,6 +5,7 @@ import ( "testing" "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/elb" "github.com/hashicorp/terraform/helper/resource" @@ -40,10 +41,31 @@ func TestAccAWSAppCookieStickinessPolicy_basic(t *testing.T) { } func testAccCheckAppCookieStickinessPolicyDestroy(s *terraform.State) error { - if len(s.RootModule().Resources) > 0 { - return fmt.Errorf("Expected all resources to be gone, but found: %#v", s.RootModule().Resources) - } + conn := testAccProvider.Meta().(*AWSClient).elbconn + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_app_cookie_stickiness_policy" { + continue + } + + lbName, _, policyName := resourceAwsAppCookieStickinessPolicyParseId( + rs.Primary.ID) + out, err := conn.DescribeLoadBalancerPolicies( + &elb.DescribeLoadBalancerPoliciesInput{ + LoadBalancerName: aws.String(lbName), + PolicyNames: []*string{aws.String(policyName)}, + }) + if err != nil { + if ec2err, ok := err.(awserr.Error); ok && (ec2err.Code() == "PolicyNotFound" || ec2err.Code() == "LoadBalancerNotFound") { + continue + } + return err + } + + if len(out.PolicyDescriptions) > 0 { + return fmt.Errorf("Policy still exists") + } + } return nil } diff --git a/builtin/providers/aws/resource_aws_autoscaling_group.go b/builtin/providers/aws/resource_aws_autoscaling_group.go index d5a87e33b5..b0e21697fd 100644 --- a/builtin/providers/aws/resource_aws_autoscaling_group.go +++ b/builtin/providers/aws/resource_aws_autoscaling_group.go @@ -51,8 +51,9 @@ func resourceAwsAutoscalingGroup() *schema.Resource { }, "min_elb_capacity": &schema.Schema{ - Type: schema.TypeInt, - Optional: true, + Type: schema.TypeInt, + Optional: true, + Deprecated: "Please use 'wait_for_elb_capacity' instead.", }, "min_size": &schema.Schema{ @@ -96,6 +97,11 @@ func resourceAwsAutoscalingGroup() *schema.Resource { Set: schema.HashString, }, + "placement_group": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + "load_balancers": &schema.Schema{ Type: schema.TypeSet, Optional: true, @@ -136,6 +142,11 @@ func resourceAwsAutoscalingGroup() *schema.Resource { }, }, + "wait_for_elb_capacity": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + }, + "tag": autoscalingTagsSchema(), }, } @@ -185,6 +196,10 @@ func resourceAwsAutoscalingGroupCreate(d *schema.ResourceData, meta interface{}) autoScalingGroupOpts.HealthCheckGracePeriod = aws.Int64(int64(v.(int))) } + if v, ok := d.GetOk("placement_group"); ok { + autoScalingGroupOpts.PlacementGroup = aws.String(v.(string)) + } + if v, ok := d.GetOk("load_balancers"); ok && v.(*schema.Set).Len() > 0 { autoScalingGroupOpts.LoadBalancerNames = expandStringList( v.(*schema.Set).List()) @@ -232,6 +247,7 @@ func resourceAwsAutoscalingGroupRead(d *schema.ResourceData, meta interface{}) e d.Set("load_balancers", g.LoadBalancerNames) d.Set("min_size", g.MinSize) d.Set("max_size", g.MaxSize) + d.Set("placement_group", g.PlacementGroup) d.Set("name", g.AutoScalingGroupName) d.Set("tag", g.Tags) d.Set("vpc_zone_identifier", 
strings.Split(*g.VPCZoneIdentifier, ",")) @@ -242,6 +258,7 @@ func resourceAwsAutoscalingGroupRead(d *schema.ResourceData, meta interface{}) e func resourceAwsAutoscalingGroupUpdate(d *schema.ResourceData, meta interface{}) error { conn := meta.(*AWSClient).autoscalingconn + shouldWaitForCapacity := false opts := autoscaling.UpdateAutoScalingGroupInput{ AutoScalingGroupName: aws.String(d.Id()), @@ -253,6 +270,7 @@ func resourceAwsAutoscalingGroupUpdate(d *schema.ResourceData, meta interface{}) if d.HasChange("desired_capacity") { opts.DesiredCapacity = aws.Int64(int64(d.Get("desired_capacity").(int))) + shouldWaitForCapacity = true } if d.HasChange("launch_configuration") { @@ -261,6 +279,7 @@ func resourceAwsAutoscalingGroupUpdate(d *schema.ResourceData, meta interface{}) if d.HasChange("min_size") { opts.MinSize = aws.Int64(int64(d.Get("min_size").(int))) + shouldWaitForCapacity = true } if d.HasChange("max_size") { @@ -286,6 +305,10 @@ func resourceAwsAutoscalingGroupUpdate(d *schema.ResourceData, meta interface{}) } } + if d.HasChange("placement_group") { + opts.PlacementGroup = aws.String(d.Get("placement_group").(string)) + } + if d.HasChange("termination_policies") { // If the termination policy is set to null, we need to explicitly set // it back to "Default", or the API won't reset it for us. @@ -353,6 +376,10 @@ func resourceAwsAutoscalingGroupUpdate(d *schema.ResourceData, meta interface{}) } } + if shouldWaitForCapacity { + if err := waitForASGCapacity(d, meta); err != nil { + return err + } + } + return resourceAwsAutoscalingGroupRead(d, meta) } @@ -490,7 +517,7 @@ func resourceAwsAutoscalingGroupDrain(d *schema.ResourceData, meta interface{}) // ASG before continuing. Waits up to `waitForASGCapacityTimeout` for // "desired_capacity", or "min_size" if desired capacity is not specified. // -// If "min_elb_capacity" is specified, will also wait for that number of +// If "wait_for_elb_capacity" is specified, will also wait for that number of // instances to show up InService in all attached ELBs. See "Waiting for // Capacity" in docs for more discussion of the feature. func waitForASGCapacity(d *schema.ResourceData, meta interface{}) error { @@ -498,7 +525,10 @@ func waitForASGCapacity(d *schema.ResourceData, meta interface{}) error { if v := d.Get("desired_capacity").(int); v > 0 { wantASG = v } - wantELB := d.Get("min_elb_capacity").(int) + wantELB := d.Get("wait_for_elb_capacity").(int) + + // Covers deprecated field support + wantELB += d.Get("min_elb_capacity").(int) wait, err := time.ParseDuration(d.Get("wait_for_capacity_timeout").(string)) if err != nil { @@ -561,11 +591,13 @@ func waitForASGCapacity(d *schema.ResourceData, meta interface{}) error { log.Printf("[DEBUG] %q Capacity: %d/%d ASG, %d/%d ELB", d.Id(), haveASG, wantASG, haveELB, wantELB) - if haveASG >= wantASG && haveELB >= wantELB { + if haveASG == wantASG && haveELB == wantELB { return nil } - return fmt.Errorf("Still need to wait for more healthy instances. This could mean instances failed to launch. See Scaling History for more information.") + return fmt.Errorf( + "Still waiting for %q instances. 
Current/Desired: %d/%d ASG, %d/%d ELB", + d.Id(), haveASG, wantASG, haveELB, wantELB) }) } diff --git a/builtin/providers/aws/resource_aws_autoscaling_group_test.go b/builtin/providers/aws/resource_aws_autoscaling_group_test.go index 5f87bc3d08..bab4bde118 100644 --- a/builtin/providers/aws/resource_aws_autoscaling_group_test.go +++ b/builtin/providers/aws/resource_aws_autoscaling_group_test.go @@ -161,7 +161,7 @@ func TestAccAWSAutoScalingGroup_WithLoadBalancer(t *testing.T) { CheckDestroy: testAccCheckAWSAutoScalingGroupDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccAWSAutoScalingGroupConfigWithLoadBalancer, + Config: fmt.Sprintf(testAccAWSAutoScalingGroupConfigWithLoadBalancer), Check: resource.ComposeTestCheckFunc( testAccCheckAWSAutoScalingGroupExists("aws_autoscaling_group.bar", &group), testAccCheckAWSAutoScalingGroupAttributesLoadBalancer(&group), @@ -171,6 +171,26 @@ func TestAccAWSAutoScalingGroup_WithLoadBalancer(t *testing.T) { }) } +func TestAccAWSAutoScalingGroup_withPlacementGroup(t *testing.T) { + var group autoscaling.Group + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAutoScalingGroupDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAWSAutoScalingGroupConfig_withPlacementGroup, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAutoScalingGroupExists("aws_autoscaling_group.bar", &group), + resource.TestCheckResourceAttr( + "aws_autoscaling_group.bar", "placement_group", "test"), + ), + }, + }, + }) +} + func testAccCheckAWSAutoScalingGroupDestroy(s *terraform.State) error { conn := testAccProvider.Meta().(*AWSClient).autoscalingconn @@ -260,8 +280,8 @@ func testAccCheckAWSAutoScalingGroupAttributes(group *autoscaling.Group) resourc func testAccCheckAWSAutoScalingGroupAttributesLoadBalancer(group *autoscaling.Group) resource.TestCheckFunc { return func(s *terraform.State) error { - if *group.LoadBalancerNames[0] != "foobar-terraform-test" { - return fmt.Errorf("Bad load_balancers: %#v", group.LoadBalancerNames[0]) + if len(group.LoadBalancerNames) != 1 { + return fmt.Errorf("Bad load_balancers: %v", group.LoadBalancerNames) } return nil @@ -401,6 +421,11 @@ resource "aws_launch_configuration" "foobar" { instance_type = "t1.micro" } +resource "aws_placement_group" "test" { + name = "test" + strategy = "cluster" +} + resource "aws_autoscaling_group" "bar" { availability_zones = ["us-west-2a"] name = "foobar3-terraform-test" @@ -488,7 +513,6 @@ resource "aws_security_group" "foo" { } resource "aws_elb" "bar" { - name = "foobar-terraform-test" subnets = ["${aws_subnet.foo.id}"] security_groups = ["${aws_security_group.foo.id}"] @@ -526,7 +550,7 @@ resource "aws_autoscaling_group" "bar" { min_size = 2 health_check_grace_period = 300 health_check_type = "ELB" - min_elb_capacity = 2 + wait_for_elb_capacity = 2 force_delete = true launch_configuration = "${aws_launch_configuration.foobar.name}" @@ -628,3 +652,36 @@ resource "aws_autoscaling_group" "bar" { launch_configuration = "${aws_launch_configuration.foobar.name}" } ` + +const testAccAWSAutoScalingGroupConfig_withPlacementGroup = ` +resource "aws_launch_configuration" "foobar" { + image_id = "ami-21f78e11" + instance_type = "c3.large" +} + +resource "aws_placement_group" "test" { + name = "test" + strategy = "cluster" +} + +resource "aws_autoscaling_group" "bar" { + availability_zones = ["us-west-2a"] + name = "foobar3-terraform-test" + max_size = 1 + min_size = 1 + 
health_check_grace_period = 300 + health_check_type = "ELB" + desired_capacity = 1 + force_delete = true + termination_policies = ["OldestInstance","ClosestToNextInstanceHour"] + placement_group = "${aws_placement_group.test.name}" + + launch_configuration = "${aws_launch_configuration.foobar.name}" + + tag { + key = "Foo" + value = "foo-bar" + propagate_at_launch = true + } +} +` diff --git a/builtin/providers/aws/resource_aws_autoscaling_lifecycle_hook.go b/builtin/providers/aws/resource_aws_autoscaling_lifecycle_hook.go index 5c3458acf4..d13ba17aef 100644 --- a/builtin/providers/aws/resource_aws_autoscaling_lifecycle_hook.go +++ b/builtin/providers/aws/resource_aws_autoscaling_lifecycle_hook.go @@ -33,6 +33,7 @@ func resourceAwsAutoscalingLifecycleHook() *schema.Resource { "default_result": &schema.Schema{ Type: schema.TypeString, Optional: true, + Computed: true, }, "heartbeat_timeout": &schema.Schema{ Type: schema.TypeInt, diff --git a/builtin/providers/aws/resource_aws_autoscaling_lifecycle_hook_test.go b/builtin/providers/aws/resource_aws_autoscaling_lifecycle_hook_test.go index f425570e9c..bb16f49e0a 100644 --- a/builtin/providers/aws/resource_aws_autoscaling_lifecycle_hook_test.go +++ b/builtin/providers/aws/resource_aws_autoscaling_lifecycle_hook_test.go @@ -32,11 +32,29 @@ func TestAccAWSAutoscalingLifecycleHook_basic(t *testing.T) { }) } +func TestAccAWSAutoscalingLifecycleHook_omitDefaultResult(t *testing.T) { + var hook autoscaling.LifecycleHook + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAutoscalingLifecycleHookDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAWSAutoscalingLifecycleHookConfig_omitDefaultResult, + Check: resource.ComposeTestCheckFunc( + testAccCheckLifecycleHookExists("aws_autoscaling_lifecycle_hook.foobar", &hook), + resource.TestCheckResourceAttr("aws_autoscaling_lifecycle_hook.foobar", "default_result", "ABANDON"), + ), + }, + }, + }) +} + func testAccCheckLifecycleHookExists(n string, hook *autoscaling.LifecycleHook) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { - rs = rs return fmt.Errorf("Not found: %s", n) } @@ -166,3 +184,86 @@ EOF role_arn = "${aws_iam_role.foobar.arn}" } `) + +var testAccAWSAutoscalingLifecycleHookConfig_omitDefaultResult = fmt.Sprintf(` +resource "aws_launch_configuration" "foobar" { + name = "terraform-test-foobar5" + image_id = "ami-21f78e11" + instance_type = "t1.micro" +} + +resource "aws_sqs_queue" "foobar" { + name = "foobar" + delay_seconds = 90 + max_message_size = 2048 + message_retention_seconds = 86400 + receive_wait_time_seconds = 10 +} + +resource "aws_iam_role" "foobar" { + name = "foobar" + assume_role_policy = < 0 { - return fmt.Errorf("Expected all resources to be gone, but found: %#v", - s.RootModule().Resources) + conn := testAccProvider.Meta().(*AWSClient).codecommitconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_codecommit_repository" { + continue + } + + _, err := conn.GetRepository(&codecommit.GetRepositoryInput{ + RepositoryName: aws.String(rs.Primary.ID), + }) + + if ae, ok := err.(awserr.Error); ok && ae.Code() == "RepositoryDoesNotExistException" { + continue + } + if err == nil { + return fmt.Errorf("Repository still exists: %s", rs.Primary.ID) + } + return err } return nil } const testAccCodeCommitRepository_basic = ` +provider "aws" { + region = "us-east-1" +} resource 
"aws_codecommit_repository" "test" { repository_name = "my_test_repository" description = "This is a test description" @@ -102,6 +121,9 @@ resource "aws_codecommit_repository" "test" { ` const testAccCodeCommitRepository_withChanges = ` +provider "aws" { + region = "us-east-1" +} resource "aws_codecommit_repository" "test" { repository_name = "my_test_repository" description = "This is a test description - with changes" diff --git a/builtin/providers/aws/resource_aws_codedeploy_app_test.go b/builtin/providers/aws/resource_aws_codedeploy_app_test.go index 9610a01a74..dd3a4ce7a9 100644 --- a/builtin/providers/aws/resource_aws_codedeploy_app_test.go +++ b/builtin/providers/aws/resource_aws_codedeploy_app_test.go @@ -5,6 +5,7 @@ import ( "testing" "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/codedeploy" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" @@ -40,17 +41,19 @@ func testAccCheckAWSCodeDeployAppDestroy(s *terraform.State) error { continue } - resp, err := conn.GetApplication(&codedeploy.GetApplicationInput{ - ApplicationName: aws.String(rs.Primary.ID), + _, err := conn.GetApplication(&codedeploy.GetApplicationInput{ + ApplicationName: aws.String(rs.Primary.Attributes["name"]), }) - if err == nil { - if resp.Application != nil { - return fmt.Errorf("CodeDeploy app still exists:\n%#v", *resp.Application.ApplicationId) + if err != nil { + // Verify the error is what we want + if ae, ok := err.(awserr.Error); ok && ae.Code() == "ApplicationDoesNotExistException" { + continue } + return err } - return err + return fmt.Errorf("still exists") } return nil diff --git a/builtin/providers/aws/resource_aws_codedeploy_deployment_group.go b/builtin/providers/aws/resource_aws_codedeploy_deployment_group.go index ee81f1cf3c..457368aed8 100644 --- a/builtin/providers/aws/resource_aws_codedeploy_deployment_group.go +++ b/builtin/providers/aws/resource_aws_codedeploy_deployment_group.go @@ -344,17 +344,6 @@ func onPremisesTagFiltersToMap(list []*codedeploy.TagFilter) []map[string]string return result } -// validateTagFilters confirms the "value" component of a tag filter is one of -// AWS's three allowed types. 
-func validateTagFilters(v interface{}, k string) (ws []string, errors []error) { - value := v.(string) - if value != "KEY_ONLY" && value != "VALUE_ONLY" && value != "KEY_AND_VALUE" { - errors = append(errors, fmt.Errorf( - "%q must be one of \"KEY_ONLY\", \"VALUE_ONLY\", or \"KEY_AND_VALUE\"", k)) - } - return -} - func resourceAwsCodeDeployTagFilterHash(v interface{}) int { var buf bytes.Buffer m := v.(map[string]interface{}) diff --git a/builtin/providers/aws/resource_aws_codedeploy_deployment_group_test.go b/builtin/providers/aws/resource_aws_codedeploy_deployment_group_test.go index 3b873fe3ba..fa97ca4cc6 100644 --- a/builtin/providers/aws/resource_aws_codedeploy_deployment_group_test.go +++ b/builtin/providers/aws/resource_aws_codedeploy_deployment_group_test.go @@ -5,6 +5,7 @@ import ( "testing" "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/codedeploy" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" @@ -45,6 +46,10 @@ func testAccCheckAWSCodeDeployDeploymentGroupDestroy(s *terraform.State) error { DeploymentGroupName: aws.String(rs.Primary.Attributes["deployment_group_name"]), }) + if ae, ok := err.(awserr.Error); ok && ae.Code() == "ApplicationDoesNotExistException" { + continue + } + if err == nil { if resp.DeploymentGroupInfo.DeploymentGroupName != nil { return fmt.Errorf("CodeDeploy deployment group still exists:\n%#v", *resp.DeploymentGroupInfo.DeploymentGroupName) diff --git a/builtin/providers/aws/resource_aws_customer_gateway.go b/builtin/providers/aws/resource_aws_customer_gateway.go index 565ffe144f..8f09f8c004 100644 --- a/builtin/providers/aws/resource_aws_customer_gateway.go +++ b/builtin/providers/aws/resource_aws_customer_gateway.go @@ -68,7 +68,7 @@ func resourceAwsCustomerGatewayCreate(d *schema.ResourceData, meta interface{}) // Wait for the CustomerGateway to be available. 
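+ // resource.StateChangeConf.Target now takes a []string, so terminal + // states are written as slice literals here and throughout this change.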
stateConf := &resource.StateChangeConf{ Pending: []string{"pending"}, - Target: "available", + Target: []string{"available"}, Refresh: customerGatewayRefreshFunc(conn, *customerGateway.CustomerGatewayId), Timeout: 10 * time.Minute, Delay: 10 * time.Second, diff --git a/builtin/providers/aws/resource_aws_customer_gateway_test.go b/builtin/providers/aws/resource_aws_customer_gateway_test.go index 9e3daec6d0..055e9054c1 100644 --- a/builtin/providers/aws/resource_aws_customer_gateway_test.go +++ b/builtin/providers/aws/resource_aws_customer_gateway_test.go @@ -5,6 +5,7 @@ import ( "testing" "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/ec2" "github.com/hashicorp/terraform/helper/resource" @@ -46,8 +47,33 @@ func TestAccAWSCustomerGateway_basic(t *testing.T) { } func testAccCheckCustomerGatewayDestroy(s *terraform.State) error { - if len(s.RootModule().Resources) > 0 { - return fmt.Errorf("Expected all resources to be gone, but found: %#v", s.RootModule().Resources) + conn := testAccProvider.Meta().(*AWSClient).ec2conn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_customer_gateway" { + continue + } + + gatewayFilter := &ec2.Filter{ + Name: aws.String("customer-gateway-id"), + Values: []*string{aws.String(rs.Primary.ID)}, + } + + resp, err := conn.DescribeCustomerGateways(&ec2.DescribeCustomerGatewaysInput{ + Filters: []*ec2.Filter{gatewayFilter}, + }) + + if ae, ok := err.(awserr.Error); ok && ae.Code() == "InvalidCustomerGatewayID.NotFound" { + continue + } + + if err == nil { + if len(resp.CustomerGateways) > 0 { + return fmt.Errorf("Customer gateway still exists: %v", resp.CustomerGateways) + } + } + + return err } return nil diff --git a/builtin/providers/aws/resource_aws_db_instance.go b/builtin/providers/aws/resource_aws_db_instance.go index bd566b8a54..523c89c251 100644 --- a/builtin/providers/aws/resource_aws_db_instance.go +++ b/builtin/providers/aws/resource_aws_db_instance.go @@ -31,20 +31,27 @@ func resourceAwsDbInstance() *schema.Resource { ForceNew: true, }, + "arn": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + "username": &schema.Schema{ Type: schema.TypeString, - Required: true, + Optional: true, + Computed: true, ForceNew: true, }, "password": &schema.Schema{ Type: schema.TypeString, - Required: true, + Optional: true, }, "engine": &schema.Schema{ Type: schema.TypeString, - Required: true, + Optional: true, + Computed: true, ForceNew: true, StateFunc: func(v interface{}) string { value := v.(string) @@ -66,7 +73,8 @@ func resourceAwsDbInstance() *schema.Resource { "allocated_storage": &schema.Schema{ Type: schema.TypeInt, - Required: true, + Optional: true, + Computed: true, }, "storage_type": &schema.Schema{ @@ -183,6 +191,12 @@ func resourceAwsDbInstance() *schema.Resource { }, }, + "skip_final_snapshot": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Default: true, + }, + "copy_tags_to_snapshot": &schema.Schema{ Type: schema.TypeBool, Optional: true, @@ -285,9 +299,19 @@ func resourceAwsDbInstanceCreate(d *schema.ResourceData, meta interface{}) error opts.AvailabilityZone = aws.String(attr.(string)) } + if attr, ok := d.GetOk("storage_type"); ok { + opts.StorageType = aws.String(attr.(string)) + } + if attr, ok := d.GetOk("publicly_accessible"); ok { opts.PubliclyAccessible = aws.Bool(attr.(bool)) } + + if attr, ok := d.GetOk("db_subnet_group_name"); ok { + opts.DBSubnetGroupName = aws.String(attr.(string)) + } + + log.Printf("[DEBUG] DB Instance 
Replica create configuration: %#v", opts) _, err := conn.CreateDBInstanceReadReplica(&opts) if err != nil { return fmt.Errorf("Error creating DB Instance: %s", err) @@ -362,8 +386,9 @@ func resourceAwsDbInstanceCreate(d *schema.ResourceData, meta interface{}) error "[INFO] Waiting for DB Instance to be available") stateConf := &resource.StateChangeConf{ - Pending: []string{"creating", "backing-up", "modifying"}, - Target: "available", + Pending: []string{"creating", "backing-up", "modifying", "resetting-master-credentials", + "maintenance", "renaming", "rebooting", "upgrading"}, + Target: []string{"available"}, Refresh: resourceAwsDbInstanceStateRefreshFunc(d, meta), Timeout: 40 * time.Minute, MinTimeout: 10 * time.Second, @@ -383,6 +408,18 @@ func resourceAwsDbInstanceCreate(d *schema.ResourceData, meta interface{}) error } } else { + if _, ok := d.GetOk("allocated_storage"); !ok { + return fmt.Errorf(`provider.aws: aws_db_instance: %s: "allocated_storage": required field is not set`, d.Get("name").(string)) + } + if _, ok := d.GetOk("engine"); !ok { + return fmt.Errorf(`provider.aws: aws_db_instance: %s: "engine": required field is not set`, d.Get("name").(string)) + } + if _, ok := d.GetOk("password"); !ok { + return fmt.Errorf(`provider.aws: aws_db_instance: %s: "password": required field is not set`, d.Get("name").(string)) + } + if _, ok := d.GetOk("username"); !ok { + return fmt.Errorf(`provider.aws: aws_db_instance: %s: "username": required field is not set`, d.Get("name").(string)) + } opts := rds.CreateDBInstanceInput{ AllocatedStorage: aws.Int64(int64(d.Get("allocated_storage").(int))), DBName: aws.String(d.Get("name").(string)), @@ -473,8 +510,9 @@ func resourceAwsDbInstanceCreate(d *schema.ResourceData, meta interface{}) error "[INFO] Waiting for DB Instance to be available") stateConf := &resource.StateChangeConf{ - Pending: []string{"creating", "backing-up", "modifying"}, - Target: "available", + Pending: []string{"creating", "backing-up", "modifying", "resetting-master-credentials", + "maintenance", "renaming", "rebooting", "upgrading"}, + Target: []string{"available"}, Refresh: resourceAwsDbInstanceStateRefreshFunc(d, meta), Timeout: 40 * time.Minute, MinTimeout: 10 * time.Second, @@ -548,6 +586,7 @@ func resourceAwsDbInstanceRead(d *schema.ResourceData, meta interface{}) error { } log.Printf("[DEBUG] Error building ARN for DB Instance, not setting Tags for DB %s", name) } else { + d.Set("arn", arn) resp, err := conn.ListTagsForResource(&rds.ListTagsForResourceInput{ ResourceName: aws.String(arn), }) @@ -603,11 +642,15 @@ func resourceAwsDbInstanceDelete(d *schema.ResourceData, meta interface{}) error opts := rds.DeleteDBInstanceInput{DBInstanceIdentifier: aws.String(d.Id())} - finalSnapshot := d.Get("final_snapshot_identifier").(string) - if finalSnapshot == "" { - opts.SkipFinalSnapshot = aws.Bool(true) - } else { - opts.FinalDBSnapshotIdentifier = aws.String(finalSnapshot) + skipFinalSnapshot := d.Get("skip_final_snapshot").(bool) + opts.SkipFinalSnapshot = aws.Bool(skipFinalSnapshot) + + if !skipFinalSnapshot { + if name, present := d.GetOk("final_snapshot_identifier"); present { + opts.FinalDBSnapshotIdentifier = aws.String(name.(string)) + } else { + return fmt.Errorf("DB Instance FinalSnapshotIdentifier is required when a final snapshot is required") + } } log.Printf("[DEBUG] DB Instance destroy configuration: %v", opts) @@ -620,7 +663,7 @@ func resourceAwsDbInstanceDelete(d *schema.ResourceData, meta interface{}) error stateConf := &resource.StateChangeConf{ 
Pending: []string{"creating", "backing-up", "modifying", "deleting", "available"}, - Target: "", + Target: []string{}, Refresh: resourceAwsDbInstanceStateRefreshFunc(d, meta), Timeout: 40 * time.Minute, MinTimeout: 10 * time.Second, diff --git a/builtin/providers/aws/resource_aws_db_instance_test.go b/builtin/providers/aws/resource_aws_db_instance_test.go index a2c2f69cad..6142281d00 100644 --- a/builtin/providers/aws/resource_aws_db_instance_test.go +++ b/builtin/providers/aws/resource_aws_db_instance_test.go @@ -2,6 +2,8 @@ package aws import ( "fmt" + "log" + "math/rand" "testing" "time" @@ -67,6 +69,42 @@ func TestAccAWSDBInstanceReplica(t *testing.T) { }) } +func TestAccAWSDBInstanceSnapshot(t *testing.T) { + var snap rds.DBInstance + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBInstanceSnapshot, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccSnapshotInstanceConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSDBInstanceExists("aws_db_instance.snapshot", &snap), + ), + }, + }, + }) +} + +func TestAccAWSDBInstanceNoSnapshot(t *testing.T) { + var nosnap rds.DBInstance + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBInstanceNoSnapshot, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccNoSnapshotInstanceConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSDBInstanceExists("aws_db_instance.no_snapshot", &nosnap), + ), + }, + }, + }) +} + func testAccCheckAWSDBInstanceDestroy(s *terraform.State) error { conn := testAccProvider.Meta().(*AWSClient).rdsconn @@ -82,6 +120,10 @@ func testAccCheckAWSDBInstanceDestroy(s *terraform.State) error { DBInstanceIdentifier: aws.String(rs.Primary.ID), }) + if ae, ok := err.(awserr.Error); ok && ae.Code() == "DBInstanceNotFound" { + continue + } + if err == nil { if len(resp.DBInstances) != 0 && *resp.DBInstances[0].DBInstanceIdentifier == rs.Primary.ID { @@ -132,6 +174,104 @@ func testAccCheckAWSDBInstanceReplicaAttributes(source, replica *rds.DBInstance) } } +func testAccCheckAWSDBInstanceSnapshot(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).rdsconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_db_instance" { + continue + } + + var err error + resp, err := conn.DescribeDBInstances( + &rds.DescribeDBInstancesInput{ + DBInstanceIdentifier: aws.String(rs.Primary.ID), + }) + + if err != nil { + newerr, _ := err.(awserr.Error) + if newerr.Code() != "DBInstanceNotFound" { + return err + } + + } else { + if len(resp.DBInstances) != 0 && + *resp.DBInstances[0].DBInstanceIdentifier == rs.Primary.ID { + return fmt.Errorf("DB Instance still exists") + } + } + + log.Printf("[INFO] Trying to locate the DBInstance Final Snapshot") + snapshot_identifier := "foobarbaz-test-terraform-final-snapshot-1" + _, snapErr := conn.DescribeDBSnapshots( + &rds.DescribeDBSnapshotsInput{ + DBSnapshotIdentifier: aws.String(snapshot_identifier), + }) + + if snapErr != nil { + newerr, _ := snapErr.(awserr.Error) + if newerr.Code() == "DBSnapshotNotFound" { + return fmt.Errorf("Snapshot %s not found", snapshot_identifier) + } + } else { + log.Printf("[INFO] Deleting the Snapshot %s", snapshot_identifier) + _, snapDeleteErr := conn.DeleteDBSnapshot( + &rds.DeleteDBSnapshotInput{ + DBSnapshotIdentifier: aws.String(snapshot_identifier), + }) + if snapDeleteErr != nil { + return 
snapDeleteErr + } + } + } + + return nil +} + +func testAccCheckAWSDBInstanceNoSnapshot(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).rdsconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_db_instance" { + continue + } + + var err error + resp, err := conn.DescribeDBInstances( + &rds.DescribeDBInstancesInput{ + DBInstanceIdentifier: aws.String(rs.Primary.ID), + }) + + if err != nil { + newerr, _ := err.(awserr.Error) + if newerr.Code() != "DBInstanceNotFound" { + return err + } + + } else { + if len(resp.DBInstances) != 0 && + *resp.DBInstances[0].DBInstanceIdentifier == rs.Primary.ID { + return fmt.Errorf("DB Instance still exists") + } + } + + snapshot_identifier := "foobarbaz-test-terraform-final-snapshot-2" + _, snapErr := conn.DescribeDBSnapshots( + &rds.DescribeDBSnapshotsInput{ + DBSnapshotIdentifier: aws.String(snapshot_identifier), + }) + + if snapErr == nil { + return fmt.Errorf("Snapshot %s found and it shouldn't have been", snapshot_identifier) + } + if newerr, ok := snapErr.(awserr.Error); !ok || newerr.Code() != "DBSnapshotNotFound" { + return snapErr + } + } + + return nil +} + func testAccCheckAWSDBInstanceExists(n string, v *rds.DBInstance) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] @@ -226,3 +366,51 @@ func testAccReplicaInstanceConfig(val int) string { } `, val, val) } + +var testAccSnapshotInstanceConfig = ` +provider "aws" { + region = "us-east-1" +} +resource "aws_db_instance" "snapshot" { + identifier = "foobarbaz-test-terraform-snapshot-1" + + allocated_storage = 5 + engine = "mysql" + engine_version = "5.6.21" + instance_class = "db.t1.micro" + name = "baz" + password = "barbarbarbar" + username = "foo" + security_group_names = ["default"] + backup_retention_period = 1 + + parameter_group_name = "default.mysql5.6" + + skip_final_snapshot = false + final_snapshot_identifier = "foobarbaz-test-terraform-final-snapshot-1" +} +` + +var testAccNoSnapshotInstanceConfig = ` +provider "aws" { + region = "us-east-1" +} +resource "aws_db_instance" "no_snapshot" { + identifier = "foobarbaz-test-terraform-snapshot-2" + + allocated_storage = 5 + engine = "mysql" + engine_version = "5.6.21" + instance_class = "db.t1.micro" + name = "baz" + password = "barbarbarbar" + username = "foo" + security_group_names = ["default"] + backup_retention_period = 1 + + parameter_group_name = "default.mysql5.6" + + skip_final_snapshot = true + final_snapshot_identifier = "foobarbaz-test-terraform-final-snapshot-2" +} +` diff --git a/builtin/providers/aws/resource_aws_db_parameter_group.go b/builtin/providers/aws/resource_aws_db_parameter_group.go index b4f07e43de..2fde74cb50 100644 --- a/builtin/providers/aws/resource_aws_db_parameter_group.go +++ b/builtin/providers/aws/resource_aws_db_parameter_group.go @@ -4,7 +4,6 @@ import ( "bytes" "fmt" "log" - "regexp" "strings" "time" @@ -14,6 +13,7 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/service/iam" "github.com/aws/aws-sdk-go/service/rds" ) @@ -24,6 +24,10 @@ func resourceAwsDbParameterGroup() *schema.Resource { Update: resourceAwsDbParameterGroupUpdate, Delete: resourceAwsDbParameterGroupDelete, Schema: map[string]*schema.Schema{ + "arn": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, "name": &schema.Schema{ Type: schema.TypeString, ForceNew: true, @@ -71,17 +75,21 @@ func resourceAwsDbParameterGroup() *schema.Resource { }, Set: resourceAwsDbParameterHash, }, + + "tags": tagsSchema(), }, } }
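+ +// Tag support below follows the shared RDS helpers in this package: +// tagsSchema for the schema entry, tagsFromMapRDS/tagsToMapRDS for +// conversion, and setTagsRDS on update. +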
func resourceAwsDbParameterGroupCreate(d *schema.ResourceData, meta interface{}) error { rdsconn := meta.(*AWSClient).rdsconn + tags := tagsFromMapRDS(d.Get("tags").(map[string]interface{})) createOpts := rds.CreateDBParameterGroupInput{ DBParameterGroupName: aws.String(d.Get("name").(string)), DBParameterGroupFamily: aws.String(d.Get("family").(string)), Description: aws.String(d.Get("description").(string)), + Tags: tags, } log.Printf("[DEBUG] Create DB Parameter Group: %#v", createOpts) @@ -136,6 +144,31 @@ func resourceAwsDbParameterGroupRead(d *schema.ResourceData, meta interface{}) e d.Set("parameter", flattenParameters(describeParametersResp.Parameters)) + paramGroup := describeResp.DBParameterGroups[0] + arn, err := buildRDSPGARN(d, meta) + if err != nil { + name := "" + if paramGroup.DBParameterGroupName != nil && *paramGroup.DBParameterGroupName != "" { + name = *paramGroup.DBParameterGroupName + } + log.Printf("[DEBUG] Error building ARN for DB Parameter Group, not setting Tags for Param Group %s", name) + } else { + d.Set("arn", arn) + resp, err := rdsconn.ListTagsForResource(&rds.ListTagsForResourceInput{ + ResourceName: aws.String(arn), + }) + + if err != nil { + log.Printf("[DEBUG] Error retrieving tags for ARN: %s", arn) + } + + var dt []*rds.Tag + if resp != nil && len(resp.TagList) > 0 { + dt = resp.TagList + } + d.Set("tags", tagsToMapRDS(dt)) + } + return nil } @@ -177,6 +210,14 @@ func resourceAwsDbParameterGroupUpdate(d *schema.ResourceData, meta interface{}) d.SetPartial("parameter") } + if arn, err := buildRDSPGARN(d, meta); err == nil { + if err := setTagsRDS(rdsconn, d, arn); err != nil { + return err + } else { + d.SetPartial("tags") + } + } + d.Partial(false) return resourceAwsDbParameterGroupRead(d, meta) @@ -185,7 +226,7 @@ func resourceAwsDbParameterGroupUpdate(d *schema.ResourceData, meta interface{}) func resourceAwsDbParameterGroupDelete(d *schema.ResourceData, meta interface{}) error { stateConf := &resource.StateChangeConf{ Pending: []string{"pending"}, - Target: "destroyed", + Target: []string{"destroyed"}, Refresh: resourceAwsDbParameterGroupDeleteRefreshFunc(d, meta), Timeout: 3 * time.Minute, MinTimeout: 1 * time.Second, @@ -230,28 +271,16 @@ func resourceAwsDbParameterHash(v interface{}) int { return hashcode.String(buf.String()) } -func validateDbParamGroupName(v interface{}, k string) (ws []string, errors []error) { - value := v.(string) - if !regexp.MustCompile(`^[0-9a-z-]+$`).MatchString(value) { - errors = append(errors, fmt.Errorf( - "only lowercase alphanumeric characters and hyphens allowed in %q", k)) +func buildRDSPGARN(d *schema.ResourceData, meta interface{}) (string, error) { + iamconn := meta.(*AWSClient).iamconn + region := meta.(*AWSClient).region + // A zero value GetUserInput{} defers to the currently logged in user + resp, err := iamconn.GetUser(&iam.GetUserInput{}) + if err != nil { + return "", err } - if !regexp.MustCompile(`^[a-z]`).MatchString(value) { - errors = append(errors, fmt.Errorf( - "first character of %q must be a letter", k)) - } - if regexp.MustCompile(`--`).MatchString(value) { - errors = append(errors, fmt.Errorf( - "%q cannot contain two consecutive hyphens", k)) - } - if regexp.MustCompile(`-$`).MatchString(value) { - errors = append(errors, fmt.Errorf( - "%q cannot end with a hyphen", k)) - } - if len(value) > 255 { - errors = append(errors, fmt.Errorf( - "%q cannot be greater than 255 characters", k)) - } - return - + userARN := *resp.User.Arn + accountID := strings.Split(userARN, ":")[4] + arn := 
fmt.Sprintf("arn:aws:rds:%s:%s:pg:%s", region, accountID, d.Id()) + return arn, nil } diff --git a/builtin/providers/aws/resource_aws_db_parameter_group_test.go b/builtin/providers/aws/resource_aws_db_parameter_group_test.go index d0042df232..c2a8b9538f 100644 --- a/builtin/providers/aws/resource_aws_db_parameter_group_test.go +++ b/builtin/providers/aws/resource_aws_db_parameter_group_test.go @@ -44,6 +44,8 @@ func TestAccAWSDBParameterGroup_basic(t *testing.T) { "aws_db_parameter_group.bar", "parameter.2478663599.name", "character_set_client"), resource.TestCheckResourceAttr( "aws_db_parameter_group.bar", "parameter.2478663599.value", "utf8"), + resource.TestCheckResourceAttr( + "aws_db_parameter_group.bar", "tags.#", "1"), ), }, resource.TestStep{ @@ -77,6 +79,8 @@ func TestAccAWSDBParameterGroup_basic(t *testing.T) { "aws_db_parameter_group.bar", "parameter.2478663599.name", "character_set_client"), resource.TestCheckResourceAttr( "aws_db_parameter_group.bar", "parameter.2478663599.value", "utf8"), + resource.TestCheckResourceAttr( + "aws_db_parameter_group.bar", "tags.#", "2"), ), }, }, @@ -174,7 +178,7 @@ func testAccCheckAWSDBParameterGroupDestroy(s *terraform.State) error { if !ok { return err } - if newerr.Code() != "InvalidDBParameterGroup.NotFound" { + if newerr.Code() != "DBParameterGroupNotFound" { return err } } @@ -262,6 +266,9 @@ resource "aws_db_parameter_group" "bar" { name = "character_set_results" value = "utf8" } + tags { + foo = "bar" + } } ` @@ -290,6 +297,10 @@ resource "aws_db_parameter_group" "bar" { name = "collation_connection" value = "utf8_unicode_ci" } + tags { + foo = "bar" + baz = "foo" + } } ` diff --git a/builtin/providers/aws/resource_aws_db_security_group.go b/builtin/providers/aws/resource_aws_db_security_group.go index 367400ae77..86bce46cd9 100644 --- a/builtin/providers/aws/resource_aws_db_security_group.go +++ b/builtin/providers/aws/resource_aws_db_security_group.go @@ -4,10 +4,12 @@ import ( "bytes" "fmt" "log" + "strings" "time" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/service/iam" "github.com/aws/aws-sdk-go/service/rds" "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform/helper/hashcode" @@ -19,9 +21,15 @@ func resourceAwsDbSecurityGroup() *schema.Resource { return &schema.Resource{ Create: resourceAwsDbSecurityGroupCreate, Read: resourceAwsDbSecurityGroupRead, + Update: resourceAwsDbSecurityGroupUpdate, Delete: resourceAwsDbSecurityGroupDelete, Schema: map[string]*schema.Schema{ + "arn": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + "name": &schema.Schema{ Type: schema.TypeString, Required: true, @@ -66,12 +74,15 @@ func resourceAwsDbSecurityGroup() *schema.Resource { }, Set: resourceAwsDbSecurityGroupIngressHash, }, + + "tags": tagsSchema(), }, } } func resourceAwsDbSecurityGroupCreate(d *schema.ResourceData, meta interface{}) error { conn := meta.(*AWSClient).rdsconn + tags := tagsFromMapRDS(d.Get("tags").(map[string]interface{})) var err error var errs []error @@ -79,6 +90,7 @@ func resourceAwsDbSecurityGroupCreate(d *schema.ResourceData, meta interface{}) opts := rds.CreateDBSecurityGroupInput{ DBSecurityGroupName: aws.String(d.Get("name").(string)), DBSecurityGroupDescription: aws.String(d.Get("description").(string)), + Tags: tags, } log.Printf("[DEBUG] DB Security Group create configuration: %#v", opts) @@ -113,7 +125,7 @@ func resourceAwsDbSecurityGroupCreate(d *schema.ResourceData, meta interface{}) stateConf := &resource.StateChangeConf{ 
Pending: []string{"authorizing"}, - Target: "authorized", + Target: []string{"authorized"}, Refresh: resourceAwsDbSecurityGroupStateRefreshFunc(d, meta), Timeout: 10 * time.Minute, } @@ -157,9 +169,50 @@ func resourceAwsDbSecurityGroupRead(d *schema.ResourceData, meta interface{}) er d.Set("ingress", rules) + conn := meta.(*AWSClient).rdsconn + arn, err := buildRDSSecurityGroupARN(d, meta) + if err != nil { + name := "" + if sg.DBSecurityGroupName != nil && *sg.DBSecurityGroupName != "" { + name = *sg.DBSecurityGroupName + } + log.Printf("[DEBUG] Error building ARN for DB Security Group, not setting Tags for DB Security Group %s", name) + } else { + d.Set("arn", arn) + resp, err := conn.ListTagsForResource(&rds.ListTagsForResourceInput{ + ResourceName: aws.String(arn), + }) + + if err != nil { + log.Printf("[DEBUG] Error retrieving tags for ARN: %s", arn) + } + + var dt []*rds.Tag + if resp != nil && len(resp.TagList) > 0 { + dt = resp.TagList + } + d.Set("tags", tagsToMapRDS(dt)) + } + return nil } +func resourceAwsDbSecurityGroupUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).rdsconn + + d.Partial(true) + if arn, err := buildRDSSecurityGroupARN(d, meta); err == nil { + if err := setTagsRDS(conn, d, arn); err != nil { + return err + } else { + d.SetPartial("tags") + } + } + d.Partial(false) + + return resourceAwsDbSecurityGroupRead(d, meta) +} + func resourceAwsDbSecurityGroupDelete(d *schema.ResourceData, meta interface{}) error { conn := meta.(*AWSClient).rdsconn @@ -290,3 +343,17 @@ func resourceAwsDbSecurityGroupStateRefreshFunc( return v, "authorized", nil } } + +func buildRDSSecurityGroupARN(d *schema.ResourceData, meta interface{}) (string, error) { + iamconn := meta.(*AWSClient).iamconn + region := meta.(*AWSClient).region + // A zero value GetUserInput{} defers to the currently logged in user + resp, err := iamconn.GetUser(&iam.GetUserInput{}) + if err != nil { + return "", err + } + userARN := *resp.User.Arn + accountID := strings.Split(userARN, ":")[4] + arn := fmt.Sprintf("arn:aws:rds:%s:%s:secgrp:%s", region, accountID, d.Id()) + return arn, nil +} diff --git a/builtin/providers/aws/resource_aws_db_security_group_test.go b/builtin/providers/aws/resource_aws_db_security_group_test.go index bf1db6e37b..7ab269fb36 100644 --- a/builtin/providers/aws/resource_aws_db_security_group_test.go +++ b/builtin/providers/aws/resource_aws_db_security_group_test.go @@ -32,6 +32,8 @@ func TestAccAWSDBSecurityGroup_basic(t *testing.T) { "aws_db_security_group.bar", "ingress.3363517775.cidr", "10.0.0.1/24"), resource.TestCheckResourceAttr( "aws_db_security_group.bar", "ingress.#", "1"), + resource.TestCheckResourceAttr( + "aws_db_security_group.bar", "tags.#", "1"), ), }, }, @@ -64,7 +66,7 @@ func testAccCheckAWSDBSecurityGroupDestroy(s *terraform.State) error { if !ok { return err } - if newerr.Code() != "InvalidDBSecurityGroup.NotFound" { + if newerr.Code() != "DBSecurityGroupNotFound" { return err } } @@ -149,5 +151,9 @@ resource "aws_db_security_group" "bar" { ingress { cidr = "10.0.0.1/24" } + + tags { + foo = "bar" + } } ` diff --git a/builtin/providers/aws/resource_aws_db_subnet_group.go b/builtin/providers/aws/resource_aws_db_subnet_group.go index cbfed609a9..aec1d23ffb 100644 --- a/builtin/providers/aws/resource_aws_db_subnet_group.go +++ b/builtin/providers/aws/resource_aws_db_subnet_group.go @@ -23,26 +23,16 @@ func resourceAwsDbSubnetGroup() *schema.Resource { Delete: resourceAwsDbSubnetGroupDelete, Schema: map[string]*schema.Schema{ - "name": &schema.Schema{ 
+ "arn": &schema.Schema{ Type: schema.TypeString, - ForceNew: true, - Required: true, - ValidateFunc: func(v interface{}, k string) (ws []string, errors []error) { - value := v.(string) - if !regexp.MustCompile(`^[ .0-9A-Za-z-_]+$`).MatchString(value) { - errors = append(errors, fmt.Errorf( - "only alphanumeric characters, hyphens, underscores, periods, and spaces allowed in %q", k)) - } - if len(value) > 255 { - errors = append(errors, fmt.Errorf( - "%q cannot be longer than 255 characters", k)) - } - if regexp.MustCompile(`(?i)^default$`).MatchString(value) { - errors = append(errors, fmt.Errorf( - "%q is not allowed as %q", "Default", k)) - } - return - }, + Computed: true, + }, + + "name": &schema.Schema{ + Type: schema.TypeString, + ForceNew: true, + Required: true, + ValidateFunc: validateSubnetGroupName, }, "description": &schema.Schema{ @@ -126,8 +116,8 @@ func resourceAwsDbSubnetGroupRead(d *schema.ResourceData, meta interface{}) erro return fmt.Errorf("Unable to find DB Subnet Group: %#v", describeResp.DBSubnetGroups) } - d.Set("name", d.Id()) - d.Set("description", *subnetGroup.DBSubnetGroupDescription) + d.Set("name", subnetGroup.DBSubnetGroupName) + d.Set("description", subnetGroup.DBSubnetGroupDescription) subnets := make([]string, 0, len(subnetGroup.Subnets)) for _, s := range subnetGroup.Subnets { @@ -142,6 +132,7 @@ func resourceAwsDbSubnetGroupRead(d *schema.ResourceData, meta interface{}) erro if err != nil { log.Printf("[DEBUG] Error building ARN for DB Subnet Group, not setting Tags for group %s", *subnetGroup.DBSubnetGroupName) } else { + d.Set("arn", arn) resp, err := conn.ListTagsForResource(&rds.ListTagsForResourceInput{ ResourceName: aws.String(arn), }) @@ -198,7 +189,7 @@ func resourceAwsDbSubnetGroupUpdate(d *schema.ResourceData, meta interface{}) er func resourceAwsDbSubnetGroupDelete(d *schema.ResourceData, meta interface{}) error { stateConf := &resource.StateChangeConf{ Pending: []string{"pending"}, - Target: "destroyed", + Target: []string{"destroyed"}, Refresh: resourceAwsDbSubnetGroupDeleteRefreshFunc(d, meta), Timeout: 3 * time.Minute, MinTimeout: 1 * time.Second, @@ -246,3 +237,20 @@ func buildRDSsubgrpARN(d *schema.ResourceData, meta interface{}) (string, error) arn := fmt.Sprintf("arn:aws:rds:%s:%s:subgrp:%s", region, accountID, d.Id()) return arn, nil } + +func validateSubnetGroupName(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + if !regexp.MustCompile(`^[ .0-9a-z-_]+$`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "only alphanumeric characters, hyphens, underscores, periods, and spaces allowed in %q", k)) + } + if len(value) > 255 { + errors = append(errors, fmt.Errorf( + "%q cannot be longer than 255 characters", k)) + } + if regexp.MustCompile(`(?i)^default$`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "%q is not allowed as %q", "Default", k)) + } + return +} diff --git a/builtin/providers/aws/resource_aws_db_subnet_group_test.go b/builtin/providers/aws/resource_aws_db_subnet_group_test.go index d943294a97..b3049f035f 100644 --- a/builtin/providers/aws/resource_aws_db_subnet_group_test.go +++ b/builtin/providers/aws/resource_aws_db_subnet_group_test.go @@ -66,6 +66,38 @@ func TestAccAWSDBSubnetGroup_withUndocumentedCharacters(t *testing.T) { }) } +func TestResourceAWSDBSubnetGroupNameValidation(t *testing.T) { + cases := []struct { + Value string + ErrCount int + }{ + { + Value: "tEsting", + ErrCount: 1, + }, + { + Value: "testing?", + ErrCount: 1, + }, + { + Value: "default", + 
ErrCount: 1, + }, + { + Value: randomString(300), + ErrCount: 1, + }, + } + + for _, tc := range cases { + _, errors := validateSubnetGroupName(tc.Value, "aws_db_subnet_group") + + if len(errors) != tc.ErrCount { + t.Fatalf("Expected %q to trigger a validation error", tc.Value) + } + } +} + func testAccCheckDBSubnetGroupDestroy(s *terraform.State) error { conn := testAccProvider.Meta().(*AWSClient).rdsconn @@ -149,7 +181,7 @@ resource "aws_subnet" "bar" { } resource "aws_db_subnet_group" "foo" { - name = "FOO" + name = "foo" description = "foo description" subnet_ids = ["${aws_subnet.foo.id}", "${aws_subnet.bar.id}"] tags { diff --git a/builtin/providers/aws/resource_aws_directory_service_directory.go b/builtin/providers/aws/resource_aws_directory_service_directory.go index 1fdb9491ee..a57527b763 100644 --- a/builtin/providers/aws/resource_aws_directory_service_directory.go +++ b/builtin/providers/aws/resource_aws_directory_service_directory.go @@ -8,10 +8,17 @@ import ( "github.com/hashicorp/terraform/helper/schema" "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/directoryservice" "github.com/hashicorp/terraform/helper/resource" ) +var directoryCreationFuncs = map[string]func(*directoryservice.DirectoryService, *schema.ResourceData) (string, error){ + "SimpleAD": createSimpleDirectoryService, + "MicrosoftAD": createActiveDirectoryService, + "ADConnector": createDirectoryConnector, +} + func resourceAwsDirectoryServiceDirectory() *schema.Resource { return &schema.Resource{ Create: resourceAwsDirectoryServiceDirectoryCreate, @@ -32,7 +39,7 @@ func resourceAwsDirectoryServiceDirectory() *schema.Resource { }, "size": &schema.Schema{ Type: schema.TypeString, - Required: true, + Optional: true, ForceNew: true, }, "alias": &schema.Schema{ @@ -54,7 +61,8 @@ }, "vpc_settings": &schema.Schema{ Type: schema.TypeList, - Required: true, + Optional: true, + ForceNew: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "subnet_ids": &schema.Schema{ @@ -72,6 +80,39 @@ }, }, }, + "connect_settings": &schema.Schema{ + Type: schema.TypeList, + Optional: true, + ForceNew: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "customer_username": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "customer_dns_ips": &schema.Schema{ + Type: schema.TypeSet, + Required: true, + ForceNew: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + }, + "subnet_ids": &schema.Schema{ + Type: schema.TypeSet, + Required: true, + ForceNew: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + }, + "vpc_id": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + }, + }, + }, "enable_sso": &schema.Schema{ Type: schema.TypeBool, Optional: true, @@ -89,14 +130,120 @@ }, "type": &schema.Schema{ Type: schema.TypeString, - Computed: true, + Optional: true, + Default: "SimpleAD", + ForceNew: true, + ValidateFunc: func(v interface{}, k string) (ws []string, es []error) { + validTypes := []string{"SimpleAD", "MicrosoftAD", "ADConnector"} + value := v.(string) + for validType := range directoryCreationFuncs { + if validType == value { + return + } + } + es = append(es, fmt.Errorf("%q must be one of %q", k, validTypes)) + return + }, }, },
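// Editor's note — illustrative sketch, not part of this change. The type attribute
// above is validated against the keys of directoryCreationFuncs, and Create later
// dispatches through the same map, so the validator and the dispatcher cannot drift
// apart. The pattern in isolation, with hypothetical names (fmt assumed imported):
var createFuncs = map[string]func(name string) (string, error){
	"SimpleAD":    func(name string) (string, error) { return "d-simple", nil },
	"MicrosoftAD": func(name string) (string, error) { return "d-msad", nil },
}

func create(kind, name string) (string, error) {
	fn, ok := createFuncs[kind]
	if !ok {
		// Unreachable when the schema validator shares the same map.
		return "", fmt.Errorf("unsupported directory type: %q", kind)
	}
	return fn(name)
}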
} } -func resourceAwsDirectoryServiceDirectoryCreate(d *schema.ResourceData, meta interface{}) error { - dsconn := meta.(*AWSClient).dsconn +func buildVpcSettings(d *schema.ResourceData) (vpcSettings *directoryservice.DirectoryVpcSettings, err error) { + if v, ok := d.GetOk("vpc_settings"); !ok { + return nil, fmt.Errorf("vpc_settings is required for type = SimpleAD or MicrosoftAD") + } else { + settings := v.([]interface{}) + + if len(settings) > 1 { + return nil, fmt.Errorf("Only a single vpc_settings block is expected") + } else if len(settings) == 1 { + s := settings[0].(map[string]interface{}) + var subnetIds []*string + for _, id := range s["subnet_ids"].(*schema.Set).List() { + subnetIds = append(subnetIds, aws.String(id.(string))) + } + + vpcSettings = &directoryservice.DirectoryVpcSettings{ + SubnetIds: subnetIds, + VpcId: aws.String(s["vpc_id"].(string)), + } + } + } + + return vpcSettings, nil +} + +func buildConnectSettings(d *schema.ResourceData) (connectSettings *directoryservice.DirectoryConnectSettings, err error) { + if v, ok := d.GetOk("connect_settings"); !ok { + return nil, fmt.Errorf("connect_settings is required for type = ADConnector") + } else { + settings := v.([]interface{}) + + if len(settings) > 1 { + return nil, fmt.Errorf("Only a single connect_settings block is expected") + } else if len(settings) == 1 { + s := settings[0].(map[string]interface{}) + + var subnetIds []*string + for _, id := range s["subnet_ids"].(*schema.Set).List() { + subnetIds = append(subnetIds, aws.String(id.(string))) + } + + var customerDnsIps []*string + for _, id := range s["customer_dns_ips"].(*schema.Set).List() { + customerDnsIps = append(customerDnsIps, aws.String(id.(string))) + } + + connectSettings = &directoryservice.DirectoryConnectSettings{ + CustomerDnsIps: customerDnsIps, + CustomerUserName: aws.String(s["customer_username"].(string)), + SubnetIds: subnetIds, + VpcId: aws.String(s["vpc_id"].(string)), + } + } + } + + return connectSettings, nil +} + +func createDirectoryConnector(dsconn *directoryservice.DirectoryService, d *schema.ResourceData) (directoryId string, err error) { + if _, ok := d.GetOk("size"); !ok { + return "", fmt.Errorf("size is required for type = ADConnector") + } + + input := directoryservice.ConnectDirectoryInput{ + Name: aws.String(d.Get("name").(string)), + Password: aws.String(d.Get("password").(string)), + Size: aws.String(d.Get("size").(string)), + } + + if v, ok := d.GetOk("description"); ok { + input.Description = aws.String(v.(string)) + } + if v, ok := d.GetOk("short_name"); ok { + input.ShortName = aws.String(v.(string)) + } + + input.ConnectSettings, err = buildConnectSettings(d) + if err != nil { + return "", err + } + + log.Printf("[DEBUG] Creating Directory Connector: %s", input) + out, err := dsconn.ConnectDirectory(&input) + if err != nil { + return "", err + } + log.Printf("[DEBUG] Directory Connector created: %s", out) + + return *out.DirectoryId, nil +} + +func createSimpleDirectoryService(dsconn *directoryservice.DirectoryService, d *schema.ResourceData) (directoryId string, err error) { + if _, ok := d.GetOk("size"); !ok { + return "", fmt.Errorf("size is required for type = SimpleAD") + } input := directoryservice.CreateDirectoryInput{ Name: aws.String(d.Get("name").(string)), @@ -111,39 +258,70 @@ func resourceAwsDirectoryServiceDirectoryCreate(d *schema.ResourceData, meta int input.ShortName = aws.String(v.(string)) } - if v, ok := d.GetOk("vpc_settings"); ok { - settings := v.([]interface{}) - - if len(settings) > 1 { - 
return fmt.Errorf("Only a single vpc_settings block is expected") - } else if len(settings) == 1 { - s := settings[0].(map[string]interface{}) - var subnetIds []*string - for _, id := range s["subnet_ids"].(*schema.Set).List() { - subnetIds = append(subnetIds, aws.String(id.(string))) - } - - vpcSettings := directoryservice.DirectoryVpcSettings{ - SubnetIds: subnetIds, - VpcId: aws.String(s["vpc_id"].(string)), - } - input.VpcSettings = &vpcSettings - } + input.VpcSettings, err = buildVpcSettings(d) + if err != nil { + return "", err } - log.Printf("[DEBUG] Creating Directory Service: %s", input) + log.Printf("[DEBUG] Creating Simple Directory Service: %s", input) out, err := dsconn.CreateDirectory(&input) + if err != nil { + return "", err + } + log.Printf("[DEBUG] Simple Directory Service created: %s", out) + + return *out.DirectoryId, nil +} + +func createActiveDirectoryService(dsconn *directoryservice.DirectoryService, d *schema.ResourceData) (directoryId string, err error) { + input := directoryservice.CreateMicrosoftADInput{ + Name: aws.String(d.Get("name").(string)), + Password: aws.String(d.Get("password").(string)), + } + + if v, ok := d.GetOk("description"); ok { + input.Description = aws.String(v.(string)) + } + if v, ok := d.GetOk("short_name"); ok { + input.ShortName = aws.String(v.(string)) + } + + input.VpcSettings, err = buildVpcSettings(d) + if err != nil { + return "", err + } + + log.Printf("[DEBUG] Creating Microsoft AD Directory Service: %s", input) + out, err := dsconn.CreateMicrosoftAD(&input) + if err != nil { + return "", err + } + log.Printf("[DEBUG] Microsoft AD Directory Service created: %s", out) + + return *out.DirectoryId, nil +} + +func resourceAwsDirectoryServiceDirectoryCreate(d *schema.ResourceData, meta interface{}) error { + dsconn := meta.(*AWSClient).dsconn + + creationFunc, ok := directoryCreationFuncs[d.Get("type").(string)] + if !ok { + // Shouldn't happen as this is validated above + return fmt.Errorf("Unsupported directory type: %s", d.Get("type")) + } + + directoryId, err := creationFunc(dsconn, d) if err != nil { return err } - log.Printf("[DEBUG] Directory Service created: %s", out) - d.SetId(*out.DirectoryId) + + d.SetId(directoryId) // Wait for creation log.Printf("[DEBUG] Waiting for DS (%q) to become available", d.Id()) stateConf := &resource.StateChangeConf{ Pending: []string{"Requested", "Creating", "Created"}, - Target: "Active", + Target: []string{"Active"}, Refresh: func() (interface{}, string, error) { resp, err := dsconn.DescribeDirectories(&directoryservice.DescribeDirectoriesInput{ DirectoryIds: []*string{aws.String(d.Id())}, @@ -158,7 +336,7 @@ func resourceAwsDirectoryServiceDirectoryCreate(d *schema.ResourceData, meta int d.Id(), *ds.Stage) return ds, *ds.Stage, nil }, - Timeout: 10 * time.Minute, + Timeout: 30 * time.Minute, } if _, err := stateConf.WaitForState(); err != nil { return fmt.Errorf( @@ -233,14 +411,22 @@ func resourceAwsDirectoryServiceDirectoryRead(d *schema.ResourceData, meta inter if dir.Description != nil { d.Set("description", *dir.Description) } - d.Set("dns_ip_addresses", schema.NewSet(schema.HashString, flattenStringList(dir.DnsIpAddrs))) + + if *dir.Type == "ADConnector" { + d.Set("dns_ip_addresses", schema.NewSet(schema.HashString, flattenStringList(dir.ConnectSettings.ConnectIps))) + } else { + d.Set("dns_ip_addresses", schema.NewSet(schema.HashString, flattenStringList(dir.DnsIpAddrs))) + } d.Set("name", *dir.Name) if dir.ShortName != nil { d.Set("short_name", *dir.ShortName) } - d.Set("size", 
*dir.Size) + if dir.Size != nil { + d.Set("size", *dir.Size) + } d.Set("type", *dir.Type) d.Set("vpc_settings", flattenDSVpcSettings(dir.VpcSettings)) + d.Set("connect_settings", flattenDSConnectSettings(dir.DnsIpAddrs, dir.ConnectSettings)) d.Set("enable_sso", *dir.SsoEnabled) return nil @@ -252,6 +438,8 @@ func resourceAwsDirectoryServiceDirectoryDelete(d *schema.ResourceData, meta int input := directoryservice.DeleteDirectoryInput{ DirectoryId: aws.String(d.Id()), } + + log.Printf("[DEBUG] Delete Directory input: %s", input) _, err := dsconn.DeleteDirectory(&input) if err != nil { return err @@ -261,17 +449,20 @@ func resourceAwsDirectoryServiceDirectoryDelete(d *schema.ResourceData, meta int log.Printf("[DEBUG] Waiting for DS (%q) to be deleted", d.Id()) stateConf := &resource.StateChangeConf{ Pending: []string{"Deleting"}, - Target: "", + Target: []string{"Deleted"}, Refresh: func() (interface{}, string, error) { resp, err := dsconn.DescribeDirectories(&directoryservice.DescribeDirectoriesInput{ DirectoryIds: []*string{aws.String(d.Id())}, }) if err != nil { - return nil, "", err + if dserr, ok := err.(awserr.Error); ok && dserr.Code() == "EntityDoesNotExistException" { + return 42, "Deleted", nil + } + return nil, "error", err } if len(resp.DirectoryDescriptions) == 0 { - return nil, "", nil + return 42, "Deleted", nil } ds := resp.DirectoryDescriptions[0] @@ -279,7 +470,7 @@ func resourceAwsDirectoryServiceDirectoryDelete(d *schema.ResourceData, meta int d.Id(), *ds.Stage) return ds, *ds.Stage, nil }, - Timeout: 10 * time.Minute, + Timeout: 30 * time.Minute, } if _, err := stateConf.WaitForState(); err != nil { return fmt.Errorf( diff --git a/builtin/providers/aws/resource_aws_directory_service_directory_test.go b/builtin/providers/aws/resource_aws_directory_service_directory_test.go index b10174bdb0..779e56df4d 100644 --- a/builtin/providers/aws/resource_aws_directory_service_directory_test.go +++ b/builtin/providers/aws/resource_aws_directory_service_directory_test.go @@ -5,6 +5,7 @@ import ( "testing" "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/directoryservice" "github.com/hashicorp/terraform/helper/resource" @@ -27,6 +28,38 @@ func TestAccAWSDirectoryServiceDirectory_basic(t *testing.T) { }) } +func TestAccAWSDirectoryServiceDirectory_microsoft(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckDirectoryServiceDirectoryDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccDirectoryServiceDirectoryConfig_microsoft, + Check: resource.ComposeTestCheckFunc( + testAccCheckServiceDirectoryExists("aws_directory_service_directory.bar"), + ), + }, + }, + }) +} + +func TestAccAWSDirectoryServiceDirectory_connector(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckDirectoryServiceDirectoryDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccDirectoryServiceDirectoryConfig_connector, + Check: resource.ComposeTestCheckFunc( + testAccCheckServiceDirectoryExists("aws_directory_service_directory.connector"), + ), + }, + }, + }) +} + func TestAccAWSDirectoryServiceDirectory_withAliasAndSso(t *testing.T) { resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -65,12 +98,33 @@ func TestAccAWSDirectoryServiceDirectory_withAliasAndSso(t *testing.T) { } func 
testAccCheckDirectoryServiceDirectoryDestroy(s *terraform.State) error { - if len(s.RootModule().Resources) > 0 { - return fmt.Errorf("Expected all resources to be gone, but found: %#v", - s.RootModule().Resources) + dsconn := testAccProvider.Meta().(*AWSClient).dsconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_directory_service_directory" { + continue + } + + input := directoryservice.DescribeDirectoriesInput{ + DirectoryIds: []*string{aws.String(rs.Primary.ID)}, + } + out, err := dsconn.DescribeDirectories(&input) + if err != nil { + // EntityDoesNotExistException means it's gone, this is good + if dserr, ok := err.(awserr.Error); ok && dserr.Code() == "EntityDoesNotExistException" { + return nil + } + return err + } + + if out != nil && len(out.DirectoryDescriptions) > 0 { + return fmt.Errorf("Expected AWS Directory Service Directory to be gone, but was still found") + } + + return nil } - return nil + return fmt.Errorf("Default error in Service Directory Test") } func testAccCheckServiceDirectoryExists(name string) resource.TestCheckFunc { @@ -192,6 +246,76 @@ resource "aws_subnet" "bar" { } ` +const testAccDirectoryServiceDirectoryConfig_connector = ` +resource "aws_directory_service_directory" "bar" { + name = "corp.notexample.com" + password = "SuperSecretPassw0rd" + size = "Small" + + vpc_settings { + vpc_id = "${aws_vpc.main.id}" + subnet_ids = ["${aws_subnet.foo.id}", "${aws_subnet.bar.id}"] + } +} + +resource "aws_directory_service_directory" "connector" { + name = "corp.notexample.com" + password = "SuperSecretPassw0rd" + size = "Small" + type = "ADConnector" + + connect_settings { + customer_dns_ips = ["${aws_directory_service_directory.bar.dns_ip_addresses}"] + customer_username = "Administrator" + vpc_id = "${aws_vpc.main.id}" + subnet_ids = ["${aws_subnet.foo.id}", "${aws_subnet.bar.id}"] + } +} + +resource "aws_vpc" "main" { + cidr_block = "10.0.0.0/16" +} + +resource "aws_subnet" "foo" { + vpc_id = "${aws_vpc.main.id}" + availability_zone = "us-west-2a" + cidr_block = "10.0.1.0/24" +} +resource "aws_subnet" "bar" { + vpc_id = "${aws_vpc.main.id}" + availability_zone = "us-west-2b" + cidr_block = "10.0.2.0/24" +} +` + +const testAccDirectoryServiceDirectoryConfig_microsoft = ` +resource "aws_directory_service_directory" "bar" { + name = "corp.notexample.com" + password = "SuperSecretPassw0rd" + type = "MicrosoftAD" + + vpc_settings { + vpc_id = "${aws_vpc.main.id}" + subnet_ids = ["${aws_subnet.foo.id}", "${aws_subnet.bar.id}"] + } +} + +resource "aws_vpc" "main" { + cidr_block = "10.0.0.0/16" +} + +resource "aws_subnet" "foo" { + vpc_id = "${aws_vpc.main.id}" + availability_zone = "us-west-2a" + cidr_block = "10.0.1.0/24" +} +resource "aws_subnet" "bar" { + vpc_id = "${aws_vpc.main.id}" + availability_zone = "us-west-2b" + cidr_block = "10.0.2.0/24" +} +` + var randomInteger = genRandInt() var testAccDirectoryServiceDirectoryConfig_withAlias = fmt.Sprintf(` resource "aws_directory_service_directory" "bar_a" { diff --git a/builtin/providers/aws/resource_aws_dynamodb_table.go b/builtin/providers/aws/resource_aws_dynamodb_table.go index 88146662b5..775532f0d0 100644 --- a/builtin/providers/aws/resource_aws_dynamodb_table.go +++ b/builtin/providers/aws/resource_aws_dynamodb_table.go @@ -4,8 +4,10 @@ import ( "bytes" "fmt" "log" + "strings" "time" + "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" "github.com/aws/aws-sdk-go/aws" @@ -158,6 +160,21 @@ func resourceAwsDynamoDbTable() *schema.Resource { 
return hashcode.String(buf.String()) }, }, + "stream_enabled": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Computed: true, + }, + "stream_view_type": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + StateFunc: func(v interface{}) string { + value := v.(string) + return strings.ToUpper(value) + }, + ValidateFunc: validateStreamViewType, + }, }, } } @@ -263,6 +280,16 @@ func resourceAwsDynamoDbTableCreate(d *schema.ResourceData, meta interface{}) er req.GlobalSecondaryIndexes = globalSecondaryIndexes } + if _, ok := d.GetOk("stream_enabled"); ok { + + req.StreamSpecification = &dynamodb.StreamSpecification{ + StreamEnabled: aws.Bool(d.Get("stream_enabled").(bool)), + StreamViewType: aws.String(d.Get("stream_view_type").(string)), + } + + fmt.Printf("[DEBUG] Adding StreamSpecifications to the table") + } + attemptCount := 1 for attemptCount <= DYNAMODB_MAX_THROTTLE_RETRIES { output, err := dynamodbconn.CreateTable(req) @@ -340,6 +367,25 @@ func resourceAwsDynamoDbTableUpdate(d *schema.ResourceData, meta interface{}) er waitForTableToBeActive(d.Id(), meta) } + if d.HasChange("stream_enabled") || d.HasChange("stream_view_type") { + req := &dynamodb.UpdateTableInput{ + TableName: aws.String(d.Id()), + } + + req.StreamSpecification = &dynamodb.StreamSpecification{ + StreamEnabled: aws.Bool(d.Get("stream_enabled").(bool)), + StreamViewType: aws.String(d.Get("stream_view_type").(string)), + } + + _, err := dynamodbconn.UpdateTable(req) + + if err != nil { + return err + } + + waitForTableToBeActive(d.Id(), meta) + } + if d.HasChange("global_secondary_index") { log.Printf("[DEBUG] Changed GSI data") req := &dynamodb.UpdateTableInput{ @@ -587,6 +633,11 @@ func resourceAwsDynamoDbTableRead(d *schema.ResourceData, meta interface{}) erro log.Printf("[DEBUG] Added GSI: %s - Read: %d / Write: %d", gsi["name"], gsi["read_capacity"], gsi["write_capacity"]) } + if table.StreamSpecification != nil { + d.Set("stream_view_type", table.StreamSpecification.StreamViewType) + d.Set("stream_enabled", table.StreamSpecification.StreamEnabled) + } + err = d.Set("global_secondary_index", gsiList) if err != nil { return err @@ -610,6 +661,37 @@ func resourceAwsDynamoDbTableDelete(d *schema.ResourceData, meta interface{}) er if err != nil { return err } + + params := &dynamodb.DescribeTableInput{ + TableName: aws.String(d.Id()), + } + + err = resource.Retry(10*time.Minute, func() error { + t, err := dynamodbconn.DescribeTable(params) + if err != nil { + if awserr, ok := err.(awserr.Error); ok && awserr.Code() == "ResourceNotFoundException" { + return nil + } + // Didn't recognize the error, so shouldn't retry. 
+ return resource.RetryError{Err: err} + } + + if t != nil { + if t.Table.TableStatus != nil && strings.ToLower(*t.Table.TableStatus) == "deleting" { + log.Printf("[DEBUG] AWS Dynamo DB table (%s) is still deleting", d.Id()) + return fmt.Errorf("still deleting") + } + } + + // we should be not found or deleting, so error here + return resource.RetryError{Err: fmt.Errorf("[ERR] Error deleting Dynamo DB table, unexpected state: %s", t)} + }) + + // check error from retry + if err != nil { + return err + } + return nil } diff --git a/builtin/providers/aws/resource_aws_dynamodb_table_test.go b/builtin/providers/aws/resource_aws_dynamodb_table_test.go index adf457f0a6..114837ce38 100644 --- a/builtin/providers/aws/resource_aws_dynamodb_table_test.go +++ b/builtin/providers/aws/resource_aws_dynamodb_table_test.go @@ -2,6 +2,7 @@ package aws import ( "fmt" + "log" "testing" "github.com/aws/aws-sdk-go/aws" @@ -11,7 +12,7 @@ import ( "github.com/hashicorp/terraform/terraform" ) -func TestAccAWSDynamoDbTable(t *testing.T) { +func TestAccAWSDynamoDbTable_basic(t *testing.T) { resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, @@ -33,6 +34,66 @@ func TestAccAWSDynamoDbTable(t *testing.T) { }) } +func TestAccAWSDynamoDbTable_streamSpecification(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDynamoDbTableDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAWSDynamoDbConfigStreamSpecification, + Check: resource.ComposeTestCheckFunc( + testAccCheckInitialAWSDynamoDbTableExists("aws_dynamodb_table.basic-dynamodb-table"), + resource.TestCheckResourceAttr( + "aws_dynamodb_table.basic-dynamodb-table", "stream_enabled", "true"), + resource.TestCheckResourceAttr( + "aws_dynamodb_table.basic-dynamodb-table", "stream_view_type", "KEYS_ONLY"), + ), + }, + }, + }) +} + +func TestResourceAWSDynamoDbTableStreamViewType_validation(t *testing.T) { + cases := []struct { + Value string + ErrCount int + }{ + { + Value: "KEYS-ONLY", + ErrCount: 1, + }, + { + Value: "RANDOM-STRING", + ErrCount: 1, + }, + { + Value: "KEYS_ONLY", + ErrCount: 0, + }, + { + Value: "NEW_AND_OLD_IMAGES", + ErrCount: 0, + }, + { + Value: "NEW_IMAGE", + ErrCount: 0, + }, + { + Value: "OLD_IMAGE", + ErrCount: 0, + }, + } + + for _, tc := range cases { + _, errors := validateStreamViewType(tc.Value, "aws_dynamodb_table_stream_view_type") + + if len(errors) != tc.ErrCount { + t.Fatalf("Expected the DynamoDB stream_view_type to trigger a validation error") + } + } +} + func testAccCheckAWSDynamoDbTableDestroy(s *terraform.State) error { conn := testAccProvider.Meta().(*AWSClient).dynamodbconn @@ -41,21 +102,23 @@ func testAccCheckAWSDynamoDbTableDestroy(s *terraform.State) error { continue } - fmt.Printf("[DEBUG] Checking if DynamoDB table %s exists", rs.Primary.ID) + log.Printf("[DEBUG] Checking if DynamoDB table %s exists", rs.Primary.ID) // Check if queue exists by checking for its attributes params := &dynamodb.DescribeTableInput{ TableName: aws.String(rs.Primary.ID), } + _, err := conn.DescribeTable(params) if err == nil { return fmt.Errorf("DynamoDB table %s still exists. 
Failing!", rs.Primary.ID) } // Verify the error is what we want - _, ok := err.(awserr.Error) - if !ok { - return err + if dbErr, ok := err.(awserr.Error); ok && dbErr.Code() == "ResourceNotFoundException" { + return nil } + + return err } return nil @@ -295,3 +358,44 @@ resource "aws_dynamodb_table" "basic-dynamodb-table" { } } ` + +const testAccAWSDynamoDbConfigStreamSpecification = ` +resource "aws_dynamodb_table" "basic-dynamodb-table" { + name = "TerraformTestStreamTable" + read_capacity = 10 + write_capacity = 20 + hash_key = "TestTableHashKey" + range_key = "TestTableRangeKey" + attribute { + name = "TestTableHashKey" + type = "S" + } + attribute { + name = "TestTableRangeKey" + type = "S" + } + attribute { + name = "TestLSIRangeKey" + type = "N" + } + attribute { + name = "TestGSIRangeKey" + type = "S" + } + local_secondary_index { + name = "TestTableLSI" + range_key = "TestLSIRangeKey" + projection_type = "ALL" + } + global_secondary_index { + name = "InitialTestTableGSI" + hash_key = "TestTableHashKey" + range_key = "TestGSIRangeKey" + write_capacity = 10 + read_capacity = 10 + projection_type = "KEYS_ONLY" + } + stream_enabled = true + stream_view_type = "KEYS_ONLY" +} +` diff --git a/builtin/providers/aws/resource_aws_ebs_volume.go b/builtin/providers/aws/resource_aws_ebs_volume.go index 1680b4f533..5abea1f2ff 100644 --- a/builtin/providers/aws/resource_aws_ebs_volume.go +++ b/builtin/providers/aws/resource_aws_ebs_volume.go @@ -76,9 +76,6 @@ func resourceAwsEbsVolumeCreate(d *schema.ResourceData, meta interface{}) error if value, ok := d.GetOk("encrypted"); ok { request.Encrypted = aws.Bool(value.(bool)) } - if value, ok := d.GetOk("iops"); ok { - request.Iops = aws.Int64(int64(value.(int))) - } if value, ok := d.GetOk("kms_key_id"); ok { request.KmsKeyId = aws.String(value.(string)) } @@ -88,22 +85,39 @@ func resourceAwsEbsVolumeCreate(d *schema.ResourceData, meta interface{}) error if value, ok := d.GetOk("snapshot_id"); ok { request.SnapshotId = aws.String(value.(string)) } + + // IOPs are only valid, and required for, storage type io1. The current minimu + // is 100. Instead of a hard validation we we only apply the IOPs to the + // request if the type is io1, and log a warning otherwise. This allows users + // to "disable" iops. 
+ var t string if value, ok := d.GetOk("type"); ok { - request.VolumeType = aws.String(value.(string)) + t = value.(string) + request.VolumeType = aws.String(t) } + iops := d.Get("iops").(int) + if t != "io1" && iops > 0 { + log.Printf("[WARN] IOPs is only valid for storage type io1 for EBS Volumes") + } else if t == "io1" { + // We add the iops value without validating its size, to allow AWS to + // enforce a size requirement (currently 100) + request.Iops = aws.Int64(int64(iops)) + } + + log.Printf( + "[DEBUG] EBS Volume create opts: %s", request) result, err := conn.CreateVolume(request) if err != nil { return fmt.Errorf("Error creating EC2 volume: %s", err) } - log.Printf( - "[DEBUG] Waiting for Volume (%s) to become available", - d.Id()) + log.Println( + "[DEBUG] Waiting for Volume to become available") stateConf := &resource.StateChangeConf{ Pending: []string{"creating"}, - Target: "available", + Target: []string{"available"}, Refresh: volumeStateRefreshFunc(conn, *result.VolumeId), Timeout: 5 * time.Minute, Delay: 10 * time.Second, @@ -199,9 +213,6 @@ func readVolume(d *schema.ResourceData, volume *ec2.Volume) error { if volume.Encrypted != nil { d.Set("encrypted", *volume.Encrypted) } - if volume.Iops != nil { - d.Set("iops", *volume.Iops) - } if volume.KmsKeyId != nil { d.Set("kms_key_id", *volume.KmsKeyId) } @@ -214,6 +225,17 @@ func readVolume(d *schema.ResourceData, volume *ec2.Volume) error { if volume.VolumeType != nil { d.Set("type", *volume.VolumeType) } + + if volume.VolumeType != nil && *volume.VolumeType == "io1" { + // Only set the iops attribute if the volume type is io1. Setting otherwise + // can trigger a refresh/plan loop based on the computed value that is given + // from AWS, and prevent us from specifying 0 as a valid iops.
+ // See https://github.com/hashicorp/terraform/pull/4146 + if volume.Iops != nil { + d.Set("iops", *volume.Iops) + } + } + if volume.Tags != nil { d.Set("tags", tagsToMap(volume.Tags)) } diff --git a/builtin/providers/aws/resource_aws_ebs_volume_test.go b/builtin/providers/aws/resource_aws_ebs_volume_test.go index aab92eb011..940c8157ca 100644 --- a/builtin/providers/aws/resource_aws_ebs_volume_test.go +++ b/builtin/providers/aws/resource_aws_ebs_volume_test.go @@ -26,6 +26,22 @@ func TestAccAWSEBSVolume_basic(t *testing.T) { }) } +func TestAccAWSEBSVolume_NoIops(t *testing.T) { + var v ec2.Volume + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAwsEbsVolumeConfigWithNoIops, + Check: resource.ComposeTestCheckFunc( + testAccCheckVolumeExists("aws_ebs_volume.iops_test", &v), + ), + }, + }, + }) +} + func TestAccAWSEBSVolume_withTags(t *testing.T) { var v ec2.Volume resource.Test(t, resource.TestCase{ @@ -86,3 +102,15 @@ resource "aws_ebs_volume" "tags_test" { } } ` + +const testAccAwsEbsVolumeConfigWithNoIops = ` +resource "aws_ebs_volume" "iops_test" { + availability_zone = "us-west-2a" + size = 10 + type = "gp2" + iops = 0 + tags { + Name = "TerraformTest" + } +} +` diff --git a/builtin/providers/aws/resource_aws_ecr_repository.go b/builtin/providers/aws/resource_aws_ecr_repository.go new file mode 100644 index 0000000000..ca94bcdb3b --- /dev/null +++ b/builtin/providers/aws/resource_aws_ecr_repository.go @@ -0,0 +1,106 @@ +package aws + +import ( + "log" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/service/ecr" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsEcrRepository() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsEcrRepositoryCreate, + Read: resourceAwsEcrRepositoryRead, + Delete: resourceAwsEcrRepositoryDelete, + + Schema: map[string]*schema.Schema{ + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "arn": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + "registry_id": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + }, + } +} + +func resourceAwsEcrRepositoryCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).ecrconn + + input := ecr.CreateRepositoryInput{ + RepositoryName: aws.String(d.Get("name").(string)), + } + + log.Printf("[DEBUG] Creating ECR repository: %s", input) + out, err := conn.CreateRepository(&input) + if err != nil { + return err + } + + repository := *out.Repository + + log.Printf("[DEBUG] ECR repository created: %q", *repository.RepositoryArn) + + d.SetId(*repository.RepositoryName) + d.Set("arn", *repository.RepositoryArn) + d.Set("registry_id", *repository.RegistryId) + + return resourceAwsEcrRepositoryRead(d, meta) +} + +func resourceAwsEcrRepositoryRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).ecrconn + + log.Printf("[DEBUG] Reading repository %s", d.Id()) + out, err := conn.DescribeRepositories(&ecr.DescribeRepositoriesInput{ + RegistryId: aws.String(d.Get("registry_id").(string)), + RepositoryNames: []*string{aws.String(d.Id())}, + }) + if err != nil { + if ecrerr, ok := err.(awserr.Error); ok && ecrerr.Code() == "RepositoryNotFoundException" { + d.SetId("") + return nil + } + return err + } + + repository := out.Repositories[0] + + log.Printf("[DEBUG] Received repository %s", out) + + d.SetId(*repository.RepositoryName) + d.Set("arn", *repository.RepositoryArn) + d.Set("registry_id", *repository.RegistryId) + + return nil +} + +func resourceAwsEcrRepositoryDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).ecrconn + + _, err := conn.DeleteRepository(&ecr.DeleteRepositoryInput{ + RepositoryName: aws.String(d.Id()), + RegistryId: aws.String(d.Get("registry_id").(string)), + Force: aws.Bool(true), + }) + if err != nil { + if ecrerr, ok := err.(awserr.Error); ok && ecrerr.Code() == "RepositoryNotFoundException" { + d.SetId("") + return nil + } + return err + } + + log.Printf("[DEBUG] repository %q deleted.", d.Get("arn").(string)) + + return nil +}
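// Editor's note — illustrative sketch, not part of this change. Both ECR resources
// in this commit follow the usual Terraform drift convention: when a Describe/Get
// call fails with a *NotFound error code, the object was deleted out of band, so the
// ID is cleared (d.SetId("")) and nil is returned; Terraform then plans a re-create
// instead of failing the refresh. The shape of that check as a standalone helper
// (hypothetical name; awserr and schema assumed imported as in these files):
func clearIfGone(d *schema.ResourceData, err error, notFoundCode string) error {
	if err == nil {
		return nil
	}
	if awsErr, ok := err.(awserr.Error); ok && awsErr.Code() == notFoundCode {
		d.SetId("") // drop from state so the next plan recreates the resource
		return nil
	}
	return err // a real error: surface it to the caller
}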
%s", out) + + d.SetId(*repository.RepositoryName) + d.Set("arn", *repository.RepositoryArn) + d.Set("registry_id", *repository.RegistryId) + + return nil +} + +func resourceAwsEcrRepositoryDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).ecrconn + + _, err := conn.DeleteRepository(&ecr.DeleteRepositoryInput{ + RepositoryName: aws.String(d.Id()), + RegistryId: aws.String(d.Get("registry_id").(string)), + Force: aws.Bool(true), + }) + if err != nil { + if ecrerr, ok := err.(awserr.Error); ok && ecrerr.Code() == "RepositoryNotFoundException" { + d.SetId("") + return nil + } + return err + } + + log.Printf("[DEBUG] repository %q deleted.", d.Get("arn").(string)) + + return nil +} diff --git a/builtin/providers/aws/resource_aws_ecr_repository_policy.go b/builtin/providers/aws/resource_aws_ecr_repository_policy.go new file mode 100644 index 0000000000..8932ea557b --- /dev/null +++ b/builtin/providers/aws/resource_aws_ecr_repository_policy.go @@ -0,0 +1,141 @@ +package aws + +import ( + "log" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/service/ecr" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsEcrRepositoryPolicy() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsEcrRepositoryPolicyCreate, + Read: resourceAwsEcrRepositoryPolicyRead, + Update: resourceAwsEcrRepositoryPolicyUpdate, + Delete: resourceAwsEcrRepositoryPolicyDelete, + + Schema: map[string]*schema.Schema{ + "repository": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "policy": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + "registry_id": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + }, + } +} + +func resourceAwsEcrRepositoryPolicyCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).ecrconn + + input := ecr.SetRepositoryPolicyInput{ + RepositoryName: aws.String(d.Get("repository").(string)), + PolicyText: aws.String(d.Get("policy").(string)), + } + + log.Printf("[DEBUG] Creating ECR resository policy: %s", input) + out, err := conn.SetRepositoryPolicy(&input) + if err != nil { + return err + } + + repositoryPolicy := *out + + log.Printf("[DEBUG] ECR repository policy created: %s", *repositoryPolicy.RepositoryName) + + d.SetId(*repositoryPolicy.RepositoryName) + d.Set("registry_id", *repositoryPolicy.RegistryId) + + return resourceAwsEcrRepositoryPolicyRead(d, meta) +} + +func resourceAwsEcrRepositoryPolicyRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).ecrconn + + log.Printf("[DEBUG] Reading repository policy %s", d.Id()) + out, err := conn.GetRepositoryPolicy(&ecr.GetRepositoryPolicyInput{ + RegistryId: aws.String(d.Get("registry_id").(string)), + RepositoryName: aws.String(d.Id()), + }) + if err != nil { + if ecrerr, ok := err.(awserr.Error); ok { + switch ecrerr.Code() { + case "RepositoryNotFoundException", "RepositoryPolicyNotFoundException": + d.SetId("") + return nil + default: + return err + } + } + return err + } + + log.Printf("[DEBUG] Received repository policy %s", out) + + repositoryPolicy := out + + d.SetId(*repositoryPolicy.RepositoryName) + d.Set("registry_id", *repositoryPolicy.RegistryId) + + return nil +} + +func resourceAwsEcrRepositoryPolicyUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).ecrconn + + if !d.HasChange("policy") { + return nil + } + + input := ecr.SetRepositoryPolicyInput{ + 
RepositoryName: aws.String(d.Get("repository").(string)), + RegistryId: aws.String(d.Get("registry_id").(string)), + PolicyText: aws.String(d.Get("policy").(string)), + } + + out, err := conn.SetRepositoryPolicy(&input) + if err != nil { + return err + } + + repositoryPolicy := *out + + d.SetId(*repositoryPolicy.RepositoryName) + d.Set("registry_id", *repositoryPolicy.RegistryId) + + return nil +} + +func resourceAwsEcrRepositoryPolicyDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).ecrconn + + _, err := conn.DeleteRepositoryPolicy(&ecr.DeleteRepositoryPolicyInput{ + RepositoryName: aws.String(d.Id()), + RegistryId: aws.String(d.Get("registry_id").(string)), + }) + if err != nil { + if ecrerr, ok := err.(awserr.Error); ok { + switch ecrerr.Code() { + case "RepositoryNotFoundException", "RepositoryPolicyNotFoundException": + d.SetId("") + return nil + default: + return err + } + } + return err + } + + log.Printf("[DEBUG] repository policy %s deleted.", d.Id()) + + return nil +} diff --git a/builtin/providers/aws/resource_aws_ecr_repository_policy_test.go b/builtin/providers/aws/resource_aws_ecr_repository_policy_test.go new file mode 100644 index 0000000000..9ff1bffd5f --- /dev/null +++ b/builtin/providers/aws/resource_aws_ecr_repository_policy_test.go @@ -0,0 +1,92 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/service/ecr" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSEcrRepositoryPolicy_basic(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSEcrRepositoryPolicyDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAWSEcrRepositoryPolicy, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEcrRepositoryPolicyExists("aws_ecr_repository_policy.default"), + ), + }, + }, + }) +} + +func testAccCheckAWSEcrRepositoryPolicyDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).ecrconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_ecr_repository_policy" { + continue + } + + _, err := conn.GetRepositoryPolicy(&ecr.GetRepositoryPolicyInput{ + RegistryId: aws.String(rs.Primary.Attributes["registry_id"]), + RepositoryName: aws.String(rs.Primary.Attributes["repository"]), + }) + if err != nil { + if ecrerr, ok := err.(awserr.Error); ok && ecrerr.Code() == "RepositoryNotFoundException" { + return nil + } + return err + } + } + + return nil +} + +func testAccCheckAWSEcrRepositoryPolicyExists(name string) resource.TestCheckFunc { + return func(s *terraform.State) error { + _, ok := s.RootModule().Resources[name] + if !ok { + return fmt.Errorf("Not found: %s", name) + } + + return nil + } +} + +var testAccAWSEcrRepositoryPolicy = ` +# ECR initially only available in us-east-1 +# https://aws.amazon.com/blogs/aws/ec2-container-registry-now-generally-available/ +provider "aws" { + region = "us-east-1" +} +resource "aws_ecr_repository" "foo" { + name = "bar" +} + +resource "aws_ecr_repository_policy" "default" { + repository = "${aws_ecr_repository.foo.name}" + policy = < 0 { - return fmt.Errorf("ECS service still exists:\n%#v", out.Services) + var activeServices []*ecs.Service + for _, svc := range out.Services { + if *svc.Status != "INACTIVE" { + activeServices = append(activeServices, svc) + } + } + if 
len(activeServices) == 0 { + return nil + } + + return fmt.Errorf("ECS service still exists:\n%#v", activeServices) } + return nil } return err @@ -356,7 +391,6 @@ EOF } resource "aws_elb" "main" { - name = "foobar-terraform-test" availability_zones = ["us-west-2a"] listener { @@ -384,6 +418,107 @@ resource "aws_ecs_service" "ghost" { } ` +var tpl_testAccAWSEcsService_withLbChanges = ` +resource "aws_ecs_cluster" "main" { + name = "terraformecstest12" +} + +resource "aws_ecs_task_definition" "with_lb_changes" { + family = "ghost_lbd" + container_definitions = < 0 { + return fmt.Errorf("still exists") + } + } else { + req := &ec2.DescribeAddressesInput{ + PublicIps: []*string{aws.String(rs.Primary.ID)}, + } + describe, err := conn.DescribeAddresses(req) + if err != nil { + // Verify the error is what we want + if ae, ok := err.(awserr.Error); ok && ae.Code() == "InvalidAllocationID.NotFound" { + continue + } + return err + } - if providerErr.Code() != "InvalidAllocationID.NotFound" { - return fmt.Errorf("Unexpected error: %s", err) + if len(describe.Addresses) > 0 { + return fmt.Errorf("still exists") + } } } diff --git a/builtin/providers/aws/resource_aws_elasticache_cluster.go b/builtin/providers/aws/resource_aws_elasticache_cluster.go index cffcdab2de..8e34b01ca1 100644 --- a/builtin/providers/aws/resource_aws_elasticache_cluster.go +++ b/builtin/providers/aws/resource_aws_elasticache_cluster.go @@ -120,6 +120,10 @@ func resourceAwsElasticacheCluster() *schema.Resource { Type: schema.TypeInt, Computed: true, }, + "availability_zone": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, }, }, }, @@ -162,6 +166,30 @@ func resourceAwsElasticacheCluster() *schema.Resource { }, }, + "az_mode": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, + + "availability_zone": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, + + "availability_zones": &schema.Schema{ + Type: schema.TypeSet, + Optional: true, + ForceNew: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: func(v interface{}) int { + return hashcode.String(v.(string)) + }, + }, + "tags": tagsSchema(), // apply_immediately is used to determine when the update modifications @@ -234,6 +262,20 @@ func resourceAwsElasticacheClusterCreate(d *schema.ResourceData, meta interface{ log.Printf("[DEBUG] Restoring Redis cluster from S3 snapshot: %#v", s) } + if v, ok := d.GetOk("az_mode"); ok { + req.AZMode = aws.String(v.(string)) + } + + if v, ok := d.GetOk("availability_zone"); ok { + req.PreferredAvailabilityZone = aws.String(v.(string)) + } + + preferred_azs := d.Get("availability_zones").(*schema.Set).List() + if len(preferred_azs) > 0 { + azs := expandStringList(preferred_azs) + req.PreferredAvailabilityZones = azs + } + resp, err := conn.CreateCacheCluster(req) if err != nil { return fmt.Errorf("Error creating Elasticache: %s", err) @@ -248,7 +290,7 @@ func resourceAwsElasticacheClusterCreate(d *schema.ResourceData, meta interface{ pending := []string{"creating"} stateConf := &resource.StateChangeConf{ Pending: pending, - Target: "available", + Target: []string{"available"}, Refresh: cacheClusterStateRefreshFunc(conn, d.Id(), "available", pending), Timeout: 10 * time.Minute, Delay: 10 * time.Second, @@ -306,6 +348,7 @@ func resourceAwsElasticacheClusterRead(d *schema.ResourceData, meta interface{}) d.Set("notification_topic_arn", c.NotificationConfiguration.TopicArn) } } + d.Set("availability_zone", 
c.PreferredAvailabilityZone) if err := setCacheNodeData(d, c); err != nil { return err @@ -395,8 +438,21 @@ func resourceAwsElasticacheClusterUpdate(d *schema.ResourceData, meta interface{ } if d.HasChange("num_cache_nodes") { + oraw, nraw := d.GetChange("num_cache_nodes") + o := oraw.(int) + n := nraw.(int) + if v, ok := d.GetOk("az_mode"); ok && v.(string) == "cross-az" && n == 1 { + return fmt.Errorf("[WARN] Error updating Elasticache cluster (%s), error: Cross-AZ mode is not supported with a single cache node.", d.Id()) + } + if n < o { + log.Printf("[INFO] Cluster %s is marked for decreasing cache nodes from %d to %d", d.Id(), o, n) + nodesToRemove := getCacheNodesToRemove(d, o, o-n) + req.CacheNodeIdsToRemove = nodesToRemove + } + req.NumCacheNodes = aws.Int64(int64(d.Get("num_cache_nodes").(int))) requestUpdate = true + } if requestUpdate { @@ -410,7 +466,7 @@ pending := []string{"modifying", "rebooting cache cluster nodes", "snapshotting"} stateConf := &resource.StateChangeConf{ Pending: pending, - Target: "available", + Target: []string{"available"}, Refresh: cacheClusterStateRefreshFunc(conn, d.Id(), "available", pending), Timeout: 5 * time.Minute, Delay: 5 * time.Second, @@ -426,6 +482,16 @@ return resourceAwsElasticacheClusterRead(d, meta) } +func getCacheNodesToRemove(d *schema.ResourceData, oldNumberOfNodes int, cacheNodesToRemove int) []*string { + nodesIdsToRemove := []*string{} + for i := oldNumberOfNodes; i > oldNumberOfNodes-cacheNodesToRemove && i > 0; i-- { + s := fmt.Sprintf("%04d", i) + nodesIdsToRemove = append(nodesIdsToRemove, &s) + } + + return nodesIdsToRemove +} + func setCacheNodeData(d *schema.ResourceData, c *elasticache.CacheCluster) error { sortedCacheNodes := make([]*elasticache.CacheNode, len(c.CacheNodes)) copy(sortedCacheNodes, c.CacheNodes) @@ -434,13 +500,14 @@ cacheNodeData := make([]map[string]interface{}, 0, len(sortedCacheNodes)) for _, node := range sortedCacheNodes { - if node.CacheNodeId == nil || node.Endpoint == nil || node.Endpoint.Address == nil || node.Endpoint.Port == nil { + if node.CacheNodeId == nil || node.Endpoint == nil || node.Endpoint.Address == nil || node.Endpoint.Port == nil || node.CustomerAvailabilityZone == nil { return fmt.Errorf("Unexpected nil pointer in: %s", node) } cacheNodeData = append(cacheNodeData, map[string]interface{}{ - "id": *node.CacheNodeId, - "address": *node.Endpoint.Address, - "port": int(*node.Endpoint.Port), + "id": *node.CacheNodeId, + "address": *node.Endpoint.Address, + "port": int(*node.Endpoint.Port), + "availability_zone": *node.CustomerAvailabilityZone, }) } @@ -470,7 +537,7 @@ func resourceAwsElasticacheClusterDelete(d *schema.ResourceData, meta interface{ log.Printf("[DEBUG] Waiting for deletion: %v", d.Id()) stateConf := &resource.StateChangeConf{ Pending: []string{"creating", "available", "deleting", "incompatible-parameters", "incompatible-network", "restore-failed"}, - Target: "", + Target: []string{}, Refresh: cacheClusterStateRefreshFunc(conn, d.Id(), "", []string{}), Timeout: 10 * time.Minute, Delay: 10 * time.Second, diff --git a/builtin/providers/aws/resource_aws_elasticache_cluster_test.go b/builtin/providers/aws/resource_aws_elasticache_cluster_test.go index a17c5d9b1e..3cbc4790af 100644 ---
a/builtin/providers/aws/resource_aws_elasticache_cluster_test.go +++ b/builtin/providers/aws/resource_aws_elasticache_cluster_test.go @@ -8,6 +8,7 @@ import ( "time" "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/elasticache" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" @@ -72,6 +73,41 @@ func TestAccAWSElasticacheCluster_snapshotsWithUpdates(t *testing.T) { }) } +func TestAccAWSElasticacheCluster_decreasingCacheNodes(t *testing.T) { + var ec elasticache.CacheCluster + + ri := genRandInt() + preConfig := fmt.Sprintf(testAccAWSElasticacheClusterConfigDecreasingNodes, ri, ri, ri) + postConfig := fmt.Sprintf(testAccAWSElasticacheClusterConfigDecreasingNodes_update, ri, ri, ri) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSElasticacheClusterDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: preConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSElasticacheSecurityGroupExists("aws_elasticache_security_group.bar"), + testAccCheckAWSElasticacheClusterExists("aws_elasticache_cluster.bar", &ec), + resource.TestCheckResourceAttr( + "aws_elasticache_cluster.bar", "num_cache_nodes", "3"), + ), + }, + + resource.TestStep{ + Config: postConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSElasticacheSecurityGroupExists("aws_elasticache_security_group.bar"), + testAccCheckAWSElasticacheClusterExists("aws_elasticache_cluster.bar", &ec), + resource.TestCheckResourceAttr( + "aws_elasticache_cluster.bar", "num_cache_nodes", "1"), + ), + }, + }, + }) +} + func TestAccAWSElasticacheCluster_vpc(t *testing.T) { var csg elasticache.CacheSubnetGroup var ec elasticache.CacheCluster @@ -86,6 +122,29 @@ func TestAccAWSElasticacheCluster_vpc(t *testing.T) { testAccCheckAWSElasticacheSubnetGroupExists("aws_elasticache_subnet_group.bar", &csg), testAccCheckAWSElasticacheClusterExists("aws_elasticache_cluster.bar", &ec), testAccCheckAWSElasticacheClusterAttributes(&ec), + resource.TestCheckResourceAttr( + "aws_elasticache_cluster.bar", "availability_zone", "us-west-2a"), + ), + }, + }, + }) +} + +func TestAccAWSElasticacheCluster_multiAZInVpc(t *testing.T) { + var csg elasticache.CacheSubnetGroup + var ec elasticache.CacheCluster + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSElasticacheClusterDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAWSElasticacheClusterMultiAZInVPCConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSElasticacheSubnetGroupExists("aws_elasticache_subnet_group.bar", &csg), + testAccCheckAWSElasticacheClusterExists("aws_elasticache_cluster.bar", &ec), + resource.TestCheckResourceAttr( + "aws_elasticache_cluster.bar", "availability_zone", "Multiple"), ), }, }, @@ -117,6 +176,10 @@ func testAccCheckAWSElasticacheClusterDestroy(s *terraform.State) error { CacheClusterId: aws.String(rs.Primary.ID), }) if err != nil { + // Verify the error is what we want + if awsErr, ok := err.(awserr.Error); ok && awsErr.Code() == "CacheClusterNotFound" { + continue + } return err } if len(res.CacheClusters) > 0 { @@ -260,6 +323,71 @@ resource "aws_elasticache_cluster" "bar" { } ` +var testAccAWSElasticacheClusterConfigDecreasingNodes = ` +provider "aws" { + region = "us-east-1" +} +resource "aws_security_group" "bar" { + name = 
"tf-test-security-group-%03d" + description = "tf-test-security-group-descr" + ingress { + from_port = -1 + to_port = -1 + protocol = "icmp" + cidr_blocks = ["0.0.0.0/0"] + } +} + +resource "aws_elasticache_security_group" "bar" { + name = "tf-test-security-group-%03d" + description = "tf-test-security-group-descr" + security_group_names = ["${aws_security_group.bar.name}"] +} + +resource "aws_elasticache_cluster" "bar" { + cluster_id = "tf-test-%03d" + engine = "memcached" + node_type = "cache.m1.small" + num_cache_nodes = 3 + port = 11211 + parameter_group_name = "default.memcached1.4" + security_group_names = ["${aws_elasticache_security_group.bar.name}"] +} +` + +var testAccAWSElasticacheClusterConfigDecreasingNodes_update = ` +provider "aws" { + region = "us-east-1" +} +resource "aws_security_group" "bar" { + name = "tf-test-security-group-%03d" + description = "tf-test-security-group-descr" + ingress { + from_port = -1 + to_port = -1 + protocol = "icmp" + cidr_blocks = ["0.0.0.0/0"] + } +} + +resource "aws_elasticache_security_group" "bar" { + name = "tf-test-security-group-%03d" + description = "tf-test-security-group-descr" + security_group_names = ["${aws_security_group.bar.name}"] +} + +resource "aws_elasticache_cluster" "bar" { + cluster_id = "tf-test-%03d" + engine = "memcached" + node_type = "cache.m1.small" + num_cache_nodes = 1 + port = 11211 + parameter_group_name = "default.memcached1.4" + security_group_names = ["${aws_elasticache_security_group.bar.name}"] + apply_immediately = true +} +` + var testAccAWSElasticacheClusterInVPCConfig = fmt.Sprintf(` resource "aws_vpc" "foo" { cidr_block = "192.168.0.0/16" @@ -309,9 +437,74 @@ resource "aws_elasticache_cluster" "bar" { security_group_ids = ["${aws_security_group.bar.id}"] parameter_group_name = "default.redis2.8" notification_topic_arn = "${aws_sns_topic.topic_example.arn}" + availability_zone = "us-west-2a" } resource "aws_sns_topic" "topic_example" { name = "tf-ecache-cluster-test" } `, genRandInt(), genRandInt(), genRandInt()) + +var testAccAWSElasticacheClusterMultiAZInVPCConfig = fmt.Sprintf(` +resource "aws_vpc" "foo" { + cidr_block = "192.168.0.0/16" + tags { + Name = "tf-test" + } +} + +resource "aws_subnet" "foo" { + vpc_id = "${aws_vpc.foo.id}" + cidr_block = "192.168.0.0/20" + availability_zone = "us-west-2a" + tags { + Name = "tf-test-%03d" + } +} + +resource "aws_subnet" "bar" { + vpc_id = "${aws_vpc.foo.id}" + cidr_block = "192.168.16.0/20" + availability_zone = "us-west-2b" + tags { + Name = "tf-test-%03d" + } +} + +resource "aws_elasticache_subnet_group" "bar" { + name = "tf-test-cache-subnet-%03d" + description = "tf-test-cache-subnet-group-descr" + subnet_ids = [ + "${aws_subnet.foo.id}", + "${aws_subnet.bar.id}" + ] +} + +resource "aws_security_group" "bar" { + name = "tf-test-security-group-%03d" + description = "tf-test-security-group-descr" + vpc_id = "${aws_vpc.foo.id}" + ingress { + from_port = -1 + to_port = -1 + protocol = "icmp" + cidr_blocks = ["0.0.0.0/0"] + } +} + +resource "aws_elasticache_cluster" "bar" { + cluster_id = "tf-test-%03d" + engine = "memcached" + node_type = "cache.m1.small" + num_cache_nodes = 2 + port = 11211 + subnet_group_name = "${aws_elasticache_subnet_group.bar.name}" + security_group_ids = ["${aws_security_group.bar.id}"] + parameter_group_name = "default.memcached1.4" + az_mode = "cross-az" + availability_zones = [ + "us-west-2a", + "us-west-2b" + ] +} +`, genRandInt(), genRandInt(), genRandInt(), genRandInt(), genRandInt()) diff --git 
a/builtin/providers/aws/resource_aws_elasticache_parameter_group.go b/builtin/providers/aws/resource_aws_elasticache_parameter_group.go index c730ff94f9..43f9985a83 100644 --- a/builtin/providers/aws/resource_aws_elasticache_parameter_group.go +++ b/builtin/providers/aws/resource_aws_elasticache_parameter_group.go @@ -169,7 +169,7 @@ func resourceAwsElasticacheParameterGroupUpdate(d *schema.ResourceData, meta int func resourceAwsElasticacheParameterGroupDelete(d *schema.ResourceData, meta interface{}) error { stateConf := &resource.StateChangeConf{ Pending: []string{"pending"}, - Target: "destroyed", + Target: []string{"destroyed"}, Refresh: resourceAwsElasticacheParameterGroupDeleteRefreshFunc(d, meta), Timeout: 3 * time.Minute, MinTimeout: 1 * time.Second, diff --git a/builtin/providers/aws/resource_aws_elasticache_parameter_group_test.go b/builtin/providers/aws/resource_aws_elasticache_parameter_group_test.go index e61e64b3c7..d1df02c7f2 100644 --- a/builtin/providers/aws/resource_aws_elasticache_parameter_group_test.go +++ b/builtin/providers/aws/resource_aws_elasticache_parameter_group_test.go @@ -112,7 +112,7 @@ func testAccCheckAWSElasticacheParameterGroupDestroy(s *terraform.State) error { if !ok { return err } - if newerr.Code() != "InvalidCacheParameterGroup.NotFound" { + if newerr.Code() != "CacheParameterGroupNotFound" { return err } } diff --git a/builtin/providers/aws/resource_aws_elasticache_security_group_test.go b/builtin/providers/aws/resource_aws_elasticache_security_group_test.go index 87644242fb..452e7b896e 100644 --- a/builtin/providers/aws/resource_aws_elasticache_security_group_test.go +++ b/builtin/providers/aws/resource_aws_elasticache_security_group_test.go @@ -5,6 +5,7 @@ import ( "testing" "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/elasticache" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" @@ -36,12 +37,14 @@ func testAccCheckAWSElasticacheSecurityGroupDestroy(s *terraform.State) error { res, err := conn.DescribeCacheSecurityGroups(&elasticache.DescribeCacheSecurityGroupsInput{ CacheSecurityGroupName: aws.String(rs.Primary.ID), }) - if err != nil { - return err + if awserr, ok := err.(awserr.Error); ok && awserr.Code() == "CacheSecurityGroupNotFound" { + continue } + if len(res.CacheSecurityGroups) > 0 { - return fmt.Errorf("still exist.") + return fmt.Errorf("cache security group still exists") } + return err } return nil } @@ -69,6 +72,9 @@ func testAccCheckAWSElasticacheSecurityGroupExists(n string) resource.TestCheckF } var testAccAWSElasticacheSecurityGroupConfig = fmt.Sprintf(` +provider "aws" { + region = "us-east-1" +} resource "aws_security_group" "bar" { name = "tf-test-security-group-%03d" description = "tf-test-security-group-descr" diff --git a/builtin/providers/aws/resource_aws_elasticache_subnet_group_test.go b/builtin/providers/aws/resource_aws_elasticache_subnet_group_test.go index b3035c767c..55fe25cbca 100644 --- a/builtin/providers/aws/resource_aws_elasticache_subnet_group_test.go +++ b/builtin/providers/aws/resource_aws_elasticache_subnet_group_test.go @@ -5,6 +5,7 @@ import ( "testing" "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/elasticache" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" @@ -71,6 +72,10 @@ func testAccCheckAWSElasticacheSubnetGroupDestroy(s *terraform.State) error { CacheSubnetGroupName: 
aws.String(rs.Primary.ID), }) if err != nil { + // Verify the error is what we want + if awsErr, ok := err.(awserr.Error); ok && awsErr.Code() == "CacheSubnetGroupNotFoundFault" { + continue + } return err } if len(res.CacheSubnetGroups) > 0 { diff --git a/builtin/providers/aws/resource_aws_elasticsearch_domain.go b/builtin/providers/aws/resource_aws_elasticsearch_domain.go index 8f2d6c9c9f..c5666424b4 100644 --- a/builtin/providers/aws/resource_aws_elasticsearch_domain.go +++ b/builtin/providers/aws/resource_aws_elasticsearch_domain.go @@ -247,7 +247,9 @@ func resourceAwsElasticSearchDomainRead(d *schema.ResourceData, meta interface{} ds := out.DomainStatus - d.Set("access_policies", *ds.AccessPolicies) + if ds.AccessPolicies != nil && *ds.AccessPolicies != "" { + d.Set("access_policies", normalizeJson(*ds.AccessPolicies)) + } err = d.Set("advanced_options", pointersMapToStringList(ds.AdvancedOptions)) if err != nil { return err diff --git a/builtin/providers/aws/resource_aws_elasticsearch_domain_test.go b/builtin/providers/aws/resource_aws_elasticsearch_domain_test.go index dee675d0d0..e17c0c0e89 100644 --- a/builtin/providers/aws/resource_aws_elasticsearch_domain_test.go +++ b/builtin/providers/aws/resource_aws_elasticsearch_domain_test.go @@ -5,6 +5,7 @@ import ( "testing" "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" elasticsearch "github.com/aws/aws-sdk-go/service/elasticsearchservice" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" @@ -85,8 +86,12 @@ func testAccCheckESDomainDestroy(s *terraform.State) error { } _, err := conn.DescribeElasticsearchDomain(opts) + // Verify the error is what we want if err != nil { - return fmt.Errorf("Error describing ES domains: %q", err.Error()) + if awsErr, ok := err.(awserr.Error); ok && awsErr.Code() == "ResourceNotFoundException" { + continue + } + return err } } return nil diff --git a/builtin/providers/aws/resource_aws_elb.go b/builtin/providers/aws/resource_aws_elb.go index faf0b8addb..cfcca6aa94 100644 --- a/builtin/providers/aws/resource_aws_elb.go +++ b/builtin/providers/aws/resource_aws_elb.go @@ -4,7 +4,6 @@ import ( "bytes" "fmt" "log" - "regexp" "strings" "time" @@ -49,7 +48,6 @@ func resourceAwsElb() *schema.Resource { Type: schema.TypeSet, Elem: &schema.Schema{Type: schema.TypeString}, Optional: true, - ForceNew: true, Computed: true, Set: schema.HashString, }, @@ -85,7 +83,6 @@ func resourceAwsElb() *schema.Resource { Type: schema.TypeSet, Elem: &schema.Schema{Type: schema.TypeString}, Optional: true, - ForceNew: true, Computed: true, Set: schema.HashString, }, @@ -339,10 +336,10 @@ func resourceAwsElbRead(d *schema.ResourceData, meta interface{}) error { d.Set("dns_name", *lb.DNSName) d.Set("zone_id", *lb.CanonicalHostedZoneNameID) d.Set("internal", *lb.Scheme == "internal") - d.Set("availability_zones", lb.AvailabilityZones) + d.Set("availability_zones", flattenStringList(lb.AvailabilityZones)) d.Set("instances", flattenInstances(lb.Instances)) d.Set("listener", flattenListeners(lb.ListenerDescriptions)) - d.Set("security_groups", lb.SecurityGroups) + d.Set("security_groups", flattenStringList(lb.SecurityGroups)) if lb.SourceSecurityGroup != nil { d.Set("source_security_group", lb.SourceSecurityGroup.GroupName) @@ -350,15 +347,15 @@ func resourceAwsElbRead(d *schema.ResourceData, meta interface{}) error { var elbVpc string if lb.VPCId != nil { elbVpc = *lb.VPCId - } - sgId, err := sourceSGIdByName(meta, *lb.SourceSecurityGroup.GroupName, elbVpc) - if err != 
nil { - return fmt.Errorf("[WARN] Error looking up ELB Security Group ID: %s", err) - } else { - d.Set("source_security_group_id", sgId) + sgId, err := sourceSGIdByName(meta, *lb.SourceSecurityGroup.GroupName, elbVpc) + if err != nil { + return fmt.Errorf("[WARN] Error looking up ELB Security Group ID: %s", err) + } else { + d.Set("source_security_group_id", sgId) + } } } - d.Set("subnets", lb.Subnets) + d.Set("subnets", flattenStringList(lb.Subnets)) d.Set("idle_timeout", lbAttrs.ConnectionSettings.IdleTimeout) d.Set("connection_draining", lbAttrs.ConnectionDraining.Enabled) d.Set("connection_draining_timeout", lbAttrs.ConnectionDraining.Timeout) @@ -600,6 +597,80 @@ func resourceAwsElbUpdate(d *schema.ResourceData, meta interface{}) error { d.SetPartial("security_groups") } + if d.HasChange("availability_zones") { + o, n := d.GetChange("availability_zones") + os := o.(*schema.Set) + ns := n.(*schema.Set) + + removed := expandStringList(os.Difference(ns).List()) + added := expandStringList(ns.Difference(os).List()) + + if len(added) > 0 { + enableOpts := &elb.EnableAvailabilityZonesForLoadBalancerInput{ + LoadBalancerName: aws.String(d.Id()), + AvailabilityZones: added, + } + + log.Printf("[DEBUG] ELB enable availability zones opts: %s", enableOpts) + _, err := elbconn.EnableAvailabilityZonesForLoadBalancer(enableOpts) + if err != nil { + return fmt.Errorf("Failure enabling ELB availability zones: %s", err) + } + } + + if len(removed) > 0 { + disableOpts := &elb.DisableAvailabilityZonesForLoadBalancerInput{ + LoadBalancerName: aws.String(d.Id()), + AvailabilityZones: removed, + } + + log.Printf("[DEBUG] ELB disable availability zones opts: %s", disableOpts) + _, err := elbconn.DisableAvailabilityZonesForLoadBalancer(disableOpts) + if err != nil { + return fmt.Errorf("Failure disabling ELB availability zones: %s", err) + } + } + + d.SetPartial("availability_zones") + } + + if d.HasChange("subnets") { + o, n := d.GetChange("subnets") + os := o.(*schema.Set) + ns := n.(*schema.Set) + + removed := expandStringList(os.Difference(ns).List()) + added := expandStringList(ns.Difference(os).List()) + + if len(added) > 0 { + attachOpts := &elb.AttachLoadBalancerToSubnetsInput{ + LoadBalancerName: aws.String(d.Id()), + Subnets: added, + } + + log.Printf("[DEBUG] ELB attach subnets opts: %s", attachOpts) + _, err := elbconn.AttachLoadBalancerToSubnets(attachOpts) + if err != nil { + return fmt.Errorf("Failure adding ELB subnets: %s", err) + } + } + + if len(removed) > 0 { + detachOpts := &elb.DetachLoadBalancerFromSubnetsInput{ + LoadBalancerName: aws.String(d.Id()), + Subnets: removed, + } + + log.Printf("[DEBUG] ELB detach subnets opts: %s", detachOpts) + _, err := elbconn.DetachLoadBalancerFromSubnets(detachOpts) + if err != nil { + return fmt.Errorf("Failure removing ELB subnets: %s", err) + } + } + + d.SetPartial("subnets") + } + if err := setTagsELB(elbconn, d); err != nil { return err } @@ -673,29 +744,6 @@ func isLoadBalancerNotFound(err error) bool { return ok && elberr.Code() == "LoadBalancerNotFound" } -func validateElbName(v interface{}, k string) (ws []string, errors []error) { - value := v.(string) - if !regexp.MustCompile(`^[0-9A-Za-z-]+$`).MatchString(value) { - errors = append(errors, fmt.Errorf( - "only alphanumeric characters and hyphens allowed in %q: %q", - k, value)) - } - if len(value) > 32 { - errors = append(errors, fmt.Errorf( - "%q cannot be longer than 32 characters: %q", k, value)) - } - if regexp.MustCompile(`^-`).MatchString(value) { - errors = append(errors, fmt.Errorf( 
- "%q cannot begin with a hyphen: %q", k, value)) - } - if regexp.MustCompile(`-$`).MatchString(value) { - errors = append(errors, fmt.Errorf( - "%q cannot end with a hyphen: %q", k, value)) - } - return - -} - func sourceSGIdByName(meta interface{}, sg, vpcId string) (string, error) { conn := meta.(*AWSClient).ec2conn var filters []*ec2.Filter diff --git a/builtin/providers/aws/resource_aws_elb_test.go b/builtin/providers/aws/resource_aws_elb_test.go index 6ccc5cd66f..f2d27c515c 100644 --- a/builtin/providers/aws/resource_aws_elb_test.go +++ b/builtin/providers/aws/resource_aws_elb_test.go @@ -2,22 +2,23 @@ package aws import ( "fmt" - "os" + "math/rand" "reflect" "regexp" "sort" "testing" + "time" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/elb" + "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" ) func TestAccAWSELB_basic(t *testing.T) { var conf elb.LoadBalancerDescription - ssl_certificate_id := os.Getenv("AWS_SSL_CERTIFICATE_ID") resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -30,19 +31,20 @@ func TestAccAWSELB_basic(t *testing.T) { testAccCheckAWSELBExists("aws_elb.bar", &conf), testAccCheckAWSELBAttributes(&conf), resource.TestCheckResourceAttr( - "aws_elb.bar", "name", "foobar-terraform-test"), + "aws_elb.bar", "availability_zones.#", "3"), resource.TestCheckResourceAttr( "aws_elb.bar", "availability_zones.2487133097", "us-west-2a"), resource.TestCheckResourceAttr( "aws_elb.bar", "availability_zones.221770259", "us-west-2b"), resource.TestCheckResourceAttr( "aws_elb.bar", "availability_zones.2050015877", "us-west-2c"), + resource.TestCheckResourceAttr( + "aws_elb.bar", "subnets.#", "3"), + // NOTE: Subnet IDs are different across AWS accounts and cannot be checked. 
resource.TestCheckResourceAttr( "aws_elb.bar", "listener.206423021.instance_port", "8000"), resource.TestCheckResourceAttr( "aws_elb.bar", "listener.206423021.instance_protocol", "http"), - resource.TestCheckResourceAttr( - "aws_elb.bar", "listener.206423021.ssl_certificate_id", ssl_certificate_id), resource.TestCheckResourceAttr( "aws_elb.bar", "listener.206423021.lb_port", "80"), resource.TestCheckResourceAttr( @@ -58,17 +60,20 @@ func TestAccAWSELB_basic(t *testing.T) { func TestAccAWSELB_fullCharacterRange(t *testing.T) { var conf elb.LoadBalancerDescription + lbName := fmt.Sprintf("Tf-%d", + rand.New(rand.NewSource(time.Now().UnixNano())).Int()) + resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSELBDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccAWSELBFullRangeOfCharacters, + Config: fmt.Sprintf(testAccAWSELBFullRangeOfCharacters, lbName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSELBExists("aws_elb.foo", &conf), resource.TestCheckResourceAttr( - "aws_elb.foo", "name", "FoobarTerraform-test123"), + "aws_elb.foo", "name", lbName), ), }, }, @@ -87,8 +92,6 @@ func TestAccAWSELB_AccessLogs(t *testing.T) { Config: testAccAWSELBAccessLogs, Check: resource.ComposeTestCheckFunc( testAccCheckAWSELBExists("aws_elb.foo", &conf), - resource.TestCheckResourceAttr( - "aws_elb.foo", "name", "FoobarTerraform-test123"), ), }, @@ -96,8 +99,6 @@ func TestAccAWSELB_AccessLogs(t *testing.T) { Config: testAccAWSELBAccessLogsOn, Check: resource.ComposeTestCheckFunc( testAccCheckAWSELBExists("aws_elb.foo", &conf), - resource.TestCheckResourceAttr( - "aws_elb.foo", "name", "FoobarTerraform-test123"), resource.TestCheckResourceAttr( "aws_elb.foo", "access_logs.#", "1"), resource.TestCheckResourceAttr( @@ -111,8 +112,6 @@ func TestAccAWSELB_AccessLogs(t *testing.T) { Config: testAccAWSELBAccessLogs, Check: resource.ComposeTestCheckFunc( testAccCheckAWSELBExists("aws_elb.foo", &conf), - resource.TestCheckResourceAttr( - "aws_elb.foo", "name", "FoobarTerraform-test123"), resource.TestCheckResourceAttr( "aws_elb.foo", "access_logs.#", "0"), ), @@ -142,6 +141,45 @@ func TestAccAWSELB_generatedName(t *testing.T) { }) } +func TestAccAWSELB_availabilityZones(t *testing.T) { + var conf elb.LoadBalancerDescription + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSELBDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAWSELBConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSELBExists("aws_elb.bar", &conf), + resource.TestCheckResourceAttr( + "aws_elb.bar", "availability_zones.#", "3"), + resource.TestCheckResourceAttr( + "aws_elb.bar", "availability_zones.2487133097", "us-west-2a"), + resource.TestCheckResourceAttr( + "aws_elb.bar", "availability_zones.221770259", "us-west-2b"), + resource.TestCheckResourceAttr( + "aws_elb.bar", "availability_zones.2050015877", "us-west-2c"), + ), + }, + + resource.TestStep{ + Config: testAccAWSELBConfig_AvailabilityZonesUpdate, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSELBExists("aws_elb.bar", &conf), + resource.TestCheckResourceAttr( + "aws_elb.bar", "availability_zones.#", "2"), + resource.TestCheckResourceAttr( + "aws_elb.bar", "availability_zones.2487133097", "us-west-2a"), + resource.TestCheckResourceAttr( + "aws_elb.bar", "availability_zones.221770259", "us-west-2b"), + ), + }, + }, + }) +} + func 
TestAccAWSELB_tags(t *testing.T) { var conf elb.LoadBalancerDescription var td elb.TagDescription @@ -156,8 +194,6 @@ func TestAccAWSELB_tags(t *testing.T) { Check: resource.ComposeTestCheckFunc( testAccCheckAWSELBExists("aws_elb.bar", &conf), testAccCheckAWSELBAttributes(&conf), - resource.TestCheckResourceAttr( - "aws_elb.bar", "name", "foobar-terraform-test"), testAccLoadTags(&conf, &td), testAccCheckELBTags(&td.Tags, "bar", "baz"), ), @@ -168,8 +204,6 @@ func TestAccAWSELB_tags(t *testing.T) { Check: resource.ComposeTestCheckFunc( testAccCheckAWSELBExists("aws_elb.bar", &conf), testAccCheckAWSELBAttributes(&conf), - resource.TestCheckResourceAttr( - "aws_elb.bar", "name", "foobar-terraform-test"), testAccLoadTags(&conf, &td), testAccCheckELBTags(&td.Tags, "foo", "bar"), testAccCheckELBTags(&td.Tags, "new", "type"), @@ -196,7 +230,8 @@ func TestAccAWSELB_iam_server_cert(t *testing.T) { CheckDestroy: testAccCheckAWSELBDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccELBIAMServerCertConfig, + Config: testAccELBIAMServerCertConfig( + fmt.Sprintf("tf-acctest-%s", acctest.RandString(10))), Check: resource.ComposeTestCheckFunc( testAccCheckAWSELBExists("aws_elb.bar", &conf), testCheck, @@ -571,7 +606,7 @@ func testAccCheckAWSELBDestroy(s *terraform.State) error { return err } - if providerErr.Code() != "InvalidLoadBalancerName.NotFound" { + if providerErr.Code() != "LoadBalancerNotFound" { return fmt.Errorf("Unexpected error: %s", err) } } @@ -591,10 +626,6 @@ func testAccCheckAWSELBAttributes(conf *elb.LoadBalancerDescription) resource.Te return fmt.Errorf("bad availability_zones") } - if *conf.LoadBalancerName != "foobar-terraform-test" { - return fmt.Errorf("bad name") - } - l := elb.Listener{ InstancePort: aws.Int64(int64(8000)), InstanceProtocol: aws.String("HTTP"), @@ -629,10 +660,6 @@ func testAccCheckAWSELBAttributesHealthCheck(conf *elb.LoadBalancerDescription) return fmt.Errorf("bad availability_zones") } - if *conf.LoadBalancerName != "foobar-terraform-test" { - return fmt.Errorf("bad name") - } - check := &elb.HealthCheck{ Timeout: aws.Int64(int64(30)), UnhealthyThreshold: aws.Int64(int64(5)), @@ -699,7 +726,6 @@ func testAccCheckAWSELBExists(n string, res *elb.LoadBalancerDescription) resour const testAccAWSELBConfig = ` resource "aws_elb" "bar" { - name = "foobar-terraform-test" availability_zones = ["us-west-2a", "us-west-2b", "us-west-2c"] listener { @@ -720,7 +746,7 @@ resource "aws_elb" "bar" { const testAccAWSELBFullRangeOfCharacters = ` resource "aws_elb" "foo" { - name = "FoobarTerraform-test123" + name = "%s" availability_zones = ["us-west-2a", "us-west-2b", "us-west-2c"] listener { @@ -734,7 +760,6 @@ resource "aws_elb" "foo" { const testAccAWSELBAccessLogs = ` resource "aws_elb" "foo" { - name = "FoobarTerraform-test123" availability_zones = ["us-west-2a", "us-west-2b", "us-west-2c"] listener { @@ -773,7 +798,6 @@ EOF } resource "aws_elb" "foo" { - name = "FoobarTerraform-test123" availability_zones = ["us-west-2a", "us-west-2b", "us-west-2c"] listener { @@ -803,9 +827,21 @@ resource "aws_elb" "foo" { } ` +const testAccAWSELBConfig_AvailabilityZonesUpdate = ` +resource "aws_elb" "bar" { + availability_zones = ["us-west-2a", "us-west-2b"] + + listener { + instance_port = 8000 + instance_protocol = "http" + lb_port = 80 + lb_protocol = "http" + } +} +` + const testAccAWSELBConfig_TagUpdate = ` resource "aws_elb" "bar" { - name = "foobar-terraform-test" availability_zones = ["us-west-2a", "us-west-2b", "us-west-2c"] listener { @@ -826,7 +862,6 @@ 
resource "aws_elb" "bar" { const testAccAWSELBConfigNewInstance = ` resource "aws_elb" "bar" { - name = "foobar-terraform-test" availability_zones = ["us-west-2a", "us-west-2b", "us-west-2c"] listener { @@ -848,7 +883,6 @@ resource "aws_instance" "foo" { const testAccAWSELBConfigListenerSSLCertificateId = ` resource "aws_elb" "bar" { - name = "foobar-terraform-test" availability_zones = ["us-west-2a"] listener { @@ -863,7 +897,6 @@ resource "aws_elb" "bar" { const testAccAWSELBConfigHealthCheck = ` resource "aws_elb" "bar" { - name = "foobar-terraform-test" availability_zones = ["us-west-2a", "us-west-2b", "us-west-2c"] listener { @@ -885,7 +918,6 @@ resource "aws_elb" "bar" { const testAccAWSELBConfigHealthCheck_update = ` resource "aws_elb" "bar" { - name = "foobar-terraform-test" availability_zones = ["us-west-2a"] listener { @@ -907,7 +939,6 @@ resource "aws_elb" "bar" { const testAccAWSELBConfigListener_update = ` resource "aws_elb" "bar" { - name = "foobar-terraform-test" availability_zones = ["us-west-2a", "us-west-2b", "us-west-2c"] listener { @@ -921,7 +952,6 @@ resource "aws_elb" "bar" { const testAccAWSELBConfigIdleTimeout = ` resource "aws_elb" "bar" { - name = "foobar-terraform-test" availability_zones = ["us-west-2a"] listener { @@ -937,7 +967,6 @@ resource "aws_elb" "bar" { const testAccAWSELBConfigIdleTimeout_update = ` resource "aws_elb" "bar" { - name = "foobar-terraform-test" availability_zones = ["us-west-2a"] listener { @@ -953,7 +982,6 @@ resource "aws_elb" "bar" { const testAccAWSELBConfigConnectionDraining = ` resource "aws_elb" "bar" { - name = "foobar-terraform-test" availability_zones = ["us-west-2a"] listener { @@ -970,7 +998,6 @@ resource "aws_elb" "bar" { const testAccAWSELBConfigConnectionDraining_update_timeout = ` resource "aws_elb" "bar" { - name = "foobar-terraform-test" availability_zones = ["us-west-2a"] listener { @@ -987,7 +1014,6 @@ resource "aws_elb" "bar" { const testAccAWSELBConfigConnectionDraining_update_disable = ` resource "aws_elb" "bar" { - name = "foobar-terraform-test" availability_zones = ["us-west-2a"] listener { @@ -1003,7 +1029,6 @@ resource "aws_elb" "bar" { const testAccAWSELBConfigSecurityGroups = ` resource "aws_elb" "bar" { - name = "foobar-terraform-test" availability_zones = ["us-west-2a", "us-west-2b", "us-west-2c"] listener { @@ -1017,9 +1042,6 @@ resource "aws_elb" "bar" { } resource "aws_security_group" "bar" { - name = "terraform-elb-acceptance-test" - description = "Used in the terraform acceptance tests for the elb resource" - ingress { protocol = "tcp" from_port = 80 @@ -1031,9 +1053,10 @@ resource "aws_security_group" "bar" { // This IAM Server config is lifted from // builtin/providers/aws/resource_aws_iam_server_certificate_test.go -var testAccELBIAMServerCertConfig = ` +func testAccELBIAMServerCertConfig(certName string) string { + return fmt.Sprintf(` resource "aws_iam_server_certificate" "test_cert" { - name = "terraform-test-cert" + name = "%s" certificate_body = < 0 { - return fmt.Errorf("Expected all resources to be gone, but found: %#v", - s.RootModule().Resources) - } + conn := testAccProvider.Meta().(*AWSClient).glacierconn + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_glacier_vault" { + continue + } + + input := &glacier.DescribeVaultInput{ + VaultName: aws.String(rs.Primary.ID), + } + if _, err := conn.DescribeVault(input); err != nil { + // Verify the error is what we want + if ae, ok := err.(awserr.Error); ok && ae.Code() == "ResourceNotFoundException" { + continue + } + + return 
err + } + return fmt.Errorf("still exists") + } return nil } diff --git a/builtin/providers/aws/resource_aws_iam_group_membership_test.go b/builtin/providers/aws/resource_aws_iam_group_membership_test.go index 26076dd9b7..63bef4dac7 100644 --- a/builtin/providers/aws/resource_aws_iam_group_membership_test.go +++ b/builtin/providers/aws/resource_aws_iam_group_membership_test.go @@ -5,6 +5,7 @@ import ( "testing" "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/iam" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" @@ -55,23 +56,18 @@ func testAccCheckAWSGroupMembershipDestroy(s *terraform.State) error { group := rs.Primary.Attributes["group"] - resp, err := conn.GetGroup(&iam.GetGroupInput{ + _, err := conn.GetGroup(&iam.GetGroupInput{ GroupName: aws.String(group), }) if err != nil { - // might error here + // Verify the error is what we want + if ae, ok := err.(awserr.Error); ok && ae.Code() == "NoSuchEntity" { + continue + } return err } - users := []string{"test-user", "test-user-two", "test-user-three"} - for _, u := range resp.Users { - for _, i := range users { - if i == *u.UserName { - return fmt.Errorf("Error: User (%s) still a member of Group (%s)", i, *resp.Group.GroupName) - } - } - } - + return fmt.Errorf("still exists") } return nil diff --git a/builtin/providers/aws/resource_aws_iam_group_policy_test.go b/builtin/providers/aws/resource_aws_iam_group_policy_test.go index ac7a3baaa0..ccf35310be 100644 --- a/builtin/providers/aws/resource_aws_iam_group_policy_test.go +++ b/builtin/providers/aws/resource_aws_iam_group_policy_test.go @@ -5,6 +5,7 @@ import ( "testing" "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/iam" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" @@ -39,8 +40,30 @@ func TestAccAWSIAMGroupPolicy_basic(t *testing.T) { } func testAccCheckIAMGroupPolicyDestroy(s *terraform.State) error { - if len(s.RootModule().Resources) > 0 { - return fmt.Errorf("Expected all resources to be gone, but found: %#v", s.RootModule().Resources) + conn := testAccProvider.Meta().(*AWSClient).iamconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_iam_group_policy" { + continue + } + + group, name := resourceAwsIamGroupPolicyParseId(rs.Primary.ID) + + request := &iam.GetGroupPolicyInput{ + PolicyName: aws.String(name), + GroupName: aws.String(group), + } + + _, err := conn.GetGroupPolicy(request) + if err != nil { + // Verify the error is what we want + if ae, ok := err.(awserr.Error); ok && ae.Code() == "NoSuchEntity" { + continue + } + return err + } + + return fmt.Errorf("still exists") } return nil diff --git a/builtin/providers/aws/resource_aws_iam_role_policy_test.go b/builtin/providers/aws/resource_aws_iam_role_policy_test.go index 219c676ebc..3f3256435f 100644 --- a/builtin/providers/aws/resource_aws_iam_role_policy_test.go +++ b/builtin/providers/aws/resource_aws_iam_role_policy_test.go @@ -5,6 +5,7 @@ import ( "testing" "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/iam" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" @@ -39,8 +40,33 @@ func TestAccAWSIAMRolePolicy_basic(t *testing.T) { } func testAccCheckIAMRolePolicyDestroy(s *terraform.State) error { - if len(s.RootModule().Resources) > 0 { - return fmt.Errorf("Expected all resources to be gone, 
but found: %#v", s.RootModule().Resources) + iamconn := testAccProvider.Meta().(*AWSClient).iamconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_iam_role_policy" { + continue + } + + role, name := resourceAwsIamRolePolicyParseId(rs.Primary.ID) + + request := &iam.GetRolePolicyInput{ + PolicyName: aws.String(name), + RoleName: aws.String(role), + } + + var err error + getResp, err := iamconn.GetRolePolicy(request) + if err != nil { + if iamerr, ok := err.(awserr.Error); ok && iamerr.Code() == "NoSuchEntity" { + // none found, that's good + return nil + } + return fmt.Errorf("Error reading IAM policy %s from role %s: %s", name, role, err) + } + + if getResp != nil { + return fmt.Errorf("Found IAM Role, expected none: %s", getResp) + } } return nil diff --git a/builtin/providers/aws/resource_aws_iam_saml_provider_test.go b/builtin/providers/aws/resource_aws_iam_saml_provider_test.go index 63ed395883..4118a062ae 100644 --- a/builtin/providers/aws/resource_aws_iam_saml_provider_test.go +++ b/builtin/providers/aws/resource_aws_iam_saml_provider_test.go @@ -5,6 +5,7 @@ import ( "testing" "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/iam" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" @@ -33,8 +34,28 @@ func TestAccAWSIAMSamlProvider_basic(t *testing.T) { } func testAccCheckIAMSamlProviderDestroy(s *terraform.State) error { - if len(s.RootModule().Resources) > 0 { - return fmt.Errorf("Expected all resources to be gone, but found: %#v", s.RootModule().Resources) + iamconn := testAccProvider.Meta().(*AWSClient).iamconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_iam_saml_provider" { + continue + } + + input := &iam.GetSAMLProviderInput{ + SAMLProviderArn: aws.String(rs.Primary.ID), + } + out, err := iamconn.GetSAMLProvider(input) + if err != nil { + if iamerr, ok := err.(awserr.Error); ok && iamerr.Code() == "NoSuchEntity" { + // none found, that's good + return nil + } + return fmt.Errorf("Error reading IAM SAML Provider, out: %s, err: %s", out, err) + } + + if out != nil { + return fmt.Errorf("Found IAM SAML Provider, expected none: %s", out) + } } return nil diff --git a/builtin/providers/aws/resource_aws_iam_user_policy_test.go b/builtin/providers/aws/resource_aws_iam_user_policy_test.go index f5c5201808..019d82506a 100644 --- a/builtin/providers/aws/resource_aws_iam_user_policy_test.go +++ b/builtin/providers/aws/resource_aws_iam_user_policy_test.go @@ -5,6 +5,7 @@ import ( "testing" "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/iam" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" @@ -39,8 +40,33 @@ func TestAccAWSIAMUserPolicy_basic(t *testing.T) { } func testAccCheckIAMUserPolicyDestroy(s *terraform.State) error { - if len(s.RootModule().Resources) > 0 { - return fmt.Errorf("Expected all resources to be gone, but found: %#v", s.RootModule().Resources) + iamconn := testAccProvider.Meta().(*AWSClient).iamconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_iam_user_policy" { + continue + } + + role, name := resourceAwsIamRolePolicyParseId(rs.Primary.ID) + + request := &iam.GetRolePolicyInput{ + PolicyName: aws.String(name), + RoleName: aws.String(role), + } + + var err error + getResp, err := iamconn.GetRolePolicy(request) + if err != nil { + if iamerr, ok := err.(awserr.Error); ok && iamerr.Code() == 
"NoSuchEntity" { + // none found, that's good + return nil + } + return fmt.Errorf("Error reading IAM policy %s from role %s: %s", name, role, err) + } + + if getResp != nil { + return fmt.Errorf("Found IAM Role, expected none: %s", getResp) + } } return nil diff --git a/builtin/providers/aws/resource_aws_instance.go b/builtin/providers/aws/resource_aws_instance.go index d096a45d6f..6685de9653 100644 --- a/builtin/providers/aws/resource_aws_instance.go +++ b/builtin/providers/aws/resource_aws_instance.go @@ -132,6 +132,11 @@ func resourceAwsInstance() *schema.Resource { Computed: true, }, + "instance_state": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + "private_dns": &schema.Schema{ Type: schema.TypeString, Computed: true, @@ -140,6 +145,7 @@ func resourceAwsInstance() *schema.Resource { "ebs_optimized": &schema.Schema{ Type: schema.TypeBool, Optional: true, + ForceNew: true, }, "disable_api_termination": &schema.Schema{ @@ -364,12 +370,22 @@ func resourceAwsInstanceCreate(d *schema.ResourceData, meta interface{}) error { time.Sleep(2 * time.Second) continue } + + // Warn if the AWS Error involves group ids, to help identify situation + // where a user uses group ids in security_groups for the Default VPC. + // See https://github.com/hashicorp/terraform/issues/3798 + if awsErr.Code() == "InvalidParameterValue" && strings.Contains(awsErr.Message(), "groupId is invalid") { + return fmt.Errorf("Error launching instance, possible mismatch of Security Group IDs and Names. See AWS Instance docs here: %s.\n\n\tAWS Error: %s", "https://terraform.io/docs/providers/aws/r/instance.html", awsErr.Message()) + } } break } if err != nil { return fmt.Errorf("Error launching source instance: %s", err) } + if runResp == nil || len(runResp.Instances) == 0 { + return fmt.Errorf("Error launching source instance: no instances returned in response") + } instance := runResp.Instances[0] log.Printf("[INFO] Instance ID: %s", *instance.InstanceId) @@ -385,7 +401,7 @@ func resourceAwsInstanceCreate(d *schema.ResourceData, meta interface{}) error { stateConf := &resource.StateChangeConf{ Pending: []string{"pending"}, - Target: "running", + Target: []string{"running"}, Refresh: InstanceStateRefreshFunc(conn, *instance.InstanceId), Timeout: 10 * time.Minute, Delay: 10 * time.Second, @@ -444,10 +460,14 @@ func resourceAwsInstanceRead(d *schema.ResourceData, meta interface{}) error { instance := resp.Reservations[0].Instances[0] - // If the instance is terminated, then it is gone - if *instance.State.Name == "terminated" { - d.SetId("") - return nil + if instance.State != nil { + // If the instance is terminated, then it is gone + if *instance.State.Name == "terminated" { + d.SetId("") + return nil + } + + d.Set("instance_state", instance.State.Name) } if instance.Placement != nil { @@ -1062,7 +1082,7 @@ func awsTerminateInstance(conn *ec2.EC2, id string) error { stateConf := &resource.StateChangeConf{ Pending: []string{"pending", "running", "shutting-down", "stopped", "stopping"}, - Target: "terminated", + Target: []string{"terminated"}, Refresh: InstanceStateRefreshFunc(conn, id), Timeout: 10 * time.Minute, Delay: 10 * time.Second, @@ -1082,5 +1102,6 @@ func iamInstanceProfileArnToName(ip *ec2.IamInstanceProfile) string { if ip == nil || ip.Arn == nil { return "" } - return strings.Split(*ip.Arn, "/")[1] + parts := strings.Split(*ip.Arn, "/") + return parts[len(parts)-1] } diff --git a/builtin/providers/aws/resource_aws_instance_migrate.go b/builtin/providers/aws/resource_aws_instance_migrate.go 
index 5d7075f759..28a256b7b4 100644 --- a/builtin/providers/aws/resource_aws_instance_migrate.go +++ b/builtin/providers/aws/resource_aws_instance_migrate.go @@ -19,12 +19,10 @@ func resourceAwsInstanceMigrateState( default: return is, fmt.Errorf("Unexpected schema version: %d", v) } - - return is, nil } func migrateStateV0toV1(is *terraform.InstanceState) (*terraform.InstanceState, error) { - if is.Empty() { + if is.Empty() || is.Attributes == nil { log.Println("[DEBUG] Empty InstanceState; nothing to migrate.") return is, nil } diff --git a/builtin/providers/aws/resource_aws_instance_test.go b/builtin/providers/aws/resource_aws_instance_test.go index 3224f9b5e1..11df73a8c8 100644 --- a/builtin/providers/aws/resource_aws_instance_test.go +++ b/builtin/providers/aws/resource_aws_instance_test.go @@ -112,22 +112,22 @@ func TestAccAWSInstance_blockDevices(t *testing.T) { // Check if the root block device exists. if _, ok := blockDevices["/dev/sda1"]; !ok { - fmt.Errorf("block device doesn't exist: /dev/sda1") + return fmt.Errorf("block device doesn't exist: /dev/sda1") } // Check if the secondary block device exists. if _, ok := blockDevices["/dev/sdb"]; !ok { - fmt.Errorf("block device doesn't exist: /dev/sdb") + return fmt.Errorf("block device doesn't exist: /dev/sdb") } // Check if the third block device exists. if _, ok := blockDevices["/dev/sdc"]; !ok { - fmt.Errorf("block device doesn't exist: /dev/sdc") + return fmt.Errorf("block device doesn't exist: /dev/sdc") } // Check if the encrypted block device exists if _, ok := blockDevices["/dev/sdd"]; !ok { - fmt.Errorf("block device doesn't exist: /dev/sdd") + return fmt.Errorf("block device doesn't exist: /dev/sdd") } return nil @@ -513,6 +513,41 @@ func TestAccAWSInstance_rootBlockDeviceMismatch(t *testing.T) { }) } +// This test reproduces the bug here: +// https://github.com/hashicorp/terraform/issues/1752 +// +// I wish there were a way to exercise resources built with helper.Schema in a +// unit context, in which case this test could be moved there, but for now this +// will cover the bugfix. +// +// The following triggers "diffs didn't match during apply" without the fix to +// set NewRemoved on the .# field when it changes to 0.
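+// +// (A rough sketch of the failing shape, assuming the map internals of this era rather than quoting the fix itself: the tags map drifts from "tags.#" = "1" plus "tags.Drift" = "Happens" down to no entries while a ForceNew change is also pending, and marking the "tags.#" diff as NewRemoved is what lets the planned diff and the apply-time diff agree when the count drops to 0.)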
+func TestAccAWSInstance_forceNewAndTagsDrift(t *testing.T) { + var v ec2.Instance + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckInstanceDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccInstanceConfigForceNewAndTagsDrift, + Check: resource.ComposeTestCheckFunc( + testAccCheckInstanceExists("aws_instance.foo", &v), + driftTags(&v), + ), + ExpectNonEmptyPlan: true, + }, + resource.TestStep{ + Config: testAccInstanceConfigForceNewAndTagsDrift_Update, + Check: resource.ComposeTestCheckFunc( + testAccCheckInstanceExists("aws_instance.foo", &v), + ), + }, + }, + }) +} + func testAccCheckInstanceDestroy(s *terraform.State) error { return testAccCheckInstanceDestroyWithProvider(s, testAccProvider) } @@ -540,26 +575,25 @@ func testAccCheckInstanceDestroyWithProvider(s *terraform.State, provider *schem } // Try to find the resource - var err error resp, err := conn.DescribeInstances(&ec2.DescribeInstancesInput{ InstanceIds: []*string{aws.String(rs.Primary.ID)}, }) if err == nil { - if len(resp.Reservations) > 0 { - return fmt.Errorf("still exist.") + for _, r := range resp.Reservations { + for _, i := range r.Instances { + if i.State != nil && *i.State.Name != "terminated" { + return fmt.Errorf("Found unterminated instance: %s", i) + } + } + } + // Every instance is terminated; move on to the next resource rather + // than falling through, which would return nil before checking the + // remaining resources. + continue } - - return nil } // Verify the error is what we want - ec2err, ok := err.(awserr.Error) - if !ok { - return err - } - if ec2err.Code() != "InvalidInstanceID.NotFound" { - return err + if ae, ok := err.(awserr.Error); ok && ae.Code() == "InvalidInstanceID.NotFound" { + continue } + + return err } return nil @@ -623,6 +657,22 @@ func TestInstanceTenancySchema(t *testing.T) { } } +func driftTags(instance *ec2.Instance) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).ec2conn + _, err := conn.CreateTags(&ec2.CreateTagsInput{ + Resources: []*string{instance.InstanceId}, + Tags: []*ec2.Tag{ + &ec2.Tag{ + Key: aws.String("Drift"), + Value: aws.String("Happens"), + }, + }, + }) + return err + } +} + const testAccInstanceConfig_pre = ` resource "aws_security_group" "tf_test_foo" { name = "tf_test_foo" @@ -989,3 +1039,37 @@ resource "aws_instance" "foo" { } } ` + +const testAccInstanceConfigForceNewAndTagsDrift = ` +resource "aws_vpc" "foo" { + cidr_block = "10.1.0.0/16" +} + +resource "aws_subnet" "foo" { + cidr_block = "10.1.1.0/24" + vpc_id = "${aws_vpc.foo.id}" +} + +resource "aws_instance" "foo" { + ami = "ami-22b9a343" + instance_type = "t2.nano" + subnet_id = "${aws_subnet.foo.id}" +} +` + +const testAccInstanceConfigForceNewAndTagsDrift_Update = ` +resource "aws_vpc" "foo" { + cidr_block = "10.1.0.0/16" +} + +resource "aws_subnet" "foo" { + cidr_block = "10.1.1.0/24" + vpc_id = "${aws_vpc.foo.id}" +} + +resource "aws_instance" "foo" { + ami = "ami-22b9a343" + instance_type = "t2.micro" + subnet_id = "${aws_subnet.foo.id}" +} +` diff --git a/builtin/providers/aws/resource_aws_internet_gateway.go b/builtin/providers/aws/resource_aws_internet_gateway.go index 76d1ac6ada..fb3c0b58b4 100644 --- a/builtin/providers/aws/resource_aws_internet_gateway.go +++ b/builtin/providers/aws/resource_aws_internet_gateway.go @@ -170,7 +170,7 @@ func resourceAwsInternetGatewayAttach(d *schema.ResourceData, meta interface{}) log.Printf("[DEBUG] Waiting for internet gateway (%s) to attach", d.Id()) stateConf := &resource.StateChangeConf{ Pending: []string{"detached", "attaching"}, - Target:
"available", + Target: []string{"available"}, Refresh: IGAttachStateRefreshFunc(conn, d.Id(), "available"), Timeout: 1 * time.Minute, } @@ -205,7 +205,7 @@ func resourceAwsInternetGatewayDetach(d *schema.ResourceData, meta interface{}) log.Printf("[DEBUG] Waiting for internet gateway (%s) to detach", d.Id()) stateConf := &resource.StateChangeConf{ Pending: []string{"detaching"}, - Target: "detached", + Target: []string{"detached"}, Refresh: detachIGStateRefreshFunc(conn, d.Id(), vpcID.(string)), Timeout: 5 * time.Minute, Delay: 10 * time.Second, diff --git a/builtin/providers/aws/resource_aws_key_pair_migrate.go b/builtin/providers/aws/resource_aws_key_pair_migrate.go index 0d56123aab..c937ac360f 100644 --- a/builtin/providers/aws/resource_aws_key_pair_migrate.go +++ b/builtin/providers/aws/resource_aws_key_pair_migrate.go @@ -17,8 +17,6 @@ func resourceAwsKeyPairMigrateState( default: return is, fmt.Errorf("Unexpected schema version: %d", v) } - - return is, nil } func migrateKeyPairStateV0toV1(is *terraform.InstanceState) (*terraform.InstanceState, error) { diff --git a/builtin/providers/aws/resource_aws_kinesis_firehose_delivery_stream.go b/builtin/providers/aws/resource_aws_kinesis_firehose_delivery_stream.go index c39467ee4f..cd64a3eee4 100644 --- a/builtin/providers/aws/resource_aws_kinesis_firehose_delivery_stream.go +++ b/builtin/providers/aws/resource_aws_kinesis_firehose_delivery_stream.go @@ -2,6 +2,7 @@ package aws import ( "fmt" + "log" "strings" "time" @@ -102,7 +103,7 @@ func resourceAwsKinesisFirehoseDeliveryStreamCreate(d *schema.ResourceData, meta DeliveryStreamName: aws.String(sn), } - s3_config := &firehose.S3DestinationConfiguration{ + s3Config := &firehose.S3DestinationConfiguration{ BucketARN: aws.String(d.Get("s3_bucket_arn").(string)), RoleARN: aws.String(d.Get("role_arn").(string)), BufferingHints: &firehose.BufferingHints{ @@ -112,12 +113,25 @@ func resourceAwsKinesisFirehoseDeliveryStreamCreate(d *schema.ResourceData, meta CompressionFormat: aws.String(d.Get("s3_data_compression").(string)), } if v, ok := d.GetOk("s3_prefix"); ok { - s3_config.Prefix = aws.String(v.(string)) + s3Config.Prefix = aws.String(v.(string)) } - input.S3DestinationConfiguration = s3_config + input.S3DestinationConfiguration = s3Config - _, err := conn.CreateDeliveryStream(input) + var err error + for i := 0; i < 5; i++ { + _, err := conn.CreateDeliveryStream(input) + if awsErr, ok := err.(awserr.Error); ok { + // IAM roles can take ~10 seconds to propagate in AWS: + // http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html#launch-instance-with-role-console + if awsErr.Code() == "InvalidArgumentException" && strings.Contains(awsErr.Message(), "Firehose is unable to assume role") { + log.Printf("[DEBUG] Firehose could not assume role referenced, retrying...") + time.Sleep(2 * time.Second) + continue + } + } + break + } if err != nil { if awsErr, ok := err.(awserr.Error); ok { return fmt.Errorf("[WARN] Error creating Kinesis Firehose Delivery Stream: \"%s\", code: \"%s\"", awsErr.Message(), awsErr.Code()) @@ -127,7 +141,7 @@ func resourceAwsKinesisFirehoseDeliveryStreamCreate(d *schema.ResourceData, meta stateConf := &resource.StateChangeConf{ Pending: []string{"CREATING"}, - Target: "ACTIVE", + Target: []string{"ACTIVE"}, Refresh: firehoseStreamStateRefreshFunc(conn, sn), Timeout: 5 * time.Minute, Delay: 10 * time.Second, @@ -242,7 +256,7 @@ func resourceAwsKinesisFirehoseDeliveryStreamDelete(d *schema.ResourceData, meta stateConf := &resource.StateChangeConf{ 
Pending: []string{"DELETING"}, - Target: "DESTROYED", + Target: []string{"DESTROYED"}, Refresh: firehoseStreamStateRefreshFunc(conn, sn), Timeout: 5 * time.Minute, Delay: 10 * time.Second, diff --git a/builtin/providers/aws/resource_aws_kinesis_firehose_delivery_stream_test.go b/builtin/providers/aws/resource_aws_kinesis_firehose_delivery_stream_test.go index 611e196ce5..5130b32017 100644 --- a/builtin/providers/aws/resource_aws_kinesis_firehose_delivery_stream_test.go +++ b/builtin/providers/aws/resource_aws_kinesis_firehose_delivery_stream_test.go @@ -4,6 +4,7 @@ import ( "fmt" "log" "math/rand" + "os" "strings" "testing" "time" @@ -16,12 +17,17 @@ import ( func TestAccAWSKinesisFirehoseDeliveryStream_basic(t *testing.T) { var stream firehose.DeliveryStreamDescription - ri := rand.New(rand.NewSource(time.Now().UnixNano())).Int() - config := fmt.Sprintf(testAccKinesisFirehoseDeliveryStreamConfig_basic, ri, ri) + config := fmt.Sprintf(testAccKinesisFirehoseDeliveryStreamConfig_basic, + os.Getenv("AWS_ACCOUNT_ID"), ri, ri) resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, + PreCheck: func() { + testAccPreCheck(t) + if os.Getenv("AWS_ACCOUNT_ID") == "" { + t.Fatal("AWS_ACCOUNT_ID must be set") + } + }, Providers: testAccProviders, CheckDestroy: testAccCheckKinesisFirehoseDeliveryStreamDestroy, Steps: []resource.TestStep{ @@ -40,11 +46,18 @@ func TestAccAWSKinesisFirehoseDeliveryStream_s3ConfigUpdates(t *testing.T) { var stream firehose.DeliveryStreamDescription ri := rand.New(rand.NewSource(time.Now().UnixNano())).Int() - preconfig := fmt.Sprintf(testAccKinesisFirehoseDeliveryStreamConfig_s3, ri, ri) - postConfig := fmt.Sprintf(testAccKinesisFirehoseDeliveryStreamConfig_s3Updates, ri, ri) + preconfig := fmt.Sprintf(testAccKinesisFirehoseDeliveryStreamConfig_s3, + os.Getenv("AWS_ACCOUNT_ID"), ri, ri) + postConfig := fmt.Sprintf(testAccKinesisFirehoseDeliveryStreamConfig_s3Updates, + os.Getenv("AWS_ACCOUNT_ID"), ri, ri) resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, + PreCheck: func() { + testAccPreCheck(t) + if os.Getenv("AWS_ACCOUNT_ID") == "" { + t.Fatal("AWS_ACCOUNT_ID must be set") + } + }, Providers: testAccProviders, CheckDestroy: testAccCheckKinesisFirehoseDeliveryStreamDestroy, Steps: []resource.TestStep{ @@ -147,41 +160,200 @@ func testAccCheckKinesisFirehoseDeliveryStreamDestroy(s *terraform.State) error } var testAccKinesisFirehoseDeliveryStreamConfig_basic = ` +resource "aws_iam_role" "firehose" { + name = "terraform_acctest_firehose_delivery_role" + assume_role_policy = < 0 { - return fmt.Errorf("Expected all resources to be gone, but found: %#v", s.RootModule().Resources) + conn := testAccProvider.Meta().(*AWSClient).elbconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_lb_cookie_stickiness_policy" { + continue + } + + lbName, _, policyName := resourceAwsLBCookieStickinessPolicyParseId(rs.Primary.ID) + out, err := conn.DescribeLoadBalancerPolicies( + &elb.DescribeLoadBalancerPoliciesInput{ + LoadBalancerName: aws.String(lbName), + PolicyNames: []*string{aws.String(policyName)}, + }) + if err != nil { + if ec2err, ok := err.(awserr.Error); ok && (ec2err.Code() == "PolicyNotFound" || ec2err.Code() == "LoadBalancerNotFound") { + continue + } + return err + } + + if len(out.PolicyDescriptions) > 0 { + return fmt.Errorf("Policy still exists") + } } return nil diff --git a/builtin/providers/aws/resource_aws_main_route_table_association_test.go 
b/builtin/providers/aws/resource_aws_main_route_table_association_test.go index 49f2815d9d..191696ef2d 100644 --- a/builtin/providers/aws/resource_aws_main_route_table_association_test.go +++ b/builtin/providers/aws/resource_aws_main_route_table_association_test.go @@ -4,6 +4,7 @@ import ( "fmt" "testing" + "github.com/aws/aws-sdk-go/aws/awserr" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" ) @@ -39,8 +40,28 @@ func TestAccAWSMainRouteTableAssociation_basic(t *testing.T) { } func testAccCheckMainRouteTableAssociationDestroy(s *terraform.State) error { - if len(s.RootModule().Resources) > 0 { - return fmt.Errorf("Expected all resources to be gone, but found: %#v", s.RootModule().Resources) + conn := testAccProvider.Meta().(*AWSClient).ec2conn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_main_route_table_association" { + continue + } + + mainAssociation, err := findMainRouteTableAssociation( + conn, + rs.Primary.Attributes["vpc_id"], + ) + if err != nil { + // Verify the error is what we want + if ae, ok := err.(awserr.Error); ok && ae.Code() == "ApplicationDoesNotExistException" { + continue + } + return err + } + + if mainAssociation != nil { + return fmt.Errorf("still exists") + } } return nil diff --git a/builtin/providers/aws/resource_aws_nat_gateway.go b/builtin/providers/aws/resource_aws_nat_gateway.go new file mode 100644 index 0000000000..c8c46ff322 --- /dev/null +++ b/builtin/providers/aws/resource_aws_nat_gateway.go @@ -0,0 +1,181 @@ +package aws + +import ( + "fmt" + "log" + "strings" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/service/ec2" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsNatGateway() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsNatGatewayCreate, + Read: resourceAwsNatGatewayRead, + Delete: resourceAwsNatGatewayDelete, + + Schema: map[string]*schema.Schema{ + "allocation_id": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "subnet_id": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "network_interface_id": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + + "private_ip": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + + "public_ip": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + }, + } +} + +func resourceAwsNatGatewayCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).ec2conn + + // Create the NAT Gateway + createOpts := &ec2.CreateNatGatewayInput{ + AllocationId: aws.String(d.Get("allocation_id").(string)), + SubnetId: aws.String(d.Get("subnet_id").(string)), + } + + log.Printf("[DEBUG] Create NAT Gateway: %s", *createOpts) + natResp, err := conn.CreateNatGateway(createOpts) + if err != nil { + return fmt.Errorf("Error creating NAT Gateway: %s", err) + } + + // Get the ID and store it + ng := natResp.NatGateway + d.SetId(*ng.NatGatewayId) + log.Printf("[INFO] NAT Gateway ID: %s", d.Id()) + + // Wait for the NAT Gateway to become available + log.Printf("[DEBUG] Waiting for NAT Gateway (%s) to become available", d.Id()) + stateConf := &resource.StateChangeConf{ + Pending: []string{"pending"}, + Target: []string{"available"}, + Refresh: NGStateRefreshFunc(conn, d.Id()), + Timeout: 10 * time.Minute, + } + + if 
_, err := stateConf.WaitForState(); err != nil { + return fmt.Errorf("Error waiting for NAT Gateway (%s) to become available: %s", d.Id(), err) + } + + // Update our attributes and return + return resourceAwsNatGatewayRead(d, meta) +} + +func resourceAwsNatGatewayRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).ec2conn + + // Refresh the NAT Gateway state + ngRaw, state, err := NGStateRefreshFunc(conn, d.Id())() + if err != nil { + return err + } + if ngRaw == nil || strings.ToLower(state) == "deleted" { + log.Printf("[INFO] Removing %s from Terraform state as it is not found or in the deleted state.", d.Id()) + d.SetId("") + return nil + } + + // Set NAT Gateway attributes + ng := ngRaw.(*ec2.NatGateway) + address := ng.NatGatewayAddresses[0] + d.Set("network_interface_id", address.NetworkInterfaceId) + d.Set("private_ip", address.PrivateIp) + d.Set("public_ip", address.PublicIp) + + return nil +} + +func resourceAwsNatGatewayDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).ec2conn + deleteOpts := &ec2.DeleteNatGatewayInput{ + NatGatewayId: aws.String(d.Id()), + } + log.Printf("[INFO] Deleting NAT Gateway: %s", d.Id()) + + _, err := conn.DeleteNatGateway(deleteOpts) + if err != nil { + ec2err, ok := err.(awserr.Error) + if !ok { + return err + } + + if ec2err.Code() == "NatGatewayNotFound" { + return nil + } + + return err + } + + stateConf := &resource.StateChangeConf{ + Pending: []string{"deleting"}, + Target: []string{"deleted"}, + Refresh: NGStateRefreshFunc(conn, d.Id()), + Timeout: 30 * time.Minute, + Delay: 10 * time.Second, + MinTimeout: 10 * time.Second, + } + + _, stateErr := stateConf.WaitForState() + if stateErr != nil { + return fmt.Errorf("Error waiting for NAT Gateway (%s) to delete: %s", d.Id(), err) + } + + return nil +} + +// NGStateRefreshFunc returns a resource.StateRefreshFunc that is used to watch +// a NAT Gateway. +func NGStateRefreshFunc(conn *ec2.EC2, id string) resource.StateRefreshFunc { + return func() (interface{}, string, error) { + opts := &ec2.DescribeNatGatewaysInput{ + NatGatewayIds: []*string{aws.String(id)}, + } + resp, err := conn.DescribeNatGateways(opts) + if err != nil { + if ec2err, ok := err.(awserr.Error); ok && ec2err.Code() == "NatGatewayNotFound" { + resp = nil + } else { + log.Printf("Error on NGStateRefresh: %s", err) + return nil, "", err + } + } + + if resp == nil { + // Sometimes AWS just has consistency issues and doesn't see + // our instance yet. Return an empty state. 
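+ // (A note on the contract relied on here, as far as the helper/resource + // behavior goes: a nil object with an empty state string reads as "not + // found yet", which StateChangeConf retries up to its NotFoundChecks + // allowance instead of treating as an error.)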
+ return nil, "", nil + } + + ng := resp.NatGateways[0] + return ng, *ng.State, nil + } +} diff --git a/builtin/providers/aws/resource_aws_nat_gateway_test.go b/builtin/providers/aws/resource_aws_nat_gateway_test.go new file mode 100644 index 0000000000..40b6f77c29 --- /dev/null +++ b/builtin/providers/aws/resource_aws_nat_gateway_test.go @@ -0,0 +1,154 @@ +package aws + +import ( + "fmt" + "strings" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/service/ec2" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSNatGateway_basic(t *testing.T) { + var natGateway ec2.NatGateway + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckNatGatewayDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccNatGatewayConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckNatGatewayExists("aws_nat_gateway.gateway", &natGateway), + ), + }, + }, + }) +} + +func testAccCheckNatGatewayDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).ec2conn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_nat_gateway" { + continue + } + + // Try to find the resource + resp, err := conn.DescribeNatGateways(&ec2.DescribeNatGatewaysInput{ + NatGatewayIds: []*string{aws.String(rs.Primary.ID)}, + }) + if err == nil { + if len(resp.NatGateways) > 0 && strings.ToLower(*resp.NatGateways[0].State) != "deleted" { + return fmt.Errorf("still exists") + } + + return nil + } + + // Verify the error is what we want + ec2err, ok := err.(awserr.Error) + if !ok { + return err + } + if ec2err.Code() != "NatGatewayNotFound" { + return err + } + } + + return nil +} + +func testAccCheckNatGatewayExists(n string, ng *ec2.NatGateway) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + conn := testAccProvider.Meta().(*AWSClient).ec2conn + resp, err := conn.DescribeNatGateways(&ec2.DescribeNatGatewaysInput{ + NatGatewayIds: []*string{aws.String(rs.Primary.ID)}, + }) + if err != nil { + return err + } + if len(resp.NatGateways) == 0 { + return fmt.Errorf("NatGateway not found") + } + + *ng = *resp.NatGateways[0] + + return nil + } +} + +const testAccNatGatewayConfig = ` +resource "aws_vpc" "vpc" { + cidr_block = "10.0.0.0/16" +} + +resource "aws_subnet" "private" { + vpc_id = "${aws_vpc.vpc.id}" + cidr_block = "10.0.1.0/24" + map_public_ip_on_launch = false +} + +resource "aws_subnet" "public" { + vpc_id = "${aws_vpc.vpc.id}" + cidr_block = "10.0.2.0/24" + map_public_ip_on_launch = true +} + +resource "aws_internet_gateway" "gw" { + vpc_id = "${aws_vpc.vpc.id}" +} + +resource "aws_eip" "nat_gateway" { + vpc = true +} + +// Actual SUT +resource "aws_nat_gateway" "gateway" { + allocation_id = "${aws_eip.nat_gateway.id}" + subnet_id = "${aws_subnet.public.id}" + + depends_on = ["aws_internet_gateway.gw"] +} + +resource "aws_route_table" "private" { + vpc_id = "${aws_vpc.vpc.id}" + + route { + cidr_block = "0.0.0.0/0" + nat_gateway_id = "${aws_nat_gateway.gateway.id}" + } +} + +resource "aws_route_table_association" "private" { + subnet_id = "${aws_subnet.private.id}" + route_table_id = "${aws_route_table.private.id}" +} + +resource "aws_route_table" "public" { + vpc_id = 
"${aws_vpc.vpc.id}" + + route { + cidr_block = "0.0.0.0/0" + gateway_id = "${aws_internet_gateway.gw.id}" + } +} + +resource "aws_route_table_association" "public" { + subnet_id = "${aws_subnet.public.id}" + route_table_id = "${aws_route_table.public.id}" +} +` diff --git a/builtin/providers/aws/resource_aws_network_acl.go b/builtin/providers/aws/resource_aws_network_acl.go index 20144f7325..ede84ed958 100644 --- a/builtin/providers/aws/resource_aws_network_acl.go +++ b/builtin/providers/aws/resource_aws_network_acl.go @@ -50,6 +50,7 @@ func resourceAwsNetworkAcl() *schema.Resource { Type: schema.TypeSet, Required: false, Optional: true, + Computed: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "from_port": &schema.Schema{ @@ -92,6 +93,7 @@ func resourceAwsNetworkAcl() *schema.Resource { Type: schema.TypeSet, Required: false, Optional: true, + Computed: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "from_port": &schema.Schema{ @@ -316,87 +318,89 @@ func resourceAwsNetworkAclUpdate(d *schema.ResourceData, meta interface{}) error func updateNetworkAclEntries(d *schema.ResourceData, entryType string, conn *ec2.EC2) error { - o, n := d.GetChange(entryType) + if d.HasChange(entryType) { + o, n := d.GetChange(entryType) - if o == nil { - o = new(schema.Set) - } - if n == nil { - n = new(schema.Set) - } - - os := o.(*schema.Set) - ns := n.(*schema.Set) - - toBeDeleted, err := expandNetworkAclEntries(os.Difference(ns).List(), entryType) - if err != nil { - return err - } - for _, remove := range toBeDeleted { - - // AWS includes default rules with all network ACLs that can be - // neither modified nor destroyed. They have a custom rule - // number that is out of bounds for any other rule. If we - // encounter it, just continue. There's no work to be done. - if *remove.RuleNumber == 32767 { - continue + if o == nil { + o = new(schema.Set) + } + if n == nil { + n = new(schema.Set) } - // Delete old Acl - _, err := conn.DeleteNetworkAclEntry(&ec2.DeleteNetworkAclEntryInput{ - NetworkAclId: aws.String(d.Id()), - RuleNumber: remove.RuleNumber, - Egress: remove.Egress, - }) + os := o.(*schema.Set) + ns := n.(*schema.Set) + + toBeDeleted, err := expandNetworkAclEntries(os.Difference(ns).List(), entryType) if err != nil { - return fmt.Errorf("Error deleting %s entry: %s", entryType, err) - } - } - - toBeCreated, err := expandNetworkAclEntries(ns.Difference(os).List(), entryType) - if err != nil { - return err - } - for _, add := range toBeCreated { - // Protocol -1 rules don't store ports in AWS. Thus, they'll always - // hash differently when being read out of the API. Force the user - // to set from_port and to_port to 0 for these rules, to keep the - // hashing consistent. - if *add.Protocol == "-1" { - to := *add.PortRange.To - from := *add.PortRange.From - expected := &expectedPortPair{ - to_port: 0, - from_port: 0, - } - if ok := validatePorts(to, from, *expected); !ok { - return fmt.Errorf( - "to_port (%d) and from_port (%d) must both be 0 to use the the 'all' \"-1\" protocol!", - to, from) - } - } - - // AWS mutates the CIDR block into a network implied by the IP and - // mask provided. This results in hashing inconsistencies between - // the local config file and the state returned by the API. 
Error - // if the user provides a CIDR block with an inappropriate mask - if err := validateCIDRBlock(*add.CidrBlock); err != nil { return err } + for _, remove := range toBeDeleted { - // Add new Acl entry - _, connErr := conn.CreateNetworkAclEntry(&ec2.CreateNetworkAclEntryInput{ - NetworkAclId: aws.String(d.Id()), - CidrBlock: add.CidrBlock, - Egress: add.Egress, - PortRange: add.PortRange, - Protocol: add.Protocol, - RuleAction: add.RuleAction, - RuleNumber: add.RuleNumber, - IcmpTypeCode: add.IcmpTypeCode, - }) - if connErr != nil { - return fmt.Errorf("Error creating %s entry: %s", entryType, connErr) + // AWS includes default rules with all network ACLs that can be + // neither modified nor destroyed. They have a custom rule + // number that is out of bounds for any other rule. If we + // encounter it, just continue. There's no work to be done. + if *remove.RuleNumber == 32767 { + continue + } + + // Delete old Acl + _, err := conn.DeleteNetworkAclEntry(&ec2.DeleteNetworkAclEntryInput{ + NetworkAclId: aws.String(d.Id()), + RuleNumber: remove.RuleNumber, + Egress: remove.Egress, + }) + if err != nil { + return fmt.Errorf("Error deleting %s entry: %s", entryType, err) + } + } + + toBeCreated, err := expandNetworkAclEntries(ns.Difference(os).List(), entryType) + if err != nil { + return err + } + for _, add := range toBeCreated { + // Protocol -1 rules don't store ports in AWS. Thus, they'll always + // hash differently when being read out of the API. Force the user + // to set from_port and to_port to 0 for these rules, to keep the + // hashing consistent. + if *add.Protocol == "-1" { + to := *add.PortRange.To + from := *add.PortRange.From + expected := &expectedPortPair{ + to_port: 0, + from_port: 0, + } + if ok := validatePorts(to, from, *expected); !ok { + return fmt.Errorf( + "to_port (%d) and from_port (%d) must both be 0 to use the the 'all' \"-1\" protocol!", + to, from) + } + } + + // AWS mutates the CIDR block into a network implied by the IP and + // mask provided. This results in hashing inconsistencies between + // the local config file and the state returned by the API. 
Error + // if the user provides a CIDR block with an inappropriate mask + if err := validateCIDRBlock(*add.CidrBlock); err != nil { + return err + } + + // Add new Acl entry + _, connErr := conn.CreateNetworkAclEntry(&ec2.CreateNetworkAclEntryInput{ + NetworkAclId: aws.String(d.Id()), + CidrBlock: add.CidrBlock, + Egress: add.Egress, + PortRange: add.PortRange, + Protocol: add.Protocol, + RuleAction: add.RuleAction, + RuleNumber: add.RuleNumber, + IcmpTypeCode: add.IcmpTypeCode, + }) + if connErr != nil { + return fmt.Errorf("Error creating %s entry: %s", entryType, connErr) + } } } return nil diff --git a/builtin/providers/aws/resource_aws_network_acl_rule.go b/builtin/providers/aws/resource_aws_network_acl_rule.go new file mode 100644 index 0000000000..16b174d0c1 --- /dev/null +++ b/builtin/providers/aws/resource_aws_network_acl_rule.go @@ -0,0 +1,247 @@ +package aws + +import ( + "bytes" + "fmt" + "log" + "strconv" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/ec2" + "github.com/hashicorp/terraform/helper/hashcode" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsNetworkAclRule() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsNetworkAclRuleCreate, + Read: resourceAwsNetworkAclRuleRead, + Delete: resourceAwsNetworkAclRuleDelete, + + Schema: map[string]*schema.Schema{ + "network_acl_id": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "rule_number": &schema.Schema{ + Type: schema.TypeInt, + Required: true, + ForceNew: true, + }, + "egress": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + ForceNew: true, + Default: false, + }, + "protocol": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "rule_action": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "cidr_block": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "from_port": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + ForceNew: true, + }, + "to_port": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + ForceNew: true, + }, + "icmp_type": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + ForceNew: true, + }, + "icmp_code": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + ForceNew: true, + }, + }, + } +} + +func resourceAwsNetworkAclRuleCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).ec2conn + + protocol := d.Get("protocol").(string) + p, protocolErr := strconv.Atoi(protocol) + if protocolErr != nil { + var ok bool + p, ok = protocolIntegers()[protocol] + if !ok { + return fmt.Errorf("Invalid Protocol %s for rule %#v", protocol, d.Get("rule_number").(int)) + } + } + log.Printf("[INFO] Transformed Protocol %s into %d", protocol, p) + + params := &ec2.CreateNetworkAclEntryInput{ + NetworkAclId: aws.String(d.Get("network_acl_id").(string)), + Egress: aws.Bool(d.Get("egress").(bool)), + RuleNumber: aws.Int64(int64(d.Get("rule_number").(int))), + Protocol: aws.String(strconv.Itoa(p)), + CidrBlock: aws.String(d.Get("cidr_block").(string)), + RuleAction: aws.String(d.Get("rule_action").(string)), + PortRange: &ec2.PortRange{ + From: aws.Int64(int64(d.Get("from_port").(int))), + To: aws.Int64(int64(d.Get("to_port").(int))), + }, + } + + // Specify additional required fields for ICMP + if p == 1 { + params.IcmpTypeCode = &ec2.IcmpTypeCode{} + if v, ok := d.GetOk("icmp_code"); ok { + 
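// Editorial note (not part of this diff): protocol number 1 is ICMP, and for
// ICMP rules the EC2 API carries the type/code pair in an IcmpTypeCode block;
// the assignments below copy those values only when set in config (GetOk).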
params.IcmpTypeCode.Code = aws.Int64(int64(v.(int))) + } + if v, ok := d.GetOk("icmp_type"); ok { + params.IcmpTypeCode.Type = aws.Int64(int64(v.(int))) + } + } + + log.Printf("[INFO] Creating Network Acl Rule: %d (%t)", d.Get("rule_number").(int), d.Get("egress").(bool)) + _, err := conn.CreateNetworkAclEntry(params) + if err != nil { + return fmt.Errorf("Error Creating Network Acl Rule: %s", err.Error()) + } + d.SetId(networkAclIdRuleNumberEgressHash(d.Get("network_acl_id").(string), d.Get("rule_number").(int), d.Get("egress").(bool), d.Get("protocol").(string))) + + // It appears it might be a while until the newly created rule is visible via the + // API (see issue GH-4721). Retry the `findNetworkAclRule` function until it is + // visible (which in most cases is likely immediately). + err = resource.Retry(3*time.Minute, func() error { + _, findErr := findNetworkAclRule(d, meta) + if findErr != nil { + return findErr + } + + return nil + }) + if err != nil { + return fmt.Errorf("Created Network ACL Rule was not visible in API within 3 minute period. Running 'terraform apply' again will resume infrastructure creation.") + } + + return resourceAwsNetworkAclRuleRead(d, meta) +} + +func resourceAwsNetworkAclRuleRead(d *schema.ResourceData, meta interface{}) error { + resp, err := findNetworkAclRule(d, meta) + if err != nil { + return err + } + + d.Set("rule_number", resp.RuleNumber) + d.Set("cidr_block", resp.CidrBlock) + d.Set("egress", resp.Egress) + if resp.IcmpTypeCode != nil { + d.Set("icmp_code", resp.IcmpTypeCode.Code) + d.Set("icmp_type", resp.IcmpTypeCode.Type) + } + if resp.PortRange != nil { + d.Set("from_port", resp.PortRange.From) + d.Set("to_port", resp.PortRange.To) + } + + d.Set("rule_action", resp.RuleAction) + + p, protocolErr := strconv.Atoi(*resp.Protocol) + log.Printf("[INFO] Converting the protocol %v", p) + if protocolErr == nil { + var ok bool + protocol, ok := protocolStrings(protocolIntegers())[p] + if !ok { + return fmt.Errorf("Invalid Protocol %s for rule %#v", *resp.Protocol, d.Get("rule_number").(int)) + } + log.Printf("[INFO] Transformed Protocol %s back into %s", *resp.Protocol, protocol) + d.Set("protocol", protocol) + } + + return nil +} + +func resourceAwsNetworkAclRuleDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).ec2conn + + params := &ec2.DeleteNetworkAclEntryInput{ + NetworkAclId: aws.String(d.Get("network_acl_id").(string)), + RuleNumber: aws.Int64(int64(d.Get("rule_number").(int))), + Egress: aws.Bool(d.Get("egress").(bool)), + } + + log.Printf("[INFO] Deleting Network Acl Rule: %s", d.Id()) + _, err := conn.DeleteNetworkAclEntry(params) + if err != nil { + return fmt.Errorf("Error Deleting Network Acl Rule: %s", err.Error()) + } + + return nil +} + +func findNetworkAclRule(d *schema.ResourceData, meta interface{}) (*ec2.NetworkAclEntry, error) { + conn := meta.(*AWSClient).ec2conn + + filters := make([]*ec2.Filter, 0, 2) + ruleNumberFilter := &ec2.Filter{ + Name: aws.String("entry.rule-number"), + Values: []*string{aws.String(fmt.Sprintf("%v", d.Get("rule_number").(int)))}, + } + filters = append(filters, ruleNumberFilter) + egressFilter := &ec2.Filter{ + Name: aws.String("entry.egress"), + Values: []*string{aws.String(fmt.Sprintf("%v", d.Get("egress").(bool)))}, + } + filters = append(filters, egressFilter) + params := &ec2.DescribeNetworkAclsInput{ + NetworkAclIds: []*string{aws.String(d.Get("network_acl_id").(string))}, + Filters: filters, + } + + log.Printf("[INFO] Describing Network Acl: %s", 
d.Get("network_acl_id").(string)) + log.Printf("[INFO] Describing Network Acl with the Filters %#v", params) + resp, err := conn.DescribeNetworkAcls(params) + if err != nil { + return nil, fmt.Errorf("Error Finding Network Acl Rule %d: %s", d.Get("rule_number").(int), err.Error()) + } + + if resp == nil || len(resp.NetworkAcls) != 1 || resp.NetworkAcls[0] == nil { + return nil, fmt.Errorf( + "Expected to find one Network ACL, got: %#v", + resp.NetworkAcls) + } + networkAcl := resp.NetworkAcls[0] + if networkAcl.Entries != nil { + for _, i := range networkAcl.Entries { + if *i.RuleNumber == int64(d.Get("rule_number").(int)) && *i.Egress == d.Get("egress").(bool) { + return i, nil + } + } + } + return nil, fmt.Errorf( + "Expected the Network ACL to have Entries, got: %#v", + networkAcl) + +} + +func networkAclIdRuleNumberEgressHash(networkAclId string, ruleNumber int, egress bool, protocol string) string { + var buf bytes.Buffer + buf.WriteString(fmt.Sprintf("%s-", networkAclId)) + buf.WriteString(fmt.Sprintf("%d-", ruleNumber)) + buf.WriteString(fmt.Sprintf("%t-", egress)) + buf.WriteString(fmt.Sprintf("%s-", protocol)) + return fmt.Sprintf("nacl-%d", hashcode.String(buf.String())) +} diff --git a/builtin/providers/aws/resource_aws_network_acl_rule_test.go b/builtin/providers/aws/resource_aws_network_acl_rule_test.go new file mode 100644 index 0000000000..56973b1d47 --- /dev/null +++ b/builtin/providers/aws/resource_aws_network_acl_rule_test.go @@ -0,0 +1,125 @@ +package aws + +import ( + "fmt" + "strconv" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/service/ec2" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSNetworkAclRule_basic(t *testing.T) { + var networkAcl ec2.NetworkAcl + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSNetworkAclRuleDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAWSNetworkAclRuleBasicConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSNetworkAclRuleExists("aws_network_acl_rule.bar", &networkAcl), + ), + }, + }, + }) +} + +func testAccCheckAWSNetworkAclRuleDestroy(s *terraform.State) error { + + for _, rs := range s.RootModule().Resources { + conn := testAccProvider.Meta().(*AWSClient).ec2conn + if rs.Type != "aws_network_acl_rule" { + continue + } + + req := &ec2.DescribeNetworkAclsInput{ + NetworkAclIds: []*string{aws.String(rs.Primary.ID)}, + } + resp, err := conn.DescribeNetworkAcls(req) + if err == nil { + if len(resp.NetworkAcls) > 0 && *resp.NetworkAcls[0].NetworkAclId == rs.Primary.ID { + networkAcl := resp.NetworkAcls[0] + if networkAcl.Entries != nil { + return fmt.Errorf("Network ACL Entries still exist") + } + } + } + + ec2err, ok := err.(awserr.Error) + if !ok { + return err + } + if ec2err.Code() != "InvalidNetworkAclID.NotFound" { + return err + } + } + + return nil +} + +func testAccCheckAWSNetworkAclRuleExists(n string, networkAcl *ec2.NetworkAcl) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).ec2conn + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No Network ACL Id is set") + } + + req := &ec2.DescribeNetworkAclsInput{ + NetworkAclIds: []*string{aws.String(rs.Primary.Attributes["network_acl_id"])}, + } + resp, err 
:= conn.DescribeNetworkAcls(req) + if err != nil { + return err + } + if len(resp.NetworkAcls) != 1 { + return fmt.Errorf("Network ACL not found") + } + egress, err := strconv.ParseBool(rs.Primary.Attributes["egress"]) + if err != nil { + return err + } + ruleNo, err := strconv.ParseInt(rs.Primary.Attributes["rule_number"], 10, 64) + if err != nil { + return err + } + for _, e := range resp.NetworkAcls[0].Entries { + if *e.RuleNumber == ruleNo && *e.Egress == egress { + return nil + } + } + return fmt.Errorf("Entry not found: %s", resp.NetworkAcls[0]) + } +} + +const testAccAWSNetworkAclRuleBasicConfig = ` +provider "aws" { + region = "us-east-1" +} +resource "aws_vpc" "foo" { + cidr_block = "10.3.0.0/16" +} +resource "aws_network_acl" "bar" { + vpc_id = "${aws_vpc.foo.id}" +} +resource "aws_network_acl_rule" "bar" { + network_acl_id = "${aws_network_acl.bar.id}" + rule_number = 200 + egress = false + protocol = "tcp" + rule_action = "allow" + cidr_block = "0.0.0.0/0" + from_port = 22 + to_port = 22 +} +` diff --git a/builtin/providers/aws/resource_aws_network_interface.go b/builtin/providers/aws/resource_aws_network_interface.go index d994e56545..46d329bc6f 100644 --- a/builtin/providers/aws/resource_aws_network_interface.go +++ b/builtin/providers/aws/resource_aws_network_interface.go @@ -33,7 +33,6 @@ func resourceAwsNetworkInterface() *schema.Resource { "private_ips": &schema.Schema{ Type: schema.TypeSet, Optional: true, - ForceNew: true, Computed: true, Elem: &schema.Schema{Type: schema.TypeString}, Set: schema.HashString, @@ -187,7 +186,7 @@ func resourceAwsNetworkInterfaceDetach(oa *schema.Set, meta interface{}, eniId s log.Printf("[DEBUG] Waiting for ENI (%s) to become dettached", eniId) stateConf := &resource.StateChangeConf{ Pending: []string{"true"}, - Target: "false", + Target: []string{"false"}, Refresh: networkInterfaceAttachmentRefreshFunc(conn, eniId), Timeout: 10 * time.Minute, } @@ -230,6 +229,47 @@ func resourceAwsNetworkInterfaceUpdate(d *schema.ResourceData, meta interface{}) d.SetPartial("attachment") } + if d.HasChange("private_ips") { + o, n := d.GetChange("private_ips") + if o == nil { + o = new(schema.Set) + } + if n == nil { + n = new(schema.Set) + } + + os := o.(*schema.Set) + ns := n.(*schema.Set) + + // Unassign old IP addresses + unassignIps := os.Difference(ns) + if unassignIps.Len() != 0 { + input := &ec2.UnassignPrivateIpAddressesInput{ + NetworkInterfaceId: aws.String(d.Id()), + PrivateIpAddresses: expandStringList(unassignIps.List()), + } + _, err := conn.UnassignPrivateIpAddresses(input) + if err != nil { + return fmt.Errorf("Failure to unassign Private IPs: %s", err) + } + } + + // Assign new IP addresses + assignIps := ns.Difference(os) + if assignIps.Len() != 0 { + input := &ec2.AssignPrivateIpAddressesInput{ + NetworkInterfaceId: aws.String(d.Id()), + PrivateIpAddresses: expandStringList(assignIps.List()), + } + _, err := conn.AssignPrivateIpAddresses(input) + if err != nil { + return fmt.Errorf("Failure to assign Private IPs: %s", err) + } + } + + d.SetPartial("private_ips") + } + request := &ec2.ModifyNetworkInterfaceAttributeInput{ NetworkInterfaceId: aws.String(d.Id()), SourceDestCheck: &ec2.AttributeBooleanValue{Value: aws.Bool(d.Get("source_dest_check").(bool))}, diff --git a/builtin/providers/aws/resource_aws_opsworks_custom_layer_test.go b/builtin/providers/aws/resource_aws_opsworks_custom_layer_test.go index a39b5dbdba..ed3d0fad6e 100644 --- a/builtin/providers/aws/resource_aws_opsworks_custom_layer_test.go +++ 
b/builtin/providers/aws/resource_aws_opsworks_custom_layer_test.go @@ -4,6 +4,9 @@ import ( "fmt" "testing" + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/service/opsworks" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" ) @@ -11,7 +14,7 @@ import ( // These tests assume the existence of predefined Opsworks IAM roles named `aws-opsworks-ec2-role` // and `aws-opsworks-service-role`. -func TestAccAwsOpsworksCustomLayer(t *testing.T) { +func TestAccAWSOpsworksCustomLayer(t *testing.T) { resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, @@ -131,11 +134,30 @@ func TestAccAwsOpsworksCustomLayer(t *testing.T) { } func testAccCheckAwsOpsworksCustomLayerDestroy(s *terraform.State) error { - if len(s.RootModule().Resources) > 0 { - return fmt.Errorf("Expected all resources to be gone, but found: %#v", s.RootModule().Resources) + opsworksconn := testAccProvider.Meta().(*AWSClient).opsworksconn + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_opsworks_custom_layer" { + continue + } + req := &opsworks.DescribeLayersInput{ + LayerIds: []*string{ + aws.String(rs.Primary.ID), + }, + } + + _, err := opsworksconn.DescribeLayers(req) + if err != nil { + if awserr, ok := err.(awserr.Error); ok { + if awserr.Code() == "ResourceNotFoundException" { + // not found, good to go + return nil + } + } + return err + } } - return nil + return fmt.Errorf("Fall through error on OpsWorks custom layer test") } var testAccAwsOpsworksCustomLayerSecurityGroups = ` @@ -160,6 +182,10 @@ resource "aws_security_group" "tf-ops-acc-layer2" { ` var testAccAwsOpsworksCustomLayerConfigCreate = testAccAwsOpsworksStackConfigNoVpcCreate + testAccAwsOpsworksCustomLayerSecurityGroups + ` +provider "aws" { + region = "us-east-1" +} + resource "aws_opsworks_custom_layer" "tf-acc" { stack_id = "${aws_opsworks_stack.tf-acc.id}" name = "tf-ops-acc-custom-layer" diff --git a/builtin/providers/aws/resource_aws_opsworks_stack.go b/builtin/providers/aws/resource_aws_opsworks_stack.go index 8eeda3f05b..19cbba9ecd 100644 --- a/builtin/providers/aws/resource_aws_opsworks_stack.go +++ b/builtin/providers/aws/resource_aws_opsworks_stack.go @@ -231,9 +231,6 @@ func resourceAwsOpsworksSetStackCustomCookbooksSource(d *schema.ResourceData, v if v.Revision != nil { m["revision"] = *v.Revision } - if v.SshKey != nil { - m["ssh_key"] = *v.SshKey - } nv = append(nv, m) } @@ -259,6 +256,7 @@ func resourceAwsOpsworksStackRead(d *schema.ResourceData, meta interface{}) erro if err != nil { if awserr, ok := err.(awserr.Error); ok { if awserr.Code() == "ResourceNotFoundException" { + log.Printf("[DEBUG] OpsWorks stack (%s) not found", d.Id()) d.SetId("") return nil } @@ -306,9 +304,10 @@ func resourceAwsOpsworksStackCreate(d *schema.ResourceData, meta interface{}) er req := &opsworks.CreateStackInput{ DefaultInstanceProfileArn: aws.String(d.Get("default_instance_profile_arn").(string)), - Name: aws.String(d.Get("name").(string)), - Region: aws.String(d.Get("region").(string)), - ServiceRoleArn: aws.String(d.Get("service_role_arn").(string)), + Name: aws.String(d.Get("name").(string)), + Region: aws.String(d.Get("region").(string)), + ServiceRoleArn: aws.String(d.Get("service_role_arn").(string)), + UseOpsworksSecurityGroups: aws.Bool(d.Get("use_opsworks_security_groups").(bool)), } inVpc := false if vpcId, ok := d.GetOk("vpc_id"); ok { @@ -322,7 +321,7 @@ func resourceAwsOpsworksStackCreate(d 
*schema.ResourceData, meta interface{}) er req.DefaultAvailabilityZone = aws.String(defaultAvailabilityZone.(string)) } - log.Printf("[DEBUG] Creating OpsWorks stack: %s", *req.Name) + log.Printf("[DEBUG] Creating OpsWorks stack: %s", req) var resp *opsworks.CreateStackOutput err = resource.Retry(20*time.Minute, func() error { @@ -339,7 +338,9 @@ func resourceAwsOpsworksStackCreate(d *schema.ResourceData, meta interface{}) er // The full error we're looking for looks something like // the following: // Service Role Arn: [...] is not yet propagated, please try again in a couple of minutes - if opserr.Code() == "ValidationException" && strings.Contains(opserr.Message(), "not yet propagated") { + propErr := "not yet propagated" + trustErr := "not the necessary trust relationship" + if opserr.Code() == "ValidationException" && (strings.Contains(opserr.Message(), trustErr) || strings.Contains(opserr.Message(), propErr)) { log.Printf("[INFO] Waiting for service IAM role to propagate") return cerr } @@ -356,7 +357,7 @@ func resourceAwsOpsworksStackCreate(d *schema.ResourceData, meta interface{}) er d.SetId(stackId) d.Set("id", stackId) - if inVpc { + if inVpc && *req.UseOpsworksSecurityGroups { // For VPC-based stacks, OpsWorks asynchronously creates some default // security groups which must exist before layers can be created. // Unfortunately it doesn't tell us what the ids of these are, so @@ -414,7 +415,7 @@ func resourceAwsOpsworksStackUpdate(d *schema.ResourceData, meta interface{}) er Version: aws.String(d.Get("configuration_manager_version").(string)), } - log.Printf("[DEBUG] Updating OpsWorks stack: %s", d.Id()) + log.Printf("[DEBUG] Updating OpsWorks stack: %s", req) _, err = client.UpdateStack(req) if err != nil { @@ -447,7 +448,10 @@ func resourceAwsOpsworksStackDelete(d *schema.ResourceData, meta interface{}) er // wait for the security groups to be deleted. // There is no robust way to check for this, so we'll just wait a // nominal amount of time. 
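// Editorial sketch (not part of this diff): OpsWorks exposes no status to
// poll for its built-in security groups, hence the fixed sleep kept below.
// Where a status can be polled, this changeset standardizes on
// resource.StateChangeConf, whose Target field is now a []string (see the
// placement-group, RDS, and Redshift hunks). A minimal sketch; the states
// and refresh function here are illustrative assumptions:
func waitForAvailable(refresh resource.StateRefreshFunc) error {
	conf := &resource.StateChangeConf{
		Pending:    []string{"pending", "creating"}, // transient states to wait through
		Target:     []string{"available"},           // Target takes a slice in this release
		Refresh:    refresh,                         // returns (object, currentState, error)
		Timeout:    5 * time.Minute,
		MinTimeout: 3 * time.Second,
	}
	_, err := conf.WaitForState()
	return err
}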
- if _, ok := d.GetOk("vpc_id"); ok { + _, inVpc := d.GetOk("vpc_id") + _, useOpsworksDefaultSg := d.GetOk("use_opsworks_security_group") + + if inVpc && useOpsworksDefaultSg { log.Print("[INFO] Waiting for Opsworks built-in security groups to be deleted") time.Sleep(30 * time.Second) } diff --git a/builtin/providers/aws/resource_aws_opsworks_stack_test.go b/builtin/providers/aws/resource_aws_opsworks_stack_test.go index 63a27578c6..ba34663d4f 100644 --- a/builtin/providers/aws/resource_aws_opsworks_stack_test.go +++ b/builtin/providers/aws/resource_aws_opsworks_stack_test.go @@ -8,6 +8,7 @@ import ( "github.com/hashicorp/terraform/terraform" "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/opsworks" ) @@ -90,10 +91,10 @@ resource "aws_iam_instance_profile" "opsworks_instance" { var testAccAwsOpsworksStackConfigNoVpcCreate = testAccAwsOpsworksStackIamConfig + ` resource "aws_opsworks_stack" "tf-acc" { name = "tf-opsworks-acc" - region = "us-west-2" + region = "us-east-1" service_role_arn = "${aws_iam_role.opsworks_service.arn}" default_instance_profile_arn = "${aws_iam_instance_profile.opsworks_instance.arn}" - default_availability_zone = "us-west-2a" + default_availability_zone = "us-east-1c" default_os = "Amazon Linux 2014.09" default_root_device_type = "ebs" custom_json = "{\"key\": \"value\"}" @@ -104,10 +105,10 @@ resource "aws_opsworks_stack" "tf-acc" { var testAccAWSOpsworksStackConfigNoVpcUpdate = testAccAwsOpsworksStackIamConfig + ` resource "aws_opsworks_stack" "tf-acc" { name = "tf-opsworks-acc" - region = "us-west-2" + region = "us-east-1" service_role_arn = "${aws_iam_role.opsworks_service.arn}" default_instance_profile_arn = "${aws_iam_instance_profile.opsworks_instance.arn}" - default_availability_zone = "us-west-2a" + default_availability_zone = "us-east-1c" default_os = "Amazon Linux 2014.09" default_root_device_type = "ebs" custom_json = "{\"key\": \"value\"}" @@ -123,7 +124,7 @@ resource "aws_opsworks_stack" "tf-acc" { } ` -func TestAccAwsOpsworksStackNoVpc(t *testing.T) { +func TestAccAWSOpsworksStackNoVpc(t *testing.T) { resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, @@ -131,11 +132,11 @@ func TestAccAwsOpsworksStackNoVpc(t *testing.T) { Steps: []resource.TestStep{ resource.TestStep{ Config: testAccAwsOpsworksStackConfigNoVpcCreate, - Check: testAccAwsOpsworksStackCheckResourceAttrsCreate, + Check: testAccAwsOpsworksStackCheckResourceAttrsCreate("us-east-1c"), }, resource.TestStep{ Config: testAccAWSOpsworksStackConfigNoVpcUpdate, - Check: testAccAwsOpsworksStackCheckResourceAttrsUpdate, + Check: testAccAwsOpsworksStackCheckResourceAttrsUpdate("us-east-1c"), }, }, }) @@ -200,7 +201,7 @@ resource "aws_opsworks_stack" "tf-acc" { } ` -func TestAccAwsOpsworksStackVpc(t *testing.T) { +func TestAccAWSOpsworksStackVpc(t *testing.T) { resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, @@ -208,12 +209,12 @@ func TestAccAwsOpsworksStackVpc(t *testing.T) { Steps: []resource.TestStep{ resource.TestStep{ Config: testAccAwsOpsworksStackConfigVpcCreate, - Check: testAccAwsOpsworksStackCheckResourceAttrsCreate, + Check: testAccAwsOpsworksStackCheckResourceAttrsCreate("us-west-2a"), }, resource.TestStep{ Config: testAccAWSOpsworksStackConfigVpcUpdate, Check: resource.ComposeTestCheckFunc( - testAccAwsOpsworksStackCheckResourceAttrsUpdate, + testAccAwsOpsworksStackCheckResourceAttrsUpdate("us-west-2a"), 
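// Editorial sketch (not part of this diff): the check helpers referenced
// above used to be package-level vars hard-coded to us-west-2a; the hunks
// below turn them into factories parameterized by availability zone, so the
// no-VPC tests can run in us-east-1. The shape of that refactor, as a toy
// example (the function name is illustrative):
func checkDefaultZone(zone string) resource.TestCheckFunc {
	return resource.TestCheckResourceAttr(
		"aws_opsworks_stack.tf-acc", // resource address under test
		"default_availability_zone", // attribute asserted
		zone,                        // parameterized instead of hard-coded
	)
}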
testAccAwsOpsworksCheckVpc, ), }, @@ -225,106 +226,110 @@ func TestAccAwsOpsworksStackVpc(t *testing.T) { //// Checkers and Utilities //////////////////////////// -var testAccAwsOpsworksStackCheckResourceAttrsCreate = resource.ComposeTestCheckFunc( - resource.TestCheckResourceAttr( - "aws_opsworks_stack.tf-acc", - "name", - "tf-opsworks-acc", - ), - resource.TestCheckResourceAttr( - "aws_opsworks_stack.tf-acc", - "default_availability_zone", - "us-west-2a", - ), - resource.TestCheckResourceAttr( - "aws_opsworks_stack.tf-acc", - "default_os", - "Amazon Linux 2014.09", - ), - resource.TestCheckResourceAttr( - "aws_opsworks_stack.tf-acc", - "default_root_device_type", - "ebs", - ), - resource.TestCheckResourceAttr( - "aws_opsworks_stack.tf-acc", - "custom_json", - `{"key": "value"}`, - ), - resource.TestCheckResourceAttr( - "aws_opsworks_stack.tf-acc", - "configuration_manager_version", - "11.10", - ), - resource.TestCheckResourceAttr( - "aws_opsworks_stack.tf-acc", - "use_opsworks_security_groups", - "false", - ), -) +func testAccAwsOpsworksStackCheckResourceAttrsCreate(zone string) resource.TestCheckFunc { + return resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr( + "aws_opsworks_stack.tf-acc", + "name", + "tf-opsworks-acc", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_stack.tf-acc", + "default_availability_zone", + zone, + ), + resource.TestCheckResourceAttr( + "aws_opsworks_stack.tf-acc", + "default_os", + "Amazon Linux 2014.09", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_stack.tf-acc", + "default_root_device_type", + "ebs", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_stack.tf-acc", + "custom_json", + `{"key": "value"}`, + ), + resource.TestCheckResourceAttr( + "aws_opsworks_stack.tf-acc", + "configuration_manager_version", + "11.10", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_stack.tf-acc", + "use_opsworks_security_groups", + "false", + ), + ) +} -var testAccAwsOpsworksStackCheckResourceAttrsUpdate = resource.ComposeTestCheckFunc( - resource.TestCheckResourceAttr( - "aws_opsworks_stack.tf-acc", - "name", - "tf-opsworks-acc", - ), - resource.TestCheckResourceAttr( - "aws_opsworks_stack.tf-acc", - "default_availability_zone", - "us-west-2a", - ), - resource.TestCheckResourceAttr( - "aws_opsworks_stack.tf-acc", - "default_os", - "Amazon Linux 2014.09", - ), - resource.TestCheckResourceAttr( - "aws_opsworks_stack.tf-acc", - "default_root_device_type", - "ebs", - ), - resource.TestCheckResourceAttr( - "aws_opsworks_stack.tf-acc", - "custom_json", - `{"key": "value"}`, - ), - resource.TestCheckResourceAttr( - "aws_opsworks_stack.tf-acc", - "configuration_manager_version", - "11.10", - ), - resource.TestCheckResourceAttr( - "aws_opsworks_stack.tf-acc", - "use_opsworks_security_groups", - "false", - ), - resource.TestCheckResourceAttr( - "aws_opsworks_stack.tf-acc", - "use_custom_cookbooks", - "true", - ), - resource.TestCheckResourceAttr( - "aws_opsworks_stack.tf-acc", - "manage_berkshelf", - "true", - ), - resource.TestCheckResourceAttr( - "aws_opsworks_stack.tf-acc", - "custom_cookbooks_source.0.type", - "git", - ), - resource.TestCheckResourceAttr( - "aws_opsworks_stack.tf-acc", - "custom_cookbooks_source.0.revision", - "master", - ), - resource.TestCheckResourceAttr( - "aws_opsworks_stack.tf-acc", - "custom_cookbooks_source.0.url", - "https://github.com/aws/opsworks-example-cookbooks.git", - ), -) +func testAccAwsOpsworksStackCheckResourceAttrsUpdate(zone string) resource.TestCheckFunc { + return resource.ComposeTestCheckFunc( + 
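// Editorial note (not part of this diff): the body below repeats the create
// variant's checks and adds the custom-cookbooks assertions; the only
// behavioral change is that default_availability_zone now compares against
// the zone argument.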
resource.TestCheckResourceAttr( + "aws_opsworks_stack.tf-acc", + "name", + "tf-opsworks-acc", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_stack.tf-acc", + "default_availability_zone", + zone, + ), + resource.TestCheckResourceAttr( + "aws_opsworks_stack.tf-acc", + "default_os", + "Amazon Linux 2014.09", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_stack.tf-acc", + "default_root_device_type", + "ebs", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_stack.tf-acc", + "custom_json", + `{"key": "value"}`, + ), + resource.TestCheckResourceAttr( + "aws_opsworks_stack.tf-acc", + "configuration_manager_version", + "11.10", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_stack.tf-acc", + "use_opsworks_security_groups", + "false", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_stack.tf-acc", + "use_custom_cookbooks", + "true", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_stack.tf-acc", + "manage_berkshelf", + "true", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_stack.tf-acc", + "custom_cookbooks_source.0.type", + "git", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_stack.tf-acc", + "custom_cookbooks_source.0.revision", + "master", + ), + resource.TestCheckResourceAttr( + "aws_opsworks_stack.tf-acc", + "custom_cookbooks_source.0.url", + "https://github.com/aws/opsworks-example-cookbooks.git", + ), + ) +} func testAccAwsOpsworksCheckVpc(s *terraform.State) error { rs, ok := s.RootModule().Resources["aws_opsworks_stack.tf-acc"] @@ -358,9 +363,28 @@ func testAccAwsOpsworksCheckVpc(s *terraform.State) error { } func testAccCheckAwsOpsworksStackDestroy(s *terraform.State) error { - if len(s.RootModule().Resources) > 0 { - return fmt.Errorf("Expected all resources to be gone, but found: %#v", s.RootModule().Resources) - } + opsworksconn := testAccProvider.Meta().(*AWSClient).opsworksconn + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_opsworks_stack" { + continue + } - return nil + req := &opsworks.DescribeStacksInput{ + StackIds: []*string{ + aws.String(rs.Primary.ID), + }, + } + + _, err := opsworksconn.DescribeStacks(req) + if err != nil { + if awserr, ok := err.(awserr.Error); ok { + if awserr.Code() == "ResourceNotFoundException" { + // not found, all good + return nil + } + } + return err + } + } + return fmt.Errorf("Fall through error for OpsWorks stack test") } diff --git a/builtin/providers/aws/resource_aws_placement_group.go b/builtin/providers/aws/resource_aws_placement_group.go index 9f0452f755..fcc8cc95cb 100644 --- a/builtin/providers/aws/resource_aws_placement_group.go +++ b/builtin/providers/aws/resource_aws_placement_group.go @@ -49,7 +49,7 @@ func resourceAwsPlacementGroupCreate(d *schema.ResourceData, meta interface{}) e wait := resource.StateChangeConf{ Pending: []string{"pending"}, - Target: "available", + Target: []string{"available"}, Timeout: 5 * time.Minute, MinTimeout: 1 * time.Second, Refresh: func() (interface{}, string, error) { @@ -114,7 +114,7 @@ func resourceAwsPlacementGroupDelete(d *schema.ResourceData, meta interface{}) e wait := resource.StateChangeConf{ Pending: []string{"deleting"}, - Target: "deleted", + Target: []string{"deleted"}, Timeout: 5 * time.Minute, MinTimeout: 1 * time.Second, Refresh: func() (interface{}, string, error) { diff --git a/builtin/providers/aws/resource_aws_placement_group_test.go b/builtin/providers/aws/resource_aws_placement_group_test.go index a68e43e92f..8743975c24 100644 --- a/builtin/providers/aws/resource_aws_placement_group_test.go +++ 
b/builtin/providers/aws/resource_aws_placement_group_test.go @@ -8,6 +8,7 @@ import ( "github.com/hashicorp/terraform/terraform" "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/ec2" ) @@ -34,12 +35,19 @@ func testAccCheckAWSPlacementGroupDestroy(s *terraform.State) error { if rs.Type != "aws_placement_group" { continue } - _, err := conn.DeletePlacementGroup(&ec2.DeletePlacementGroupInput{ - GroupName: aws.String(rs.Primary.ID), + + _, err := conn.DescribePlacementGroups(&ec2.DescribePlacementGroupsInput{ + GroupNames: []*string{aws.String(rs.Primary.Attributes["name"])}, }) if err != nil { + // Verify the error is what we want + if ae, ok := err.(awserr.Error); ok && ae.Code() == "InvalidPlacementGroup.Unknown" { + continue + } return err } + + return fmt.Errorf("still exists") } return nil } diff --git a/builtin/providers/aws/resource_aws_proxy_protocol_policy_test.go b/builtin/providers/aws/resource_aws_proxy_protocol_policy_test.go index 945a62d48b..ff1cc7a7e7 100644 --- a/builtin/providers/aws/resource_aws_proxy_protocol_policy_test.go +++ b/builtin/providers/aws/resource_aws_proxy_protocol_policy_test.go @@ -4,6 +4,8 @@ import ( "fmt" "testing" + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/elb" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" ) @@ -43,10 +45,28 @@ func TestAccAWSProxyProtocolPolicy_basic(t *testing.T) { } func testAccCheckProxyProtocolPolicyDestroy(s *terraform.State) error { - if len(s.RootModule().Resources) > 0 { - return fmt.Errorf("Expected all resources to be gone, but found: %#v", s.RootModule().Resources) - } + conn := testAccProvider.Meta().(*AWSClient).elbconn + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_placement_group" { + continue + } + + req := &elb.DescribeLoadBalancersInput{ + LoadBalancerNames: []*string{ + aws.String(rs.Primary.Attributes["load_balancer"])}, + } + _, err := conn.DescribeLoadBalancers(req) + if err != nil { + // Verify the error is what we want + if isLoadBalancerNotFound(err) { + continue + } + return err + } + + return fmt.Errorf("still exists") + } return nil } diff --git a/builtin/providers/aws/resource_aws_rds_cluster.go b/builtin/providers/aws/resource_aws_rds_cluster.go index 6dac39f0ea..553a85221c 100644 --- a/builtin/providers/aws/resource_aws_rds_cluster.go +++ b/builtin/providers/aws/resource_aws_rds_cluster.go @@ -212,7 +212,7 @@ func resourceAwsRDSClusterCreate(d *schema.ResourceData, meta interface{}) error d.SetId(*resp.DBCluster.DBClusterIdentifier) stateConf := &resource.StateChangeConf{ Pending: []string{"creating", "backing-up", "modifying"}, - Target: "available", + Target: []string{"available"}, Refresh: resourceAwsRDSClusterStateRefreshFunc(d, meta), Timeout: 5 * time.Minute, MinTimeout: 3 * time.Second, @@ -352,7 +352,7 @@ func resourceAwsRDSClusterDelete(d *schema.ResourceData, meta interface{}) error stateConf := &resource.StateChangeConf{ Pending: []string{"deleting", "backing-up", "modifying"}, - Target: "destroyed", + Target: []string{"destroyed"}, Refresh: resourceAwsRDSClusterStateRefreshFunc(d, meta), Timeout: 5 * time.Minute, MinTimeout: 3 * time.Second, diff --git a/builtin/providers/aws/resource_aws_rds_cluster_instance.go b/builtin/providers/aws/resource_aws_rds_cluster_instance.go index bdffd59d4c..39e144a356 100644 --- a/builtin/providers/aws/resource_aws_rds_cluster_instance.go +++ b/builtin/providers/aws/resource_aws_rds_cluster_instance.go @@ 
-105,7 +105,7 @@ func resourceAwsRDSClusterInstanceCreate(d *schema.ResourceData, meta interface{ // reuse db_instance refresh func stateConf := &resource.StateChangeConf{ Pending: []string{"creating", "backing-up", "modifying"}, - Target: "available", + Target: []string{"available"}, Refresh: resourceAwsDbInstanceStateRefreshFunc(d, meta), Timeout: 40 * time.Minute, MinTimeout: 10 * time.Second, @@ -205,7 +205,7 @@ func resourceAwsRDSClusterInstanceDelete(d *schema.ResourceData, meta interface{ log.Println("[INFO] Waiting for RDS Cluster Instance to be destroyed") stateConf := &resource.StateChangeConf{ Pending: []string{"modifying", "deleting"}, - Target: "", + Target: []string{}, Refresh: resourceAwsDbInstanceStateRefreshFunc(d, meta), Timeout: 40 * time.Minute, MinTimeout: 10 * time.Second, diff --git a/builtin/providers/aws/resource_aws_redshift_cluster.go b/builtin/providers/aws/resource_aws_redshift_cluster.go new file mode 100644 index 0000000000..95793cd6ad --- /dev/null +++ b/builtin/providers/aws/resource_aws_redshift_cluster.go @@ -0,0 +1,574 @@ +package aws + +import ( + "fmt" + "log" + "regexp" + "strings" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/service/redshift" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsRedshiftCluster() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsRedshiftClusterCreate, + Read: resourceAwsRedshiftClusterRead, + Update: resourceAwsRedshiftClusterUpdate, + Delete: resourceAwsRedshiftClusterDelete, + + Schema: map[string]*schema.Schema{ + "database_name": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + ValidateFunc: validateRedshiftClusterDbName, + }, + + "cluster_identifier": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validateRedshiftClusterIdentifier, + }, + "cluster_type": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "node_type": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "master_username": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ValidateFunc: validateRedshiftClusterMasterUsername, + }, + + "master_password": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "cluster_security_groups": &schema.Schema{ + Type: schema.TypeSet, + Optional: true, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + }, + + "vpc_security_group_ids": &schema.Schema{ + Type: schema.TypeSet, + Optional: true, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + }, + + "cluster_subnet_group_name": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Computed: true, + }, + + "availability_zone": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + + "preferred_maintenance_window": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + StateFunc: func(val interface{}) string { + if val == nil { + return "" + } + return strings.ToLower(val.(string)) + }, + }, + + "cluster_parameter_group_name": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + + "automated_snapshot_retention_period": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + Default: 1, + ValidateFunc: func(v interface{}, k string) (ws []string, es []error) { + value 
:= v.(int) + if value > 35 { + es = append(es, fmt.Errorf( + "backup retention period cannot be more than 35 days")) + } + return + }, + }, + + "port": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + Default: 5439, + }, + + "cluster_version": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Default: "1.0", + }, + + "allow_version_upgrade": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Default: true, + }, + + "number_of_nodes": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + Default: 1, + }, + + "publicly_accessible": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + ForceNew: true, + }, + + "encrypted": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Computed: true, + }, + + "elastic_ip": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + + "final_snapshot_identifier": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ValidateFunc: validateRedshiftClusterFinalSnapshotIdentifier, + }, + + "skip_final_snapshot": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Default: true, + }, + + "endpoint": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + + "cluster_public_key": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + + "cluster_revision_number": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + }, + } +} + +func resourceAwsRedshiftClusterCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).redshiftconn + + log.Printf("[INFO] Building Redshift Cluster Options") + createOpts := &redshift.CreateClusterInput{ + ClusterIdentifier: aws.String(d.Get("cluster_identifier").(string)), + Port: aws.Int64(int64(d.Get("port").(int))), + MasterUserPassword: aws.String(d.Get("master_password").(string)), + MasterUsername: aws.String(d.Get("master_username").(string)), + ClusterType: aws.String(d.Get("cluster_type").(string)), + ClusterVersion: aws.String(d.Get("cluster_version").(string)), + NodeType: aws.String(d.Get("node_type").(string)), + DBName: aws.String(d.Get("database_name").(string)), + AllowVersionUpgrade: aws.Bool(d.Get("allow_version_upgrade").(bool)), + } + if d.Get("cluster_type") == "multi-node" { + createOpts.NumberOfNodes = aws.Int64(int64(d.Get("number_of_nodes").(int))) + } + if v := d.Get("cluster_security_groups").(*schema.Set); v.Len() > 0 { + createOpts.ClusterSecurityGroups = expandStringList(v.List()) + } + + if v := d.Get("vpc_security_group_ids").(*schema.Set); v.Len() > 0 { + createOpts.VpcSecurityGroupIds = expandStringList(v.List()) + } + + if v, ok := d.GetOk("cluster_subnet_group_name"); ok { + createOpts.ClusterSubnetGroupName = aws.String(v.(string)) + } + + if v, ok := d.GetOk("availability_zone"); ok { + createOpts.AvailabilityZone = aws.String(v.(string)) + } + + if v, ok := d.GetOk("preferred_maintenance_window"); ok { + createOpts.PreferredMaintenanceWindow = aws.String(v.(string)) + } + + if v, ok := d.GetOk("cluster_parameter_group_name"); ok { + createOpts.ClusterParameterGroupName = aws.String(v.(string)) + } + + if v, ok := d.GetOk("automated_snapshot_retention_period"); ok { + createOpts.AutomatedSnapshotRetentionPeriod = aws.Int64(int64(v.(int))) + } + + if v, ok := d.GetOk("publicly_accessible"); ok { + createOpts.PubliclyAccessible = aws.Bool(v.(bool)) + } + + if v, ok := d.GetOk("encrypted"); ok { + createOpts.Encrypted = aws.Bool(v.(bool)) + } + + if v, ok := d.GetOk("elastic_ip"); ok { + 
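// Editorial note (not part of this diff): as with the optional settings
// above, elastic_ip is copied into CreateClusterInput only when present in
// config, leaving the API default in effect otherwise.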
createOpts.ElasticIp = aws.String(v.(string)) + } + + log.Printf("[DEBUG] Redshift Cluster create options: %s", createOpts) + resp, err := conn.CreateCluster(createOpts) + if err != nil { + log.Printf("[ERROR] Error creating Redshift Cluster: %s", err) + return err + } + + log.Printf("[DEBUG]: Cluster create response: %s", resp) + d.SetId(*resp.Cluster.ClusterIdentifier) + + stateConf := &resource.StateChangeConf{ + Pending: []string{"creating", "backing-up", "modifying"}, + Target: []string{"available"}, + Refresh: resourceAwsRedshiftClusterStateRefreshFunc(d, meta), + Timeout: 5 * time.Minute, + MinTimeout: 3 * time.Second, + } + + _, err = stateConf.WaitForState() + if err != nil { + return fmt.Errorf("[WARN] Error waiting for Redshift Cluster state to be \"available\": %s", err) + } + + return resourceAwsRedshiftClusterRead(d, meta) +} + +func resourceAwsRedshiftClusterRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).redshiftconn + + log.Printf("[INFO] Reading Redshift Cluster Information: %s", d.Id()) + resp, err := conn.DescribeClusters(&redshift.DescribeClustersInput{ + ClusterIdentifier: aws.String(d.Id()), + }) + + if err != nil { + if awsErr, ok := err.(awserr.Error); ok { + if "ClusterNotFound" == awsErr.Code() { + d.SetId("") + log.Printf("[DEBUG] Redshift Cluster (%s) not found", d.Id()) + return nil + } + } + log.Printf("[DEBUG] Error describing Redshift Cluster (%s)", d.Id()) + return err + } + + var rsc *redshift.Cluster + for _, c := range resp.Clusters { + if *c.ClusterIdentifier == d.Id() { + rsc = c + } + } + + if rsc == nil { + log.Printf("[WARN] Redshift Cluster (%s) not found", d.Id()) + d.SetId("") + return nil + } + + d.Set("database_name", rsc.DBName) + d.Set("cluster_subnet_group_name", rsc.ClusterSubnetGroupName) + d.Set("availability_zone", rsc.AvailabilityZone) + d.Set("encrypted", rsc.Encrypted) + d.Set("automated_snapshot_retention_period", rsc.AutomatedSnapshotRetentionPeriod) + d.Set("preferred_maintenance_window", rsc.PreferredMaintenanceWindow) + d.Set("endpoint", aws.String(fmt.Sprintf("%s:%d", *rsc.Endpoint.Address, *rsc.Endpoint.Port))) + d.Set("cluster_parameter_group_name", rsc.ClusterParameterGroups[0].ParameterGroupName) + + var vpcg []string + for _, g := range rsc.VpcSecurityGroups { + vpcg = append(vpcg, *g.VpcSecurityGroupId) + } + if err := d.Set("vpc_security_group_ids", vpcg); err != nil { + return fmt.Errorf("[DEBUG] Error saving VPC Security Group IDs to state for Redshift Cluster (%s): %s", d.Id(), err) + } + + var csg []string + for _, g := range rsc.ClusterSecurityGroups { + csg = append(csg, *g.ClusterSecurityGroupName) + } + if err := d.Set("cluster_security_groups", csg); err != nil { + return fmt.Errorf("[DEBUG] Error saving Cluster Security Group Names to state for Redshift Cluster (%s): %s", d.Id(), err) + } + + d.Set("cluster_public_key", rsc.ClusterPublicKey) + d.Set("cluster_revision_number", rsc.ClusterRevisionNumber) + + return nil +} + +func resourceAwsRedshiftClusterUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).redshiftconn + + log.Printf("[INFO] Building Redshift Modify Cluster Options") + req := &redshift.ModifyClusterInput{ + ClusterIdentifier: aws.String(d.Id()), + } + + if d.HasChange("cluster_type") { + req.ClusterType = aws.String(d.Get("cluster_type").(string)) + } + + if d.HasChange("node_type") { + req.NodeType = aws.String(d.Get("node_type").(string)) + } + + if d.HasChange("number_of_nodes") { + log.Printf("[INFO] When changing the 
NumberOfNodes in a Redshift Cluster, NodeType is required") + req.NumberOfNodes = aws.Int64(int64(d.Get("number_of_nodes").(int))) + req.NodeType = aws.String(d.Get("node_type").(string)) + } + + if d.HasChange("cluster_security_groups") { + req.ClusterSecurityGroups = expandStringList(d.Get("cluster_security_groups").(*schema.Set).List()) + } + + if d.HasChange("vpc_security_group_ips") { + req.VpcSecurityGroupIds = expandStringList(d.Get("vpc_security_group_ips").(*schema.Set).List()) + } + + if d.HasChange("master_password") { + req.MasterUserPassword = aws.String(d.Get("master_password").(string)) + } + + if d.HasChange("cluster_parameter_group_name") { + req.ClusterParameterGroupName = aws.String(d.Get("cluster_parameter_group_name").(string)) + } + + if d.HasChange("automated_snapshot_retention_period") { + req.AutomatedSnapshotRetentionPeriod = aws.Int64(int64(d.Get("automated_snapshot_retention_period").(int))) + } + + if d.HasChange("preferred_maintenance_window") { + req.PreferredMaintenanceWindow = aws.String(d.Get("preferred_maintenance_window").(string)) + } + + if d.HasChange("cluster_version") { + req.ClusterVersion = aws.String(d.Get("cluster_version").(string)) + } + + if d.HasChange("allow_version_upgrade") { + req.AllowVersionUpgrade = aws.Bool(d.Get("allow_version_upgrade").(bool)) + } + + log.Printf("[INFO] Modifying Redshift Cluster: %s", d.Id()) + log.Printf("[DEBUG] Redshift Cluster Modify options: %s", req) + _, err := conn.ModifyCluster(req) + if err != nil { + return fmt.Errorf("[WARN] Error modifying Redshift Cluster (%s): %s", d.Id(), err) + } + + stateConf := &resource.StateChangeConf{ + Pending: []string{"creating", "deleting", "rebooting", "resizing", "renaming"}, + Target: []string{"available"}, + Refresh: resourceAwsRedshiftClusterStateRefreshFunc(d, meta), + Timeout: 10 * time.Minute, + MinTimeout: 5 * time.Second, + } + + // Wait, catching any errors + _, err = stateConf.WaitForState() + if err != nil { + return fmt.Errorf("[WARN] Error Modifying Redshift Cluster (%s): %s", d.Id(), err) + } + + return resourceAwsRedshiftClusterRead(d, meta) +} + +func resourceAwsRedshiftClusterDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).redshiftconn + log.Printf("[DEBUG] Destroying Redshift Cluster (%s)", d.Id()) + + deleteOpts := redshift.DeleteClusterInput{ + ClusterIdentifier: aws.String(d.Id()), + } + + skipFinalSnapshot := d.Get("skip_final_snapshot").(bool) + deleteOpts.SkipFinalClusterSnapshot = aws.Bool(skipFinalSnapshot) + + if !skipFinalSnapshot { + if name, present := d.GetOk("final_snapshot_identifier"); present { + deleteOpts.FinalClusterSnapshotIdentifier = aws.String(name.(string)) + } else { + return fmt.Errorf("Redshift Cluster Instance FinalSnapshotIdentifier is required when a final snapshot is required") + } + } + + log.Printf("[DEBUG] Redshift Cluster delete options: %s", deleteOpts) + _, err := conn.DeleteCluster(&deleteOpts) + if err != nil { + return fmt.Errorf("[ERROR] Error deleting Redshift Cluster (%s): %s", d.Id(), err) + } + + stateConf := &resource.StateChangeConf{ + Pending: []string{"available", "creating", "deleting", "rebooting", "resizing", "renaming"}, + Target: []string{"destroyed"}, + Refresh: resourceAwsRedshiftClusterStateRefreshFunc(d, meta), + Timeout: 40 * time.Minute, + MinTimeout: 5 * time.Second, + } + + // Wait, catching any errors + _, err = stateConf.WaitForState() + if err != nil { + return fmt.Errorf("[ERROR] Error deleting Redshift Cluster (%s): %s", d.Id(), err) + } + + 
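// Editorial note (not part of this diff): reaching this log line means the
// state machine above observed its "destroyed" target; the refresh function
// below maps a ClusterNotFound error to that state, so a vanished cluster
// counts as successfully deleted.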
log.Printf("[INFO] Redshift Cluster %s successfully deleted", d.Id()) + + return nil +} + +func resourceAwsRedshiftClusterStateRefreshFunc(d *schema.ResourceData, meta interface{}) resource.StateRefreshFunc { + return func() (interface{}, string, error) { + conn := meta.(*AWSClient).redshiftconn + + log.Printf("[INFO] Reading Redshift Cluster Information: %s", d.Id()) + resp, err := conn.DescribeClusters(&redshift.DescribeClustersInput{ + ClusterIdentifier: aws.String(d.Id()), + }) + + if err != nil { + if awsErr, ok := err.(awserr.Error); ok { + if "ClusterNotFound" == awsErr.Code() { + return 42, "destroyed", nil + } + } + log.Printf("[WARN] Error on retrieving Redshift Cluster (%s) when waiting: %s", d.Id(), err) + return nil, "", err + } + + var rsc *redshift.Cluster + + for _, c := range resp.Clusters { + if *c.ClusterIdentifier == d.Id() { + rsc = c + } + } + + if rsc == nil { + return 42, "destroyed", nil + } + + if rsc.ClusterStatus != nil { + log.Printf("[DEBUG] Redshift Cluster status (%s): %s", d.Id(), *rsc.ClusterStatus) + } + + return rsc, *rsc.ClusterStatus, nil + } +} + +func validateRedshiftClusterIdentifier(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + if !regexp.MustCompile(`^[0-9a-z-]+$`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "only lowercase alphanumeric characters and hyphens allowed in %q", k)) + } + if !regexp.MustCompile(`^[a-z]`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "first character of %q must be a letter", k)) + } + if regexp.MustCompile(`--`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "%q cannot contain two consecutive hyphens", k)) + } + if regexp.MustCompile(`-$`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "%q cannot end with a hyphen", k)) + } + return +} + +func validateRedshiftClusterDbName(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + if !regexp.MustCompile(`^[a-z]+$`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "only lowercase letters characters allowed in %q", k)) + } + if len(value) > 64 { + errors = append(errors, fmt.Errorf( + "%q cannot be longer than 64 characters: %q", k, value)) + } + if value == "" { + errors = append(errors, fmt.Errorf( + "%q cannot be an empty string", k)) + } + + return +} + +func validateRedshiftClusterFinalSnapshotIdentifier(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + if !regexp.MustCompile(`^[0-9A-Za-z-]+$`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "only alphanumeric characters and hyphens allowed in %q", k)) + } + if regexp.MustCompile(`--`).MatchString(value) { + errors = append(errors, fmt.Errorf("%q cannot contain two consecutive hyphens", k)) + } + if regexp.MustCompile(`-$`).MatchString(value) { + errors = append(errors, fmt.Errorf("%q cannot end in a hyphen", k)) + } + if len(value) > 255 { + errors = append(errors, fmt.Errorf("%q cannot be more than 255 characters", k)) + } + return +} + +func validateRedshiftClusterMasterUsername(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + if !regexp.MustCompile(`^[A-Za-z0-9]+$`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "only alphanumeric characters in %q", k)) + } + if !regexp.MustCompile(`^[A-Za-z]`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "first character of %q must be a letter", k)) + } + if len(value) > 128 { + errors = append(errors, fmt.Errorf("%q cannot be more than 128 
characters", k)) + } + return +} diff --git a/builtin/providers/aws/resource_aws_redshift_cluster_test.go b/builtin/providers/aws/resource_aws_redshift_cluster_test.go new file mode 100644 index 0000000000..241311db6b --- /dev/null +++ b/builtin/providers/aws/resource_aws_redshift_cluster_test.go @@ -0,0 +1,249 @@ +package aws + +import ( + "fmt" + "math/rand" + "testing" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/service/redshift" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSRedshiftCluster_basic(t *testing.T) { + var v redshift.Cluster + + ri := rand.New(rand.NewSource(time.Now().UnixNano())).Int() + config := fmt.Sprintf(testAccAWSRedshiftClusterConfig_basic, ri) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSRedshiftClusterDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: config, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSRedshiftClusterExists("aws_redshift_cluster.default", &v), + ), + }, + }, + }) +} + +func testAccCheckAWSRedshiftClusterDestroy(s *terraform.State) error { + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_redshift_cluster" { + continue + } + + // Try to find the Group + conn := testAccProvider.Meta().(*AWSClient).redshiftconn + var err error + resp, err := conn.DescribeClusters( + &redshift.DescribeClustersInput{ + ClusterIdentifier: aws.String(rs.Primary.ID), + }) + + if err == nil { + if len(resp.Clusters) != 0 && + *resp.Clusters[0].ClusterIdentifier == rs.Primary.ID { + return fmt.Errorf("Redshift Cluster %s still exists", rs.Primary.ID) + } + } + + // Return nil if the cluster is already destroyed + if awsErr, ok := err.(awserr.Error); ok { + if awsErr.Code() == "ClusterNotFound" { + return nil + } + } + + return err + } + + return nil +} + +func testAccCheckAWSRedshiftClusterExists(n string, v *redshift.Cluster) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No Redshift Cluster Instance ID is set") + } + + conn := testAccProvider.Meta().(*AWSClient).redshiftconn + resp, err := conn.DescribeClusters(&redshift.DescribeClustersInput{ + ClusterIdentifier: aws.String(rs.Primary.ID), + }) + + if err != nil { + return err + } + + for _, c := range resp.Clusters { + if *c.ClusterIdentifier == rs.Primary.ID { + *v = *c + return nil + } + } + + return fmt.Errorf("Redshift Cluster (%s) not found", rs.Primary.ID) + } +} + +func TestResourceAWSRedshiftClusterIdentifierValidation(t *testing.T) { + cases := []struct { + Value string + ErrCount int + }{ + { + Value: "tEsting", + ErrCount: 1, + }, + { + Value: "1testing", + ErrCount: 1, + }, + { + Value: "testing--123", + ErrCount: 1, + }, + { + Value: "testing!", + ErrCount: 1, + }, + { + Value: "testing-", + ErrCount: 1, + }, + } + + for _, tc := range cases { + _, errors := validateRedshiftClusterIdentifier(tc.Value, "aws_redshift_cluster_identifier") + + if len(errors) != tc.ErrCount { + t.Fatalf("Expected the Redshift Cluster cluster_identifier to trigger a validation error") + } + } +} + +func TestResourceAWSRedshiftClusterDbNameValidation(t *testing.T) { + cases := []struct { + Value string + ErrCount int + }{ + { + Value: "tEsting", + ErrCount: 1, + }, + { + Value: 
"testing1", + ErrCount: 1, + }, + { + Value: "testing-", + ErrCount: 1, + }, + { + Value: "", + ErrCount: 2, + }, + { + Value: randomString(65), + ErrCount: 1, + }, + } + + for _, tc := range cases { + _, errors := validateRedshiftClusterDbName(tc.Value, "aws_redshift_cluster_database_name") + + if len(errors) != tc.ErrCount { + t.Fatalf("Expected the Redshift Cluster database_name to trigger a validation error") + } + } +} + +func TestResourceAWSRedshiftClusterFinalSnapshotIdentifierValidation(t *testing.T) { + cases := []struct { + Value string + ErrCount int + }{ + { + Value: "testing--123", + ErrCount: 1, + }, + { + Value: "testing-", + ErrCount: 1, + }, + { + Value: "Testingq123!", + ErrCount: 1, + }, + { + Value: randomString(256), + ErrCount: 1, + }, + } + + for _, tc := range cases { + _, errors := validateRedshiftClusterFinalSnapshotIdentifier(tc.Value, "aws_redshift_cluster_final_snapshot_identifier") + + if len(errors) != tc.ErrCount { + t.Fatalf("Expected the Redshift Cluster final_snapshot_identifier to trigger a validation error") + } + } +} + +func TestResourceAWSRedshiftClusterMasterUsernameValidation(t *testing.T) { + cases := []struct { + Value string + ErrCount int + }{ + { + Value: "1Testing", + ErrCount: 1, + }, + { + Value: "Testing!!", + ErrCount: 1, + }, + { + Value: randomString(129), + ErrCount: 1, + }, + } + + for _, tc := range cases { + _, errors := validateRedshiftClusterMasterUsername(tc.Value, "aws_redshift_cluster_master_username") + + if len(errors) != tc.ErrCount { + t.Fatalf("Expected the Redshift Cluster master_username to trigger a validation error") + } + } +} + +var testAccAWSRedshiftClusterConfig_basic = ` +provider "aws" { + region = "us-west-2" +} + +resource "aws_redshift_cluster" "default" { + cluster_identifier = "tf-redshift-cluster-%d" + availability_zone = "us-west-2a" + database_name = "mydb" + master_username = "foo" + master_password = "Mustbe8characters" + node_type = "dc1.large" + cluster_type = "single-node" + automated_snapshot_retention_period = 7 + allow_version_upgrade = false +}` diff --git a/builtin/providers/aws/resource_aws_redshift_parameter_group.go b/builtin/providers/aws/resource_aws_redshift_parameter_group.go new file mode 100644 index 0000000000..4059e4b768 --- /dev/null +++ b/builtin/providers/aws/resource_aws_redshift_parameter_group.go @@ -0,0 +1,238 @@ +package aws + +import ( + "bytes" + "fmt" + "log" + "regexp" + "strings" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/service/redshift" + "github.com/hashicorp/terraform/helper/hashcode" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsRedshiftParameterGroup() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsRedshiftParameterGroupCreate, + Read: resourceAwsRedshiftParameterGroupRead, + Update: resourceAwsRedshiftParameterGroupUpdate, + Delete: resourceAwsRedshiftParameterGroupDelete, + + Schema: map[string]*schema.Schema{ + "name": &schema.Schema{ + Type: schema.TypeString, + ForceNew: true, + Required: true, + ValidateFunc: validateRedshiftParamGroupName, + }, + + "family": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "description": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "parameter": &schema.Schema{ + Type: schema.TypeSet, + Optional: true, + ForceNew: false, + Elem: &schema.Resource{ + Schema: 
map[string]*schema.Schema{ + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + "value": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + }, + }, + Set: resourceAwsRedshiftParameterHash, + }, + }, + } +} + +func resourceAwsRedshiftParameterGroupCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).redshiftconn + + createOpts := redshift.CreateClusterParameterGroupInput{ + ParameterGroupName: aws.String(d.Get("name").(string)), + ParameterGroupFamily: aws.String(d.Get("family").(string)), + Description: aws.String(d.Get("description").(string)), + } + + log.Printf("[DEBUG] Create Redshift Parameter Group: %#v", createOpts) + _, err := conn.CreateClusterParameterGroup(&createOpts) + if err != nil { + return fmt.Errorf("Error creating Redshift Parameter Group: %s", err) + } + + d.SetId(*createOpts.ParameterGroupName) + log.Printf("[INFO] Redshift Parameter Group ID: %s", d.Id()) + + return resourceAwsRedshiftParameterGroupUpdate(d, meta) +} + +func resourceAwsRedshiftParameterGroupRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).redshiftconn + + describeOpts := redshift.DescribeClusterParameterGroupsInput{ + ParameterGroupName: aws.String(d.Id()), + } + + describeResp, err := conn.DescribeClusterParameterGroups(&describeOpts) + if err != nil { + return err + } + + if len(describeResp.ParameterGroups) != 1 || + *describeResp.ParameterGroups[0].ParameterGroupName != d.Id() { + d.SetId("") + return fmt.Errorf("Unable to find Parameter Group: %#v", describeResp.ParameterGroups) + } + + d.Set("name", describeResp.ParameterGroups[0].ParameterGroupName) + d.Set("family", describeResp.ParameterGroups[0].ParameterGroupFamily) + d.Set("description", describeResp.ParameterGroups[0].Description) + + describeParametersOpts := redshift.DescribeClusterParametersInput{ + ParameterGroupName: aws.String(d.Id()), + Source: aws.String("user"), + } + + describeParametersResp, err := conn.DescribeClusterParameters(&describeParametersOpts) + if err != nil { + return err + } + + d.Set("parameter", flattenRedshiftParameters(describeParametersResp.Parameters)) + return nil +} + +func resourceAwsRedshiftParameterGroupUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).redshiftconn + + d.Partial(true) + + if d.HasChange("parameter") { + o, n := d.GetChange("parameter") + if o == nil { + o = new(schema.Set) + } + if n == nil { + n = new(schema.Set) + } + + os := o.(*schema.Set) + ns := n.(*schema.Set) + + // Expand the "parameter" set to aws-sdk-go compat []redshift.Parameter + parameters, err := expandRedshiftParameters(ns.Difference(os).List()) + if err != nil { + return err + } + + if len(parameters) > 0 { + modifyOpts := redshift.ModifyClusterParameterGroupInput{ + ParameterGroupName: aws.String(d.Get("name").(string)), + Parameters: parameters, + } + + log.Printf("[DEBUG] Modify Redshift Parameter Group: %s", modifyOpts) + _, err = conn.ModifyClusterParameterGroup(&modifyOpts) + if err != nil { + return fmt.Errorf("Error modifying Redshift Parameter Group: %s", err) + } + } + d.SetPartial("parameter") + } + + d.Partial(false) + return resourceAwsRedshiftParameterGroupRead(d, meta) +} + +func resourceAwsRedshiftParameterGroupDelete(d *schema.ResourceData, meta interface{}) error { + stateConf := &resource.StateChangeConf{ + Pending: []string{"pending"}, + Target: []string{"destroyed"}, + Refresh: resourceAwsRedshiftParameterGroupDeleteRefreshFunc(d, meta), + Timeout: 3 * 
time.Minute, + MinTimeout: 1 * time.Second, + } + _, err := stateConf.WaitForState() + return err +} + +func resourceAwsRedshiftParameterGroupDeleteRefreshFunc( + d *schema.ResourceData, + meta interface{}) resource.StateRefreshFunc { + conn := meta.(*AWSClient).redshiftconn + + return func() (interface{}, string, error) { + + deleteOpts := redshift.DeleteClusterParameterGroupInput{ + ParameterGroupName: aws.String(d.Id()), + } + + if _, err := conn.DeleteClusterParameterGroup(&deleteOpts); err != nil { + redshiftErr, ok := err.(awserr.Error) + if !ok { + return d, "error", err + } + + if redshiftErr.Code() != "RedshiftParameterGroupNotFoundFault" { + return d, "error", err + } + } + + return d, "destroyed", nil + } +} + +func resourceAwsRedshiftParameterHash(v interface{}) int { + var buf bytes.Buffer + m := v.(map[string]interface{}) + buf.WriteString(fmt.Sprintf("%s-", m["name"].(string))) + // Store the value as a lower case string, to match how we store them in flattenParameters + buf.WriteString(fmt.Sprintf("%s-", strings.ToLower(m["value"].(string)))) + + return hashcode.String(buf.String()) +} + +func validateRedshiftParamGroupName(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + if !regexp.MustCompile(`^[0-9a-z-]+$`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "only lowercase alphanumeric characters and hyphens allowed in %q", k)) + } + if !regexp.MustCompile(`^[a-z]`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "first character of %q must be a letter", k)) + } + if regexp.MustCompile(`--`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "%q cannot contain two consecutive hyphens", k)) + } + if regexp.MustCompile(`-$`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "%q cannot end with a hyphen", k)) + } + if len(value) > 255 { + errors = append(errors, fmt.Errorf( + "%q cannot be greater than 255 characters", k)) + } + return +} diff --git a/builtin/providers/aws/resource_aws_redshift_parameter_group_test.go b/builtin/providers/aws/resource_aws_redshift_parameter_group_test.go new file mode 100644 index 0000000000..b71fbed085 --- /dev/null +++ b/builtin/providers/aws/resource_aws_redshift_parameter_group_test.go @@ -0,0 +1,207 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/service/redshift" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSRedshiftParameterGroup_withParameters(t *testing.T) { + var v redshift.ClusterParameterGroup + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSRedshiftParameterGroupDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAWSRedshiftParameterGroupConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSRedshiftParameterGroupExists("aws_redshift_parameter_group.bar", &v), + resource.TestCheckResourceAttr( + "aws_redshift_parameter_group.bar", "name", "parameter-group-test-terraform"), + resource.TestCheckResourceAttr( + "aws_redshift_parameter_group.bar", "family", "redshift-1.0"), + resource.TestCheckResourceAttr( + "aws_redshift_parameter_group.bar", "description", "Test parameter group for terraform"), + resource.TestCheckResourceAttr( + "aws_redshift_parameter_group.bar", "parameter.490804664.name", "require_ssl"), + resource.TestCheckResourceAttr( + 
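+ // The numeric segments in these "parameter.NNN" state keys
+ // (490804664, 2036118857, ...) are the set indexes computed by
+ // resourceAwsRedshiftParameterHash from each parameter's name and
+ // lower-cased value.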
"aws_redshift_parameter_group.bar", "parameter.490804664.value", "true"), + resource.TestCheckResourceAttr( + "aws_redshift_parameter_group.bar", "parameter.2036118857.name", "query_group"), + resource.TestCheckResourceAttr( + "aws_redshift_parameter_group.bar", "parameter.2036118857.value", "example"), + resource.TestCheckResourceAttr( + "aws_redshift_parameter_group.bar", "parameter.484080973.name", "enable_user_activity_logging"), + resource.TestCheckResourceAttr( + "aws_redshift_parameter_group.bar", "parameter.484080973.value", "true"), + ), + }, + }, + }) +} + +func TestAccAWSRedshiftParameterGroup_withoutParameters(t *testing.T) { + var v redshift.ClusterParameterGroup + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSRedshiftParameterGroupDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAWSRedshiftParameterGroupOnlyConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSRedshiftParameterGroupExists("aws_redshift_parameter_group.bar", &v), + resource.TestCheckResourceAttr( + "aws_redshift_parameter_group.bar", "name", "parameter-group-test-terraform"), + resource.TestCheckResourceAttr( + "aws_redshift_parameter_group.bar", "family", "redshift-1.0"), + resource.TestCheckResourceAttr( + "aws_redshift_parameter_group.bar", "description", "Test parameter group for terraform"), + ), + }, + }, + }) +} + +func TestResourceAWSRedshiftParameterGroupNameValidation(t *testing.T) { + cases := []struct { + Value string + ErrCount int + }{ + { + Value: "tEsting123", + ErrCount: 1, + }, + { + Value: "testing123!", + ErrCount: 1, + }, + { + Value: "1testing123", + ErrCount: 1, + }, + { + Value: "testing--123", + ErrCount: 1, + }, + { + Value: "testing123-", + ErrCount: 1, + }, + { + Value: randomString(256), + ErrCount: 1, + }, + } + + for _, tc := range cases { + _, errors := validateRedshiftParamGroupName(tc.Value, "aws_redshift_parameter_group_name") + + if len(errors) != tc.ErrCount { + t.Fatalf("Expected the Redshift Parameter Group Name to trigger a validation error") + } + } +} + +func testAccCheckAWSRedshiftParameterGroupDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).redshiftconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_redshift_parameter_group" { + continue + } + + // Try to find the Group + resp, err := conn.DescribeClusterParameterGroups( + &redshift.DescribeClusterParameterGroupsInput{ + ParameterGroupName: aws.String(rs.Primary.ID), + }) + + if err == nil { + if len(resp.ParameterGroups) != 0 && + *resp.ParameterGroups[0].ParameterGroupName == rs.Primary.ID { + return fmt.Errorf("Redshift Parameter Group still exists") + } + } + + // Verify the error + newerr, ok := err.(awserr.Error) + if !ok { + return err + } + if newerr.Code() != "ClusterParameterGroupNotFound" { + return err + } + } + + return nil +} + +func testAccCheckAWSRedshiftParameterGroupExists(n string, v *redshift.ClusterParameterGroup) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No Redshift Parameter Group ID is set") + } + + conn := testAccProvider.Meta().(*AWSClient).redshiftconn + + opts := redshift.DescribeClusterParameterGroupsInput{ + ParameterGroupName: aws.String(rs.Primary.ID), + } + + resp, err := conn.DescribeClusterParameterGroups(&opts) + + if err != 
nil { + return err + } + + if len(resp.ParameterGroups) != 1 || + *resp.ParameterGroups[0].ParameterGroupName != rs.Primary.ID { + return fmt.Errorf("Redshift Parameter Group not found") + } + + *v = *resp.ParameterGroups[0] + + return nil + } +} + +const testAccAWSRedshiftParameterGroupOnlyConfig = ` +resource "aws_redshift_parameter_group" "bar" { + name = "parameter-group-test-terraform" + family = "redshift-1.0" + description = "Test parameter group for terraform" +}` + +const testAccAWSRedshiftParameterGroupConfig = ` +resource "aws_redshift_parameter_group" "bar" { + name = "parameter-group-test-terraform" + family = "redshift-1.0" + description = "Test parameter group for terraform" + parameter { + name = "require_ssl" + value = "true" + } + parameter { + name = "query_group" + value = "example" + } + parameter{ + name = "enable_user_activity_logging" + value = "true" + } +} +` diff --git a/builtin/providers/aws/resource_aws_redshift_security_group.go b/builtin/providers/aws/resource_aws_redshift_security_group.go new file mode 100644 index 0000000000..8393e647b0 --- /dev/null +++ b/builtin/providers/aws/resource_aws_redshift_security_group.go @@ -0,0 +1,291 @@ +package aws + +import ( + "bytes" + "fmt" + "log" + "regexp" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/service/redshift" + "github.com/hashicorp/go-multierror" + "github.com/hashicorp/terraform/helper/hashcode" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsRedshiftSecurityGroup() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsRedshiftSecurityGroupCreate, + Read: resourceAwsRedshiftSecurityGroupRead, + Delete: resourceAwsRedshiftSecurityGroupDelete, + + Schema: map[string]*schema.Schema{ + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validateRedshiftSecurityGroupName, + }, + + "description": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "ingress": &schema.Schema{ + Type: schema.TypeSet, + Required: true, + ForceNew: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "cidr": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + + "security_group_name": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + + "security_group_owner_id": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + }, + }, + Set: resourceAwsRedshiftSecurityGroupIngressHash, + }, + }, + } +} + +func resourceAwsRedshiftSecurityGroupCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).redshiftconn + + var err error + var errs []error + + name := d.Get("name").(string) + desc := d.Get("description").(string) + sgInput := &redshift.CreateClusterSecurityGroupInput{ + ClusterSecurityGroupName: aws.String(name), + Description: aws.String(desc), + } + log.Printf("[DEBUG] Redshift security group create: name: %s, description: %s", name, desc) + _, err = conn.CreateClusterSecurityGroup(sgInput) + if err != nil { + return fmt.Errorf("Error creating RedshiftSecurityGroup: %s", err) + } + + d.SetId(d.Get("name").(string)) + + log.Printf("[INFO] Redshift Security Group ID: %s", d.Id()) + sg, err := resourceAwsRedshiftSecurityGroupRetrieve(d, meta) + if err != nil { + return err + } + + ingresses := d.Get("ingress").(*schema.Set) + for _, ing := range ingresses.List() { + err := 
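+ // Authorize each configured ingress rule in turn; individual
+ // failures are collected and returned together as a multierror.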
resourceAwsRedshiftSecurityGroupAuthorizeRule(ing, *sg.ClusterSecurityGroupName, conn) + if err != nil { + errs = append(errs, err) + } + } + + if len(errs) > 0 { + return &multierror.Error{Errors: errs} + } + + log.Println("[INFO] Waiting for Redshift Security Group Ingress Authorizations to be authorized") + stateConf := &resource.StateChangeConf{ + Pending: []string{"authorizing"}, + Target: []string{"authorized"}, + Refresh: resourceAwsRedshiftSecurityGroupStateRefreshFunc(d, meta), + Timeout: 10 * time.Minute, + } + + _, err = stateConf.WaitForState() + if err != nil { + return err + } + + return resourceAwsRedshiftSecurityGroupRead(d, meta) +} + +func resourceAwsRedshiftSecurityGroupRead(d *schema.ResourceData, meta interface{}) error { + sg, err := resourceAwsRedshiftSecurityGroupRetrieve(d, meta) + if err != nil { + return err + } + + rules := &schema.Set{ + F: resourceAwsRedshiftSecurityGroupIngressHash, + } + + for _, v := range sg.IPRanges { + rule := map[string]interface{}{"cidr": *v.CIDRIP} + rules.Add(rule) + } + + for _, g := range sg.EC2SecurityGroups { + rule := map[string]interface{}{ + "security_group_name": *g.EC2SecurityGroupName, + "security_group_owner_id": *g.EC2SecurityGroupOwnerId, + } + rules.Add(rule) + } + + d.Set("ingress", rules) + d.Set("name", *sg.ClusterSecurityGroupName) + d.Set("description", *sg.Description) + + return nil +} + +func resourceAwsRedshiftSecurityGroupDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).redshiftconn + + log.Printf("[DEBUG] Redshift Security Group destroy: %v", d.Id()) + opts := redshift.DeleteClusterSecurityGroupInput{ + ClusterSecurityGroupName: aws.String(d.Id()), + } + + log.Printf("[DEBUG] Redshift Security Group destroy configuration: %v", opts) + _, err := conn.DeleteClusterSecurityGroup(&opts) + + if err != nil { + newerr, ok := err.(awserr.Error) + if ok && newerr.Code() == "InvalidRedshiftSecurityGroup.NotFound" { + return nil + } + return err + } + + return nil +} + +func resourceAwsRedshiftSecurityGroupRetrieve(d *schema.ResourceData, meta interface{}) (*redshift.ClusterSecurityGroup, error) { + conn := meta.(*AWSClient).redshiftconn + + opts := redshift.DescribeClusterSecurityGroupsInput{ + ClusterSecurityGroupName: aws.String(d.Id()), + } + + log.Printf("[DEBUG] Redshift Security Group describe configuration: %#v", opts) + + resp, err := conn.DescribeClusterSecurityGroups(&opts) + + if err != nil { + return nil, fmt.Errorf("Error retrieving Redshift Security Groups: %s", err) + } + + if len(resp.ClusterSecurityGroups) != 1 || + *resp.ClusterSecurityGroups[0].ClusterSecurityGroupName != d.Id() { + return nil, fmt.Errorf("Unable to find Redshift Security Group: %#v", resp.ClusterSecurityGroups) + } + + return resp.ClusterSecurityGroups[0], nil +} + +func validateRedshiftSecurityGroupName(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + if value == "default" { + errors = append(errors, fmt.Errorf("the Redshift Security Group name cannot be %q", value)) + } + if !regexp.MustCompile(`^[0-9a-z-]+$`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "only lowercase alphanumeric characters and hyphens allowed in %q: %q", + k, value)) + } + if len(value) > 255 { + errors = append(errors, fmt.Errorf( + "%q cannot be longer than 255 characters: %q", k, value)) + } + return +} + +func resourceAwsRedshiftSecurityGroupIngressHash(v interface{}) int { + var buf bytes.Buffer + m := v.(map[string]interface{}) + + if v, ok := m["cidr"]; ok { + 
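+ // Fold each ingress attribute into the buffer in a fixed order;
+ // hashcode.String below turns the result into the stable set index
+ // seen in state keys such as "ingress.2735652665.cidr".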
buf.WriteString(fmt.Sprintf("%s-", v.(string))) + } + + if v, ok := m["security_group_name"]; ok { + buf.WriteString(fmt.Sprintf("%s-", v.(string))) + } + + if v, ok := m["security_group_owner_id"]; ok { + buf.WriteString(fmt.Sprintf("%s-", v.(string))) + } + + return hashcode.String(buf.String()) +} + +func resourceAwsRedshiftSecurityGroupAuthorizeRule(ingress interface{}, redshiftSecurityGroupName string, conn *redshift.Redshift) error { + ing := ingress.(map[string]interface{}) + + opts := redshift.AuthorizeClusterSecurityGroupIngressInput{ + ClusterSecurityGroupName: aws.String(redshiftSecurityGroupName), + } + + if attr, ok := ing["cidr"]; ok && attr != "" { + opts.CIDRIP = aws.String(attr.(string)) + } + + if attr, ok := ing["security_group_name"]; ok && attr != "" { + opts.EC2SecurityGroupName = aws.String(attr.(string)) + } + + if attr, ok := ing["security_group_owner_id"]; ok && attr != "" { + opts.EC2SecurityGroupOwnerId = aws.String(attr.(string)) + } + + log.Printf("[DEBUG] Authorize ingress rule configuration: %#v", opts) + _, err := conn.AuthorizeClusterSecurityGroupIngress(&opts) + + if err != nil { + return fmt.Errorf("Error authorizing security group ingress: %s", err) + } + + return nil +} + +func resourceAwsRedshiftSecurityGroupStateRefreshFunc( + d *schema.ResourceData, meta interface{}) resource.StateRefreshFunc { + return func() (interface{}, string, error) { + v, err := resourceAwsRedshiftSecurityGroupRetrieve(d, meta) + + if err != nil { + log.Printf("Error on retrieving Redshift Security Group when waiting: %s", err) + return nil, "", err + } + + statuses := make([]string, 0, len(v.EC2SecurityGroups)+len(v.IPRanges)) + for _, ec2g := range v.EC2SecurityGroups { + statuses = append(statuses, *ec2g.Status) + } + for _, ips := range v.IPRanges { + statuses = append(statuses, *ips.Status) + } + + for _, stat := range statuses { + // Not done + if stat != "authorized" { + return nil, "authorizing", nil + } + } + + return v, "authorized", nil + } +} diff --git a/builtin/providers/aws/resource_aws_redshift_security_group_test.go b/builtin/providers/aws/resource_aws_redshift_security_group_test.go new file mode 100644 index 0000000000..4fc3bdbe51 --- /dev/null +++ b/builtin/providers/aws/resource_aws_redshift_security_group_test.go @@ -0,0 +1,205 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/service/redshift" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSRedshiftSecurityGroup_ingressCidr(t *testing.T) { + var v redshift.ClusterSecurityGroup + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSRedshiftSecurityGroupDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAWSRedshiftSecurityGroupConfig_ingressCidr, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSRedshiftSecurityGroupExists("aws_redshift_security_group.bar", &v), + resource.TestCheckResourceAttr( + "aws_redshift_security_group.bar", "name", "redshift-sg-terraform"), + resource.TestCheckResourceAttr( + "aws_redshift_security_group.bar", "description", "this is a description"), + resource.TestCheckResourceAttr( + "aws_redshift_security_group.bar", "ingress.2735652665.cidr", "10.0.0.1/24"), + resource.TestCheckResourceAttr( + "aws_redshift_security_group.bar", "ingress.#", "1"), + ), + }, + }, + }) +} + +func 
TestAccAWSRedshiftSecurityGroup_ingressSecurityGroup(t *testing.T) { + var v redshift.ClusterSecurityGroup + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSRedshiftSecurityGroupDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAWSRedshiftSecurityGroupConfig_ingressSgId, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSRedshiftSecurityGroupExists("aws_redshift_security_group.bar", &v), + resource.TestCheckResourceAttr( + "aws_redshift_security_group.bar", "name", "redshift-sg-terraform"), + resource.TestCheckResourceAttr( + "aws_redshift_security_group.bar", "description", "this is a description"), + resource.TestCheckResourceAttr( + "aws_redshift_security_group.bar", "ingress.#", "1"), + resource.TestCheckResourceAttr( + "aws_redshift_security_group.bar", "ingress.220863.security_group_name", "terraform_redshift_acceptance_test"), + ), + }, + }, + }) +} + +func testAccCheckAWSRedshiftSecurityGroupExists(n string, v *redshift.ClusterSecurityGroup) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No Redshift Security Group ID is set") + } + + conn := testAccProvider.Meta().(*AWSClient).redshiftconn + + opts := redshift.DescribeClusterSecurityGroupsInput{ + ClusterSecurityGroupName: aws.String(rs.Primary.ID), + } + + resp, err := conn.DescribeClusterSecurityGroups(&opts) + + if err != nil { + return err + } + + if len(resp.ClusterSecurityGroups) != 1 || + *resp.ClusterSecurityGroups[0].ClusterSecurityGroupName != rs.Primary.ID { + return fmt.Errorf("Redshift Security Group not found") + } + + *v = *resp.ClusterSecurityGroups[0] + + return nil + } +} + +func testAccCheckAWSRedshiftSecurityGroupDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).redshiftconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_redshift_security_group" { + continue + } + + // Try to find the Group + resp, err := conn.DescribeClusterSecurityGroups( + &redshift.DescribeClusterSecurityGroupsInput{ + ClusterSecurityGroupName: aws.String(rs.Primary.ID), + }) + + if err == nil { + if len(resp.ClusterSecurityGroups) != 0 && + *resp.ClusterSecurityGroups[0].ClusterSecurityGroupName == rs.Primary.ID { + return fmt.Errorf("Redshift Security Group still exists") + } + } + + // Verify the error + newerr, ok := err.(awserr.Error) + if !ok { + return err + } + if newerr.Code() != "ClusterSecurityGroupNotFound" { + return err + } + } + + return nil +} + +func TestResourceAWSRedshiftSecurityGroupNameValidation(t *testing.T) { + cases := []struct { + Value string + ErrCount int + }{ + { + Value: "default", + ErrCount: 1, + }, + { + Value: "testing123%%", + ErrCount: 1, + }, + { + Value: "TestingSG", + ErrCount: 1, + }, + { + Value: randomString(256), + ErrCount: 1, + }, + } + + for _, tc := range cases { + _, errors := validateRedshiftSecurityGroupName(tc.Value, "aws_redshift_security_group_name") + + if len(errors) != tc.ErrCount { + t.Fatalf("Expected the Redshift Security Group Name to trigger a validation error") + } + } +} + +const testAccAWSRedshiftSecurityGroupConfig_ingressCidr = ` +provider "aws" { + region = "us-east-1" +} + +resource "aws_redshift_security_group" "bar" { + name = "redshift-sg-terraform" + description = "this is a description" + + ingress { + cidr = "10.0.0.1/24" + } 
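+  # The "ingress.2735652665.cidr" key asserted above is the set index
+  # computed from this ingress block by the ingress hash function.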
+}` + +const testAccAWSRedshiftSecurityGroupConfig_ingressSgId = ` +provider "aws" { + region = "us-east-1" +} + +resource "aws_security_group" "redshift" { + name = "terraform_redshift_acceptance_test" + description = "Used in the redshift acceptance tests" + + ingress { + protocol = "tcp" + from_port = 22 + to_port = 22 + cidr_blocks = ["10.0.0.0/8"] + } +} + +resource "aws_redshift_security_group" "bar" { + name = "redshift-sg-terraform" + description = "this is a description" + + ingress { + security_group_name = "${aws_security_group.redshift.name}" + security_group_owner_id = "${aws_security_group.redshift.owner_id}" + } +}` diff --git a/builtin/providers/aws/resource_aws_redshift_subnet_group.go b/builtin/providers/aws/resource_aws_redshift_subnet_group.go new file mode 100644 index 0000000000..4e02e658ea --- /dev/null +++ b/builtin/providers/aws/resource_aws_redshift_subnet_group.go @@ -0,0 +1,186 @@ +package aws + +import ( + "fmt" + "log" + "regexp" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/service/redshift" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsRedshiftSubnetGroup() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsRedshiftSubnetGroupCreate, + Read: resourceAwsRedshiftSubnetGroupRead, + Update: resourceAwsRedshiftSubnetGroupUpdate, + Delete: resourceAwsRedshiftSubnetGroupDelete, + + Schema: map[string]*schema.Schema{ + "name": &schema.Schema{ + Type: schema.TypeString, + ForceNew: true, + Required: true, + ValidateFunc: validateRedshiftSubnetGroupName, + }, + + "description": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "subnet_ids": &schema.Schema{ + Type: schema.TypeSet, + Required: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + }, + }, + } +} + +func resourceAwsRedshiftSubnetGroupCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).redshiftconn + + subnetIdsSet := d.Get("subnet_ids").(*schema.Set) + subnetIds := make([]*string, subnetIdsSet.Len()) + for i, subnetId := range subnetIdsSet.List() { + subnetIds[i] = aws.String(subnetId.(string)) + } + + createOpts := redshift.CreateClusterSubnetGroupInput{ + ClusterSubnetGroupName: aws.String(d.Get("name").(string)), + Description: aws.String(d.Get("description").(string)), + SubnetIds: subnetIds, + } + + log.Printf("[DEBUG] Create Redshift Subnet Group: %#v", createOpts) + _, err := conn.CreateClusterSubnetGroup(&createOpts) + if err != nil { + return fmt.Errorf("Error creating Redshift Subnet Group: %s", err) + } + + d.SetId(*createOpts.ClusterSubnetGroupName) + log.Printf("[INFO] Redshift Subnet Group ID: %s", d.Id()) + return resourceAwsRedshiftSubnetGroupRead(d, meta) +} + +func resourceAwsRedshiftSubnetGroupRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).redshiftconn + + describeOpts := redshift.DescribeClusterSubnetGroupsInput{ + ClusterSubnetGroupName: aws.String(d.Id()), + } + + describeResp, err := conn.DescribeClusterSubnetGroups(&describeOpts) + if err != nil { + if ec2err, ok := err.(awserr.Error); ok && ec2err.Code() == "ClusterSubnetGroupNotFoundFault" { + log.Printf("[INFO] Redshift Subnet Group: %s was not found", d.Id()) + d.SetId("") + return nil + } + return err + } + + if len(describeResp.ClusterSubnetGroups) == 0 { + return fmt.Errorf("Unable to find Redshift Subnet Group: %#v", describeResp.ClusterSubnetGroups) 
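+ // (Only an explicit ClusterSubnetGroupNotFoundFault above clears
+ // state; an empty result set is reported as an error instead.)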
+ } + + d.Set("name", d.Id()) + d.Set("description", describeResp.ClusterSubnetGroups[0].Description) + d.Set("subnet_ids", subnetIdsToSlice(describeResp.ClusterSubnetGroups[0].Subnets)) + + return nil +} + +func resourceAwsRedshiftSubnetGroupUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).redshiftconn + if d.HasChange("subnet_ids") { + _, n := d.GetChange("subnet_ids") + if n == nil { + n = new(schema.Set) + } + ns := n.(*schema.Set) + + var sIds []*string + for _, s := range ns.List() { + sIds = append(sIds, aws.String(s.(string))) + } + + _, err := conn.ModifyClusterSubnetGroup(&redshift.ModifyClusterSubnetGroupInput{ + ClusterSubnetGroupName: aws.String(d.Id()), + SubnetIds: sIds, + }) + + if err != nil { + return err + } + } + + return nil +} + +func resourceAwsRedshiftSubnetGroupDelete(d *schema.ResourceData, meta interface{}) error { + stateConf := &resource.StateChangeConf{ + Pending: []string{"pending"}, + Target: []string{"destroyed"}, + Refresh: resourceAwsRedshiftSubnetGroupDeleteRefreshFunc(d, meta), + Timeout: 3 * time.Minute, + MinTimeout: 1 * time.Second, + } + _, err := stateConf.WaitForState() + return err +} + +func resourceAwsRedshiftSubnetGroupDeleteRefreshFunc(d *schema.ResourceData, meta interface{}) resource.StateRefreshFunc { + conn := meta.(*AWSClient).redshiftconn + + return func() (interface{}, string, error) { + + deleteOpts := redshift.DeleteClusterSubnetGroupInput{ + ClusterSubnetGroupName: aws.String(d.Id()), + } + + if _, err := conn.DeleteClusterSubnetGroup(&deleteOpts); err != nil { + redshiftErr, ok := err.(awserr.Error) + if !ok { + return d, "error", err + } + + if redshiftErr.Code() != "ClusterSubnetGroupNotFoundFault" { + return d, "error", err + } + } + + return d, "destroyed", nil + } +} + +func subnetIdsToSlice(subnetIds []*redshift.Subnet) []string { + subnetsSlice := make([]string, 0, len(subnetIds)) + for _, s := range subnetIds { + subnetsSlice = append(subnetsSlice, *s.SubnetIdentifier) + } + return subnetsSlice +} + +func validateRedshiftSubnetGroupName(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + if !regexp.MustCompile(`^[0-9a-z-_]+$`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "only lowercase alphanumeric characters, hyphens, and underscores allowed in %q", k)) + } + if len(value) > 255 { + errors = append(errors, fmt.Errorf( + "%q cannot be longer than 255 characters", k)) + } + if regexp.MustCompile(`(?i)^default$`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "%q is not allowed as %q", "Default", k)) + } + return +} diff --git a/builtin/providers/aws/resource_aws_redshift_subnet_group_test.go b/builtin/providers/aws/resource_aws_redshift_subnet_group_test.go new file mode 100644 index 0000000000..296ee569af --- /dev/null +++ b/builtin/providers/aws/resource_aws_redshift_subnet_group_test.go @@ -0,0 +1,220 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/service/redshift" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSRedshiftSubnetGroup_basic(t *testing.T) { + var v redshift.ClusterSubnetGroup + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckRedshiftSubnetGroupDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: 
testAccRedshiftSubnetGroupConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckRedshiftSubnetGroupExists("aws_redshift_subnet_group.foo", &v), + resource.TestCheckResourceAttr( + "aws_redshift_subnet_group.foo", "subnet_ids.#", "2"), + ), + }, + }, + }) +} + +func TestAccAWSRedshiftSubnetGroup_updateSubnetIds(t *testing.T) { + var v redshift.ClusterSubnetGroup + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckRedshiftSubnetGroupDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccRedshiftSubnetGroupConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckRedshiftSubnetGroupExists("aws_redshift_subnet_group.foo", &v), + resource.TestCheckResourceAttr( + "aws_redshift_subnet_group.foo", "subnet_ids.#", "2"), + ), + }, + + resource.TestStep{ + Config: testAccRedshiftSubnetGroupConfig_updateSubnetIds, + Check: resource.ComposeTestCheckFunc( + testAccCheckRedshiftSubnetGroupExists("aws_redshift_subnet_group.foo", &v), + resource.TestCheckResourceAttr( + "aws_redshift_subnet_group.foo", "subnet_ids.#", "3"), + ), + }, + }, + }) +} + +func TestResourceAWSRedshiftSubnetGroupNameValidation(t *testing.T) { + cases := []struct { + Value string + ErrCount int + }{ + { + Value: "default", + ErrCount: 1, + }, + { + Value: "testing123%%", + ErrCount: 1, + }, + { + Value: "TestingSG", + ErrCount: 1, + }, + { + Value: randomString(256), + ErrCount: 1, + }, + } + + for _, tc := range cases { + _, errors := validateRedshiftSubnetGroupName(tc.Value, "aws_redshift_subnet_group_name") + + if len(errors) != tc.ErrCount { + t.Fatalf("Expected the Redshift Subnet Group Name to trigger a validation error") + } + } +} + +func testAccCheckRedshiftSubnetGroupDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).redshiftconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_redshift_subnet_group" { + continue + } + + resp, err := conn.DescribeClusterSubnetGroups( + &redshift.DescribeClusterSubnetGroupsInput{ + ClusterSubnetGroupName: aws.String(rs.Primary.ID)}) + if err == nil { + if len(resp.ClusterSubnetGroups) > 0 { + return fmt.Errorf("Redshift Subnet Group still exists") + } + + return nil + } + + redshiftErr, ok := err.(awserr.Error) + if !ok { + return err + } + if redshiftErr.Code() != "ClusterSubnetGroupNotFoundFault" { + return err + } + } + + return nil +} + +func testAccCheckRedshiftSubnetGroupExists(n string, v *redshift.ClusterSubnetGroup) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + conn := testAccProvider.Meta().(*AWSClient).redshiftconn + resp, err := conn.DescribeClusterSubnetGroups( + &redshift.DescribeClusterSubnetGroupsInput{ClusterSubnetGroupName: aws.String(rs.Primary.ID)}) + if err != nil { + return err + } + if len(resp.ClusterSubnetGroups) == 0 { + return fmt.Errorf("ClusterSubnetGroup not found") + } + + *v = *resp.ClusterSubnetGroups[0] + + return nil + } +} + +const testAccRedshiftSubnetGroupConfig = ` +resource "aws_vpc" "foo" { + cidr_block = "10.1.0.0/16" +} + +resource "aws_subnet" "foo" { + cidr_block = "10.1.1.0/24" + availability_zone = "us-west-2a" + vpc_id = "${aws_vpc.foo.id}" + tags { + Name = "tf-dbsubnet-test-1" + } +} + +resource "aws_subnet" "bar" { + cidr_block = "10.1.2.0/24" + availability_zone = "us-west-2b" + vpc_id = 
"${aws_vpc.foo.id}" + tags { + Name = "tf-dbsubnet-test-2" + } +} + +resource "aws_redshift_subnet_group" "foo" { + name = "foo" + description = "foo description" + subnet_ids = ["${aws_subnet.foo.id}", "${aws_subnet.bar.id}"] +} +` + +const testAccRedshiftSubnetGroupConfig_updateSubnetIds = ` +resource "aws_vpc" "foo" { + cidr_block = "10.1.0.0/16" +} + +resource "aws_subnet" "foo" { + cidr_block = "10.1.1.0/24" + availability_zone = "us-west-2a" + vpc_id = "${aws_vpc.foo.id}" + tags { + Name = "tf-dbsubnet-test-1" + } +} + +resource "aws_subnet" "bar" { + cidr_block = "10.1.2.0/24" + availability_zone = "us-west-2b" + vpc_id = "${aws_vpc.foo.id}" + tags { + Name = "tf-dbsubnet-test-2" + } +} + +resource "aws_subnet" "foobar" { + cidr_block = "10.1.3.0/24" + availability_zone = "us-west-2c" + vpc_id = "${aws_vpc.foo.id}" + tags { + Name = "tf-dbsubnet-test-3" + } +} + +resource "aws_redshift_subnet_group" "foo" { + name = "foo" + description = "foo description" + subnet_ids = ["${aws_subnet.foo.id}", "${aws_subnet.bar.id}", "${aws_subnet.foobar.id}"] +} +` diff --git a/builtin/providers/aws/resource_aws_route.go b/builtin/providers/aws/resource_aws_route.go index 60c666ecde..6832f87033 100644 --- a/builtin/providers/aws/resource_aws_route.go +++ b/builtin/providers/aws/resource_aws_route.go @@ -1,6 +1,7 @@ package aws import ( + "errors" "fmt" "log" @@ -10,6 +11,11 @@ import ( "github.com/hashicorp/terraform/helper/schema" ) +// How long to sleep if a limit-exceeded event happens +var routeTargetValidationError = errors.New("Error: more than 1 target specified. Only 1 of gateway_id" + + "nat_gateway_id, instance_id, network_interface_id, route_table_id or" + + "vpc_peering_connection_id is allowed.") + // AWS Route resource Schema declaration func resourceAwsRoute() *schema.Resource { return &schema.Resource{ @@ -36,6 +42,11 @@ func resourceAwsRoute() *schema.Resource { Optional: true, }, + "nat_gateway_id": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + "instance_id": &schema.Schema{ Type: schema.TypeString, Optional: true, @@ -80,6 +91,7 @@ func resourceAwsRouteCreate(d *schema.ResourceData, meta interface{}) error { var setTarget string allowedTargets := []string{ "gateway_id", + "nat_gateway_id", "instance_id", "network_interface_id", "vpc_peering_connection_id", @@ -94,9 +106,7 @@ func resourceAwsRouteCreate(d *schema.ResourceData, meta interface{}) error { } if numTargets > 1 { - fmt.Errorf("Error: more than 1 target specified. 
Only 1 of gateway_id" + - "instance_id, network_interface_id, route_table_id or" + - "vpc_peering_connection_id is allowed.") + return routeTargetValidationError } createOpts := &ec2.CreateRouteInput{} @@ -108,6 +118,12 @@ func resourceAwsRouteCreate(d *schema.ResourceData, meta interface{}) error { DestinationCidrBlock: aws.String(d.Get("destination_cidr_block").(string)), GatewayId: aws.String(d.Get("gateway_id").(string)), } + case "nat_gateway_id": + createOpts = &ec2.CreateRouteInput{ + RouteTableId: aws.String(d.Get("route_table_id").(string)), + DestinationCidrBlock: aws.String(d.Get("destination_cidr_block").(string)), + NatGatewayId: aws.String(d.Get("nat_gateway_id").(string)), + } case "instance_id": createOpts = &ec2.CreateRouteInput{ RouteTableId: aws.String(d.Get("route_table_id").(string)), @@ -127,7 +143,7 @@ func resourceAwsRouteCreate(d *schema.ResourceData, meta interface{}) error { VpcPeeringConnectionId: aws.String(d.Get("vpc_peering_connection_id").(string)), } default: - fmt.Errorf("Error: invalid target type specified.") + return fmt.Errorf("Error: invalid target type specified.") } log.Printf("[DEBUG] Route create config: %s", createOpts) @@ -139,7 +155,7 @@ func resourceAwsRouteCreate(d *schema.ResourceData, meta interface{}) error { route, err := findResourceRoute(conn, d.Get("route_table_id").(string), d.Get("destination_cidr_block").(string)) if err != nil { - fmt.Errorf("Error: %s", err) + return err } d.SetId(routeIDHash(d, route)) @@ -156,6 +172,7 @@ func resourceAwsRouteRead(d *schema.ResourceData, meta interface{}) error { d.Set("destination_prefix_list_id", route.DestinationPrefixListId) d.Set("gateway_id", route.GatewayId) + d.Set("nat_gateway_id", route.NatGatewayId) d.Set("instance_id", route.InstanceId) d.Set("instance_owner_id", route.InstanceOwnerId) d.Set("network_interface_id", route.NetworkInterfaceId) @@ -172,6 +189,7 @@ func resourceAwsRouteUpdate(d *schema.ResourceData, meta interface{}) error { var setTarget string allowedTargets := []string{ "gateway_id", + "nat_gateway_id", "instance_id", "network_interface_id", "vpc_peering_connection_id", @@ -187,9 +205,7 @@ func resourceAwsRouteUpdate(d *schema.ResourceData, meta interface{}) error { } if numTargets > 1 { - fmt.Errorf("Error: more than 1 target specified. 
Only 1 of gateway_id" + - "instance_id, network_interface_id, route_table_id or" + - "vpc_peering_connection_id is allowed.") + return routeTargetValidationError } // Formulate ReplaceRouteInput based on the target type @@ -200,6 +216,12 @@ func resourceAwsRouteUpdate(d *schema.ResourceData, meta interface{}) error { DestinationCidrBlock: aws.String(d.Get("destination_cidr_block").(string)), GatewayId: aws.String(d.Get("gateway_id").(string)), } + case "nat_gateway_id": + replaceOpts = &ec2.ReplaceRouteInput{ + RouteTableId: aws.String(d.Get("route_table_id").(string)), + DestinationCidrBlock: aws.String(d.Get("destination_cidr_block").(string)), + NatGatewayId: aws.String(d.Get("nat_gateway_id").(string)), + } case "instance_id": replaceOpts = &ec2.ReplaceRouteInput{ RouteTableId: aws.String(d.Get("route_table_id").(string)), @@ -221,7 +243,7 @@ func resourceAwsRouteUpdate(d *schema.ResourceData, meta interface{}) error { VpcPeeringConnectionId: aws.String(d.Get("vpc_peering_connection_id").(string)), } default: - fmt.Errorf("Error: invalid target type specified.") + return fmt.Errorf("Error: invalid target type specified.") } log.Printf("[DEBUG] Route replace config: %s", replaceOpts) diff --git a/builtin/providers/aws/resource_aws_route53_health_check.go b/builtin/providers/aws/resource_aws_route53_health_check.go index 1850401d91..4034996a9a 100644 --- a/builtin/providers/aws/resource_aws_route53_health_check.go +++ b/builtin/providers/aws/resource_aws_route53_health_check.go @@ -1,7 +1,9 @@ package aws import ( + "fmt" "log" + "strings" "time" "github.com/hashicorp/terraform/helper/schema" @@ -23,14 +25,17 @@ func resourceAwsRoute53HealthCheck() *schema.Resource { Type: schema.TypeString, Required: true, ForceNew: true, + StateFunc: func(val interface{}) string { + return strings.ToUpper(val.(string)) + }, }, "failure_threshold": &schema.Schema{ Type: schema.TypeInt, - Required: true, + Optional: true, }, "request_interval": &schema.Schema{ Type: schema.TypeInt, - Required: true, + Optional: true, ForceNew: true, // todo this should be updateable but the awslabs route53 service doesnt have the ability }, "ip_address": &schema.Schema{ @@ -47,14 +52,46 @@ func resourceAwsRoute53HealthCheck() *schema.Resource { Optional: true, }, + "invert_healthcheck": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + }, + "resource_path": &schema.Schema{ Type: schema.TypeString, Optional: true, }, + "search_string": &schema.Schema{ Type: schema.TypeString, Optional: true, }, + "measure_latency": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Default: false, + ForceNew: true, + }, + + "child_healthchecks": &schema.Schema{ + Type: schema.TypeSet, + Elem: &schema.Schema{Type: schema.TypeString}, + Optional: true, + Set: schema.HashString, + }, + "child_health_threshold": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + ValidateFunc: func(v interface{}, k string) (ws []string, es []error) { + value := v.(int) + if value > 256 { + es = append(es, fmt.Errorf( + "Child HealthThreshold cannot be more than 256")) + } + return + }, + }, + "tags": tagsSchema(), }, } @@ -83,8 +120,16 @@ func resourceAwsRoute53HealthCheckUpdate(d *schema.ResourceData, meta interface{ updateHealthCheck.ResourcePath = aws.String(d.Get("resource_path").(string)) } - if d.HasChange("search_string") { - updateHealthCheck.SearchString = aws.String(d.Get("search_string").(string)) + if d.HasChange("invert_healthcheck") { + updateHealthCheck.Inverted = aws.Bool(d.Get("invert_healthcheck").(bool)) + } + + 
if d.HasChange("child_healthchecks") { + updateHealthCheck.ChildHealthChecks = expandStringList(d.Get("child_healthchecks").(*schema.Set).List()) + + } + if d.HasChange("child_health_threshold") { + updateHealthCheck.HealthThreshold = aws.Int64(int64(d.Get("child_health_threshold").(int))) } _, err := conn.UpdateHealthCheck(updateHealthCheck) @@ -103,9 +148,15 @@ func resourceAwsRoute53HealthCheckCreate(d *schema.ResourceData, meta interface{ conn := meta.(*AWSClient).r53conn healthConfig := &route53.HealthCheckConfig{ - Type: aws.String(d.Get("type").(string)), - FailureThreshold: aws.Int64(int64(d.Get("failure_threshold").(int))), - RequestInterval: aws.Int64(int64(d.Get("request_interval").(int))), + Type: aws.String(d.Get("type").(string)), + } + + if v, ok := d.GetOk("request_interval"); ok { + healthConfig.RequestInterval = aws.Int64(int64(v.(int))) + } + + if v, ok := d.GetOk("failure_threshold"); ok { + healthConfig.FailureThreshold = aws.Int64(int64(v.(int))) } if v, ok := d.GetOk("fqdn"); ok { @@ -128,6 +179,26 @@ func resourceAwsRoute53HealthCheckCreate(d *schema.ResourceData, meta interface{ healthConfig.ResourcePath = aws.String(v.(string)) } + if *healthConfig.Type != route53.HealthCheckTypeCalculated { + if v, ok := d.GetOk("measure_latency"); ok { + healthConfig.MeasureLatency = aws.Bool(v.(bool)) + } + } + + if v, ok := d.GetOk("invert_healthcheck"); ok { + healthConfig.Inverted = aws.Bool(v.(bool)) + } + + if *healthConfig.Type == route53.HealthCheckTypeCalculated { + if v, ok := d.GetOk("child_healthchecks"); ok { + healthConfig.ChildHealthChecks = expandStringList(v.(*schema.Set).List()) + } + + if v, ok := d.GetOk("child_health_threshold"); ok { + healthConfig.HealthThreshold = aws.Int64(int64(v.(int))) + } + } + input := &route53.CreateHealthCheckInput{ CallerReference: aws.String(time.Now().Format(time.RFC3339Nano)), HealthCheckConfig: healthConfig, @@ -174,6 +245,10 @@ func resourceAwsRoute53HealthCheckRead(d *schema.ResourceData, meta interface{}) d.Set("ip_address", updated.IPAddress) d.Set("port", updated.Port) d.Set("resource_path", updated.ResourcePath) + d.Set("measure_latency", updated.MeasureLatency) + d.Set("invent_healthcheck", updated.Inverted) + d.Set("child_healthchecks", updated.ChildHealthChecks) + d.Set("child_health_threshold", updated.HealthThreshold) // read the tags req := &route53.ListTagsForResourceInput{ @@ -209,3 +284,12 @@ func resourceAwsRoute53HealthCheckDelete(d *schema.ResourceData, meta interface{ return nil } + +func createChildHealthCheckList(s *schema.Set) (nl []*string) { + l := s.List() + for _, n := range l { + nl = append(nl, aws.String(n.(string))) + } + + return nl +} diff --git a/builtin/providers/aws/resource_aws_route53_health_check_test.go b/builtin/providers/aws/resource_aws_route53_health_check_test.go index 9b14419637..3e27bc1023 100644 --- a/builtin/providers/aws/resource_aws_route53_health_check_test.go +++ b/builtin/providers/aws/resource_aws_route53_health_check_test.go @@ -20,6 +20,10 @@ func TestAccAWSRoute53HealthCheck_basic(t *testing.T) { Config: testAccRoute53HealthCheckConfig, Check: resource.ComposeTestCheckFunc( testAccCheckRoute53HealthCheckExists("aws_route53_health_check.foo"), + resource.TestCheckResourceAttr( + "aws_route53_health_check.foo", "measure_latency", "true"), + resource.TestCheckResourceAttr( + "aws_route53_health_check.foo", "invert_healthcheck", "true"), ), }, resource.TestStep{ @@ -28,6 +32,24 @@ func TestAccAWSRoute53HealthCheck_basic(t *testing.T) { 
testAccCheckRoute53HealthCheckExists("aws_route53_health_check.foo"), resource.TestCheckResourceAttr( "aws_route53_health_check.foo", "failure_threshold", "5"), + resource.TestCheckResourceAttr( + "aws_route53_health_check.foo", "invert_healthcheck", "false"), + ), + }, + }, + }) +} + +func TestAccAWSRoute53HealthCheck_withChildHealthChecks(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckRoute53HealthCheckDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccRoute53HealthCheckConfig_withChildHealthChecks, + Check: resource.ComposeTestCheckFunc( + testAccCheckRoute53HealthCheckExists("aws_route53_health_check.foo"), ), }, }, @@ -124,6 +146,8 @@ resource "aws_route53_health_check" "foo" { resource_path = "/" failure_threshold = "2" request_interval = "30" + measure_latency = true + invert_healthcheck = true tags = { Name = "tf-test-health-check" @@ -139,6 +163,8 @@ resource "aws_route53_health_check" "foo" { resource_path = "/" failure_threshold = "5" request_interval = "30" + measure_latency = true + invert_healthcheck = false tags = { Name = "tf-test-health-check" @@ -160,3 +186,24 @@ resource "aws_route53_health_check" "bar" { } } ` + +const testAccRoute53HealthCheckConfig_withChildHealthChecks = ` +resource "aws_route53_health_check" "child1" { + fqdn = "child1.notexample.com" + port = 80 + type = "HTTP" + resource_path = "/" + failure_threshold = "2" + request_interval = "30" +} + +resource "aws_route53_health_check" "foo" { + type = "CALCULATED" + child_health_threshold = 1 + child_healthchecks = ["${aws_route53_health_check.child1.id}"] + + tags = { + Name = "tf-test-calculated-health-check" + } +} +` diff --git a/builtin/providers/aws/resource_aws_route53_record.go b/builtin/providers/aws/resource_aws_route53_record.go index cf99b9b9b3..84a1e3ac49 100644 --- a/builtin/providers/aws/resource_aws_route53_record.go +++ b/builtin/providers/aws/resource_aws_route53_record.go @@ -64,9 +64,13 @@ func resourceAwsRoute53Record() *schema.Resource { ConflictsWith: []string{"alias"}, }, + // Weight uses a special sentinel value to indicate its presence. 
+ // Because 0 is a valid value for Weight, we default to -1 so that any + // inclusion of a weight (zero or not) will be a usable value "weight": &schema.Schema{ Type: schema.TypeInt, Optional: true, + Default: -1, }, "set_identifier": &schema.Schema{ @@ -171,12 +175,12 @@ func resourceAwsRoute53RecordCreate(d *schema.ResourceData, meta interface{}) er ChangeBatch: changeBatch, } - log.Printf("[DEBUG] Creating resource records for zone: %s, name: %s", - zone, *rec.Name) + log.Printf("[DEBUG] Creating resource records for zone: %s, name: %s\n\n%s", + zone, *rec.Name, req) wait := resource.StateChangeConf{ Pending: []string{"rejected"}, - Target: "accepted", + Target: []string{"accepted"}, Timeout: 5 * time.Minute, MinTimeout: 1 * time.Second, Refresh: func() (interface{}, string, error) { @@ -219,7 +223,7 @@ func resourceAwsRoute53RecordCreate(d *schema.ResourceData, meta interface{}) er wait = resource.StateChangeConf{ Delay: 30 * time.Second, Pending: []string{"PENDING"}, - Target: "INSYNC", + Target: []string{"INSYNC"}, Timeout: 30 * time.Minute, MinTimeout: 5 * time.Second, Refresh: func() (result interface{}, state string, err error) { @@ -245,6 +249,11 @@ func resourceAwsRoute53RecordRead(d *schema.ResourceData, meta interface{}) erro // get expanded name zoneRecord, err := conn.GetHostedZone(&route53.GetHostedZoneInput{Id: aws.String(zone)}) if err != nil { + if r53err, ok := err.(awserr.Error); ok && r53err.Code() == "NoSuchHostedZone" { + log.Printf("[DEBUG] No matching Route 53 Record found for: %s, removing from state file", d.Id()) + d.SetId("") + return nil + } return err } en := expandRecordName(d.Get("name").(string), *zoneRecord.HostedZone.Name) @@ -287,7 +296,12 @@ func resourceAwsRoute53RecordRead(d *schema.ResourceData, meta interface{}) erro } d.Set("ttl", record.TTL) - d.Set("weight", record.Weight) + // Only set the weight if it's non-nil, otherwise we end up with a 0 weight + // which has actual contextual meaning with Route 53 records + // See http://docs.aws.amazon.com/fr_fr/Route53/latest/APIReference/API_ChangeResourceRecordSets_Examples.html + if record.Weight != nil { + d.Set("weight", record.Weight) + } d.Set("set_identifier", record.SetIdentifier) d.Set("failover", record.Failover) d.Set("health_check_id", record.HealthCheckId) @@ -312,6 +326,11 @@ func resourceAwsRoute53RecordDelete(d *schema.ResourceData, meta interface{}) er var err error zoneRecord, err := conn.GetHostedZone(&route53.GetHostedZoneInput{Id: aws.String(zone)}) if err != nil { + if r53err, ok := err.(awserr.Error); ok && r53err.Code() == "NoSuchHostedZone" { + log.Printf("[DEBUG] No matching Route 53 Record found for: %s, removing from state file", d.Id()) + d.SetId("") + return nil + } return err } // Get the records @@ -338,7 +357,7 @@ func resourceAwsRoute53RecordDelete(d *schema.ResourceData, meta interface{}) er wait := resource.StateChangeConf{ Pending: []string{"rejected"}, - Target: "accepted", + Target: []string{"accepted"}, Timeout: 5 * time.Minute, MinTimeout: 1 * time.Second, Refresh: func() (interface{}, string, error) { @@ -429,8 +448,9 @@ func resourceAwsRoute53RecordBuildSet(d *schema.ResourceData, zoneName string) ( rec.SetIdentifier = aws.String(v.(string)) } - if v, ok := d.GetOk("weight"); ok { - rec.Weight = aws.Int64(int64(v.(int))) + w := d.Get("weight").(int) + if w > -1 { + rec.Weight = aws.Int64(int64(w)) } return rec, nil diff --git a/builtin/providers/aws/resource_aws_route53_record_test.go b/builtin/providers/aws/resource_aws_route53_record_test.go index 
94dfe8e4b4..f07215df51 100644 --- a/builtin/providers/aws/resource_aws_route53_record_test.go +++ b/builtin/providers/aws/resource_aws_route53_record_test.go @@ -9,6 +9,7 @@ import ( "github.com/hashicorp/terraform/terraform" "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/route53" ) @@ -277,6 +278,12 @@ func testAccCheckRoute53RecordDestroy(s *terraform.State) error { resp, err := conn.ListResourceRecordSets(lopts) if err != nil { + if awsErr, ok := err.(awserr.Error); ok { + // if NoSuchHostedZone, then all the things are destroyed + if awsErr.Code() == "NoSuchHostedZone" { + return nil + } + } return err } if len(resp.ResourceRecordSets) == 0 { diff --git a/builtin/providers/aws/resource_aws_route53_zone.go b/builtin/providers/aws/resource_aws_route53_zone.go index 50478bfdb8..d1bf8ddf12 100644 --- a/builtin/providers/aws/resource_aws_route53_zone.go +++ b/builtin/providers/aws/resource_aws_route53_zone.go @@ -109,7 +109,7 @@ func resourceAwsRoute53ZoneCreate(d *schema.ResourceData, meta interface{}) erro wait := resource.StateChangeConf{ Delay: 30 * time.Second, Pending: []string{"PENDING"}, - Target: "INSYNC", + Target: []string{"INSYNC"}, Timeout: 10 * time.Minute, MinTimeout: 2 * time.Second, Refresh: func() (result interface{}, state string, err error) { @@ -213,6 +213,11 @@ func resourceAwsRoute53ZoneDelete(d *schema.ResourceData, meta interface{}) erro d.Get("name").(string), d.Id()) _, err := r53.DeleteHostedZone(&route53.DeleteHostedZoneInput{Id: aws.String(d.Id())}) if err != nil { + if r53err, ok := err.(awserr.Error); ok && r53err.Code() == "NoSuchHostedZone" { + log.Printf("[DEBUG] No matching Route 53 Zone found for: %s, removing from state file", d.Id()) + d.SetId("") + return nil + } return err } diff --git a/builtin/providers/aws/resource_aws_route53_zone_association.go b/builtin/providers/aws/resource_aws_route53_zone_association.go index 32d9bc36c7..c416095ec9 100644 --- a/builtin/providers/aws/resource_aws_route53_zone_association.go +++ b/builtin/providers/aws/resource_aws_route53_zone_association.go @@ -71,7 +71,7 @@ func resourceAwsRoute53ZoneAssociationCreate(d *schema.ResourceData, meta interf wait := resource.StateChangeConf{ Delay: 30 * time.Second, Pending: []string{"PENDING"}, - Target: "INSYNC", + Target: []string{"INSYNC"}, Timeout: 10 * time.Minute, MinTimeout: 2 * time.Second, Refresh: func() (result interface{}, state string, err error) { diff --git a/builtin/providers/aws/resource_aws_route_table.go b/builtin/providers/aws/resource_aws_route_table.go index 38e95363e5..95aebd5533 100644 --- a/builtin/providers/aws/resource_aws_route_table.go +++ b/builtin/providers/aws/resource_aws_route_table.go @@ -60,6 +60,11 @@ func resourceAwsRouteTable() *schema.Resource { Optional: true, }, + "nat_gateway_id": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + "vpc_peering_connection_id": &schema.Schema{ Type: schema.TypeString, Optional: true, @@ -102,7 +107,7 @@ func resourceAwsRouteTableCreate(d *schema.ResourceData, meta interface{}) error d.Id()) stateConf := &resource.StateChangeConf{ Pending: []string{"pending"}, - Target: "ready", + Target: []string{"ready"}, Refresh: resourceAwsRouteTableStateRefreshFunc(conn, d.Id()), Timeout: 1 * time.Minute, } @@ -163,6 +168,9 @@ func resourceAwsRouteTableRead(d *schema.ResourceData, meta interface{}) error { if r.GatewayId != nil { m["gateway_id"] = *r.GatewayId } + if r.NatGatewayId != nil { + m["nat_gateway_id"] = *r.NatGatewayId + } if 
r.InstanceId != nil { m["instance_id"] = *r.InstanceId } @@ -287,6 +295,10 @@ func resourceAwsRouteTableUpdate(d *schema.ResourceData, meta interface{}) error NetworkInterfaceId: aws.String(m["network_interface_id"].(string)), } + if m["nat_gateway_id"].(string) != "" { + opts.NatGatewayId = aws.String(m["nat_gateway_id"].(string)) + } + log.Printf("[INFO] Creating route for %s: %#v", d.Id(), opts) if _, err := conn.CreateRoute(&opts); err != nil { return err @@ -360,7 +372,7 @@ func resourceAwsRouteTableDelete(d *schema.ResourceData, meta interface{}) error stateConf := &resource.StateChangeConf{ Pending: []string{"ready"}, - Target: "", + Target: []string{}, Refresh: resourceAwsRouteTableStateRefreshFunc(conn, d.Id()), Timeout: 1 * time.Minute, } @@ -385,6 +397,12 @@ func resourceAwsRouteTableHash(v interface{}) int { buf.WriteString(fmt.Sprintf("%s-", v.(string))) } + natGatewaySet := false + if v, ok := m["nat_gateway_id"]; ok { + natGatewaySet = v.(string) != "" + buf.WriteString(fmt.Sprintf("%s-", v.(string))) + } + instanceSet := false if v, ok := m["instance_id"]; ok { instanceSet = v.(string) != "" @@ -395,7 +413,7 @@ func resourceAwsRouteTableHash(v interface{}) int { buf.WriteString(fmt.Sprintf("%s-", v.(string))) } - if v, ok := m["network_interface_id"]; ok && !instanceSet { + if v, ok := m["network_interface_id"]; ok && !(instanceSet || natGatewaySet) { buf.WriteString(fmt.Sprintf("%s-", v.(string))) } diff --git a/builtin/providers/aws/resource_aws_route_table_test.go b/builtin/providers/aws/resource_aws_route_table_test.go index 17fd4087ec..5c74a57ddb 100644 --- a/builtin/providers/aws/resource_aws_route_table_test.go +++ b/builtin/providers/aws/resource_aws_route_table_test.go @@ -218,11 +218,6 @@ func testAccCheckRouteTableExists(n string, v *ec2.RouteTable) resource.TestChec func TestAccAWSRouteTable_vpcPeering(t *testing.T) { var v ec2.RouteTable - acctId := os.Getenv("TF_ACC_ID") - if acctId == "" && os.Getenv(resource.TestEnvVar) != "" { - t.Fatal("Error: Test TestAccAWSRouteTable_vpcPeering requires an Account ID in TF_ACC_ID ") - } - testCheck := func(*terraform.State) error { if len(v.Routes) != 2 { return fmt.Errorf("bad routes: %#v", v.Routes) @@ -243,12 +238,17 @@ func TestAccAWSRouteTable_vpcPeering(t *testing.T) { return nil } resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, + PreCheck: func() { + testAccPreCheck(t) + if os.Getenv("AWS_ACCOUNT_ID") == "" { + t.Fatal("Error: Test TestAccAWSRouteTable_vpcPeering requires an Account ID in AWS_ACCOUNT_ID ") + } + }, Providers: testAccProviders, CheckDestroy: testAccCheckRouteTableDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccRouteTableVpcPeeringConfig(acctId), + Config: testAccRouteTableVpcPeeringConfig(os.Getenv("AWS_ACCOUNT_ID")), Check: resource.ComposeTestCheckFunc( testAccCheckRouteTableExists( "aws_route_table.foo", &v), @@ -401,7 +401,7 @@ resource "aws_route_table" "foo" { ` // VPC Peering connections are prefixed with pcx -// This test requires an ENV var, TF_ACC_ID, with a valid AWS Account ID +// This test requires an ENV var, AWS_ACCOUNT_ID, with a valid AWS Account ID func testAccRouteTableVpcPeeringConfig(acc string) string { cfg := `resource "aws_vpc" "foo" { cidr_block = "10.1.0.0/16" diff --git a/builtin/providers/aws/resource_aws_s3_bucket.go b/builtin/providers/aws/resource_aws_s3_bucket.go index 069cb837ab..9c357976e6 100644 --- a/builtin/providers/aws/resource_aws_s3_bucket.go +++ b/builtin/providers/aws/resource_aws_s3_bucket.go @@ -5,7 
+5,10 @@ import ( "encoding/json" "fmt" "log" + "net/url" + "time" + "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" "github.com/aws/aws-sdk-go/aws" @@ -149,6 +152,30 @@ func resourceAwsS3Bucket() *schema.Resource { }, }, + "logging": &schema.Schema{ + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "target_bucket": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + "target_prefix": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + }, + }, + Set: func(v interface{}) int { + var buf bytes.Buffer + m := v.(map[string]interface{}) + buf.WriteString(fmt.Sprintf("%s-", m["target_bucket"])) + buf.WriteString(fmt.Sprintf("%s-", m["target_prefix"])) + return hashcode.String(buf.String()) + }, + }, + "tags": tagsSchema(), "force_destroy": &schema.Schema{ @@ -229,6 +256,12 @@ func resourceAwsS3BucketUpdate(d *schema.ResourceData, meta interface{}) error { } } + if d.HasChange("logging") { + if err := resourceAwsS3BucketLoggingUpdate(s3conn, d); err != nil { + return err + } + } + return resourceAwsS3BucketRead(d, meta) } @@ -308,7 +341,14 @@ func resourceAwsS3BucketRead(d *schema.ResourceData, meta interface{}) error { } if v := ws.RedirectAllRequestsTo; v != nil { - w["redirect_all_requests_to"] = *v.HostName + if v.Protocol == nil { + w["redirect_all_requests_to"] = *v.HostName + } else { + w["redirect_all_requests_to"] = (&url.URL{ + Host: *v.HostName, + Scheme: *v.Protocol, + }).String() + } } websites = append(websites, w) @@ -339,6 +379,29 @@ func resourceAwsS3BucketRead(d *schema.ResourceData, meta interface{}) error { } } + // Read the logging configuration + logging, err := s3conn.GetBucketLogging(&s3.GetBucketLoggingInput{ + Bucket: aws.String(d.Id()), + }) + if err != nil { + return err + } + log.Printf("[DEBUG] S3 Bucket: %s, logging: %v", d.Id(), logging) + if v := logging.LoggingEnabled; v != nil { + lcl := make([]map[string]interface{}, 0, 1) + lc := make(map[string]interface{}) + if *v.TargetBucket != "" { + lc["target_bucket"] = *v.TargetBucket + } + if *v.TargetPrefix != "" { + lc["target_prefix"] = *v.TargetPrefix + } + lcl = append(lcl, lc) + if err := d.Set("logging", lcl); err != nil { + return err + } + } + // Add the region as an attribute location, err := s3conn.GetBucketLocation( &s3.GetBucketLocationInput{ @@ -406,30 +469,46 @@ func resourceAwsS3BucketDelete(d *schema.ResourceData, meta interface{}) error { log.Printf("[DEBUG] S3 Bucket attempting to forceDestroy %+v", err) bucket := d.Get("bucket").(string) - resp, err := s3conn.ListObjects( - &s3.ListObjectsInput{ + resp, err := s3conn.ListObjectVersions( + &s3.ListObjectVersionsInput{ Bucket: aws.String(bucket), }, ) if err != nil { - return fmt.Errorf("Error S3 Bucket list Objects err: %s", err) + return fmt.Errorf("Error S3 Bucket list Object Versions err: %s", err) } - objectsToDelete := make([]*s3.ObjectIdentifier, len(resp.Contents)) - for i, v := range resp.Contents { - objectsToDelete[i] = &s3.ObjectIdentifier{ - Key: v.Key, + objectsToDelete := make([]*s3.ObjectIdentifier, 0) + + if len(resp.DeleteMarkers) != 0 { + + for _, v := range resp.DeleteMarkers { + objectsToDelete = append(objectsToDelete, &s3.ObjectIdentifier{ + Key: v.Key, + VersionId: v.VersionId, + }) } } - _, err = s3conn.DeleteObjects( - &s3.DeleteObjectsInput{ - Bucket: aws.String(bucket), - Delete: &s3.Delete{ - Objects: objectsToDelete, - }, + + if len(resp.Versions) != 0 { + for _, v := range resp.Versions 
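The read path above now round-trips the redirect protocol through net/url instead of storing only the bare hostname, so a config value like "https://hashicorp.com" survives a refresh. A small sketch of that rendering, runnable on its own:

```go
package main

import (
	"fmt"
	"net/url"
)

// renderRedirect mirrors the read-path logic: when S3 reports a redirect
// protocol, hostname and scheme are rebuilt into a scheme://host string so
// state matches a config such as redirect_all_requests_to = "https://hashicorp.com".
func renderRedirect(hostName string, protocol *string) string {
	if protocol == nil {
		return hostName
	}
	return (&url.URL{Host: hostName, Scheme: *protocol}).String()
}

func main() {
	https := "https"
	fmt.Println(renderRedirect("hashicorp.com", nil))    // hashicorp.com
	fmt.Println(renderRedirect("hashicorp.com", &https)) // https://hashicorp.com
}
```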
{ + objectsToDelete = append(objectsToDelete, &s3.ObjectIdentifier{ + Key: v.Key, + VersionId: v.VersionId, + }) + } + } + + params := &s3.DeleteObjectsInput{ + Bucket: aws.String(bucket), + Delete: &s3.Delete{ + Objects: objectsToDelete, }, - ) + } + + _, err = s3conn.DeleteObjects(params) + if err != nil { return fmt.Errorf("Error S3 Bucket force_destroy error deleting: %s", err) } @@ -450,9 +529,24 @@ func resourceAwsS3BucketPolicyUpdate(s3conn *s3.S3, d *schema.ResourceData) erro if policy != "" { log.Printf("[DEBUG] S3 bucket: %s, put policy: %s", bucket, policy) - _, err := s3conn.PutBucketPolicy(&s3.PutBucketPolicyInput{ + params := &s3.PutBucketPolicyInput{ Bucket: aws.String(bucket), Policy: aws.String(policy), + } + + err := resource.Retry(1*time.Minute, func() error { + if _, err := s3conn.PutBucketPolicy(params); err != nil { + if awserr, ok := err.(awserr.Error); ok { + if awserr.Code() == "MalformedPolicy" { + // Retryable + return awserr + } + } + // Not retryable + return resource.RetryError{Err: err} + } + // No error + return nil }) if err != nil { @@ -566,7 +660,12 @@ func resourceAwsS3BucketWebsitePut(s3conn *s3.S3, d *schema.ResourceData, websit } if redirectAllRequestsTo != "" { - websiteConfiguration.RedirectAllRequestsTo = &s3.RedirectAllRequestsTo{HostName: aws.String(redirectAllRequestsTo)} + redirect, err := url.Parse(redirectAllRequestsTo) + if err == nil && redirect.Scheme != "" { + websiteConfiguration.RedirectAllRequestsTo = &s3.RedirectAllRequestsTo{HostName: aws.String(redirect.Host), Protocol: aws.String(redirect.Scheme)} + } else { + websiteConfiguration.RedirectAllRequestsTo = &s3.RedirectAllRequestsTo{HostName: aws.String(redirectAllRequestsTo)} + } } putInput := &s3.PutBucketWebsiteInput{ @@ -693,6 +792,39 @@ func resourceAwsS3BucketVersioningUpdate(s3conn *s3.S3, d *schema.ResourceData) return nil } +func resourceAwsS3BucketLoggingUpdate(s3conn *s3.S3, d *schema.ResourceData) error { + logging := d.Get("logging").(*schema.Set).List() + bucket := d.Get("bucket").(string) + loggingStatus := &s3.BucketLoggingStatus{} + + if len(logging) > 0 { + c := logging[0].(map[string]interface{}) + + loggingEnabled := &s3.LoggingEnabled{} + if val, ok := c["target_bucket"]; ok { + loggingEnabled.TargetBucket = aws.String(val.(string)) + } + if val, ok := c["target_prefix"]; ok { + loggingEnabled.TargetPrefix = aws.String(val.(string)) + } + + loggingStatus.LoggingEnabled = loggingEnabled + } + + i := &s3.PutBucketLoggingInput{ + Bucket: aws.String(bucket), + BucketLoggingStatus: loggingStatus, + } + log.Printf("[DEBUG] S3 put bucket logging: %#v", i) + + _, err := s3conn.PutBucketLogging(i) + if err != nil { + return fmt.Errorf("Error putting S3 logging: %s", err) + } + + return nil +} + func normalizeJson(jsonString interface{}) string { if jsonString == nil { return "" diff --git a/builtin/providers/aws/resource_aws_s3_bucket_test.go b/builtin/providers/aws/resource_aws_s3_bucket_test.go index 0026775c8a..40d53586c4 100644 --- a/builtin/providers/aws/resource_aws_s3_bucket_test.go +++ b/builtin/providers/aws/resource_aws_s3_bucket_test.go @@ -14,6 +14,7 @@ import ( "github.com/hashicorp/terraform/terraform" "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/s3" ) @@ -113,7 +114,7 @@ func TestAccAWSS3Bucket_Website_Simple(t *testing.T) { Check: resource.ComposeTestCheckFunc( testAccCheckAWSS3BucketExists("aws_s3_bucket.bucket"), testAccCheckAWSS3BucketWebsite( - "aws_s3_bucket.bucket", "index.html", "", ""), 
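The PutBucketPolicy change wraps the call in resource.Retry, where returning a plain error means "try again" and wrapping it in resource.RetryError means "abort now". Below is a self-contained stand-in for that helper; the fatalError type, sleep interval, and deadline handling are assumptions for illustration, not the real helper/resource implementation:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// fatalError marks an error that should abort the retry loop immediately,
// playing the role of resource.RetryError{Err: err} in the patch above.
type fatalError struct{ err error }

func (f fatalError) Error() string { return f.err.Error() }

// retry keeps invoking f until it returns nil, a fatalError, or the timeout
// elapses. PutBucketPolicy retries on "MalformedPolicy" because S3 reports
// it transiently while a freshly created bucket's principals propagate.
func retry(timeout time.Duration, f func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		err := f()
		if err == nil {
			return nil
		}
		var fatal fatalError
		if errors.As(err, &fatal) {
			return fatal.err
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timeout after %s: %w", timeout, err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	attempts := 0
	err := retry(5*time.Second, func() error {
		attempts++
		if attempts < 3 {
			return errors.New("MalformedPolicy") // retryable
		}
		return nil
	})
	fmt.Println(attempts, err) // 3 <nil>
}
```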
+ "aws_s3_bucket.bucket", "index.html", "", "", ""), resource.TestCheckResourceAttr( "aws_s3_bucket.bucket", "website_endpoint", testAccWebsiteEndpoint), ), @@ -123,7 +124,7 @@ func TestAccAWSS3Bucket_Website_Simple(t *testing.T) { Check: resource.ComposeTestCheckFunc( testAccCheckAWSS3BucketExists("aws_s3_bucket.bucket"), testAccCheckAWSS3BucketWebsite( - "aws_s3_bucket.bucket", "index.html", "error.html", ""), + "aws_s3_bucket.bucket", "index.html", "error.html", "", ""), resource.TestCheckResourceAttr( "aws_s3_bucket.bucket", "website_endpoint", testAccWebsiteEndpoint), ), @@ -133,7 +134,7 @@ func TestAccAWSS3Bucket_Website_Simple(t *testing.T) { Check: resource.ComposeTestCheckFunc( testAccCheckAWSS3BucketExists("aws_s3_bucket.bucket"), testAccCheckAWSS3BucketWebsite( - "aws_s3_bucket.bucket", "", "", ""), + "aws_s3_bucket.bucket", "", "", "", ""), resource.TestCheckResourceAttr( "aws_s3_bucket.bucket", "website_endpoint", ""), ), @@ -153,7 +154,17 @@ func TestAccAWSS3Bucket_WebsiteRedirect(t *testing.T) { Check: resource.ComposeTestCheckFunc( testAccCheckAWSS3BucketExists("aws_s3_bucket.bucket"), testAccCheckAWSS3BucketWebsite( - "aws_s3_bucket.bucket", "", "", "hashicorp.com"), + "aws_s3_bucket.bucket", "", "", "", "hashicorp.com"), + resource.TestCheckResourceAttr( + "aws_s3_bucket.bucket", "website_endpoint", testAccWebsiteEndpoint), + ), + }, + resource.TestStep{ + Config: testAccAWSS3BucketWebsiteConfigWithHttpsRedirect, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSS3BucketExists("aws_s3_bucket.bucket"), + testAccCheckAWSS3BucketWebsite( + "aws_s3_bucket.bucket", "", "", "https", "hashicorp.com"), resource.TestCheckResourceAttr( "aws_s3_bucket.bucket", "website_endpoint", testAccWebsiteEndpoint), ), @@ -163,7 +174,7 @@ func TestAccAWSS3Bucket_WebsiteRedirect(t *testing.T) { Check: resource.ComposeTestCheckFunc( testAccCheckAWSS3BucketExists("aws_s3_bucket.bucket"), testAccCheckAWSS3BucketWebsite( - "aws_s3_bucket.bucket", "", "", ""), + "aws_s3_bucket.bucket", "", "", "", ""), resource.TestCheckResourceAttr( "aws_s3_bucket.bucket", "website_endpoint", ""), ), @@ -187,6 +198,7 @@ func TestAccAWSS3Bucket_shouldFailNotFound(t *testing.T) { testAccCheckAWSS3BucketExists("aws_s3_bucket.bucket"), testAccCheckAWSS3DestroyBucket("aws_s3_bucket.bucket"), ), + ExpectNonEmptyPlan: true, }, }, }) @@ -254,6 +266,24 @@ func TestAccAWSS3Bucket_Cors(t *testing.T) { }) } +func TestAccAWSS3Bucket_Logging(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSS3BucketDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAWSS3BucketConfigWithLogging, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSS3BucketExists("aws_s3_bucket.bucket"), + testAccCheckAWSS3BucketLogging( + "aws_s3_bucket.bucket", "aws_s3_bucket.log_bucket", "log/"), + ), + }, + }, + }) +} + func testAccCheckAWSS3BucketDestroy(s *terraform.State) error { conn := testAccProvider.Meta().(*AWSClient).s3conn @@ -265,6 +295,9 @@ func testAccCheckAWSS3BucketDestroy(s *terraform.State) error { Bucket: aws.String(rs.Primary.ID), }) if err != nil { + if awsErr, ok := err.(awserr.Error); ok && awsErr.Code() == "NoSuchBucket" { + return nil + } return err } } @@ -357,7 +390,7 @@ func testAccCheckAWSS3BucketPolicy(n string, policy string) resource.TestCheckFu return nil } } -func testAccCheckAWSS3BucketWebsite(n string, indexDoc string, errorDoc string, redirectTo string) resource.TestCheckFunc { 
+func testAccCheckAWSS3BucketWebsite(n string, indexDoc string, errorDoc string, redirectProtocol string, redirectTo string) resource.TestCheckFunc { return func(s *terraform.State) error { rs, _ := s.RootModule().Resources[n] conn := testAccProvider.Meta().(*AWSClient).s3conn @@ -404,6 +437,9 @@ func testAccCheckAWSS3BucketWebsite(n string, indexDoc string, errorDoc string, if *v.HostName != redirectTo { return fmt.Errorf("bad redirect to, expected: %s, got %#v", redirectTo, out.RedirectAllRequestsTo) } + if redirectProtocol != "" && *v.Protocol != redirectProtocol { + return fmt.Errorf("bad redirect protocol to, expected: %s, got %#v", redirectProtocol, out.RedirectAllRequestsTo) + } } return nil @@ -457,6 +493,45 @@ func testAccCheckAWSS3BucketCors(n string, corsRules []*s3.CORSRule) resource.Te } } +func testAccCheckAWSS3BucketLogging(n, b, p string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, _ := s.RootModule().Resources[n] + conn := testAccProvider.Meta().(*AWSClient).s3conn + + out, err := conn.GetBucketLogging(&s3.GetBucketLoggingInput{ + Bucket: aws.String(rs.Primary.ID), + }) + + if err != nil { + return fmt.Errorf("GetBucketLogging error: %v", err) + } + + tb, _ := s.RootModule().Resources[b] + + if v := out.LoggingEnabled.TargetBucket; v == nil { + if tb.Primary.ID != "" { + return fmt.Errorf("bad target bucket, found nil, expected: %s", tb.Primary.ID) + } + } else { + if *v != tb.Primary.ID { + return fmt.Errorf("bad target bucket, expected: %s, got %s", tb.Primary.ID, *v) + } + } + + if v := out.LoggingEnabled.TargetPrefix; v == nil { + if p != "" { + return fmt.Errorf("bad target prefix, found nil, expected: %s", p) + } + } else { + if *v != p { + return fmt.Errorf("bad target prefix, expected: %s, got %s", p, *v) + } + } + + return nil + } +} + // These need a bit of randomness as the name can only be used once globally // within AWS var randInt = rand.New(rand.NewSource(time.Now().UnixNano())).Int() @@ -504,6 +579,17 @@ resource "aws_s3_bucket" "bucket" { } `, randInt) +var testAccAWSS3BucketWebsiteConfigWithHttpsRedirect = fmt.Sprintf(` +resource "aws_s3_bucket" "bucket" { + bucket = "tf-test-bucket-%d" + acl = "public-read" + + website { + redirect_all_requests_to = "https://hashicorp.com" + } +} +`, randInt) + var testAccAWSS3BucketConfigWithPolicy = fmt.Sprintf(` resource "aws_s3_bucket" "bucket" { bucket = "tf-test-bucket-%d" @@ -566,3 +652,18 @@ resource "aws_s3_bucket" "bucket" { acl = "private" } ` + +var testAccAWSS3BucketConfigWithLogging = fmt.Sprintf(` +resource "aws_s3_bucket" "log_bucket" { + bucket = "tf-test-log-bucket-%d" + acl = "log-delivery-write" +} +resource "aws_s3_bucket" "bucket" { + bucket = "tf-test-bucket-%d" + acl = "private" + logging { + target_bucket = "${aws_s3_bucket.log_bucket.id}" + target_prefix = "log/" + } +} +`, randInt, randInt) diff --git a/builtin/providers/aws/resource_aws_security_group.go b/builtin/providers/aws/resource_aws_security_group.go index 5bfdf3612d..4d2b2e83e3 100644 --- a/builtin/providers/aws/resource_aws_security_group.go +++ b/builtin/providers/aws/resource_aws_security_group.go @@ -24,10 +24,11 @@ func resourceAwsSecurityGroup() *schema.Resource { Schema: map[string]*schema.Schema{ "name": &schema.Schema{ - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + ConflictsWith: []string{"name_prefix"}, ValidateFunc: func(v interface{}, k string) (ws []string, errors []error) { 
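The name_prefix attribute added below hands naming over to resource.PrefixedUniqueId; its 100-character cap leaves headroom for the generated suffix beneath the 255-character group-name limit enforced on name. A rough stand-in for the generator follows; the exact suffix format here is an assumption, not the real implementation:

```go
package main

import (
	"fmt"
	"sync/atomic"
	"time"
)

var uniqueCounter uint32

// prefixedUniqueID stands in for resource.PrefixedUniqueId: the caller's
// prefix followed by a timestamp and a counter, so concurrent creates in
// the same second still yield distinct security group names.
func prefixedUniqueID(prefix string) string {
	return fmt.Sprintf("%s%d%08x", prefix, time.Now().Unix(), atomic.AddUint32(&uniqueCounter, 1))
}

func main() {
	fmt.Println(prefixedUniqueID("baz-"))
	fmt.Println(prefixedUniqueID("baz-")) // different suffix, same prefix
}
```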
value := v.(string) if len(value) > 255 { @@ -38,6 +39,20 @@ func resourceAwsSecurityGroup() *schema.Resource { }, }, + "name_prefix": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ValidateFunc: func(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + if len(value) > 100 { + errors = append(errors, fmt.Errorf( + "%q cannot be longer than 100 characters, name is limited to 255", k)) + } + return + }, + }, + "description": &schema.Schema{ Type: schema.TypeString, Optional: true, @@ -178,6 +193,8 @@ func resourceAwsSecurityGroupCreate(d *schema.ResourceData, meta interface{}) er var groupName string if v, ok := d.GetOk("name"); ok { groupName = v.(string) + } else if v, ok := d.GetOk("name_prefix"); ok { + groupName = resource.PrefixedUniqueId(v.(string)) } else { groupName = resource.UniqueId() } @@ -201,7 +218,7 @@ func resourceAwsSecurityGroupCreate(d *schema.ResourceData, meta interface{}) er d.Id()) stateConf := &resource.StateChangeConf{ Pending: []string{""}, - Target: "exists", + Target: []string{"exists"}, Refresh: SGStateRefreshFunc(conn, d.Id()), Timeout: 1 * time.Minute, } diff --git a/builtin/providers/aws/resource_aws_security_group_rule.go b/builtin/providers/aws/resource_aws_security_group_rule.go index 2a35303c39..d1759dcafa 100644 --- a/builtin/providers/aws/resource_aws_security_group_rule.go +++ b/builtin/providers/aws/resource_aws_security_group_rule.go @@ -93,7 +93,10 @@ func resourceAwsSecurityGroupRuleCreate(d *schema.ResourceData, meta interface{} return err } - perm := expandIPPerm(d, sg) + perm, err := expandIPPerm(d, sg) + if err != nil { + return err + } ruleType := d.Get("type").(string) @@ -171,7 +174,10 @@ func resourceAwsSecurityGroupRuleRead(d *schema.ResourceData, meta interface{}) rules = sg.IpPermissionsEgress } - p := expandIPPerm(d, sg) + p, err := expandIPPerm(d, sg) + if err != nil { + return err + } if len(rules) == 0 { log.Printf("[WARN] No %s rules were found for Security Group (%s) looking for Security Group Rule (%s)", @@ -262,7 +268,10 @@ func resourceAwsSecurityGroupRuleDelete(d *schema.ResourceData, meta interface{} return err } - perm := expandIPPerm(d, sg) + perm, err := expandIPPerm(d, sg) + if err != nil { + return err + } ruleType := d.Get("type").(string) switch ruleType { case "ingress": @@ -383,7 +392,7 @@ func ipPermissionIDHash(sg_id, ruleType string, ip *ec2.IpPermission) string { return fmt.Sprintf("sgrule-%d", hashcode.String(buf.String())) } -func expandIPPerm(d *schema.ResourceData, sg *ec2.SecurityGroup) *ec2.IpPermission { +func expandIPPerm(d *schema.ResourceData, sg *ec2.SecurityGroup) (*ec2.IpPermission, error) { var perm ec2.IpPermission perm.FromPort = aws.Int64(int64(d.Get("from_port").(int))) @@ -435,9 +444,13 @@ func expandIPPerm(d *schema.ResourceData, sg *ec2.SecurityGroup) *ec2.IpPermissi list := raw.([]interface{}) perm.IpRanges = make([]*ec2.IpRange, len(list)) for i, v := range list { - perm.IpRanges[i] = &ec2.IpRange{CidrIp: aws.String(v.(string))} + cidrIP, ok := v.(string) + if !ok { + return nil, fmt.Errorf("empty element found in cidr_blocks - consider using the compact function") + } + perm.IpRanges[i] = &ec2.IpRange{CidrIp: aws.String(cidrIP)} } } - return &perm + return &perm, nil } diff --git a/builtin/providers/aws/resource_aws_security_group_rule_migrate.go b/builtin/providers/aws/resource_aws_security_group_rule_migrate.go index 0b57f3f171..12788054e3 100644 --- a/builtin/providers/aws/resource_aws_security_group_rule_migrate.go +++ 
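Changing expandIPPerm to return (*ec2.IpPermission, error) lets a bad cidr_blocks element surface as a readable error instead of an interface-conversion panic deep inside the provider. The element check in isolation:

```go
package main

import "fmt"

// expandCIDRs mirrors the hardened expandIPPerm loop: every element of
// cidr_blocks must be a string; a nil element (for example from an
// unresolved interpolation) produces an error with a suggested fix.
func expandCIDRs(raw []interface{}) ([]string, error) {
	out := make([]string, len(raw))
	for i, v := range raw {
		s, ok := v.(string)
		if !ok {
			return nil, fmt.Errorf("empty element found in cidr_blocks - consider using the compact function")
		}
		out[i] = s
	}
	return out, nil
}

func main() {
	good, err := expandCIDRs([]interface{}{"10.0.0.0/16"})
	fmt.Println(good, err)
	_, err = expandCIDRs([]interface{}{"10.0.0.0/16", nil})
	fmt.Println(err)
}
```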
b/builtin/providers/aws/resource_aws_security_group_rule_migrate.go @@ -26,8 +26,6 @@ func resourceAwsSecurityGroupRuleMigrateState( default: return is, fmt.Errorf("Unexpected schema version: %d", v) } - - return is, nil } func migrateSGRuleStateV0toV1(is *terraform.InstanceState) (*terraform.InstanceState, error) { diff --git a/builtin/providers/aws/resource_aws_security_group_test.go b/builtin/providers/aws/resource_aws_security_group_test.go index e6b520d957..2ae9d283b2 100644 --- a/builtin/providers/aws/resource_aws_security_group_test.go +++ b/builtin/providers/aws/resource_aws_security_group_test.go @@ -46,6 +46,26 @@ func TestAccAWSSecurityGroup_basic(t *testing.T) { }) } +func TestAccAWSSecurityGroup_namePrefix(t *testing.T) { + var group ec2.SecurityGroup + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSSecurityGroupDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAWSSecurityGroupPrefixNameConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSSecurityGroupExists("aws_security_group.baz", &group), + testAccCheckAWSSecurityGroupGeneratedNamePrefix( + "aws_security_group.baz", "baz-"), + ), + }, + }, + }) +} + func TestAccAWSSecurityGroup_self(t *testing.T) { var group ec2.SecurityGroup @@ -324,6 +344,24 @@ func testAccCheckAWSSecurityGroupDestroy(s *terraform.State) error { return nil } +func testAccCheckAWSSecurityGroupGeneratedNamePrefix( + resource, prefix string) resource.TestCheckFunc { + return func(s *terraform.State) error { + r, ok := s.RootModule().Resources[resource] + if !ok { + return fmt.Errorf("Resource not found") + } + name, ok := r.Primary.Attributes["name"] + if !ok { + return fmt.Errorf("Name attr not found: %#v", r.Primary.Attributes) + } + if !strings.HasPrefix(name, prefix) { + return fmt.Errorf("Name: %q, does not have prefix: %q", name, prefix) + } + return nil + } +} + func testAccCheckAWSSecurityGroupExists(n string, group *ec2.SecurityGroup) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] @@ -809,3 +847,14 @@ resource "aws_security_group" "web" { description = "Used in the terraform acceptance tests" } ` + +const testAccAWSSecurityGroupPrefixNameConfig = ` +provider "aws" { + region = "us-east-1" +} + +resource "aws_security_group" "baz" { + name_prefix = "baz-" + description = "Used in the terraform acceptance tests" +} +` diff --git a/builtin/providers/aws/resource_aws_sns_topic.go b/builtin/providers/aws/resource_aws_sns_topic.go index 6bf0127d0c..a64f4b5d2c 100644 --- a/builtin/providers/aws/resource_aws_sns_topic.go +++ b/builtin/providers/aws/resource_aws_sns_topic.go @@ -119,7 +119,7 @@ func resourceAwsSnsTopicUpdate(d *schema.ResourceData, meta interface{}) error { log.Printf("[DEBUG] Updating SNS Topic (%s) attributes request: %s", d.Id(), req) stateConf := &resource.StateChangeConf{ Pending: []string{"retrying"}, - Target: "success", + Target: []string{"success"}, Refresh: resourceAwsSNSUpdateRefreshFunc(meta, req), Timeout: 1 * time.Minute, MinTimeout: 3 * time.Second, diff --git a/builtin/providers/aws/resource_aws_sns_topic_subscription.go b/builtin/providers/aws/resource_aws_sns_topic_subscription.go index 1286496ed5..72e9c3307a 100644 --- a/builtin/providers/aws/resource_aws_sns_topic_subscription.go +++ b/builtin/providers/aws/resource_aws_sns_topic_subscription.go @@ -3,6 +3,7 @@ package aws import ( "fmt" "log" + "strings" 
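The new testAccCheckAWSSecurityGroupGeneratedNamePrefix below follows the usual TestCheckFunc shape: build a closure around the expectation up front, then run it against computed attributes once the apply has finished. A sketch of that shape with a plain attribute map standing in for terraform.State:

```go
package main

import (
	"fmt"
	"strings"
)

// checkGeneratedPrefix returns a check closed over its expected prefix,
// the same closure style used by the acceptance-test helpers.
func checkGeneratedPrefix(prefix string) func(attrs map[string]string) error {
	return func(attrs map[string]string) error {
		name, ok := attrs["name"]
		if !ok {
			return fmt.Errorf("Name attr not found: %#v", attrs)
		}
		if !strings.HasPrefix(name, prefix) {
			return fmt.Errorf("Name: %q, does not have prefix: %q", name, prefix)
		}
		return nil
	}
}

func main() {
	check := checkGeneratedPrefix("baz-")
	fmt.Println(check(map[string]string{"name": "baz-20160101abc"})) // <nil>
	fmt.Println(check(map[string]string{"name": "web"}))             // error
}
```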
"github.com/hashicorp/terraform/helper/schema" @@ -10,6 +11,8 @@ import ( "github.com/aws/aws-sdk-go/service/sns" ) +const awsSNSPendingConfirmationMessage = "pending confirmation" + func resourceAwsSnsTopicSubscription() *schema.Resource { return &schema.Resource{ Create: resourceAwsSnsTopicSubscriptionCreate, @@ -22,6 +25,19 @@ func resourceAwsSnsTopicSubscription() *schema.Resource { Type: schema.TypeString, Required: true, ForceNew: false, + ValidateFunc: func(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + forbidden := []string{"email", "sms", "http"} + for _, f := range forbidden { + if strings.Contains(value, f) { + errors = append( + errors, + fmt.Errorf("Unsupported protocol (%s) for SNS Topic", value), + ) + } + } + return + }, }, "endpoint": &schema.Schema{ Type: schema.TypeString, @@ -55,16 +71,17 @@ func resourceAwsSnsTopicSubscription() *schema.Resource { func resourceAwsSnsTopicSubscriptionCreate(d *schema.ResourceData, meta interface{}) error { snsconn := meta.(*AWSClient).snsconn - if d.Get("protocol") == "email" { - return fmt.Errorf("Email endpoints are not supported!") - } - output, err := subscribeToSNSTopic(d, snsconn) if err != nil { return err } + if output.SubscriptionArn != nil && *output.SubscriptionArn == awsSNSPendingConfirmationMessage { + log.Printf("[WARN] Invalid SNS Subscription, received a \"%s\" ARN", awsSNSPendingConfirmationMessage) + return nil + } + log.Printf("New subscription ARN: %s", *output.SubscriptionArn) d.SetId(*output.SubscriptionArn) @@ -92,7 +109,7 @@ func resourceAwsSnsTopicSubscriptionUpdate(d *schema.ResourceData, meta interfac // Re-subscribe and set id output, err := subscribeToSNSTopic(d, snsconn) d.SetId(*output.SubscriptionArn) - + d.Set("arn", *output.SubscriptionArn) } if d.HasChange("raw_message_delivery") { diff --git a/builtin/providers/aws/resource_aws_spot_instance_request.go b/builtin/providers/aws/resource_aws_spot_instance_request.go index 1369c972e8..a2783eeaa3 100644 --- a/builtin/providers/aws/resource_aws_spot_instance_request.go +++ b/builtin/providers/aws/resource_aws_spot_instance_request.go @@ -132,7 +132,7 @@ func resourceAwsSpotInstanceRequestCreate(d *schema.ResourceData, meta interface spotStateConf := &resource.StateChangeConf{ // http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-bid-status.html Pending: []string{"start", "pending-evaluation", "pending-fulfillment"}, - Target: "fulfilled", + Target: []string{"fulfilled"}, Refresh: SpotInstanceStateRefreshFunc(conn, sir), Timeout: 10 * time.Minute, Delay: 10 * time.Second, @@ -194,8 +194,9 @@ func resourceAwsSpotInstanceRequestRead(d *schema.ResourceData, meta interface{} return fmt.Errorf("[ERR] Error reading Spot Instance Data: %s", err) } } - d.Set("spot_request_state", *request.State) - d.Set("block_duration_minutes", *request.BlockDurationMinutes) + + d.Set("spot_request_state", request.State) + d.Set("block_duration_minutes", request.BlockDurationMinutes) d.Set("tags", tagsToMap(request.Tags)) return nil diff --git a/builtin/providers/aws/resource_aws_spot_instance_request_test.go b/builtin/providers/aws/resource_aws_spot_instance_request_test.go index 2fe9860a6c..37bd93507f 100644 --- a/builtin/providers/aws/resource_aws_spot_instance_request_test.go +++ b/builtin/providers/aws/resource_aws_spot_instance_request_test.go @@ -135,12 +135,26 @@ func testAccCheckAWSSpotInstanceRequestDestroy(s *terraform.State) error { } resp, err := conn.DescribeSpotInstanceRequests(req) + var s *ec2.SpotInstanceRequest if err == nil 
{ - if len(resp.SpotInstanceRequests) > 0 { - return fmt.Errorf("Spot instance request is still here.") + for _, sir := range resp.SpotInstanceRequests { + if sir.SpotInstanceRequestId != nil && *sir.SpotInstanceRequestId == rs.Primary.ID { + s = sir + } + continue } } + if s == nil { + // not found + return nil + } + + if *s.State == "canceled" { + // Requests stick around for a while, so we make sure it's cancelled + return nil + } + // Verify the error is what we expect ec2err, ok := err.(awserr.Error) if !ok { diff --git a/builtin/providers/aws/resource_aws_subnet.go b/builtin/providers/aws/resource_aws_subnet.go index 880a160985..34937c3c04 100644 --- a/builtin/providers/aws/resource_aws_subnet.go +++ b/builtin/providers/aws/resource_aws_subnet.go @@ -75,7 +75,7 @@ func resourceAwsSubnetCreate(d *schema.ResourceData, meta interface{}) error { log.Printf("[DEBUG] Waiting for subnet (%s) to become available", *subnet.SubnetId) stateConf := &resource.StateChangeConf{ Pending: []string{"pending"}, - Target: "available", + Target: []string{"available"}, Refresh: SubnetStateRefreshFunc(conn, *subnet.SubnetId), Timeout: 10 * time.Minute, } @@ -166,7 +166,7 @@ func resourceAwsSubnetDelete(d *schema.ResourceData, meta interface{}) error { wait := resource.StateChangeConf{ Pending: []string{"pending"}, - Target: "destroyed", + Target: []string{"destroyed"}, Timeout: 5 * time.Minute, MinTimeout: 1 * time.Second, Refresh: func() (interface{}, string, error) { diff --git a/builtin/providers/aws/resource_aws_volume_attachment.go b/builtin/providers/aws/resource_aws_volume_attachment.go index 1e9d350777..658f897651 100644 --- a/builtin/providers/aws/resource_aws_volume_attachment.go +++ b/builtin/providers/aws/resource_aws_volume_attachment.go @@ -72,7 +72,7 @@ func resourceAwsVolumeAttachmentCreate(d *schema.ResourceData, meta interface{}) stateConf := &resource.StateChangeConf{ Pending: []string{"attaching"}, - Target: "attached", + Target: []string{"attached"}, Refresh: volumeAttachmentStateRefreshFunc(conn, vID, iID), Timeout: 5 * time.Minute, Delay: 10 * time.Second, @@ -163,7 +163,7 @@ func resourceAwsVolumeAttachmentDelete(d *schema.ResourceData, meta interface{}) _, err := conn.DetachVolume(opts) stateConf := &resource.StateChangeConf{ Pending: []string{"detaching"}, - Target: "detached", + Target: []string{"detached"}, Refresh: volumeAttachmentStateRefreshFunc(conn, vID, iID), Timeout: 5 * time.Minute, Delay: 10 * time.Second, diff --git a/builtin/providers/aws/resource_aws_vpc.go b/builtin/providers/aws/resource_aws_vpc.go index 0de908f0d0..007a2f8154 100644 --- a/builtin/providers/aws/resource_aws_vpc.go +++ b/builtin/providers/aws/resource_aws_vpc.go @@ -55,6 +55,12 @@ func resourceAwsVpc() *schema.Resource { Computed: true, }, + "enable_classiclink": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "main_route_table_id": &schema.Schema{ Type: schema.TypeString, Computed: true, @@ -112,7 +118,7 @@ func resourceAwsVpcCreate(d *schema.ResourceData, meta interface{}) error { d.Id()) stateConf := &resource.StateChangeConf{ Pending: []string{"pending"}, - Target: "available", + Target: []string{"available"}, Refresh: VPCStateRefreshFunc(conn, d.Id()), Timeout: 10 * time.Minute, } @@ -170,6 +176,22 @@ func resourceAwsVpcRead(d *schema.ResourceData, meta interface{}) error { } d.Set("enable_dns_hostnames", *resp.EnableDnsHostnames) + DescribeClassiclinkOpts := &ec2.DescribeVpcClassicLinkInput{ + VpcIds: []*string{&vpcid}, + } + respClassiclink, err := 
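The reworked spot-request destroy check below no longer fails merely because the API still lists requests: canceled requests remain visible for a while, so the check isolates the one request under test and accepts both "not found" and "canceled". The same logic with a local struct in place of *ec2.SpotInstanceRequest:

```go
package main

import "fmt"

type spotRequest struct {
	ID    string
	State string
}

// destroyed locates our specific request among everything the API returns,
// then treats absence or a canceled state as a successful destroy.
func destroyed(requests []spotRequest, id string) error {
	var found *spotRequest
	for i := range requests {
		if requests[i].ID == id {
			found = &requests[i]
		}
	}
	if found == nil || found.State == "canceled" {
		return nil
	}
	return fmt.Errorf("Spot instance request is still here: %s (%s)", found.ID, found.State)
}

func main() {
	reqs := []spotRequest{{ID: "sir-1", State: "canceled"}, {ID: "sir-2", State: "active"}}
	fmt.Println(destroyed(reqs, "sir-1")) // <nil>
	fmt.Println(destroyed(reqs, "sir-2")) // error
}
```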
conn.DescribeVpcClassicLink(DescribeClassiclinkOpts) + if err != nil { + return err + } + classiclink_enabled := false + for _, v := range respClassiclink.Vpcs { + if *v.VpcId == vpcid { + classiclink_enabled = *v.ClassicLinkEnabled + break + } + } + d.Set("enable_classiclink", classiclink_enabled) + // Get the main routing table for this VPC // Really Ugly need to make this better - rmenn filter1 := &ec2.Filter{ @@ -241,6 +263,34 @@ func resourceAwsVpcUpdate(d *schema.ResourceData, meta interface{}) error { d.SetPartial("enable_dns_support") } + if d.HasChange("enable_classiclink") { + val := d.Get("enable_classiclink").(bool) + + if val { + modifyOpts := &ec2.EnableVpcClassicLinkInput{ + VpcId: &vpcid, + } + log.Printf( + "[INFO] Modifying enable_classiclink vpc attribute for %s: %#v", + d.Id(), modifyOpts) + if _, err := conn.EnableVpcClassicLink(modifyOpts); err != nil { + return err + } + } else { + modifyOpts := &ec2.DisableVpcClassicLinkInput{ + VpcId: &vpcid, + } + log.Printf( + "[INFO] Modifying enable_classiclink vpc attribute for %s: %#v", + d.Id(), modifyOpts) + if _, err := conn.DisableVpcClassicLink(modifyOpts); err != nil { + return err + } + } + + d.SetPartial("enable_classiclink") + } + if err := setTags(conn, d); err != nil { return err } else { diff --git a/builtin/providers/aws/resource_aws_vpc_dhcp_options.go b/builtin/providers/aws/resource_aws_vpc_dhcp_options.go index 36b4b1f810..e1a5d86a94 100644 --- a/builtin/providers/aws/resource_aws_vpc_dhcp_options.go +++ b/builtin/providers/aws/resource_aws_vpc_dhcp_options.go @@ -121,7 +121,7 @@ func resourceAwsVpcDhcpOptionsCreate(d *schema.ResourceData, meta interface{}) e log.Printf("[DEBUG] Waiting for DHCP Options (%s) to become available", d.Id()) stateConf := &resource.StateChangeConf{ Pending: []string{"pending"}, - Target: "", + Target: []string{}, Refresh: DHCPOptionsStateRefreshFunc(conn, d.Id()), Timeout: 1 * time.Minute, } @@ -223,8 +223,6 @@ func resourceAwsVpcDhcpOptionsDelete(d *schema.ResourceData, meta interface{}) e // Any other error, we want to quit the retry loop immediately return resource.RetryError{Err: err} } - - return nil }) } diff --git a/builtin/providers/aws/resource_aws_vpc_dhcp_options_test.go b/builtin/providers/aws/resource_aws_vpc_dhcp_options_test.go index 7ff15a5fa9..baa86f7d7d 100644 --- a/builtin/providers/aws/resource_aws_vpc_dhcp_options_test.go +++ b/builtin/providers/aws/resource_aws_vpc_dhcp_options_test.go @@ -50,9 +50,12 @@ func testAccCheckDHCPOptionsDestroy(s *terraform.State) error { aws.String(rs.Primary.ID), }, }) + if ae, ok := err.(awserr.Error); ok && ae.Code() == "InvalidDhcpOptionID.NotFound" { + continue + } if err == nil { if len(resp.DhcpOptions) > 0 { - return fmt.Errorf("still exist.") + return fmt.Errorf("still exists") } return nil diff --git a/builtin/providers/aws/resource_aws_vpc_endpoint.go b/builtin/providers/aws/resource_aws_vpc_endpoint.go index 06ba0bf005..1b971c64df 100644 --- a/builtin/providers/aws/resource_aws_vpc_endpoint.go +++ b/builtin/providers/aws/resource_aws_vpc_endpoint.go @@ -103,7 +103,9 @@ func resourceAwsVPCEndpointRead(d *schema.ResourceData, meta interface{}) error d.Set("vpc_id", vpce.VpcId) d.Set("policy", normalizeJson(*vpce.PolicyDocument)) d.Set("service_name", vpce.ServiceName) - d.Set("route_table_ids", vpce.RouteTableIds) + if err := d.Set("route_table_ids", aws.StringValueSlice(vpce.RouteTableIds)); err != nil { + return err + } return nil } @@ -119,12 +121,12 @@ func resourceAwsVPCEndpointUpdate(d *schema.ResourceData, meta 
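The VPC endpoint fix just below corrects a swapped pair of set differences: additions must be new-minus-old and removals old-minus-new, or the update detaches exactly the route tables it was asked to attach. The corrected orientation, with plain slices standing in for *schema.Set:

```go
package main

import "fmt"

// diffSets computes what to add and what to remove when a set attribute
// changes, matching the fixed AddRouteTableIds/RemoveRouteTableIds logic.
func diffSets(oldIDs, newIDs []string) (add, remove []string) {
	oldSet := make(map[string]bool, len(oldIDs))
	for _, v := range oldIDs {
		oldSet[v] = true
	}
	newSet := make(map[string]bool, len(newIDs))
	for _, v := range newIDs {
		newSet[v] = true
	}
	for _, v := range newIDs {
		if !oldSet[v] {
			add = append(add, v) // in new but not old
		}
	}
	for _, v := range oldIDs {
		if !newSet[v] {
			remove = append(remove, v) // in old but not new
		}
	}
	return
}

func main() {
	add, remove := diffSets([]string{"rtb-a", "rtb-b"}, []string{"rtb-b", "rtb-c"})
	fmt.Println(add, remove) // [rtb-c] [rtb-a]
}
```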
interface{}) erro os := o.(*schema.Set) ns := n.(*schema.Set) - add := expandStringList(os.Difference(ns).List()) + add := expandStringList(ns.Difference(os).List()) if len(add) > 0 { input.AddRouteTableIds = add } - remove := expandStringList(ns.Difference(os).List()) + remove := expandStringList(os.Difference(ns).List()) if len(remove) > 0 { input.RemoveRouteTableIds = remove } @@ -142,7 +144,7 @@ func resourceAwsVPCEndpointUpdate(d *schema.ResourceData, meta interface{}) erro } log.Printf("[DEBUG] VPC Endpoint %q updated", input.VpcEndpointId) - return nil + return resourceAwsVPCEndpointRead(d, meta) } func resourceAwsVPCEndpointDelete(d *schema.ResourceData, meta interface{}) error { diff --git a/builtin/providers/aws/resource_aws_vpc_endpoint_test.go b/builtin/providers/aws/resource_aws_vpc_endpoint_test.go index 7973cf8f00..4a081b69c0 100644 --- a/builtin/providers/aws/resource_aws_vpc_endpoint_test.go +++ b/builtin/providers/aws/resource_aws_vpc_endpoint_test.go @@ -5,6 +5,7 @@ import ( "testing" "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/ec2" "github.com/hashicorp/terraform/helper/resource" @@ -20,9 +21,9 @@ func TestAccAWSVpcEndpoint_basic(t *testing.T) { CheckDestroy: testAccCheckVpcEndpointDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccVpcEndpointConfig, + Config: testAccVpcEndpointWithRouteTableAndPolicyConfig, Check: resource.ComposeTestCheckFunc( - testAccCheckVpcEndpointExists("aws_vpc_endpoint.private-s3", &endpoint), + testAccCheckVpcEndpointExists("aws_vpc_endpoint.second-private-s3", &endpoint), ), }, }, @@ -69,7 +70,13 @@ func testAccCheckVpcEndpointDestroy(s *terraform.State) error { VpcEndpointIds: []*string{aws.String(rs.Primary.ID)}, } resp, err := conn.DescribeVpcEndpoints(input) - + if err != nil { + // Verify the error is what we want + if ae, ok := err.(awserr.Error); ok && ae.Code() == "InvalidVpcEndpointId.NotFound" { + continue + } + return err + } if len(resp.VpcEndpoints) > 0 { return fmt.Errorf("VPC Endpoints still exist.") } @@ -109,17 +116,6 @@ func testAccCheckVpcEndpointExists(n string, endpoint *ec2.VpcEndpoint) resource } } -const testAccVpcEndpointConfig = ` -resource "aws_vpc" "foo" { - cidr_block = "10.1.0.0/16" -} - -resource "aws_vpc_endpoint" "private-s3" { - vpc_id = "${aws_vpc.foo.id}" - service_name = "com.amazonaws.us-west-2.s3" -} -` - const testAccVpcEndpointWithRouteTableAndPolicyConfig = ` resource "aws_vpc" "foo" { cidr_block = "10.0.0.0/16" diff --git a/builtin/providers/aws/resource_aws_vpc_peering_connection.go b/builtin/providers/aws/resource_aws_vpc_peering_connection.go index 6b7c4dc52c..3ee7222f9f 100644 --- a/builtin/providers/aws/resource_aws_vpc_peering_connection.go +++ b/builtin/providers/aws/resource_aws_vpc_peering_connection.go @@ -75,7 +75,7 @@ func resourceAwsVPCPeeringCreate(d *schema.ResourceData, meta interface{}) error d.Id()) stateConf := &resource.StateChangeConf{ Pending: []string{"pending"}, - Target: "pending-acceptance", + Target: []string{"pending-acceptance"}, Refresh: resourceAwsVPCPeeringConnectionStateRefreshFunc(conn, d.Id()), Timeout: 1 * time.Minute, } diff --git a/builtin/providers/aws/resource_aws_vpc_peering_connection_test.go b/builtin/providers/aws/resource_aws_vpc_peering_connection_test.go index ca92ce66a6..6393d4564c 100644 --- a/builtin/providers/aws/resource_aws_vpc_peering_connection_test.go +++ b/builtin/providers/aws/resource_aws_vpc_peering_connection_test.go @@ -37,6 +37,9 @@ func 
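The update function above now finishes with `return resourceAwsVPCEndpointRead(d, meta)` instead of `return nil`, so state records what the API actually applied rather than what was requested. The convention in miniature; the types here are stand-ins for *schema.ResourceData and the EC2 client:

```go
package main

import "fmt"

// state stands in for *schema.ResourceData in this sketch.
type state struct{ routeTableIDs []string }

// readEndpoint refreshes local state from a stubbed API response.
func readEndpoint(s *state) error {
	s.routeTableIDs = []string{"rtb-a", "rtb-c"} // what the API now reports
	return nil
}

// updateEndpoint follows the read-after-write convention adopted above:
// Update delegates to Read at the end, so any server-side normalization
// of the modification lands in state immediately.
func updateEndpoint(s *state) error {
	// ... issue ModifyVpcEndpoint with the computed add/remove sets ...
	return readEndpoint(s)
}

func main() {
	s := &state{}
	fmt.Println(updateEndpoint(s), s.routeTableIDs)
}
```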
TestAccAWSVPCPeeringConnection_basic(t *testing.T) { func TestAccAWSVPCPeeringConnection_tags(t *testing.T) { var connection ec2.VpcPeeringConnection peerId := os.Getenv("TF_PEER_ID") + if peerId == "" { + t.Skip("Error: TestAccAWSVPCPeeringConnection_tags requires a peer id to be set") + } resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -67,14 +70,32 @@ func testAccCheckAWSVpcPeeringConnectionDestroy(s *terraform.State) error { VpcPeeringConnectionIds: []*string{aws.String(rs.Primary.ID)}, }) - if err == nil { - if len(describe.VpcPeeringConnections) != 0 { - return fmt.Errorf("vpc peering connection still exists") + if err != nil { + return err + } + + var pc *ec2.VpcPeeringConnection + for _, c := range describe.VpcPeeringConnections { + if rs.Primary.ID == *c.VpcPeeringConnectionId { + pc = c } } + + if pc == nil { + // not found + return nil + } + + if pc.Status != nil { + if *pc.Status.Code == "deleted" { + return nil + } + return fmt.Errorf("Found vpc peering connection in unexpected state: %s", pc) + } + } - return nil + return fmt.Errorf("Fall through error for testAccCheckAWSVpcPeeringConnectionDestroy") } func testAccCheckAWSVpcPeeringConnectionExists(n string, connection *ec2.VpcPeeringConnection) resource.TestCheckFunc { diff --git a/builtin/providers/aws/resource_aws_vpc_test.go b/builtin/providers/aws/resource_aws_vpc_test.go index e877621151..cd01bbf5d1 100644 --- a/builtin/providers/aws/resource_aws_vpc_test.go +++ b/builtin/providers/aws/resource_aws_vpc_test.go @@ -206,6 +206,23 @@ func TestAccAWSVpc_bothDnsOptionsSet(t *testing.T) { }) } +func TestAccAWSVpc_classiclinkOptionSet(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckVpcDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccVpcConfig_ClassiclinkOption, + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr( + "aws_vpc.bar", "enable_classiclink", "true"), + ), + }, + }, + }) +} + const testAccVpcConfig = ` resource "aws_vpc" "foo" { cidr_block = "10.1.0.0/16" @@ -254,3 +271,11 @@ resource "aws_vpc" "bar" { enable_dns_support = true } ` + +const testAccVpcConfig_ClassiclinkOption = ` +resource "aws_vpc" "bar" { + cidr_block = "172.2.0.0/16" + + enable_classiclink = true +} +` diff --git a/builtin/providers/aws/resource_aws_vpn_connection.go b/builtin/providers/aws/resource_aws_vpn_connection.go index 5f3a395645..9a0d15442e 100644 --- a/builtin/providers/aws/resource_aws_vpn_connection.go +++ b/builtin/providers/aws/resource_aws_vpn_connection.go @@ -171,7 +171,7 @@ func resourceAwsVpnConnectionCreate(d *schema.ResourceData, meta interface{}) er // more frequently than every ten seconds. stateConf := &resource.StateChangeConf{ Pending: []string{"pending"}, - Target: "available", + Target: []string{"available"}, Refresh: vpnConnectionRefreshFunc(conn, *vpnConnection.VpnConnectionId), Timeout: 30 * time.Minute, Delay: 10 * time.Second, @@ -303,7 +303,7 @@ func resourceAwsVpnConnectionDelete(d *schema.ResourceData, meta interface{}) er // VPC stack can safely run. 
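Gating TestAccAWSVPCPeeringConnection_tags on TF_PEER_ID with t.Skip, as above, keeps acceptance runs green when the variable is absent, where a t.Fatal-style guard would fail the whole suite. The shape of that guard, as a runnable illustration (the file would need a _test.go suffix to run under go test):

```go
package main

import (
	"os"
	"testing"
)

// TestPeeringTags shows the skip-on-missing-env pattern: optional
// credentials or fixtures skip the test instead of failing the run.
func TestPeeringTags(t *testing.T) {
	peerID := os.Getenv("TF_PEER_ID")
	if peerID == "" {
		t.Skip("TF_PEER_ID not set; skipping peering tags test")
	}
	_ = peerID // ... run the real acceptance test here ...
}
```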
stateConf := &resource.StateChangeConf{ Pending: []string{"deleting"}, - Target: "deleted", + Target: []string{"deleted"}, Refresh: vpnConnectionRefreshFunc(conn, d.Id()), Timeout: 30 * time.Minute, Delay: 10 * time.Second, diff --git a/builtin/providers/aws/resource_aws_vpn_connection_test.go b/builtin/providers/aws/resource_aws_vpn_connection_test.go index 137694a610..cf151fc854 100644 --- a/builtin/providers/aws/resource_aws_vpn_connection_test.go +++ b/builtin/providers/aws/resource_aws_vpn_connection_test.go @@ -5,13 +5,14 @@ import ( "testing" "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/ec2" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" ) -func TestAccAwsVpnConnection_basic(t *testing.T) { +func TestAccAWSVpnConnection_basic(t *testing.T) { resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, @@ -44,8 +45,40 @@ func TestAccAwsVpnConnection_basic(t *testing.T) { } func testAccAwsVpnConnectionDestroy(s *terraform.State) error { - if len(s.RootModule().Resources) > 0 { - return fmt.Errorf("Expected all resources to be gone, but found: %#v", s.RootModule().Resources) + conn := testAccProvider.Meta().(*AWSClient).ec2conn + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_vpn_connection" { + continue + } + + resp, err := conn.DescribeVpnConnections(&ec2.DescribeVpnConnectionsInput{ + VpnConnectionIds: []*string{aws.String(rs.Primary.ID)}, + }) + + if err != nil { + if ec2err, ok := err.(awserr.Error); ok && ec2err.Code() == "InvalidVpnConnectionID.NotFound" { + // not found + return nil + } + return err + } + + var vpn *ec2.VpnConnection + for _, v := range resp.VpnConnections { + if v.VpnConnectionId != nil && *v.VpnConnectionId == rs.Primary.ID { + vpn = v + } + } + + if vpn == nil { + // vpn connection not found + return nil + } + + if vpn.State != nil && *vpn.State == "deleted" { + return nil + } + } return nil diff --git a/builtin/providers/aws/resource_aws_vpn_gateway.go b/builtin/providers/aws/resource_aws_vpn_gateway.go index 4d7860dec5..4a07e716d8 100644 --- a/builtin/providers/aws/resource_aws_vpn_gateway.go +++ b/builtin/providers/aws/resource_aws_vpn_gateway.go @@ -195,7 +195,7 @@ func resourceAwsVpnGatewayAttach(d *schema.ResourceData, meta interface{}) error log.Printf("[DEBUG] Waiting for VPN gateway (%s) to attach", d.Id()) stateConf := &resource.StateChangeConf{ Pending: []string{"detached", "attaching"}, - Target: "attached", + Target: []string{"attached"}, Refresh: vpnGatewayAttachStateRefreshFunc(conn, d.Id(), "available"), Timeout: 1 * time.Minute, } @@ -256,7 +256,7 @@ func resourceAwsVpnGatewayDetach(d *schema.ResourceData, meta interface{}) error log.Printf("[DEBUG] Waiting for VPN gateway (%s) to detach", d.Id()) stateConf := &resource.StateChangeConf{ Pending: []string{"attached", "detaching", "available"}, - Target: "detached", + Target: []string{"detached"}, Refresh: vpnGatewayAttachStateRefreshFunc(conn, d.Id(), "detached"), Timeout: 1 * time.Minute, } diff --git a/builtin/providers/aws/resource_aws_vpn_gateway_test.go b/builtin/providers/aws/resource_aws_vpn_gateway_test.go index d6b01f3134..3a4bb17472 100644 --- a/builtin/providers/aws/resource_aws_vpn_gateway_test.go +++ b/builtin/providers/aws/resource_aws_vpn_gateway_test.go @@ -128,10 +128,21 @@ func testAccCheckVpnGatewayDestroy(s *terraform.State) error { VpnGatewayIds: []*string{aws.String(rs.Primary.ID)}, }) if err == nil { 
- if len(resp.VpnGateways) > 0 { - return fmt.Errorf("still exists") + var v *ec2.VpnGateway + for _, g := range resp.VpnGateways { + if *g.VpnGatewayId == rs.Primary.ID { + v = g + } } + if v == nil { + // wasn't found + return nil + } + + if *v.State != "deleted" { + return fmt.Errorf("Expected VpnGateway to be in deleted state, but was not: %s", v) + } return nil } diff --git a/builtin/providers/aws/resource_vpn_connection_route_test.go b/builtin/providers/aws/resource_vpn_connection_route_test.go index b80feaae66..328638a05a 100644 --- a/builtin/providers/aws/resource_vpn_connection_route_test.go +++ b/builtin/providers/aws/resource_vpn_connection_route_test.go @@ -5,13 +5,14 @@ import ( "testing" "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/ec2" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" ) -func TestAccAwsVpnConnectionRoute_basic(t *testing.T) { +func TestAccAWSVpnConnectionRoute_basic(t *testing.T) { resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, @@ -44,11 +45,57 @@ func TestAccAwsVpnConnectionRoute_basic(t *testing.T) { } func testAccAwsVpnConnectionRouteDestroy(s *terraform.State) error { - if len(s.RootModule().Resources) > 0 { - return fmt.Errorf("Expected all resources to be gone, but found: %#v", s.RootModule().Resources) - } + conn := testAccProvider.Meta().(*AWSClient).ec2conn + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_vpn_connection_route" { + continue + } - return nil + cidrBlock, vpnConnectionId := resourceAwsVpnConnectionRouteParseId(rs.Primary.ID) + + routeFilters := []*ec2.Filter{ + &ec2.Filter{ + Name: aws.String("route.destination-cidr-block"), + Values: []*string{aws.String(cidrBlock)}, + }, + &ec2.Filter{ + Name: aws.String("vpn-connection-id"), + Values: []*string{aws.String(vpnConnectionId)}, + }, + } + + resp, err := conn.DescribeVpnConnections(&ec2.DescribeVpnConnectionsInput{ + Filters: routeFilters, + }) + if err != nil { + if ec2err, ok := err.(awserr.Error); ok && ec2err.Code() == "InvalidVpnConnectionID.NotFound" { + // not found, all good + return nil + } + return err + } + + var vpnc *ec2.VpnConnection + if resp != nil { + // range over the connections and isolate the one we created + for _, v := range resp.VpnConnections { + if *v.VpnConnectionId == vpnConnectionId { + vpnc = v + } + } + + if vpnc == nil { + // vpn connection not found, so that's good... 
+ return nil + } + + if vpnc.State != nil && *vpnc.State == "deleted" { + return nil + } + } + + } + return fmt.Errorf("Fall through error, Check Destroy criteria not met") } func testAccAwsVpnConnectionRoute( diff --git a/builtin/providers/aws/structure.go b/builtin/providers/aws/structure.go index b5ca83a797..5bc684433b 100644 --- a/builtin/providers/aws/structure.go +++ b/builtin/providers/aws/structure.go @@ -4,7 +4,6 @@ import ( "bytes" "encoding/json" "fmt" - "regexp" "sort" "strings" @@ -17,6 +16,7 @@ import ( elasticsearch "github.com/aws/aws-sdk-go/service/elasticsearchservice" "github.com/aws/aws-sdk-go/service/elb" "github.com/aws/aws-sdk-go/service/rds" + "github.com/aws/aws-sdk-go/service/redshift" "github.com/aws/aws-sdk-go/service/route53" "github.com/hashicorp/terraform/helper/schema" ) @@ -233,6 +233,29 @@ func expandParameters(configured []interface{}) ([]*rds.Parameter, error) { return parameters, nil } +func expandRedshiftParameters(configured []interface{}) ([]*redshift.Parameter, error) { + var parameters []*redshift.Parameter + + // Loop over our configured parameters and create + // an array of aws-sdk-go compatabile objects + for _, pRaw := range configured { + data := pRaw.(map[string]interface{}) + + if data["name"].(string) == "" { + continue + } + + p := &redshift.Parameter{ + ParameterName: aws.String(data["name"].(string)), + ParameterValue: aws.String(data["value"].(string)), + } + + parameters = append(parameters, p) + } + + return parameters, nil +} + // Takes the result of flatmap.Expand for an array of parameters and // returns Parameter API compatible objects func expandElastiCacheParameters(configured []interface{}) ([]*elasticache.ParameterNameValue, error) { @@ -397,6 +420,24 @@ func flattenEcsContainerDefinitions(definitions []*ecs.ContainerDefinition) (str // Flattens an array of Parameters into a []map[string]interface{} func flattenParameters(list []*rds.Parameter) []map[string]interface{} { + result := make([]map[string]interface{}, 0, len(list)) + for _, i := range list { + if i.ParameterName != nil { + r := make(map[string]interface{}) + r["name"] = strings.ToLower(*i.ParameterName) + // Default empty string, guard against nil parameter values + r["value"] = "" + if i.ParameterValue != nil { + r["value"] = strings.ToLower(*i.ParameterValue) + } + result = append(result, r) + } + } + return result +} + +// Flattens an array of Redshift Parameters into a []map[string]interface{} +func flattenRedshiftParameters(list []*redshift.Parameter) []map[string]interface{} { result := make([]map[string]interface{}, 0, len(list)) for _, i := range list { result = append(result, map[string]interface{}{ @@ -510,27 +551,6 @@ func expandResourceRecords(recs []interface{}, typeStr string) []*route53.Resour return records } -func validateRdsId(v interface{}, k string) (ws []string, errors []error) { - value := v.(string) - if !regexp.MustCompile(`^[0-9a-z-]+$`).MatchString(value) { - errors = append(errors, fmt.Errorf( - "only lowercase alphanumeric characters and hyphens allowed in %q", k)) - } - if !regexp.MustCompile(`^[a-z]`).MatchString(value) { - errors = append(errors, fmt.Errorf( - "first character of %q must be a letter", k)) - } - if regexp.MustCompile(`--`).MatchString(value) { - errors = append(errors, fmt.Errorf( - "%q cannot contain two consecutive hyphens", k)) - } - if regexp.MustCompile(`-$`).MatchString(value) { - errors = append(errors, fmt.Errorf( - "%q cannot end with a hyphen", k)) - } - return -} - func expandESClusterConfig(m 
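The rewritten flattenParameters below guards both ParameterName and ParameterValue against nil, since RDS can return parameters that have a name but no value. A runnable sketch with a local parameter struct standing in for *rds.Parameter:

```go
package main

import (
	"fmt"
	"strings"
)

type parameter struct {
	Name  *string
	Value *string // nil when the API omits the value
}

// flatten mirrors the hardened flattenParameters: skip entries without a
// name, and default a missing value to "" instead of dereferencing nil.
func flatten(list []parameter) []map[string]interface{} {
	result := make([]map[string]interface{}, 0, len(list))
	for _, p := range list {
		if p.Name == nil {
			continue
		}
		r := map[string]interface{}{"name": strings.ToLower(*p.Name), "value": ""}
		if p.Value != nil {
			r["value"] = strings.ToLower(*p.Value)
		}
		result = append(result, r)
	}
	return result
}

func main() {
	name, val := "character_set_client", "utf8"
	fmt.Println(flatten([]parameter{{Name: &name, Value: &val}, {Name: &name, Value: nil}}))
}
```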
map[string]interface{}) *elasticsearch.ElasticsearchClusterConfig { config := elasticsearch.ElasticsearchClusterConfig{} @@ -645,6 +665,28 @@ func flattenDSVpcSettings( s *directoryservice.DirectoryVpcSettingsDescription) []map[string]interface{} { settings := make(map[string]interface{}, 0) + if s == nil { + return nil + } + + settings["subnet_ids"] = schema.NewSet(schema.HashString, flattenStringList(s.SubnetIds)) + settings["vpc_id"] = *s.VpcId + + return []map[string]interface{}{settings} +} + +func flattenDSConnectSettings( + customerDnsIps []*string, + s *directoryservice.DirectoryConnectSettingsDescription) []map[string]interface{} { + if s == nil { + return nil + } + + settings := make(map[string]interface{}, 0) + + settings["customer_dns_ips"] = schema.NewSet(schema.HashString, flattenStringList(customerDnsIps)) + settings["connect_ips"] = schema.NewSet(schema.HashString, flattenStringList(s.ConnectIps)) + settings["customer_username"] = *s.CustomerUserName settings["subnet_ids"] = schema.NewSet(schema.HashString, flattenStringList(s.SubnetIds)) settings["vpc_id"] = *s.VpcId diff --git a/builtin/providers/aws/structure_test.go b/builtin/providers/aws/structure_test.go index 8e41b631f9..998a25747c 100644 --- a/builtin/providers/aws/structure_test.go +++ b/builtin/providers/aws/structure_test.go @@ -10,6 +10,7 @@ import ( "github.com/aws/aws-sdk-go/service/elasticache" "github.com/aws/aws-sdk-go/service/elb" "github.com/aws/aws-sdk-go/service/rds" + "github.com/aws/aws-sdk-go/service/redshift" "github.com/aws/aws-sdk-go/service/route53" "github.com/hashicorp/terraform/flatmap" "github.com/hashicorp/terraform/helper/schema" @@ -426,7 +427,32 @@ func TestExpandParameters(t *testing.T) { } } -func TestExpandElasticacheParameters(t *testing.T) { +func TestexpandRedshiftParameters(t *testing.T) { + expanded := []interface{}{ + map[string]interface{}{ + "name": "character_set_client", + "value": "utf8", + }, + } + parameters, err := expandRedshiftParameters(expanded) + if err != nil { + t.Fatalf("bad: %#v", err) + } + + expected := &redshift.Parameter{ + ParameterName: aws.String("character_set_client"), + ParameterValue: aws.String("utf8"), + } + + if !reflect.DeepEqual(parameters[0], expected) { + t.Fatalf( + "Got:\n\n%#v\n\nExpected:\n\n%#v\n", + parameters[0], + expected) + } +} + +func TestexpandElasticacheParameters(t *testing.T) { expanded := []interface{}{ map[string]interface{}{ "name": "activerehashing", @@ -481,7 +507,36 @@ func TestFlattenParameters(t *testing.T) { } } -func TestFlattenElasticacheParameters(t *testing.T) { +func TestflattenRedshiftParameters(t *testing.T) { + cases := []struct { + Input []*redshift.Parameter + Output []map[string]interface{} + }{ + { + Input: []*redshift.Parameter{ + &redshift.Parameter{ + ParameterName: aws.String("character_set_client"), + ParameterValue: aws.String("utf8"), + }, + }, + Output: []map[string]interface{}{ + map[string]interface{}{ + "name": "character_set_client", + "value": "utf8", + }, + }, + }, + } + + for _, tc := range cases { + output := flattenRedshiftParameters(tc.Input) + if !reflect.DeepEqual(output, tc.Output) { + t.Fatalf("Got:\n\n%#v\n\nExpected:\n\n%#v", output, tc.Output) + } + } +} + +func TestflattenElasticacheParameters(t *testing.T) { cases := []struct { Input []*elasticache.Parameter Output []map[string]interface{} diff --git a/builtin/providers/aws/tagsRedshift.go b/builtin/providers/aws/tagsRedshift.go new file mode 100644 index 0000000000..06d6fda232 --- /dev/null +++ 
b/builtin/providers/aws/tagsRedshift.go @@ -0,0 +1,27 @@ +package aws + +import ( + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/redshift" +) + +func tagsFromMapRedshift(m map[string]interface{}) []*redshift.Tag { + result := make([]*redshift.Tag, 0, len(m)) + for k, v := range m { + result = append(result, &redshift.Tag{ + Key: aws.String(k), + Value: aws.String(v.(string)), + }) + } + + return result +} + +func tagsToMapRedshift(ts []*redshift.Tag) map[string]string { + result := make(map[string]string) + for _, t := range ts { + result[*t.Key] = *t.Value + } + + return result +} diff --git a/builtin/providers/aws/test-fixtures/cloudformation-template.json b/builtin/providers/aws/test-fixtures/cloudformation-template.json new file mode 100644 index 0000000000..a01c4e5e06 --- /dev/null +++ b/builtin/providers/aws/test-fixtures/cloudformation-template.json @@ -0,0 +1,19 @@ +{ + "Parameters" : { + "VpcCIDR" : { + "Description" : "CIDR to be used for the VPC", + "Type" : "String" + } + }, + "Resources" : { + "MyVPC": { + "Type" : "AWS::EC2::VPC", + "Properties" : { + "CidrBlock" : {"Ref": "VpcCIDR"}, + "Tags" : [ + {"Key": "Name", "Value": "Primary_CF_VPC"} + ] + } + } + } +} diff --git a/builtin/providers/aws/validators.go b/builtin/providers/aws/validators.go new file mode 100644 index 0000000000..ede6b36dd7 --- /dev/null +++ b/builtin/providers/aws/validators.go @@ -0,0 +1,136 @@ +package aws + +import ( + "fmt" + "regexp" + "time" +) + +func validateRdsId(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + if !regexp.MustCompile(`^[0-9a-z-]+$`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "only lowercase alphanumeric characters and hyphens allowed in %q", k)) + } + if !regexp.MustCompile(`^[a-z]`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "first character of %q must be a letter", k)) + } + if regexp.MustCompile(`--`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "%q cannot contain two consecutive hyphens", k)) + } + if regexp.MustCompile(`-$`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "%q cannot end with a hyphen", k)) + } + return +} + +func validateASGScheduleTimestamp(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + _, err := time.Parse(awsAutoscalingScheduleTimeLayout, value) + if err != nil { + errors = append(errors, fmt.Errorf( + "%q cannot be parsed as iso8601 Timestamp Format", value)) + } + + return +} + +// validateTagFilters confirms the "value" component of a tag filter is one of +// AWS's three allowed types. 
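tagsRedshift.go above is the usual per-service pair of tag converters between Terraform's map representation and the SDK's tag slice. The same round-trip with plain strings instead of aws.String pointers, to show the shape on its own:

```go
package main

import "fmt"

type tag struct{ Key, Value string }

// tagsFromMap converts Terraform's tags map into the SDK-style slice.
func tagsFromMap(m map[string]interface{}) []tag {
	result := make([]tag, 0, len(m))
	for k, v := range m {
		result = append(result, tag{Key: k, Value: v.(string)})
	}
	return result
}

// tagsToMap converts back for storing in state.
func tagsToMap(ts []tag) map[string]string {
	result := make(map[string]string, len(ts))
	for _, t := range ts {
		result[t.Key] = t.Value
	}
	return result
}

func main() {
	m := map[string]interface{}{"Name": "primary"}
	fmt.Println(tagsToMap(tagsFromMap(m))) // map[Name:primary]
}
```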
+func validateTagFilters(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + if value != "KEY_ONLY" && value != "VALUE_ONLY" && value != "KEY_AND_VALUE" { + errors = append(errors, fmt.Errorf( + "%q must be one of \"KEY_ONLY\", \"VALUE_ONLY\", or \"KEY_AND_VALUE\"", k)) + } + return +} + +func validateDbParamGroupName(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + if !regexp.MustCompile(`^[0-9a-z-]+$`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "only lowercase alphanumeric characters and hyphens allowed in %q", k)) + } + if !regexp.MustCompile(`^[a-z]`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "first character of %q must be a letter", k)) + } + if regexp.MustCompile(`--`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "%q cannot contain two consecutive hyphens", k)) + } + if regexp.MustCompile(`-$`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "%q cannot end with a hyphen", k)) + } + if len(value) > 255 { + errors = append(errors, fmt.Errorf( + "%q cannot be greater than 255 characters", k)) + } + return + +} + +func validateStreamViewType(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + viewTypes := map[string]bool{ + "KEYS_ONLY": true, + "NEW_IMAGE": true, + "OLD_IMAGE": true, + "NEW_AND_OLD_IMAGES": true, + } + + if !viewTypes[value] { + errors = append(errors, fmt.Errorf("%q be a valid DynamoDB StreamViewType", k)) + } + return +} + +func validateElbName(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + if !regexp.MustCompile(`^[0-9A-Za-z-]+$`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "only alphanumeric characters and hyphens allowed in %q: %q", + k, value)) + } + if len(value) > 32 { + errors = append(errors, fmt.Errorf( + "%q cannot be longer than 32 characters: %q", k, value)) + } + if regexp.MustCompile(`^-`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "%q cannot begin with a hyphen: %q", k, value)) + } + if regexp.MustCompile(`-$`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "%q cannot end with a hyphen: %q", k, value)) + } + return + +} + +func validateEcrRepositoryName(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + if len(value) < 2 { + errors = append(errors, fmt.Errorf( + "%q must be at least 2 characters long: %q", k, value)) + } + if len(value) > 256 { + errors = append(errors, fmt.Errorf( + "%q cannot be longer than 256 characters: %q", k, value)) + } + + // http://docs.aws.amazon.com/AmazonECR/latest/APIReference/API_CreateRepository.html + pattern := `^(?:[a-z0-9]+(?:[._-][a-z0-9]+)*/)*[a-z0-9]+(?:[._-][a-z0-9]+)*$` + if !regexp.MustCompile(pattern).MatchString(value) { + errors = append(errors, fmt.Errorf( + "%q doesn't comply with restrictions (%q): %q", + k, pattern, value)) + } + + return +} diff --git a/builtin/providers/aws/validators_test.go b/builtin/providers/aws/validators_test.go new file mode 100644 index 0000000000..0b2ee011ea --- /dev/null +++ b/builtin/providers/aws/validators_test.go @@ -0,0 +1,45 @@ +package aws + +import ( + "testing" +) + +func TestValidateEcrRepositoryName(t *testing.T) { + validNames := []string{ + "nginx-web-app", + "project-a/nginx-web-app", + "domain.ltd/nginx-web-app", + "3chosome-thing.com/01different-pattern", + "0123456789/999999999", + "double/forward/slash", + "000000000000000", + } + for _, v := range validNames { + _, errors := 
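validateEcrRepositoryName below layers cheap length bounds ahead of the structural regexp taken from the CreateRepository documentation. Extracted here for direct experimentation against the test fixtures:

```go
package main

import (
	"fmt"
	"regexp"
)

// Pattern from the ECR CreateRepository documentation, as referenced above:
// slash-separated components of lowercase alphanumerics, where ., _, and -
// may only join runs of alphanumerics.
var ecrName = regexp.MustCompile(`^(?:[a-z0-9]+(?:[._-][a-z0-9]+)*/)*[a-z0-9]+(?:[._-][a-z0-9]+)*$`)

// validECRName applies the same layered checks: length bounds first,
// then the structural regexp.
func validECRName(v string) bool {
	if len(v) < 2 || len(v) > 256 {
		return false
	}
	return ecrName.MatchString(v)
}

func main() {
	for _, n := range []string{"nginx-web-app", "project-a/nginx-web-app", "double//slash", "i"} {
		fmt.Println(n, validECRName(n)) // true, true, false, false
	}
}
```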
validateEcrRepositoryName(v, "name") + if len(errors) != 0 { + t.Fatalf("%q should be a valid ECR repository name: %q", v, errors) + } + } + + invalidNames := []string{ + // length > 256 + "3cho_some-thing.com/01different.-_pattern01different.-_pattern01diff" + + "erent.-_pattern01different.-_pattern01different.-_pattern01different" + + ".-_pattern01different.-_pattern01different.-_pattern01different.-_pa" + + "ttern01different.-_pattern01different.-_pattern234567", + // length < 2 + "i", + "special@character", + "different+special=character", + "double//slash", + "double..dot", + "/slash-at-the-beginning", + "slash-at-the-end/", + } + for _, v := range invalidNames { + _, errors := validateEcrRepositoryName(v, "name") + if len(errors) == 0 { + t.Fatalf("%q should be an invalid ECR repository name", v) + } + } +} diff --git a/builtin/providers/aws/website_endpoint_url_test.go b/builtin/providers/aws/website_endpoint_url_test.go index bbe282e2cf..2193ff5124 100644 --- a/builtin/providers/aws/website_endpoint_url_test.go +++ b/builtin/providers/aws/website_endpoint_url_test.go @@ -15,6 +15,7 @@ var websiteEndpoints = []struct { {"ap-southeast-1", "bucket-name.s3-website-ap-southeast-1.amazonaws.com"}, {"ap-northeast-1", "bucket-name.s3-website-ap-northeast-1.amazonaws.com"}, {"ap-southeast-2", "bucket-name.s3-website-ap-southeast-2.amazonaws.com"}, + {"ap-northeast-2", "bucket-name.s3-website-ap-northeast-2.amazonaws.com"}, {"sa-east-1", "bucket-name.s3-website-sa-east-1.amazonaws.com"}, } diff --git a/builtin/providers/azure/provider_test.go b/builtin/providers/azure/provider_test.go index d06cf896d2..57e852ce0f 100644 --- a/builtin/providers/azure/provider_test.go +++ b/builtin/providers/azure/provider_test.go @@ -51,12 +51,22 @@ func TestProvider_impl(t *testing.T) { } func testAccPreCheck(t *testing.T) { + sf := os.Getenv("PUBLISH_SETTINGS_FILE") + if sf != "" { + publishSettings, err := ioutil.ReadFile(sf) + if err != nil { + t.Fatalf("Error reading PUBLISH_SETTINGS_FILE path: %s", err) + } + + os.Setenv("AZURE_PUBLISH_SETTINGS", string(publishSettings)) + } + if v := os.Getenv("AZURE_PUBLISH_SETTINGS"); v == "" { subscriptionID := os.Getenv("AZURE_SUBSCRIPTION_ID") certificate := os.Getenv("AZURE_CERTIFICATE") if subscriptionID == "" || certificate == "" { - t.Fatal("either AZURE_PUBLISH_SETTINGS, or AZURE_SUBSCRIPTION_ID " + + t.Fatal("either AZURE_PUBLISH_SETTINGS, PUBLISH_SETTINGS_FILE, or AZURE_SUBSCRIPTION_ID " + "and AZURE_CERTIFICATE must be set for acceptance tests") } } @@ -127,50 +137,20 @@ func TestAzure_validateSettingsFile(t *testing.T) { } func TestAzure_providerConfigure(t *testing.T) { - home, err := homedir.Dir() - if err != nil { - t.Fatalf("Error fetching homedir: %s", err) + rp := Provider() + raw := map[string]interface{}{ + "publish_settings": testAzurePublishSettingsStr, } - fh, err := ioutil.TempFile(home, "tf-test-home") - if err != nil { - t.Fatalf("Error creating homedir-based temporary file: %s", err) - } - defer os.Remove(fh.Name()) - _, err = io.WriteString(fh, testAzurePublishSettingsStr) + rawConfig, err := config.NewRawConfig(raw) if err != nil { t.Fatalf("err: %s", err) } - fh.Close() - r := strings.NewReplacer(home, "~") - homePath := r.Replace(fh.Name()) - - cases := []struct { - SettingsFile string // String of XML or a path to an XML file - NilMeta bool // whether meta is expected to be nil - }{ - {testAzurePublishSettingsStr, false}, - {homePath, false}, - } - - for _, tc := range cases { - rp := Provider() - raw := map[string]interface{}{ -
"settings_file": tc.SettingsFile, - } - - rawConfig, err := config.NewRawConfig(raw) - if err != nil { - t.Fatalf("err: %s", err) - } - - err = rp.Configure(terraform.NewResourceConfig(rawConfig)) - meta := rp.(*schema.Provider).Meta() - if (meta == nil) != tc.NilMeta { - t.Fatalf("expected NilMeta: %t, got meta: %#v, settings_file: %q", - tc.NilMeta, meta, tc.SettingsFile) - } + err = rp.Configure(terraform.NewResourceConfig(rawConfig)) + meta := rp.(*schema.Provider).Meta() + if meta == nil { + t.Fatalf("Expected metadata, got nil: err: %s", err) } } diff --git a/builtin/providers/azure/resource_azure_dns_server_test.go b/builtin/providers/azure/resource_azure_dns_server_test.go index 8b8e335b4b..ac87ebc262 100644 --- a/builtin/providers/azure/resource_azure_dns_server_test.go +++ b/builtin/providers/azure/resource_azure_dns_server_test.go @@ -4,6 +4,7 @@ import ( "fmt" "testing" + "github.com/Azure/azure-sdk-for-go/management" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" ) @@ -98,6 +99,10 @@ func testAccCheckAzureDnsServerDestroy(s *terraform.State) error { netConf, err := vnetClient.GetVirtualNetworkConfiguration() if err != nil { + // This is desirable - if there is no network config there can't be any DNS Servers + if management.IsResourceNotFoundError(err) { + continue + } return fmt.Errorf("Error retrieving networking configuration from Azure: %s", err) } diff --git a/builtin/providers/azure/resource_azure_instance.go b/builtin/providers/azure/resource_azure_instance.go index 8a643931c3..1ad82c1f62 100644 --- a/builtin/providers/azure/resource_azure_instance.go +++ b/builtin/providers/azure/resource_azure_instance.go @@ -622,7 +622,7 @@ func resourceAzureInstanceDelete(d *schema.ResourceData, meta interface{}) error return err } - err = resource.Retry(5*time.Minute, func() error { + err = resource.Retry(15*time.Minute, func() error { exists, err := blobClient.BlobExists( storageContainterName, fmt.Sprintf(osDiskBlobNameFormat, name), ) @@ -682,7 +682,7 @@ func retrieveImageDetails( func retrieveVMImageDetails( vmImageClient virtualmachineimage.Client, label string) (func(*virtualmachine.Role) error, string, []string, error) { - imgs, err := vmImageClient.ListVirtualMachineImages() + imgs, err := vmImageClient.ListVirtualMachineImages(virtualmachineimage.ListParameters{}) if err != nil { return nil, "", nil, fmt.Errorf("Error retrieving image details: %s", err) } @@ -695,7 +695,7 @@ func retrieveVMImageDetails( } configureForImage := func(role *virtualmachine.Role) error { - return vmutils.ConfigureDeploymentFromVMImage( + return vmutils.ConfigureDeploymentFromPublishedVMImage( role, img.Name, "", diff --git a/builtin/providers/azure/resource_azure_instance_test.go b/builtin/providers/azure/resource_azure_instance_test.go index 1ed9fffb83..adbd5bfd9a 100644 --- a/builtin/providers/azure/resource_azure_instance_test.go +++ b/builtin/providers/azure/resource_azure_instance_test.go @@ -94,7 +94,7 @@ func TestAccAzureInstance_advanced(t *testing.T) { resource.TestCheckResourceAttr( "azure_instance.foo", "subnet", "subnet1"), resource.TestCheckResourceAttr( - "azure_instance.foo", "virtual_network", "terraform-vnet"), + "azure_instance.foo", "virtual_network", "terraform-vnet-advanced-test"), resource.TestCheckResourceAttr( "azure_instance.foo", "security_group", "terraform-security-group1"), resource.TestCheckResourceAttr( @@ -128,7 +128,7 @@ func TestAccAzureInstance_update(t *testing.T) { resource.TestCheckResourceAttr( 
"azure_instance.foo", "subnet", "subnet1"), resource.TestCheckResourceAttr( - "azure_instance.foo", "virtual_network", "terraform-vnet"), + "azure_instance.foo", "virtual_network", "terraform-vnet-advanced-test"), resource.TestCheckResourceAttr( "azure_instance.foo", "security_group", "terraform-security-group1"), resource.TestCheckResourceAttr( @@ -145,7 +145,7 @@ func TestAccAzureInstance_update(t *testing.T) { resource.TestCheckResourceAttr( "azure_instance.foo", "size", "Basic_A2"), resource.TestCheckResourceAttr( - "azure_instance.foo", "security_group", "terraform-security-group2"), + "azure_instance.foo", "security_group", "terraform-security-update-group2"), resource.TestCheckResourceAttr( "azure_instance.foo", "endpoint.1814039778.public_port", "3389"), resource.TestCheckResourceAttr( @@ -224,7 +224,64 @@ func testAccCheckAzureInstanceAdvancedAttributes( return fmt.Errorf("Bad name: %s", dpmt.Name) } - if dpmt.VirtualNetworkName != "terraform-vnet" { + if dpmt.VirtualNetworkName != "terraform-vnet-advanced-test" { + return fmt.Errorf("Bad virtual network: %s", dpmt.VirtualNetworkName) + } + + if len(dpmt.RoleList) != 1 { + return fmt.Errorf( + "Instance %s has an unexpected number of roles: %d", dpmt.Name, len(dpmt.RoleList)) + } + + if dpmt.RoleList[0].RoleSize != "Basic_A1" { + return fmt.Errorf("Bad size: %s", dpmt.RoleList[0].RoleSize) + } + + for _, c := range dpmt.RoleList[0].ConfigurationSets { + if c.ConfigurationSetType == virtualmachine.ConfigurationSetTypeNetwork { + if len(c.InputEndpoints) != 1 { + return fmt.Errorf( + "Instance %s has an unexpected number of endpoints %d", + dpmt.Name, len(c.InputEndpoints)) + } + + if c.InputEndpoints[0].Name != "RDP" { + return fmt.Errorf("Bad endpoint name: %s", c.InputEndpoints[0].Name) + } + + if c.InputEndpoints[0].Port != 3389 { + return fmt.Errorf("Bad endpoint port: %d", c.InputEndpoints[0].Port) + } + + if len(c.SubnetNames) != 1 { + return fmt.Errorf( + "Instance %s has an unexpected number of associated subnets %d", + dpmt.Name, len(c.SubnetNames)) + } + + if c.SubnetNames[0] != "subnet1" { + return fmt.Errorf("Bad subnet: %s", c.SubnetNames[0]) + } + + if c.NetworkSecurityGroup != "terraform-security-group1" { + return fmt.Errorf("Bad security group: %s", c.NetworkSecurityGroup) + } + } + } + + return nil + } +} + +func testAccCheckAzureInstanceAdvancedUpdatedAttributes( + dpmt *virtualmachine.DeploymentResponse) resource.TestCheckFunc { + return func(s *terraform.State) error { + + if dpmt.Name != "terraform-test1" { + return fmt.Errorf("Bad name: %s", dpmt.Name) + } + + if dpmt.VirtualNetworkName != "terraform-vnet-update-test" { return fmt.Errorf("Bad virtual network: %s", dpmt.VirtualNetworkName) } @@ -281,7 +338,7 @@ func testAccCheckAzureInstanceUpdatedAttributes( return fmt.Errorf("Bad name: %s", dpmt.Name) } - if dpmt.VirtualNetworkName != "terraform-vnet" { + if dpmt.VirtualNetworkName != "terraform-vnet-update-test" { return fmt.Errorf("Bad virtual network: %s", dpmt.VirtualNetworkName) } @@ -320,7 +377,7 @@ func testAccCheckAzureInstanceUpdatedAttributes( return fmt.Errorf("Bad subnet: %s", c.SubnetNames[0]) } - if c.NetworkSecurityGroup != "terraform-security-group2" { + if c.NetworkSecurityGroup != "terraform-security-update-group2" { return fmt.Errorf("Bad security group: %s", c.NetworkSecurityGroup) } } @@ -411,7 +468,7 @@ resource "azure_instance" "foo" { var testAccAzureInstance_advanced = fmt.Sprintf(` resource "azure_virtual_network" "foo" { - name = "terraform-vnet" + name = 
"terraform-vnet-advanced-test" address_space = ["10.1.2.0/24"] location = "West US" @@ -467,7 +524,7 @@ resource "azure_instance" "foo" { var testAccAzureInstance_update = fmt.Sprintf(` resource "azure_virtual_network" "foo" { - name = "terraform-vnet" + name = "terraform-vnet-update-test" address_space = ["10.1.2.0/24"] location = "West US" @@ -501,7 +558,7 @@ resource "azure_security_group_rule" "foo" { } resource "azure_security_group" "bar" { - name = "terraform-security-group2" + name = "terraform-security-update-group2" location = "West US" } diff --git a/builtin/providers/azure/resource_azure_local_network_test.go b/builtin/providers/azure/resource_azure_local_network_test.go index 2f9f0fdda7..18e09de34c 100644 --- a/builtin/providers/azure/resource_azure_local_network_test.go +++ b/builtin/providers/azure/resource_azure_local_network_test.go @@ -4,6 +4,7 @@ import ( "fmt" "testing" + "github.com/Azure/azure-sdk-for-go/management" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" ) @@ -109,6 +110,10 @@ func testAccAzureLocalNetworkConnectionDestroyed(s *terraform.State) error { netConf, err := vnetClient.GetVirtualNetworkConfiguration() if err != nil { + // This is desirable - if there is no network config there can be no gateways + if management.IsResourceNotFoundError(err) { + continue + } return err } diff --git a/builtin/providers/azure/resource_azure_sql_database_server_firewall_rule.go b/builtin/providers/azure/resource_azure_sql_database_server_firewall_rule.go index a5cb0b2147..06df80ce14 100644 --- a/builtin/providers/azure/resource_azure_sql_database_server_firewall_rule.go +++ b/builtin/providers/azure/resource_azure_sql_database_server_firewall_rule.go @@ -209,6 +209,9 @@ func resourceAzureSqlDatabaseServerFirewallRuleDelete(d *schema.ResourceData, me // go ahead and delete the rule: log.Printf("[INFO] Issuing deletion of Azure Database Server Firewall Rule %q in Server %q.", name, serverName) if err := sqlClient.DeleteFirewallRule(serverName, name); err != nil { + if strings.Contains(err.Error(), "Cannot open server") { + break + } return fmt.Errorf("Error deleting Azure Database Server Firewall Rule %q for Server %q: %s", name, serverName, err) } diff --git a/builtin/providers/azure/resource_azure_sql_database_server_firewall_rule_test.go b/builtin/providers/azure/resource_azure_sql_database_server_firewall_rule_test.go index 9202be7e10..ff64f3b95a 100644 --- a/builtin/providers/azure/resource_azure_sql_database_server_firewall_rule_test.go +++ b/builtin/providers/azure/resource_azure_sql_database_server_firewall_rule_test.go @@ -2,8 +2,11 @@ package azure import ( "fmt" + "strings" "testing" + "time" + "github.com/Azure/azure-sdk-for-go/management/sql" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" ) @@ -45,11 +48,11 @@ func TestAccAzureSqlDatabaseServerFirewallRuleAdvanced(t *testing.T) { Check: resource.ComposeTestCheckFunc( testAccAzureSqlDatabaseServerGetNames, testAccAzureSqlDatabaseServersNumber(2), - testAccAzureDatabaseServerFirewallRuleExists(name1, testAccAzureSqlServerNames), + //testAccAzureDatabaseServerFirewallRuleExists(name1, testAccAzureSqlServerNames), resource.TestCheckResourceAttr(name1, "name", "terraform-testing-rule1"), resource.TestCheckResourceAttr(name1, "start_ip", "10.0.0.0"), resource.TestCheckResourceAttr(name1, "end_ip", "10.0.0.255"), - testAccAzureDatabaseServerFirewallRuleExists(name2, testAccAzureSqlServerNames), + 
//testAccAzureDatabaseServerFirewallRuleExists(name2, testAccAzureSqlServerNames), resource.TestCheckResourceAttr(name2, "name", "terraform-testing-rule2"), resource.TestCheckResourceAttr(name2, "start_ip", "200.0.0.0"), resource.TestCheckResourceAttr(name2, "end_ip", "200.255.255.255"), @@ -73,11 +76,11 @@ func TestAccAzureSqlDatabaseServerFirewallRuleUpdate(t *testing.T) { Check: resource.ComposeTestCheckFunc( testAccAzureSqlDatabaseServerGetNames, testAccAzureSqlDatabaseServersNumber(2), - testAccAzureDatabaseServerFirewallRuleExists(name1, testAccAzureSqlServerNames), + //testAccAzureDatabaseServerFirewallRuleExists(name1, testAccAzureSqlServerNames), resource.TestCheckResourceAttr(name1, "name", "terraform-testing-rule1"), resource.TestCheckResourceAttr(name1, "start_ip", "10.0.0.0"), resource.TestCheckResourceAttr(name1, "end_ip", "10.0.0.255"), - testAccAzureDatabaseServerFirewallRuleExists(name2, testAccAzureSqlServerNames), + //testAccAzureDatabaseServerFirewallRuleExists(name2, testAccAzureSqlServerNames), resource.TestCheckResourceAttr(name2, "name", "terraform-testing-rule2"), resource.TestCheckResourceAttr(name2, "start_ip", "200.0.0.0"), resource.TestCheckResourceAttr(name2, "end_ip", "200.255.255.255"), @@ -88,7 +91,7 @@ func TestAccAzureSqlDatabaseServerFirewallRuleUpdate(t *testing.T) { Check: resource.ComposeTestCheckFunc( testAccAzureSqlDatabaseServerGetNames, testAccAzureSqlDatabaseServersNumber(2), - testAccAzureDatabaseServerFirewallRuleExists(name1, testAccAzureSqlServerNames), + //testAccAzureDatabaseServerFirewallRuleExists(name1, testAccAzureSqlServerNames), resource.TestCheckResourceAttr(name1, "name", "terraform-testing-rule1"), resource.TestCheckResourceAttr(name1, "start_ip", "11.0.0.0"), resource.TestCheckResourceAttr(name1, "end_ip", "11.0.0.255"), @@ -100,32 +103,42 @@ func TestAccAzureSqlDatabaseServerFirewallRuleUpdate(t *testing.T) { func testAccAzureDatabaseServerFirewallRuleExists(name string, servers []string) resource.TestCheckFunc { return func(s *terraform.State) error { - resource, ok := s.RootModule().Resources[name] + res, ok := s.RootModule().Resources[name] if !ok { return fmt.Errorf("Azure Database Server Firewall Rule %q doesn't exist.", name) } - if resource.Primary.ID == "" { - return fmt.Errorf("Azure Database Server Firewall Rule %q resource ID not set.", name) + if res.Primary.ID == "" { + return fmt.Errorf("Azure Database Server Firewall Rule %q resource ID not set.", name) } sqlClient := testAccProvider.Meta().(*Client).sqlClient for _, server := range servers { - rules, err := sqlClient.ListFirewallRules(server) + var rules sql.ListFirewallRulesResponse + + err := resource.Retry(15*time.Minute, func() error { + var erri error + rules, erri = sqlClient.ListFirewallRules(server) + if erri != nil { + return fmt.Errorf("Error listing Azure Database Server Firewall Rules for Server %q: %s", server, erri) + } + + return nil + }) if err != nil { - return fmt.Errorf("Error listing Azure Database Server Firewall Rules for Server %q: %s", server, err) + return err } + var found bool + for _, rule := range rules.FirewallRules { - if rule.Name == resource.Primary.ID { + if rule.Name == res.Primary.ID { found = true break } } if !found { - return fmt.Errorf("Azure Database Server Firewall Rule %q doesn't exists on server %q.", resource.Primary.ID, server) + return fmt.Errorf("Azure Database Server Firewall Rule %q doesn't exist on server %q.", res.Primary.ID, server) } } @@ -149,6 +162,10 @@ func testAccAzureDatabaseServerFirewallRuleDeleted(servers
[]string) resource.Te for _, server := range servers { rules, err := sqlClient.ListFirewallRules(server) if err != nil { + // A "Cannot open server" error means the server itself is already gone, and its rules with it. + if strings.Contains(err.Error(), "Cannot open server") { + return nil + } return fmt.Errorf("Error listing Azure Database Server Firewall Rules for Server %q: %s", server, err) } diff --git a/builtin/providers/azure/resource_azure_sql_database_service_test.go b/builtin/providers/azure/resource_azure_sql_database_service_test.go index 24d8657748..31ea8990e6 100644 --- a/builtin/providers/azure/resource_azure_sql_database_service_test.go +++ b/builtin/providers/azure/resource_azure_sql_database_service_test.go @@ -2,6 +2,7 @@ package azure import ( "fmt" + "strings" "testing" "github.com/hashicorp/terraform/helper/resource" @@ -146,6 +147,10 @@ func testAccCheckAzureSqlDatabaseServiceDeleted(s *terraform.State) error { sqlClient := testAccProvider.Meta().(*Client).sqlClient dbs, err := sqlClient.ListDatabases(*testAccAzureSqlServerName) if err != nil { + // A "Cannot open server" error means the server itself is already gone, and its databases with it. + if strings.Contains(err.Error(), "Cannot open server") { + return nil + } return fmt.Errorf("Error issuing Azure SQL Service list request: %s", err) } diff --git a/builtin/providers/azure/resource_azure_storage_service_test.go b/builtin/providers/azure/resource_azure_storage_service_test.go index e3ac588d23..4067e2a94c 100644 --- a/builtin/providers/azure/resource_azure_storage_service_test.go +++ b/builtin/providers/azure/resource_azure_storage_service_test.go @@ -20,7 +20,7 @@ func TestAccAzureStorageService(t *testing.T) { Config: testAccAzureStorageServiceConfig, Check: resource.ComposeTestCheckFunc( testAccAzureStorageServiceExists(name), - resource.TestCheckResourceAttr(name, "name", "tftesting"), + resource.TestCheckResourceAttr(name, "name", "tftestingdis"), resource.TestCheckResourceAttr(name, "location", "West US"), resource.TestCheckResourceAttr(name, "description", "very descriptive"), resource.TestCheckResourceAttr(name, "account_type", "Standard_LRS"), @@ -70,7 +70,7 @@ func testAccAzureStorageServiceDestroyed(s *terraform.State) error { var testAccAzureStorageServiceConfig = ` resource "azure_storage_service" "foo" { # NOTE: storage service names constrained to lowercase letters only.
- name = "tftesting" + name = "tftestingdis" location = "West US" description = "very descriptive" account_type = "Standard_LRS" diff --git a/builtin/providers/azure/resource_azure_virtual_network_test.go b/builtin/providers/azure/resource_azure_virtual_network_test.go index f6d637f16c..716556bbd4 100644 --- a/builtin/providers/azure/resource_azure_virtual_network_test.go +++ b/builtin/providers/azure/resource_azure_virtual_network_test.go @@ -4,6 +4,7 @@ import ( "fmt" "testing" + "github.com/Azure/azure-sdk-for-go/management" "github.com/Azure/azure-sdk-for-go/management/virtualnetwork" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" @@ -185,6 +186,10 @@ func testAccCheckAzureVirtualNetworkDestroy(s *terraform.State) error { nc, err := vnetClient.GetVirtualNetworkConfiguration() if err != nil { + if management.IsResourceNotFoundError(err) { + // This is desirable - no configuration = no networks + continue + } return fmt.Errorf("Error retrieving Virtual Network Configuration: %s", err) } diff --git a/builtin/providers/azurerm/config.go b/builtin/providers/azurerm/config.go new file mode 100644 index 0000000000..de4da13825 --- /dev/null +++ b/builtin/providers/azurerm/config.go @@ -0,0 +1,283 @@ +package azurerm + +import ( + "fmt" + "log" + "net/http" + "time" + + "github.com/Azure/azure-sdk-for-go/Godeps/_workspace/src/github.com/Azure/go-autorest/autorest" + "github.com/Azure/azure-sdk-for-go/Godeps/_workspace/src/github.com/Azure/go-autorest/autorest/azure" + "github.com/Azure/azure-sdk-for-go/arm/cdn" + "github.com/Azure/azure-sdk-for-go/arm/compute" + "github.com/Azure/azure-sdk-for-go/arm/network" + "github.com/Azure/azure-sdk-for-go/arm/resources/resources" + "github.com/Azure/azure-sdk-for-go/arm/scheduler" + "github.com/Azure/azure-sdk-for-go/arm/storage" + "github.com/hashicorp/terraform/terraform" +) + +// ArmClient contains the handles to all the specific Azure Resource Manager +// resource classes' respective clients. 
+type ArmClient struct { + availSetClient compute.AvailabilitySetsClient + usageOpsClient compute.UsageOperationsClient + vmExtensionImageClient compute.VirtualMachineExtensionImagesClient + vmExtensionClient compute.VirtualMachineExtensionsClient + vmImageClient compute.VirtualMachineImagesClient + vmClient compute.VirtualMachinesClient + + appGatewayClient network.ApplicationGatewaysClient + ifaceClient network.InterfacesClient + loadBalancerClient network.LoadBalancersClient + localNetConnClient network.LocalNetworkGatewaysClient + publicIPClient network.PublicIPAddressesClient + secGroupClient network.SecurityGroupsClient + secRuleClient network.SecurityRulesClient + subnetClient network.SubnetsClient + netUsageClient network.UsagesClient + vnetGatewayConnectionsClient network.VirtualNetworkGatewayConnectionsClient + vnetGatewayClient network.VirtualNetworkGatewaysClient + vnetClient network.VirtualNetworksClient + routeTablesClient network.RouteTablesClient + routesClient network.RoutesClient + + cdnProfilesClient cdn.ProfilesClient + cdnEndpointsClient cdn.EndpointsClient + + providers resources.ProvidersClient + resourceGroupClient resources.GroupsClient + tagsClient resources.TagsClient + + jobsClient scheduler.JobsClient + jobsCollectionsClient scheduler.JobCollectionsClient + + storageServiceClient storage.AccountsClient + storageUsageClient storage.UsageOperationsClient +} + +func withRequestLogging() autorest.SendDecorator { + return func(s autorest.Sender) autorest.Sender { + return autorest.SenderFunc(func(r *http.Request) (*http.Response, error) { + log.Printf("[DEBUG] Sending Azure RM Request %q to %q\n", r.Method, r.URL) + resp, err := s.Do(r) + if resp != nil { + log.Printf("[DEBUG] Received Azure RM Response status code %s for %s\n", resp.Status, r.URL) + } else { + log.Printf("[DEBUG] Request to %s completed with no response", r.URL) + } + return resp, err + }) + } +} + +func withPollWatcher() autorest.SendDecorator { + return func(s autorest.Sender) autorest.Sender { + return autorest.SenderFunc(func(r *http.Request) (*http.Response, error) { + log.Printf("[DEBUG] Sending Azure RM Request %q to %q\n", r.Method, r.URL) + resp, err := s.Do(r) + if resp == nil { + return resp, err + } + log.Printf("[DEBUG] Received Azure RM Response status code %s for %s\n", resp.Status, r.URL) + if autorest.ResponseRequiresPolling(resp) { + log.Printf("[DEBUG] Azure RM request will poll %s after %d seconds\n", + autorest.GetPollingLocation(resp), + int(autorest.GetPollingDelay(resp, time.Duration(0))/time.Second)) + } + return resp, err + }) + } +} + +func setUserAgent(client *autorest.Client) { + var version string + if terraform.VersionPrerelease != "" { + version = fmt.Sprintf("%s-%s", terraform.Version, terraform.VersionPrerelease) + } else { + version = terraform.Version + } + + client.UserAgent = fmt.Sprintf("HashiCorp-Terraform-v%s", version) +} + +// getArmClient is a helper method which returns a fully instantiated +// *ArmClient based on the Config's current settings. +func (c *Config) getArmClient() (*ArmClient, error) { + spt, err := azure.NewServicePrincipalToken(c.ClientID, c.ClientSecret, c.TenantID, azure.AzureResourceManagerScope) + if err != nil { + return nil, err + } + + // client declarations: + client := ArmClient{} + + // NOTE: these declarations are kept separate for clarity, in case individual + // clients later need custom Responders/PollingModes etc.
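//
// Each block below repeats the same four steps: construct the client, set
// the user agent, attach the service principal token as the Authorizer, and
// wrap the Sender with request logging. A sketch of the shared shape, were
// it ever factored out (illustrative only, not part of this change):
//
//	configure := func(cl *autorest.Client) {
//		setUserAgent(cl)
//		cl.Authorizer = spt
//		cl.Sender = autorest.CreateSender(withRequestLogging())
//	}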
+ asc := compute.NewAvailabilitySetsClient(c.SubscriptionID) + setUserAgent(&asc.Client) + asc.Authorizer = spt + asc.Sender = autorest.CreateSender(withRequestLogging()) + client.availSetClient = asc + + uoc := compute.NewUsageOperationsClient(c.SubscriptionID) + setUserAgent(&uoc.Client) + uoc.Authorizer = spt + uoc.Sender = autorest.CreateSender(withRequestLogging()) + client.usageOpsClient = uoc + + vmeic := compute.NewVirtualMachineExtensionImagesClient(c.SubscriptionID) + setUserAgent(&vmeic.Client) + vmeic.Authorizer = spt + vmeic.Sender = autorest.CreateSender(withRequestLogging()) + client.vmExtensionImageClient = vmeic + + vmec := compute.NewVirtualMachineExtensionsClient(c.SubscriptionID) + setUserAgent(&vmec.Client) + vmec.Authorizer = spt + vmec.Sender = autorest.CreateSender(withRequestLogging()) + client.vmExtensionClient = vmec + + vmic := compute.NewVirtualMachineImagesClient(c.SubscriptionID) + setUserAgent(&vmic.Client) + vmic.Authorizer = spt + vmic.Sender = autorest.CreateSender(withRequestLogging()) + client.vmImageClient = vmic + + vmc := compute.NewVirtualMachinesClient(c.SubscriptionID) + setUserAgent(&vmc.Client) + vmc.Authorizer = spt + vmc.Sender = autorest.CreateSender(withRequestLogging()) + client.vmClient = vmc + + agc := network.NewApplicationGatewaysClient(c.SubscriptionID) + setUserAgent(&agc.Client) + agc.Authorizer = spt + agc.Sender = autorest.CreateSender(withRequestLogging()) + client.appGatewayClient = agc + + ifc := network.NewInterfacesClient(c.SubscriptionID) + setUserAgent(&ifc.Client) + ifc.Authorizer = spt + ifc.Sender = autorest.CreateSender(withRequestLogging()) + client.ifaceClient = ifc + + lbc := network.NewLoadBalancersClient(c.SubscriptionID) + setUserAgent(&lbc.Client) + lbc.Authorizer = spt + lbc.Sender = autorest.CreateSender(withRequestLogging()) + client.loadBalancerClient = lbc + + lgc := network.NewLocalNetworkGatewaysClient(c.SubscriptionID) + setUserAgent(&lgc.Client) + lgc.Authorizer = spt + lgc.Sender = autorest.CreateSender(withRequestLogging()) + client.localNetConnClient = lgc + + pipc := network.NewPublicIPAddressesClient(c.SubscriptionID) + setUserAgent(&pipc.Client) + pipc.Authorizer = spt + pipc.Sender = autorest.CreateSender(withRequestLogging()) + client.publicIPClient = pipc + + sgc := network.NewSecurityGroupsClient(c.SubscriptionID) + setUserAgent(&sgc.Client) + sgc.Authorizer = spt + sgc.Sender = autorest.CreateSender(withRequestLogging()) + client.secGroupClient = sgc + + src := network.NewSecurityRulesClient(c.SubscriptionID) + setUserAgent(&src.Client) + src.Authorizer = spt + src.Sender = autorest.CreateSender(withRequestLogging()) + client.secRuleClient = src + + snc := network.NewSubnetsClient(c.SubscriptionID) + setUserAgent(&snc.Client) + snc.Authorizer = spt + snc.Sender = autorest.CreateSender(withRequestLogging()) + client.subnetClient = snc + + vgcc := network.NewVirtualNetworkGatewayConnectionsClient(c.SubscriptionID) + setUserAgent(&vgcc.Client) + vgcc.Authorizer = spt + vgcc.Sender = autorest.CreateSender(withRequestLogging()) + client.vnetGatewayConnectionsClient = vgcc + + vgc := network.NewVirtualNetworkGatewaysClient(c.SubscriptionID) + setUserAgent(&vgc.Client) + vgc.Authorizer = spt + vgc.Sender = autorest.CreateSender(withRequestLogging()) + client.vnetGatewayClient = vgc + + vnc := network.NewVirtualNetworksClient(c.SubscriptionID) + setUserAgent(&vnc.Client) + vnc.Authorizer = spt + vnc.Sender = autorest.CreateSender(withRequestLogging()) + client.vnetClient = vnc + + rtc := 
network.NewRouteTablesClient(c.SubscriptionID) + setUserAgent(&rtc.Client) + rtc.Authorizer = spt + rtc.Sender = autorest.CreateSender(withRequestLogging()) + client.routeTablesClient = rtc + + rc := network.NewRoutesClient(c.SubscriptionID) + setUserAgent(&rc.Client) + rc.Authorizer = spt + rc.Sender = autorest.CreateSender(withRequestLogging()) + client.routesClient = rc + + rgc := resources.NewGroupsClient(c.SubscriptionID) + setUserAgent(&rgc.Client) + rgc.Authorizer = spt + rgc.Sender = autorest.CreateSender(withRequestLogging()) + client.resourceGroupClient = rgc + + pc := resources.NewProvidersClient(c.SubscriptionID) + setUserAgent(&pc.Client) + pc.Authorizer = spt + pc.Sender = autorest.CreateSender(withRequestLogging()) + client.providers = pc + + tc := resources.NewTagsClient(c.SubscriptionID) + setUserAgent(&tc.Client) + tc.Authorizer = spt + tc.Sender = autorest.CreateSender(withRequestLogging()) + client.tagsClient = tc + + jc := scheduler.NewJobsClient(c.SubscriptionID) + setUserAgent(&jc.Client) + jc.Authorizer = spt + jc.Sender = autorest.CreateSender(withRequestLogging()) + client.jobsClient = jc + + jcc := scheduler.NewJobCollectionsClient(c.SubscriptionID) + setUserAgent(&jcc.Client) + jcc.Authorizer = spt + jcc.Sender = autorest.CreateSender(withRequestLogging()) + client.jobsCollectionsClient = jcc + + ssc := storage.NewAccountsClient(c.SubscriptionID) + setUserAgent(&ssc.Client) + ssc.Authorizer = spt + ssc.Sender = autorest.CreateSender(withRequestLogging(), withPollWatcher()) + client.storageServiceClient = ssc + + suc := storage.NewUsageOperationsClient(c.SubscriptionID) + setUserAgent(&suc.Client) + suc.Authorizer = spt + suc.Sender = autorest.CreateSender(withRequestLogging()) + client.storageUsageClient = suc + + cpc := cdn.NewProfilesClient(c.SubscriptionID) + setUserAgent(&cpc.Client) + cpc.Authorizer = spt + cpc.Sender = autorest.CreateSender(withRequestLogging()) + client.cdnProfilesClient = cpc + + cec := cdn.NewEndpointsClient(c.SubscriptionID) + setUserAgent(&cec.Client) + cec.Authorizer = spt + cec.Sender = autorest.CreateSender(withRequestLogging()) + client.cdnEndpointsClient = cec + + return &client, nil +} diff --git a/builtin/providers/azurerm/network_security_rule.go b/builtin/providers/azurerm/network_security_rule.go new file mode 100644 index 0000000000..f7b41d559d --- /dev/null +++ b/builtin/providers/azurerm/network_security_rule.go @@ -0,0 +1,46 @@ +package azurerm + +import ( + "fmt" + "strings" +) + +func validateNetworkSecurityRuleProtocol(v interface{}, k string) (ws []string, errors []error) { + value := strings.ToLower(v.(string)) + protocols := map[string]bool{ + "tcp": true, + "udp": true, + "*": true, + } + + if !protocols[value] { + errors = append(errors, fmt.Errorf("Network Security Rule Protocol can only be Tcp, Udp or *")) + } + return +} + +func validateNetworkSecurityRuleAccess(v interface{}, k string) (ws []string, errors []error) { + value := strings.ToLower(v.(string)) + accessTypes := map[string]bool{ + "allow": true, + "deny": true, + } + + if !accessTypes[value] { + errors = append(errors, fmt.Errorf("Network Security Rule Access can only be Allow or Deny")) + } + return +} + +func validateNetworkSecurityRuleDirection(v interface{}, k string) (ws []string, errors []error) { + value := strings.ToLower(v.(string)) + directions := map[string]bool{ + "inbound": true, + "outbound": true, + } + + if !directions[value] { + errors = append(errors, fmt.Errorf("Network Security Rule Directions can only be Inbound or Outbound")) + 
} + return +} diff --git a/builtin/providers/azurerm/network_security_rule_test.go b/builtin/providers/azurerm/network_security_rule_test.go new file mode 100644 index 0000000000..f1f71e8f29 --- /dev/null +++ b/builtin/providers/azurerm/network_security_rule_test.go @@ -0,0 +1,115 @@ +package azurerm + +import "testing" + +func TestResourceAzureRMNetworkSecurityRuleProtocol_validation(t *testing.T) { + cases := []struct { + Value string + ErrCount int + }{ + { + Value: "Random", + ErrCount: 1, + }, + { + Value: "tcp", + ErrCount: 0, + }, + { + Value: "TCP", + ErrCount: 0, + }, + { + Value: "*", + ErrCount: 0, + }, + { + Value: "Udp", + ErrCount: 0, + }, + { + Value: "Tcp", + ErrCount: 0, + }, + } + + for _, tc := range cases { + _, errors := validateNetworkSecurityRuleProtocol(tc.Value, "azurerm_network_security_rule") + + if len(errors) != tc.ErrCount { + t.Fatalf("Expected the Azure RM Network Security Rule protocol to trigger a validation error") + } + } +} + +func TestResourceAzureRMNetworkSecurityRuleAccess_validation(t *testing.T) { + cases := []struct { + Value string + ErrCount int + }{ + { + Value: "Random", + ErrCount: 1, + }, + { + Value: "Allow", + ErrCount: 0, + }, + { + Value: "Deny", + ErrCount: 0, + }, + { + Value: "ALLOW", + ErrCount: 0, + }, + { + Value: "deny", + ErrCount: 0, + }, + } + + for _, tc := range cases { + _, errors := validateNetworkSecurityRuleAccess(tc.Value, "azurerm_network_security_rule") + + if len(errors) != tc.ErrCount { + t.Fatalf("Expected the Azure RM Network Security Rule access to trigger a validation error") + } + } +} + +func TestResourceAzureRMNetworkSecurityRuleDirection_validation(t *testing.T) { + cases := []struct { + Value string + ErrCount int + }{ + { + Value: "Random", + ErrCount: 1, + }, + { + Value: "Inbound", + ErrCount: 0, + }, + { + Value: "Outbound", + ErrCount: 0, + }, + { + Value: "INBOUND", + ErrCount: 0, + }, + { + Value: "Inbound", + ErrCount: 0, + }, + } + + for _, tc := range cases { + _, errors := validateNetworkSecurityRuleDirection(tc.Value, "azurerm_network_security_rule") + + if len(errors) != tc.ErrCount { + t.Fatalf("Expected the Azure RM Network Security Rule direction to trigger a validation error") + } + } +} diff --git a/builtin/providers/azurerm/provider.go b/builtin/providers/azurerm/provider.go new file mode 100644 index 0000000000..56911cfd6a --- /dev/null +++ b/builtin/providers/azurerm/provider.go @@ -0,0 +1,162 @@ +package azurerm + +import ( + "fmt" + "log" + "net/http" + "strings" + + "github.com/Azure/azure-sdk-for-go/Godeps/_workspace/src/github.com/Azure/go-autorest/autorest" + "github.com/hashicorp/terraform/helper/mutexkv" + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/terraform" +) + +// Provider returns a terraform.ResourceProvider. 
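//
// A provider like this is ultimately exposed to Terraform core as a plugin
// binary. A minimal sketch of such an entry point (illustrative; the plugin
// wiring itself is not part of this change):
//
//	package main
//
//	import (
//		"github.com/hashicorp/terraform/builtin/providers/azurerm"
//		"github.com/hashicorp/terraform/plugin"
//	)
//
//	func main() {
//		plugin.Serve(&plugin.ServeOpts{
//			ProviderFunc: azurerm.Provider,
//		})
//	}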
+func Provider() terraform.ResourceProvider { + return &schema.Provider{ + Schema: map[string]*schema.Schema{ + "subscription_id": &schema.Schema{ + Type: schema.TypeString, + Required: true, + DefaultFunc: schema.EnvDefaultFunc("ARM_SUBSCRIPTION_ID", ""), + }, + + "client_id": &schema.Schema{ + Type: schema.TypeString, + Required: true, + DefaultFunc: schema.EnvDefaultFunc("ARM_CLIENT_ID", ""), + }, + + "client_secret": &schema.Schema{ + Type: schema.TypeString, + Required: true, + DefaultFunc: schema.EnvDefaultFunc("ARM_CLIENT_SECRET", ""), + }, + + "tenant_id": &schema.Schema{ + Type: schema.TypeString, + Required: true, + DefaultFunc: schema.EnvDefaultFunc("ARM_TENANT_ID", ""), + }, + }, + + ResourcesMap: map[string]*schema.Resource{ + "azurerm_resource_group": resourceArmResourceGroup(), + "azurerm_virtual_network": resourceArmVirtualNetwork(), + "azurerm_local_network_gateway": resourceArmLocalNetworkGateway(), + "azurerm_availability_set": resourceArmAvailabilitySet(), + "azurerm_network_security_group": resourceArmNetworkSecurityGroup(), + "azurerm_network_security_rule": resourceArmNetworkSecurityRule(), + "azurerm_public_ip": resourceArmPublicIp(), + "azurerm_subnet": resourceArmSubnet(), + "azurerm_network_interface": resourceArmNetworkInterface(), + "azurerm_route_table": resourceArmRouteTable(), + "azurerm_route": resourceArmRoute(), + "azurerm_cdn_profile": resourceArmCdnProfile(), + "azurerm_cdn_endpoint": resourceArmCdnEndpoint(), + "azurerm_storage_account": resourceArmStorageAccount(), + }, + ConfigureFunc: providerConfigure, + } +} + +// Config is the configuration structure used to instantiate a +// new Azure management client. +type Config struct { + ManagementURL string + + SubscriptionID string + ClientID string + ClientSecret string + TenantID string +} + +func providerConfigure(d *schema.ResourceData) (interface{}, error) { + config := Config{ + SubscriptionID: d.Get("subscription_id").(string), + ClientID: d.Get("client_id").(string), + ClientSecret: d.Get("client_secret").(string), + TenantID: d.Get("tenant_id").(string), + } + + client, err := config.getArmClient() + if err != nil { + return nil, err + } + + err = registerAzureResourceProvidersWithSubscription(&config, client) + if err != nil { + return nil, err + } + + return client, nil +} + +// registerAzureResourceProvidersWithSubscription uses the providers client to register +// all Azure resource providers which the Terraform provider may require (regardless of +// whether they are actually used by the configuration or not). It was confirmed by Microsoft +// that this is the approach their own internal tools also take. +func registerAzureResourceProvidersWithSubscription(config *Config, client *ArmClient) error { + providerClient := client.providers + + providers := []string{"Microsoft.Network", "Microsoft.Compute", "Microsoft.Cdn", "Microsoft.Storage"} + + for _, v := range providers { + res, err := providerClient.Register(v) + if err != nil { + return err + } + + if res.StatusCode != http.StatusOK { + return fmt.Errorf("Error registering provider %q with subscription %q", v, config.SubscriptionID) + } + } + + return nil +} + +// azureRMNormalizeLocation is a function which normalises human-readable region/location +// names (e.g. "West US") to the values used and returned by the Azure API (e.g. "westus"). +// In state we track the API internal version as it is easier to go from the human form +// to the canonical form than the other way around. 
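//
// For example, under this normalization both "West US" and "west us" become
// "westus":
//
//	azureRMNormalizeLocation("West US")      // "westus"
//	azureRMNormalizeLocation("North Europe") // "northeurope"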
+func azureRMNormalizeLocation(location interface{}) string { + input := location.(string) + return strings.Replace(strings.ToLower(input), " ", "", -1) +} + +// pollIndefinitelyAsNeeded is a terrible hack which is necessary because the Azure +// Storage API (and perhaps others) can have response times way beyond the default +// retry timeouts, with no apparent upper bound. This effectively causes the client +// to continue polling when it reaches the configured timeout. My investigations +// suggest that this is necessary when deleting and recreating a storage account with +// the same name in a short (though undetermined) time period. +// +// It is possible that this will give Terraform the appearance of being slow in +// future: I have attempted to mitigate this by logging whenever this happens. We +// may want to revisit this with configurable timeouts in the future as unbounded +// wait loops are clearly not ideal. It does seem preferable to the current situation +// where our polling loop will time out _with an operation in progress_, but no ID +// for the resource - so the state will not know about it, and conflicts will occur +// on the next run. +func pollIndefinitelyAsNeeded(client autorest.Client, response *http.Response, acceptableCodes ...int) (*http.Response, error) { + var resp *http.Response + var err error + + for { + resp, err = client.PollAsNeeded(response, acceptableCodes...) + if err != nil { + // A timeout with the operation still in progress (202 Accepted) is + // exactly the situation described above, so start a fresh polling loop. + if resp != nil && resp.StatusCode == http.StatusAccepted { + log.Printf("[DEBUG] Starting new polling loop for %q", response.Request.URL.Path) + continue + } + + return resp, err + } + + return resp, nil + } +} + +// armMutexKV is the instance of MutexKV for ARM resources +var armMutexKV = mutexkv.NewMutexKV() diff --git a/builtin/providers/azurerm/provider_test.go b/builtin/providers/azurerm/provider_test.go new file mode 100644 index 0000000000..a26249f588 --- /dev/null +++ b/builtin/providers/azurerm/provider_test.go @@ -0,0 +1,40 @@ +package azurerm + +import ( + "os" + "testing" + + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/terraform" +) + +var testAccProviders map[string]terraform.ResourceProvider +var testAccProvider *schema.Provider + +func init() { + testAccProvider = Provider().(*schema.Provider) + testAccProviders = map[string]terraform.ResourceProvider{ + "azurerm": testAccProvider, + } +} + +func TestProvider(t *testing.T) { + if err := Provider().(*schema.Provider).InternalValidate(); err != nil { + t.Fatalf("err: %s", err) + } +} + +func TestProvider_impl(t *testing.T) { + var _ terraform.ResourceProvider = Provider() +} + +func testAccPreCheck(t *testing.T) { + subscriptionID := os.Getenv("ARM_SUBSCRIPTION_ID") + clientID := os.Getenv("ARM_CLIENT_ID") + clientSecret := os.Getenv("ARM_CLIENT_SECRET") + tenantID := os.Getenv("ARM_TENANT_ID") + + if subscriptionID == "" || clientID == "" || clientSecret == "" || tenantID == "" { + t.Fatal("ARM_SUBSCRIPTION_ID, ARM_CLIENT_ID, ARM_CLIENT_SECRET and ARM_TENANT_ID must be set for acceptance tests") + } +} diff --git a/builtin/providers/azurerm/resource_arm_availability_set.go b/builtin/providers/azurerm/resource_arm_availability_set.go new file mode 100644 index 0000000000..74efc886d0 --- /dev/null +++ b/builtin/providers/azurerm/resource_arm_availability_set.go @@ -0,0 +1,146 @@ +package azurerm + +import ( + "fmt" + "log" + "net/http" + + "github.com/Azure/azure-sdk-for-go/arm/compute" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceArmAvailabilitySet() *schema.Resource { +
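// Note that Create below doubles as Update: the create path goes through
// CreateOrUpdate on the ARM API, which upserts, so changed arguments are
// pushed the same way on both code paths.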
return &schema.Resource{ + Create: resourceArmAvailabilitySetCreate, + Read: resourceArmAvailabilitySetRead, + Update: resourceArmAvailabilitySetCreate, + Delete: resourceArmAvailabilitySetDelete, + + Schema: map[string]*schema.Schema{ + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "resource_group_name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "location": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + StateFunc: azureRMNormalizeLocation, + }, + + "platform_update_domain_count": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + Default: 5, + ValidateFunc: func(v interface{}, k string) (ws []string, errors []error) { + value := v.(int) + if value > 20 { + errors = append(errors, fmt.Errorf( + "Maximum value for `platform_update_domain_count` is 20")) + } + return + }, + }, + + "platform_fault_domain_count": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + Default: 3, + ValidateFunc: func(v interface{}, k string) (ws []string, errors []error) { + value := v.(int) + if value > 3 { + errors = append(errors, fmt.Errorf( + "Maximum value for (%s) is 3", k)) + } + return + }, + }, + + "tags": tagsSchema(), + }, + } +} + +func resourceArmAvailabilitySetCreate(d *schema.ResourceData, meta interface{}) error { + client := meta.(*ArmClient) + availSetClient := client.availSetClient + + log.Printf("[INFO] preparing arguments for Azure ARM Availability Set creation.") + + name := d.Get("name").(string) + location := d.Get("location").(string) + resGroup := d.Get("resource_group_name").(string) + updateDomainCount := d.Get("platform_update_domain_count").(int) + faultDomainCount := d.Get("platform_fault_domain_count").(int) + tags := d.Get("tags").(map[string]interface{}) + + availSet := compute.AvailabilitySet{ + Name: &name, + Location: &location, + Properties: &compute.AvailabilitySetProperties{ + PlatformFaultDomainCount: &faultDomainCount, + PlatformUpdateDomainCount: &updateDomainCount, + }, + Tags: expandTags(tags), + } + + resp, err := availSetClient.CreateOrUpdate(resGroup, name, availSet) + if err != nil { + return err + } + + d.SetId(*resp.ID) + + return resourceArmAvailabilitySetRead(d, meta) +} + +func resourceArmAvailabilitySetRead(d *schema.ResourceData, meta interface{}) error { + availSetClient := meta.(*ArmClient).availSetClient + + id, err := parseAzureResourceID(d.Id()) + if err != nil { + return err + } + resGroup := id.ResourceGroup + name := id.Path["availabilitySets"] + + resp, err := availSetClient.Get(resGroup, name) + if resp.StatusCode == http.StatusNotFound { + d.SetId("") + return nil + } + if err != nil { + return fmt.Errorf("Error making Read request on Azure Availability Set %s: %s", name, err) + } + + availSet := *resp.Properties + d.Set("platform_update_domain_count", availSet.PlatformUpdateDomainCount) + d.Set("platform_fault_domain_count", availSet.PlatformFaultDomainCount) + + flattenAndSetTags(d, resp.Tags) + + return nil +} + +func resourceArmAvailabilitySetDelete(d *schema.ResourceData, meta interface{}) error { + availSetClient := meta.(*ArmClient).availSetClient + + id, err := parseAzureResourceID(d.Id()) + if err != nil { + return err + } + resGroup := id.ResourceGroup + name := id.Path["availabilitySets"] + + _, err = availSetClient.Delete(resGroup, name) + + return err +} diff --git a/builtin/providers/azurerm/resource_arm_availability_set_test.go 
b/builtin/providers/azurerm/resource_arm_availability_set_test.go new file mode 100644 index 0000000000..488347b56c --- /dev/null +++ b/builtin/providers/azurerm/resource_arm_availability_set_test.go @@ -0,0 +1,203 @@ +package azurerm + +import ( + "fmt" + "net/http" + "testing" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAzureRMAvailabilitySet_basic(t *testing.T) { + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testCheckAzureRMAvailabilitySetDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAzureRMVAvailabilitySet_basic, + Check: resource.ComposeTestCheckFunc( + testCheckAzureRMAvailabilitySetExists("azurerm_availability_set.test"), + resource.TestCheckResourceAttr( + "azurerm_availability_set.test", "name", "acceptanceTestAvailabilitySet1"), + resource.TestCheckResourceAttr( + "azurerm_availability_set.test", "platform_update_domain_count", "5"), + resource.TestCheckResourceAttr( + "azurerm_availability_set.test", "platform_fault_domain_count", "3"), + ), + }, + }, + }) +} + +func TestAccAzureRMAvailabilitySet_withTags(t *testing.T) { + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testCheckAzureRMAvailabilitySetDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAzureRMVAvailabilitySet_withTags, + Check: resource.ComposeTestCheckFunc( + testCheckAzureRMAvailabilitySetExists("azurerm_availability_set.test"), + resource.TestCheckResourceAttr( + "azurerm_availability_set.test", "tags.#", "2"), + resource.TestCheckResourceAttr( + "azurerm_availability_set.test", "tags.environment", "Production"), + resource.TestCheckResourceAttr( + "azurerm_availability_set.test", "tags.cost_center", "MSFT"), + ), + }, + + resource.TestStep{ + Config: testAccAzureRMVAvailabilitySet_withUpdatedTags, + Check: resource.ComposeTestCheckFunc( + testCheckAzureRMAvailabilitySetExists("azurerm_availability_set.test"), + resource.TestCheckResourceAttr( + "azurerm_availability_set.test", "tags.#", "1"), + resource.TestCheckResourceAttr( + "azurerm_availability_set.test", "tags.environment", "staging"), + ), + }, + }, + }) +} + +func TestAccAzureRMAvailabilitySet_withDomainCounts(t *testing.T) { + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testCheckAzureRMAvailabilitySetDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAzureRMVAvailabilitySet_withDomainCounts, + Check: resource.ComposeTestCheckFunc( + testCheckAzureRMAvailabilitySetExists("azurerm_availability_set.test"), + resource.TestCheckResourceAttr( + "azurerm_availability_set.test", "name", "acceptanceTestAvailabilitySet1"), + resource.TestCheckResourceAttr( + "azurerm_availability_set.test", "platform_update_domain_count", "10"), + resource.TestCheckResourceAttr( + "azurerm_availability_set.test", "platform_fault_domain_count", "1"), + ), + }, + }, + }) +} + +func testCheckAzureRMAvailabilitySetExists(name string) resource.TestCheckFunc { + return func(s *terraform.State) error { + // Ensure we have enough information in state to look up in API + rs, ok := s.RootModule().Resources[name] + if !ok { + return fmt.Errorf("Not found: %s", name) + } + + availSetName := rs.Primary.Attributes["name"] + resourceGroup, hasResourceGroup := 
rs.Primary.Attributes["resource_group_name"] + if !hasResourceGroup { + return fmt.Errorf("Bad: no resource group found in state for availability set: %s", availSetName) + } + + conn := testAccProvider.Meta().(*ArmClient).availSetClient + + resp, err := conn.Get(resourceGroup, availSetName) + if err != nil { + return fmt.Errorf("Bad: Get on availSetClient: %s", err) + } + + if resp.StatusCode == http.StatusNotFound { + return fmt.Errorf("Bad: Availability Set %q (resource group: %q) does not exist", name, resourceGroup) + } + + return nil + } +} + +func testCheckAzureRMAvailabilitySetDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*ArmClient).availSetClient + + for _, rs := range s.RootModule().Resources { + if rs.Type != "azurerm_availability_set" { + continue + } + + name := rs.Primary.Attributes["name"] + resourceGroup := rs.Primary.Attributes["resource_group_name"] + + resp, err := conn.Get(resourceGroup, name) + + if err != nil { + return nil + } + + if resp.StatusCode != http.StatusNotFound { + return fmt.Errorf("Availability Set still exists:\n%#v", resp.Properties) + } + } + + return nil +} + +var testAccAzureRMVAvailabilitySet_basic = ` +resource "azurerm_resource_group" "test" { + name = "acceptanceTestResourceGroup1" + location = "West US" +} +resource "azurerm_availability_set" "test" { + name = "acceptanceTestAvailabilitySet1" + location = "West US" + resource_group_name = "${azurerm_resource_group.test.name}" +} +` + +var testAccAzureRMVAvailabilitySet_withTags = ` +resource "azurerm_resource_group" "test" { + name = "acceptanceTestResourceGroup1" + location = "West US" +} +resource "azurerm_availability_set" "test" { + name = "acceptanceTestAvailabilitySet1" + location = "West US" + resource_group_name = "${azurerm_resource_group.test.name}" + + tags { + environment = "Production" + cost_center = "MSFT" + } +} +` + +var testAccAzureRMVAvailabilitySet_withUpdatedTags = ` +resource "azurerm_resource_group" "test" { + name = "acceptanceTestResourceGroup1" + location = "West US" +} +resource "azurerm_availability_set" "test" { + name = "acceptanceTestAvailabilitySet1" + location = "West US" + resource_group_name = "${azurerm_resource_group.test.name}" + + tags { + environment = "staging" + } +} +` + +var testAccAzureRMVAvailabilitySet_withDomainCounts = ` +resource "azurerm_resource_group" "test" { + name = "acceptanceTestResourceGroup1" + location = "West US" +} +resource "azurerm_availability_set" "test" { + name = "acceptanceTestAvailabilitySet1" + location = "West US" + resource_group_name = "${azurerm_resource_group.test.name}" + platform_update_domain_count = 10 + platform_fault_domain_count = 1 +} +` diff --git a/builtin/providers/azurerm/resource_arm_cdn_endpoint.go b/builtin/providers/azurerm/resource_arm_cdn_endpoint.go new file mode 100644 index 0000000000..42d0f78f1a --- /dev/null +++ b/builtin/providers/azurerm/resource_arm_cdn_endpoint.go @@ -0,0 +1,451 @@ +package azurerm + +import ( + "bytes" + "fmt" + "log" + "net/http" + "strings" + "time" + + "github.com/Azure/azure-sdk-for-go/arm/cdn" + "github.com/hashicorp/terraform/helper/hashcode" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceArmCdnEndpoint() *schema.Resource { + return &schema.Resource{ + Create: resourceArmCdnEndpointCreate, + Read: resourceArmCdnEndpointRead, + Update: resourceArmCdnEndpointUpdate, + Delete: resourceArmCdnEndpointDelete, + + Schema: map[string]*schema.Schema{ + "name": &schema.Schema{ + Type: 
schema.TypeString, + Required: true, + ForceNew: true, + }, + + "location": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + StateFunc: azureRMNormalizeLocation, + }, + + "resource_group_name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "profile_name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "origin_host_header": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + + "is_http_allowed": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Default: true, + }, + + "is_https_allowed": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Default: true, + }, + + "origin": &schema.Schema{ + Type: schema.TypeSet, + Required: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "host_name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "http_port": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + Computed: true, + }, + + "https_port": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + Computed: true, + }, + }, + }, + Set: resourceArmCdnEndpointOriginHash, + }, + + "origin_path": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + + "querystring_caching_behaviour": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Default: "IgnoreQueryString", + ValidateFunc: validateCdnEndpointQuerystringCachingBehaviour, + }, + + "content_types_to_compress": &schema.Schema{ + Type: schema.TypeSet, + Optional: true, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + }, + + "is_compression_enabled": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + + "host_name": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + + "tags": tagsSchema(), + }, + } +} + +func resourceArmCdnEndpointCreate(d *schema.ResourceData, meta interface{}) error { + client := meta.(*ArmClient) + cdnEndpointsClient := client.cdnEndpointsClient + + log.Printf("[INFO] preparing arguments for Azure ARM CDN EndPoint creation.") + + name := d.Get("name").(string) + location := d.Get("location").(string) + resGroup := d.Get("resource_group_name").(string) + profileName := d.Get("profile_name").(string) + http_allowed := d.Get("is_http_allowed").(bool) + https_allowed := d.Get("is_https_allowed").(bool) + compression_enabled := d.Get("is_compression_enabled").(bool) + caching_behaviour := d.Get("querystring_caching_behaviour").(string) + tags := d.Get("tags").(map[string]interface{}) + + properties := cdn.EndpointPropertiesCreateUpdateParameters{ + IsHTTPAllowed: &http_allowed, + IsHTTPSAllowed: &https_allowed, + IsCompressionEnabled: &compression_enabled, + QueryStringCachingBehavior: cdn.QueryStringCachingBehavior(caching_behaviour), + } + + origins, originsErr := expandAzureRmCdnEndpointOrigins(d) + if originsErr != nil { + return fmt.Errorf("Error Building list of CDN Endpoint Origins: %s", originsErr) + } + if len(origins) > 0 { + properties.Origins = &origins + } + + if v, ok := d.GetOk("origin_host_header"); ok { + host_header := v.(string) + properties.OriginHostHeader = &host_header + } + + if v, ok := d.GetOk("origin_path"); ok { + origin_path := v.(string) + properties.OriginPath = &origin_path + } + + if v, ok := d.GetOk("content_types_to_compress"); ok { + var content_types []string + ctypes := 
v.(*schema.Set).List() + for _, ct := range ctypes { + str := ct.(string) + content_types = append(content_types, str) + } + + properties.ContentTypesToCompress = &content_types + } + + cdnEndpoint := cdn.EndpointCreateParameters{ + Location: &location, + Properties: &properties, + Tags: expandTags(tags), + } + + resp, err := cdnEndpointsClient.Create(name, cdnEndpoint, profileName, resGroup) + if err != nil { + return err + } + + d.SetId(*resp.ID) + + log.Printf("[DEBUG] Waiting for CDN Endpoint (%s) to become available", name) + stateConf := &resource.StateChangeConf{ + Pending: []string{"Accepted", "Updating", "Creating"}, + Target: []string{"Succeeded"}, + Refresh: cdnEndpointStateRefreshFunc(client, resGroup, profileName, name), + Timeout: 10 * time.Minute, + } + if _, err := stateConf.WaitForState(); err != nil { + return fmt.Errorf("Error waiting for CDN Endpoint (%s) to become available: %s", name, err) + } + + return resourceArmCdnEndpointRead(d, meta) +} + +func resourceArmCdnEndpointRead(d *schema.ResourceData, meta interface{}) error { + cdnEndpointsClient := meta.(*ArmClient).cdnEndpointsClient + + id, err := parseAzureResourceID(d.Id()) + if err != nil { + return err + } + resGroup := id.ResourceGroup + name := id.Path["endpoints"] + profileName := id.Path["profiles"] + if profileName == "" { + profileName = id.Path["Profiles"] + } + log.Printf("[INFO] Trying to find the AzureRM CDN Endpoint %s (Profile: %s, RG: %s)", name, profileName, resGroup) + resp, err := cdnEndpointsClient.Get(name, profileName, resGroup) + if resp.StatusCode == http.StatusNotFound { + d.SetId("") + return nil + } + if err != nil { + return fmt.Errorf("Error making Read request on Azure CDN Endpoint %s: %s", name, err) + } + + d.Set("name", resp.Name) + d.Set("host_name", resp.Properties.HostName) + d.Set("is_compression_enabled", resp.Properties.IsCompressionEnabled) + d.Set("is_http_allowed", resp.Properties.IsHTTPAllowed) + d.Set("is_https_allowed", resp.Properties.IsHTTPSAllowed) + d.Set("querystring_caching_behaviour", resp.Properties.QueryStringCachingBehavior) + if resp.Properties.OriginHostHeader != nil && *resp.Properties.OriginHostHeader != "" { + d.Set("origin_host_header", resp.Properties.OriginHostHeader) + } + if resp.Properties.OriginPath != nil && *resp.Properties.OriginPath != "" { + d.Set("origin_path", resp.Properties.OriginPath) + } + if resp.Properties.ContentTypesToCompress != nil && len(*resp.Properties.ContentTypesToCompress) > 0 { + d.Set("content_types_to_compress", flattenAzureRMCdnEndpointContentTypes(resp.Properties.ContentTypesToCompress)) + } + d.Set("origin", flattenAzureRMCdnEndpointOrigin(resp.Properties.Origins)) + + flattenAndSetTags(d, resp.Tags) + + return nil +} + +func resourceArmCdnEndpointUpdate(d *schema.ResourceData, meta interface{}) error { + cdnEndpointsClient := meta.(*ArmClient).cdnEndpointsClient + + if !d.HasChange("tags") { + return nil + } + + name := d.Get("name").(string) + resGroup := d.Get("resource_group_name").(string) + profileName := d.Get("profile_name").(string) + http_allowed := d.Get("is_http_allowed").(bool) + https_allowed := d.Get("is_https_allowed").(bool) + compression_enabled := d.Get("is_compression_enabled").(bool) + caching_behaviour := d.Get("querystring_caching_behaviour").(string) + newTags := d.Get("tags").(map[string]interface{}) + + properties := cdn.EndpointPropertiesCreateUpdateParameters{ + IsHTTPAllowed: &http_allowed, + IsHTTPSAllowed: &https_allowed, + IsCompressionEnabled: &compression_enabled, + 
QueryStringCachingBehavior: cdn.QueryStringCachingBehavior(caching_behaviour), + } + + if d.HasChange("origin") { + origins, originsErr := expandAzureRmCdnEndpointOrigins(d) + if originsErr != nil { + return fmt.Errorf("Error Building list of CDN Endpoint Origins: %s", originsErr) + } + if len(origins) > 0 { + properties.Origins = &origins + } + } + + if d.HasChange("origin_host_header") { + host_header := d.Get("origin_host_header").(string) + properties.OriginHostHeader = &host_header + } + + if d.HasChange("origin_path") { + origin_path := d.Get("origin_path").(string) + properties.OriginPath = &origin_path + } + + if d.HasChange("content_types_to_compress") { + var content_types []string + ctypes := d.Get("content_types_to_compress").(*schema.Set).List() + for _, ct := range ctypes { + str := ct.(string) + content_types = append(content_types, str) + } + + properties.ContentTypesToCompress = &content_types + } + + updateProps := cdn.EndpointUpdateParameters{ + Tags: expandTags(newTags), + Properties: &properties, + } + + _, err := cdnEndpointsClient.Update(name, updateProps, profileName, resGroup) + if err != nil { + return fmt.Errorf("Error issuing Azure ARM update request to update CDN Endpoint %q: %s", name, err) + } + + return resourceArmCdnEndpointRead(d, meta) +} + +func resourceArmCdnEndpointDelete(d *schema.ResourceData, meta interface{}) error { + client := meta.(*ArmClient).cdnEndpointsClient + + id, err := parseAzureResourceID(d.Id()) + if err != nil { + return err + } + resGroup := id.ResourceGroup + profileName := id.Path["profiles"] + if profileName == "" { + profileName = id.Path["Profiles"] + } + name := id.Path["endpoints"] + + accResp, err := client.DeleteIfExists(name, profileName, resGroup) + if err != nil { + if accResp.StatusCode == http.StatusNotFound { + return nil + } + return fmt.Errorf("Error issuing AzureRM delete request for CDN Endpoint %q: %s", name, err) + } + _, err = pollIndefinitelyAsNeeded(client.Client, accResp.Response, http.StatusNotFound) + if err != nil { + return fmt.Errorf("Error polling for AzureRM delete request for CDN Endpoint %q: %s", name, err) + } + + return err +} + +func cdnEndpointStateRefreshFunc(client *ArmClient, resourceGroupName string, profileName string, name string) resource.StateRefreshFunc { + return func() (interface{}, string, error) { + res, err := client.cdnEndpointsClient.Get(name, profileName, resourceGroupName) + if err != nil { + return nil, "", fmt.Errorf("Error issuing read request in cdnEndpointStateRefreshFunc to Azure ARM for CDN Endpoint '%s' (RG: '%s'): %s", name, resourceGroupName, err) + } + return res, string(res.Properties.ProvisioningState), nil + } +} + +func validateCdnEndpointQuerystringCachingBehaviour(v interface{}, k string) (ws []string, errors []error) { + value := strings.ToLower(v.(string)) + cachingTypes := map[string]bool{ + "ignorequerystring": true, + "bypasscaching": true, + "usequerystring": true, + } + + if !cachingTypes[value] { + errors = append(errors, fmt.Errorf("CDN Endpoint querystringCachingBehaviours can only be IgnoreQueryString, BypassCaching or UseQueryString")) + } + return +} + +func resourceArmCdnEndpointOriginHash(v interface{}) int { + var buf bytes.Buffer + m := v.(map[string]interface{}) + buf.WriteString(fmt.Sprintf("%s-", m["name"].(string))) + buf.WriteString(fmt.Sprintf("%s-", m["host_name"].(string))) + + return hashcode.String(buf.String()) +} + +func expandAzureRmCdnEndpointOrigins(d *schema.ResourceData) ([]cdn.DeepCreatedOrigin, error) { + configs := 
d.Get("origin").(*schema.Set).List() + origins := make([]cdn.DeepCreatedOrigin, 0, len(configs)) + + for _, configRaw := range configs { + data := configRaw.(map[string]interface{}) + + host_name := data["host_name"].(string) + + properties := cdn.DeepCreatedOriginProperties{ + HostName: &host_name, + } + + if v, ok := data["https_port"]; ok { + https_port := v.(int) + properties.HTTPSPort = &https_port + + } + + if v, ok := data["http_port"]; ok { + http_port := v.(int) + properties.HTTPPort = &http_port + } + + name := data["name"].(string) + + origin := cdn.DeepCreatedOrigin{ + Name: &name, + Properties: &properties, + } + + origins = append(origins, origin) + } + + return origins, nil +} + +func flattenAzureRMCdnEndpointOrigin(list *[]cdn.DeepCreatedOrigin) []map[string]interface{} { + result := make([]map[string]interface{}, 0, len(*list)) + for _, i := range *list { + l := map[string]interface{}{ + "name": *i.Name, + "host_name": *i.Properties.HostName, + } + + if i.Properties.HTTPPort != nil { + l["http_port"] = *i.Properties.HTTPPort + } + if i.Properties.HTTPSPort != nil { + l["https_port"] = *i.Properties.HTTPSPort + } + result = append(result, l) + } + return result +} + +func flattenAzureRMCdnEndpointContentTypes(list *[]string) []interface{} { + vs := make([]interface{}, 0, len(*list)) + for _, v := range *list { + vs = append(vs, v) + } + return vs +} diff --git a/builtin/providers/azurerm/resource_arm_cdn_endpoint_test.go b/builtin/providers/azurerm/resource_arm_cdn_endpoint_test.go new file mode 100644 index 0000000000..4260765a77 --- /dev/null +++ b/builtin/providers/azurerm/resource_arm_cdn_endpoint_test.go @@ -0,0 +1,201 @@ +package azurerm + +import ( + "fmt" + "net/http" + "testing" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAzureRMCdnEndpoint_basic(t *testing.T) { + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testCheckAzureRMCdnEndpointDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAzureRMCdnEndpoint_basic, + Check: resource.ComposeTestCheckFunc( + testCheckAzureRMCdnEndpointExists("azurerm_cdn_endpoint.test"), + ), + }, + }, + }) +} + +func TestAccAzureRMCdnEndpoints_withTags(t *testing.T) { + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testCheckAzureRMCdnEndpointDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAzureRMCdnEndpoint_withTags, + Check: resource.ComposeTestCheckFunc( + testCheckAzureRMCdnEndpointExists("azurerm_cdn_endpoint.test"), + resource.TestCheckResourceAttr( + "azurerm_cdn_endpoint.test", "tags.#", "2"), + resource.TestCheckResourceAttr( + "azurerm_cdn_endpoint.test", "tags.environment", "Production"), + resource.TestCheckResourceAttr( + "azurerm_cdn_endpoint.test", "tags.cost_center", "MSFT"), + ), + }, + + resource.TestStep{ + Config: testAccAzureRMCdnEndpoint_withTagsUpdate, + Check: resource.ComposeTestCheckFunc( + testCheckAzureRMCdnEndpointExists("azurerm_cdn_endpoint.test"), + resource.TestCheckResourceAttr( + "azurerm_cdn_endpoint.test", "tags.#", "1"), + resource.TestCheckResourceAttr( + "azurerm_cdn_endpoint.test", "tags.environment", "staging"), + ), + }, + }, + }) +} + +func testCheckAzureRMCdnEndpointExists(name string) resource.TestCheckFunc { + return func(s *terraform.State) error { + // Ensure we have enough information in state to look 
up in API + rs, ok := s.RootModule().Resources[name] + if !ok { + return fmt.Errorf("Not found: %s", name) + } + + name := rs.Primary.Attributes["name"] + profileName := rs.Primary.Attributes["profile_name"] + resourceGroup, hasResourceGroup := rs.Primary.Attributes["resource_group_name"] + if !hasResourceGroup { + return fmt.Errorf("Bad: no resource group found in state for cdn endpoint: %s", name) + } + + conn := testAccProvider.Meta().(*ArmClient).cdnEndpointsClient + + resp, err := conn.Get(name, profileName, resourceGroup) + if err != nil { + return fmt.Errorf("Bad: Get on cdnEndpointsClient: %s", err) + } + + if resp.StatusCode == http.StatusNotFound { + return fmt.Errorf("Bad: CDN Endpoint %q (resource group: %q) does not exist", name, resourceGroup) + } + + return nil + } +} + +func testCheckAzureRMCdnEndpointDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*ArmClient).cdnEndpointsClient + + for _, rs := range s.RootModule().Resources { + if rs.Type != "azurerm_cdn_endpoint" { + continue + } + + name := rs.Primary.Attributes["name"] + resourceGroup := rs.Primary.Attributes["resource_group_name"] + profileName := rs.Primary.Attributes["profile_name"] + + resp, err := conn.Get(name, profileName, resourceGroup) + + if err != nil { + return nil + } + + if resp.StatusCode != http.StatusNotFound { + return fmt.Errorf("CDN Endpoint still exists:\n%#v", resp.Properties) + } + } + + return nil +} + +var testAccAzureRMCdnEndpoint_basic = ` +resource "azurerm_resource_group" "test" { + name = "acceptanceTestResourceGroup1" + location = "West US" +} +resource "azurerm_cdn_profile" "test" { + name = "acceptanceTestCdnProfile1" + location = "West US" + resource_group_name = "${azurerm_resource_group.test.name}" + sku = "Standard" +} + +resource "azurerm_cdn_endpoint" "test" { + name = "acceptanceTestCdnEndpoint1" + profile_name = "${azurerm_cdn_profile.test.name}" + location = "West US" + resource_group_name = "${azurerm_resource_group.test.name}" + + origin { + name = "acceptanceTestCdnOrigin1" + host_name = "www.example.com" + } +} +` + +var testAccAzureRMCdnEndpoint_withTags = ` +resource "azurerm_resource_group" "test" { + name = "acceptanceTestResourceGroup2" + location = "West US" +} +resource "azurerm_cdn_profile" "test" { + name = "acceptanceTestCdnProfile2" + location = "West US" + resource_group_name = "${azurerm_resource_group.test.name}" + sku = "Standard" +} + +resource "azurerm_cdn_endpoint" "test" { + name = "acceptanceTestCdnEndpoint2" + profile_name = "${azurerm_cdn_profile.test.name}" + location = "West US" + resource_group_name = "${azurerm_resource_group.test.name}" + + origin { + name = "acceptanceTestCdnOrigin2" + host_name = "www.example.com" + } + + tags { + environment = "Production" + cost_center = "MSFT" + } +} +` + +var testAccAzureRMCdnEndpoint_withTagsUpdate = ` +resource "azurerm_resource_group" "test" { + name = "acceptanceTestResourceGroup2" + location = "West US" +} +resource "azurerm_cdn_profile" "test" { + name = "acceptanceTestCdnProfile2" + location = "West US" + resource_group_name = "${azurerm_resource_group.test.name}" + sku = "Standard" +} + +resource "azurerm_cdn_endpoint" "test" { + name = "acceptanceTestCdnEndpoint2" + profile_name = "${azurerm_cdn_profile.test.name}" + location = "West US" + resource_group_name = "${azurerm_resource_group.test.name}" + + origin { + name = "acceptanceTestCdnOrigin2" + host_name = "www.example.com" + } + + tags { + environment = "staging" + } +} +` diff --git 
a/builtin/providers/azurerm/resource_arm_cdn_profile.go b/builtin/providers/azurerm/resource_arm_cdn_profile.go new file mode 100644 index 0000000000..49681e2eff --- /dev/null +++ b/builtin/providers/azurerm/resource_arm_cdn_profile.go @@ -0,0 +1,186 @@ +package azurerm + +import ( + "fmt" + "log" + "net/http" + "strings" + "time" + + "github.com/Azure/azure-sdk-for-go/arm/cdn" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceArmCdnProfile() *schema.Resource { + return &schema.Resource{ + Create: resourceArmCdnProfileCreate, + Read: resourceArmCdnProfileRead, + Update: resourceArmCdnProfileUpdate, + Delete: resourceArmCdnProfileDelete, + + Schema: map[string]*schema.Schema{ + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "location": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + StateFunc: azureRMNormalizeLocation, + }, + + "resource_group_name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "sku": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validateCdnProfileSku, + }, + + "tags": tagsSchema(), + }, + } +} + +func resourceArmCdnProfileCreate(d *schema.ResourceData, meta interface{}) error { + client := meta.(*ArmClient) + cdnProfilesClient := client.cdnProfilesClient + + log.Printf("[INFO] preparing arguments for Azure ARM CDN Profile creation.") + + name := d.Get("name").(string) + location := d.Get("location").(string) + resGroup := d.Get("resource_group_name").(string) + sku := d.Get("sku").(string) + tags := d.Get("tags").(map[string]interface{}) + + properties := cdn.ProfilePropertiesCreateParameters{ + Sku: &cdn.Sku{ + Name: cdn.SkuName(sku), + }, + } + + cdnProfile := cdn.ProfileCreateParameters{ + Location: &location, + Properties: &properties, + Tags: expandTags(tags), + } + + resp, err := cdnProfilesClient.Create(name, cdnProfile, resGroup) + if err != nil { + return err + } + + d.SetId(*resp.ID) + + log.Printf("[DEBUG] Waiting for CDN Profile (%s) to become available", name) + stateConf := &resource.StateChangeConf{ + Pending: []string{"Accepted", "Updating", "Creating"}, + Target: []string{"Succeeded"}, + Refresh: cdnProfileStateRefreshFunc(client, resGroup, name), + Timeout: 10 * time.Minute, + } + if _, err := stateConf.WaitForState(); err != nil { + return fmt.Errorf("Error waiting for CDN Profile (%s) to become available: %s", name, err) + } + + return resourceArmCdnProfileRead(d, meta) +} + +func resourceArmCdnProfileRead(d *schema.ResourceData, meta interface{}) error { + cdnProfilesClient := meta.(*ArmClient).cdnProfilesClient + + id, err := parseAzureResourceID(d.Id()) + if err != nil { + return err + } + resGroup := id.ResourceGroup + name := id.Path["Profiles"] + + resp, err := cdnProfilesClient.Get(name, resGroup) + if resp.StatusCode == http.StatusNotFound { + d.SetId("") + return nil + } + if err != nil { + return fmt.Errorf("Error making Read request on Azure CDN Profile %s: %s", name, err) + } + + if resp.Properties != nil && resp.Properties.Sku != nil { + d.Set("sku", string(resp.Properties.Sku.Name)) + } + + flattenAndSetTags(d, resp.Tags) + + return nil +} + +func resourceArmCdnProfileUpdate(d *schema.ResourceData, meta interface{}) error { + cdnProfilesClient := meta.(*ArmClient).cdnProfilesClient + + if !d.HasChange("tags") { + return nil + } + + name := d.Get("name").(string) + resGroup := 
d.Get("resource_group_name").(string) + newTags := d.Get("tags").(map[string]interface{}) + + props := cdn.ProfileUpdateParameters{ + Tags: expandTags(newTags), + } + + _, err := cdnProfilesClient.Update(name, props, resGroup) + if err != nil { + return fmt.Errorf("Error issuing Azure ARM update request to update CDN Profile %q: %s", name, err) + } + + return resourceArmCdnProfileRead(d, meta) +} + +func resourceArmCdnProfileDelete(d *schema.ResourceData, meta interface{}) error { + cdnProfilesClient := meta.(*ArmClient).cdnProfilesClient + + id, err := parseAzureResourceID(d.Id()) + if err != nil { + return err + } + resGroup := id.ResourceGroup + name := id.Path["Profiles"] + + _, err = cdnProfilesClient.DeleteIfExists(name, resGroup) + + return err +} + +func cdnProfileStateRefreshFunc(client *ArmClient, resourceGroupName string, cdnProfileName string) resource.StateRefreshFunc { + return func() (interface{}, string, error) { + res, err := client.cdnProfilesClient.Get(cdnProfileName, resourceGroupName) + if err != nil { + return nil, "", fmt.Errorf("Error issuing read request in cdnProfileStateRefreshFunc to Azure ARM for CND Profile '%s' (RG: '%s'): %s", cdnProfileName, resourceGroupName, err) + } + return res, string(res.Properties.ProvisioningState), nil + } +} + +func validateCdnProfileSku(v interface{}, k string) (ws []string, errors []error) { + value := strings.ToLower(v.(string)) + skus := map[string]bool{ + "standard": true, + "premium": true, + } + + if !skus[value] { + errors = append(errors, fmt.Errorf("CDN Profile SKU can only be Standard or Premium")) + } + return +} diff --git a/builtin/providers/azurerm/resource_arm_cdn_profile_test.go b/builtin/providers/azurerm/resource_arm_cdn_profile_test.go new file mode 100644 index 0000000000..3f58496402 --- /dev/null +++ b/builtin/providers/azurerm/resource_arm_cdn_profile_test.go @@ -0,0 +1,199 @@ +package azurerm + +import ( + "fmt" + "net/http" + "testing" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestResourceAzureRMCdnProfileSKU_validation(t *testing.T) { + cases := []struct { + Value string + ErrCount int + }{ + { + Value: "Random", + ErrCount: 1, + }, + { + Value: "Standard", + ErrCount: 0, + }, + { + Value: "Premium", + ErrCount: 0, + }, + { + Value: "STANDARD", + ErrCount: 0, + }, + { + Value: "PREMIUM", + ErrCount: 0, + }, + } + + for _, tc := range cases { + _, errors := validateCdnProfileSku(tc.Value, "azurerm_cdn_profile") + + if len(errors) != tc.ErrCount { + t.Fatalf("Expected the Azure RM CDN Profile SKU to trigger a validation error") + } + } +} + +func TestAccAzureRMCdnProfile_basic(t *testing.T) { + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testCheckAzureRMCdnProfileDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAzureRMCdnProfile_basic, + Check: resource.ComposeTestCheckFunc( + testCheckAzureRMCdnProfileExists("azurerm_cdn_profile.test"), + ), + }, + }, + }) +} + +func TestAccAzureRMCdnProfile_withTags(t *testing.T) { + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testCheckAzureRMCdnProfileDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAzureRMCdnProfile_withTags, + Check: resource.ComposeTestCheckFunc( + testCheckAzureRMCdnProfileExists("azurerm_cdn_profile.test"), + resource.TestCheckResourceAttr( + 
"azurerm_cdn_profile.test", "tags.#", "2"), + resource.TestCheckResourceAttr( + "azurerm_cdn_profile.test", "tags.environment", "Production"), + resource.TestCheckResourceAttr( + "azurerm_cdn_profile.test", "tags.cost_center", "MSFT"), + ), + }, + + resource.TestStep{ + Config: testAccAzureRMCdnProfile_withTagsUpdate, + Check: resource.ComposeTestCheckFunc( + testCheckAzureRMCdnProfileExists("azurerm_cdn_profile.test"), + resource.TestCheckResourceAttr( + "azurerm_cdn_profile.test", "tags.#", "1"), + resource.TestCheckResourceAttr( + "azurerm_cdn_profile.test", "tags.environment", "staging"), + ), + }, + }, + }) +} + +func testCheckAzureRMCdnProfileExists(name string) resource.TestCheckFunc { + return func(s *terraform.State) error { + // Ensure we have enough information in state to look up in API + rs, ok := s.RootModule().Resources[name] + if !ok { + return fmt.Errorf("Not found: %s", name) + } + + name := rs.Primary.Attributes["name"] + resourceGroup, hasResourceGroup := rs.Primary.Attributes["resource_group_name"] + if !hasResourceGroup { + return fmt.Errorf("Bad: no resource group found in state for cdn profile: %s", name) + } + + conn := testAccProvider.Meta().(*ArmClient).cdnProfilesClient + + resp, err := conn.Get(name, resourceGroup) + if err != nil { + return fmt.Errorf("Bad: Get on cdnProfilesClient: %s", err) + } + + if resp.StatusCode == http.StatusNotFound { + return fmt.Errorf("Bad: CDN Profile %q (resource group: %q) does not exist", name, resourceGroup) + } + + return nil + } +} + +func testCheckAzureRMCdnProfileDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*ArmClient).cdnProfilesClient + + for _, rs := range s.RootModule().Resources { + if rs.Type != "azurerm_cdn_profile" { + continue + } + + name := rs.Primary.Attributes["name"] + resourceGroup := rs.Primary.Attributes["resource_group_name"] + + resp, err := conn.Get(name, resourceGroup) + + if err != nil { + return nil + } + + if resp.StatusCode != http.StatusNotFound { + return fmt.Errorf("CDN Profile still exists:\n%#v", resp.Properties) + } + } + + return nil +} + +var testAccAzureRMCdnProfile_basic = ` +resource "azurerm_resource_group" "test" { + name = "acceptanceTestResourceGroup1" + location = "West US" +} +resource "azurerm_cdn_profile" "test" { + name = "acceptanceTestCdnProfile1" + location = "West US" + resource_group_name = "${azurerm_resource_group.test.name}" + sku = "Standard" +} +` + +var testAccAzureRMCdnProfile_withTags = ` +resource "azurerm_resource_group" "test" { + name = "acceptanceTestResourceGroup1" + location = "West US" +} +resource "azurerm_cdn_profile" "test" { + name = "acceptanceTestCdnProfile1" + location = "West US" + resource_group_name = "${azurerm_resource_group.test.name}" + sku = "Standard" + + tags { + environment = "Production" + cost_center = "MSFT" + } +} +` + +var testAccAzureRMCdnProfile_withTagsUpdate = ` +resource "azurerm_resource_group" "test" { + name = "acceptanceTestResourceGroup1" + location = "West US" +} +resource "azurerm_cdn_profile" "test" { + name = "acceptanceTestCdnProfile1" + location = "West US" + resource_group_name = "${azurerm_resource_group.test.name}" + sku = "Standard" + + tags { + environment = "staging" + } +} +` diff --git a/builtin/providers/azurerm/resource_arm_local_network_gateway.go b/builtin/providers/azurerm/resource_arm_local_network_gateway.go new file mode 100644 index 0000000000..ae91d665fc --- /dev/null +++ b/builtin/providers/azurerm/resource_arm_local_network_gateway.go @@ -0,0 +1,136 @@ +package azurerm + 
+import ( + "fmt" + + "github.com/Azure/azure-sdk-for-go/arm/network" + "github.com/Azure/azure-sdk-for-go/core/http" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceArmLocalNetworkGateway() *schema.Resource { + return &schema.Resource{ + Create: resourceArmLocalNetworkGatewayCreate, + Read: resourceArmLocalNetworkGatewayRead, + Update: resourceArmLocalNetworkGatewayCreate, + Delete: resourceArmLocalNetworkGatewayDelete, + + Schema: map[string]*schema.Schema{ + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "location": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + StateFunc: azureRMNormalizeLocation, + }, + + "resource_group_name": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + + "gateway_address": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "address_space": &schema.Schema{ + Type: schema.TypeList, + Required: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + }, + }, + } +} + +func resourceArmLocalNetworkGatewayCreate(d *schema.ResourceData, meta interface{}) error { + lnetClient := meta.(*ArmClient).localNetConnClient + + name := d.Get("name").(string) + location := d.Get("location").(string) + resGroup := d.Get("resource_group_name").(string) + ipAddress := d.Get("gateway_address").(string) + + // fetch the address_space prefixes: + prefixes := []string{} + for _, pref := range d.Get("address_space").([]interface{}) { + prefixes = append(prefixes, pref.(string)) + } + + resp, err := lnetClient.CreateOrUpdate(resGroup, name, network.LocalNetworkGateway{ + Name: &name, + Location: &location, + Properties: &network.LocalNetworkGatewayPropertiesFormat{ + LocalNetworkAddressSpace: &network.AddressSpace{ + AddressPrefixes: &prefixes, + }, + GatewayIPAddress: &ipAddress, + }, + }) + if err != nil { + return fmt.Errorf("Error creating Azure ARM Local Network Gateway '%s': %s", name, err) + } + + d.SetId(*resp.ID) + + return resourceArmLocalNetworkGatewayRead(d, meta) +} + +// resourceArmLocalNetworkGatewayRead reads the state of the corresponding ARM local network gateway. +func resourceArmLocalNetworkGatewayRead(d *schema.ResourceData, meta interface{}) error { + lnetClient := meta.(*ArmClient).localNetConnClient + + id, err := parseAzureResourceID(d.Id()) + if err != nil { + return err + } + name := id.Path["localNetworkGateways"] + resGroup := id.ResourceGroup + + resp, err := lnetClient.Get(resGroup, name) + if err != nil { + if resp.StatusCode == http.StatusNotFound { + d.SetId("") + return nil + } + + return fmt.Errorf("Error reading the state of Azure ARM local network gateway '%s': %s", name, err) + } + + d.Set("gateway_address", resp.Properties.GatewayIPAddress) + + prefs := []string{} + if ps := resp.Properties.LocalNetworkAddressSpace.AddressPrefixes; ps != nil { + prefs = *ps + } + d.Set("address_space", prefs) + + return nil +} + +// resourceArmLocalNetworkGatewayDelete deletes the specified ARM local network gateway.
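+// The gateway to delete is located by re-parsing the ARM resource ID stored in
+// state, which typically has a form along these lines (illustrative only):
+//
+//	/subscriptions/<subscription>/resourceGroups/<group>/providers/Microsoft.Network/localNetworkGateways/<name>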
+func resourceArmLocalNetworkGatewayDelete(d *schema.ResourceData, meta interface{}) error { + lnetClient := meta.(*ArmClient).localNetConnClient + + id, err := parseAzureResourceID(d.Id()) + if err != nil { + return err + } + name := id.Path["localNetworkGateways"] + resGroup := id.ResourceGroup + + _, err = lnetClient.Delete(resGroup, name) + if err != nil { + return fmt.Errorf("Error issuing Azure ARM delete request of local network gateway '%s': %s", name, err) + } + + return nil +} diff --git a/builtin/providers/azurerm/resource_arm_local_network_gateway_test.go b/builtin/providers/azurerm/resource_arm_local_network_gateway_test.go new file mode 100644 index 0000000000..889a57e6eb --- /dev/null +++ b/builtin/providers/azurerm/resource_arm_local_network_gateway_test.go @@ -0,0 +1,108 @@ +package azurerm + +import ( + "fmt" + "testing" + + "github.com/Azure/azure-sdk-for-go/core/http" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAzureRMLocalNetworkGateway_basic(t *testing.T) { + name := "azurerm_local_network_gateway.test" + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testCheckAzureRMLocalNetworkGatewayDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAzureRMLocalNetworkGatewayConfig_basic, + Check: resource.ComposeTestCheckFunc( + testCheckAzureRMLocalNetworkGatewayExists(name), + resource.TestCheckResourceAttr(name, "gateway_address", "127.0.0.1"), + resource.TestCheckResourceAttr(name, "address_space.0", "127.0.0.0/8"), + ), + }, + }, + }) +} + +// testCheckAzureRMLocalNetworkGatewayExists returns the resource.TestCheckFunc +// which checks whether or not the expected local network gateway exists both
+// in the schema, and on Azure.
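+// Like the other checks in this package it calls the live Azure API, so the
+// tests that use it only run under the acceptance test harness (e.g. via
+// `make testacc`, which sets TF_ACC).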
+func testCheckAzureRMLocalNetworkGatewayExists(name string) resource.TestCheckFunc { + return func(s *terraform.State) error { + // first check within the schema for the local network gateway: + res, ok := s.RootModule().Resources[name] + if !ok { + return fmt.Errorf("Local network gateway '%s' not found.", name) + } + + // then, extract the name and the resource group: + id, err := parseAzureResourceID(res.Primary.ID) + if err != nil { + return err + } + localNetName := id.Path["localNetworkGateways"] + resGrp := id.ResourceGroup + + // and finally, check that it exists on Azure: + lnetClient := testAccProvider.Meta().(*ArmClient).localNetConnClient + + resp, err := lnetClient.Get(resGrp, localNetName) + if err != nil { + if resp.StatusCode == http.StatusNotFound { + return fmt.Errorf("Local network gateway '%s' (resource group '%s') does not exist on Azure.", localNetName, resGrp) + } + + return fmt.Errorf("Error reading the state of local network gateway '%s'.", localNetName) + } + + return nil + } +} + +func testCheckAzureRMLocalNetworkGatewayDestroy(s *terraform.State) error { + for _, res := range s.RootModule().Resources { + if res.Type != "azurerm_local_network_gateway" { + continue + } + + id, err := parseAzureResourceID(res.Primary.ID) + if err != nil { + return err + } + localNetName := id.Path["localNetworkGateways"] + resGrp := id.ResourceGroup + + lnetClient := testAccProvider.Meta().(*ArmClient).localNetConnClient + resp, err := lnetClient.Get(resGrp, localNetName) + + if err != nil { + return nil + } + + if resp.StatusCode != http.StatusNotFound { + return fmt.Errorf("Local network gateway still exists:\n%#v", resp.Properties) + } + } + + return nil +} + +var testAccAzureRMLocalNetworkGatewayConfig_basic = ` +resource "azurerm_resource_group" "test" { + name = "tftestingResourceGroup" + location = "West US" +} + +resource "azurerm_local_network_gateway" "test" { + name = "tftestingLocalNetworkGateway" + location = "${azurerm_resource_group.test.location}" + resource_group_name = "${azurerm_resource_group.test.name}" + gateway_address = "127.0.0.1" + address_space = ["127.0.0.0/8"] +} +` diff --git a/builtin/providers/azurerm/resource_arm_network_interface_card.go b/builtin/providers/azurerm/resource_arm_network_interface_card.go new file mode 100644 index 0000000000..f2dbbed344 --- /dev/null +++ b/builtin/providers/azurerm/resource_arm_network_interface_card.go @@ -0,0 +1,399 @@ +package azurerm + +import ( + "bytes" + "fmt" + "log" + "net/http" + "strings" + "time" + + "github.com/Azure/azure-sdk-for-go/arm/network" + "github.com/hashicorp/terraform/helper/hashcode" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceArmNetworkInterface() *schema.Resource { + return &schema.Resource{ + Create: resourceArmNetworkInterfaceCreate, + Read: resourceArmNetworkInterfaceRead, + Update: resourceArmNetworkInterfaceCreate, + Delete: resourceArmNetworkInterfaceDelete, + + Schema: map[string]*schema.Schema{ + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "location": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + StateFunc: azureRMNormalizeLocation, + }, + + "resource_group_name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "network_security_group_id": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + + "mac_address": &schema.Schema{ + Type: 
schema.TypeString, + Optional: true, + Computed: true, + }, + + "virtual_machine_id": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + + "ip_configuration": &schema.Schema{ + Type: schema.TypeSet, + Required: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "subnet_id": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "private_ip_address": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + + "private_ip_address_allocation": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ValidateFunc: validateNetworkInterfacePrivateIpAddressAllocation, + }, + + "public_ip_address_id": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + + "load_balancer_backend_address_pools_ids": &schema.Schema{ + Type: schema.TypeSet, + Optional: true, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + }, + + "load_balancer_inbound_nat_rules_ids": &schema.Schema{ + Type: schema.TypeSet, + Optional: true, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + }, + }, + }, + Set: resourceArmNetworkInterfaceIpConfigurationHash, + }, + + "dns_servers": &schema.Schema{ + Type: schema.TypeSet, + Optional: true, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + }, + + "internal_dns_name_label": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + + "applied_dns_servers": &schema.Schema{ + Type: schema.TypeSet, + Optional: true, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + }, + + "internal_fqdn": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + + "tags": tagsSchema(), + }, + } +} + +func resourceArmNetworkInterfaceCreate(d *schema.ResourceData, meta interface{}) error { + client := meta.(*ArmClient) + ifaceClient := client.ifaceClient + + log.Printf("[INFO] preparing arguments for Azure ARM Network Interface creation.") + + name := d.Get("name").(string) + location := d.Get("location").(string) + resGroup := d.Get("resource_group_name").(string) + tags := d.Get("tags").(map[string]interface{}) + + properties := network.InterfacePropertiesFormat{} + + if v, ok := d.GetOk("network_security_group_id"); ok { + nsgId := v.(string) + properties.NetworkSecurityGroup = &network.SecurityGroup{ + ID: &nsgId, + } + } + + dns, hasDns := d.GetOk("dns_servers") + nameLabel, hasNameLabel := d.GetOk("internal_dns_name_label") + if hasDns || hasNameLabel { + ifaceDnsSettings := network.InterfaceDNSSettings{} + + if hasDns { + var dnsServers []string + dns := dns.(*schema.Set).List() + for _, v := range dns { + str := v.(string) + dnsServers = append(dnsServers, str) + } + ifaceDnsSettings.DNSServers = &dnsServers + } + + if hasNameLabel { + name_label := nameLabel.(string) + ifaceDnsSettings.InternalDNSNameLabel = &name_label + + } + + properties.DNSSettings = &ifaceDnsSettings + } + + ipConfigs, sgErr := expandAzureRmNetworkInterfaceIpConfigurations(d) + if sgErr != nil { + return fmt.Errorf("Error Building list of Network Interface IP Configurations: %s", sgErr) + } + if len(ipConfigs) > 0 { + properties.IPConfigurations = &ipConfigs + } + + iface := network.Interface{ + Name: &name, + Location: &location, + Properties: &properties, + Tags: expandTags(tags), + } 
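+ // CreateOrUpdate backs both Create and Update for this resource (the schema
+ // registers the same function for both), and it can return before ARM has
+ // finished provisioning; the StateChangeConf below therefore polls the
+ // provisioning state until it reports "Succeeded".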
+ + resp, err := ifaceClient.CreateOrUpdate(resGroup, name, iface) + if err != nil { + return err + } + + d.SetId(*resp.ID) + + log.Printf("[DEBUG] Waiting for Network Interface (%s) to become available", name) + stateConf := &resource.StateChangeConf{ + Pending: []string{"Accepted", "Updating"}, + Target: []string{"Succeeded"}, + Refresh: networkInterfaceStateRefreshFunc(client, resGroup, name), + Timeout: 10 * time.Minute, + } + if _, err := stateConf.WaitForState(); err != nil { + return fmt.Errorf("Error waiting for Network Interface (%s) to become available: %s", name, err) + } + + return resourceArmNetworkInterfaceRead(d, meta) +} + +func resourceArmNetworkInterfaceRead(d *schema.ResourceData, meta interface{}) error { + ifaceClient := meta.(*ArmClient).ifaceClient + + id, err := parseAzureResourceID(d.Id()) + if err != nil { + return err + } + resGroup := id.ResourceGroup + name := id.Path["networkInterfaces"] + + resp, err := ifaceClient.Get(resGroup, name, "") + if resp.StatusCode == http.StatusNotFound { + d.SetId("") + return nil + } + if err != nil { + return fmt.Errorf("Error making Read request on Azure Network Interface %s: %s", name, err) + } + + iface := *resp.Properties + + if iface.MacAddress != nil { + if *iface.MacAddress != "" { + d.Set("mac_address", iface.MacAddress) + } + } + + if iface.VirtualMachine != nil { + if *iface.VirtualMachine.ID != "" { + d.Set("virtual_machine_id", *iface.VirtualMachine.ID) + } + } + + if iface.DNSSettings != nil { + if iface.DNSSettings.AppliedDNSServers != nil && len(*iface.DNSSettings.AppliedDNSServers) > 0 { + dnsServers := make([]string, 0, len(*iface.DNSSettings.AppliedDNSServers)) + for _, dns := range *iface.DNSSettings.AppliedDNSServers { + dnsServers = append(dnsServers, dns) + } + + if err := d.Set("applied_dns_servers", dnsServers); err != nil { + return err + } + } + + if iface.DNSSettings.InternalFqdn != nil && *iface.DNSSettings.InternalFqdn != "" { + d.Set("internal_fqdn", iface.DNSSettings.InternalFqdn) + } + } + + flattenAndSetTags(d, resp.Tags) + + return nil +} + +func resourceArmNetworkInterfaceDelete(d *schema.ResourceData, meta interface{}) error { + ifaceClient := meta.(*ArmClient).ifaceClient + + id, err := parseAzureResourceID(d.Id()) + if err != nil { + return err + } + resGroup := id.ResourceGroup + name := id.Path["networkInterfaces"] + + _, err = ifaceClient.Delete(resGroup, name) + + return err +} + +func networkInterfaceStateRefreshFunc(client *ArmClient, resourceGroupName string, ifaceName string) resource.StateRefreshFunc { + return func() (interface{}, string, error) { + res, err := client.ifaceClient.Get(resourceGroupName, ifaceName, "") + if err != nil { + return nil, "", fmt.Errorf("Error issuing read request in networkInterfaceStateRefreshFunc to Azure ARM for network interface '%s' (RG: '%s'): %s", ifaceName, resourceGroupName, err) + } + + return res, *res.Properties.ProvisioningState, nil + } +} + +func resourceArmNetworkInterfaceIpConfigurationHash(v interface{}) int { + var buf bytes.Buffer + m := v.(map[string]interface{}) + buf.WriteString(fmt.Sprintf("%s-", m["name"].(string))) + buf.WriteString(fmt.Sprintf("%s-", m["subnet_id"].(string))) + buf.WriteString(fmt.Sprintf("%s-", m["private_ip_address_allocation"].(string))) + + return hashcode.String(buf.String()) +} + +func validateNetworkInterfacePrivateIpAddressAllocation(v interface{}, k string) (ws []string, errors []error) { + value := strings.ToLower(v.(string)) + allocations := map[string]bool{ + "static": true, + "dynamic": true, + } +
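+ // The value was lower-cased above, so the lookup is effectively
+ // case-insensitive: "Static", "static" and "STATIC" all pass validation.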
+ if !allocations[value] { + errors = append(errors, fmt.Errorf("Network Interface Allocations can only be Static or Dynamic")) + } + return +} + +func expandAzureRmNetworkInterfaceIpConfigurations(d *schema.ResourceData) ([]network.InterfaceIPConfiguration, error) { + configs := d.Get("ip_configuration").(*schema.Set).List() + ipConfigs := make([]network.InterfaceIPConfiguration, 0, len(configs)) + + for _, configRaw := range configs { + data := configRaw.(map[string]interface{}) + + subnet_id := data["subnet_id"].(string) + private_ip_allocation_method := data["private_ip_address_allocation"].(string) + + properties := network.InterfaceIPConfigurationPropertiesFormat{ + Subnet: &network.Subnet{ + ID: &subnet_id, + }, + PrivateIPAllocationMethod: &private_ip_allocation_method, + } + + if v := data["private_ip_address"].(string); v != "" { + properties.PrivateIPAddress = &v + } + + if v := data["public_ip_address_id"].(string); v != "" { + properties.PublicIPAddress = &network.PublicIPAddress{ + ID: &v, + } + } + + if v, ok := data["load_balancer_backend_address_pools_ids"]; ok { + var ids []network.BackendAddressPool + pools := v.(*schema.Set).List() + for _, p := range pools { + pool_id := p.(string) + id := network.BackendAddressPool{ + ID: &pool_id, + } + + ids = append(ids, id) + } + + properties.LoadBalancerBackendAddressPools = &ids + } + + if v, ok := data["load_balancer_inbound_nat_rules_ids"]; ok { + var natRules []network.InboundNatRule + rules := v.(*schema.Set).List() + for _, r := range rules { + rule_id := r.(string) + rule := network.InboundNatRule{ + ID: &rule_id, + } + + natRules = append(natRules, rule) + } + + properties.LoadBalancerInboundNatRules = &natRules + } + + name := data["name"].(string) + ipConfig := network.InterfaceIPConfiguration{ + Name: &name, + Properties: &properties, + } + + ipConfigs = append(ipConfigs, ipConfig) + } + + return ipConfigs, nil +} diff --git a/builtin/providers/azurerm/resource_arm_network_interface_card_test.go b/builtin/providers/azurerm/resource_arm_network_interface_card_test.go new file mode 100644 index 0000000000..8936010134 --- /dev/null +++ b/builtin/providers/azurerm/resource_arm_network_interface_card_test.go @@ -0,0 +1,300 @@ +package azurerm + +import ( + "fmt" + "net/http" + "testing" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAzureRMNetworkInterface_basic(t *testing.T) { + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testCheckAzureRMNetworkInterfaceDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAzureRMNetworkInterface_basic, + Check: resource.ComposeTestCheckFunc( + testCheckAzureRMNetworkInterfaceExists("azurerm_network_interface.test"), + ), + }, + }, + }) +} + +func TestAccAzureRMNetworkInterface_withTags(t *testing.T) { + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testCheckAzureRMNetworkInterfaceDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAzureRMNetworkInterface_withTags, + Check: resource.ComposeTestCheckFunc( + testCheckAzureRMNetworkInterfaceExists("azurerm_network_interface.test"), + resource.TestCheckResourceAttr( + "azurerm_network_interface.test", "tags.#", "2"), + resource.TestCheckResourceAttr( + "azurerm_network_interface.test", "tags.environment", "Production"), + resource.TestCheckResourceAttr( + 
"azurerm_network_interface.test", "tags.cost_center", "MSFT"), + ), + }, + + resource.TestStep{ + Config: testAccAzureRMNetworkInterface_withTagsUpdate, + Check: resource.ComposeTestCheckFunc( + testCheckAzureRMNetworkInterfaceExists("azurerm_network_interface.test"), + resource.TestCheckResourceAttr( + "azurerm_network_interface.test", "tags.#", "1"), + resource.TestCheckResourceAttr( + "azurerm_network_interface.test", "tags.environment", "staging"), + ), + }, + }, + }) +} + +///TODO: Re-enable this test when https://github.com/Azure/azure-sdk-for-go/issues/259 is fixed +//func TestAccAzureRMNetworkInterface_addingIpConfigurations(t *testing.T) { +// +// resource.Test(t, resource.TestCase{ +// PreCheck: func() { testAccPreCheck(t) }, +// Providers: testAccProviders, +// CheckDestroy: testCheckAzureRMNetworkInterfaceDestroy, +// Steps: []resource.TestStep{ +// resource.TestStep{ +// Config: testAccAzureRMNetworkInterface_basic, +// Check: resource.ComposeTestCheckFunc( +// testCheckAzureRMNetworkInterfaceExists("azurerm_network_interface.test"), +// resource.TestCheckResourceAttr( +// "azurerm_network_interface.test", "ip_configuration.#", "1"), +// ), +// }, +// +// resource.TestStep{ +// Config: testAccAzureRMNetworkInterface_extraIpConfiguration, +// Check: resource.ComposeTestCheckFunc( +// testCheckAzureRMNetworkInterfaceExists("azurerm_network_interface.test"), +// resource.TestCheckResourceAttr( +// "azurerm_network_interface.test", "ip_configuration.#", "2"), +// ), +// }, +// }, +// }) +//} + +func testCheckAzureRMNetworkInterfaceExists(name string) resource.TestCheckFunc { + return func(s *terraform.State) error { + // Ensure we have enough information in state to look up in API + rs, ok := s.RootModule().Resources[name] + if !ok { + return fmt.Errorf("Not found: %s", name) + } + + name := rs.Primary.Attributes["name"] + resourceGroup, hasResourceGroup := rs.Primary.Attributes["resource_group_name"] + if !hasResourceGroup { + return fmt.Errorf("Bad: no resource group found in state for availability set: %s", name) + } + + conn := testAccProvider.Meta().(*ArmClient).ifaceClient + + resp, err := conn.Get(resourceGroup, name, "") + if err != nil { + return fmt.Errorf("Bad: Get on ifaceClient: %s", err) + } + + if resp.StatusCode == http.StatusNotFound { + return fmt.Errorf("Bad: Network Interface %q (resource group: %q) does not exist", name, resourceGroup) + } + + return nil + } +} + +func testCheckAzureRMNetworkInterfaceDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*ArmClient).ifaceClient + + for _, rs := range s.RootModule().Resources { + if rs.Type != "azurerm_network_interface" { + continue + } + + name := rs.Primary.Attributes["name"] + resourceGroup := rs.Primary.Attributes["resource_group_name"] + + resp, err := conn.Get(resourceGroup, name, "") + + if err != nil { + return nil + } + + if resp.StatusCode != http.StatusNotFound { + return fmt.Errorf("Network Interface still exists:\n%#v", resp.Properties) + } + } + + return nil +} + +var testAccAzureRMNetworkInterface_basic = ` +resource "azurerm_resource_group" "test" { + name = "acceptanceTestResourceGroup1" + location = "West US" +} + +resource "azurerm_virtual_network" "test" { + name = "acceptanceTestVirtualNetwork1" + address_space = ["10.0.0.0/16"] + location = "West US" + resource_group_name = "${azurerm_resource_group.test.name}" +} + +resource "azurerm_subnet" "test" { + name = "testsubnet" + resource_group_name = "${azurerm_resource_group.test.name}" + virtual_network_name = 
"${azurerm_virtual_network.test.name}" + address_prefix = "10.0.2.0/24" +} + +resource "azurerm_network_interface" "test" { + name = "acceptanceTestNetworkInterface1" + location = "West US" + resource_group_name = "${azurerm_resource_group.test.name}" + + ip_configuration { + name = "testconfiguration1" + subnet_id = "${azurerm_subnet.test.id}" + private_ip_address_allocation = "dynamic" + } +} +` + +var testAccAzureRMNetworkInterface_withTags = ` +resource "azurerm_resource_group" "test" { + name = "acceptanceTestResourceGroup1" + location = "West US" +} + +resource "azurerm_virtual_network" "test" { + name = "acceptanceTestVirtualNetwork1" + address_space = ["10.0.0.0/16"] + location = "West US" + resource_group_name = "${azurerm_resource_group.test.name}" +} + +resource "azurerm_subnet" "test" { + name = "testsubnet" + resource_group_name = "${azurerm_resource_group.test.name}" + virtual_network_name = "${azurerm_virtual_network.test.name}" + address_prefix = "10.0.2.0/24" +} + +resource "azurerm_network_interface" "test" { + name = "acceptanceTestNetworkInterface1" + location = "West US" + resource_group_name = "${azurerm_resource_group.test.name}" + + ip_configuration { + name = "testconfiguration1" + subnet_id = "${azurerm_subnet.test.id}" + private_ip_address_allocation = "dynamic" + } + + tags { + environment = "Production" + cost_center = "MSFT" + } +} +` + +var testAccAzureRMNetworkInterface_withTagsUpdate = ` +resource "azurerm_resource_group" "test" { + name = "acceptanceTestResourceGroup1" + location = "West US" +} + +resource "azurerm_virtual_network" "test" { + name = "acceptanceTestVirtualNetwork1" + address_space = ["10.0.0.0/16"] + location = "West US" + resource_group_name = "${azurerm_resource_group.test.name}" +} + +resource "azurerm_subnet" "test" { + name = "testsubnet" + resource_group_name = "${azurerm_resource_group.test.name}" + virtual_network_name = "${azurerm_virtual_network.test.name}" + address_prefix = "10.0.2.0/24" +} + +resource "azurerm_network_interface" "test" { + name = "acceptanceTestNetworkInterface1" + location = "West US" + resource_group_name = "${azurerm_resource_group.test.name}" + + ip_configuration { + name = "testconfiguration1" + subnet_id = "${azurerm_subnet.test.id}" + private_ip_address_allocation = "dynamic" + } + + tags { + environment = "staging" + } +} +` + +//TODO: Re-enable this test when https://github.com/Azure/azure-sdk-for-go/issues/259 is fixed +//var testAccAzureRMNetworkInterface_extraIpConfiguration = ` +//resource "azurerm_resource_group" "test" { +// name = "acceptanceTestResourceGroup1" +// location = "West US" +//} +// +//resource "azurerm_virtual_network" "test" { +// name = "acceptanceTestVirtualNetwork1" +// address_space = ["10.0.0.0/16"] +// location = "West US" +// resource_group_name = "${azurerm_resource_group.test.name}" +//} +// +//resource "azurerm_subnet" "test" { +// name = "testsubnet" +// resource_group_name = "${azurerm_resource_group.test.name}" +// virtual_network_name = "${azurerm_virtual_network.test.name}" +// address_prefix = "10.0.2.0/24" +//} +// +//resource "azurerm_subnet" "test1" { +// name = "testsubnet1" +// resource_group_name = "${azurerm_resource_group.test.name}" +// virtual_network_name = "${azurerm_virtual_network.test.name}" +// address_prefix = "10.0.1.0/24" +//} +// +//resource "azurerm_network_interface" "test" { +// name = "acceptanceTestNetworkInterface1" +// location = "West US" +// resource_group_name = "${azurerm_resource_group.test.name}" +// +// ip_configuration { +// name = 
"testconfiguration1" +// subnet_id = "${azurerm_subnet.test.id}" +// private_ip_address_allocation = "dynamic" +// } +// +// ip_configuration { +// name = "testconfiguration2" +// subnet_id = "${azurerm_subnet.test1.id}" +// private_ip_address_allocation = "dynamic" +// primary = true +// } +//} +//` diff --git a/builtin/providers/azurerm/resource_arm_network_security_group.go b/builtin/providers/azurerm/resource_arm_network_security_group.go new file mode 100644 index 0000000000..cc73f509c7 --- /dev/null +++ b/builtin/providers/azurerm/resource_arm_network_security_group.go @@ -0,0 +1,301 @@ +package azurerm + +import ( + "bytes" + "fmt" + "log" + "net/http" + "time" + + "github.com/Azure/azure-sdk-for-go/arm/network" + "github.com/hashicorp/terraform/helper/hashcode" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceArmNetworkSecurityGroup() *schema.Resource { + return &schema.Resource{ + Create: resourceArmNetworkSecurityGroupCreate, + Read: resourceArmNetworkSecurityGroupRead, + Update: resourceArmNetworkSecurityGroupCreate, + Delete: resourceArmNetworkSecurityGroupDelete, + + Schema: map[string]*schema.Schema{ + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "location": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + StateFunc: azureRMNormalizeLocation, + }, + + "resource_group_name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "security_rule": &schema.Schema{ + Type: schema.TypeSet, + Optional: true, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "description": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ValidateFunc: func(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + if len(value) > 140 { + errors = append(errors, fmt.Errorf( + "The network security rule description can be no longer than 140 chars")) + } + return + }, + }, + + "protocol": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ValidateFunc: validateNetworkSecurityRuleProtocol, + }, + + "source_port_range": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "destination_port_range": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "source_address_prefix": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "destination_address_prefix": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "access": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ValidateFunc: validateNetworkSecurityRuleAccess, + }, + + "priority": &schema.Schema{ + Type: schema.TypeInt, + Required: true, + ValidateFunc: func(v interface{}, k string) (ws []string, errors []error) { + value := v.(int) + if value < 100 || value > 4096 { + errors = append(errors, fmt.Errorf( + "The `priority` can only be between 100 and 4096")) + } + return + }, + }, + + "direction": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ValidateFunc: validateNetworkSecurityRuleDirection, + }, + }, + }, + Set: resourceArmNetworkSecurityGroupRuleHash, + }, + + "tags": tagsSchema(), + }, + } +} + +func resourceArmNetworkSecurityGroupCreate(d *schema.ResourceData, meta interface{}) error { + client := meta.(*ArmClient) + secClient := client.secGroupClient + + name := d.Get("name").(string) + location := 
d.Get("location").(string) + resGroup := d.Get("resource_group_name").(string) + tags := d.Get("tags").(map[string]interface{}) + + sgRules, sgErr := expandAzureRmSecurityRules(d) + if sgErr != nil { + return fmt.Errorf("Error Building list of Network Security Group Rules: %s", sgErr) + } + + sg := network.SecurityGroup{ + Name: &name, + Location: &location, + Properties: &network.SecurityGroupPropertiesFormat{ + SecurityRules: &sgRules, + }, + Tags: expandTags(tags), + } + + resp, err := secClient.CreateOrUpdate(resGroup, name, sg) + if err != nil { + return err + } + + d.SetId(*resp.ID) + + log.Printf("[DEBUG] Waiting for Network Security Group (%s) to become available", name) + stateConf := &resource.StateChangeConf{ + Pending: []string{"Accepted", "Updating"}, + Target: []string{"Succeeded"}, + Refresh: securityGroupStateRefreshFunc(client, resGroup, name), + Timeout: 10 * time.Minute, + } + if _, err := stateConf.WaitForState(); err != nil { + return fmt.Errorf("Error waiting for Network Securty Group (%s) to become available: %s", name, err) + } + + return resourceArmNetworkSecurityGroupRead(d, meta) +} + +func resourceArmNetworkSecurityGroupRead(d *schema.ResourceData, meta interface{}) error { + secGroupClient := meta.(*ArmClient).secGroupClient + + id, err := parseAzureResourceID(d.Id()) + if err != nil { + return err + } + resGroup := id.ResourceGroup + name := id.Path["networkSecurityGroups"] + + resp, err := secGroupClient.Get(resGroup, name, "") + if resp.StatusCode == http.StatusNotFound { + d.SetId("") + return nil + } + if err != nil { + return fmt.Errorf("Error making Read request on Azure Network Security Group %s: %s", name, err) + } + + if resp.Properties.SecurityRules != nil { + d.Set("security_rule", flattenNetworkSecurityRules(resp.Properties.SecurityRules)) + } + + flattenAndSetTags(d, resp.Tags) + + return nil +} + +func resourceArmNetworkSecurityGroupDelete(d *schema.ResourceData, meta interface{}) error { + secGroupClient := meta.(*ArmClient).secGroupClient + + id, err := parseAzureResourceID(d.Id()) + if err != nil { + return err + } + resGroup := id.ResourceGroup + name := id.Path["networkSecurityGroups"] + + _, err = secGroupClient.Delete(resGroup, name) + + return err +} + +func resourceArmNetworkSecurityGroupRuleHash(v interface{}) int { + var buf bytes.Buffer + m := v.(map[string]interface{}) + buf.WriteString(fmt.Sprintf("%s-", m["protocol"].(string))) + buf.WriteString(fmt.Sprintf("%s-", m["source_port_range"].(string))) + buf.WriteString(fmt.Sprintf("%s-", m["destination_port_range"].(string))) + buf.WriteString(fmt.Sprintf("%s-", m["source_address_prefix"].(string))) + buf.WriteString(fmt.Sprintf("%s-", m["destination_address_prefix"].(string))) + buf.WriteString(fmt.Sprintf("%s-", m["access"].(string))) + buf.WriteString(fmt.Sprintf("%d-", m["priority"].(int))) + buf.WriteString(fmt.Sprintf("%s-", m["direction"].(string))) + + return hashcode.String(buf.String()) +} + +func securityGroupStateRefreshFunc(client *ArmClient, resourceGroupName string, securityGroupName string) resource.StateRefreshFunc { + return func() (interface{}, string, error) { + res, err := client.secGroupClient.Get(resourceGroupName, securityGroupName, "") + if err != nil { + return nil, "", fmt.Errorf("Error issuing read request in securityGroupStateRefreshFunc to Azure ARM for network security group '%s' (RG: '%s'): %s", securityGroupName, resourceGroupName, err) + } + + return res, *res.Properties.ProvisioningState, nil + } +} + +func flattenNetworkSecurityRules(rules 
*[]network.SecurityRule) []map[string]interface{} { + result := make([]map[string]interface{}, 0, len(*rules)) + for _, rule := range *rules { + sgRule := make(map[string]interface{}) + sgRule["name"] = *rule.Name + sgRule["destination_address_prefix"] = *rule.Properties.DestinationAddressPrefix + sgRule["destination_port_range"] = *rule.Properties.DestinationPortRange + sgRule["source_address_prefix"] = *rule.Properties.SourceAddressPrefix + sgRule["source_port_range"] = *rule.Properties.SourcePortRange + sgRule["priority"] = int(*rule.Properties.Priority) + sgRule["access"] = rule.Properties.Access + sgRule["direction"] = rule.Properties.Direction + sgRule["protocol"] = rule.Properties.Protocol + + if rule.Properties.Description != nil { + sgRule["description"] = *rule.Properties.Description + } + + result = append(result, sgRule) + } + return result +} + +func expandAzureRmSecurityRules(d *schema.ResourceData) ([]network.SecurityRule, error) { + sgRules := d.Get("security_rule").(*schema.Set).List() + rules := make([]network.SecurityRule, 0, len(sgRules)) + + for _, sgRaw := range sgRules { + data := sgRaw.(map[string]interface{}) + + source_port_range := data["source_port_range"].(string) + destination_port_range := data["destination_port_range"].(string) + source_address_prefix := data["source_address_prefix"].(string) + destination_address_prefix := data["destination_address_prefix"].(string) + priority := data["priority"].(int) + + properties := network.SecurityRulePropertiesFormat{ + SourcePortRange: &source_port_range, + DestinationPortRange: &destination_port_range, + SourceAddressPrefix: &source_address_prefix, + DestinationAddressPrefix: &destination_address_prefix, + Priority: &priority, + Access: network.SecurityRuleAccess(data["access"].(string)), + Direction: network.SecurityRuleDirection(data["direction"].(string)), + Protocol: network.SecurityRuleProtocol(data["protocol"].(string)), + } + + if v := data["description"].(string); v != "" { + properties.Description = &v + } + + name := data["name"].(string) + rule := network.SecurityRule{ + Name: &name, + Properties: &properties, + } + + rules = append(rules, rule) + } + + return rules, nil +} diff --git a/builtin/providers/azurerm/resource_arm_network_security_group_test.go b/builtin/providers/azurerm/resource_arm_network_security_group_test.go new file mode 100644 index 0000000000..64c4f9944f --- /dev/null +++ b/builtin/providers/azurerm/resource_arm_network_security_group_test.go @@ -0,0 +1,265 @@ +package azurerm + +import ( + "fmt" + "net/http" + "testing" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAzureRMNetworkSecurityGroup_basic(t *testing.T) { + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testCheckAzureRMNetworkSecurityGroupDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAzureRMNetworkSecurityGroup_basic, + Check: resource.ComposeTestCheckFunc( + testCheckAzureRMNetworkSecurityGroupExists("azurerm_network_security_group.test"), + ), + }, + }, + }) +} + +func TestAccAzureRMNetworkSecurityGroup_withTags(t *testing.T) { + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testCheckAzureRMNetworkSecurityGroupDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAzureRMNetworkSecurityGroup_withTags, + Check: resource.ComposeTestCheckFunc( + 
testCheckAzureRMNetworkSecurityGroupExists("azurerm_network_security_group.test"), + resource.TestCheckResourceAttr( + "azurerm_network_security_group.test", "tags.#", "2"), + resource.TestCheckResourceAttr( + "azurerm_network_security_group.test", "tags.environment", "Production"), + resource.TestCheckResourceAttr( + "azurerm_network_security_group.test", "tags.cost_center", "MSFT"), + ), + }, + + resource.TestStep{ + Config: testAccAzureRMNetworkSecurityGroup_withTagsUpdate, + Check: resource.ComposeTestCheckFunc( + testCheckAzureRMNetworkSecurityGroupExists("azurerm_network_security_group.test"), + resource.TestCheckResourceAttr( + "azurerm_network_security_group.test", "tags.#", "1"), + resource.TestCheckResourceAttr( + "azurerm_network_security_group.test", "tags.environment", "staging"), + ), + }, + }, + }) +} + +func TestAccAzureRMNetworkSecurityGroup_addingExtraRules(t *testing.T) { + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testCheckAzureRMNetworkSecurityGroupDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAzureRMNetworkSecurityGroup_basic, + Check: resource.ComposeTestCheckFunc( + testCheckAzureRMNetworkSecurityGroupExists("azurerm_network_security_group.test"), + resource.TestCheckResourceAttr( + "azurerm_network_security_group.test", "security_rule.#", "1"), + ), + }, + + resource.TestStep{ + Config: testAccAzureRMNetworkSecurityGroup_anotherRule, + Check: resource.ComposeTestCheckFunc( + testCheckAzureRMNetworkSecurityGroupExists("azurerm_network_security_group.test"), + resource.TestCheckResourceAttr( + "azurerm_network_security_group.test", "security_rule.#", "2"), + ), + }, + }, + }) +} + +func testCheckAzureRMNetworkSecurityGroupExists(name string) resource.TestCheckFunc { + return func(s *terraform.State) error { + + rs, ok := s.RootModule().Resources[name] + if !ok { + return fmt.Errorf("Not found: %s", name) + } + + sgName := rs.Primary.Attributes["name"] + resourceGroup, hasResourceGroup := rs.Primary.Attributes["resource_group_name"] + if !hasResourceGroup { + return fmt.Errorf("Bad: no resource group found in state for network security group: %s", sgName) + } + + conn := testAccProvider.Meta().(*ArmClient).secGroupClient + + resp, err := conn.Get(resourceGroup, sgName, "") + if err != nil { + return fmt.Errorf("Bad: Get on secGroupClient: %s", err) + } + + if resp.StatusCode == http.StatusNotFound { + return fmt.Errorf("Bad: Network Security Group %q (resource group: %q) does not exist", name, resourceGroup) + } + + return nil + } +} + +func testCheckAzureRMNetworkSecurityGroupDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*ArmClient).secGroupClient + + for _, rs := range s.RootModule().Resources { + if rs.Type != "azurerm_network_security_group" { + continue + } + + name := rs.Primary.Attributes["name"] + resourceGroup := rs.Primary.Attributes["resource_group_name"] + + resp, err := conn.Get(resourceGroup, name, "") + + if err != nil { + return nil + } + + if resp.StatusCode != http.StatusNotFound { + return fmt.Errorf("Network Security Group still exists:\n%#v", resp.Properties) + } + } + + return nil +} + +var testAccAzureRMNetworkSecurityGroup_basic = ` +resource "azurerm_resource_group" "test" { + name = "acceptanceTestResourceGroup1" + location = "West US" +} + +resource "azurerm_network_security_group" "test" { + name = "acceptanceTestSecurityGroup1" + location = "West US" + resource_group_name = 
"${azurerm_resource_group.test.name}" + + security_rule { + name = "test123" + priority = 100 + direction = "Inbound" + access = "Allow" + protocol = "Tcp" + source_port_range = "*" + destination_port_range = "*" + source_address_prefix = "*" + destination_address_prefix = "*" + } +} +` + +var testAccAzureRMNetworkSecurityGroup_anotherRule = ` +resource "azurerm_resource_group" "test" { + name = "acceptanceTestResourceGroup1" + location = "West US" +} + +resource "azurerm_network_security_group" "test" { + name = "acceptanceTestSecurityGroup1" + location = "West US" + resource_group_name = "${azurerm_resource_group.test.name}" + + security_rule { + name = "test123" + priority = 100 + direction = "Inbound" + access = "Allow" + protocol = "Tcp" + source_port_range = "*" + destination_port_range = "*" + source_address_prefix = "*" + destination_address_prefix = "*" + } + + security_rule { + name = "testDeny" + priority = 101 + direction = "Inbound" + access = "Deny" + protocol = "Udp" + source_port_range = "*" + destination_port_range = "*" + source_address_prefix = "*" + destination_address_prefix = "*" + } +} +` + +var testAccAzureRMNetworkSecurityGroup_withTags = ` +resource "azurerm_resource_group" "test" { + name = "acceptanceTestResourceGroup1" + location = "West US" +} + +resource "azurerm_network_security_group" "test" { + name = "acceptanceTestSecurityGroup1" + location = "West US" + resource_group_name = "${azurerm_resource_group.test.name}" + + security_rule { + name = "test123" + priority = 100 + direction = "Inbound" + access = "Allow" + protocol = "Tcp" + source_port_range = "*" + destination_port_range = "*" + source_address_prefix = "*" + destination_address_prefix = "*" + } + + + tags { + environment = "Production" + cost_center = "MSFT" + } +} +` + +var testAccAzureRMNetworkSecurityGroup_withTagsUpdate = ` +resource "azurerm_resource_group" "test" { + name = "acceptanceTestResourceGroup1" + location = "West US" +} + +resource "azurerm_network_security_group" "test" { + name = "acceptanceTestSecurityGroup1" + location = "West US" + resource_group_name = "${azurerm_resource_group.test.name}" + + security_rule { + name = "test123" + priority = 100 + direction = "Inbound" + access = "Allow" + protocol = "Tcp" + source_port_range = "*" + destination_port_range = "*" + source_address_prefix = "*" + destination_address_prefix = "*" + } + + tags { + environment = "staging" + } +} +` diff --git a/builtin/providers/azurerm/resource_arm_network_security_rule.go b/builtin/providers/azurerm/resource_arm_network_security_rule.go new file mode 100644 index 0000000000..491e331bc0 --- /dev/null +++ b/builtin/providers/azurerm/resource_arm_network_security_rule.go @@ -0,0 +1,219 @@ +package azurerm + +import ( + "fmt" + "log" + "net/http" + "time" + + "github.com/Azure/azure-sdk-for-go/arm/network" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceArmNetworkSecurityRule() *schema.Resource { + return &schema.Resource{ + Create: resourceArmNetworkSecurityRuleCreate, + Read: resourceArmNetworkSecurityRuleRead, + Update: resourceArmNetworkSecurityRuleCreate, + Delete: resourceArmNetworkSecurityRuleDelete, + + Schema: map[string]*schema.Schema{ + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "resource_group_name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "network_security_group_name": &schema.Schema{ + Type: schema.TypeString, + 
Required: true, + }, + + "description": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ValidateFunc: func(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + if len(value) > 140 { + errors = append(errors, fmt.Errorf( + "The network security rule description can be no longer than 140 chars")) + } + return + }, + }, + + "protocol": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ValidateFunc: validateNetworkSecurityRuleProtocol, + }, + + "source_port_range": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "destination_port_range": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "source_address_prefix": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "destination_address_prefix": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "access": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ValidateFunc: validateNetworkSecurityRuleAccess, + }, + + "priority": &schema.Schema{ + Type: schema.TypeInt, + Required: true, + ValidateFunc: func(v interface{}, k string) (ws []string, errors []error) { + value := v.(int) + if value < 100 || value > 4096 { + errors = append(errors, fmt.Errorf( + "The `priority` can only be between 100 and 4096")) + } + return + }, + }, + + "direction": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ValidateFunc: validateNetworkSecurityRuleDirection, + }, + }, + } +} + +func resourceArmNetworkSecurityRuleCreate(d *schema.ResourceData, meta interface{}) error { + client := meta.(*ArmClient) + secClient := client.secRuleClient + + name := d.Get("name").(string) + nsgName := d.Get("network_security_group_name").(string) + resGroup := d.Get("resource_group_name").(string) + + source_port_range := d.Get("source_port_range").(string) + destination_port_range := d.Get("destination_port_range").(string) + source_address_prefix := d.Get("source_address_prefix").(string) + destination_address_prefix := d.Get("destination_address_prefix").(string) + priority := d.Get("priority").(int) + access := d.Get("access").(string) + direction := d.Get("direction").(string) + protocol := d.Get("protocol").(string) + + armMutexKV.Lock(nsgName) + defer armMutexKV.Unlock(nsgName) + + properties := network.SecurityRulePropertiesFormat{ + SourcePortRange: &source_port_range, + DestinationPortRange: &destination_port_range, + SourceAddressPrefix: &source_address_prefix, + DestinationAddressPrefix: &destination_address_prefix, + Priority: &priority, + Access: network.SecurityRuleAccess(access), + Direction: network.SecurityRuleDirection(direction), + Protocol: network.SecurityRuleProtocol(protocol), + } + + if v, ok := d.GetOk("description"); ok { + description := v.(string) + properties.Description = &description + } + + sgr := network.SecurityRule{ + Name: &name, + Properties: &properties, + } + + resp, err := secClient.CreateOrUpdate(resGroup, nsgName, name, sgr) + if err != nil { + return err + } + d.SetId(*resp.ID) + + log.Printf("[DEBUG] Waiting for Network Security Rule (%s) to become available", name) + stateConf := &resource.StateChangeConf{ + Pending: []string{"Accepted", "Updating"}, + Target: []string{"Succeeded"}, + Refresh: securityRuleStateRefreshFunc(client, resGroup, nsgName, name), + Timeout: 10 * time.Minute, + } + if _, err := stateConf.WaitForState(); err != nil { + return fmt.Errorf("Error waiting for Network Securty Rule (%s) to become available: %s", name, err) + } + + return 
resourceArmNetworkSecurityRuleRead(d, meta) +} + +func resourceArmNetworkSecurityRuleRead(d *schema.ResourceData, meta interface{}) error { + secRuleClient := meta.(*ArmClient).secRuleClient + + id, err := parseAzureResourceID(d.Id()) + if err != nil { + return err + } + resGroup := id.ResourceGroup + networkSGName := id.Path["networkSecurityGroups"] + sgRuleName := id.Path["securityRules"] + + resp, err := secRuleClient.Get(resGroup, networkSGName, sgRuleName) + if resp.StatusCode == http.StatusNotFound { + d.SetId("") + return nil + } + if err != nil { + return fmt.Errorf("Error making Read request on Azure Network Security Rule %s: %s", sgRuleName, err) + } + + return nil +} + +func resourceArmNetworkSecurityRuleDelete(d *schema.ResourceData, meta interface{}) error { + client := meta.(*ArmClient) + secRuleClient := client.secRuleClient + + id, err := parseAzureResourceID(d.Id()) + if err != nil { + return err + } + resGroup := id.ResourceGroup + nsgName := id.Path["networkSecurityGroups"] + sgRuleName := id.Path["securityRules"] + + armMutexKV.Lock(nsgName) + defer armMutexKV.Unlock(nsgName) + + _, err = secRuleClient.Delete(resGroup, nsgName, sgRuleName) + + return err +} + +func securityRuleStateRefreshFunc(client *ArmClient, resourceGroupName string, networkSecurityGroupName string, securityRuleName string) resource.StateRefreshFunc { + return func() (interface{}, string, error) { + res, err := client.secRuleClient.Get(resourceGroupName, networkSecurityGroupName, securityRuleName) + if err != nil { + return nil, "", fmt.Errorf("Error issuing read request in securityGroupStateRefreshFunc to Azure ARM for network security rule '%s' (RG: '%s') (NSG: '%s'): %s", securityRuleName, resourceGroupName, networkSecurityGroupName, err) + } + + return res, *res.Properties.ProvisioningState, nil + } +} diff --git a/builtin/providers/azurerm/resource_arm_network_security_rule_test.go b/builtin/providers/azurerm/resource_arm_network_security_rule_test.go new file mode 100644 index 0000000000..b3b9c9d02d --- /dev/null +++ b/builtin/providers/azurerm/resource_arm_network_security_rule_test.go @@ -0,0 +1,203 @@ +package azurerm + +import ( + "fmt" + "net/http" + "testing" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAzureRMNetworkSecurityRule_basic(t *testing.T) { + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testCheckAzureRMNetworkSecurityRuleDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAzureRMNetworkSecurityRule_basic, + Check: resource.ComposeTestCheckFunc( + testCheckAzureRMNetworkSecurityRuleExists("azurerm_network_security_rule.test"), + ), + }, + }, + }) +} + +func TestAccAzureRMNetworkSecurityRule_addingRules(t *testing.T) { + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testCheckAzureRMNetworkSecurityRuleDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAzureRMNetworkSecurityRule_updateBasic, + Check: resource.ComposeTestCheckFunc( + testCheckAzureRMNetworkSecurityRuleExists("azurerm_network_security_rule.test1"), + ), + }, + + resource.TestStep{ + Config: testAccAzureRMNetworkSecurityRule_updateExtraRule, + Check: resource.ComposeTestCheckFunc( + testCheckAzureRMNetworkSecurityRuleExists("azurerm_network_security_rule.test2"), + ), + }, + }, + }) +} + +func 
+func testCheckAzureRMNetworkSecurityRuleExists(name string) resource.TestCheckFunc {
+	return func(s *terraform.State) error {
+
+		rs, ok := s.RootModule().Resources[name]
+		if !ok {
+			return fmt.Errorf("Not found: %s", name)
+		}
+
+		sgName := rs.Primary.Attributes["network_security_group_name"]
+		sgrName := rs.Primary.Attributes["name"]
+		resourceGroup, hasResourceGroup := rs.Primary.Attributes["resource_group_name"]
+		if !hasResourceGroup {
+			return fmt.Errorf("Bad: no resource group found in state for network security rule: %s", sgrName)
+		}
+
+		conn := testAccProvider.Meta().(*ArmClient).secRuleClient
+
+		resp, err := conn.Get(resourceGroup, sgName, sgrName)
+		if err != nil {
+			return fmt.Errorf("Bad: Get on secRuleClient: %s", err)
+		}
+
+		if resp.StatusCode == http.StatusNotFound {
+			return fmt.Errorf("Bad: Network Security Rule %q (resource group: %q) (network security group: %q) does not exist", sgrName, resourceGroup, sgName)
+		}
+
+		return nil
+	}
+}
+
+func testCheckAzureRMNetworkSecurityRuleDestroy(s *terraform.State) error {
+	conn := testAccProvider.Meta().(*ArmClient).secRuleClient
+
+	for _, rs := range s.RootModule().Resources {
+
+		if rs.Type != "azurerm_network_security_rule" {
+			continue
+		}
+
+		sgName := rs.Primary.Attributes["network_security_group_name"]
+		sgrName := rs.Primary.Attributes["name"]
+		resourceGroup := rs.Primary.Attributes["resource_group_name"]
+
+		resp, err := conn.Get(resourceGroup, sgName, sgrName)
+
+		if err != nil {
+			return nil
+		}
+
+		if resp.StatusCode != http.StatusNotFound {
+			return fmt.Errorf("Network Security Rule still exists:\n%#v", resp.Properties)
+		}
+	}
+
+	return nil
+}
+
+var testAccAzureRMNetworkSecurityRule_basic = `
+resource "azurerm_resource_group" "test" {
+    name = "acceptanceTestResourceGroup1"
+    location = "West US"
+}
+
+resource "azurerm_network_security_group" "test" {
+    name = "acceptanceTestSecurityGroup1"
+    location = "West US"
+    resource_group_name = "${azurerm_resource_group.test.name}"
+}
+
+resource "azurerm_network_security_rule" "test" {
+    name = "test123"
+    priority = 100
+    direction = "Outbound"
+    access = "Allow"
+    protocol = "Tcp"
+    source_port_range = "*"
+    destination_port_range = "*"
+    source_address_prefix = "*"
+    destination_address_prefix = "*"
+    resource_group_name = "${azurerm_resource_group.test.name}"
+    network_security_group_name = "${azurerm_network_security_group.test.name}"
+}
+`
+
+var testAccAzureRMNetworkSecurityRule_updateBasic = `
+resource "azurerm_resource_group" "test1" {
+    name = "acceptanceTestResourceGroup2"
+    location = "West US"
+}
+
+resource "azurerm_network_security_group" "test1" {
+    name = "acceptanceTestSecurityGroup2"
+    location = "West US"
+    resource_group_name = "${azurerm_resource_group.test1.name}"
+}
+
+resource
"azurerm_network_security_rule" "test1" { + name = "test123" + priority = 100 + direction = "Outbound" + access = "Allow" + protocol = "Tcp" + source_port_range = "*" + destination_port_range = "*" + source_address_prefix = "*" + destination_address_prefix = "*" + resource_group_name = "${azurerm_resource_group.test1.name}" + network_security_group_name = "${azurerm_network_security_group.test1.name}" +} + +resource "azurerm_network_security_rule" "test2" { + name = "testing456" + priority = 101 + direction = "Inbound" + access = "Deny" + protocol = "Tcp" + source_port_range = "*" + destination_port_range = "*" + source_address_prefix = "*" + destination_address_prefix = "*" + resource_group_name = "${azurerm_resource_group.test1.name}" + network_security_group_name = "${azurerm_network_security_group.test1.name}" +} +` diff --git a/builtin/providers/azurerm/resource_arm_public_ip.go b/builtin/providers/azurerm/resource_arm_public_ip.go new file mode 100644 index 0000000000..5b3c8999c6 --- /dev/null +++ b/builtin/providers/azurerm/resource_arm_public_ip.go @@ -0,0 +1,251 @@ +package azurerm + +import ( + "fmt" + "log" + "net/http" + "regexp" + "strings" + "time" + + "github.com/Azure/azure-sdk-for-go/arm/network" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceArmPublicIp() *schema.Resource { + return &schema.Resource{ + Create: resourceArmPublicIpCreate, + Read: resourceArmPublicIpRead, + Update: resourceArmPublicIpCreate, + Delete: resourceArmPublicIpDelete, + + Schema: map[string]*schema.Schema{ + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "location": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + StateFunc: azureRMNormalizeLocation, + }, + + "resource_group_name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "public_ip_address_allocation": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ValidateFunc: validatePublicIpAllocation, + }, + + "idle_timeout_in_minutes": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + ValidateFunc: func(v interface{}, k string) (ws []string, errors []error) { + value := v.(int) + if value < 4 || value > 30 { + errors = append(errors, fmt.Errorf( + "The idle timeout must be between 4 and 30 minutes")) + } + return + }, + }, + + "domain_name_label": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ValidateFunc: validatePublicIpDomainNameLabel, + }, + + "reverse_fqdn": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + + "fqdn": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + + "ip_address": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + + "tags": tagsSchema(), + }, + } +} + +func resourceArmPublicIpCreate(d *schema.ResourceData, meta interface{}) error { + client := meta.(*ArmClient) + publicIPClient := client.publicIPClient + + log.Printf("[INFO] preparing arguments for Azure ARM Public IP creation.") + + name := d.Get("name").(string) + location := d.Get("location").(string) + resGroup := d.Get("resource_group_name").(string) + tags := d.Get("tags").(map[string]interface{}) + + properties := network.PublicIPAddressPropertiesFormat{ + PublicIPAllocationMethod: network.IPAllocationMethod(d.Get("public_ip_address_allocation").(string)), + } + + dnl, hasDnl := d.GetOk("domain_name_label") + rfqdn, hasRfqdn := d.GetOk("reverse_fqdn") + + if hasDnl || hasRfqdn { + 
+		dnsSettings := network.PublicIPAddressDNSSettings{}
+
+		if hasRfqdn {
+			reverse_fqdn := rfqdn.(string)
+			dnsSettings.ReverseFqdn = &reverse_fqdn
+		}
+
+		if hasDnl {
+			domain_name_label := dnl.(string)
+			dnsSettings.DomainNameLabel = &domain_name_label
+		}
+
+		properties.DNSSettings = &dnsSettings
+	}
+
+	if v, ok := d.GetOk("idle_timeout_in_minutes"); ok {
+		idle_timeout := v.(int)
+		properties.IdleTimeoutInMinutes = &idle_timeout
+	}
+
+	publicIp := network.PublicIPAddress{
+		Name:       &name,
+		Location:   &location,
+		Properties: &properties,
+		Tags:       expandTags(tags),
+	}
+
+	resp, err := publicIPClient.CreateOrUpdate(resGroup, name, publicIp)
+	if err != nil {
+		return err
+	}
+
+	d.SetId(*resp.ID)
+
+	log.Printf("[DEBUG] Waiting for Public IP (%s) to become available", name)
+	stateConf := &resource.StateChangeConf{
+		Pending: []string{"Accepted", "Updating"},
+		Target:  []string{"Succeeded"},
+		Refresh: publicIPStateRefreshFunc(client, resGroup, name),
+		Timeout: 10 * time.Minute,
+	}
+	if _, err := stateConf.WaitForState(); err != nil {
+		return fmt.Errorf("Error waiting for Public IP (%s) to become available: %s", name, err)
+	}
+
+	return resourceArmPublicIpRead(d, meta)
+}
+
+func resourceArmPublicIpRead(d *schema.ResourceData, meta interface{}) error {
+	publicIPClient := meta.(*ArmClient).publicIPClient
+
+	id, err := parseAzureResourceID(d.Id())
+	if err != nil {
+		return err
+	}
+	resGroup := id.ResourceGroup
+	name := id.Path["publicIPAddresses"]
+
+	resp, err := publicIPClient.Get(resGroup, name, "")
+	if resp.StatusCode == http.StatusNotFound {
+		d.SetId("")
+		return nil
+	}
+	if err != nil {
+		return fmt.Errorf("Error making Read request on Azure public ip %s: %s", name, err)
+	}
+
+	if resp.Properties.DNSSettings != nil && resp.Properties.DNSSettings.Fqdn != nil && *resp.Properties.DNSSettings.Fqdn != "" {
+		d.Set("fqdn", resp.Properties.DNSSettings.Fqdn)
+	}
+
+	if resp.Properties.IPAddress != nil && *resp.Properties.IPAddress != "" {
+		d.Set("ip_address", resp.Properties.IPAddress)
+	}
+
+	flattenAndSetTags(d, resp.Tags)
+
+	return nil
+}
+
+func resourceArmPublicIpDelete(d *schema.ResourceData, meta interface{}) error {
+	publicIPClient := meta.(*ArmClient).publicIPClient
+
+	id, err := parseAzureResourceID(d.Id())
+	if err != nil {
+		return err
+	}
+	resGroup := id.ResourceGroup
+	name := id.Path["publicIPAddresses"]
+
+	_, err = publicIPClient.Delete(resGroup, name)
+
+	return err
+}
+
+func publicIPStateRefreshFunc(client *ArmClient, resourceGroupName string, publicIpName string) resource.StateRefreshFunc {
+	return func() (interface{}, string, error) {
+		res, err := client.publicIPClient.Get(resourceGroupName, publicIpName, "")
+		if err != nil {
+			return nil, "", fmt.Errorf("Error issuing read request in publicIPStateRefreshFunc to Azure ARM for public ip '%s' (RG: '%s'): %s", publicIpName, resourceGroupName, err)
+		}
+
+		return res, *res.Properties.ProvisioningState, nil
+	}
+}
+
+func validatePublicIpAllocation(v interface{}, k string) (ws []string, errors []error) {
+	value := strings.ToLower(v.(string))
+	allocations := map[string]bool{
+		"static":  true,
+		"dynamic": true,
+	}
+
+	if !allocations[value] {
+		errors = append(errors, fmt.Errorf("Public IP Allocation can only be Static or Dynamic"))
+	}
+	return
+}
+
+func validatePublicIpDomainNameLabel(v interface{}, k string) (ws []string, errors []error) {
+	value := v.(string)
+	if !regexp.MustCompile(`^[a-z0-9-]+$`).MatchString(value) {
+		errors = append(errors, fmt.Errorf(
+			"only lowercase alphanumeric characters and
hyphens allowed in %q: %q", + k, value)) + } + + if len(value) > 61 { + errors = append(errors, fmt.Errorf( + "%q cannot be longer than 61 characters: %q", k, value)) + } + + if len(value) == 0 { + errors = append(errors, fmt.Errorf( + "%q cannot be an empty string: %q", k, value)) + } + if regexp.MustCompile(`-$`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "%q cannot end with a hyphen: %q", k, value)) + } + + return + +} diff --git a/builtin/providers/azurerm/resource_arm_public_ip_test.go b/builtin/providers/azurerm/resource_arm_public_ip_test.go new file mode 100644 index 0000000000..e9857acf0c --- /dev/null +++ b/builtin/providers/azurerm/resource_arm_public_ip_test.go @@ -0,0 +1,302 @@ +package azurerm + +import ( + "fmt" + "net/http" + "testing" + + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestResourceAzureRMPublicIpAllocation_validation(t *testing.T) { + cases := []struct { + Value string + ErrCount int + }{ + { + Value: "Random", + ErrCount: 1, + }, + { + Value: "Static", + ErrCount: 0, + }, + { + Value: "Dynamic", + ErrCount: 0, + }, + { + Value: "STATIC", + ErrCount: 0, + }, + { + Value: "static", + ErrCount: 0, + }, + } + + for _, tc := range cases { + _, errors := validatePublicIpAllocation(tc.Value, "azurerm_public_ip") + + if len(errors) != tc.ErrCount { + t.Fatalf("Expected the Azure RM Public IP allocation to trigger a validation error") + } + } +} + +func TestResourceAzureRMPublicIpDomainNameLabel_validation(t *testing.T) { + cases := []struct { + Value string + ErrCount int + }{ + { + Value: "tEsting123", + ErrCount: 1, + }, + { + Value: "testing123!", + ErrCount: 1, + }, + { + Value: "testing123-", + ErrCount: 1, + }, + { + Value: acctest.RandString(80), + ErrCount: 1, + }, + } + + for _, tc := range cases { + _, errors := validatePublicIpDomainNameLabel(tc.Value, "azurerm_public_ip") + + if len(errors) != tc.ErrCount { + t.Fatalf("Expected the Azure RM Public IP Domain Name Label to trigger a validation error") + } + } +} + +func TestAccAzureRMPublicIpStatic_basic(t *testing.T) { + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testCheckAzureRMPublicIpDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAzureRMVPublicIpStatic_basic, + Check: resource.ComposeTestCheckFunc( + testCheckAzureRMPublicIpExists("azurerm_public_ip.test"), + ), + }, + }, + }) +} + +func TestAccAzureRMPublicIpStatic_withTags(t *testing.T) { + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testCheckAzureRMPublicIpDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAzureRMVPublicIpStatic_withTags, + Check: resource.ComposeTestCheckFunc( + testCheckAzureRMPublicIpExists("azurerm_public_ip.test"), + resource.TestCheckResourceAttr( + "azurerm_public_ip.test", "tags.#", "2"), + resource.TestCheckResourceAttr( + "azurerm_public_ip.test", "tags.environment", "Production"), + resource.TestCheckResourceAttr( + "azurerm_public_ip.test", "tags.cost_center", "MSFT"), + ), + }, + + resource.TestStep{ + Config: testAccAzureRMVPublicIpStatic_withTagsUpdate, + Check: resource.ComposeTestCheckFunc( + testCheckAzureRMPublicIpExists("azurerm_public_ip.test"), + resource.TestCheckResourceAttr( + "azurerm_public_ip.test", "tags.#", "1"), + resource.TestCheckResourceAttr( + 
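+						// "tags.#" above asserts the number of entries in the tags map
+						// recorded in state; after this update step only one tag remains.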
"azurerm_public_ip.test", "tags.environment", "staging"), + ), + }, + }, + }) +} + +func TestAccAzureRMPublicIpStatic_update(t *testing.T) { + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testCheckAzureRMPublicIpDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAzureRMVPublicIpStatic_basic, + Check: resource.ComposeTestCheckFunc( + testCheckAzureRMPublicIpExists("azurerm_public_ip.test"), + ), + }, + + resource.TestStep{ + Config: testAccAzureRMVPublicIpStatic_update, + Check: resource.ComposeTestCheckFunc( + testCheckAzureRMPublicIpExists("azurerm_public_ip.test"), + resource.TestCheckResourceAttr( + "azurerm_public_ip.test", "domain_name_label", "mylabel01"), + ), + }, + }, + }) +} + +func TestAccAzureRMPublicIpDynamic_basic(t *testing.T) { + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testCheckAzureRMPublicIpDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAzureRMVPublicIpDynamic_basic, + Check: resource.ComposeTestCheckFunc( + testCheckAzureRMPublicIpExists("azurerm_public_ip.test"), + ), + }, + }, + }) +} + +func testCheckAzureRMPublicIpExists(name string) resource.TestCheckFunc { + return func(s *terraform.State) error { + // Ensure we have enough information in state to look up in API + rs, ok := s.RootModule().Resources[name] + if !ok { + return fmt.Errorf("Not found: %s", name) + } + + availSetName := rs.Primary.Attributes["name"] + resourceGroup, hasResourceGroup := rs.Primary.Attributes["resource_group_name"] + if !hasResourceGroup { + return fmt.Errorf("Bad: no resource group found in state for public ip: %s", availSetName) + } + + conn := testAccProvider.Meta().(*ArmClient).publicIPClient + + resp, err := conn.Get(resourceGroup, availSetName, "") + if err != nil { + return fmt.Errorf("Bad: Get on publicIPClient: %s", err) + } + + if resp.StatusCode == http.StatusNotFound { + return fmt.Errorf("Bad: Public IP %q (resource group: %q) does not exist", name, resourceGroup) + } + + return nil + } +} + +func testCheckAzureRMPublicIpDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*ArmClient).publicIPClient + + for _, rs := range s.RootModule().Resources { + if rs.Type != "azurerm_public_ip" { + continue + } + + name := rs.Primary.Attributes["name"] + resourceGroup := rs.Primary.Attributes["resource_group_name"] + + resp, err := conn.Get(resourceGroup, name, "") + + if err != nil { + return nil + } + + if resp.StatusCode != http.StatusNotFound { + return fmt.Errorf("Public IP still exists:\n%#v", resp.Properties) + } + } + + return nil +} + +var testAccAzureRMVPublicIpStatic_basic = ` +resource "azurerm_resource_group" "test" { + name = "acceptanceTestResourceGroup1" + location = "West US" +} +resource "azurerm_public_ip" "test" { + name = "acceptanceTestPublicIp1" + location = "West US" + resource_group_name = "${azurerm_resource_group.test.name}" + public_ip_address_allocation = "static" +} +` + +var testAccAzureRMVPublicIpStatic_update = ` +resource "azurerm_resource_group" "test" { + name = "acceptanceTestResourceGroup1" + location = "West US" +} +resource "azurerm_public_ip" "test" { + name = "acceptanceTestPublicIp1" + location = "West US" + resource_group_name = "${azurerm_resource_group.test.name}" + public_ip_address_allocation = "static" + domain_name_label = "mylabel01" +} +` + +var testAccAzureRMVPublicIpDynamic_basic = ` +resource 
"azurerm_resource_group" "test" { + name = "acceptanceTestResourceGroup2" + location = "West US" +} +resource "azurerm_public_ip" "test" { + name = "acceptanceTestPublicIp2" + location = "West US" + resource_group_name = "${azurerm_resource_group.test.name}" + public_ip_address_allocation = "dynamic" +} +` + +var testAccAzureRMVPublicIpStatic_withTags = ` +resource "azurerm_resource_group" "test" { + name = "acceptanceTestResourceGroup1" + location = "West US" +} +resource "azurerm_public_ip" "test" { + name = "acceptanceTestPublicIp1" + location = "West US" + resource_group_name = "${azurerm_resource_group.test.name}" + public_ip_address_allocation = "static" + + tags { + environment = "Production" + cost_center = "MSFT" + } +} +` + +var testAccAzureRMVPublicIpStatic_withTagsUpdate = ` +resource "azurerm_resource_group" "test" { + name = "acceptanceTestResourceGroup1" + location = "West US" +} +resource "azurerm_public_ip" "test" { + name = "acceptanceTestPublicIp1" + location = "West US" + resource_group_name = "${azurerm_resource_group.test.name}" + public_ip_address_allocation = "static" + + tags { + environment = "staging" + } +} +` diff --git a/builtin/providers/azurerm/resource_arm_resource_group.go b/builtin/providers/azurerm/resource_arm_resource_group.go new file mode 100644 index 0000000000..58fcb3bdbf --- /dev/null +++ b/builtin/providers/azurerm/resource_arm_resource_group.go @@ -0,0 +1,190 @@ +package azurerm + +import ( + "fmt" + "log" + "net/http" + "regexp" + "strings" + "time" + + "github.com/Azure/azure-sdk-for-go/arm/resources/resources" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceArmResourceGroup() *schema.Resource { + return &schema.Resource{ + Create: resourceArmResourceGroupCreate, + Read: resourceArmResourceGroupRead, + Update: resourceArmResourceGroupUpdate, + Exists: resourceArmResourceGroupExists, + Delete: resourceArmResourceGroupDelete, + + Schema: map[string]*schema.Schema{ + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validateArmResourceGroupName, + }, + "location": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + StateFunc: azureRMNormalizeLocation, + }, + + "tags": tagsSchema(), + }, + } +} + +func validateArmResourceGroupName(v interface{}, k string) (ws []string, es []error) { + value := v.(string) + + if len(value) > 80 { + es = append(es, fmt.Errorf("%q may not exceed 80 characters in length", k)) + } + + if strings.HasSuffix(value, ".") { + es = append(es, fmt.Errorf("%q may not end with a period", k)) + } + + if matched := regexp.MustCompile(`[\(\)\.a-zA-Z0-9_-]`).Match([]byte(value)); !matched { + es = append(es, fmt.Errorf("%q may only contain alphanumeric characters, dash, underscores, parentheses and periods", k)) + } + + return +} + +func resourceArmResourceGroupUpdate(d *schema.ResourceData, meta interface{}) error { + client := meta.(*ArmClient) + resGroupClient := client.resourceGroupClient + + if !d.HasChange("tags") { + return nil + } + + name := d.Get("name").(string) + + newTags := d.Get("tags").(map[string]interface{}) + _, err := resGroupClient.Patch(name, resources.ResourceGroup{ + Tags: expandTags(newTags), + }) + if err != nil { + return fmt.Errorf("Error issuing Azure ARM create request to update resource group %q: %s", name, err) + } + + return resourceArmResourceGroupRead(d, meta) +} + +func resourceArmResourceGroupCreate(d *schema.ResourceData, meta interface{}) 
+error {
+	client := meta.(*ArmClient)
+	resGroupClient := client.resourceGroupClient
+
+	name := d.Get("name").(string)
+	location := d.Get("location").(string)
+	tags := d.Get("tags").(map[string]interface{})
+
+	rg := resources.ResourceGroup{
+		Name:     &name,
+		Location: &location,
+		Tags:     expandTags(tags),
+	}
+
+	resp, err := resGroupClient.CreateOrUpdate(name, rg)
+	if err != nil {
+		return fmt.Errorf("Error issuing Azure ARM create request for resource group '%s': %s", name, err)
+	}
+
+	d.SetId(*resp.ID)
+
+	log.Printf("[DEBUG] Waiting for Resource Group (%s) to become available", name)
+	stateConf := &resource.StateChangeConf{
+		Pending: []string{"Accepted"},
+		Target:  []string{"Succeeded"},
+		Refresh: resourceGroupStateRefreshFunc(client, name),
+		Timeout: 10 * time.Minute,
+	}
+	if _, err := stateConf.WaitForState(); err != nil {
+		return fmt.Errorf("Error waiting for Resource Group (%s) to become available: %s", name, err)
+	}
+
+	return resourceArmResourceGroupRead(d, meta)
+}
+
+func resourceArmResourceGroupRead(d *schema.ResourceData, meta interface{}) error {
+	resGroupClient := meta.(*ArmClient).resourceGroupClient
+
+	id, err := parseAzureResourceID(d.Id())
+	if err != nil {
+		return err
+	}
+	name := id.ResourceGroup
+
+	res, err := resGroupClient.Get(name)
+	if err != nil {
+		if res.StatusCode == http.StatusNotFound {
+			d.SetId("")
+			return nil
+		}
+		return fmt.Errorf("Error issuing read request to Azure ARM for resource group '%s': %s", name, err)
+	}
+
+	d.Set("name", res.Name)
+	d.Set("location", res.Location)
+
+	flattenAndSetTags(d, res.Tags)
+
+	return nil
+}
+
+func resourceArmResourceGroupExists(d *schema.ResourceData, meta interface{}) (bool, error) {
+	resGroupClient := meta.(*ArmClient).resourceGroupClient
+
+	id, err := parseAzureResourceID(d.Id())
+	if err != nil {
+		return false, err
+	}
+	name := id.ResourceGroup
+
+	resp, err := resGroupClient.CheckExistence(name)
+	if err != nil {
+		// A 404 means the resource group is gone; any other failure is a real error.
+		if resp.StatusCode == http.StatusNotFound {
+			return false, nil
+		}
+		return false, err
+	}
+
+	return true, nil
+}
+
+func resourceArmResourceGroupDelete(d *schema.ResourceData, meta interface{}) error {
+	resGroupClient := meta.(*ArmClient).resourceGroupClient
+
+	id, err := parseAzureResourceID(d.Id())
+	if err != nil {
+		return err
+	}
+	name := id.ResourceGroup
+
+	_, err = resGroupClient.Delete(name)
+	if err != nil {
+		return err
+	}
+
+	return nil
+}
+
+func resourceGroupStateRefreshFunc(client *ArmClient, id string) resource.StateRefreshFunc {
+	return func() (interface{}, string, error) {
+		res, err := client.resourceGroupClient.Get(id)
+		if err != nil {
+			return nil, "", fmt.Errorf("Error issuing read request in resourceGroupStateRefreshFunc to Azure ARM for resource group '%s': %s", id, err)
+		}
+
+		return res, *res.Properties.ProvisioningState, nil
+	}
+}
diff --git a/builtin/providers/azurerm/resource_arm_resource_group_test.go b/builtin/providers/azurerm/resource_arm_resource_group_test.go
new file mode 100644
index 0000000000..9f2f540bb6
--- /dev/null
+++ b/builtin/providers/azurerm/resource_arm_resource_group_test.go
@@ -0,0 +1,138 @@
+package azurerm
+
+import (
+	"fmt"
+	"testing"
+
+	"github.com/Azure/azure-sdk-for-go/core/http"
+	"github.com/hashicorp/terraform/helper/resource"
+	"github.com/hashicorp/terraform/terraform"
+)
+
+func TestAccAzureRMResourceGroup_basic(t *testing.T) {
+	resource.Test(t, resource.TestCase{
+		PreCheck:     func() { testAccPreCheck(t) },
+		Providers:    testAccProviders,
+		CheckDestroy: testCheckAzureRMResourceGroupDestroy,
+		Steps:
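+		// Steps run in order against the same state: each Config is applied,
+		// then its Check functions run before the next step begins.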
[]resource.TestStep{ + resource.TestStep{ + Config: testAccAzureRMResourceGroup_basic, + Check: resource.ComposeTestCheckFunc( + testCheckAzureRMResourceGroupExists("azurerm_resource_group.test"), + ), + }, + }, + }) +} + +func TestAccAzureRMResourceGroup_withTags(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testCheckAzureRMResourceGroupDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAzureRMResourceGroup_withTags, + Check: resource.ComposeTestCheckFunc( + testCheckAzureRMResourceGroupExists("azurerm_resource_group.test"), + resource.TestCheckResourceAttr( + "azurerm_resource_group.test", "tags.#", "2"), + resource.TestCheckResourceAttr( + "azurerm_resource_group.test", "tags.environment", "Production"), + resource.TestCheckResourceAttr( + "azurerm_resource_group.test", "tags.cost_center", "MSFT"), + ), + }, + + resource.TestStep{ + Config: testAccAzureRMResourceGroup_withTagsUpdated, + Check: resource.ComposeTestCheckFunc( + testCheckAzureRMResourceGroupExists("azurerm_resource_group.test"), + resource.TestCheckResourceAttr( + "azurerm_resource_group.test", "tags.#", "1"), + resource.TestCheckResourceAttr( + "azurerm_resource_group.test", "tags.environment", "staging"), + ), + }, + }, + }) +} + +func testCheckAzureRMResourceGroupExists(name string) resource.TestCheckFunc { + return func(s *terraform.State) error { + // Ensure we have enough information in state to look up in API + rs, ok := s.RootModule().Resources[name] + if !ok { + return fmt.Errorf("Not found: %s", name) + } + + resourceGroup := rs.Primary.Attributes["name"] + + // Ensure resource group exists in API + conn := testAccProvider.Meta().(*ArmClient).resourceGroupClient + + resp, err := conn.Get(resourceGroup) + if err != nil { + return fmt.Errorf("Bad: Get on resourceGroupClient: %s", err) + } + + if resp.StatusCode == http.StatusNotFound { + return fmt.Errorf("Bad: Virtual Network %q (resource group: %q) does not exist", name, resourceGroup) + } + + return nil + } +} + +func testCheckAzureRMResourceGroupDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*ArmClient).resourceGroupClient + + for _, rs := range s.RootModule().Resources { + if rs.Type != "azurerm_resource_group" { + continue + } + + resourceGroup := rs.Primary.ID + + resp, err := conn.Get(resourceGroup) + if err != nil { + return nil + } + + if resp.StatusCode != http.StatusNotFound { + return fmt.Errorf("Resource Group still exists:\n%#v", resp.Properties) + } + } + + return nil +} + +var testAccAzureRMResourceGroup_basic = ` +resource "azurerm_resource_group" "test" { + name = "acceptanceTestResourceGroup1_basic" + location = "West US" +} +` + +var testAccAzureRMResourceGroup_withTags = ` +resource "azurerm_resource_group" "test" { + name = "acceptanceTestResourceGroup1_basic" + location = "West US" + + tags { + environment = "Production" + cost_center = "MSFT" + } +} +` + +var testAccAzureRMResourceGroup_withTagsUpdated = ` +resource "azurerm_resource_group" "test" { + name = "acceptanceTestResourceGroup1_basic" + location = "West US" + + tags { + environment = "staging" + } +} +` diff --git a/builtin/providers/azurerm/resource_arm_route.go b/builtin/providers/azurerm/resource_arm_route.go new file mode 100644 index 0000000000..ee2b71b35e --- /dev/null +++ b/builtin/providers/azurerm/resource_arm_route.go @@ -0,0 +1,161 @@ +package azurerm + +import ( + "fmt" + "log" + "net/http" + "time" + + 
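+	// helper/resource supplies the StateChangeConf poller used below to wait
+	// for the route's provisioning state to reach "Succeeded".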
"github.com/Azure/azure-sdk-for-go/arm/network" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceArmRoute() *schema.Resource { + return &schema.Resource{ + Create: resourceArmRouteCreate, + Read: resourceArmRouteRead, + Update: resourceArmRouteCreate, + Delete: resourceArmRouteDelete, + + Schema: map[string]*schema.Schema{ + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "resource_group_name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "route_table_name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "address_prefix": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "next_hop_type": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ValidateFunc: validateRouteTableNextHopType, + }, + + "next_hop_in_ip_address": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + }, + } +} + +func resourceArmRouteCreate(d *schema.ResourceData, meta interface{}) error { + client := meta.(*ArmClient) + routesClient := client.routesClient + + name := d.Get("name").(string) + rtName := d.Get("route_table_name").(string) + resGroup := d.Get("resource_group_name").(string) + + addressPrefix := d.Get("address_prefix").(string) + nextHopType := d.Get("next_hop_type").(string) + + armMutexKV.Lock(rtName) + defer armMutexKV.Unlock(rtName) + + properties := network.RoutePropertiesFormat{ + AddressPrefix: &addressPrefix, + NextHopType: network.RouteNextHopType(nextHopType), + } + + if v, ok := d.GetOk("next_hop_in_ip_address"); ok { + nextHopInIpAddress := v.(string) + properties.NextHopIPAddress = &nextHopInIpAddress + } + + route := network.Route{ + Name: &name, + Properties: &properties, + } + + resp, err := routesClient.CreateOrUpdate(resGroup, rtName, name, route) + if err != nil { + return err + } + d.SetId(*resp.ID) + + log.Printf("[DEBUG] Waiting for Route (%s) to become available", name) + stateConf := &resource.StateChangeConf{ + Pending: []string{"Accepted", "Updating"}, + Target: []string{"Succeeded"}, + Refresh: routeStateRefreshFunc(client, resGroup, rtName, name), + Timeout: 10 * time.Minute, + } + if _, err := stateConf.WaitForState(); err != nil { + return fmt.Errorf("Error waiting for Route (%s) to become available: %s", name, err) + } + + return resourceArmRouteRead(d, meta) +} + +func resourceArmRouteRead(d *schema.ResourceData, meta interface{}) error { + routesClient := meta.(*ArmClient).routesClient + + id, err := parseAzureResourceID(d.Id()) + if err != nil { + return err + } + resGroup := id.ResourceGroup + rtName := id.Path["routeTables"] + routeName := id.Path["routes"] + + resp, err := routesClient.Get(resGroup, rtName, routeName) + if resp.StatusCode == http.StatusNotFound { + d.SetId("") + return nil + } + if err != nil { + return fmt.Errorf("Error making Read request on Azure Route %s: %s", routeName, err) + } + + return nil +} + +func resourceArmRouteDelete(d *schema.ResourceData, meta interface{}) error { + client := meta.(*ArmClient) + routesClient := client.routesClient + + id, err := parseAzureResourceID(d.Id()) + if err != nil { + return err + } + resGroup := id.ResourceGroup + rtName := id.Path["routeTables"] + routeName := id.Path["routes"] + + armMutexKV.Lock(rtName) + defer armMutexKV.Unlock(rtName) + + _, err = routesClient.Delete(resGroup, rtName, routeName) + + return err +} + +func 
+routeStateRefreshFunc(client *ArmClient, resourceGroupName string, routeTableName string, routeName string) resource.StateRefreshFunc {
+	return func() (interface{}, string, error) {
+		res, err := client.routesClient.Get(resourceGroupName, routeTableName, routeName)
+		if err != nil {
+			return nil, "", fmt.Errorf("Error issuing read request in routeStateRefreshFunc to Azure ARM for route '%s' (RG: '%s') (Route Table: '%s'): %s", routeName, resourceGroupName, routeTableName, err)
+		}
+
+		return res, *res.Properties.ProvisioningState, nil
+	}
+}
diff --git a/builtin/providers/azurerm/resource_arm_route_table.go b/builtin/providers/azurerm/resource_arm_route_table.go
new file mode 100644
index 0000000000..5e6a663db4
--- /dev/null
+++ b/builtin/providers/azurerm/resource_arm_route_table.go
@@ -0,0 +1,258 @@
+package azurerm
+
+import (
+	"bytes"
+	"fmt"
+	"log"
+	"net/http"
+	"strings"
+	"time"
+
+	"github.com/Azure/azure-sdk-for-go/arm/network"
+	"github.com/hashicorp/terraform/helper/hashcode"
+	"github.com/hashicorp/terraform/helper/resource"
+	"github.com/hashicorp/terraform/helper/schema"
+)
+
+func resourceArmRouteTable() *schema.Resource {
+	return &schema.Resource{
+		Create: resourceArmRouteTableCreate,
+		Read:   resourceArmRouteTableRead,
+		Update: resourceArmRouteTableCreate,
+		Delete: resourceArmRouteTableDelete,
+
+		Schema: map[string]*schema.Schema{
+			"name": &schema.Schema{
+				Type:     schema.TypeString,
+				Required: true,
+				ForceNew: true,
+			},
+
+			"location": &schema.Schema{
+				Type:      schema.TypeString,
+				Required:  true,
+				ForceNew:  true,
+				StateFunc: azureRMNormalizeLocation,
+			},
+
+			"resource_group_name": &schema.Schema{
+				Type:     schema.TypeString,
+				Required: true,
+				ForceNew: true,
+			},
+
+			"route": &schema.Schema{
+				Type:     schema.TypeSet,
+				Optional: true,
+				Computed: true,
+				Elem: &schema.Resource{
+					Schema: map[string]*schema.Schema{
+						"name": &schema.Schema{
+							Type:     schema.TypeString,
+							Required: true,
+						},
+
+						"address_prefix": &schema.Schema{
+							Type:     schema.TypeString,
+							Required: true,
+						},
+
+						"next_hop_type": &schema.Schema{
+							Type:         schema.TypeString,
+							Required:     true,
+							ValidateFunc: validateRouteTableNextHopType,
+						},
+
+						"next_hop_in_ip_address": &schema.Schema{
+							Type:     schema.TypeString,
+							Optional: true,
+							Computed: true,
+						},
+					},
+				},
+				Set: resourceArmRouteTableRouteHash,
+			},
+
+			"subnets": &schema.Schema{
+				Type:     schema.TypeSet,
+				Optional: true,
+				Computed: true,
+				Elem:     &schema.Schema{Type: schema.TypeString},
+				Set:      schema.HashString,
+			},
+
+			"tags": tagsSchema(),
+		},
+	}
+}
+
+func resourceArmRouteTableCreate(d *schema.ResourceData, meta interface{}) error {
+	client := meta.(*ArmClient)
+	routeTablesClient := client.routeTablesClient
+
+	log.Printf("[INFO] preparing arguments for Azure ARM Route Table creation.")
+
+	name := d.Get("name").(string)
+	location := d.Get("location").(string)
+	resGroup := d.Get("resource_group_name").(string)
+	tags := d.Get("tags").(map[string]interface{})
+
+	routeSet := network.RouteTable{
+		Name:     &name,
+		Location: &location,
+		Tags:     expandTags(tags),
+	}
+
+	if _, ok := d.GetOk("route"); ok {
+		properties := network.RouteTablePropertiesFormat{}
+		routes, routeErr := expandAzureRmRouteTableRoutes(d)
+		if routeErr != nil {
+			return fmt.Errorf("Error Building list of Route Table Routes: %s", routeErr)
+		}
+		if len(routes) > 0 {
+			// Attach the expanded routes; without this assignment the routes
+			// would never be sent to the API.
+			properties.Routes = &routes
+			routeSet.Properties = &properties
+		}
+	}
+
+	resp, err := routeTablesClient.CreateOrUpdate(resGroup, name, routeSet)
+	if err != nil {
+		return err
+	}
+
+	d.SetId(*resp.ID)
+
+	log.Printf("[DEBUG] Waiting for Route Table (%s) to become available", name)
+	stateConf := &resource.StateChangeConf{
+		Pending: []string{"Accepted", "Updating"},
+		Target:  []string{"Succeeded"},
+		Refresh: routeTableStateRefreshFunc(client, resGroup, name),
+		Timeout: 10 * time.Minute,
+	}
+	if _, err := stateConf.WaitForState(); err != nil {
+		return fmt.Errorf("Error waiting for Route Table (%s) to become available: %s", name, err)
+	}
+
+	return resourceArmRouteTableRead(d, meta)
+}
+
+func resourceArmRouteTableRead(d *schema.ResourceData, meta interface{}) error {
+	routeTablesClient := meta.(*ArmClient).routeTablesClient
+
+	id, err := parseAzureResourceID(d.Id())
+	if err != nil {
+		return err
+	}
+	resGroup := id.ResourceGroup
+	name := id.Path["routeTables"]
+
+	resp, err := routeTablesClient.Get(resGroup, name, "")
+	if resp.StatusCode == http.StatusNotFound {
+		d.SetId("")
+		return nil
+	}
+	if err != nil {
+		return fmt.Errorf("Error making Read request on Azure Route Table %s: %s", name, err)
+	}
+
+	if resp.Properties.Subnets != nil {
+		if len(*resp.Properties.Subnets) > 0 {
+			subnets := make([]string, 0, len(*resp.Properties.Subnets))
+			for _, subnet := range *resp.Properties.Subnets {
+				id := subnet.ID
+				subnets = append(subnets, *id)
+			}
+
+			if err := d.Set("subnets", subnets); err != nil {
+				return err
+			}
+		}
+	}
+
+	flattenAndSetTags(d, resp.Tags)
+
+	return nil
+}
+
+func resourceArmRouteTableDelete(d *schema.ResourceData, meta interface{}) error {
+	routeTablesClient := meta.(*ArmClient).routeTablesClient
+
+	id, err := parseAzureResourceID(d.Id())
+	if err != nil {
+		return err
+	}
+	resGroup := id.ResourceGroup
+	name := id.Path["routeTables"]
+
+	_, err = routeTablesClient.Delete(resGroup, name)
+
+	return err
+}
+
+func expandAzureRmRouteTableRoutes(d *schema.ResourceData) ([]network.Route, error) {
+	configs := d.Get("route").(*schema.Set).List()
+	routes := make([]network.Route, 0, len(configs))
+
+	for _, configRaw := range configs {
+		data := configRaw.(map[string]interface{})
+
+		address_prefix := data["address_prefix"].(string)
+		next_hop_type := data["next_hop_type"].(string)
+
+		properties := network.RoutePropertiesFormat{
+			AddressPrefix: &address_prefix,
+			NextHopType:   network.RouteNextHopType(next_hop_type),
+		}
+
+		if v := data["next_hop_in_ip_address"].(string); v != "" {
+			properties.NextHopIPAddress = &v
+		}
+
+		name := data["name"].(string)
+		route := network.Route{
+			Name:       &name,
+			Properties: &properties,
+		}
+
+		routes = append(routes, route)
+	}
+
+	return routes, nil
+}
+
+func routeTableStateRefreshFunc(client *ArmClient, resourceGroupName string, routeTableName string) resource.StateRefreshFunc {
+	return func() (interface{}, string, error) {
+		res, err := client.routeTablesClient.Get(resourceGroupName, routeTableName, "")
+		if err != nil {
+			return nil, "", fmt.Errorf("Error issuing read request in routeTableStateRefreshFunc to Azure ARM for route table '%s' (RG: '%s'): %s", routeTableName, resourceGroupName, err)
+		}
+
+		return res, *res.Properties.ProvisioningState, nil
+	}
+}
+
+func resourceArmRouteTableRouteHash(v interface{}) int {
+	var buf bytes.Buffer
+	m := v.(map[string]interface{})
+	buf.WriteString(fmt.Sprintf("%s-", m["name"].(string)))
+	buf.WriteString(fmt.Sprintf("%s-", m["address_prefix"].(string)))
+	buf.WriteString(fmt.Sprintf("%s-", m["next_hop_type"].(string)))
+
+	return hashcode.String(buf.String())
+}
+
+func validateRouteTableNextHopType(v interface{}, k string) (ws []string, errors []error) {
+	value := strings.ToLower(v.(string))
+	hopTypes := map[string]bool{
+		"virtualnetworkgateway": true,
+		"vnetlocal":             true,
+		"internet":              true,
+		"virtualappliance":      true,
+		"none":                  true,
+	}
+
+	if !hopTypes[value] {
+		errors = append(errors, fmt.Errorf("Route Table NextHopType can only be VirtualNetworkGateway, VnetLocal, Internet, VirtualAppliance or None"))
+	}
+	return
+}
diff --git a/builtin/providers/azurerm/resource_arm_route_table_test.go b/builtin/providers/azurerm/resource_arm_route_table_test.go
new file mode 100644
index 0000000000..552dfd94fd
--- /dev/null
+++ b/builtin/providers/azurerm/resource_arm_route_table_test.go
@@ -0,0 +1,282 @@
+package azurerm
+
+import (
+	"fmt"
+	"net/http"
+	"testing"
+
+	"github.com/hashicorp/terraform/helper/resource"
+	"github.com/hashicorp/terraform/terraform"
+)
+
+func TestResourceAzureRMRouteTableNextHopType_validation(t *testing.T) {
+	cases := []struct {
+		Value    string
+		ErrCount int
+	}{
+		{
+			Value:    "Random",
+			ErrCount: 1,
+		},
+		{
+			Value:    "VirtualNetworkGateway",
+			ErrCount: 0,
+		},
+		{
+			Value:    "VNETLocal",
+			ErrCount: 0,
+		},
+		{
+			Value:    "Internet",
+			ErrCount: 0,
+		},
+		{
+			Value:    "VirtualAppliance",
+			ErrCount: 0,
+		},
+		{
+			Value:    "None",
+			ErrCount: 0,
+		},
+		{
+			Value:    "VIRTUALNETWORKGATEWAY",
+			ErrCount: 0,
+		},
+		{
+			Value:    "virtualnetworkgateway",
+			ErrCount: 0,
+		},
+	}
+
+	for _, tc := range cases {
+		_, errors := validateRouteTableNextHopType(tc.Value, "azurerm_route_table")
+
+		if len(errors) != tc.ErrCount {
+			t.Fatalf("Expected the Azure RM Route Table nextHopType to trigger a validation error")
+		}
+	}
+}
+
+func TestAccAzureRMRouteTable_basic(t *testing.T) {
+
+	resource.Test(t, resource.TestCase{
+		PreCheck:     func() { testAccPreCheck(t) },
+		Providers:    testAccProviders,
+		CheckDestroy: testCheckAzureRMRouteTableDestroy,
+		Steps: []resource.TestStep{
+			resource.TestStep{
+				Config: testAccAzureRMRouteTable_basic,
+				Check: resource.ComposeTestCheckFunc(
+					testCheckAzureRMRouteTableExists("azurerm_route_table.test"),
+				),
+			},
+		},
+	})
+}
+
+func TestAccAzureRMRouteTable_withTags(t *testing.T) {
+
+	resource.Test(t, resource.TestCase{
+		PreCheck:     func() { testAccPreCheck(t) },
+		Providers:    testAccProviders,
+		CheckDestroy: testCheckAzureRMRouteTableDestroy,
+		Steps: []resource.TestStep{
+			resource.TestStep{
+				Config: testAccAzureRMRouteTable_withTags,
+				Check: resource.ComposeTestCheckFunc(
+					testCheckAzureRMRouteTableExists("azurerm_route_table.test"),
+					resource.TestCheckResourceAttr(
+						"azurerm_route_table.test", "tags.#", "2"),
+					resource.TestCheckResourceAttr(
+						"azurerm_route_table.test", "tags.environment", "Production"),
+					resource.TestCheckResourceAttr(
+						"azurerm_route_table.test", "tags.cost_center", "MSFT"),
+				),
+			},
+
+			resource.TestStep{
+				Config: testAccAzureRMRouteTable_withTagsUpdate,
+				Check: resource.ComposeTestCheckFunc(
+					testCheckAzureRMRouteTableExists("azurerm_route_table.test"),
+					resource.TestCheckResourceAttr(
+						"azurerm_route_table.test", "tags.#", "1"),
+					resource.TestCheckResourceAttr(
+						"azurerm_route_table.test", "tags.environment", "staging"),
+				),
+			},
+		},
+	})
+}
+
+func TestAccAzureRMRouteTable_multipleRoutes(t *testing.T) {
+
+	resource.Test(t, resource.TestCase{
+		PreCheck:     func() { testAccPreCheck(t) },
+		Providers:    testAccProviders,
+		CheckDestroy: testCheckAzureRMRouteTableDestroy,
+		Steps: []resource.TestStep{
+			resource.TestStep{
+				Config: testAccAzureRMRouteTable_basic,
+				Check: resource.ComposeTestCheckFunc(
+					testCheckAzureRMRouteTableExists("azurerm_route_table.test"),
+					resource.TestCheckResourceAttr(
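+						// "route.#" is the size of the route set; the second step below
+						// asserts that it grows to 2 after adding another route block.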
"azurerm_route_table.test", "route.#", "1"), + ), + }, + + resource.TestStep{ + Config: testAccAzureRMRouteTable_multipleRoutes, + Check: resource.ComposeTestCheckFunc( + testCheckAzureRMRouteTableExists("azurerm_route_table.test"), + resource.TestCheckResourceAttr( + "azurerm_route_table.test", "route.#", "2"), + ), + }, + }, + }) +} + +func testCheckAzureRMRouteTableExists(name string) resource.TestCheckFunc { + return func(s *terraform.State) error { + + rs, ok := s.RootModule().Resources[name] + if !ok { + return fmt.Errorf("Not found: %s", name) + } + + name := rs.Primary.Attributes["name"] + resourceGroup, hasResourceGroup := rs.Primary.Attributes["resource_group_name"] + if !hasResourceGroup { + return fmt.Errorf("Bad: no resource group found in state for route table: %s", name) + } + + conn := testAccProvider.Meta().(*ArmClient).routeTablesClient + + resp, err := conn.Get(resourceGroup, name, "") + if err != nil { + return fmt.Errorf("Bad: Get on routeTablesClient: %s", err) + } + + if resp.StatusCode == http.StatusNotFound { + return fmt.Errorf("Bad: Route Table %q (resource group: %q) does not exist", name, resourceGroup) + } + + return nil + } +} + +func testCheckAzureRMRouteTableDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*ArmClient).routeTablesClient + + for _, rs := range s.RootModule().Resources { + if rs.Type != "azurerm_route_table" { + continue + } + + name := rs.Primary.Attributes["name"] + resourceGroup := rs.Primary.Attributes["resource_group_name"] + + resp, err := conn.Get(resourceGroup, name, "") + + if err != nil { + return nil + } + + if resp.StatusCode != http.StatusNotFound { + return fmt.Errorf("Route Table still exists:\n%#v", resp.Properties) + } + } + + return nil +} + +var testAccAzureRMRouteTable_basic = ` +resource "azurerm_resource_group" "test" { + name = "acceptanceTestResourceGroup1" + location = "West US" +} + +resource "azurerm_route_table" "test" { + name = "acceptanceTestSecurityGroup1" + location = "West US" + resource_group_name = "${azurerm_resource_group.test.name}" + + route { + name = "route1" + address_prefix = "*" + next_hop_type = "internet" + } +} +` + +var testAccAzureRMRouteTable_multipleRoutes = ` +resource "azurerm_resource_group" "test" { + name = "acceptanceTestResourceGroup1" + location = "West US" +} + +resource "azurerm_route_table" "test" { + name = "acceptanceTestSecurityGroup1" + location = "West US" + resource_group_name = "${azurerm_resource_group.test.name}" + + route { + name = "route1" + address_prefix = "*" + next_hop_type = "internet" + } + + route { + name = "route2" + address_prefix = "*" + next_hop_type = "virtualappliance" + } +} +` + +var testAccAzureRMRouteTable_withTags = ` +resource "azurerm_resource_group" "test" { + name = "acceptanceTestResourceGroup1" + location = "West US" +} + +resource "azurerm_route_table" "test" { + name = "acceptanceTestSecurityGroup1" + location = "West US" + resource_group_name = "${azurerm_resource_group.test.name}" + + route { + name = "route1" + address_prefix = "*" + next_hop_type = "internet" + } + + tags { + environment = "Production" + cost_center = "MSFT" + } +} +` + +var testAccAzureRMRouteTable_withTagsUpdate = ` +resource "azurerm_resource_group" "test" { + name = "acceptanceTestResourceGroup1" + location = "West US" +} + +resource "azurerm_route_table" "test" { + name = "acceptanceTestSecurityGroup1" + location = "West US" + resource_group_name = "${azurerm_resource_group.test.name}" + + route { + name = "route1" + address_prefix = "*" + 
next_hop_type = "internet" + } + + tags { + environment = "staging" + } +} +` diff --git a/builtin/providers/azurerm/resource_arm_route_test.go b/builtin/providers/azurerm/resource_arm_route_test.go new file mode 100644 index 0000000000..3c8d6e8be4 --- /dev/null +++ b/builtin/providers/azurerm/resource_arm_route_test.go @@ -0,0 +1,151 @@ +package azurerm + +import ( + "fmt" + "net/http" + "testing" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAzureRMRoute_basic(t *testing.T) { + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testCheckAzureRMRouteDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAzureRMRoute_basic, + Check: resource.ComposeTestCheckFunc( + testCheckAzureRMRouteExists("azurerm_route.test"), + ), + }, + }, + }) +} + +func TestAccAzureRMRoute_multipleRoutes(t *testing.T) { + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testCheckAzureRMRouteDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAzureRMRoute_basic, + Check: resource.ComposeTestCheckFunc( + testCheckAzureRMRouteExists("azurerm_route.test"), + ), + }, + + resource.TestStep{ + Config: testAccAzureRMRoute_multipleRoutes, + Check: resource.ComposeTestCheckFunc( + testCheckAzureRMRouteExists("azurerm_route.test1"), + ), + }, + }, + }) +} + +func testCheckAzureRMRouteExists(name string) resource.TestCheckFunc { + return func(s *terraform.State) error { + + rs, ok := s.RootModule().Resources[name] + if !ok { + return fmt.Errorf("Not found: %s", name) + } + + name := rs.Primary.Attributes["name"] + rtName := rs.Primary.Attributes["route_table_name"] + resourceGroup, hasResourceGroup := rs.Primary.Attributes["resource_group_name"] + if !hasResourceGroup { + return fmt.Errorf("Bad: no resource group found in state for route: %s", name) + } + + conn := testAccProvider.Meta().(*ArmClient).routesClient + + resp, err := conn.Get(resourceGroup, rtName, name) + if err != nil { + return fmt.Errorf("Bad: Get on routesClient: %s", err) + } + + if resp.StatusCode == http.StatusNotFound { + return fmt.Errorf("Bad: Route %q (resource group: %q) does not exist", name, resourceGroup) + } + + return nil + } +} + +func testCheckAzureRMRouteDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*ArmClient).routesClient + + for _, rs := range s.RootModule().Resources { + if rs.Type != "azurerm_route" { + continue + } + + name := rs.Primary.Attributes["name"] + rtName := rs.Primary.Attributes["route_table_name"] + resourceGroup := rs.Primary.Attributes["resource_group_name"] + + resp, err := conn.Get(resourceGroup, rtName, name) + + if err != nil { + return nil + } + + if resp.StatusCode != http.StatusNotFound { + return fmt.Errorf("Route still exists:\n%#v", resp.Properties) + } + } + + return nil +} + +var testAccAzureRMRoute_basic = ` +resource "azurerm_resource_group" "test" { + name = "acceptanceTestResourceGroup1" + location = "West US" +} + +resource "azurerm_route_table" "test" { + name = "acceptanceTestRouteTable1" + location = "West US" + resource_group_name = "${azurerm_resource_group.test.name}" +} + +resource "azurerm_route" "test" { + name = "acceptanceTestRoute1" + resource_group_name = "${azurerm_resource_group.test.name}" + route_table_name = "${azurerm_route_table.test.name}" + + address_prefix = "10.1.0.0/16" + next_hop_type = 
"vnetlocal" +} +` + +var testAccAzureRMRoute_multipleRoutes = ` +resource "azurerm_resource_group" "test" { + name = "acceptanceTestResourceGroup1" + location = "West US" +} + +resource "azurerm_route_table" "test" { + name = "acceptanceTestRouteTable1" + location = "West US" + resource_group_name = "${azurerm_resource_group.test.name}" +} + +resource "azurerm_route" "test1" { + name = "acceptanceTestRoute2" + resource_group_name = "${azurerm_resource_group.test.name}" + route_table_name = "${azurerm_route_table.test.name}" + + address_prefix = "10.2.0.0/16" + next_hop_type = "none" +} +` diff --git a/builtin/providers/azurerm/resource_arm_storage_account.go b/builtin/providers/azurerm/resource_arm_storage_account.go new file mode 100644 index 0000000000..bd2a9cdf8b --- /dev/null +++ b/builtin/providers/azurerm/resource_arm_storage_account.go @@ -0,0 +1,292 @@ +package azurerm + +import ( + "fmt" + "net/http" + "regexp" + "strings" + + "github.com/Azure/azure-sdk-for-go/arm/storage" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceArmStorageAccount() *schema.Resource { + return &schema.Resource{ + Create: resourceArmStorageAccountCreate, + Read: resourceArmStorageAccountRead, + Update: resourceArmStorageAccountUpdate, + Delete: resourceArmStorageAccountDelete, + + Schema: map[string]*schema.Schema{ + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validateArmStorageAccountName, + }, + + "resource_group_name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "location": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + StateFunc: azureRMNormalizeLocation, + }, + + "account_type": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ValidateFunc: validateArmStorageAccountType, + }, + + "primary_location": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + + "secondary_location": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + + "primary_blob_endpoint": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + + "secondary_blob_endpoint": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + + "primary_queue_endpoint": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + + "secondary_queue_endpoint": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + + "primary_table_endpoint": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + + "secondary_table_endpoint": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + + // NOTE: The API does not appear to expose a secondary file endpoint + "primary_file_endpoint": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + + "tags": tagsSchema(), + }, + } +} + +func resourceArmStorageAccountCreate(d *schema.ResourceData, meta interface{}) error { + client := meta.(*ArmClient).storageServiceClient + + resourceGroupName := d.Get("resource_group_name").(string) + storageAccountName := d.Get("name").(string) + accountType := d.Get("account_type").(string) + location := d.Get("location").(string) + tags := d.Get("tags").(map[string]interface{}) + + opts := storage.AccountCreateParameters{ + Location: &location, + Properties: &storage.AccountPropertiesCreateParameters{ + AccountType: storage.AccountType(accountType), + }, + Tags: expandTags(tags), + } + + accResp, err := client.Create(resourceGroupName, storageAccountName, opts) + if err != nil { + return fmt.Errorf("Error 
creating Azure Storage Account '%s': %s", storageAccountName, err) + } + _, err = pollIndefinitelyAsNeeded(client.Client, accResp.Response.Response, http.StatusOK) + if err != nil { + return fmt.Errorf("Error creating Azure Storage Account %q: %s", storageAccountName, err) + } + + // The only way to get the ID back apparently is to read the resource again + account, err := client.GetProperties(resourceGroupName, storageAccountName) + if err != nil { + return fmt.Errorf("Error retrieving Azure Storage Account %q: %s", storageAccountName, err) + } + + d.SetId(*account.ID) + + return resourceArmStorageAccountRead(d, meta) +} + +// resourceArmStorageAccountUpdate is unusual in the ARM API where most resources have a combined +// and idempotent operation for CreateOrUpdate. In particular updating all of the parameters +// available requires a call to Update per parameter... +func resourceArmStorageAccountUpdate(d *schema.ResourceData, meta interface{}) error { + client := meta.(*ArmClient).storageServiceClient + id, err := parseAzureResourceID(d.Id()) + if err != nil { + return err + } + storageAccountName := id.Path["storageAccounts"] + resourceGroupName := id.ResourceGroup + + d.Partial(true) + + if d.HasChange("account_type") { + accountType := d.Get("account_type").(string) + + opts := storage.AccountUpdateParameters{ + Properties: &storage.AccountPropertiesUpdateParameters{ + AccountType: storage.AccountType(accountType), + }, + } + accResp, err := client.Update(resourceGroupName, storageAccountName, opts) + if err != nil { + return fmt.Errorf("Error updating Azure Storage Account type %q: %s", storageAccountName, err) + } + _, err = pollIndefinitelyAsNeeded(client.Client, accResp.Response.Response, http.StatusOK) + if err != nil { + return fmt.Errorf("Error updating Azure Storage Account type %q: %s", storageAccountName, err) + } + + d.SetPartial("account_type") + } + + if d.HasChange("tags") { + tags := d.Get("tags").(map[string]interface{}) + + opts := storage.AccountUpdateParameters{ + Tags: expandTags(tags), + } + accResp, err := client.Update(resourceGroupName, storageAccountName, opts) + if err != nil { + return fmt.Errorf("Error updating Azure Storage Account tags %q: %s", storageAccountName, err) + } + _, err = pollIndefinitelyAsNeeded(client.Client, accResp.Response.Response, http.StatusOK) + if err != nil { + return fmt.Errorf("Error updating Azure Storage Account tags %q: %s", storageAccountName, err) + } + + d.SetPartial("tags") + } + + d.Partial(false) + return nil +} + +func resourceArmStorageAccountRead(d *schema.ResourceData, meta interface{}) error { + client := meta.(*ArmClient).storageServiceClient + + id, err := parseAzureResourceID(d.Id()) + if err != nil { + return err + } + name := id.Path["storageAccounts"] + resGroup := id.ResourceGroup + + resp, err := client.GetProperties(resGroup, name) + if err != nil { + if resp.StatusCode == http.StatusNoContent { + d.SetId("") + return nil + } + + return fmt.Errorf("Error reading the state of AzureRM Storage Account %q: %s", name, err) + } + + d.Set("location", resp.Location) + d.Set("account_type", resp.Properties.AccountType) + d.Set("primary_location", resp.Properties.PrimaryLocation) + d.Set("secondary_location", resp.Properties.SecondaryLocation) + + if resp.Properties.PrimaryEndpoints != nil { + d.Set("primary_blob_endpoint", resp.Properties.PrimaryEndpoints.Blob) + d.Set("primary_queue_endpoint", resp.Properties.PrimaryEndpoints.Queue) + d.Set("primary_table_endpoint", resp.Properties.PrimaryEndpoints.Table) + 
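// The file service has a primary endpoint only; as the schema note above says, + // the API does not appear to expose a secondary file endpoint. +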
d.Set("primary_file_endpoint", resp.Properties.PrimaryEndpoints.File) + } + + if resp.Properties.SecondaryEndpoints != nil { + if resp.Properties.SecondaryEndpoints.Blob != nil { + d.Set("secondary_blob_endpoint", resp.Properties.SecondaryEndpoints.Blob) + } else { + d.Set("secondary_blob_endpoint", "") + } + if resp.Properties.SecondaryEndpoints.Queue != nil { + d.Set("secondary_queue_endpoint", resp.Properties.SecondaryEndpoints.Queue) + } else { + d.Set("secondary_queue_endpoint", "") + } + if resp.Properties.SecondaryEndpoints.Table != nil { + d.Set("secondary_table_endpoint", resp.Properties.SecondaryEndpoints.Table) + } else { + d.Set("secondary_table_endpoint", "") + } + } + + flattenAndSetTags(d, resp.Tags) + + return nil +} + +func resourceArmStorageAccountDelete(d *schema.ResourceData, meta interface{}) error { + client := meta.(*ArmClient).storageServiceClient + + id, err := parseAzureResourceID(d.Id()) + if err != nil { + return err + } + name := id.Path["storageAccounts"] + resGroup := id.ResourceGroup + + accResp, err := client.Delete(resGroup, name) + if err != nil { + return fmt.Errorf("Error issuing AzureRM delete request for storage account %q: %s", name, err) + } + _, err = pollIndefinitelyAsNeeded(client.Client, accResp.Response, http.StatusNotFound) + if err != nil { + return fmt.Errorf("Error polling for AzureRM delete request for storage account %q: %s", name, err) + } + + return nil +} + +func validateArmStorageAccountName(v interface{}, k string) (ws []string, es []error) { + input := v.(string) + + if !regexp.MustCompile(`\A([a-z0-9]{3,24})\z`).MatchString(input) { + es = append(es, fmt.Errorf("name can only consist of lowercase letters and numbers, and must be between 3 and 24 characters long")) + } + + return +} + +func validateArmStorageAccountType(v interface{}, k string) (ws []string, es []error) { + validAccountTypes := []string{"standard_lrs", "standard_zrs", + "standard_grs", "standard_ragrs", "premium_lrs"} + + input := strings.ToLower(v.(string)) + + for _, valid := range validAccountTypes { + if valid == input { + return + } + } + + es = append(es, fmt.Errorf("Invalid storage account type %q", input)) + return +} diff --git a/builtin/providers/azurerm/resource_arm_storage_account_test.go b/builtin/providers/azurerm/resource_arm_storage_account_test.go new file mode 100644 index 0000000000..6cad5a1f50 --- /dev/null +++ b/builtin/providers/azurerm/resource_arm_storage_account_test.go @@ -0,0 +1,166 @@ +package azurerm + +import ( + "fmt" + "net/http" + "testing" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestValidateArmStorageAccountType(t *testing.T) { + testCases := []struct { + input string + shouldError bool + }{ + {"standard_lrs", false}, + {"invalid", true}, + } + + for _, test := range testCases { + _, es := validateArmStorageAccountType(test.input, "account_type") + + if test.shouldError && len(es) == 0 { + t.Fatalf("Expected validating account_type %q to fail", test.input) + } + } +} + +func TestValidateArmStorageAccountName(t *testing.T) { + testCases := []struct { + input string + shouldError bool + }{ + {"ab", true}, + {"ABC", true}, + {"abc", false}, + {"123456789012345678901234", false}, + {"1234567890123456789012345", true}, + {"abc12345", false}, + } + + for _, test := range testCases { + _, es := validateArmStorageAccountName(test.input, "name") + + if test.shouldError && len(es) == 0 { + t.Fatalf("Expected validating name %q to fail", test.input) + } + } +} + +func 
TestAccAzureRMStorageAccount_basic(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testCheckAzureRMStorageAccountDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAzureRMStorageAccount_basic, + Check: resource.ComposeTestCheckFunc( + testCheckAzureRMStorageAccountExists("azurerm_storage_account.testsa"), + resource.TestCheckResourceAttr("azurerm_storage_account.testsa", "account_type", "Standard_LRS"), + resource.TestCheckResourceAttr("azurerm_storage_account.testsa", "tags.#", "1"), + resource.TestCheckResourceAttr("azurerm_storage_account.testsa", "tags.environment", "production"), + ), + }, + + resource.TestStep{ + Config: testAccAzureRMStorageAccount_update, + Check: resource.ComposeTestCheckFunc( + testCheckAzureRMStorageAccountExists("azurerm_storage_account.testsa"), + resource.TestCheckResourceAttr("azurerm_storage_account.testsa", "account_type", "Standard_GRS"), + resource.TestCheckResourceAttr("azurerm_storage_account.testsa", "tags.#", "1"), + resource.TestCheckResourceAttr("azurerm_storage_account.testsa", "tags.environment", "staging"), + ), + }, + }, + }) +} + +func testCheckAzureRMStorageAccountExists(name string) resource.TestCheckFunc { + return func(s *terraform.State) error { + // Ensure we have enough information in state to look up in API + rs, ok := s.RootModule().Resources[name] + if !ok { + return fmt.Errorf("Not found: %s", name) + } + + storageAccount := rs.Primary.Attributes["name"] + resourceGroup := rs.Primary.Attributes["resource_group_name"] + + // Ensure resource group exists in API + conn := testAccProvider.Meta().(*ArmClient).storageServiceClient + + resp, err := conn.GetProperties(resourceGroup, storageAccount) + if err != nil { + return fmt.Errorf("Bad: Get on storageServiceClient: %s", err) + } + + if resp.StatusCode == http.StatusNotFound { + return fmt.Errorf("Bad: StorageAccount %q (resource group: %q) does not exist", name, resourceGroup) + } + + return nil + } +} + +func testCheckAzureRMStorageAccountDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*ArmClient).storageServiceClient + + for _, rs := range s.RootModule().Resources { + if rs.Type != "azurerm_storage_account" { + continue + } + + name := rs.Primary.Attributes["name"] + resourceGroup := rs.Primary.Attributes["resource_group_name"] + + resp, err := conn.GetProperties(resourceGroup, name) + if err != nil { + return nil + } + + if resp.StatusCode != http.StatusNotFound { + return fmt.Errorf("Storage Account still exists:\n%#v", resp.Properties) + } + } + + return nil +} + +var testAccAzureRMStorageAccount_basic = ` +resource "azurerm_resource_group" "testrg" { + name = "testAccAzureRMStorageAccountBasic" + location = "westus" +} + +resource "azurerm_storage_account" "testsa" { + name = "unlikely23exst2acct1435" + resource_group_name = "${azurerm_resource_group.testrg.name}" + + location = "westus" + account_type = "Standard_LRS" + + tags { + environment = "production" + } +}` + +var testAccAzureRMStorageAccount_update = ` +resource "azurerm_resource_group" "testrg" { + name = "testAccAzureRMStorageAccountBasic" + location = "westus" +} + +resource "azurerm_storage_account" "testsa" { + name = "unlikely23exst2acct1435" + resource_group_name = "${azurerm_resource_group.testrg.name}" + + location = "westus" + account_type = "Standard_GRS" + + tags { + environment = "staging" + } +}` diff --git a/builtin/providers/azurerm/resource_arm_subnet.go 
b/builtin/providers/azurerm/resource_arm_subnet.go new file mode 100644 index 0000000000..58acbc6ccb --- /dev/null +++ b/builtin/providers/azurerm/resource_arm_subnet.go @@ -0,0 +1,188 @@ +package azurerm + +import ( + "fmt" + "log" + "net/http" + "time" + + "github.com/Azure/azure-sdk-for-go/arm/network" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceArmSubnet() *schema.Resource { + return &schema.Resource{ + Create: resourceArmSubnetCreate, + Read: resourceArmSubnetRead, + Update: resourceArmSubnetCreate, + Delete: resourceArmSubnetDelete, + + Schema: map[string]*schema.Schema{ + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "resource_group_name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "virtual_network_name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "address_prefix": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "network_security_group_id": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + + "route_table_id": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + + "ip_configurations": &schema.Schema{ + Type: schema.TypeSet, + Optional: true, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + }, + }, + } +} + +func resourceArmSubnetCreate(d *schema.ResourceData, meta interface{}) error { + client := meta.(*ArmClient) + subnetClient := client.subnetClient + + log.Printf("[INFO] preparing arguments for Azure ARM Subnet creation.") + + name := d.Get("name").(string) + vnetName := d.Get("virtual_network_name").(string) + resGroup := d.Get("resource_group_name").(string) + addressPrefix := d.Get("address_prefix").(string) + + armMutexKV.Lock(vnetName) + defer armMutexKV.Unlock(vnetName) + + properties := network.SubnetPropertiesFormat{ + AddressPrefix: &addressPrefix, + } + + if v, ok := d.GetOk("network_security_group_id"); ok { + nsgId := v.(string) + properties.NetworkSecurityGroup = &network.SecurityGroup{ + ID: &nsgId, + } + } + + if v, ok := d.GetOk("route_table_id"); ok { + rtId := v.(string) + properties.RouteTable = &network.RouteTable{ + ID: &rtId, + } + } + + subnet := network.Subnet{ + Name: &name, + Properties: &properties, + } + + resp, err := subnetClient.CreateOrUpdate(resGroup, vnetName, name, subnet) + if err != nil { + return err + } + + d.SetId(*resp.ID) + + log.Printf("[DEBUG] Waiting for Subnet (%s) to become available", name) + stateConf := &resource.StateChangeConf{ + Pending: []string{"Accepted", "Updating"}, + Target: []string{"Succeeded"}, + Refresh: subnetRuleStateRefreshFunc(client, resGroup, vnetName, name), + Timeout: 10 * time.Minute, + } + if _, err := stateConf.WaitForState(); err != nil { + return fmt.Errorf("Error waiting for Subnet (%s) to become available: %s", name, err) + } + + return resourceArmSubnetRead(d, meta) +} + +func resourceArmSubnetRead(d *schema.ResourceData, meta interface{}) error { + subnetClient := meta.(*ArmClient).subnetClient + + id, err := parseAzureResourceID(d.Id()) + if err != nil { + return err + } + resGroup := id.ResourceGroup + vnetName := id.Path["virtualNetworks"] + name := id.Path["subnets"] + + resp, err := subnetClient.Get(resGroup, vnetName, name, "") + if resp.StatusCode == http.StatusNotFound { + d.SetId("") + return nil + } + if err != nil { + return fmt.Errorf("Error 
making Read request on Azure Subnet %s: %s", name, err) + } + + if resp.Properties.IPConfigurations != nil && len(*resp.Properties.IPConfigurations) > 0 { + ips := make([]string, 0, len(*resp.Properties.IPConfigurations)) + for _, ip := range *resp.Properties.IPConfigurations { + ips = append(ips, *ip.ID) + } + + if err := d.Set("ip_configurations", ips); err != nil { + return err + } + } + + return nil +} + +func resourceArmSubnetDelete(d *schema.ResourceData, meta interface{}) error { + subnetClient := meta.(*ArmClient).subnetClient + + id, err := parseAzureResourceID(d.Id()) + if err != nil { + return err + } + resGroup := id.ResourceGroup + name := id.Path["subnets"] + vnetName := id.Path["virtualNetworks"] + + armMutexKV.Lock(vnetName) + defer armMutexKV.Unlock(vnetName) + + _, err = subnetClient.Delete(resGroup, vnetName, name) + + return err +} + +func subnetRuleStateRefreshFunc(client *ArmClient, resourceGroupName string, virtualNetworkName string, subnetName string) resource.StateRefreshFunc { + return func() (interface{}, string, error) { + res, err := client.subnetClient.Get(resourceGroupName, virtualNetworkName, subnetName, "") + if err != nil { + return nil, "", fmt.Errorf("Error issuing read request in subnetRuleStateRefreshFunc to Azure ARM for subnet '%s' (RG: '%s') (VNN: '%s'): %s", subnetName, resourceGroupName, virtualNetworkName, err) + } + + return res, *res.Properties.ProvisioningState, nil + } +} diff --git a/builtin/providers/azurerm/resource_arm_subnet_test.go b/builtin/providers/azurerm/resource_arm_subnet_test.go new file mode 100644 index 0000000000..b00ae1d568 --- /dev/null +++ b/builtin/providers/azurerm/resource_arm_subnet_test.go @@ -0,0 +1,104 @@ +package azurerm + +import ( + "fmt" + "net/http" + "testing" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAzureRMSubnet_basic(t *testing.T) { + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testCheckAzureRMSubnetDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAzureRMSubnet_basic, + Check: resource.ComposeTestCheckFunc( + testCheckAzureRMSubnetExists("azurerm_subnet.test"), + ), + }, + }, + }) +} + +func testCheckAzureRMSubnetExists(name string) resource.TestCheckFunc { + return func(s *terraform.State) error { + // Ensure we have enough information in state to look up in API + rs, ok := s.RootModule().Resources[name] + if !ok { + return fmt.Errorf("Not found: %s", name) + } + + name := rs.Primary.Attributes["name"] + vnetName := rs.Primary.Attributes["virtual_network_name"] + resourceGroup, hasResourceGroup := rs.Primary.Attributes["resource_group_name"] + if !hasResourceGroup { + return fmt.Errorf("Bad: no resource group found in state for subnet: %s", name) + } + + conn := testAccProvider.Meta().(*ArmClient).subnetClient + + resp, err := conn.Get(resourceGroup, vnetName, name, "") + if err != nil { + return fmt.Errorf("Bad: Get on subnetClient: %s", err) + } + + if resp.StatusCode == http.StatusNotFound { + return fmt.Errorf("Bad: Subnet %q (resource group: %q) does not exist", name, resourceGroup) + } + + return nil + } +} + +func testCheckAzureRMSubnetDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*ArmClient).subnetClient + + for _, rs := range s.RootModule().Resources { + if rs.Type != "azurerm_subnet" { + continue + } + + name := rs.Primary.Attributes["name"] + vnetName := 
rs.Primary.Attributes["virtual_network_name"] + resourceGroup := rs.Primary.Attributes["resource_group_name"] + + resp, err := conn.Get(resourceGroup, vnetName, name, "") + + if err != nil { + return nil + } + + if resp.StatusCode != http.StatusNotFound { + return fmt.Errorf("Subnet still exists:\n%#v", resp.Properties) + } + } + + return nil +} + +var testAccAzureRMSubnet_basic = ` +resource "azurerm_resource_group" "test" { + name = "acceptanceTestResourceGroup1" + location = "West US" +} + +resource "azurerm_virtual_network" "test" { + name = "acceptanceTestVirtualNetwork1" + address_space = ["10.0.0.0/16"] + location = "West US" + resource_group_name = "${azurerm_resource_group.test.name}" +} + +resource "azurerm_subnet" "test" { + name = "testsubnet" + resource_group_name = "${azurerm_resource_group.test.name}" + virtual_network_name = "${azurerm_virtual_network.test.name}" + address_prefix = "10.0.2.0/24" +} +` diff --git a/builtin/providers/azurerm/resource_arm_virtual_network.go b/builtin/providers/azurerm/resource_arm_virtual_network.go new file mode 100644 index 0000000000..c6b4699071 --- /dev/null +++ b/builtin/providers/azurerm/resource_arm_virtual_network.go @@ -0,0 +1,257 @@ +package azurerm + +import ( + "fmt" + "log" + "net/http" + "time" + + "github.com/Azure/azure-sdk-for-go/arm/network" + "github.com/hashicorp/terraform/helper/hashcode" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceArmVirtualNetwork() *schema.Resource { + return &schema.Resource{ + Create: resourceArmVirtualNetworkCreate, + Read: resourceArmVirtualNetworkRead, + Update: resourceArmVirtualNetworkCreate, + Delete: resourceArmVirtualNetworkDelete, + + Schema: map[string]*schema.Schema{ + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "address_space": &schema.Schema{ + Type: schema.TypeList, + Required: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + + "dns_servers": &schema.Schema{ + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + }, + + "subnet": &schema.Schema{ + Type: schema.TypeSet, + Optional: true, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + "address_prefix": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + "security_group": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + }, + }, + Set: resourceAzureSubnetHash, + }, + + "location": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + StateFunc: azureRMNormalizeLocation, + }, + + "resource_group_name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "tags": tagsSchema(), + }, + } +} + +func resourceArmVirtualNetworkCreate(d *schema.ResourceData, meta interface{}) error { + client := meta.(*ArmClient) + vnetClient := client.vnetClient + + log.Printf("[INFO] preparing arguments for Azure ARM virtual network creation.") + + name := d.Get("name").(string) + location := d.Get("location").(string) + resGroup := d.Get("resource_group_name").(string) + tags := d.Get("tags").(map[string]interface{}) + + vnet := network.VirtualNetwork{ + Name: &name, + Location: &location, + Properties: getVirtualNetworkProperties(d), + Tags: expandTags(tags), + } + + resp, err := vnetClient.CreateOrUpdate(resGroup, name, vnet) + if err != nil { + return err + } + + 
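// CreateOrUpdate returns the new resource, so its ID can be stored right away; + // provisioning completes asynchronously and is awaited by the poll below. +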
d.SetId(*resp.ID) + + log.Printf("[DEBUG] Waiting for Virtual Network (%s) to become available", name) + stateConf := &resource.StateChangeConf{ + Pending: []string{"Accepted", "Updating"}, + Target: []string{"Succeeded"}, + Refresh: virtualNetworkStateRefreshFunc(client, resGroup, name), + Timeout: 10 * time.Minute, + } + if _, err := stateConf.WaitForState(); err != nil { + return fmt.Errorf("Error waiting for Virtual Network (%s) to become available: %s", name, err) + } + + return resourceArmVirtualNetworkRead(d, meta) +} + +func resourceArmVirtualNetworkRead(d *schema.ResourceData, meta interface{}) error { + vnetClient := meta.(*ArmClient).vnetClient + + id, err := parseAzureResourceID(d.Id()) + if err != nil { + return err + } + resGroup := id.ResourceGroup + name := id.Path["virtualNetworks"] + + resp, err := vnetClient.Get(resGroup, name, "") + if resp.StatusCode == http.StatusNotFound { + d.SetId("") + return nil + } + if err != nil { + return fmt.Errorf("Error making Read request on Azure virtual network %s: %s", name, err) + } + vnet := *resp.Properties + + // update appropriate values + d.Set("address_space", vnet.AddressSpace.AddressPrefixes) + + subnets := &schema.Set{ + F: resourceAzureSubnetHash, + } + + // Guard against a nil Subnets collection in the response + if vnet.Subnets != nil { + for _, subnet := range *vnet.Subnets { + s := map[string]interface{}{} + + s["name"] = *subnet.Name + s["address_prefix"] = *subnet.Properties.AddressPrefix + if subnet.Properties.NetworkSecurityGroup != nil { + s["security_group"] = *subnet.Properties.NetworkSecurityGroup.ID + } + + subnets.Add(s) + } + } + d.Set("subnet", subnets) + + // DhcpOptions (and its DNS server list) may be absent from the response + dnses := []string{} + if vnet.DhcpOptions != nil && vnet.DhcpOptions.DNSServers != nil { + for _, dns := range *vnet.DhcpOptions.DNSServers { + dnses = append(dnses, dns) + } + } + d.Set("dns_servers", dnses) + + flattenAndSetTags(d, resp.Tags) + + return nil +} + +func resourceArmVirtualNetworkDelete(d *schema.ResourceData, meta interface{}) error { + vnetClient := meta.(*ArmClient).vnetClient + + id, err := parseAzureResourceID(d.Id()) + if err != nil { + return err + } + resGroup := id.ResourceGroup + name := id.Path["virtualNetworks"] + + _, err = vnetClient.Delete(resGroup, name) + + return err +} + +func getVirtualNetworkProperties(d *schema.ResourceData) *network.VirtualNetworkPropertiesFormat { + // first, get the address space prefixes: + prefixes := []string{} + for _, prefix := range d.Get("address_space").([]interface{}) { + prefixes = append(prefixes, prefix.(string)) + } + + // then the DNS servers: + dnses := []string{} + for _, dns := range d.Get("dns_servers").([]interface{}) { + dnses = append(dnses, dns.(string)) + } + + // then the subnets: + subnets := []network.Subnet{} + if subs := d.Get("subnet").(*schema.Set); subs.Len() > 0 { + for _, subnet := range subs.List() { + subnet := subnet.(map[string]interface{}) + + name := subnet["name"].(string) + prefix := subnet["address_prefix"].(string) + secGroup := subnet["security_group"].(string) + + var subnetObj network.Subnet + subnetObj.Name = &name + subnetObj.Properties = &network.SubnetPropertiesFormat{} + subnetObj.Properties.AddressPrefix = &prefix + + if secGroup != "" { + subnetObj.Properties.NetworkSecurityGroup = &network.SecurityGroup{ + ID: &secGroup, + } + } + + subnets = append(subnets, subnetObj) + } + } + + // finally, return the struct: + return &network.VirtualNetworkPropertiesFormat{ + AddressSpace: &network.AddressSpace{ + AddressPrefixes: &prefixes, + }, + DhcpOptions: &network.DhcpOptions{ + DNSServers: &dnses, + }, + Subnets: &subnets, + } +} + +func resourceAzureSubnetHash(v interface{}) int { + m := v.(map[string]interface{}) +
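// Key the set on name and address_prefix (plus security_group when set) so a + // change to any of these fields re-identifies the subnet. +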
subnet := m["name"].(string) + m["address_prefix"].(string) + if securityGroup, present := m["security_group"]; present { + subnet = subnet + securityGroup.(string) + } + return hashcode.String(subnet) +} + +func virtualNetworkStateRefreshFunc(client *ArmClient, resourceGroupName string, networkName string) resource.StateRefreshFunc { + return func() (interface{}, string, error) { + res, err := client.vnetClient.Get(resourceGroupName, networkName, "") + if err != nil { + return nil, "", fmt.Errorf("Error issuing read request in virtualNetworkStateRefreshFunc to Azure ARM for virtual network '%s' (RG: '%s'): %s", networkName, resourceGroupName, err) + } + + return res, *res.Properties.ProvisioningState, nil + } +} diff --git a/builtin/providers/azurerm/resource_arm_virtual_network_test.go b/builtin/providers/azurerm/resource_arm_virtual_network_test.go new file mode 100644 index 0000000000..9afce45283 --- /dev/null +++ b/builtin/providers/azurerm/resource_arm_virtual_network_test.go @@ -0,0 +1,180 @@ +package azurerm + +import ( + "fmt" + "testing" + + "github.com/Azure/azure-sdk-for-go/core/http" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAzureRMVirtualNetwork_basic(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testCheckAzureRMVirtualNetworkDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAzureRMVirtualNetwork_basic, + Check: resource.ComposeTestCheckFunc( + testCheckAzureRMVirtualNetworkExists("azurerm_virtual_network.test"), + ), + }, + }, + }) +} + +func TestAccAzureRMVirtualNetwork_withTags(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testCheckAzureRMVirtualNetworkDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAzureRMVirtualNetwork_withTags, + Check: resource.ComposeTestCheckFunc( + testCheckAzureRMVirtualNetworkExists("azurerm_virtual_network.test"), + resource.TestCheckResourceAttr( + "azurerm_virtual_network.test", "tags.#", "2"), + resource.TestCheckResourceAttr( + "azurerm_virtual_network.test", "tags.environment", "Production"), + resource.TestCheckResourceAttr( + "azurerm_virtual_network.test", "tags.cost_center", "MSFT"), + ), + }, + + resource.TestStep{ + Config: testAccAzureRMVirtualNetwork_withTagsUpdated, + Check: resource.ComposeTestCheckFunc( + testCheckAzureRMVirtualNetworkExists("azurerm_virtual_network.test"), + resource.TestCheckResourceAttr( + "azurerm_virtual_network.test", "tags.#", "1"), + resource.TestCheckResourceAttr( + "azurerm_virtual_network.test", "tags.environment", "staging"), + ), + }, + }, + }) +} + +func testCheckAzureRMVirtualNetworkExists(name string) resource.TestCheckFunc { + return func(s *terraform.State) error { + // Ensure we have enough information in state to look up in API + rs, ok := s.RootModule().Resources[name] + if !ok { + return fmt.Errorf("Not found: %s", name) + } + + virtualNetworkName := rs.Primary.Attributes["name"] + resourceGroup, hasResourceGroup := rs.Primary.Attributes["resource_group_name"] + if !hasResourceGroup { + return fmt.Errorf("Bad: no resource group found in state for virtual network: %s", virtualNetworkName) + } + + // Ensure resource group/virtual network combination exists in API + conn := testAccProvider.Meta().(*ArmClient).vnetClient + + resp, err := conn.Get(resourceGroup, 
virtualNetworkName, "") + if err != nil { + return fmt.Errorf("Bad: Get on vnetClient: %s", err) + } + + if resp.StatusCode == http.StatusNotFound { + return fmt.Errorf("Bad: Virtual Network %q (resource group: %q) does not exist", virtualNetworkName, resourceGroup) + } + + return nil + } +} + +func testCheckAzureRMVirtualNetworkDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*ArmClient).vnetClient + + for _, rs := range s.RootModule().Resources { + if rs.Type != "azurerm_virtual_network" { + continue + } + + name := rs.Primary.Attributes["name"] + resourceGroup := rs.Primary.Attributes["resource_group_name"] + + resp, err := conn.Get(resourceGroup, name, "") + + if err != nil { + return nil + } + + if resp.StatusCode != http.StatusNotFound { + return fmt.Errorf("Virtual Network still exists:\n%#v", resp.Properties) + } + } + + return nil +} + +var testAccAzureRMVirtualNetwork_basic = ` +resource "azurerm_resource_group" "test" { + name = "acceptanceTestResourceGroup1" + location = "West US" +} + +resource "azurerm_virtual_network" "test" { + name = "acceptanceTestVirtualNetwork1" + address_space = ["10.0.0.0/16"] + location = "West US" + resource_group_name = "${azurerm_resource_group.test.name}" + + subnet { + name = "subnet1" + address_prefix = "10.0.1.0/24" + } +} +` + +var testAccAzureRMVirtualNetwork_withTags = ` +resource "azurerm_resource_group" "test" { + name = "acceptanceTestResourceGroup1" + location = "West US" +} + +resource "azurerm_virtual_network" "test" { + name = "acceptanceTestVirtualNetwork1" + address_space = ["10.0.0.0/16"] + location = "West US" + resource_group_name = "${azurerm_resource_group.test.name}" + + subnet { + name = "subnet1" + address_prefix = "10.0.1.0/24" + } + + tags { + environment = "Production" + cost_center = "MSFT" + } +} +` + +var testAccAzureRMVirtualNetwork_withTagsUpdated = ` +resource "azurerm_resource_group" "test" { + name = "acceptanceTestResourceGroup1" + location = "West US" +} + +resource "azurerm_virtual_network" "test" { + name = "acceptanceTestVirtualNetwork1" + address_space = ["10.0.0.0/16"] + location = "West US" + resource_group_name = "${azurerm_resource_group.test.name}" + + subnet { + name = "subnet1" + address_prefix = "10.0.1.0/24" + } + + tags { + environment = "staging" + } +} +` diff --git a/builtin/providers/azurerm/resourceid.go b/builtin/providers/azurerm/resourceid.go new file mode 100644 index 0000000000..daf9b04b48 --- /dev/null +++ b/builtin/providers/azurerm/resourceid.go @@ -0,0 +1,90 @@ +package azurerm + +import ( + "fmt" + "net/url" + "strings" +) + +// ResourceID represents a parsed long-form Azure Resource Manager ID +// with the Subscription ID, Resource Group and the Provider as top- +// level fields, and other key-value pairs available via a map in the +// Path field. +type ResourceID struct { + SubscriptionID string + ResourceGroup string + Provider string + Path map[string]string +} + +// parseAzureResourceID converts a long-form Azure Resource Manager ID +// into a ResourceID. We make assumptions about the structure of URLs, +// which is obviously not good, but the best thing available given the +// SDK.
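+// For example, an (illustrative) ID of the form +// "/subscriptions/6d74bdd2-9f84-11e5-9bd9-7831c1c4c038/resourceGroups/group1/providers/Microsoft.Network/virtualNetworks/net1" +// parses to SubscriptionID "6d74bdd2-9f84-11e5-9bd9-7831c1c4c038", ResourceGroup +// "group1", Provider "Microsoft.Network", and Path{"virtualNetworks": "net1"}.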
+func parseAzureResourceID(id string) (*ResourceID, error) { + idURL, err := url.ParseRequestURI(id) + if err != nil { + return nil, fmt.Errorf("Cannot parse Azure Id: %s", err) + } + + path := idURL.Path + + path = strings.TrimSpace(path) + if strings.HasPrefix(path, "/") { + path = path[1:] + } + + if strings.HasSuffix(path, "/") { + path = path[:len(path)-1] + } + + components := strings.Split(path, "/") + + // We should have an even number of key-value pairs. + if len(components)%2 != 0 { + return nil, fmt.Errorf("The number of path segments is not divisible by 2 in %q", path) + } + + // Put the constituent key-value pairs into a map + componentMap := make(map[string]string, len(components)/2) + for current := 0; current < len(components); current += 2 { + key := components[current] + value := components[current+1] + + componentMap[key] = value + } + + // Build up a ResourceID from the map + idObj := &ResourceID{} + idObj.Path = componentMap + + if subscription, ok := componentMap["subscriptions"]; ok { + idObj.SubscriptionID = subscription + delete(componentMap, "subscriptions") + } else { + return nil, fmt.Errorf("No subscription ID found in: %q", path) + } + + if resourceGroup, ok := componentMap["resourceGroups"]; ok { + idObj.ResourceGroup = resourceGroup + delete(componentMap, "resourceGroups") + } else { + // Some Azure APIs are weird and provide things in lower case... + // However it's not clear whether the casing of other elements in the URI + // matter, so we explicitly look for that case here. + if resourceGroup, ok := componentMap["resourcegroups"]; ok { + idObj.ResourceGroup = resourceGroup + delete(componentMap, "resourcegroups") + } else { + return nil, fmt.Errorf("No resource group name found in: %q", path) + } + } + + // It is OK not to have a provider in the case of a resource group + if provider, ok := componentMap["providers"]; ok { + idObj.Provider = provider + delete(componentMap, "providers") + } + + return idObj, nil +} diff --git a/builtin/providers/azurerm/resourceid_test.go b/builtin/providers/azurerm/resourceid_test.go new file mode 100644 index 0000000000..c8e6e96a25 --- /dev/null +++ b/builtin/providers/azurerm/resourceid_test.go @@ -0,0 +1,119 @@ +package azurerm + +import ( + "reflect" + "testing" +) + +func TestParseAzureResourceID(t *testing.T) { + testCases := []struct { + id string + expectedResourceID *ResourceID + expectError bool + }{ + { + "random", + nil, + true, + }, + { + "/subscriptions/6d74bdd2-9f84-11e5-9bd9-7831c1c4c038", + nil, + true, + }, + { + "subscriptions/6d74bdd2-9f84-11e5-9bd9-7831c1c4c038", + nil, + true, + }, + { + "/subscriptions/6d74bdd2-9f84-11e5-9bd9-7831c1c4c038/resourceGroups/testGroup1", + &ResourceID{ + SubscriptionID: "6d74bdd2-9f84-11e5-9bd9-7831c1c4c038", + ResourceGroup: "testGroup1", + Provider: "", + Path: map[string]string{}, + }, + false, + }, + { + "/subscriptions/6d74bdd2-9f84-11e5-9bd9-7831c1c4c038/resourceGroups/testGroup1/providers/Microsoft.Network", + &ResourceID{ + SubscriptionID: "6d74bdd2-9f84-11e5-9bd9-7831c1c4c038", + ResourceGroup: "testGroup1", + Provider: "Microsoft.Network", + Path: map[string]string{}, + }, + false, + }, + { + // Missing leading / + "subscriptions/6d74bdd2-9f84-11e5-9bd9-7831c1c4c038/resourceGroups/testGroup1/providers/Microsoft.Network/virtualNetworks/virtualNetwork1/", + nil, + true, + }, + { + "/subscriptions/6d74bdd2-9f84-11e5-9bd9-7831c1c4c038/resourceGroups/testGroup1/providers/Microsoft.Network/virtualNetworks/virtualNetwork1", + &ResourceID{ + SubscriptionID: 
"6d74bdd2-9f84-11e5-9bd9-7831c1c4c038", + ResourceGroup: "testGroup1", + Provider: "Microsoft.Network", + Path: map[string]string{ + "virtualNetworks": "virtualNetwork1", + }, + }, + false, + }, + { + "/subscriptions/6d74bdd2-9f84-11e5-9bd9-7831c1c4c038/resourceGroups/testGroup1/providers/Microsoft.Network/virtualNetworks/virtualNetwork1?api-version=2006-01-02-preview", + &ResourceID{ + SubscriptionID: "6d74bdd2-9f84-11e5-9bd9-7831c1c4c038", + ResourceGroup: "testGroup1", + Provider: "Microsoft.Network", + Path: map[string]string{ + "virtualNetworks": "virtualNetwork1", + }, + }, + false, + }, + { + "/subscriptions/6d74bdd2-9f84-11e5-9bd9-7831c1c4c038/resourceGroups/testGroup1/providers/Microsoft.Network/virtualNetworks/virtualNetwork1/subnets/publicInstances1?api-version=2006-01-02-preview", + &ResourceID{ + SubscriptionID: "6d74bdd2-9f84-11e5-9bd9-7831c1c4c038", + ResourceGroup: "testGroup1", + Provider: "Microsoft.Network", + Path: map[string]string{ + "virtualNetworks": "virtualNetwork1", + "subnets": "publicInstances1", + }, + }, + false, + }, + { + "/subscriptions/34ca515c-4629-458e-bf7c-738d77e0d0ea/resourcegroups/acceptanceTestResourceGroup1/providers/Microsoft.Cdn/profiles/acceptanceTestCdnProfile1", + &ResourceID{ + SubscriptionID: "34ca515c-4629-458e-bf7c-738d77e0d0ea", + ResourceGroup: "acceptanceTestResourceGroup1", + Provider: "Microsoft.Cdn", + Path: map[string]string{ + "profiles": "acceptanceTestCdnProfile1", + }, + }, + false, + }, + } + + for _, test := range testCases { + parsed, err := parseAzureResourceID(test.id) + if test.expectError && err != nil { + continue + } + if err != nil { + t.Fatalf("Unexpected error: %s", err) + } + + if !reflect.DeepEqual(test.expectedResourceID, parsed) { + t.Fatalf("Unexpected resource ID:\nExpected: %+v\nGot: %+v\n", test.expectedResourceID, parsed) + } + } +} diff --git a/builtin/providers/azurerm/tags.go b/builtin/providers/azurerm/tags.go new file mode 100644 index 0000000000..60255a3231 --- /dev/null +++ b/builtin/providers/azurerm/tags.go @@ -0,0 +1,76 @@ +package azurerm + +import ( + "errors" + "fmt" + + "github.com/hashicorp/terraform/helper/schema" +) + +func tagsSchema() *schema.Schema { + return &schema.Schema{ + Type: schema.TypeMap, + Optional: true, + ValidateFunc: validateAzureRMTags, + } +} + +func tagValueToString(v interface{}) (string, error) { + switch value := v.(type) { + case string: + return value, nil + case int: + return fmt.Sprintf("%d", value), nil + default: + return "", fmt.Errorf("unknown tag type %T in tag value", value) + } +} + +func validateAzureRMTags(v interface{}, k string) (ws []string, es []error) { + tagsMap := v.(map[string]interface{}) + + if len(tagsMap) > 15 { + es = append(es, errors.New("a maximum of 15 tags can be applied to each ARM resource")) + } + + for k, v := range tagsMap { + if len(k) > 512 { + es = append(es, fmt.Errorf("the maximum length for a tag key is 512 characters: %q is %d characters", k, len(k))) + } + + value, err := tagValueToString(v) + if err != nil { + es = append(es, err) + } else if len(value) > 256 { + es = append(es, fmt.Errorf("the maximum length for a tag value is 256 characters: the value for %q is %d characters", k, len(value))) + } + } + + return +} + +func expandTags(tagsMap map[string]interface{}) *map[string]*string { + output := make(map[string]*string, len(tagsMap)) + + for i, v := range tagsMap { + //Validate should have ignored this error already + value, _ := tagValueToString(v) + output[i] = &value + } + + return &output +} + +func 
flattenAndSetTags(d *schema.ResourceData, tagsMap *map[string]*string) { + if tagsMap == nil { + return + } + + output := make(map[string]interface{}, len(*tagsMap)) + + for i, v := range *tagsMap { + output[i] = *v + } + + d.Set("tags", output) +} diff --git a/builtin/providers/azurerm/tags_test.go b/builtin/providers/azurerm/tags_test.go new file mode 100644 index 0000000000..fb75c04f06 --- /dev/null +++ b/builtin/providers/azurerm/tags_test.go @@ -0,0 +1,97 @@ +package azurerm + +import ( + "fmt" + "strings" + "testing" +) + +func TestValidateMaximumNumberOfARMTags(t *testing.T) { + tagsMap := make(map[string]interface{}) + for i := 0; i < 16; i++ { + tagsMap[fmt.Sprintf("key%d", i)] = fmt.Sprintf("value%d", i) + } + + _, es := validateAzureRMTags(tagsMap, "tags") + + if len(es) != 1 { + t.Fatal("Expected one validation error for too many tags") + } + + if !strings.Contains(es[0].Error(), "a maximum of 15 tags") { + t.Fatal("Wrong validation error message for too many tags") + } +} + +func TestValidateARMTagMaxKeyLength(t *testing.T) { + tooLongKey := strings.Repeat("long", 128) + "a" + tagsMap := make(map[string]interface{}) + tagsMap[tooLongKey] = "value" + + _, es := validateAzureRMTags(tagsMap, "tags") + if len(es) != 1 { + t.Fatal("Expected one validation error for a key which is > 512 chars") + } + + if !strings.Contains(es[0].Error(), "maximum length for a tag key") { + t.Fatal("Wrong validation error message for maximum tag key length") + } + + if !strings.Contains(es[0].Error(), tooLongKey) { + t.Fatal("Expected validation error to contain the key name") + } + + if !strings.Contains(es[0].Error(), "513") { + t.Fatal("Expected the length in the validation error for tag key") + } +} + +func TestValidateARMTagMaxValueLength(t *testing.T) { + tagsMap := make(map[string]interface{}) + tagsMap["toolong"] = strings.Repeat("long", 64) + "a" + + _, es := validateAzureRMTags(tagsMap, "tags") + if len(es) != 1 { + t.Fatal("Expected one validation error for a value which is > 256 chars") + } + + if !strings.Contains(es[0].Error(), "maximum length for a tag value") { + t.Fatal("Wrong validation error message for maximum tag value length") + } + + if !strings.Contains(es[0].Error(), "toolong") { + t.Fatal("Expected validation error to contain the key name") + } + + if !strings.Contains(es[0].Error(), "257") { + t.Fatal("Expected the length in the validation error for value") + } +} + +func TestExpandARMTags(t *testing.T) { + testData := make(map[string]interface{}) + testData["key1"] = "value1" + testData["key2"] = 21 + testData["key3"] = "value3" + + tempExpanded := expandTags(testData) + expanded := *tempExpanded + + if len(expanded) != 3 { + t.Fatalf("Expected 3 results in expanded tag map, got %d", len(expanded)) + } + + for k, v := range testData { + var strVal string + switch v.(type) { + case string: + strVal = v.(string) + case int: + strVal = fmt.Sprintf("%d", v.(int)) + } + + if *expanded[k] != strVal { + t.Fatalf("Expanded value %q incorrect: expected %q, got %q", k, strVal, *expanded[k]) + } + } +} diff --git a/builtin/providers/chef/provider.go b/builtin/providers/chef/provider.go new file mode 100644 index 0000000000..7a04b97758 --- /dev/null +++ b/builtin/providers/chef/provider.go @@ -0,0 +1,112 @@ +package chef + +import ( + "encoding/json" + "fmt" + "io/ioutil" + "os" + "strings" + "time" + + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/terraform" + + chefc "github.com/go-chef/chef" +) + +func Provider() terraform.ResourceProvider { + return
&schema.Provider{ + Schema: map[string]*schema.Schema{ + "server_url": &schema.Schema{ + Type: schema.TypeString, + Required: true, + DefaultFunc: schema.EnvDefaultFunc("CHEF_SERVER_URL", nil), + Description: "URL of the root of the target Chef server or organization.", + }, + "client_name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + DefaultFunc: schema.EnvDefaultFunc("CHEF_CLIENT_NAME", nil), + Description: "Name of a registered client within the Chef server.", + }, + "private_key_pem": &schema.Schema{ + Type: schema.TypeString, + Required: true, + DefaultFunc: providerPrivateKeyEnvDefault, + Description: "PEM-formatted private key for client authentication.", + }, + "allow_unverified_ssl": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Description: "If set, the Chef client will permit unverifiable SSL certificates.", + }, + }, + + ResourcesMap: map[string]*schema.Resource{ + //"chef_acl": resourceChefAcl(), + //"chef_client": resourceChefClient(), + //"chef_cookbook": resourceChefCookbook(), + "chef_data_bag": resourceChefDataBag(), + "chef_data_bag_item": resourceChefDataBagItem(), + "chef_environment": resourceChefEnvironment(), + "chef_node": resourceChefNode(), + "chef_role": resourceChefRole(), + }, + + ConfigureFunc: providerConfigure, + } +} + +func providerConfigure(d *schema.ResourceData) (interface{}, error) { + config := &chefc.Config{ + Name: d.Get("client_name").(string), + Key: d.Get("private_key_pem").(string), + BaseURL: d.Get("server_url").(string), + SkipSSL: d.Get("allow_unverified_ssl").(bool), + Timeout: 10 * time.Second, + } + + return chefc.NewClient(config) +} + +func providerPrivateKeyEnvDefault() (interface{}, error) { + if fn := os.Getenv("CHEF_PRIVATE_KEY_FILE"); fn != "" { + contents, err := ioutil.ReadFile(fn) + if err != nil { + return nil, err + } + return string(contents), nil + } + + return nil, nil +} + +func jsonStateFunc(value interface{}) string { + // Parse and re-stringify the JSON to make sure it's always kept + // in a normalized form. + in, ok := value.(string) + if !ok { + return "null" + } + var tmp map[string]interface{} + + // Assuming the value must be valid JSON since it passed okay through + // our prepareDataBagItemContent function earlier. + json.Unmarshal([]byte(in), &tmp) + + jsonValue, _ := json.Marshal(&tmp) + return string(jsonValue) +} + +func runListEntryStateFunc(value interface{}) string { + // Recipes in run lists can either be naked, like "foo", or can + // be explicitly qualified as "recipe[foo]". Whichever form we use, + // the server will always normalize to the explicit form, + // so we'll normalize too and then we won't generate unnecessary + // diffs when we refresh. + in := value.(string) + if !strings.Contains(in, "[") { + return fmt.Sprintf("recipe[%s]", in) + } + return in +} diff --git a/builtin/providers/chef/provider_test.go b/builtin/providers/chef/provider_test.go new file mode 100644 index 0000000000..1d12945f46 --- /dev/null +++ b/builtin/providers/chef/provider_test.go @@ -0,0 +1,62 @@ +package chef + +import ( + "os" + "testing" + + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/terraform" +) + +// To run these acceptance tests, you will need access to a Chef server. +// An easy way to get one is to sign up for a hosted Chef server account +// at https://manage.chef.io/signup , after which your base URL will +// be something like https://api.opscode.com/organizations/example/ . 
+// You will also need to create a "client" and write its private key to +// a file somewhere. +// +// You can then set the following environment variables to make these +// tests work: +// CHEF_SERVER_URL to the base URL as described above. +// CHEF_CLIENT_NAME to the name of the client object you created. +// CHEF_PRIVATE_KEY_FILE to the path to the private key file you created. +// +// You will probably need to edit the global permissions on your Chef +// Server account to allow this client (or all clients, if you're lazy) +// to have both List and Create access on all types of object: +// https://manage.chef.io/organizations/saymedia/global_permissions +// +// With all of that done, you can run like this: +// make testacc TEST=./builtin/providers/chef + +var testAccProviders map[string]terraform.ResourceProvider +var testAccProvider *schema.Provider + +func init() { + testAccProvider = Provider().(*schema.Provider) + testAccProviders = map[string]terraform.ResourceProvider{ + "chef": testAccProvider, + } +} + +func TestProvider(t *testing.T) { + if err := Provider().(*schema.Provider).InternalValidate(); err != nil { + t.Fatalf("err: %s", err) + } +} + +func TestProvider_impl(t *testing.T) { + var _ terraform.ResourceProvider = Provider() +} + +func testAccPreCheck(t *testing.T) { + if v := os.Getenv("CHEF_SERVER_URL"); v == "" { + t.Fatal("CHEF_SERVER_URL must be set for acceptance tests") + } + if v := os.Getenv("CHEF_CLIENT_NAME"); v == "" { + t.Fatal("CHEF_CLIENT_NAME must be set for acceptance tests") + } + if v := os.Getenv("CHEF_PRIVATE_KEY_FILE"); v == "" { + t.Fatal("CHEF_PRIVATE_KEY_FILE must be set for acceptance tests") + } +} diff --git a/builtin/providers/chef/resource_data_bag.go b/builtin/providers/chef/resource_data_bag.go new file mode 100644 index 0000000000..a9c08748cd --- /dev/null +++ b/builtin/providers/chef/resource_data_bag.go @@ -0,0 +1,77 @@ +package chef + +import ( + "github.com/hashicorp/terraform/helper/schema" + + chefc "github.com/go-chef/chef" +) + +func resourceChefDataBag() *schema.Resource { + return &schema.Resource{ + Create: CreateDataBag, + Read: ReadDataBag, + Delete: DeleteDataBag, + + Schema: map[string]*schema.Schema{ + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "api_uri": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + }, + } +} + +func CreateDataBag(d *schema.ResourceData, meta interface{}) error { + client := meta.(*chefc.Client) + + dataBag := &chefc.DataBag{ + Name: d.Get("name").(string), + } + + result, err := client.DataBags.Create(dataBag) + if err != nil { + return err + } + + d.SetId(dataBag.Name) + d.Set("api_uri", result.URI) + return nil +} + +func ReadDataBag(d *schema.ResourceData, meta interface{}) error { + client := meta.(*chefc.Client) + + // The Chef API provides no API to read a data bag's metadata, + // but we can try to read its items and use that as a proxy for + // whether it still exists. 
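+ // (A deleted bag is expected to surface as a *chefc.ErrorResponse; the 404 + // check below treats that as the bag being gone.)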
+ + name := d.Id() + + _, err := client.DataBags.ListItems(name) + if err != nil { + if errRes, ok := err.(*chefc.ErrorResponse); ok { + if errRes.Response.StatusCode == 404 { + d.SetId("") + return nil + } + } + } + return err +} + +func DeleteDataBag(d *schema.ResourceData, meta interface{}) error { + client := meta.(*chefc.Client) + + name := d.Id() + + _, err := client.DataBags.Delete(name) + if err == nil { + d.SetId("") + } + return err +} diff --git a/builtin/providers/chef/resource_data_bag_item.go b/builtin/providers/chef/resource_data_bag_item.go new file mode 100644 index 0000000000..ff6f7ac673 --- /dev/null +++ b/builtin/providers/chef/resource_data_bag_item.go @@ -0,0 +1,120 @@ +package chef + +import ( + "encoding/json" + "fmt" + + "github.com/hashicorp/terraform/helper/schema" + + chefc "github.com/go-chef/chef" +) + +func resourceChefDataBagItem() *schema.Resource { + return &schema.Resource{ + Create: CreateDataBagItem, + Read: ReadDataBagItem, + Delete: DeleteDataBagItem, + + Schema: map[string]*schema.Schema{ + "data_bag_name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "content_json": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + StateFunc: jsonStateFunc, + }, + "id": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + }, + } +} + +func CreateDataBagItem(d *schema.ResourceData, meta interface{}) error { + client := meta.(*chefc.Client) + + dataBagName := d.Get("data_bag_name").(string) + itemId, itemContent, err := prepareDataBagItemContent(d.Get("content_json").(string)) + if err != nil { + return err + } + + err = client.DataBags.CreateItem(dataBagName, itemContent) + if err != nil { + return err + } + + d.SetId(itemId) + d.Set("id", itemId) + return nil +} + +func ReadDataBagItem(d *schema.ResourceData, meta interface{}) error { + client := meta.(*chefc.Client) + + // Unlike the data bag itself, an item can be read back directly by + // ID, so fetch it and use the result to decide whether it still exists.
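+ // The fetched content is re-serialized with json.Marshal below so the stored + // content_json stays in the normalized form produced by jsonStateFunc.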
+ + itemId := d.Id() + dataBagName := d.Get("data_bag_name").(string) + + value, err := client.DataBags.GetItem(dataBagName, itemId) + if err != nil { + if errRes, ok := err.(*chefc.ErrorResponse); ok { + if errRes.Response.StatusCode == 404 { + d.SetId("") + return nil + } + } + // Return any other error (including non-404 API errors) as-is. + return err + } + + jsonContent, err := json.Marshal(value) + if err != nil { + return err + } + + d.Set("content_json", string(jsonContent)) + + return nil +} + +func DeleteDataBagItem(d *schema.ResourceData, meta interface{}) error { + client := meta.(*chefc.Client) + + itemId := d.Id() + dataBagName := d.Get("data_bag_name").(string) + + err := client.DataBags.DeleteItem(dataBagName, itemId) + if err == nil { + d.SetId("") + d.Set("id", "") + } + return err +} + +func prepareDataBagItemContent(contentJson string) (string, interface{}, error) { + var value map[string]interface{} + err := json.Unmarshal([]byte(contentJson), &value) + if err != nil { + return "", nil, err + } + + var itemId string + if itemIdI, ok := value["id"]; ok { + itemId, _ = itemIdI.(string) + } + + if itemId == "" { + return "", nil, fmt.Errorf("content_json must have id attribute, set to a string") + } + + return itemId, value, nil +} diff --git a/builtin/providers/chef/resource_data_bag_item_test.go b/builtin/providers/chef/resource_data_bag_item_test.go new file mode 100644 index 0000000000..9630d8b6c8 --- /dev/null +++ b/builtin/providers/chef/resource_data_bag_item_test.go @@ -0,0 +1,95 @@ +package chef + +import ( + "fmt" + "reflect" + "testing" + + chefc "github.com/go-chef/chef" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccDataBagItem_basic(t *testing.T) { + var dataBagItemName string + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccDataBagItemCheckDestroy(dataBagItemName), + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccDataBagItemConfig_basic, + Check: testAccDataBagItemCheck( + "chef_data_bag_item.test", &dataBagItemName, + ), + }, + }, + }) +} + +func testAccDataBagItemCheck(rn string, name *string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[rn] + if !ok { + return fmt.Errorf("resource not found: %s", rn) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("data bag item id not set") + } + + client := testAccProvider.Meta().(*chefc.Client) + content, err := client.DataBags.GetItem("terraform-acc-test-bag-item-basic", rs.Primary.ID) + if err != nil { + return fmt.Errorf("error getting data bag item: %s", err) + } + + expectedContent := map[string]interface{}{ + "id": "terraform_acc_test", + "something_else": true, + } + if !reflect.DeepEqual(content, expectedContent) { + return fmt.Errorf("wrong content: expected %#v, got %#v", expectedContent, content) + } + + if expected := "terraform_acc_test"; rs.Primary.Attributes["id"] != expected { + return fmt.Errorf("wrong id; expected %#v, got %#v", expected, rs.Primary.Attributes["id"]) + } + + *name = rs.Primary.ID + + return nil + } +} + +func testAccDataBagItemCheckDestroy(name string) resource.TestCheckFunc { + return func(s *terraform.State) error { + client := testAccProvider.Meta().(*chefc.Client) + _, err := client.DataBags.GetItem("terraform-acc-test-bag-item-basic", name) + if err == nil { + return fmt.Errorf("data bag item still exists") + } + if _, ok := err.(*chefc.ErrorResponse); err != nil && !ok { + return fmt.Errorf("got 
something other than an HTTP error (%v) when getting data bag item", err) + } + + return nil + } +} + +const testAccDataBagItemConfig_basic = ` +resource "chef_data_bag" "test" { + name = "terraform-acc-test-bag-item-basic" +} +resource "chef_data_bag_item" "test" { + data_bag_name = "terraform-acc-test-bag-item-basic" + depends_on = ["chef_data_bag.test"] + content_json = < 0 { - + if nrs := d.Get("rule").(*schema.Set); nrs.Len() > 0 { // Create an empty schema.Set to hold all rules - rules := &schema.Set{ - F: resourceCloudStackEgressFirewallRuleHash, - } + rules := resourceCloudStackEgressFirewall().Schema["rule"].ZeroValue().(*schema.Set) - for _, rule := range rs.List() { - // Create a single rule - err := resourceCloudStackEgressFirewallCreateRule(d, meta, rule.(map[string]interface{})) + err := createEgressFirewallRules(d, meta, rules, nrs) - // We need to update this first to preserve the correct state - rules.Add(rule) - d.Set("rule", rules) + // We need to update this first to preserve the correct state + d.Set("rule", rules) - if err != nil { - return err - } + if err != nil { + return err } } return resourceCloudStackEgressFirewallRead(d, meta) } -func resourceCloudStackEgressFirewallCreateRule( - d *schema.ResourceData, meta interface{}, rule map[string]interface{}) error { +func createEgressFirewallRules( + d *schema.ResourceData, + meta interface{}, + rules *schema.Set, + nrs *schema.Set) error { + var errs *multierror.Error + + var wg sync.WaitGroup + wg.Add(nrs.Len()) + + sem := make(chan struct{}, d.Get("parallelism").(int)) + for _, rule := range nrs.List() { + // Put in a tiny sleep here to avoid DoS'ing the API + time.Sleep(500 * time.Millisecond) + + go func(rule map[string]interface{}) { + defer wg.Done() + sem <- struct{}{} + + // Create a single rule + err := createEgressFirewallRule(d, meta, rule) + + // If we have at least one UUID, we need to save the rule + if len(rule["uuids"].(map[string]interface{})) > 0 { + rules.Add(rule) + } + + if err != nil { + errs = multierror.Append(errs, err) + } + + <-sem + }(rule.(map[string]interface{})) + } + + wg.Wait() + + return errs.ErrorOrNil() +} +func createEgressFirewallRule( + d *schema.ResourceData, + meta interface{}, + rule map[string]interface{}) error { cs := meta.(*cloudstack.CloudStackClient) uuids := rule["uuids"].(map[string]interface{}) @@ -137,7 +181,7 @@ func resourceCloudStackEgressFirewallCreateRule( p := cs.Firewall.NewCreateEgressFirewallRuleParams(d.Id(), rule["protocol"].(string)) // Set the CIDR list - p.SetCidrlist([]string{rule["source_cidr"].(string)}) + p.SetCidrlist(retrieveCidrList(rule)) // If the protocol is ICMP set the needed ICMP parameters if rule["protocol"].(string) == "icmp" { @@ -157,15 +201,16 @@ func resourceCloudStackEgressFirewallCreateRule( if ps := rule["ports"].(*schema.Set); ps.Len() > 0 { // Create an empty schema.Set to hold all processed ports - ports := &schema.Set{ - F: func(v interface{}) int { - return hashcode.String(v.(string)) - }, - } + ports := &schema.Set{F: schema.HashString} for _, port := range ps.List() { - re := regexp.MustCompile(`^(\d+)(?:-(\d+))?$`) - m := re.FindStringSubmatch(port.(string)) + if _, ok := uuids[port.(string)]; ok { + ports.Add(port) + rule["ports"] = ports + continue + } + + m := splitPorts.FindStringSubmatch(port.(string)) startPort, err := strconv.Atoi(m[1]) if err != nil { @@ -220,9 +265,7 @@ func resourceCloudStackEgressFirewallRead(d *schema.ResourceData, meta interface } // Create an empty schema.Set to hold all rules - rules := 
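The create path above fans rule creation out across goroutines: a sync.WaitGroup tracks completion, a buffered channel sized by the new `parallelism` argument caps concurrency, a short sleep spaces out the API calls, and go-multierror collects per-rule failures. A distilled, standalone version of that pattern follows, with one deliberate addition: a mutex around Append, since a shared *multierror.Error is not safe for concurrent use on its own.

```go
package main

import (
	"fmt"
	"sync"
	"time"

	"github.com/hashicorp/go-multierror"
)

// forEachLimited applies f to every item with at most `limit` calls in
// flight, mirroring the WaitGroup + semaphore-channel shape used above.
func forEachLimited(items []string, limit int, f func(string) error) error {
	var (
		errs *multierror.Error
		mu   sync.Mutex
		wg   sync.WaitGroup
	)
	wg.Add(len(items))

	sem := make(chan struct{}, limit)
	for _, item := range items {
		// Space out the spawns a little, as the resource code does.
		time.Sleep(500 * time.Millisecond)

		go func(item string) {
			defer wg.Done()
			sem <- struct{}{}        // acquire a slot
			defer func() { <-sem }() // release it

			if err := f(item); err != nil {
				mu.Lock()
				errs = multierror.Append(errs, err)
				mu.Unlock()
			}
		}(item)
	}

	wg.Wait()
	return errs.ErrorOrNil()
}

func main() {
	err := forEachLimited([]string{"rule-a", "rule-b", "rule-c"}, 2, func(s string) error {
		fmt.Println("processing", s)
		return nil
	})
	fmt.Println("done:", err)
}
```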
&schema.Set{ - F: resourceCloudStackEgressFirewallRuleHash, - } + rules := resourceCloudStackEgressFirewall().Schema["rule"].ZeroValue().(*schema.Set) // Read all rules that are configured if rs := d.Get("rule").(*schema.Set); rs.Len() > 0 { @@ -247,10 +290,10 @@ func resourceCloudStackEgressFirewallRead(d *schema.ResourceData, meta interface delete(ruleMap, id.(string)) // Update the values - rule["source_cidr"] = r.Cidrlist rule["protocol"] = r.Protocol rule["icmp_type"] = r.Icmptype rule["icmp_code"] = r.Icmpcode + setCidrList(rule, r.Cidrlist) rules.Add(rule) } @@ -259,11 +302,7 @@ func resourceCloudStackEgressFirewallRead(d *schema.ResourceData, meta interface if ps := rule["ports"].(*schema.Set); ps.Len() > 0 { // Create an empty schema.Set to hold all ports - ports := &schema.Set{ - F: func(v interface{}) int { - return hashcode.String(v.(string)) - }, - } + ports := &schema.Set{F: schema.HashString} // Loop through all ports and retrieve their info for _, port := range ps.List() { @@ -283,8 +322,8 @@ func resourceCloudStackEgressFirewallRead(d *schema.ResourceData, meta interface delete(ruleMap, id.(string)) // Update the values - rule["source_cidr"] = r.Cidrlist rule["protocol"] = r.Protocol + setCidrList(rule, r.Cidrlist) ports.Add(port) } @@ -301,21 +340,22 @@ func resourceCloudStackEgressFirewallRead(d *schema.ResourceData, meta interface // If this is a managed firewall, add all unknown rules into a single dummy rule managed := d.Get("managed").(bool) if managed && len(ruleMap) > 0 { - // Add all UUIDs to a uuids map - uuids := make(map[string]interface{}, len(ruleMap)) for uuid := range ruleMap { - uuids[uuid] = uuid - } + // We need to create and add a dummy value to a schema.Set as the + // cidr_list is a required field and thus needs a value + cidrs := &schema.Set{F: schema.HashString} + cidrs.Add(uuid) - // Make a dummy rule to hold all unknown UUIDs - rule := map[string]interface{}{ - "source_cidr": "N/A", - "protocol": "N/A", - "uuids": ruleMap, - } + // Make a dummy rule to hold the unknown UUID + rule := map[string]interface{}{ + "cidr_list": cidrs, + "protocol": uuid, + "uuids": map[string]interface{}{uuid: uuid}, + } - // Add the dummy rule to the rules set - rules.Add(rule) + // Add the dummy rule to the rules set + rules.Add(rule) + } } if rules.Len() > 0 { @@ -339,27 +379,29 @@ func resourceCloudStackEgressFirewallUpdate(d *schema.ResourceData, meta interfa ors := o.(*schema.Set).Difference(n.(*schema.Set)) nrs := n.(*schema.Set).Difference(o.(*schema.Set)) - // Now first loop through all the old rules and delete any obsolete ones - for _, rule := range ors.List() { - // Delete the rule as it no longer exists in the config - err := resourceCloudStackEgressFirewallDeleteRule(d, meta, rule.(map[string]interface{})) + // We need to start with a rule set containing all the rules we + // already have and want to keep. 
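Both Read paths above hand the API's CIDR string to setCidrList, and the create paths read it back with retrieveCidrList; neither helper appears in this hunk. Given the deprecation of `source_cidr` in favour of `cidr_list`, plausible shapes for them (a sketch under that assumption, not the actual helpers) are:

```go
package cloudstack

import (
	"strings"

	"github.com/hashicorp/terraform/helper/schema"
)

// Sketch: return the CIDRs of a rule, honouring the deprecated
// source_cidr field when the configuration still uses it.
func retrieveCidrList(rule map[string]interface{}) []string {
	if cidr, ok := rule["source_cidr"].(string); ok && cidr != "" {
		return []string{cidr}
	}

	var cidrList []string
	for _, cidr := range rule["cidr_list"].(*schema.Set).List() {
		cidrList = append(cidrList, cidr.(string))
	}
	return cidrList
}

// Sketch: write the (comma-separated) CIDR list returned by the API
// back to whichever field the configuration used.
func setCidrList(rule map[string]interface{}, cidrList string) {
	if cidr, ok := rule["source_cidr"].(string); ok && cidr != "" {
		rule["source_cidr"] = cidrList
		return
	}

	cidrs := &schema.Set{F: schema.HashString}
	for _, cidr := range strings.Split(cidrList, ",") {
		cidrs.Add(strings.TrimSpace(cidr))
	}
	rule["cidr_list"] = cidrs
}
```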
Any rules that are not deleted + correctly and any newly created rules will be added to this + set to make sure we end up in a consistent state + rules := o.(*schema.Set).Intersection(n.(*schema.Set)) + + // First loop through all the old rules and delete them + if ors.Len() > 0 { + err := deleteEgressFirewallRules(d, meta, rules, ors) + + // We need to update this first to preserve the correct state + d.Set("rule", rules) + if err != nil { return err } } - // Make sure we save the state of the currently configured rules - rules := o.(*schema.Set).Intersection(n.(*schema.Set)) - d.Set("rule", rules) - - // Then loop through all the currently configured rules and create the new ones - for _, rule := range nrs.List() { - // When successfully deleted, re-create it again if it still exists - err := resourceCloudStackEgressFirewallCreateRule( - d, meta, rule.(map[string]interface{})) + // Then loop through all the new rules and create them + if nrs.Len() > 0 { + err := createEgressFirewallRules(d, meta, rules, nrs) // We need to update this first to preserve the correct state - rules.Add(rule) d.Set("rule", rules) if err != nil { @@ -372,26 +414,69 @@ func resourceCloudStackEgressFirewallUpdate(d *schema.ResourceData, meta interfa } func resourceCloudStackEgressFirewallDelete(d *schema.ResourceData, meta interface{}) error { + // Create an empty rule set to hold all rules that were + // not deleted correctly + rules := resourceCloudStackEgressFirewall().Schema["rule"].ZeroValue().(*schema.Set) + // Delete all rules - if rs := d.Get("rule").(*schema.Set); rs.Len() > 0 { - for _, rule := range rs.List() { - // Delete a single rule - err := resourceCloudStackEgressFirewallDeleteRule(d, meta, rule.(map[string]interface{})) + if ors := d.Get("rule").(*schema.Set); ors.Len() > 0 { + err := deleteEgressFirewallRules(d, meta, rules, ors) - // We need to update this first to preserve the correct state - d.Set("rule", rs) + // We need to update this first to preserve the correct state + d.Set("rule", rules) - if err != nil { - return err - } + if err != nil { + return err } } return nil } -func resourceCloudStackEgressFirewallDeleteRule( - d *schema.ResourceData, meta interface{}, rule map[string]interface{}) error { +func deleteEgressFirewallRules( + d *schema.ResourceData, + meta interface{}, + rules *schema.Set, + ors *schema.Set) error { + var errs *multierror.Error + + var wg sync.WaitGroup + wg.Add(ors.Len()) + + sem := make(chan struct{}, d.Get("parallelism").(int)) + for _, rule := range ors.List() { + // Put a sleep here to avoid DoS'ing the API + time.Sleep(500 * time.Millisecond) + + go func(rule map[string]interface{}) { + defer wg.Done() + sem <- struct{}{} + + // Delete a single rule + err := deleteEgressFirewallRule(d, meta, rule) + + // If we have at least one UUID, we need to save the rule + if len(rule["uuids"].(map[string]interface{})) > 0 { + rules.Add(rule) + } + + if err != nil { + errs = multierror.Append(errs, err) + } + + <-sem + }(rule.(map[string]interface{})) + } + + wg.Wait() + + return errs.ErrorOrNil() +} + +func deleteEgressFirewallRule( + d *schema.ResourceData, + meta interface{}, + rule map[string]interface{}) error { cs := meta.(*cloudstack.CloudStackClient) uuids := rule["uuids"].(map[string]interface{}) @@ -420,47 +505,12 @@ func resourceCloudStackEgressFirewallDeleteRule( // Delete the UUID of this rule delete(uuids, k) + rule["uuids"] = uuids } - // Update the UUIDs - rule["uuids"] = uuids - return nil } -func resourceCloudStackEgressFirewallRuleHash(v 
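The Update logic above is pure schema.Set algebra: rules present in both the old and new sets are kept untouched, the old-minus-new difference is deleted, the new-minus-old difference is created, and `rules` is re-saved to state after each phase. In toy form:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/terraform/helper/schema"
)

// Toy illustration of the set algebra driving the Update path:
// keep = o intersect n, delete = o minus n, create = n minus o.
func main() {
	o := schema.NewSet(schema.HashString, []interface{}{"a", "b", "c"})
	n := schema.NewSet(schema.HashString, []interface{}{"b", "c", "d"})

	keep := o.Intersection(n) // "b", "c": left untouched
	del := o.Difference(n)    // "a": delete this rule
	add := n.Difference(o)    // "d": create this rule

	fmt.Println(keep.List(), del.List(), add.List())
}
```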
interface{}) int { - var buf bytes.Buffer - m := v.(map[string]interface{}) - buf.WriteString(fmt.Sprintf( - "%s-%s-", m["source_cidr"].(string), m["protocol"].(string))) - - if v, ok := m["icmp_type"]; ok { - buf.WriteString(fmt.Sprintf("%d-", v.(int))) - } - - if v, ok := m["icmp_code"]; ok { - buf.WriteString(fmt.Sprintf("%d-", v.(int))) - } - - // We need to make sure to sort the strings below so that we always - // generate the same hash code no matter what is in the set. - if v, ok := m["ports"]; ok { - vs := v.(*schema.Set).List() - s := make([]string, len(vs)) - - for i, raw := range vs { - s[i] = raw.(string) - } - sort.Strings(s) - - for _, v := range s { - buf.WriteString(fmt.Sprintf("%s-", v)) - } - } - - return hashcode.String(buf.String()) -} - func verifyEgressFirewallParams(d *schema.ResourceData) error { managed := d.Get("managed").(bool) _, rules := d.GetOk("rule") @@ -474,10 +524,21 @@ func verifyEgressFirewallParams(d *schema.ResourceData) error { } func verifyEgressFirewallRuleParams(d *schema.ResourceData, rule map[string]interface{}) error { + cidrList := rule["cidr_list"].(*schema.Set) + sourceCidr := rule["source_cidr"].(string) + if cidrList.Len() == 0 && sourceCidr == "" { + return fmt.Errorf( + "Parameter cidr_list is a required parameter") + } + if cidrList.Len() > 0 && sourceCidr != "" { + return fmt.Errorf( + "Parameter source_cidr is deprecated and cannot be used together with cidr_list") + } + protocol := rule["protocol"].(string) if protocol != "tcp" && protocol != "udp" && protocol != "icmp" { return fmt.Errorf( - "%s is not a valid protocol. Valid options are 'tcp', 'udp' and 'icmp'", protocol) + "%q is not a valid protocol. Valid options are 'tcp', 'udp' and 'icmp'", protocol) } if protocol == "icmp" { @@ -490,9 +551,17 @@ func verifyEgressFirewallRuleParams(d *schema.ResourceData, rule map[string]inte "Parameter icmp_code is a required parameter when using protocol 'icmp'") } } else { - if _, ok := rule["ports"]; !ok { + if ports, ok := rule["ports"].(*schema.Set); ok { + for _, port := range ports.List() { + m := splitPorts.FindStringSubmatch(port.(string)) + if m == nil { + return fmt.Errorf( + "%q is not a valid port value. 
Valid options are '80' or '80-90'", port.(string)) + } + } + } else { return fmt.Errorf( - "Parameter port is a required parameter when using protocol 'tcp' or 'udp'") + "Parameter ports is a required parameter when *not* using protocol 'icmp'") } } diff --git a/builtin/providers/cloudstack/resource_cloudstack_egress_firewall_test.go b/builtin/providers/cloudstack/resource_cloudstack_egress_firewall_test.go index dbca8c32b4..07f4e0d8a2 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_egress_firewall_test.go +++ b/builtin/providers/cloudstack/resource_cloudstack_egress_firewall_test.go @@ -2,19 +2,15 @@ package cloudstack import ( "fmt" - "strconv" "strings" "testing" "github.com/hashicorp/terraform/helper/resource" - "github.com/hashicorp/terraform/helper/schema" "github.com/hashicorp/terraform/terraform" "github.com/xanzy/go-cloudstack/cloudstack" ) func TestAccCloudStackEgressFirewall_basic(t *testing.T) { - hash := makeTestCloudStackEgressFirewallRuleHash([]interface{}{"1000-2000", "80"}) - resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, @@ -26,18 +22,26 @@ func TestAccCloudStackEgressFirewall_basic(t *testing.T) { testAccCheckCloudStackEgressFirewallRulesExist("cloudstack_egress_firewall.foo"), resource.TestCheckResourceAttr( "cloudstack_egress_firewall.foo", "network", CLOUDSTACK_NETWORK_1), + resource.TestCheckResourceAttr( + "cloudstack_egress_firewall.foo", "rule.#", "2"), resource.TestCheckResourceAttr( "cloudstack_egress_firewall.foo", - "rule."+hash+".source_cidr", - CLOUDSTACK_NETWORK_1_IPADDRESS+"/32"), + "rule.1081385056.cidr_list.3378711023", + CLOUDSTACK_NETWORK_1_IPADDRESS1+"/32"), resource.TestCheckResourceAttr( - "cloudstack_egress_firewall.foo", "rule."+hash+".protocol", "tcp"), + "cloudstack_egress_firewall.foo", "rule.1081385056.protocol", "tcp"), resource.TestCheckResourceAttr( - "cloudstack_egress_firewall.foo", "rule."+hash+".ports.#", "2"), + "cloudstack_egress_firewall.foo", "rule.1081385056.ports.32925333", "8080"), resource.TestCheckResourceAttr( - "cloudstack_egress_firewall.foo", "rule."+hash+".ports.1209010669", "1000-2000"), + "cloudstack_egress_firewall.foo", + "rule.1129999216.source_cidr", + CLOUDSTACK_NETWORK_1_IPADDRESS1+"/32"), resource.TestCheckResourceAttr( - "cloudstack_egress_firewall.foo", "rule."+hash+".ports.1889509032", "80"), + "cloudstack_egress_firewall.foo", "rule.1129999216.protocol", "tcp"), + resource.TestCheckResourceAttr( + "cloudstack_egress_firewall.foo", "rule.1129999216.ports.1209010669", "1000-2000"), + resource.TestCheckResourceAttr( + "cloudstack_egress_firewall.foo", "rule.1129999216.ports.1889509032", "80"), ), }, }, @@ -45,9 +49,6 @@ func TestAccCloudStackEgressFirewall_basic(t *testing.T) { } func TestAccCloudStackEgressFirewall_update(t *testing.T) { - hash1 := makeTestCloudStackEgressFirewallRuleHash([]interface{}{"1000-2000", "80"}) - hash2 := makeTestCloudStackEgressFirewallRuleHash([]interface{}{"443"}) - resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, @@ -60,19 +61,25 @@ func TestAccCloudStackEgressFirewall_update(t *testing.T) { resource.TestCheckResourceAttr( "cloudstack_egress_firewall.foo", "network", CLOUDSTACK_NETWORK_1), resource.TestCheckResourceAttr( - "cloudstack_egress_firewall.foo", "rule.#", "1"), + "cloudstack_egress_firewall.foo", "rule.#", "2"), resource.TestCheckResourceAttr( "cloudstack_egress_firewall.foo", - "rule."+hash1+".source_cidr", - 
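`splitPorts` is used throughout these validations but defined elsewhere in the package; judging by the inline regexp it replaces above, it is presumably the same pattern hoisted to a package-level variable so it is compiled once:

```go
package cloudstack

import "regexp"

// Presumed definition (matches the inline regexp removed above):
// a single port like "80", or a range like "80-90", with the start
// port captured in m[1] and the optional end port in m[2].
var splitPorts = regexp.MustCompile(`^(\d+)(?:-(\d+))?$`)
```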
CLOUDSTACK_NETWORK_1_IPADDRESS+"/32"), + "rule.1081385056.cidr_list.3378711023", + CLOUDSTACK_NETWORK_1_IPADDRESS1+"/32"), resource.TestCheckResourceAttr( - "cloudstack_egress_firewall.foo", "rule."+hash1+".protocol", "tcp"), + "cloudstack_egress_firewall.foo", "rule.1081385056.protocol", "tcp"), resource.TestCheckResourceAttr( - "cloudstack_egress_firewall.foo", "rule."+hash1+".ports.#", "2"), + "cloudstack_egress_firewall.foo", "rule.1081385056.ports.32925333", "8080"), resource.TestCheckResourceAttr( - "cloudstack_egress_firewall.foo", "rule."+hash1+".ports.1209010669", "1000-2000"), + "cloudstack_egress_firewall.foo", + "rule.1129999216.source_cidr", + CLOUDSTACK_NETWORK_1_IPADDRESS1+"/32"), resource.TestCheckResourceAttr( - "cloudstack_egress_firewall.foo", "rule."+hash1+".ports.1889509032", "80"), + "cloudstack_egress_firewall.foo", "rule.1129999216.protocol", "tcp"), + resource.TestCheckResourceAttr( + "cloudstack_egress_firewall.foo", "rule.1129999216.ports.1209010669", "1000-2000"), + resource.TestCheckResourceAttr( + "cloudstack_egress_firewall.foo", "rule.1129999216.ports.1889509032", "80"), ), }, @@ -83,29 +90,37 @@ func TestAccCloudStackEgressFirewall_update(t *testing.T) { resource.TestCheckResourceAttr( "cloudstack_egress_firewall.foo", "network", CLOUDSTACK_NETWORK_1), resource.TestCheckResourceAttr( - "cloudstack_egress_firewall.foo", "rule.#", "2"), + "cloudstack_egress_firewall.foo", "rule.#", "3"), resource.TestCheckResourceAttr( "cloudstack_egress_firewall.foo", - "rule."+hash1+".source_cidr", - CLOUDSTACK_NETWORK_1_IPADDRESS+"/32"), - resource.TestCheckResourceAttr( - "cloudstack_egress_firewall.foo", "rule."+hash1+".protocol", "tcp"), - resource.TestCheckResourceAttr( - "cloudstack_egress_firewall.foo", "rule."+hash1+".ports.#", "2"), - resource.TestCheckResourceAttr( - "cloudstack_egress_firewall.foo", "rule."+hash1+".ports.1209010669", "1000-2000"), - resource.TestCheckResourceAttr( - "cloudstack_egress_firewall.foo", "rule."+hash1+".ports.1889509032", "80"), + "rule.59731059.cidr_list.1910468234", + CLOUDSTACK_NETWORK_1_IPADDRESS2+"/32"), resource.TestCheckResourceAttr( "cloudstack_egress_firewall.foo", - "rule."+hash2+".source_cidr", - CLOUDSTACK_NETWORK_1_IPADDRESS+"/32"), + "rule.59731059.cidr_list.3378711023", + CLOUDSTACK_NETWORK_1_IPADDRESS1+"/32"), resource.TestCheckResourceAttr( - "cloudstack_egress_firewall.foo", "rule."+hash2+".protocol", "tcp"), + "cloudstack_egress_firewall.foo", "rule.59731059.protocol", "tcp"), resource.TestCheckResourceAttr( - "cloudstack_egress_firewall.foo", "rule."+hash2+".ports.#", "1"), + "cloudstack_egress_firewall.foo", "rule.59731059.ports.32925333", "8080"), resource.TestCheckResourceAttr( - "cloudstack_egress_firewall.foo", "rule."+hash2+".ports.3638101695", "443"), + "cloudstack_egress_firewall.foo", + "rule.1052669680.source_cidr", + CLOUDSTACK_NETWORK_1_IPADDRESS1+"/32"), + resource.TestCheckResourceAttr( + "cloudstack_egress_firewall.foo", "rule.1052669680.protocol", "tcp"), + resource.TestCheckResourceAttr( + "cloudstack_egress_firewall.foo", "rule.1052669680.ports.3638101695", "443"), + resource.TestCheckResourceAttr( + "cloudstack_egress_firewall.foo", + "rule.1129999216.source_cidr", + CLOUDSTACK_NETWORK_1_IPADDRESS1+"/32"), + resource.TestCheckResourceAttr( + "cloudstack_egress_firewall.foo", "rule.1129999216.protocol", "tcp"), + resource.TestCheckResourceAttr( + "cloudstack_egress_firewall.foo", "rule.1129999216.ports.1209010669", "1000-2000"), + resource.TestCheckResourceAttr( + "cloudstack_egress_firewall.foo", 
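The opaque indexes in these checks, e.g. `rule.1129999216.ports.1889509032`, are not ordinals but the schema.Set hash codes under which set elements are stored in state; for the ports set that is simply the string hash of the port value. A quick way to see where such a key comes from (the rule-level index is the hash of the whole rule map and is not reproduced here):

```go
package main

import (
	"fmt"

	"github.com/hashicorp/terraform/helper/schema"
)

// Print the state key under which the port "80" would be stored
// inside a rule's ports set; <rulehash> stands in for the hash of
// the enclosing rule map.
func main() {
	fmt.Printf("rule.<rulehash>.ports.%d => %q\n", schema.HashString("80"), "80")
}
```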
"rule.1129999216.ports.1889509032", "80"), ), }, }, @@ -171,20 +186,16 @@ func testAccCheckCloudStackEgressFirewallDestroy(s *terraform.State) error { return nil } -func makeTestCloudStackEgressFirewallRuleHash(ports []interface{}) string { - return strconv.Itoa(resourceCloudStackEgressFirewallRuleHash(map[string]interface{}{ - "source_cidr": CLOUDSTACK_NETWORK_1_IPADDRESS + "/32", - "protocol": "tcp", - "ports": schema.NewSet(schema.HashString, ports), - "icmp_type": 0, - "icmp_code": 0, - })) -} - var testAccCloudStackEgressFirewall_basic = fmt.Sprintf(` resource "cloudstack_egress_firewall" "foo" { network = "%s" + rule { + cidr_list = ["%s/32"] + protocol = "tcp" + ports = ["8080"] + } + rule { source_cidr = "%s/32" protocol = "tcp" @@ -192,12 +203,19 @@ resource "cloudstack_egress_firewall" "foo" { } }`, CLOUDSTACK_NETWORK_1, - CLOUDSTACK_NETWORK_1_IPADDRESS) + CLOUDSTACK_NETWORK_1_IPADDRESS1, + CLOUDSTACK_NETWORK_1_IPADDRESS1) var testAccCloudStackEgressFirewall_update = fmt.Sprintf(` resource "cloudstack_egress_firewall" "foo" { network = "%s" + rule { + cidr_list = ["%s/32", "%s/32"] + protocol = "tcp" + ports = ["8080"] + } + rule { source_cidr = "%s/32" protocol = "tcp" @@ -211,5 +229,7 @@ resource "cloudstack_egress_firewall" "foo" { } }`, CLOUDSTACK_NETWORK_1, - CLOUDSTACK_NETWORK_1_IPADDRESS, - CLOUDSTACK_NETWORK_1_IPADDRESS) + CLOUDSTACK_NETWORK_1_IPADDRESS1, + CLOUDSTACK_NETWORK_1_IPADDRESS2, + CLOUDSTACK_NETWORK_1_IPADDRESS1, + CLOUDSTACK_NETWORK_1_IPADDRESS1) diff --git a/builtin/providers/cloudstack/resource_cloudstack_firewall.go b/builtin/providers/cloudstack/resource_cloudstack_firewall.go index 3bcced02e2..cfe3531f71 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_firewall.go +++ b/builtin/providers/cloudstack/resource_cloudstack_firewall.go @@ -1,14 +1,13 @@ package cloudstack import ( - "bytes" "fmt" - "regexp" - "sort" "strconv" "strings" + "sync" + "time" - "github.com/hashicorp/terraform/helper/hashcode" + "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform/helper/schema" "github.com/xanzy/go-cloudstack/cloudstack" ) @@ -38,9 +37,17 @@ func resourceCloudStackFirewall() *schema.Resource { Optional: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ + "cidr_list": &schema.Schema{ + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + }, + "source_cidr": &schema.Schema{ - Type: schema.TypeString, - Required: true, + Type: schema.TypeString, + Optional: true, + Deprecated: "Please use the `cidr_list` field instead", }, "protocol": &schema.Schema{ @@ -64,9 +71,7 @@ func resourceCloudStackFirewall() *schema.Resource { Type: schema.TypeSet, Optional: true, Elem: &schema.Schema{Type: schema.TypeString}, - Set: func(v interface{}) int { - return hashcode.String(v.(string)) - }, + Set: schema.HashString, }, "uuids": &schema.Schema{ @@ -75,7 +80,12 @@ func resourceCloudStackFirewall() *schema.Resource { }, }, }, - Set: resourceCloudStackFirewallRuleHash, + }, + + "parallelism": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + Default: 2, }, }, } @@ -99,32 +109,66 @@ func resourceCloudStackFirewallCreate(d *schema.ResourceData, meta interface{}) d.SetId(ipaddressid) // Create all rules that are configured - if rs := d.Get("rule").(*schema.Set); rs.Len() > 0 { - + if nrs := d.Get("rule").(*schema.Set); nrs.Len() > 0 { // Create an empty schema.Set to hold all rules - rules := &schema.Set{ - F: resourceCloudStackFirewallRuleHash, - } + rules := 
resourceCloudStackFirewall().Schema["rule"].ZeroValue().(*schema.Set) - for _, rule := range rs.List() { - // Create a single rule - err := resourceCloudStackFirewallCreateRule(d, meta, rule.(map[string]interface{})) + err := createFirewallRules(d, meta, rules, nrs) - // We need to update this first to preserve the correct state - rules.Add(rule) - d.Set("rule", rules) + // We need to update this first to preserve the correct state + d.Set("rule", rules) - if err != nil { - return err - } + if err != nil { + return err } } return resourceCloudStackFirewallRead(d, meta) } +func createFirewallRules( + d *schema.ResourceData, + meta interface{}, + rules *schema.Set, + nrs *schema.Set) error { + var errs *multierror.Error -func resourceCloudStackFirewallCreateRule( - d *schema.ResourceData, meta interface{}, rule map[string]interface{}) error { + var wg sync.WaitGroup + wg.Add(nrs.Len()) + + sem := make(chan struct{}, d.Get("parallelism").(int)) + for _, rule := range nrs.List() { + // Put in a tiny sleep here to avoid DoS'ing the API + time.Sleep(500 * time.Millisecond) + + go func(rule map[string]interface{}) { + defer wg.Done() + sem <- struct{}{} + + // Create a single rule + err := createFirewallRule(d, meta, rule) + + // If we have at least one UUID, we need to save the rule + if len(rule["uuids"].(map[string]interface{})) > 0 { + rules.Add(rule) + } + + if err != nil { + errs = multierror.Append(errs, err) + } + + <-sem + }(rule.(map[string]interface{})) + } + + wg.Wait() + + return errs.ErrorOrNil() +} + +func createFirewallRule( + d *schema.ResourceData, + meta interface{}, + rule map[string]interface{}) error { cs := meta.(*cloudstack.CloudStackClient) uuids := rule["uuids"].(map[string]interface{}) @@ -137,7 +181,7 @@ func resourceCloudStackFirewallCreateRule( p := cs.Firewall.NewCreateFirewallRuleParams(d.Id(), rule["protocol"].(string)) // Set the CIDR list - p.SetCidrlist([]string{rule["source_cidr"].(string)}) + p.SetCidrlist(retrieveCidrList(rule)) // If the protocol is ICMP set the needed ICMP parameters if rule["protocol"].(string) == "icmp" { @@ -148,6 +192,7 @@ func resourceCloudStackFirewallCreateRule( if err != nil { return err } + uuids["icmp"] = r.Id rule["uuids"] = uuids } @@ -157,15 +202,16 @@ func resourceCloudStackFirewallCreateRule( if ps := rule["ports"].(*schema.Set); ps.Len() > 0 { // Create an empty schema.Set to hold all processed ports - ports := &schema.Set{ - F: func(v interface{}) int { - return hashcode.String(v.(string)) - }, - } + ports := &schema.Set{F: schema.HashString} for _, port := range ps.List() { - re := regexp.MustCompile(`^(\d+)(?:-(\d+))?$`) - m := re.FindStringSubmatch(port.(string)) + if _, ok := uuids[port.(string)]; ok { + ports.Add(port) + rule["ports"] = ports + continue + } + + m := splitPorts.FindStringSubmatch(port.(string)) startPort, err := strconv.Atoi(m[1]) if err != nil { @@ -220,9 +266,7 @@ func resourceCloudStackFirewallRead(d *schema.ResourceData, meta interface{}) er } // Create an empty schema.Set to hold all rules - rules := &schema.Set{ - F: resourceCloudStackFirewallRuleHash, - } + rules := resourceCloudStackFirewall().Schema["rule"].ZeroValue().(*schema.Set) // Read all rules that are configured if rs := d.Get("rule").(*schema.Set); rs.Len() > 0 { @@ -247,10 +291,10 @@ func resourceCloudStackFirewallRead(d *schema.ResourceData, meta interface{}) er delete(ruleMap, id.(string)) // Update the values - rule["source_cidr"] = r.Cidrlist rule["protocol"] = r.Protocol rule["icmp_type"] = r.Icmptype rule["icmp_code"] = 
r.Icmpcode + setCidrList(rule, r.Cidrlist) rules.Add(rule) } @@ -259,11 +303,7 @@ func resourceCloudStackFirewallRead(d *schema.ResourceData, meta interface{}) er if ps := rule["ports"].(*schema.Set); ps.Len() > 0 { // Create an empty schema.Set to hold all ports - ports := &schema.Set{ - F: func(v interface{}) int { - return hashcode.String(v.(string)) - }, - } + ports := &schema.Set{F: schema.HashString} // Loop through all ports and retrieve their info for _, port := range ps.List() { @@ -283,8 +323,8 @@ func resourceCloudStackFirewallRead(d *schema.ResourceData, meta interface{}) er delete(ruleMap, id.(string)) // Update the values - rule["source_cidr"] = r.Cidrlist rule["protocol"] = r.Protocol + setCidrList(rule, r.Cidrlist) ports.Add(port) } @@ -301,21 +341,22 @@ func resourceCloudStackFirewallRead(d *schema.ResourceData, meta interface{}) er // If this is a managed firewall, add all unknown rules into a single dummy rule managed := d.Get("managed").(bool) if managed && len(ruleMap) > 0 { - // Add all UUIDs to a uuids map - uuids := make(map[string]interface{}, len(ruleMap)) for uuid := range ruleMap { - uuids[uuid] = uuid - } + // We need to create and add a dummy value to a schema.Set as the + // cidr_list is a required field and thus needs a value + cidrs := &schema.Set{F: schema.HashString} + cidrs.Add(uuid) - // Make a dummy rule to hold all unknown UUIDs - rule := map[string]interface{}{ - "source_cidr": "N/A", - "protocol": "N/A", - "uuids": uuids, - } + // Make a dummy rule to hold the unknown UUID + rule := map[string]interface{}{ + "cidr_list": cidrs, + "protocol": uuid, + "uuids": map[string]interface{}{uuid: uuid}, + } - // Add the dummy rule to the rules set - rules.Add(rule) + // Add the dummy rule to the rules set + rules.Add(rule) + } } if rules.Len() > 0 { @@ -339,27 +380,29 @@ func resourceCloudStackFirewallUpdate(d *schema.ResourceData, meta interface{}) ors := o.(*schema.Set).Difference(n.(*schema.Set)) nrs := n.(*schema.Set).Difference(o.(*schema.Set)) - // Now first loop through all the old rules and delete any obsolete ones - for _, rule := range ors.List() { - // Delete the rule as it no longer exists in the config - err := resourceCloudStackFirewallDeleteRule(d, meta, rule.(map[string]interface{})) + // We need to start with a rule set containing all the rules we + // already have and want to keep. 
Any rules that are not deleted + correctly and any newly created rules will be added to this + set to make sure we end up in a consistent state + rules := o.(*schema.Set).Intersection(n.(*schema.Set)) + + // First loop through all the old rules and delete them + if ors.Len() > 0 { + err := deleteFirewallRules(d, meta, rules, ors) + + // We need to update this first to preserve the correct state + d.Set("rule", rules) + if err != nil { return err } } - // Make sure we save the state of the currently configured rules - rules := o.(*schema.Set).Intersection(n.(*schema.Set)) - d.Set("rule", rules) - - // Then loop through all the currently configured rules and create the new ones - for _, rule := range nrs.List() { - // When successfully deleted, re-create it again if it still exists - err := resourceCloudStackFirewallCreateRule( - d, meta, rule.(map[string]interface{})) + // Then loop through all the new rules and create them + if nrs.Len() > 0 { + err := createFirewallRules(d, meta, rules, nrs) // We need to update this first to preserve the correct state - rules.Add(rule) d.Set("rule", rules) if err != nil { @@ -372,26 +415,69 @@ func resourceCloudStackFirewallUpdate(d *schema.ResourceData, meta interface{}) } func resourceCloudStackFirewallDelete(d *schema.ResourceData, meta interface{}) error { + // Create an empty rule set to hold all rules that were + // not deleted correctly + rules := resourceCloudStackFirewall().Schema["rule"].ZeroValue().(*schema.Set) + // Delete all rules - if rs := d.Get("rule").(*schema.Set); rs.Len() > 0 { - for _, rule := range rs.List() { - // Delete a single rule - err := resourceCloudStackFirewallDeleteRule(d, meta, rule.(map[string]interface{})) + if ors := d.Get("rule").(*schema.Set); ors.Len() > 0 { + err := deleteFirewallRules(d, meta, rules, ors) - // We need to update this first to preserve the correct state - d.Set("rule", rs) + // We need to update this first to preserve the correct state + d.Set("rule", rules) - if err != nil { - return err - } + if err != nil { + return err } } return nil } -func resourceCloudStackFirewallDeleteRule( - d *schema.ResourceData, meta interface{}, rule map[string]interface{}) error { +func deleteFirewallRules( + d *schema.ResourceData, + meta interface{}, + rules *schema.Set, + ors *schema.Set) error { + var errs *multierror.Error + + var wg sync.WaitGroup + wg.Add(ors.Len()) + + sem := make(chan struct{}, d.Get("parallelism").(int)) + for _, rule := range ors.List() { + // Put a sleep here to avoid DoS'ing the API + time.Sleep(500 * time.Millisecond) + + go func(rule map[string]interface{}) { + defer wg.Done() + sem <- struct{}{} + + // Delete a single rule + err := deleteFirewallRule(d, meta, rule) + + // If we have at least one UUID, we need to save the rule + if len(rule["uuids"].(map[string]interface{})) > 0 { + rules.Add(rule) + } + + if err != nil { + errs = multierror.Append(errs, err) + } + + <-sem + }(rule.(map[string]interface{})) + } + + wg.Wait() + + return errs.ErrorOrNil() +} + +func deleteFirewallRule( + d *schema.ResourceData, + meta interface{}, + rule map[string]interface{}) error { cs := meta.(*cloudstack.CloudStackClient) uuids := rule["uuids"].(map[string]interface{}) @@ -420,47 +506,12 @@ func resourceCloudStackFirewallDeleteRule( // Delete the UUID of this rule delete(uuids, k) + rule["uuids"] = uuids } - // Update the UUIDs - rule["uuids"] = uuids - return nil } -func resourceCloudStackFirewallRuleHash(v interface{}) int { - var buf bytes.Buffer - m := v.(map[string]interface{}) - 
buf.WriteString(fmt.Sprintf( - "%s-%s-", m["source_cidr"].(string), m["protocol"].(string))) - - if v, ok := m["icmp_type"]; ok { - buf.WriteString(fmt.Sprintf("%d-", v.(int))) - } - - if v, ok := m["icmp_code"]; ok { - buf.WriteString(fmt.Sprintf("%d-", v.(int))) - } - - // We need to make sure to sort the strings below so that we always - // generate the same hash code no matter what is in the set. - if v, ok := m["ports"]; ok { - vs := v.(*schema.Set).List() - s := make([]string, len(vs)) - - for i, raw := range vs { - s[i] = raw.(string) - } - sort.Strings(s) - - for _, v := range s { - buf.WriteString(fmt.Sprintf("%s-", v)) - } - } - - return hashcode.String(buf.String()) -} - func verifyFirewallParams(d *schema.ResourceData) error { managed := d.Get("managed").(bool) _, rules := d.GetOk("rule") @@ -474,10 +525,21 @@ func verifyFirewallParams(d *schema.ResourceData) error { } func verifyFirewallRuleParams(d *schema.ResourceData, rule map[string]interface{}) error { + cidrList := rule["cidr_list"].(*schema.Set) + sourceCidr := rule["source_cidr"].(string) + if cidrList.Len() == 0 && sourceCidr == "" { + return fmt.Errorf( + "Parameter cidr_list is a required parameter") + } + if cidrList.Len() > 0 && sourceCidr != "" { + return fmt.Errorf( + "Parameter source_cidr is deprecated and cannot be used together with cidr_list") + } + protocol := rule["protocol"].(string) if protocol != "tcp" && protocol != "udp" && protocol != "icmp" { return fmt.Errorf( - "%s is not a valid protocol. Valid options are 'tcp', 'udp' and 'icmp'", protocol) + "%q is not a valid protocol. Valid options are 'tcp', 'udp' and 'icmp'", protocol) } if protocol == "icmp" { @@ -490,9 +552,17 @@ func verifyFirewallRuleParams(d *schema.ResourceData, rule map[string]interface{ "Parameter icmp_code is a required parameter when using protocol 'icmp'") } } else { - if _, ok := rule["ports"]; !ok { + if ports, ok := rule["ports"].(*schema.Set); ok { + for _, port := range ports.List() { + m := splitPorts.FindStringSubmatch(port.(string)) + if m == nil { + return fmt.Errorf( + "%q is not a valid port value. 
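The validation above now enforces that exactly one of `cidr_list` and the deprecated `source_cidr` is set, and that every tcp/udp port entry matches splitPorts. A table-driven sketch of the accepted and rejected combinations (assuming, as in the hunk, that the *schema.ResourceData argument is unused for these particular checks):

```go
package cloudstack

import (
	"testing"

	"github.com/hashicorp/terraform/helper/schema"
)

// Sketch of a table-driven test for verifyFirewallRuleParams.
func TestVerifyFirewallRuleParams(t *testing.T) {
	cases := []struct {
		cidrList []interface{}
		srcCidr  string
		ports    []interface{}
		valid    bool
	}{
		{[]interface{}{"10.0.0.0/24"}, "", []interface{}{"80"}, true},             // cidr_list only
		{nil, "10.0.0.0/24", []interface{}{"80-90"}, true},                        // deprecated field only
		{[]interface{}{"10.0.0.0/24"}, "10.0.0.0/24", []interface{}{"80"}, false}, // both set
		{nil, "", []interface{}{"80"}, false},                                     // neither set
		{[]interface{}{"10.0.0.0/24"}, "", []interface{}{"http"}, false},          // invalid port value
	}

	for i, c := range cases {
		rule := map[string]interface{}{
			"cidr_list":   schema.NewSet(schema.HashString, c.cidrList),
			"source_cidr": c.srcCidr,
			"protocol":    "tcp",
			"ports":       schema.NewSet(schema.HashString, c.ports),
		}
		if err := verifyFirewallRuleParams(nil, rule); (err == nil) != c.valid {
			t.Errorf("case %d: unexpected result: %v", i, err)
		}
	}
}
```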
Valid options are '80' or '80-90'", port.(string)) + } + } + } else { return fmt.Errorf( - "Parameter port is a required parameter when using protocol 'tcp' or 'udp'") + "Parameter ports is a required parameter when *not* using protocol 'icmp'") } } diff --git a/builtin/providers/cloudstack/resource_cloudstack_firewall_test.go b/builtin/providers/cloudstack/resource_cloudstack_firewall_test.go index a86cdc3b2b..d93a2c73eb 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_firewall_test.go +++ b/builtin/providers/cloudstack/resource_cloudstack_firewall_test.go @@ -23,15 +23,21 @@ func TestAccCloudStackFirewall_basic(t *testing.T) { resource.TestCheckResourceAttr( "cloudstack_firewall.foo", "ipaddress", CLOUDSTACK_PUBLIC_IPADDRESS), resource.TestCheckResourceAttr( - "cloudstack_firewall.foo", "rule.1702320581.source_cidr", "10.0.0.0/24"), + "cloudstack_firewall.foo", "rule.#", "2"), resource.TestCheckResourceAttr( - "cloudstack_firewall.foo", "rule.1702320581.protocol", "tcp"), + "cloudstack_firewall.foo", "rule.60926170.cidr_list.3482919157", "10.0.0.0/24"), resource.TestCheckResourceAttr( - "cloudstack_firewall.foo", "rule.1702320581.ports.#", "2"), + "cloudstack_firewall.foo", "rule.60926170.protocol", "tcp"), resource.TestCheckResourceAttr( - "cloudstack_firewall.foo", "rule.1702320581.ports.1209010669", "1000-2000"), + "cloudstack_firewall.foo", "rule.60926170.ports.32925333", "8080"), resource.TestCheckResourceAttr( - "cloudstack_firewall.foo", "rule.1702320581.ports.1889509032", "80"), + "cloudstack_firewall.foo", "rule.716592205.source_cidr", "10.0.0.0/24"), + resource.TestCheckResourceAttr( + "cloudstack_firewall.foo", "rule.716592205.protocol", "tcp"), + resource.TestCheckResourceAttr( + "cloudstack_firewall.foo", "rule.716592205.ports.1209010669", "1000-2000"), + resource.TestCheckResourceAttr( + "cloudstack_firewall.foo", "rule.716592205.ports.1889509032", "80"), ), }, }, @@ -51,17 +57,21 @@ func TestAccCloudStackFirewall_update(t *testing.T) { resource.TestCheckResourceAttr( "cloudstack_firewall.foo", "ipaddress", CLOUDSTACK_PUBLIC_IPADDRESS), resource.TestCheckResourceAttr( - "cloudstack_firewall.foo", "rule.#", "1"), + "cloudstack_firewall.foo", "rule.#", "2"), resource.TestCheckResourceAttr( - "cloudstack_firewall.foo", "rule.1702320581.source_cidr", "10.0.0.0/24"), + "cloudstack_firewall.foo", "rule.60926170.cidr_list.3482919157", "10.0.0.0/24"), resource.TestCheckResourceAttr( - "cloudstack_firewall.foo", "rule.1702320581.protocol", "tcp"), + "cloudstack_firewall.foo", "rule.60926170.protocol", "tcp"), resource.TestCheckResourceAttr( - "cloudstack_firewall.foo", "rule.1702320581.ports.#", "2"), + "cloudstack_firewall.foo", "rule.60926170.ports.32925333", "8080"), resource.TestCheckResourceAttr( - "cloudstack_firewall.foo", "rule.1702320581.ports.1209010669", "1000-2000"), + "cloudstack_firewall.foo", "rule.716592205.source_cidr", "10.0.0.0/24"), resource.TestCheckResourceAttr( - "cloudstack_firewall.foo", "rule.1702320581.ports.1889509032", "80"), + "cloudstack_firewall.foo", "rule.716592205.protocol", "tcp"), + resource.TestCheckResourceAttr( + "cloudstack_firewall.foo", "rule.716592205.ports.1209010669", "1000-2000"), + resource.TestCheckResourceAttr( + "cloudstack_firewall.foo", "rule.716592205.ports.1889509032", "80"), ), }, @@ -72,27 +82,31 @@ func TestAccCloudStackFirewall_update(t *testing.T) { resource.TestCheckResourceAttr( "cloudstack_firewall.foo", "ipaddress", CLOUDSTACK_PUBLIC_IPADDRESS), resource.TestCheckResourceAttr( - "cloudstack_firewall.foo", 
"rule.#", "2"), + "cloudstack_firewall.foo", "rule.#", "3"), resource.TestCheckResourceAttr( - "cloudstack_firewall.foo", "rule.1702320581.source_cidr", "10.0.0.0/24"), + "cloudstack_firewall.foo", "rule.2207610982.cidr_list.80081744", "10.0.1.0/24"), resource.TestCheckResourceAttr( - "cloudstack_firewall.foo", "rule.1702320581.protocol", "tcp"), + "cloudstack_firewall.foo", "rule.2207610982.cidr_list.3482919157", "10.0.0.0/24"), resource.TestCheckResourceAttr( - "cloudstack_firewall.foo", "rule.1702320581.ports.#", "2"), + "cloudstack_firewall.foo", "rule.2207610982.protocol", "tcp"), resource.TestCheckResourceAttr( - "cloudstack_firewall.foo", "rule.1702320581.ports.1209010669", "1000-2000"), + "cloudstack_firewall.foo", "rule.2207610982.ports.32925333", "8080"), resource.TestCheckResourceAttr( - "cloudstack_firewall.foo", "rule.1702320581.ports.1889509032", "80"), + "cloudstack_firewall.foo", "rule.716592205.source_cidr", "10.0.0.0/24"), resource.TestCheckResourceAttr( - "cloudstack_firewall.foo", "rule.3779782959.source_cidr", "172.16.100.0/24"), + "cloudstack_firewall.foo", "rule.716592205.protocol", "tcp"), resource.TestCheckResourceAttr( - "cloudstack_firewall.foo", "rule.3779782959.protocol", "tcp"), + "cloudstack_firewall.foo", "rule.716592205.ports.1209010669", "1000-2000"), resource.TestCheckResourceAttr( - "cloudstack_firewall.foo", "rule.3779782959.ports.#", "2"), + "cloudstack_firewall.foo", "rule.716592205.ports.1889509032", "80"), resource.TestCheckResourceAttr( - "cloudstack_firewall.foo", "rule.3779782959.ports.1889509032", "80"), + "cloudstack_firewall.foo", "rule.4449157.source_cidr", "172.16.100.0/24"), resource.TestCheckResourceAttr( - "cloudstack_firewall.foo", "rule.3779782959.ports.3638101695", "443"), + "cloudstack_firewall.foo", "rule.4449157.protocol", "tcp"), + resource.TestCheckResourceAttr( + "cloudstack_firewall.foo", "rule.4449157.ports.1889509032", "80"), + resource.TestCheckResourceAttr( + "cloudstack_firewall.foo", "rule.4449157.ports.3638101695", "443"), ), }, }, @@ -162,6 +176,12 @@ var testAccCloudStackFirewall_basic = fmt.Sprintf(` resource "cloudstack_firewall" "foo" { ipaddress = "%s" + rule { + cidr_list = ["10.0.0.0/24"] + protocol = "tcp" + ports = ["8080"] + } + rule { source_cidr = "10.0.0.0/24" protocol = "tcp" @@ -173,6 +193,12 @@ var testAccCloudStackFirewall_update = fmt.Sprintf(` resource "cloudstack_firewall" "foo" { ipaddress = "%s" + rule { + cidr_list = ["10.0.0.0/24", "10.0.1.0/24"] + protocol = "tcp" + ports = ["8080"] + } + rule { source_cidr = "10.0.0.0/24" protocol = "tcp" diff --git a/builtin/providers/cloudstack/resource_cloudstack_instance_test.go b/builtin/providers/cloudstack/resource_cloudstack_instance_test.go index b0f241a678..ced4514be5 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_instance_test.go +++ b/builtin/providers/cloudstack/resource_cloudstack_instance_test.go @@ -80,7 +80,7 @@ func TestAccCloudStackInstance_fixedIP(t *testing.T) { testAccCheckCloudStackInstanceExists( "cloudstack_instance.foobar", &instance), resource.TestCheckResourceAttr( - "cloudstack_instance.foobar", "ipaddress", CLOUDSTACK_NETWORK_1_IPADDRESS), + "cloudstack_instance.foobar", "ipaddress", CLOUDSTACK_NETWORK_1_IPADDRESS1), ), }, }, @@ -268,7 +268,7 @@ resource "cloudstack_instance" "foobar" { }`, CLOUDSTACK_SERVICE_OFFERING_1, CLOUDSTACK_NETWORK_1, - CLOUDSTACK_NETWORK_1_IPADDRESS, + CLOUDSTACK_NETWORK_1_IPADDRESS1, CLOUDSTACK_TEMPLATE, CLOUDSTACK_ZONE) @@ -290,7 +290,7 @@ resource "cloudstack_instance" "foobar" { }`, 
CLOUDSTACK_SERVICE_OFFERING_1, CLOUDSTACK_NETWORK_1, - CLOUDSTACK_NETWORK_1_IPADDRESS, + CLOUDSTACK_NETWORK_1_IPADDRESS1, CLOUDSTACK_TEMPLATE, CLOUDSTACK_ZONE) diff --git a/builtin/providers/cloudstack/resource_cloudstack_network.go b/builtin/providers/cloudstack/resource_cloudstack_network.go index a76beae325..261d0ec508 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_network.go +++ b/builtin/providers/cloudstack/resource_cloudstack_network.go @@ -4,6 +4,7 @@ import ( "fmt" "log" "net" + "strconv" "strings" "github.com/hashicorp/terraform/helper/schema" @@ -35,11 +36,38 @@ func resourceCloudStackNetwork() *schema.Resource { ForceNew: true, }, + "gateway": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, + + "startip": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, + + "endip": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, + "network_offering": &schema.Schema{ Type: schema.TypeString, Required: true, }, + "vlan": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + ForceNew: true, + }, + "vpc": &schema.Schema{ Type: schema.TypeString, Optional: true, @@ -63,6 +91,8 @@ func resourceCloudStackNetwork() *schema.Resource { Required: true, ForceNew: true, }, + + "tags": tagsSchema(), }, } } @@ -89,22 +119,24 @@ func resourceCloudStackNetworkCreate(d *schema.ResourceData, meta interface{}) e if !ok { displaytext = name } - // Create a new parameter struct p := cs.Network.NewCreateNetworkParams(displaytext.(string), name, networkofferingid, zoneid) - // Get the network details from the CIDR - m, err := parseCIDR(d.Get("cidr").(string)) + m, err := parseCIDR(d) if err != nil { return err } // Set the needed IP config - p.SetStartip(m["start"]) + p.SetStartip(m["startip"]) p.SetGateway(m["gateway"]) - p.SetEndip(m["end"]) + p.SetEndip(m["endip"]) p.SetNetmask(m["netmask"]) + if vlan, ok := d.GetOk("vlan"); ok { + p.SetVlan(strconv.Itoa(vlan.(int))) + } + // Check is this network needs to be created in a VPC vpc := d.Get("vpc").(string) if vpc != "" { @@ -144,6 +176,11 @@ func resourceCloudStackNetworkCreate(d *schema.ResourceData, meta interface{}) e d.SetId(r.Id) + err = setTags(cs, d, "network") + if err != nil { + return fmt.Errorf("Error setting tags: %s", err) + } + return resourceCloudStackNetworkRead(d, meta) } @@ -166,6 +203,14 @@ func resourceCloudStackNetworkRead(d *schema.ResourceData, meta interface{}) err d.Set("name", n.Name) d.Set("display_text", n.Displaytext) d.Set("cidr", n.Cidr) + d.Set("gateway", n.Gateway) + + // Read the tags and store them in a map + tags := make(map[string]interface{}) + for item := range n.Tags { + tags[n.Tags[item].Key] = n.Tags[item].Value + } + d.Set("tags", tags) setValueOrID(d, "network_offering", n.Networkofferingname, n.Networkofferingid) setValueOrID(d, "project", n.Project, n.Projectid) @@ -216,6 +261,14 @@ func resourceCloudStackNetworkUpdate(d *schema.ResourceData, meta interface{}) e "Error updating network %s: %s", name, err) } + // Update tags if they have changed + if d.HasChange("tags") { + err = setTags(cs, d, "network") + if err != nil { + return fmt.Errorf("Error updating tags: %s", err) + } + } + return resourceCloudStackNetworkRead(d, meta) } @@ -240,9 +293,10 @@ func resourceCloudStackNetworkDelete(d *schema.ResourceData, meta interface{}) e return nil } -func parseCIDR(cidr string) (map[string]string, error) { +func parseCIDR(d *schema.ResourceData) 
(map[string]string, error) { m := make(map[string]string, 4) + cidr := d.Get("cidr").(string) ip, ipnet, err := net.ParseCIDR(cidr) if err != nil { return nil, fmt.Errorf("Unable to parse cidr %s: %s", cidr, err) @@ -252,10 +306,25 @@ func parseCIDR(cidr string) (map[string]string, error) { sub := ip.Mask(msk) m["netmask"] = fmt.Sprintf("%d.%d.%d.%d", msk[0], msk[1], msk[2], msk[3]) - m["gateway"] = fmt.Sprintf("%d.%d.%d.%d", sub[0], sub[1], sub[2], sub[3]+1) - m["start"] = fmt.Sprintf("%d.%d.%d.%d", sub[0], sub[1], sub[2], sub[3]+2) - m["end"] = fmt.Sprintf("%d.%d.%d.%d", - sub[0]+(0xff-msk[0]), sub[1]+(0xff-msk[1]), sub[2]+(0xff-msk[2]), sub[3]+(0xff-msk[3]-1)) + + if gateway, ok := d.GetOk("gateway"); ok { + m["gateway"] = gateway.(string) + } else { + m["gateway"] = fmt.Sprintf("%d.%d.%d.%d", sub[0], sub[1], sub[2], sub[3]+1) + } + + if startip, ok := d.GetOk("startip"); ok { + m["startip"] = startip.(string) + } else { + m["startip"] = fmt.Sprintf("%d.%d.%d.%d", sub[0], sub[1], sub[2], sub[3]+2) + } + + if endip, ok := d.GetOk("endip"); ok { + m["endip"] = endip.(string) + } else { + m["endip"] = fmt.Sprintf("%d.%d.%d.%d", + sub[0]+(0xff-msk[0]), sub[1]+(0xff-msk[1]), sub[2]+(0xff-msk[2]), sub[3]+(0xff-msk[3]-1)) + } return m, nil } diff --git a/builtin/providers/cloudstack/resource_cloudstack_network_acl_rule.go b/builtin/providers/cloudstack/resource_cloudstack_network_acl_rule.go index 18446738a1..14e39d99c9 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_network_acl_rule.go +++ b/builtin/providers/cloudstack/resource_cloudstack_network_acl_rule.go @@ -1,14 +1,13 @@ package cloudstack import ( - "bytes" "fmt" - "regexp" - "sort" "strconv" "strings" + "sync" + "time" - "github.com/hashicorp/terraform/helper/hashcode" + "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform/helper/schema" "github.com/xanzy/go-cloudstack/cloudstack" ) @@ -44,9 +43,17 @@ func resourceCloudStackNetworkACLRule() *schema.Resource { Default: "allow", }, + "cidr_list": &schema.Schema{ + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + }, + "source_cidr": &schema.Schema{ - Type: schema.TypeString, - Required: true, + Type: schema.TypeString, + Optional: true, + Deprecated: "Please use the `cidr_list` field instead", }, "protocol": &schema.Schema{ @@ -70,9 +77,7 @@ func resourceCloudStackNetworkACLRule() *schema.Resource { Type: schema.TypeSet, Optional: true, Elem: &schema.Schema{Type: schema.TypeString}, - Set: func(v interface{}) int { - return hashcode.String(v.(string)) - }, + Set: schema.HashString, }, "traffic_type": &schema.Schema{ @@ -87,7 +92,12 @@ func resourceCloudStackNetworkACLRule() *schema.Resource { }, }, }, - Set: resourceCloudStackNetworkACLRuleHash, + }, + + "parallelism": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + Default: 2, }, }, } @@ -103,32 +113,67 @@ func resourceCloudStackNetworkACLRuleCreate(d *schema.ResourceData, meta interfa d.SetId(d.Get("aclid").(string)) // Create all rules that are configured - if rs := d.Get("rule").(*schema.Set); rs.Len() > 0 { + if nrs := d.Get("rule").(*schema.Set); nrs.Len() > 0 { + // Create an empty rule set to hold all newly created rules + rules := resourceCloudStackNetworkACLRule().Schema["rule"].ZeroValue().(*schema.Set) - // Create an empty schema.Set to hold all rules - rules := &schema.Set{ - F: resourceCloudStackNetworkACLRuleHash, - } + err := createNetworkACLRules(d, meta, rules, nrs) - for _, rule := range rs.List() { - // Create 
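To make the defaults computed by parseCIDR above concrete: for `cidr = "10.1.2.0/24"` with none of gateway/startip/endip set, it yields netmask 255.255.255.0, gateway 10.1.2.1, startip 10.1.2.2 and endip 10.1.2.254. The same arithmetic, stripped of the schema plumbing:

```go
package main

import (
	"fmt"
	"net"
)

// Re-derive the network defaults computed by parseCIDR for a /24.
func main() {
	ip, ipnet, err := net.ParseCIDR("10.1.2.0/24")
	if err != nil {
		panic(err)
	}

	msk := ipnet.Mask
	sub := ip.Mask(msk).To4()

	fmt.Printf("netmask: %d.%d.%d.%d\n", msk[0], msk[1], msk[2], msk[3])
	fmt.Printf("gateway: %d.%d.%d.%d\n", sub[0], sub[1], sub[2], sub[3]+1)
	fmt.Printf("startip: %d.%d.%d.%d\n", sub[0], sub[1], sub[2], sub[3]+2)
	fmt.Printf("endip:   %d.%d.%d.%d\n",
		sub[0]+(0xff-msk[0]), sub[1]+(0xff-msk[1]),
		sub[2]+(0xff-msk[2]), sub[3]+(0xff-msk[3]-1))
}
```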
a single rule - err := resourceCloudStackNetworkACLRuleCreateRule(d, meta, rule.(map[string]interface{})) + // We need to update this first to preserve the correct state + d.Set("rule", rules) - // We need to update this first to preserve the correct state - rules.Add(rule) - d.Set("rule", rules) - - if err != nil { - return err - } + if err != nil { + return err } } return resourceCloudStackNetworkACLRuleRead(d, meta) } -func resourceCloudStackNetworkACLRuleCreateRule( - d *schema.ResourceData, meta interface{}, rule map[string]interface{}) error { +func createNetworkACLRules( + d *schema.ResourceData, + meta interface{}, + rules *schema.Set, + nrs *schema.Set) error { + var errs *multierror.Error + + var wg sync.WaitGroup + wg.Add(nrs.Len()) + + sem := make(chan struct{}, d.Get("parallelism").(int)) + for _, rule := range nrs.List() { + // Put in a tiny sleep here to avoid DoS'ing the API + time.Sleep(500 * time.Millisecond) + + go func(rule map[string]interface{}) { + defer wg.Done() + sem <- struct{}{} + + // Create a single rule + err := createNetworkACLRule(d, meta, rule) + + // If we have at least one UUID, we need to save the rule + if len(rule["uuids"].(map[string]interface{})) > 0 { + rules.Add(rule) + } + + if err != nil { + errs = multierror.Append(errs, err) + } + + <-sem + }(rule.(map[string]interface{})) + } + + wg.Wait() + + return errs.ErrorOrNil() +} + +func createNetworkACLRule( + d *schema.ResourceData, + meta interface{}, + rule map[string]interface{}) error { cs := meta.(*cloudstack.CloudStackClient) uuids := rule["uuids"].(map[string]interface{}) @@ -147,7 +192,7 @@ func resourceCloudStackNetworkACLRuleCreateRule( p.SetAction(rule["action"].(string)) // Set the CIDR list - p.SetCidrlist([]string{rule["source_cidr"].(string)}) + p.SetCidrlist(retrieveCidrList(rule)) // Set the traffic type p.SetTraffictype(rule["traffic_type"].(string)) @@ -182,15 +227,16 @@ func resourceCloudStackNetworkACLRuleCreateRule( if ps := rule["ports"].(*schema.Set); ps.Len() > 0 { // Create an empty schema.Set to hold all processed ports - ports := &schema.Set{ - F: func(v interface{}) int { - return hashcode.String(v.(string)) - }, - } + ports := &schema.Set{F: schema.HashString} for _, port := range ps.List() { - re := regexp.MustCompile(`^(\d+)(?:-(\d+))?$`) - m := re.FindStringSubmatch(port.(string)) + if _, ok := uuids[port.(string)]; ok { + ports.Add(port) + rule["ports"] = ports + continue + } + + m := splitPorts.FindStringSubmatch(port.(string)) startPort, err := strconv.Atoi(m[1]) if err != nil { @@ -245,9 +291,7 @@ func resourceCloudStackNetworkACLRuleRead(d *schema.ResourceData, meta interface } // Create an empty schema.Set to hold all rules - rules := &schema.Set{ - F: resourceCloudStackNetworkACLRuleHash, - } + rules := resourceCloudStackNetworkACLRule().Schema["rule"].ZeroValue().(*schema.Set) // Read all rules that are configured if rs := d.Get("rule").(*schema.Set); rs.Len() > 0 { @@ -273,11 +317,11 @@ func resourceCloudStackNetworkACLRuleRead(d *schema.ResourceData, meta interface // Update the values rule["action"] = strings.ToLower(r.Action) - rule["source_cidr"] = r.Cidrlist rule["protocol"] = r.Protocol rule["icmp_type"] = r.Icmptype rule["icmp_code"] = r.Icmpcode rule["traffic_type"] = strings.ToLower(r.Traffictype) + setCidrList(rule, r.Cidrlist) rules.Add(rule) } @@ -299,9 +343,9 @@ func resourceCloudStackNetworkACLRuleRead(d *schema.ResourceData, meta interface // Update the values rule["action"] = strings.ToLower(r.Action) - rule["source_cidr"] = r.Cidrlist 
rule["protocol"] = r.Protocol rule["traffic_type"] = strings.ToLower(r.Traffictype) + setCidrList(rule, r.Cidrlist) rules.Add(rule) } @@ -310,11 +354,7 @@ func resourceCloudStackNetworkACLRuleRead(d *schema.ResourceData, meta interface if ps := rule["ports"].(*schema.Set); ps.Len() > 0 { // Create an empty schema.Set to hold all ports - ports := &schema.Set{ - F: func(v interface{}) int { - return hashcode.String(v.(string)) - }, - } + ports := &schema.Set{F: schema.HashString} // Loop through all ports and retrieve their info for _, port := range ps.List() { @@ -335,9 +375,9 @@ func resourceCloudStackNetworkACLRuleRead(d *schema.ResourceData, meta interface // Update the values rule["action"] = strings.ToLower(r.Action) - rule["source_cidr"] = r.Cidrlist rule["protocol"] = r.Protocol rule["traffic_type"] = strings.ToLower(r.Traffictype) + setCidrList(rule, r.Cidrlist) ports.Add(port) } @@ -351,23 +391,25 @@ func resourceCloudStackNetworkACLRuleRead(d *schema.ResourceData, meta interface } } - // If this is a managed firewall, add all unknown rules into a single dummy rule + // If this is a managed firewall, add all unknown rules into dummy rules managed := d.Get("managed").(bool) if managed && len(ruleMap) > 0 { - // Add all UUIDs to a uuids map - uuids := make(map[string]interface{}, len(ruleMap)) for uuid := range ruleMap { - uuids[uuid] = uuid - } + // We need to create and add a dummy value to a schema.Set as the + // cidr_list is a required field and thus needs a value + cidrs := &schema.Set{F: schema.HashString} + cidrs.Add(uuid) - rule := map[string]interface{}{ - "source_cidr": "N/A", - "protocol": "N/A", - "uuids": uuids, - } + // Make a dummy rule to hold the unknown UUID + rule := map[string]interface{}{ + "cidr_list": cidrs, + "protocol": uuid, + "uuids": map[string]interface{}{uuid: uuid}, + } - // Add the dummy rule to the rules set - rules.Add(rule) + // Add the dummy rule to the rules set + rules.Add(rule) + } } if rules.Len() > 0 { @@ -391,26 +433,29 @@ func resourceCloudStackNetworkACLRuleUpdate(d *schema.ResourceData, meta interfa ors := o.(*schema.Set).Difference(n.(*schema.Set)) nrs := n.(*schema.Set).Difference(o.(*schema.Set)) - // Now first loop through all the old rules and delete any obsolete ones - for _, rule := range ors.List() { - // Delete the rule as it no longer exists in the config - err := resourceCloudStackNetworkACLRuleDeleteRule(d, meta, rule.(map[string]interface{})) + // We need to start with a rule set containing all the rules we + // already have and want to keep. 
Any rules that are not deleted + correctly and any newly created rules will be added to this + set to make sure we end up in a consistent state + rules := o.(*schema.Set).Intersection(n.(*schema.Set)) + + // First loop through all the new rules and create (before destroy) them + if nrs.Len() > 0 { + err := createNetworkACLRules(d, meta, rules, nrs) + + // We need to update this first to preserve the correct state + d.Set("rule", rules) + if err != nil { return err } } - // Make sure we save the state of the currently configured rules - rules := o.(*schema.Set).Intersection(n.(*schema.Set)) - d.Set("rule", rules) - - // Then loop through all the currently configured rules and create the new ones - for _, rule := range nrs.List() { - // When successfully deleted, re-create it again if it still exists - err := resourceCloudStackNetworkACLRuleCreateRule(d, meta, rule.(map[string]interface{})) + // Then loop through all the old rules and delete them + if ors.Len() > 0 { + err := deleteNetworkACLRules(d, meta, rules, ors) // We need to update this first to preserve the correct state - rules.Add(rule) d.Set("rule", rules) if err != nil { @@ -423,26 +468,69 @@ func resourceCloudStackNetworkACLRuleUpdate(d *schema.ResourceData, meta interfa } func resourceCloudStackNetworkACLRuleDelete(d *schema.ResourceData, meta interface{}) error { + // Create an empty rule set to hold all rules that were + not deleted correctly + rules := resourceCloudStackNetworkACLRule().Schema["rule"].ZeroValue().(*schema.Set) + // Delete all rules - if rs := d.Get("rule").(*schema.Set); rs.Len() > 0 { - for _, rule := range rs.List() { - // Delete a single rule - err := resourceCloudStackNetworkACLRuleDeleteRule(d, meta, rule.(map[string]interface{})) + if ors := d.Get("rule").(*schema.Set); ors.Len() > 0 { + err := deleteNetworkACLRules(d, meta, rules, ors) - // We need to update this first to preserve the correct state - d.Set("rule", rules) + // We need to update this first to preserve the correct state + d.Set("rule", rules) - if err != nil { - return err - } + if err != nil { + return err } } return nil } -func resourceCloudStackNetworkACLRuleDeleteRule( - d *schema.ResourceData, meta interface{}, rule map[string]interface{}) error { +func deleteNetworkACLRules( + d *schema.ResourceData, + meta interface{}, + rules *schema.Set, + ors *schema.Set) error { + var errs *multierror.Error + + var wg sync.WaitGroup + wg.Add(ors.Len()) + + sem := make(chan struct{}, d.Get("parallelism").(int)) + for _, rule := range ors.List() { + // Put a sleep here to avoid DoS'ing the API + time.Sleep(500 * time.Millisecond) + + go func(rule map[string]interface{}) { + defer wg.Done() + sem <- struct{}{} + + // Delete a single rule + err := deleteNetworkACLRule(d, meta, rule) + + // If we have at least one UUID, we need to save the rule + if len(rule["uuids"].(map[string]interface{})) > 0 { + rules.Add(rule) + } + + if err != nil { + errs = multierror.Append(errs, err) + } + + <-sem + }(rule.(map[string]interface{})) + } + + wg.Wait() + + return errs.ErrorOrNil() +} + +func deleteNetworkACLRule( + d *schema.ResourceData, + meta interface{}, + rule map[string]interface{}) error { cs := meta.(*cloudstack.CloudStackClient) uuids := rule["uuids"].(map[string]interface{}) @@ -463,6 +551,7 @@ func resourceCloudStackNetworkACLRuleDeleteRule( "Invalid parameter id value=%s due to incorrect long value format, "+ "or entity does not exist", id.(string))) { delete(uuids, k) + rule["uuids"] = uuids continue } 
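// Editor's note (sketch, not part of this patch): createNetworkACLRules and
// deleteNetworkACLRules above share one concurrency shape: a sync.WaitGroup
// to wait for all workers, a buffered channel used as a semaphore to bound
// the number of in-flight CloudStack API calls, and go-multierror to collect
// individual failures. Below is a minimal, generic sketch of that shape
// under stated assumptions: processItem is a hypothetical stand-in for
// createNetworkACLRule/deleteNetworkACLRule, and the mutex is my addition,
// since appending to a shared *multierror.Error from several goroutines is
// a data race unless it is synchronized.
package sketch

import (
	"sync"
	"time"

	"github.com/hashicorp/go-multierror"
)

func runBounded(items []interface{}, parallelism int, processItem func(interface{}) error) error {
	var (
		errs *multierror.Error
		mu   sync.Mutex // guards errs across workers
		wg   sync.WaitGroup
	)

	wg.Add(len(items))
	sem := make(chan struct{}, parallelism) // at most `parallelism` concurrent calls

	for _, item := range items {
		// Small delay between spawns so the API is not flooded at startup
		time.Sleep(500 * time.Millisecond)

		go func(item interface{}) {
			defer wg.Done()

			sem <- struct{}{}        // acquire a slot
			defer func() { <-sem }() // release it when done

			if err := processItem(item); err != nil {
				mu.Lock()
				errs = multierror.Append(errs, err)
				mu.Unlock()
			}
		}(item)
	}

	wg.Wait()
	return errs.ErrorOrNil()
}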
@@ -471,66 +560,12 @@ func resourceCloudStackNetworkACLRuleDeleteRule( // Delete the UUID of this rule delete(uuids, k) + rule["uuids"] = uuids } - // Update the UUIDs - rule["uuids"] = uuids - return nil } -func resourceCloudStackNetworkACLRuleHash(v interface{}) int { - var buf bytes.Buffer - m := v.(map[string]interface{}) - - // This is a little ugly, but it's needed because these arguments have - // a default value that needs to be part of the string to hash - var action, trafficType string - if a, ok := m["action"]; ok { - action = a.(string) - } else { - action = "allow" - } - if t, ok := m["traffic_type"]; ok { - trafficType = t.(string) - } else { - trafficType = "ingress" - } - - buf.WriteString(fmt.Sprintf( - "%s-%s-%s-%s-", - action, - m["source_cidr"].(string), - m["protocol"].(string), - trafficType)) - - if v, ok := m["icmp_type"]; ok { - buf.WriteString(fmt.Sprintf("%d-", v.(int))) - } - - if v, ok := m["icmp_code"]; ok { - buf.WriteString(fmt.Sprintf("%d-", v.(int))) - } - - // We need to make sure to sort the strings below so that we always - // generate the same hash code no matter what is in the set. - if v, ok := m["ports"]; ok { - vs := v.(*schema.Set).List() - s := make([]string, len(vs)) - - for i, raw := range vs { - s[i] = raw.(string) - } - sort.Strings(s) - - for _, v := range s { - buf.WriteString(fmt.Sprintf("%s-", v)) - } - } - - return hashcode.String(buf.String()) -} - func verifyNetworkACLParams(d *schema.ResourceData) error { managed := d.Get("managed").(bool) _, rules := d.GetOk("rule") @@ -549,6 +584,17 @@ func verifyNetworkACLRuleParams(d *schema.ResourceData, rule map[string]interfac return fmt.Errorf("Parameter action only accepts 'allow' or 'deny' as values") } + cidrList := rule["cidr_list"].(*schema.Set) + sourceCidr := rule["source_cidr"].(string) + if cidrList.Len() == 0 && sourceCidr == "" { + return fmt.Errorf( + "Parameter cidr_list is a required parameter") + } + if cidrList.Len() > 0 && sourceCidr != "" { + return fmt.Errorf( + "Parameter source_cidr is deprecated and cannot be used together with cidr_list") + } + protocol := rule["protocol"].(string) switch protocol { case "icmp": @@ -563,7 +609,15 @@ func verifyNetworkACLRuleParams(d *schema.ResourceData, rule map[string]interfac case "all": // No additional tests are needed, so just leave this empty... case "tcp", "udp": - if _, ok := rule["ports"]; !ok { + if ports, ok := rule["ports"].(*schema.Set); ok { + for _, port := range ports.List() { + m := splitPorts.FindStringSubmatch(port.(string)) + if m == nil { + return fmt.Errorf( + "%q is not a valid port value. Valid options are '80' or '80-90'", port.(string)) + } + } + } else { return fmt.Errorf( "Parameter ports is a required parameter when *not* using protocol 'icmp'") } @@ -571,7 +625,7 @@ func verifyNetworkACLRuleParams(d *schema.ResourceData, rule map[string]interfac _, err := strconv.ParseInt(protocol, 0, 0) if err != nil { return fmt.Errorf( - "%s is not a valid protocol. Valid options are 'tcp', 'udp', "+ + "%q is not a valid protocol. 
Valid options are 'tcp', 'udp', "+ "'icmp', 'all' or a valid protocol number", protocol) } } diff --git a/builtin/providers/cloudstack/resource_cloudstack_network_acl_rule_test.go b/builtin/providers/cloudstack/resource_cloudstack_network_acl_rule_test.go index 6f2370f5b6..862418f704 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_network_acl_rule_test.go +++ b/builtin/providers/cloudstack/resource_cloudstack_network_acl_rule_test.go @@ -23,19 +23,31 @@ func TestAccCloudStackNetworkACLRule_basic(t *testing.T) { resource.TestCheckResourceAttr( "cloudstack_network_acl_rule.foo", "rule.#", "3"), resource.TestCheckResourceAttr( - "cloudstack_network_acl_rule.foo", "rule.3247834462.action", "allow"), + "cloudstack_network_acl_rule.foo", "rule.2792403380.action", "allow"), resource.TestCheckResourceAttr( - "cloudstack_network_acl_rule.foo", "rule.3247834462.source_cidr", "172.16.100.0/24"), + "cloudstack_network_acl_rule.foo", "rule.2792403380.source_cidr", "172.16.100.0/24"), resource.TestCheckResourceAttr( - "cloudstack_network_acl_rule.foo", "rule.3247834462.protocol", "tcp"), + "cloudstack_network_acl_rule.foo", "rule.2792403380.protocol", "tcp"), resource.TestCheckResourceAttr( - "cloudstack_network_acl_rule.foo", "rule.3247834462.ports.#", "2"), + "cloudstack_network_acl_rule.foo", "rule.2792403380.ports.#", "2"), resource.TestCheckResourceAttr( - "cloudstack_network_acl_rule.foo", "rule.3247834462.ports.1889509032", "80"), + "cloudstack_network_acl_rule.foo", "rule.2792403380.ports.1889509032", "80"), resource.TestCheckResourceAttr( - "cloudstack_network_acl_rule.foo", "rule.3247834462.ports.3638101695", "443"), + "cloudstack_network_acl_rule.foo", "rule.2792403380.ports.3638101695", "443"), resource.TestCheckResourceAttr( - "cloudstack_network_acl_rule.foo", "rule.3247834462.traffic_type", "ingress"), + "cloudstack_network_acl_rule.foo", "rule.2792403380.traffic_type", "ingress"), + resource.TestCheckResourceAttr( + "cloudstack_network_acl_rule.foo", "rule.4029966697.action", "allow"), + resource.TestCheckResourceAttr( + "cloudstack_network_acl_rule.foo", "rule.4029966697.cidr_list.#", "1"), + resource.TestCheckResourceAttr( + "cloudstack_network_acl_rule.foo", "rule.4029966697.cidr_list.3056857544", "172.18.100.0/24"), + resource.TestCheckResourceAttr( + "cloudstack_network_acl_rule.foo", "rule.4029966697.icmp_code", "-1"), + resource.TestCheckResourceAttr( + "cloudstack_network_acl_rule.foo", "rule.4029966697.icmp_type", "-1"), + resource.TestCheckResourceAttr( + "cloudstack_network_acl_rule.foo", "rule.4029966697.traffic_type", "ingress"), ), }, }, @@ -55,19 +67,31 @@ func TestAccCloudStackNetworkACLRule_update(t *testing.T) { resource.TestCheckResourceAttr( "cloudstack_network_acl_rule.foo", "rule.#", "3"), resource.TestCheckResourceAttr( - "cloudstack_network_acl_rule.foo", "rule.3247834462.action", "allow"), + "cloudstack_network_acl_rule.foo", "rule.2792403380.action", "allow"), resource.TestCheckResourceAttr( - "cloudstack_network_acl_rule.foo", "rule.3247834462.source_cidr", "172.16.100.0/24"), + "cloudstack_network_acl_rule.foo", "rule.2792403380.source_cidr", "172.16.100.0/24"), resource.TestCheckResourceAttr( - "cloudstack_network_acl_rule.foo", "rule.3247834462.protocol", "tcp"), + "cloudstack_network_acl_rule.foo", "rule.2792403380.protocol", "tcp"), resource.TestCheckResourceAttr( - "cloudstack_network_acl_rule.foo", "rule.3247834462.ports.#", "2"), + "cloudstack_network_acl_rule.foo", "rule.2792403380.ports.#", "2"), resource.TestCheckResourceAttr( - 
"cloudstack_network_acl_rule.foo", "rule.3247834462.ports.1889509032", "80"), + "cloudstack_network_acl_rule.foo", "rule.2792403380.ports.1889509032", "80"), resource.TestCheckResourceAttr( - "cloudstack_network_acl_rule.foo", "rule.3247834462.ports.3638101695", "443"), + "cloudstack_network_acl_rule.foo", "rule.2792403380.ports.3638101695", "443"), resource.TestCheckResourceAttr( - "cloudstack_network_acl_rule.foo", "rule.3247834462.traffic_type", "ingress"), + "cloudstack_network_acl_rule.foo", "rule.2792403380.traffic_type", "ingress"), + resource.TestCheckResourceAttr( + "cloudstack_network_acl_rule.foo", "rule.4029966697.action", "allow"), + resource.TestCheckResourceAttr( + "cloudstack_network_acl_rule.foo", "rule.4029966697.cidr_list.#", "1"), + resource.TestCheckResourceAttr( + "cloudstack_network_acl_rule.foo", "rule.4029966697.cidr_list.3056857544", "172.18.100.0/24"), + resource.TestCheckResourceAttr( + "cloudstack_network_acl_rule.foo", "rule.4029966697.icmp_code", "-1"), + resource.TestCheckResourceAttr( + "cloudstack_network_acl_rule.foo", "rule.4029966697.icmp_type", "-1"), + resource.TestCheckResourceAttr( + "cloudstack_network_acl_rule.foo", "rule.4029966697.traffic_type", "ingress"), ), }, @@ -78,33 +102,47 @@ func TestAccCloudStackNetworkACLRule_update(t *testing.T) { resource.TestCheckResourceAttr( "cloudstack_network_acl_rule.foo", "rule.#", "4"), resource.TestCheckResourceAttr( - "cloudstack_network_acl_rule.foo", "rule.3247834462.action", "allow"), + "cloudstack_network_acl_rule.foo", "rule.2254982534.action", "deny"), resource.TestCheckResourceAttr( - "cloudstack_network_acl_rule.foo", "rule.3247834462.source_cidr", "172.16.100.0/24"), + "cloudstack_network_acl_rule.foo", "rule.2254982534.source_cidr", "10.0.0.0/24"), resource.TestCheckResourceAttr( - "cloudstack_network_acl_rule.foo", "rule.3247834462.protocol", "tcp"), + "cloudstack_network_acl_rule.foo", "rule.2254982534.protocol", "tcp"), resource.TestCheckResourceAttr( - "cloudstack_network_acl_rule.foo", "rule.3247834462.ports.#", "2"), + "cloudstack_network_acl_rule.foo", "rule.2254982534.ports.#", "2"), resource.TestCheckResourceAttr( - "cloudstack_network_acl_rule.foo", "rule.3247834462.ports.1889509032", "80"), + "cloudstack_network_acl_rule.foo", "rule.2254982534.ports.1209010669", "1000-2000"), resource.TestCheckResourceAttr( - "cloudstack_network_acl_rule.foo", "rule.3247834462.ports.3638101695", "443"), + "cloudstack_network_acl_rule.foo", "rule.2254982534.ports.1889509032", "80"), resource.TestCheckResourceAttr( - "cloudstack_network_acl_rule.foo", "rule.3247834462.traffic_type", "ingress"), + "cloudstack_network_acl_rule.foo", "rule.2254982534.traffic_type", "egress"), resource.TestCheckResourceAttr( - "cloudstack_network_acl_rule.foo", "rule.4267872693.action", "deny"), + "cloudstack_network_acl_rule.foo", "rule.2704020556.action", "deny"), resource.TestCheckResourceAttr( - "cloudstack_network_acl_rule.foo", "rule.4267872693.source_cidr", "10.0.0.0/24"), + "cloudstack_network_acl_rule.foo", "rule.2704020556.cidr_list.#", "2"), resource.TestCheckResourceAttr( - "cloudstack_network_acl_rule.foo", "rule.4267872693.protocol", "tcp"), + "cloudstack_network_acl_rule.foo", "rule.2704020556.cidr_list.2104435309", "172.18.101.0/24"), resource.TestCheckResourceAttr( - "cloudstack_network_acl_rule.foo", "rule.4267872693.ports.#", "2"), + "cloudstack_network_acl_rule.foo", "rule.2704020556.cidr_list.3056857544", "172.18.100.0/24"), resource.TestCheckResourceAttr( - "cloudstack_network_acl_rule.foo", 
"rule.4267872693.ports.1209010669", "1000-2000"), + "cloudstack_network_acl_rule.foo", "rule.2704020556.icmp_code", "-1"), resource.TestCheckResourceAttr( - "cloudstack_network_acl_rule.foo", "rule.4267872693.ports.1889509032", "80"), + "cloudstack_network_acl_rule.foo", "rule.2704020556.icmp_type", "-1"), resource.TestCheckResourceAttr( - "cloudstack_network_acl_rule.foo", "rule.4267872693.traffic_type", "egress"), + "cloudstack_network_acl_rule.foo", "rule.2704020556.traffic_type", "ingress"), + resource.TestCheckResourceAttr( + "cloudstack_network_acl_rule.foo", "rule.2792403380.action", "allow"), + resource.TestCheckResourceAttr( + "cloudstack_network_acl_rule.foo", "rule.2792403380.source_cidr", "172.16.100.0/24"), + resource.TestCheckResourceAttr( + "cloudstack_network_acl_rule.foo", "rule.2792403380.protocol", "tcp"), + resource.TestCheckResourceAttr( + "cloudstack_network_acl_rule.foo", "rule.2792403380.ports.#", "2"), + resource.TestCheckResourceAttr( + "cloudstack_network_acl_rule.foo", "rule.2792403380.ports.1889509032", "80"), + resource.TestCheckResourceAttr( + "cloudstack_network_acl_rule.foo", "rule.2792403380.ports.3638101695", "443"), + resource.TestCheckResourceAttr( + "cloudstack_network_acl_rule.foo", "rule.2792403380.traffic_type", "ingress"), ), }, }, @@ -196,7 +234,7 @@ resource "cloudstack_network_acl_rule" "foo" { rule { action = "allow" - source_cidr = "172.18.100.0/24" + cidr_list = ["172.18.100.0/24"] protocol = "icmp" icmp_type = "-1" icmp_code = "-1" @@ -240,7 +278,7 @@ resource "cloudstack_network_acl_rule" "foo" { rule { action = "deny" - source_cidr = "172.18.100.0/24" + cidr_list = ["172.18.100.0/24", "172.18.101.0/24"] protocol = "icmp" icmp_type = "-1" icmp_code = "-1" diff --git a/builtin/providers/cloudstack/resource_cloudstack_network_test.go b/builtin/providers/cloudstack/resource_cloudstack_network_test.go index 2cc366c15d..3bc1744b9b 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_network_test.go +++ b/builtin/providers/cloudstack/resource_cloudstack_network_test.go @@ -23,6 +23,7 @@ func TestAccCloudStackNetwork_basic(t *testing.T) { testAccCheckCloudStackNetworkExists( "cloudstack_network.foo", &network), testAccCheckCloudStackNetworkBasicAttributes(&network), + testAccCheckNetworkTags(&network, "terraform-tag", "true"), ), }, }, @@ -93,17 +94,28 @@ func testAccCheckCloudStackNetworkBasicAttributes( } if network.Cidr != CLOUDSTACK_NETWORK_2_CIDR { - return fmt.Errorf("Bad service offering: %s", network.Cidr) + return fmt.Errorf("Bad CIDR: %s", network.Cidr) } if network.Networkofferingname != CLOUDSTACK_NETWORK_2_OFFERING { - return fmt.Errorf("Bad template: %s", network.Networkofferingname) + return fmt.Errorf("Bad network offering: %s", network.Networkofferingname) } return nil } } +func testAccCheckNetworkTags( + n *cloudstack.Network, key string, value string) resource.TestCheckFunc { + return func(s *terraform.State) error { + tags := make(map[string]string) + for item := range n.Tags { + tags[n.Tags[item].Key] = n.Tags[item].Value + } + return testAccCheckTags(tags, key, value) + } +} + func testAccCheckCloudStackNetworkVPCAttributes( network *cloudstack.Network) resource.TestCheckFunc { return func(s *terraform.State) error { @@ -151,10 +163,13 @@ func testAccCheckCloudStackNetworkDestroy(s *terraform.State) error { var testAccCloudStackNetwork_basic = fmt.Sprintf(` resource "cloudstack_network" "foo" { - name = "terraform-network" - cidr = "%s" - network_offering = "%s" - zone = "%s" + name = "terraform-network" + cidr = 
"%s" + network_offering = "%s" + zone = "%s" + tags = { + terraform-tag = "true" + } }`, CLOUDSTACK_NETWORK_2_CIDR, CLOUDSTACK_NETWORK_2_OFFERING, @@ -169,11 +184,11 @@ resource "cloudstack_vpc" "foobar" { } resource "cloudstack_network" "foo" { - name = "terraform-network" - cidr = "%s" - network_offering = "%s" - vpc = "${cloudstack_vpc.foobar.name}" - zone = "${cloudstack_vpc.foobar.zone}" + name = "terraform-network" + cidr = "%s" + network_offering = "%s" + vpc = "${cloudstack_vpc.foobar.name}" + zone = "${cloudstack_vpc.foobar.zone}" }`, CLOUDSTACK_VPC_CIDR_1, CLOUDSTACK_VPC_OFFERING, diff --git a/builtin/providers/cloudstack/resource_cloudstack_port_forward.go b/builtin/providers/cloudstack/resource_cloudstack_port_forward.go index 0bec41af54..044482bcb6 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_port_forward.go +++ b/builtin/providers/cloudstack/resource_cloudstack_port_forward.go @@ -1,13 +1,14 @@ package cloudstack import ( - "bytes" "fmt" + "sync" + "time" "strconv" "strings" - "github.com/hashicorp/terraform/helper/hashcode" + "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform/helper/schema" "github.com/xanzy/go-cloudstack/cloudstack" ) @@ -63,7 +64,6 @@ func resourceCloudStackPortForward() *schema.Resource { }, }, }, - Set: resourceCloudStackPortForwardHash, }, }, } @@ -82,32 +82,66 @@ func resourceCloudStackPortForwardCreate(d *schema.ResourceData, meta interface{ d.SetId(ipaddressid) // Create all forwards that are configured - if rs := d.Get("forward").(*schema.Set); rs.Len() > 0 { - + if nrs := d.Get("forward").(*schema.Set); nrs.Len() > 0 { // Create an empty schema.Set to hold all forwards - forwards := &schema.Set{ - F: resourceCloudStackPortForwardHash, - } + forwards := resourceCloudStackPortForward().Schema["forward"].ZeroValue().(*schema.Set) - for _, forward := range rs.List() { - // Create a single forward - err := resourceCloudStackPortForwardCreateForward(d, meta, forward.(map[string]interface{})) + err := createPortForwards(d, meta, forwards, nrs) - // We need to update this first to preserve the correct state - forwards.Add(forward) - d.Set("forward", forwards) + // We need to update this first to preserve the correct state + d.Set("forward", forwards) - if err != nil { - return err - } + if err != nil { + return err } } return resourceCloudStackPortForwardRead(d, meta) } -func resourceCloudStackPortForwardCreateForward( - d *schema.ResourceData, meta interface{}, forward map[string]interface{}) error { +func createPortForwards( + d *schema.ResourceData, + meta interface{}, + forwards *schema.Set, + nrs *schema.Set) error { + var errs *multierror.Error + + var wg sync.WaitGroup + wg.Add(nrs.Len()) + + sem := make(chan struct{}, 10) + for _, forward := range nrs.List() { + // Put in a tiny sleep here to avoid DoS'ing the API + time.Sleep(500 * time.Millisecond) + + go func(forward map[string]interface{}) { + defer wg.Done() + sem <- struct{}{} + + // Create a single forward + err := createPortForward(d, meta, forward) + + // If we have a UUID, we need to save the forward + if forward["uuid"].(string) != "" { + forwards.Add(forward) + } + + if err != nil { + errs = multierror.Append(errs, err) + } + + <-sem + }(forward.(map[string]interface{})) + } + + wg.Wait() + + return errs.ErrorOrNil() +} +func createPortForward( + d *schema.ResourceData, + meta interface{}, + forward map[string]interface{}) error { cs := meta.(*cloudstack.CloudStackClient) // Make sure all required parameters are there @@ -150,11 +184,25 @@ func 
resourceCloudStackPortForwardCreateForward( func resourceCloudStackPortForwardRead(d *schema.ResourceData, meta interface{}) error { cs := meta.(*cloudstack.CloudStackClient) - // Create an empty schema.Set to hold all forwards - forwards := &schema.Set{ - F: resourceCloudStackPortForwardHash, + // Get all the forwards from the running environment + p := cs.Firewall.NewListPortForwardingRulesParams() + p.SetIpaddressid(d.Id()) + p.SetListall(true) + + l, err := cs.Firewall.ListPortForwardingRules(p) + if err != nil { + return err } + // Make a map of all the forwards so we can easily find a forward + forwardMap := make(map[string]*cloudstack.PortForwardingRule, l.Count) + for _, f := range l.PortForwardingRules { + forwardMap[f.Id] = f + } + + // Create an empty schema.Set to hold all forwards + forwards := resourceCloudStackPortForward().Schema["forward"].ZeroValue().(*schema.Set) + // Read all forwards that are configured if rs := d.Get("forward").(*schema.Set); rs.Len() > 0 { for _, forward := range rs.List() { @@ -166,36 +214,34 @@ func resourceCloudStackPortForwardRead(d *schema.ResourceData, meta interface{}) } // Get the forward - r, count, err := cs.Firewall.GetPortForwardingRuleByID(id.(string)) - // If the count == 0, there is no object found for this ID - if err != nil { - if count == 0 { - forward["uuid"] = "" - continue - } - - return err + f, ok := forwardMap[id.(string)] + if !ok { + forward["uuid"] = "" + continue } - privPort, err := strconv.Atoi(r.Privateport) + // Delete the known forward so only unknown forwards remain in the forwardMap + delete(forwardMap, id.(string)) + + privPort, err := strconv.Atoi(f.Privateport) if err != nil { return err } - pubPort, err := strconv.Atoi(r.Publicport) + pubPort, err := strconv.Atoi(f.Publicport) if err != nil { return err } // Update the values - forward["protocol"] = r.Protocol + forward["protocol"] = f.Protocol forward["private_port"] = privPort forward["public_port"] = pubPort if isID(forward["virtual_machine"].(string)) { - forward["virtual_machine"] = r.Virtualmachineid + forward["virtual_machine"] = f.Virtualmachineid } else { - forward["virtual_machine"] = r.Virtualmachinename + forward["virtual_machine"] = f.Virtualmachinename } forwards.Add(forward) @@ -204,33 +250,11 @@ func resourceCloudStackPortForwardRead(d *schema.ResourceData, meta interface{}) // If this is a managed resource, add all unknown forwards to dummy forwards managed := d.Get("managed").(bool) - if managed { - // Get all the forwards from the running environment - p := cs.Firewall.NewListPortForwardingRulesParams() - p.SetIpaddressid(d.Id()) - p.SetListall(true) - - r, err := cs.Firewall.ListPortForwardingRules(p) - if err != nil { - return err - } - - // Add all UUIDs to the uuids map - uuids := make(map[string]interface{}, len(r.PortForwardingRules)) - for _, r := range r.PortForwardingRules { - uuids[r.Id] = r.Id - } - - // Delete all expected UUIDs from the uuids map - for _, forward := range forwards.List() { - forward := forward.(map[string]interface{}) - delete(uuids, forward["uuid"].(string)) - } - - for uuid := range uuids { + if managed && len(forwardMap) > 0 { + for uuid := range forwardMap { // Make a dummy forward to hold the unknown UUID forward := map[string]interface{}{ - "protocol": "N/A", + "protocol": uuid, "private_port": 0, "public_port": 0, "virtual_machine": uuid, 
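// Editor's note (sketch, not part of this patch): the rewritten Read above
// replaces one GetPortForwardingRuleByID call per forward with a single
// ListPortForwardingRules call indexed by ID. Each configured forward is
// looked up in, and then deleted from, that index, so whatever remains must
// have been created outside Terraform and is surfaced as dummy entries when
// "managed" is set. A generic sketch of that reconciliation shape; the type
// and function names here are hypothetical.
package sketch

type remoteRule struct {
	ID string
}

// reconcile splits remote rules into IDs the configuration knows about and
// IDs it has never seen.
func reconcile(remote []remoteRule, configuredIDs []string) (known, unknown []string) {
	index := make(map[string]remoteRule, len(remote))
	for _, r := range remote {
		index[r.ID] = r
	}

	for _, id := range configuredIDs {
		if _, ok := index[id]; ok {
			known = append(known, id)
			delete(index, id) // only unmatched rules stay behind
		}
	}

	for id := range index {
		unknown = append(unknown, id)
	}
	return known, unknown
}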
@@ -258,26 +282,29 @@ func resourceCloudStackPortForwardUpdate(d *schema.ResourceData, meta interface{ ors := o.(*schema.Set).Difference(n.(*schema.Set)) nrs := n.(*schema.Set).Difference(o.(*schema.Set)) - // Now first loop through all the old forwards and delete any obsolete ones - for _, forward := range ors.List() { - // Delete the forward as it no longer exists in the config - err := resourceCloudStackPortForwardDeleteForward(d, meta, forward.(map[string]interface{})) + // We need to start with a forward set containing all the forwards we + already have and want to keep. Any forwards that are not deleted + correctly and any newly created forwards will be added to this + set to make sure we end up in a consistent state + forwards := o.(*schema.Set).Intersection(n.(*schema.Set)) + + // First loop through all the new forwards and create (before destroy) them + if nrs.Len() > 0 { + err := createPortForwards(d, meta, forwards, nrs) + + // We need to update this first to preserve the correct state + d.Set("forward", forwards) + if err != nil { return err } } - // Make sure we save the state of the currently configured forwards - forwards := o.(*schema.Set).Intersection(n.(*schema.Set)) - d.Set("forward", forwards) - - // Then loop through all the currently configured forwards and create the new ones - for _, forward := range nrs.List() { - err := resourceCloudStackPortForwardCreateForward( - d, meta, forward.(map[string]interface{})) + // Then loop through all the old forwards and delete them + if ors.Len() > 0 { + err := deletePortForwards(d, meta, forwards, ors) // We need to update this first to preserve the correct state - forwards.Add(forward) d.Set("forward", forwards) if err != nil { @@ -290,26 +317,69 @@ func resourceCloudStackPortForwardUpdate(d *schema.ResourceData, meta interface{ } func resourceCloudStackPortForwardDelete(d *schema.ResourceData, meta interface{}) error { + // Create an empty forward set to hold all forwards that were + not deleted correctly + forwards := resourceCloudStackPortForward().Schema["forward"].ZeroValue().(*schema.Set) + // Delete all forwards - if rs := d.Get("forward").(*schema.Set); rs.Len() > 0 { - for _, forward := range rs.List() { - // Delete a single forward - err := resourceCloudStackPortForwardDeleteForward(d, meta, forward.(map[string]interface{})) + if ors := d.Get("forward").(*schema.Set); ors.Len() > 0 { + err := deletePortForwards(d, meta, forwards, ors) - // We need to update this first to preserve the correct state - d.Set("forward", rs) + // We need to update this first to preserve the correct state + d.Set("forward", forwards) - if err != nil { - return err - } + if err != nil { + return err } } return nil } -func resourceCloudStackPortForwardDeleteForward( - d *schema.ResourceData, meta interface{}, forward map[string]interface{}) error { +func deletePortForwards( + d *schema.ResourceData, + meta interface{}, + forwards *schema.Set, + ors *schema.Set) error { + var errs *multierror.Error + + var wg sync.WaitGroup + wg.Add(ors.Len()) + + sem := make(chan struct{}, 10) + for _, forward := range ors.List() { + // Put a sleep here to avoid DoS'ing the API + time.Sleep(500 * time.Millisecond) + + go func(forward map[string]interface{}) { + defer wg.Done() + sem <- struct{}{} + + // Delete a single forward + err := deletePortForward(d, meta, forward) + + // If we have a UUID, we need to save the forward + if forward["uuid"].(string) != "" { + forwards.Add(forward) + } + + if err != nil { + errs = multierror.Append(errs, err) + } + + <-sem + }(forward.(map[string]interface{})) + } + + wg.Wait() + + return errs.ErrorOrNil() +} + +func deletePortForward( + d *schema.ResourceData, + meta interface{}, + 
forward map[string]interface{}) error { cs := meta.(*cloudstack.CloudStackClient) // Create the parameter struct @@ -331,19 +401,6 @@ func resourceCloudStackPortForwardDeleteForward( return nil } -func resourceCloudStackPortForwardHash(v interface{}) int { - var buf bytes.Buffer - m := v.(map[string]interface{}) - buf.WriteString(fmt.Sprintf( - "%s-%d-%d-%s", - m["protocol"].(string), - m["private_port"].(int), - m["public_port"].(int), - m["virtual_machine"].(string))) - - return hashcode.String(buf.String()) -} - func verifyPortForwardParams(d *schema.ResourceData, forward map[string]interface{}) error { protocol := forward["protocol"].(string) if protocol != "tcp" && protocol != "udp" { diff --git a/builtin/providers/cloudstack/resource_cloudstack_port_forward_test.go b/builtin/providers/cloudstack/resource_cloudstack_port_forward_test.go index b0851753f8..63dcdb001b 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_port_forward_test.go +++ b/builtin/providers/cloudstack/resource_cloudstack_port_forward_test.go @@ -23,13 +23,13 @@ func TestAccCloudStackPortForward_basic(t *testing.T) { resource.TestCheckResourceAttr( "cloudstack_port_forward.foo", "ipaddress", CLOUDSTACK_PUBLIC_IPADDRESS), resource.TestCheckResourceAttr( - "cloudstack_port_forward.foo", "forward.1537694805.protocol", "tcp"), + "cloudstack_port_forward.foo", "forward.952396423.protocol", "tcp"), resource.TestCheckResourceAttr( - "cloudstack_port_forward.foo", "forward.1537694805.private_port", "443"), + "cloudstack_port_forward.foo", "forward.952396423.private_port", "443"), resource.TestCheckResourceAttr( - "cloudstack_port_forward.foo", "forward.1537694805.public_port", "8443"), + "cloudstack_port_forward.foo", "forward.952396423.public_port", "8443"), resource.TestCheckResourceAttr( - "cloudstack_port_forward.foo", "forward.1537694805.virtual_machine", "terraform-test"), + "cloudstack_port_forward.foo", "forward.952396423.virtual_machine", "terraform-test"), ), }, }, @@ -51,13 +51,13 @@ func TestAccCloudStackPortForward_update(t *testing.T) { resource.TestCheckResourceAttr( "cloudstack_port_forward.foo", "forward.#", "1"), resource.TestCheckResourceAttr( - "cloudstack_port_forward.foo", "forward.1537694805.protocol", "tcp"), + "cloudstack_port_forward.foo", "forward.952396423.protocol", "tcp"), resource.TestCheckResourceAttr( - "cloudstack_port_forward.foo", "forward.1537694805.private_port", "443"), + "cloudstack_port_forward.foo", "forward.952396423.private_port", "443"), resource.TestCheckResourceAttr( - "cloudstack_port_forward.foo", "forward.1537694805.public_port", "8443"), + "cloudstack_port_forward.foo", "forward.952396423.public_port", "8443"), resource.TestCheckResourceAttr( - "cloudstack_port_forward.foo", "forward.1537694805.virtual_machine", "terraform-test"), + "cloudstack_port_forward.foo", "forward.952396423.virtual_machine", "terraform-test"), ), }, @@ -70,21 +70,21 @@ func TestAccCloudStackPortForward_update(t *testing.T) { resource.TestCheckResourceAttr( "cloudstack_port_forward.foo", "forward.#", "2"), resource.TestCheckResourceAttr( - "cloudstack_port_forward.foo", "forward.8416686.protocol", "tcp"), + "cloudstack_port_forward.foo", "forward.260687715.protocol", "tcp"), resource.TestCheckResourceAttr( - "cloudstack_port_forward.foo", "forward.8416686.private_port", "80"), + "cloudstack_port_forward.foo", "forward.260687715.private_port", "80"), resource.TestCheckResourceAttr( - "cloudstack_port_forward.foo", "forward.8416686.public_port", "8080"), + "cloudstack_port_forward.foo", 
"forward.260687715.public_port", "8080"), resource.TestCheckResourceAttr( - "cloudstack_port_forward.foo", "forward.8416686.virtual_machine", "terraform-test"), + "cloudstack_port_forward.foo", "forward.260687715.virtual_machine", "terraform-test"), resource.TestCheckResourceAttr( - "cloudstack_port_forward.foo", "forward.1537694805.protocol", "tcp"), + "cloudstack_port_forward.foo", "forward.952396423.protocol", "tcp"), resource.TestCheckResourceAttr( - "cloudstack_port_forward.foo", "forward.1537694805.private_port", "443"), + "cloudstack_port_forward.foo", "forward.952396423.private_port", "443"), resource.TestCheckResourceAttr( - "cloudstack_port_forward.foo", "forward.1537694805.public_port", "8443"), + "cloudstack_port_forward.foo", "forward.952396423.public_port", "8443"), resource.TestCheckResourceAttr( - "cloudstack_port_forward.foo", "forward.1537694805.virtual_machine", "terraform-test"), + "cloudstack_port_forward.foo", "forward.952396423.virtual_machine", "terraform-test"), ), }, }, diff --git a/builtin/providers/cloudstack/resource_cloudstack_secondary_ipaddress_test.go b/builtin/providers/cloudstack/resource_cloudstack_secondary_ipaddress_test.go index beedcd2cb2..dd59ca3f49 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_secondary_ipaddress_test.go +++ b/builtin/providers/cloudstack/resource_cloudstack_secondary_ipaddress_test.go @@ -43,7 +43,7 @@ func TestAccCloudStackSecondaryIPAddress_fixedIP(t *testing.T) { "cloudstack_secondary_ipaddress.foo", &ip), testAccCheckCloudStackSecondaryIPAddressAttributes(&ip), resource.TestCheckResourceAttr( - "cloudstack_secondary_ipaddress.foo", "ipaddress", CLOUDSTACK_NETWORK_1_IPADDRESS), + "cloudstack_secondary_ipaddress.foo", "ipaddress", CLOUDSTACK_NETWORK_1_IPADDRESS1), ), }, }, @@ -117,7 +117,7 @@ func testAccCheckCloudStackSecondaryIPAddressAttributes( ip *cloudstack.AddIpToNicResponse) resource.TestCheckFunc { return func(s *terraform.State) error { - if ip.Ipaddress != CLOUDSTACK_NETWORK_1_IPADDRESS { + if ip.Ipaddress != CLOUDSTACK_NETWORK_1_IPADDRESS1 { return fmt.Errorf("Bad IP address: %s", ip.Ipaddress) } return nil @@ -222,4 +222,4 @@ resource "cloudstack_secondary_ipaddress" "foo" { CLOUDSTACK_NETWORK_1, CLOUDSTACK_TEMPLATE, CLOUDSTACK_ZONE, - CLOUDSTACK_NETWORK_1_IPADDRESS) + CLOUDSTACK_NETWORK_1_IPADDRESS1) diff --git a/builtin/providers/cloudstack/resource_cloudstack_template_test.go b/builtin/providers/cloudstack/resource_cloudstack_template_test.go old mode 100755 new mode 100644 diff --git a/builtin/providers/cloudstack/resources.go b/builtin/providers/cloudstack/resources.go index f7115e7933..8d7090e1f5 100644 --- a/builtin/providers/cloudstack/resources.go +++ b/builtin/providers/cloudstack/resources.go @@ -4,15 +4,19 @@ import ( "fmt" "log" "regexp" + "strings" "time" "github.com/hashicorp/terraform/helper/schema" "github.com/xanzy/go-cloudstack/cloudstack" ) -// CloudStack uses a "special" ID of -1 to define an unlimited resource +// UnlimitedResourceID is a "special" ID to define an unlimited resource const UnlimitedResourceID = "-1" +// Define a regexp for parsing the port +var splitPorts = regexp.MustCompile(`^(\d+)(?:-(\d+))?$`) + type retrieveError struct { name string value string @@ -135,8 +139,8 @@ func Retry(n int, f RetryFunc) (interface{}, error) { for i := 0; i < n; i++ { r, err := f() - if err == nil { - return r, nil + if err == nil || err == cloudstack.AsyncTimeoutErr { + return r, err } lastErr = err @@ -145,3 +149,36 @@ func Retry(n int, f RetryFunc) (interface{}, error) { return 
nil, lastErr } + +// This is a temporary helper function to support both the new +// cidr_list and the deprecated source_cidr parameter +func retrieveCidrList(rule map[string]interface{}) []string { + sourceCidr := rule["source_cidr"].(string) + if sourceCidr != "" { + return []string{sourceCidr} + } + + var cidrList []string + for _, cidr := range rule["cidr_list"].(*schema.Set).List() { + cidrList = append(cidrList, cidr.(string)) + } + + return cidrList +} + +// This is a temporary helper function to support both the new +// cidr_list and the deprecated source_cidr parameter +func setCidrList(rule map[string]interface{}, cidrList string) { + sourceCidr := rule["source_cidr"].(string) + if sourceCidr != "" { + rule["source_cidr"] = cidrList + return + } + + cidrs := &schema.Set{F: schema.HashString} + for _, cidr := range strings.Split(cidrList, ",") { + cidrs.Add(cidr) + } + + rule["cidr_list"] = cidrs +} diff --git a/builtin/providers/cloudstack/tags.go b/builtin/providers/cloudstack/tags.go new file mode 100644 index 0000000000..389cdb47f2 --- /dev/null +++ b/builtin/providers/cloudstack/tags.go @@ -0,0 +1,77 @@ +package cloudstack + +import ( + "log" + + "github.com/hashicorp/terraform/helper/schema" + "github.com/xanzy/go-cloudstack/cloudstack" +) + +// tagsSchema returns the schema to use for tags +func tagsSchema() *schema.Schema { + return &schema.Schema{ + Type: schema.TypeMap, + Optional: true, + Computed: true, + } +} + +// setTags is a helper to set the tags for a resource. It expects the +// tags field to be named "tags" +func setTags(cs *cloudstack.CloudStackClient, d *schema.ResourceData, resourcetype string) error { + oraw, nraw := d.GetChange("tags") + o := oraw.(map[string]interface{}) + n := nraw.(map[string]interface{}) + + remove, create := diffTags(tagsFromSchema(o), tagsFromSchema(n)) + log.Printf("[DEBUG] tags to remove: %v", remove) + log.Printf("[DEBUG] tags to create: %v", create) + + // First remove any obsolete tags + if len(remove) > 0 { + log.Printf("[DEBUG] Removing tags: %v from %s", remove, d.Id()) + p := cs.Resourcetags.NewDeleteTagsParams([]string{d.Id()}, resourcetype) + p.SetTags(remove) + _, err := cs.Resourcetags.DeleteTags(p) + if err != nil { + return err + } + } + + // Then add any new tags + if len(create) > 0 { + log.Printf("[DEBUG] Creating tags: %v for %s", create, d.Id()) + p := cs.Resourcetags.NewCreateTagsParams([]string{d.Id()}, resourcetype, create) + _, err := cs.Resourcetags.CreateTags(p) + if err != nil { + return err + } + } + + return nil +} + +// diffTags takes the old and the new tag sets and returns the difference of +// both. 
The remaining tags are those that need to be removed and created +func diffTags(oldTags, newTags map[string]string) (map[string]string, map[string]string) { + for k, old := range oldTags { + new, ok := newTags[k] + if ok && old == new { + // We should avoid removing or creating tags we already have + delete(oldTags, k) + delete(newTags, k) + } + } + + return oldTags, newTags +} + +// tagsFromSchema takes the raw schema tags and returns them as a +// properly asserted map[string]string +func tagsFromSchema(m map[string]interface{}) map[string]string { + result := make(map[string]string, len(m)) + for k, v := range m { + result[k] = v.(string) + } + return result +} diff --git a/builtin/providers/cloudstack/tags_test.go b/builtin/providers/cloudstack/tags_test.go new file mode 100644 index 0000000000..fba9cadd7f --- /dev/null +++ b/builtin/providers/cloudstack/tags_test.go @@ -0,0 +1,70 @@ +package cloudstack + +import ( + "fmt" + "reflect" + "testing" +) + +func TestDiffTags(t *testing.T) { + cases := []struct { + Old, New map[string]interface{} + Create, Remove map[string]string + }{ + // Basic add/remove + { + Old: map[string]interface{}{ + "foo": "bar", + }, + New: map[string]interface{}{ + "bar": "baz", + }, + Create: map[string]string{ + "bar": "baz", + }, + Remove: map[string]string{ + "foo": "bar", + }, + }, + + // Modify + { + Old: map[string]interface{}{ + "foo": "bar", + }, + New: map[string]interface{}{ + "foo": "baz", + }, + Create: map[string]string{ + "foo": "baz", + }, + Remove: map[string]string{ + "foo": "bar", + }, + }, + } + + for i, tc := range cases { + r, c := diffTags(tagsFromSchema(tc.Old), tagsFromSchema(tc.New)) + if !reflect.DeepEqual(r, tc.Remove) { + t.Fatalf("%d: bad remove: %#v", i, r) + } + if !reflect.DeepEqual(c, tc.Create) { + t.Fatalf("%d: bad create: %#v", i, c) + } + } +} + +// testAccCheckTags can be used to check the tags on a resource. 
+func testAccCheckTags(tags map[string]string, key string, value string) error { + v, ok := tags[key] + if !ok { + return fmt.Errorf("Missing tag: %s", key) + } + + if v != value { + return fmt.Errorf("%s: bad value: %s", key, v) + } + + return nil +} diff --git a/builtin/providers/digitalocean/resource_digitalocean_domain.go b/builtin/providers/digitalocean/resource_digitalocean_domain.go index d7c4edca13..657acb21df 100644 --- a/builtin/providers/digitalocean/resource_digitalocean_domain.go +++ b/builtin/providers/digitalocean/resource_digitalocean_domain.go @@ -3,7 +3,6 @@ package digitalocean import ( "fmt" "log" - "strings" "github.com/digitalocean/godo" "github.com/hashicorp/terraform/helper/schema" @@ -56,11 +55,11 @@ func resourceDigitalOceanDomainCreate(d *schema.ResourceData, meta interface{}) func resourceDigitalOceanDomainRead(d *schema.ResourceData, meta interface{}) error { client := meta.(*godo.Client) - domain, _, err := client.Domains.Get(d.Id()) + domain, resp, err := client.Domains.Get(d.Id()) if err != nil { // If the domain is somehow already destroyed, mark as // successfully gone - if strings.Contains(err.Error(), "404 Not Found") { + if resp.StatusCode == 404 { d.SetId("") return nil } diff --git a/builtin/providers/digitalocean/resource_digitalocean_domain_test.go b/builtin/providers/digitalocean/resource_digitalocean_domain_test.go index 2801414ee7..a5484c1e10 100644 --- a/builtin/providers/digitalocean/resource_digitalocean_domain_test.go +++ b/builtin/providers/digitalocean/resource_digitalocean_domain_test.go @@ -5,12 +5,14 @@ import ( "testing" "github.com/digitalocean/godo" + "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" ) func TestAccDigitalOceanDomain_Basic(t *testing.T) { var domain godo.Domain + domainName := fmt.Sprintf("foobar-test-terraform-%s.com", acctest.RandString(10)) resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -18,12 +20,12 @@ func TestAccDigitalOceanDomain_Basic(t *testing.T) { CheckDestroy: testAccCheckDigitalOceanDomainDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccCheckDigitalOceanDomainConfig_basic, + Config: fmt.Sprintf(testAccCheckDigitalOceanDomainConfig_basic, domainName), Check: resource.ComposeTestCheckFunc( testAccCheckDigitalOceanDomainExists("digitalocean_domain.foobar", &domain), - testAccCheckDigitalOceanDomainAttributes(&domain), + testAccCheckDigitalOceanDomainAttributes(&domain, domainName), resource.TestCheckResourceAttr( - "digitalocean_domain.foobar", "name", "foobar-test-terraform.com"), + "digitalocean_domain.foobar", "name", domainName), resource.TestCheckResourceAttr( "digitalocean_domain.foobar", "ip_address", "192.168.0.10"), ), @@ -51,10 +53,10 @@ func testAccCheckDigitalOceanDomainDestroy(s *terraform.State) error { return nil } -func testAccCheckDigitalOceanDomainAttributes(domain *godo.Domain) resource.TestCheckFunc { +func testAccCheckDigitalOceanDomainAttributes(domain *godo.Domain, name string) resource.TestCheckFunc { return func(s *terraform.State) error { - if domain.Name != "foobar-test-terraform.com" { + if domain.Name != name { return fmt.Errorf("Bad name: %s", domain.Name) } @@ -94,6 +96,6 @@ func testAccCheckDigitalOceanDomainExists(n string, domain *godo.Domain) resourc const testAccCheckDigitalOceanDomainConfig_basic = ` resource "digitalocean_domain" "foobar" { - name = "foobar-test-terraform.com" - ip_address = "192.168.0.10" + name = "%s" + 
ip_address = "192.168.0.10" }` diff --git a/builtin/providers/digitalocean/resource_digitalocean_droplet.go b/builtin/providers/digitalocean/resource_digitalocean_droplet.go index 050577854e..4d493ccdce 100644 --- a/builtin/providers/digitalocean/resource_digitalocean_droplet.go +++ b/builtin/providers/digitalocean/resource_digitalocean_droplet.go @@ -418,7 +418,7 @@ func WaitForDropletAttribute( stateConf := &resource.StateChangeConf{ Pending: pending, - Target: target, + Target: []string{target}, Refresh: newDropletStateRefreshFunc(d, attribute, meta), Timeout: 60 * time.Minute, Delay: 10 * time.Second, diff --git a/builtin/providers/digitalocean/resource_digitalocean_droplet_test.go b/builtin/providers/digitalocean/resource_digitalocean_droplet_test.go index d3a37a82ca..3a72e3c5dc 100644 --- a/builtin/providers/digitalocean/resource_digitalocean_droplet_test.go +++ b/builtin/providers/digitalocean/resource_digitalocean_droplet_test.go @@ -293,43 +293,67 @@ func testAccCheckDigitalOceanDropletRecreated(t *testing.T, // //} -const testAccCheckDigitalOceanDropletConfig_basic = ` -resource "digitalocean_droplet" "foobar" { - name = "foo" - size = "512mb" - image = "centos-5-8-x32" - region = "nyc3" - user_data = "foobar" +var testAccCheckDigitalOceanDropletConfig_basic = fmt.Sprintf(` +resource "digitalocean_ssh_key" "foobar" { + name = "foobar" + public_key = "%s" } -` -const testAccCheckDigitalOceanDropletConfig_userdata_update = ` resource "digitalocean_droplet" "foobar" { - name = "foo" - size = "512mb" - image = "centos-5-8-x32" - region = "nyc3" - user_data = "foobar foobar" + name = "foo" + size = "512mb" + image = "centos-5-8-x32" + region = "nyc3" + user_data = "foobar" + ssh_keys = ["${digitalocean_ssh_key.foobar.id}"] } -` +`, testAccValidPublicKey) -const testAccCheckDigitalOceanDropletConfig_RenameAndResize = ` -resource "digitalocean_droplet" "foobar" { - name = "baz" - size = "1gb" - image = "centos-5-8-x32" - region = "nyc3" +var testAccCheckDigitalOceanDropletConfig_userdata_update = fmt.Sprintf(` +resource "digitalocean_ssh_key" "foobar" { + name = "foobar" + public_key = "%s" } -` + +resource "digitalocean_droplet" "foobar" { + name = "foo" + size = "512mb" + image = "centos-5-8-x32" + region = "nyc3" + user_data = "foobar foobar" + ssh_keys = ["${digitalocean_ssh_key.foobar.id}"] +} +`, testAccValidPublicKey) + +var testAccCheckDigitalOceanDropletConfig_RenameAndResize = fmt.Sprintf(` +resource "digitalocean_ssh_key" "foobar" { + name = "foobar" + public_key = "%s" +} + +resource "digitalocean_droplet" "foobar" { + name = "baz" + size = "1gb" + image = "centos-5-8-x32" + region = "nyc3" + ssh_keys = ["${digitalocean_ssh_key.foobar.id}"] +} +`, testAccValidPublicKey) // IPV6 only in singapore -const testAccCheckDigitalOceanDropletConfig_PrivateNetworkingIpv6 = ` -resource "digitalocean_droplet" "foobar" { - name = "baz" - size = "1gb" - image = "centos-5-8-x32" - region = "sgp1" - ipv6 = true - private_networking = true +var testAccCheckDigitalOceanDropletConfig_PrivateNetworkingIpv6 = fmt.Sprintf(` +resource "digitalocean_ssh_key" "foobar" { + name = "foobar" + public_key = "%s" } -` + +resource "digitalocean_droplet" "foobar" { + name = "baz" + size = "1gb" + image = "centos-5-8-x32" + region = "sgp1" + ipv6 = true + private_networking = true + ssh_keys = ["${digitalocean_ssh_key.foobar.id}"] +} +`, testAccValidPublicKey) diff --git a/builtin/providers/digitalocean/resource_digitalocean_floating_ip.go b/builtin/providers/digitalocean/resource_digitalocean_floating_ip.go 
index 03e4b07467..cdfc280385 100644 --- a/builtin/providers/digitalocean/resource_digitalocean_floating_ip.go +++ b/builtin/providers/digitalocean/resource_digitalocean_floating_ip.go @@ -13,6 +13,7 @@ import ( func resourceDigitalOceanFloatingIp() *schema.Resource { return &schema.Resource{ Create: resourceDigitalOceanFloatingIpCreate, + Update: resourceDigitalOceanFloatingIpUpdate, Read: resourceDigitalOceanFloatingIpRead, Delete: resourceDigitalOceanFloatingIpDelete, @@ -32,7 +33,6 @@ func resourceDigitalOceanFloatingIp() *schema.Resource { "droplet_id": &schema.Schema{ Type: schema.TypeInt, Optional: true, - ForceNew: true, }, }, } @@ -73,6 +73,42 @@ func resourceDigitalOceanFloatingIpCreate(d *schema.ResourceData, meta interface return resourceDigitalOceanFloatingIpRead(d, meta) } +func resourceDigitalOceanFloatingIpUpdate(d *schema.ResourceData, meta interface{}) error { + client := meta.(*godo.Client) + + if d.HasChange("droplet_id") { + if v, ok := d.GetOk("droplet_id"); ok { + log.Printf("[INFO] Assigning the Floating IP %s to the Droplet %d", d.Id(), v.(int)) + action, _, err := client.FloatingIPActions.Assign(d.Id(), v.(int)) + if err != nil { + return fmt.Errorf( + "Error Assigning FloatingIP (%s) to the droplet: %s", d.Id(), err) + } + + _, unassignedErr := waitForFloatingIPReady(d, "completed", []string{"new", "in-progress"}, "status", meta, action.ID) + if unassignedErr != nil { + return fmt.Errorf( + "Error waiting for FloatingIP (%s) to be Assigned: %s", d.Id(), unassignedErr) + } + } else { + log.Printf("[INFO] Unassigning the Floating IP %s", d.Id()) + action, _, err := client.FloatingIPActions.Unassign(d.Id()) + if err != nil { + return fmt.Errorf( + "Error Unassigning FloatingIP (%s): %s", d.Id(), err) + } + + _, unassignedErr := waitForFloatingIPReady(d, "completed", []string{"new", "in-progress"}, "status", meta, action.ID) + if unassignedErr != nil { + return fmt.Errorf( + "Error waiting for FloatingIP (%s) to be Unassigned: %s", d.Id(), unassignedErr) + } + } + } + + return resourceDigitalOceanFloatingIpRead(d, meta) +} + func resourceDigitalOceanFloatingIpRead(d *schema.ResourceData, meta interface{}) error { client := meta.(*godo.Client) @@ -82,8 +118,9 @@ func resourceDigitalOceanFloatingIpRead(d *schema.ResourceData, meta interface{} return fmt.Errorf("Error retrieving FloatingIP: %s", err) } - if _, ok := d.GetOk("droplet_id"); ok { - log.Printf("[INFO] The region of the Droplet is %s", floatingIp.Droplet.Region) + if floatingIp.Droplet != nil { + log.Printf("[INFO] A droplet was detected on the FloatingIP so setting the Region based on the Droplet") + log.Printf("[INFO] The region of the Droplet is %s", floatingIp.Droplet.Region.Slug) d.Set("region", floatingIp.Droplet.Region.Slug) } else { d.Set("region", floatingIp.Region.Slug) @@ -130,7 +167,7 @@ func waitForFloatingIPReady( stateConf := &resource.StateChangeConf{ Pending: pending, - Target: target, + Target: []string{target}, Refresh: newFloatingIPStateRefreshFunc(d, attribute, meta, actionId), Timeout: 60 * time.Minute, Delay: 10 * time.Second, diff --git a/builtin/providers/digitalocean/resource_digitalocean_floating_ip_test.go b/builtin/providers/digitalocean/resource_digitalocean_floating_ip_test.go index 8ae003a1d4..ae53e1a899 100644 --- a/builtin/providers/digitalocean/resource_digitalocean_floating_ip_test.go +++ b/builtin/providers/digitalocean/resource_digitalocean_floating_ip_test.go @@ -42,7 +42,7 @@ func TestAccDigitalOceanFloatingIP_Droplet(t *testing.T) { Check: 
resource.ComposeTestCheckFunc( testAccCheckDigitalOceanFloatingIPExists("digitalocean_floating_ip.foobar", &floatingIP), resource.TestCheckResourceAttr( - "digitalocean_floating_ip.foobar", "region", "sgp1"), + "digitalocean_floating_ip.foobar", "region", "nyc3"), ), }, }, @@ -101,21 +101,26 @@ func testAccCheckDigitalOceanFloatingIPExists(n string, floatingIP *godo.Floatin var testAccCheckDigitalOceanFloatingIPConfig_region = ` resource "digitalocean_floating_ip" "foobar" { - region = "nyc3" + region = "nyc3" }` -var testAccCheckDigitalOceanFloatingIPConfig_droplet = ` +var testAccCheckDigitalOceanFloatingIPConfig_droplet = fmt.Sprintf(` +resource "digitalocean_ssh_key" "foobar" { + name = "foobar" + public_key = "%s" +} resource "digitalocean_droplet" "foobar" { - name = "baz" - size = "1gb" - image = "centos-5-8-x32" - region = "sgp1" - ipv6 = true - private_networking = true + name = "baz" + size = "1gb" + image = "centos-5-8-x32" + region = "nyc3" + ipv6 = true + private_networking = true + ssh_keys = ["${digitalocean_ssh_key.foobar.id}"] } resource "digitalocean_floating_ip" "foobar" { - droplet_id = "${digitalocean_droplet.foobar.id}" - region = "${digitalocean_droplet.foobar.region}" -}` + droplet_id = "${digitalocean_droplet.foobar.id}" + region = "${digitalocean_droplet.foobar.region}" +}`, testAccValidPublicKey) diff --git a/builtin/providers/digitalocean/resource_digitalocean_record.go b/builtin/providers/digitalocean/resource_digitalocean_record.go index ebcb2e0f8f..5e8218c79c 100644 --- a/builtin/providers/digitalocean/resource_digitalocean_record.go +++ b/builtin/providers/digitalocean/resource_digitalocean_record.go @@ -115,11 +115,11 @@ func resourceDigitalOceanRecordRead(d *schema.ResourceData, meta interface{}) er return fmt.Errorf("invalid record ID: %v", err) } - rec, _, err := client.Domains.Record(domain, id) + rec, resp, err := client.Domains.Record(domain, id) if err != nil { // If the record is somehow already destroyed, mark as // successfully gone - if strings.Contains(err.Error(), "404 Not Found") { + if resp.StatusCode == 404 { d.SetId("") return nil } @@ -183,15 +183,15 @@ func resourceDigitalOceanRecordDelete(d *schema.ResourceData, meta interface{}) log.Printf("[INFO] Deleting record: %s, %d", domain, id) - _, err = client.Domains.DeleteRecord(domain, id) - if err != nil { + resp, delErr := client.Domains.DeleteRecord(domain, id) + if delErr != nil { // If the record is somehow already destroyed, mark as // successfully gone - if strings.Contains(err.Error(), "404 Not Found") { + if resp.StatusCode == 404 { return nil } - return fmt.Errorf("Error deleting record: %s", err) + return fmt.Errorf("Error deleting record: %s", delErr) } return nil diff --git a/builtin/providers/digitalocean/resource_digitalocean_record_test.go b/builtin/providers/digitalocean/resource_digitalocean_record_test.go index 7a4123bd60..9552e031e6 100644 --- a/builtin/providers/digitalocean/resource_digitalocean_record_test.go +++ b/builtin/providers/digitalocean/resource_digitalocean_record_test.go @@ -6,12 +6,14 @@ import ( "testing" "github.com/digitalocean/godo" + "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" ) func TestAccDigitalOceanRecord_Basic(t *testing.T) { var record godo.DomainRecord + domain := fmt.Sprintf("foobar-test-terraform-%s.com", acctest.RandString(10)) resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -19,14 +21,14 @@ func 
TestAccDigitalOceanRecord_Basic(t *testing.T) { CheckDestroy: testAccCheckDigitalOceanRecordDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccCheckDigitalOceanRecordConfig_basic, + Config: fmt.Sprintf(testAccCheckDigitalOceanRecordConfig_basic, domain), Check: resource.ComposeTestCheckFunc( testAccCheckDigitalOceanRecordExists("digitalocean_record.foobar", &record), testAccCheckDigitalOceanRecordAttributes(&record), resource.TestCheckResourceAttr( "digitalocean_record.foobar", "name", "terraform"), resource.TestCheckResourceAttr( - "digitalocean_record.foobar", "domain", "foobar-test-terraform.com"), + "digitalocean_record.foobar", "domain", domain), resource.TestCheckResourceAttr( "digitalocean_record.foobar", "value", "192.168.0.10"), ), @@ -37,6 +39,7 @@ func TestAccDigitalOceanRecord_Basic(t *testing.T) { func TestAccDigitalOceanRecord_Updated(t *testing.T) { var record godo.DomainRecord + domain := fmt.Sprintf("foobar-test-terraform-%s.com", acctest.RandString(10)) resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -44,14 +47,14 @@ func TestAccDigitalOceanRecord_Updated(t *testing.T) { CheckDestroy: testAccCheckDigitalOceanRecordDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccCheckDigitalOceanRecordConfig_basic, + Config: fmt.Sprintf(testAccCheckDigitalOceanRecordConfig_basic, domain), Check: resource.ComposeTestCheckFunc( testAccCheckDigitalOceanRecordExists("digitalocean_record.foobar", &record), testAccCheckDigitalOceanRecordAttributes(&record), resource.TestCheckResourceAttr( "digitalocean_record.foobar", "name", "terraform"), resource.TestCheckResourceAttr( - "digitalocean_record.foobar", "domain", "foobar-test-terraform.com"), + "digitalocean_record.foobar", "domain", domain), resource.TestCheckResourceAttr( "digitalocean_record.foobar", "value", "192.168.0.10"), resource.TestCheckResourceAttr( @@ -59,14 +62,15 @@ func TestAccDigitalOceanRecord_Updated(t *testing.T) { ), }, resource.TestStep{ - Config: testAccCheckDigitalOceanRecordConfig_new_value, + Config: fmt.Sprintf( + testAccCheckDigitalOceanRecordConfig_new_value, domain), Check: resource.ComposeTestCheckFunc( testAccCheckDigitalOceanRecordExists("digitalocean_record.foobar", &record), testAccCheckDigitalOceanRecordAttributesUpdated(&record), resource.TestCheckResourceAttr( "digitalocean_record.foobar", "name", "terraform"), resource.TestCheckResourceAttr( - "digitalocean_record.foobar", "domain", "foobar-test-terraform.com"), + "digitalocean_record.foobar", "domain", domain), resource.TestCheckResourceAttr( "digitalocean_record.foobar", "value", "192.168.0.11"), resource.TestCheckResourceAttr( @@ -79,6 +83,7 @@ func TestAccDigitalOceanRecord_Updated(t *testing.T) { func TestAccDigitalOceanRecord_HostnameValue(t *testing.T) { var record godo.DomainRecord + domain := fmt.Sprintf("foobar-test-terraform-%s.com", acctest.RandString(10)) resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -86,14 +91,15 @@ func TestAccDigitalOceanRecord_HostnameValue(t *testing.T) { CheckDestroy: testAccCheckDigitalOceanRecordDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccCheckDigitalOceanRecordConfig_cname, + Config: fmt.Sprintf( + testAccCheckDigitalOceanRecordConfig_cname, domain), Check: resource.ComposeTestCheckFunc( testAccCheckDigitalOceanRecordExists("digitalocean_record.foobar", &record), - testAccCheckDigitalOceanRecordAttributesHostname("a", &record), + 
testAccCheckDigitalOceanRecordAttributesHostname("a.foobar-test-terraform.com", &record), resource.TestCheckResourceAttr( "digitalocean_record.foobar", "name", "terraform"), resource.TestCheckResourceAttr( - "digitalocean_record.foobar", "domain", "foobar-test-terraform.com"), + "digitalocean_record.foobar", "domain", domain), resource.TestCheckResourceAttr( "digitalocean_record.foobar", "value", "a.foobar-test-terraform.com."), resource.TestCheckResourceAttr( @@ -106,6 +112,7 @@ func TestAccDigitalOceanRecord_HostnameValue(t *testing.T) { func TestAccDigitalOceanRecord_ExternalHostnameValue(t *testing.T) { var record godo.DomainRecord + domain := fmt.Sprintf("foobar-test-terraform-%s.com", acctest.RandString(10)) resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -113,14 +120,15 @@ func TestAccDigitalOceanRecord_ExternalHostnameValue(t *testing.T) { CheckDestroy: testAccCheckDigitalOceanRecordDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccCheckDigitalOceanRecordConfig_external_cname, + Config: fmt.Sprintf( + testAccCheckDigitalOceanRecordConfig_external_cname, domain), Check: resource.ComposeTestCheckFunc( testAccCheckDigitalOceanRecordExists("digitalocean_record.foobar", &record), testAccCheckDigitalOceanRecordAttributesHostname("a.foobar-test-terraform.net", &record), resource.TestCheckResourceAttr( "digitalocean_record.foobar", "name", "terraform"), resource.TestCheckResourceAttr( - "digitalocean_record.foobar", "domain", "foobar-test-terraform.com"), + "digitalocean_record.foobar", "domain", domain), resource.TestCheckResourceAttr( "digitalocean_record.foobar", "value", "a.foobar-test-terraform.net."), resource.TestCheckResourceAttr( @@ -225,70 +233,56 @@ func testAccCheckDigitalOceanRecordAttributesHostname(data string, record *godo. const testAccCheckDigitalOceanRecordConfig_basic = ` resource "digitalocean_domain" "foobar" { - name = "foobar-test-terraform.com" - ip_address = "192.168.0.10" + name = "%s" + ip_address = "192.168.0.10" } resource "digitalocean_record" "foobar" { - domain = "${digitalocean_domain.foobar.name}" + domain = "${digitalocean_domain.foobar.name}" - name = "terraform" - value = "192.168.0.10" - type = "A" + name = "terraform" + value = "192.168.0.10" + type = "A" }` const testAccCheckDigitalOceanRecordConfig_new_value = ` resource "digitalocean_domain" "foobar" { - name = "foobar-test-terraform.com" - ip_address = "192.168.0.10" + name = "%s" + ip_address = "192.168.0.10" } resource "digitalocean_record" "foobar" { - domain = "${digitalocean_domain.foobar.name}" + domain = "${digitalocean_domain.foobar.name}" - name = "terraform" - value = "192.168.0.11" - type = "A" + name = "terraform" + value = "192.168.0.11" + type = "A" }` const testAccCheckDigitalOceanRecordConfig_cname = ` resource "digitalocean_domain" "foobar" { - name = "foobar-test-terraform.com" - ip_address = "192.168.0.10" + name = "%s" + ip_address = "192.168.0.10" } resource "digitalocean_record" "foobar" { - domain = "${digitalocean_domain.foobar.name}" + domain = "${digitalocean_domain.foobar.name}" - name = "terraform" - value = "a.foobar-test-terraform.com." 
- type = "CNAME" -}` - -const testAccCheckDigitalOceanRecordConfig_relative_cname = ` -resource "digitalocean_domain" "foobar" { - name = "foobar-test-terraform.com" - ip_address = "192.168.0.10" -} - -resource "digitalocean_record" "foobar" { - domain = "${digitalocean_domain.foobar.name}" - - name = "terraform" - value = "a.b" - type = "CNAME" + name = "terraform" + value = "a.foobar-test-terraform.com." + type = "CNAME" }` const testAccCheckDigitalOceanRecordConfig_external_cname = ` resource "digitalocean_domain" "foobar" { - name = "foobar-test-terraform.com" - ip_address = "192.168.0.10" + name = "%s" + ip_address = "192.168.0.10" } resource "digitalocean_record" "foobar" { - domain = "${digitalocean_domain.foobar.name}" + domain = "${digitalocean_domain.foobar.name}" - name = "terraform" - value = "a.foobar-test-terraform.net." - type = "CNAME" + name = "terraform" + value = "a.foobar-test-terraform.net." + type = "CNAME" }` diff --git a/builtin/providers/digitalocean/resource_digitalocean_ssh_key.go b/builtin/providers/digitalocean/resource_digitalocean_ssh_key.go index d6eb96f09f..79614f5999 100644 --- a/builtin/providers/digitalocean/resource_digitalocean_ssh_key.go +++ b/builtin/providers/digitalocean/resource_digitalocean_ssh_key.go @@ -4,7 +4,6 @@ import ( "fmt" "log" "strconv" - "strings" "github.com/digitalocean/godo" "github.com/hashicorp/terraform/helper/schema" @@ -71,11 +70,11 @@ func resourceDigitalOceanSSHKeyRead(d *schema.ResourceData, meta interface{}) er return fmt.Errorf("invalid SSH key id: %v", err) } - key, _, err := client.Keys.GetByID(id) + key, resp, err := client.Keys.GetByID(id) if err != nil { // If the key is somehow already destroyed, mark as // successfully gone - if strings.Contains(err.Error(), "404 Not Found") { + if resp != nil && resp.StatusCode == 404 { d.SetId("") return nil } diff --git a/builtin/providers/dme/resource_dme_record.go b/builtin/providers/dme/resource_dme_record.go index 8e078e4204..4578dd5f9d 100644 --- a/builtin/providers/dme/resource_dme_record.go +++ b/builtin/providers/dme/resource_dme_record.go @@ -74,6 +74,10 @@ func resourceDMERecord() *schema.Resource { Type: schema.TypeString, Optional: true, }, + "gtdLocation": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, }, } } @@ -168,6 +172,9 @@ func getAll(d *schema.ResourceData, cr map[string]interface{}) error { if attr, ok := d.GetOk("value"); ok { cr["value"] = attr.(string) } + if attr, ok := d.GetOk("gtdLocation"); ok { + cr["gtdLocation"] = attr.(string) + } switch strings.ToUpper(d.Get("type").(string)) { case "A", "CNAME", "ANAME", "TXT", "SPF", "NS", "PTR", "AAAA": @@ -213,6 +220,10 @@ func setAll(d *schema.ResourceData, rec *dnsmadeeasy.Record) error { d.Set("name", rec.Name) d.Set("ttl", rec.TTL) d.Set("value", rec.Value) + // Only set gtdLocation when the API returns one, since the attribute is optional.
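+ // (d.Set with an empty string would still write the attribute into state, so it is skipped instead.)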
+ if rec.GtdLocation != "" { + d.Set("gtdLocation", rec.GtdLocation) + } switch rec.Type { case "A", "CNAME", "ANAME", "TXT", "SPF", "NS", "PTR": diff --git a/builtin/providers/dme/resource_dme_record_test.go b/builtin/providers/dme/resource_dme_record_test.go index 430afacb28..f1b79292a0 100644 --- a/builtin/providers/dme/resource_dme_record_test.go +++ b/builtin/providers/dme/resource_dme_record_test.go @@ -36,6 +36,8 @@ func TestAccDMERecord_basic(t *testing.T) { "dme_record.test", "value", "1.1.1.1"), resource.TestCheckResourceAttr( "dme_record.test", "ttl", "2000"), + resource.TestCheckResourceAttr( + "dme_record.test", "gtdLocation", "DEFAULT"), ), }, }, @@ -65,6 +67,8 @@ func TestAccDMERecordCName(t *testing.T) { "dme_record.test", "value", "foo"), resource.TestCheckResourceAttr( "dme_record.test", "ttl", "2000"), + resource.TestCheckResourceAttr( + "dme_record.test", "gtdLocation", "DEFAULT"), ), }, }, @@ -131,6 +135,8 @@ func TestAccDMERecordMX(t *testing.T) { "dme_record.test", "mxLevel", "10"), resource.TestCheckResourceAttr( "dme_record.test", "ttl", "2000"), + resource.TestCheckResourceAttr( + "dme_record.test", "gtdLocation", "DEFAULT"), ), }, }, @@ -172,6 +178,8 @@ func TestAccDMERecordHTTPRED(t *testing.T) { resource.TestCheckResourceAttr( "dme_record.test", "ttl", "2000"), + resource.TestCheckResourceAttr( + "dme_record.test", "gtdLocation", "DEFAULT"), ), }, }, @@ -201,6 +209,8 @@ func TestAccDMERecordTXT(t *testing.T) { "dme_record.test", "value", "\"foo\""), resource.TestCheckResourceAttr( "dme_record.test", "ttl", "2000"), + resource.TestCheckResourceAttr( + "dme_record.test", "gtdLocation", "DEFAULT"), ), }, }, @@ -230,6 +240,8 @@ func TestAccDMERecordSPF(t *testing.T) { "dme_record.test", "value", "\"foo\""), resource.TestCheckResourceAttr( "dme_record.test", "ttl", "2000"), + resource.TestCheckResourceAttr( + "dme_record.test", "gtdLocation", "DEFAULT"), ), }, }, @@ -259,6 +271,8 @@ func TestAccDMERecordPTR(t *testing.T) { "dme_record.test", "value", "foo"), resource.TestCheckResourceAttr( "dme_record.test", "ttl", "2000"), + resource.TestCheckResourceAttr( + "dme_record.test", "gtdLocation", "DEFAULT"), ), }, }, @@ -288,6 +302,8 @@ func TestAccDMERecordNS(t *testing.T) { "dme_record.test", "value", "foo"), resource.TestCheckResourceAttr( "dme_record.test", "ttl", "2000"), + resource.TestCheckResourceAttr( + "dme_record.test", "gtdLocation", "DEFAULT"), ), }, }, @@ -317,6 +333,8 @@ func TestAccDMERecordAAAA(t *testing.T) { "dme_record.test", "value", "fe80::0202:b3ff:fe1e:8329"), resource.TestCheckResourceAttr( "dme_record.test", "ttl", "2000"), + resource.TestCheckResourceAttr( + "dme_record.test", "gtdLocation", "DEFAULT"), ), }, }, @@ -352,6 +370,8 @@ func TestAccDMERecordSRV(t *testing.T) { "dme_record.test", "port", "30"), resource.TestCheckResourceAttr( "dme_record.test", "ttl", "2000"), + resource.TestCheckResourceAttr( + "dme_record.test", "gtdLocation", "DEFAULT"), ), }, }, @@ -413,6 +433,7 @@ resource "dme_record" "test" { type = "A" value = "1.1.1.1" ttl = 2000 + gtdLocation = "DEFAULT" }` const testDMERecordConfigCName = ` @@ -422,6 +443,7 @@ resource "dme_record" "test" { type = "CNAME" value = "foo" ttl = 2000 + gtdLocation = "DEFAULT" }` const testDMERecordConfigAName = ` @@ -431,6 +453,7 @@ resource "dme_record" "test" { type = "ANAME" value = "foo" ttl = 2000 + gtdLocation = "DEFAULT" }` const testDMERecordConfigMX = ` @@ -441,6 +464,7 @@ resource "dme_record" "test" { value = "foo" mxLevel = 10 ttl = 2000 + gtdLocation = "DEFAULT" }` const 
testDMERecordConfigHTTPRED = ` @@ -455,6 +479,7 @@ resource "dme_record" "test" { keywords = "terraform example" description = "This is a description" ttl = 2000 + gtdLocation = "DEFAULT" }` const testDMERecordConfigTXT = ` @@ -464,6 +489,7 @@ resource "dme_record" "test" { type = "TXT" value = "foo" ttl = 2000 + gtdLocation = "DEFAULT" }` const testDMERecordConfigSPF = ` @@ -473,6 +499,7 @@ resource "dme_record" "test" { type = "SPF" value = "foo" ttl = 2000 + gtdLocation = "DEFAULT" }` const testDMERecordConfigPTR = ` @@ -482,6 +509,7 @@ resource "dme_record" "test" { type = "PTR" value = "foo" ttl = 2000 + gtdLocation = "DEFAULT" }` const testDMERecordConfigNS = ` @@ -491,6 +519,7 @@ resource "dme_record" "test" { type = "NS" value = "foo" ttl = 2000 + gtdLocation = "DEFAULT" }` const testDMERecordConfigAAAA = ` @@ -500,6 +529,7 @@ resource "dme_record" "test" { type = "AAAA" value = "FE80::0202:B3FF:FE1E:8329" ttl = 2000 + gtdLocation = "DEFAULT" }` const testDMERecordConfigSRV = ` @@ -512,4 +542,5 @@ resource "dme_record" "test" { weight = 20 port = 30 ttl = 2000 + gtdLocation = "DEFAULT" }` diff --git a/builtin/providers/docker/provider.go b/builtin/providers/docker/provider.go index fdc8b77194..b842d6b62f 100644 --- a/builtin/providers/docker/provider.go +++ b/builtin/providers/docker/provider.go @@ -28,6 +28,8 @@ func Provider() terraform.ResourceProvider { ResourcesMap: map[string]*schema.Resource{ "docker_container": resourceDockerContainer(), "docker_image": resourceDockerImage(), + "docker_network": resourceDockerNetwork(), + "docker_volume": resourceDockerVolume(), }, ConfigureFunc: providerConfigure, diff --git a/builtin/providers/docker/resource_docker_container.go b/builtin/providers/docker/resource_docker_container.go index 59e65b9c16..3cff902a71 100644 --- a/builtin/providers/docker/resource_docker_container.go +++ b/builtin/providers/docker/resource_docker_container.go @@ -4,6 +4,8 @@ import ( "bytes" "fmt" + "regexp" + "github.com/hashicorp/terraform/helper/hashcode" "github.com/hashicorp/terraform/helper/schema" ) @@ -71,6 +73,13 @@ func resourceDockerContainer() *schema.Resource { Elem: &schema.Schema{Type: schema.TypeString}, }, + "entrypoint": &schema.Schema{ + Type: schema.TypeList, + Optional: true, + ForceNew: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "dns": &schema.Schema{ Type: schema.TypeSet, Optional: true, @@ -85,20 +94,130 @@ func resourceDockerContainer() *schema.Resource { ForceNew: true, }, + "restart": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Default: "no", + ValidateFunc: func(v interface{}, k string) (ws []string, es []error) { + value := v.(string) + if !regexp.MustCompile(`^(no|on-failure|always)$`).MatchString(value) { + es = append(es, fmt.Errorf( + "%q must be one of \"no\", \"on-failure\", or \"always\"", k)) + } + return + }, + }, + + "max_retry_count": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + ForceNew: true, + }, + "volumes": &schema.Schema{ Type: schema.TypeSet, Optional: true, ForceNew: true, - Elem: getVolumesElem(), - Set: resourceDockerVolumesHash, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "from_container": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + + "container_path": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + + "host_path": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ValidateFunc: func(v interface{}, k string) (ws 
[]string, es []error) { + value := v.(string) + if !regexp.MustCompile(`^/`).MatchString(value) { + es = append(es, fmt.Errorf( + "%q must be an absolute path", k)) + } + return + }, + }, + + "volume_name": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + + "read_only": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + ForceNew: true, + }, + }, + }, + Set: resourceDockerVolumesHash, }, "ports": &schema.Schema{ Type: schema.TypeSet, Optional: true, ForceNew: true, - Elem: getPortsElem(), - Set: resourceDockerPortsHash, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "internal": &schema.Schema{ + Type: schema.TypeInt, + Required: true, + ForceNew: true, + }, + + "external": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + ForceNew: true, + }, + + "ip": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + + "protocol": &schema.Schema{ + Type: schema.TypeString, + Default: "tcp", + Optional: true, + ForceNew: true, + }, + }, + }, + Set: resourceDockerPortsHash, + }, + + "host": &schema.Schema{ + Type: schema.TypeSet, + Optional: true, + ForceNew: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "ip": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + + "host": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + }, + }, + Set: resourceDockerHostsHash, }, "env": &schema.Schema{ @@ -142,66 +261,85 @@ func resourceDockerContainer() *schema.Resource { Optional: true, ForceNew: true, }, - }, - } -} -func getVolumesElem() *schema.Resource { - return &schema.Resource{ - Schema: map[string]*schema.Schema{ - "from_container": &schema.Schema{ - Type: schema.TypeString, + "labels": &schema.Schema{ + Type: schema.TypeMap, Optional: true, ForceNew: true, }, - "container_path": &schema.Schema{ - Type: schema.TypeString, - Optional: true, - ForceNew: true, - }, - - "host_path": &schema.Schema{ - Type: schema.TypeString, - Optional: true, - ForceNew: true, - }, - - "read_only": &schema.Schema{ - Type: schema.TypeBool, - Optional: true, - ForceNew: true, - }, - }, - } -} - -func getPortsElem() *schema.Resource { - return &schema.Resource{ - Schema: map[string]*schema.Schema{ - "internal": &schema.Schema{ - Type: schema.TypeInt, - Required: true, - ForceNew: true, - }, - - "external": &schema.Schema{ + "memory": &schema.Schema{ Type: schema.TypeInt, Optional: true, ForceNew: true, + ValidateFunc: func(v interface{}, k string) (ws []string, es []error) { + value := v.(int) + if value < 0 { + es = append(es, fmt.Errorf("%q must be greater than or equal to 0", k)) + } + return + }, }, - "ip": &schema.Schema{ + "memory_swap": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + ForceNew: true, + ValidateFunc: func(v interface{}, k string) (ws []string, es []error) { + value := v.(int) + if value < -1 { + es = append(es, fmt.Errorf("%q must be greater than or equal to -1", k)) + } + return + }, + }, + + "cpu_shares": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + ForceNew: true, + ValidateFunc: func(v interface{}, k string) (ws []string, es []error) { + value := v.(int) + if value < 0 { + es = append(es, fmt.Errorf("%q must be greater than or equal to 0", k)) + } + return + }, + }, + + "log_driver": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Default: "json-file", + ValidateFunc: func(v interface{}, k string) (ws []string, es []error) { + value := v.(string) + if 
!regexp.MustCompile(`^(json-file|syslog|journald|gelf|fluentd)$`).MatchString(value) { + es = append(es, fmt.Errorf( + "%q must be one of \"json-file\", \"syslog\", \"journald\", \"gelf\", or \"fluentd\"", k)) + } + return + }, + }, + + "log_opts": &schema.Schema{ + Type: schema.TypeMap, + Optional: true, + ForceNew: true, + }, + + "network_mode": &schema.Schema{ Type: schema.TypeString, Optional: true, ForceNew: true, }, - "protocol": &schema.Schema{ - Type: schema.TypeString, - Default: "tcp", + "networks": &schema.Schema{ + Type: schema.TypeSet, Optional: true, ForceNew: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: stringSetHash, }, }, } @@ -228,6 +366,21 @@ func resourceDockerPortsHash(v interface{}) int { return hashcode.String(buf.String()) } +func resourceDockerHostsHash(v interface{}) int { + var buf bytes.Buffer + m := v.(map[string]interface{}) + + if v, ok := m["ip"]; ok { + buf.WriteString(fmt.Sprintf("%v-", v.(string))) + } + + if v, ok := m["host"]; ok { + buf.WriteString(fmt.Sprintf("%v-", v.(string))) + } + + return hashcode.String(buf.String()) +} + func resourceDockerVolumesHash(v interface{}) int { var buf bytes.Buffer m := v.(map[string]interface{}) @@ -244,6 +397,10 @@ func resourceDockerVolumesHash(v interface{}) int { buf.WriteString(fmt.Sprintf("%v-", v.(string))) } + if v, ok := m["volume_name"]; ok { + buf.WriteString(fmt.Sprintf("%v-", v.(string))) + } + if v, ok := m["read_only"]; ok { buf.WriteString(fmt.Sprintf("%v-", v.(bool))) } diff --git a/builtin/providers/docker/resource_docker_container_funcs.go b/builtin/providers/docker/resource_docker_container_funcs.go index aa74a4e1d8..39d2c09f06 100644 --- a/builtin/providers/docker/resource_docker_container_funcs.go +++ b/builtin/providers/docker/resource_docker_container_funcs.go @@ -4,7 +4,6 @@ import ( "errors" "fmt" "strconv" - "strings" "time" dc "github.com/fsouza/go-dockerclient" @@ -54,6 +53,10 @@ func resourceDockerContainerCreate(d *schema.ResourceData, meta interface{}) err createOpts.Config.Cmd = stringListToStringSlice(v.([]interface{})) } + if v, ok := d.GetOk("entrypoint"); ok { + createOpts.Config.Entrypoint = stringListToStringSlice(v.([]interface{})) + } + exposedPorts := map[dc.Port]struct{}{} portBindings := map[dc.Port][]dc.PortBinding{} @@ -64,6 +67,11 @@ func resourceDockerContainerCreate(d *schema.ResourceData, meta interface{}) err createOpts.Config.ExposedPorts = exposedPorts } + extraHosts := []string{} + if v, ok := d.GetOk("host"); ok { + extraHosts = extraHostsSetToDockerExtraHosts(v.(*schema.Set)) + } + volumes := map[string]struct{}{} binds := []string{} volumesFrom := []string{} @@ -78,25 +86,28 @@ func resourceDockerContainerCreate(d *schema.ResourceData, meta interface{}) err createOpts.Config.Volumes = volumes } - var retContainer *dc.Container - if retContainer, err = client.CreateContainer(createOpts); err != nil { - return fmt.Errorf("Unable to create container: %s", err) + if v, ok := d.GetOk("labels"); ok { + createOpts.Config.Labels = mapTypeMapValsToString(v.(map[string]interface{})) } - if retContainer == nil { - return fmt.Errorf("Returned container is nil") - } - - d.SetId(retContainer.ID) hostConfig := &dc.HostConfig{ Privileged: d.Get("privileged").(bool), PublishAllPorts: d.Get("publish_all_ports").(bool), + RestartPolicy: dc.RestartPolicy{ + Name: d.Get("restart").(string), + MaximumRetryCount: d.Get("max_retry_count").(int), + }, + LogConfig: dc.LogConfig{ + Type: d.Get("log_driver").(string), + }, } if len(portBindings) != 0 { 
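// Copy the port bindings assembled from the "ports" blocks onto the container's host config.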
hostConfig.PortBindings = portBindings } - + if len(extraHosts) != 0 { + hostConfig.ExtraHosts = extraHosts + } if len(binds) != 0 { hostConfig.Binds = binds } @@ -112,6 +123,50 @@ func resourceDockerContainerCreate(d *schema.ResourceData, meta interface{}) err hostConfig.Links = stringSetToStringSlice(v.(*schema.Set)) } + if v, ok := d.GetOk("memory"); ok { + hostConfig.Memory = int64(v.(int)) * 1024 * 1024 + } + + if v, ok := d.GetOk("memory_swap"); ok { + swap := int64(v.(int)) + if swap > 0 { + swap = swap * 1024 * 1024 + } + hostConfig.MemorySwap = swap + } + + if v, ok := d.GetOk("cpu_shares"); ok { + hostConfig.CPUShares = int64(v.(int)) + } + + if v, ok := d.GetOk("log_opts"); ok { + hostConfig.LogConfig.Config = mapTypeMapValsToString(v.(map[string]interface{})) + } + + if v, ok := d.GetOk("network_mode"); ok { + hostConfig.NetworkMode = v.(string) + } + + createOpts.HostConfig = hostConfig + + var retContainer *dc.Container + if retContainer, err = client.CreateContainer(createOpts); err != nil { + return fmt.Errorf("Unable to create container: %s", err) + } + if retContainer == nil { + return fmt.Errorf("Returned container is nil") + } + + d.SetId(retContainer.ID) + + if v, ok := d.GetOk("networks"); ok { + connectionOpts := dc.NetworkConnectionOptions{Container: retContainer.ID} + + for _, network := range v.(*schema.Set).List() { + client.ConnectNetwork(network.(string), connectionOpts) + } + } + creationTime = time.Now() if err := client.StartContainer(retContainer.ID, hostConfig); err != nil { return fmt.Errorf("Unable to start container: %s", err) @@ -123,7 +178,7 @@ func resourceDockerContainerCreate(d *schema.ResourceData, meta interface{}) err func resourceDockerContainerRead(d *schema.ResourceData, meta interface{}) error { client := meta.(*dc.Client) - apiContainer, err := fetchDockerContainer(d.Get("name").(string), client) + apiContainer, err := fetchDockerContainer(d.Id(), client) if err != nil { return err } @@ -223,7 +278,15 @@ func stringSetToStringSlice(stringSet *schema.Set) []string { return ret } -func fetchDockerContainer(name string, client *dc.Client) (*dc.APIContainers, error) { +func mapTypeMapValsToString(typeMap map[string]interface{}) map[string]string { + mapped := make(map[string]string, len(typeMap)) + for k, v := range typeMap { + mapped[k] = v.(string) + } + return mapped +} + +func fetchDockerContainer(ID string, client *dc.Client) (*dc.APIContainers, error) { apiContainers, err := client.ListContainers(dc.ListContainersOptions{All: true}) if err != nil { @@ -231,20 +294,8 @@ func fetchDockerContainer(name string, client *dc.Client) (*dc.APIContainers, er } for _, apiContainer := range apiContainers { - // Sometimes the Docker API prefixes container names with / - // like it does in these commands. But if there's no - // set name, it just uses the ID without a /...ugh. 
- switch len(apiContainer.Names) { - case 0: - if apiContainer.ID == name { - return &apiContainer, nil - } - default: - for _, containerName := range apiContainer.Names { - if strings.TrimLeft(containerName, "/") == name { - return &apiContainer, nil - } - } + if apiContainer.ID == ID { + return &apiContainer, nil } } @@ -280,6 +331,19 @@ func portSetToDockerPorts(ports *schema.Set) (map[dc.Port]struct{}, map[dc.Port] return retExposedPorts, retPortBindings } +func extraHostsSetToDockerExtraHosts(extraHosts *schema.Set) []string { + retExtraHosts := []string{} + + for _, hostInt := range extraHosts.List() { + host := hostInt.(map[string]interface{}) + ip := host["ip"].(string) + hostname := host["host"].(string) + retExtraHosts = append(retExtraHosts, hostname+":"+ip) + } + + return retExtraHosts +} + func volumeSetToDockerVolumes(volumes *schema.Set) (map[string]struct{}, []string, []string, error) { retVolumeMap := map[string]struct{}{} retHostConfigBinds := []string{} @@ -289,7 +353,10 @@ func volumeSetToDockerVolumes(volumes *schema.Set) (map[string]struct{}, []strin volume := volumeInt.(map[string]interface{}) fromContainer := volume["from_container"].(string) containerPath := volume["container_path"].(string) - hostPath := volume["host_path"].(string) + volumeName := volume["volume_name"].(string) + if len(volumeName) == 0 { + volumeName = volume["host_path"].(string) + } readOnly := volume["read_only"].(bool) switch { @@ -299,13 +366,13 @@ func volumeSetToDockerVolumes(volumes *schema.Set) (map[string]struct{}, []strin return retVolumeMap, retHostConfigBinds, retVolumeFromContainers, errors.New("Both a container and a path specified in a volume entry") case len(fromContainer) != 0: retVolumeFromContainers = append(retVolumeFromContainers, fromContainer) - case len(hostPath) != 0: + case len(volumeName) != 0: readWrite := "rw" if readOnly { readWrite = "ro" } retVolumeMap[containerPath] = struct{}{} - retHostConfigBinds = append(retHostConfigBinds, hostPath+":"+containerPath+":"+readWrite) + retHostConfigBinds = append(retHostConfigBinds, volumeName+":"+containerPath+":"+readWrite) default: retVolumeMap[containerPath] = struct{}{} } diff --git a/builtin/providers/docker/resource_docker_container_test.go b/builtin/providers/docker/resource_docker_container_test.go index 29ecc4bb3f..8536c78cea 100644 --- a/builtin/providers/docker/resource_docker_container_test.go +++ b/builtin/providers/docker/resource_docker_container_test.go @@ -10,6 +10,7 @@ import ( ) func TestAccDockerContainer_basic(t *testing.T) { + var c dc.Container resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, @@ -17,14 +18,133 @@ func TestAccDockerContainer_basic(t *testing.T) { resource.TestStep{ Config: testAccDockerContainerConfig, Check: resource.ComposeTestCheckFunc( - testAccContainerRunning("docker_container.foo"), + testAccContainerRunning("docker_container.foo", &c), ), }, }, }) } -func testAccContainerRunning(n string) resource.TestCheckFunc { +func TestAccDockerContainer_volume(t *testing.T) { + var c dc.Container + + testCheck := func(*terraform.State) error { + if len(c.Mounts) != 2 { + return fmt.Errorf("Incorrect number of mounts: expected 2, got %d", len(c.Mounts)) + } + + for _, v := range c.Mounts { + if v.Name != "testAccDockerContainerVolume_volume" { + continue + } + + if v.Destination != "/tmp/volume" { + return fmt.Errorf("Bad destination on mount: expected /tmp/volume, got %q", v.Destination) + } + + if v.Mode != "rw" { + return 
fmt.Errorf("Bad mode on mount: expected rw, got %q", v.Mode) + } + + return nil + } + + return fmt.Errorf("Mount for testAccDockerContainerVolume_volume not found") + } + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccDockerContainerVolumeConfig, + Check: resource.ComposeTestCheckFunc( + testAccContainerRunning("docker_container.foo", &c), + testCheck, + ), + }, + }, + }) +} + +func TestAccDockerContainer_customized(t *testing.T) { + var c dc.Container + + testCheck := func(*terraform.State) error { + if len(c.Config.Entrypoint) < 3 || + (c.Config.Entrypoint[0] != "/bin/bash" && + c.Config.Entrypoint[1] != "-c" && + c.Config.Entrypoint[2] != "ping localhost") { + return fmt.Errorf("Container wrong entrypoint: %s", c.Config.Entrypoint) + } + + if c.HostConfig.RestartPolicy.Name == "on-failure" { + if c.HostConfig.RestartPolicy.MaximumRetryCount != 5 { + return fmt.Errorf("Container has wrong restart policy max retry count: %d", c.HostConfig.RestartPolicy.MaximumRetryCount) + } + } else { + return fmt.Errorf("Container has wrong restart policy: %s", c.HostConfig.RestartPolicy.Name) + } + + if c.HostConfig.Memory != (512 * 1024 * 1024) { + return fmt.Errorf("Container has wrong memory setting: %d", c.HostConfig.Memory) + } + + if c.HostConfig.MemorySwap != (2048 * 1024 * 1024) { + return fmt.Errorf("Container has wrong memory swap setting: %d", c.HostConfig.MemorySwap) + } + + if c.HostConfig.CPUShares != 32 { + return fmt.Errorf("Container has wrong cpu shares setting: %d", c.HostConfig.CPUShares) + } + + if c.Config.Labels["env"] != "prod" || c.Config.Labels["role"] != "test" { + return fmt.Errorf("Container does not have the correct labels") + } + + if c.HostConfig.LogConfig.Type != "json-file" { + return fmt.Errorf("Container does not have the correct log config: %s", c.HostConfig.LogConfig.Type) + } + + if c.HostConfig.LogConfig.Config["max-size"] != "10m" { + return fmt.Errorf("Container does not have the correct max-size log option: %v", c.HostConfig.LogConfig.Config["max-size"]) + } + + if c.HostConfig.LogConfig.Config["max-file"] != "20" { + return fmt.Errorf("Container does not have the correct max-file log option: %v", c.HostConfig.LogConfig.Config["max-file"]) + } + + if len(c.HostConfig.ExtraHosts) != 2 { + return fmt.Errorf("Container does not have correct number of extra host entries, got %d", len(c.HostConfig.ExtraHosts)) + } + + if c.HostConfig.ExtraHosts[0] != "testhost2:10.0.2.0" { + return fmt.Errorf("Container has incorrect extra host string: %q", c.HostConfig.ExtraHosts[0]) + } + + if c.HostConfig.ExtraHosts[1] != "testhost:10.0.1.0" { + return fmt.Errorf("Container has incorrect extra host string: %q", c.HostConfig.ExtraHosts[1]) + } + + return nil + } + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccDockerContainerCustomizedConfig, + Check: resource.ComposeTestCheckFunc( + testAccContainerRunning("docker_container.foo", &c), + testCheck, + ), + }, + }, + }) +} + +func testAccContainerRunning(n string, container *dc.Container) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { @@ -43,6 +163,11 @@ func testAccContainerRunning(n string) resource.TestCheckFunc { for _, c := range containers { if c.ID == rs.Primary.ID { + inspected, err := 
client.InspectContainer(c.ID) + if err != nil { + return fmt.Errorf("Container could not be inspected: %s", err) + } + *container = *inspected return nil } } @@ -61,3 +186,61 @@ resource "docker_container" "foo" { image = "${docker_image.foo.latest}" } ` + +const testAccDockerContainerVolumeConfig = ` +resource "docker_image" "foo" { + name = "nginx:latest" +} + +resource "docker_volume" "foo" { + name = "testAccDockerContainerVolume_volume" +} + +resource "docker_container" "foo" { + name = "tf-test" + image = "${docker_image.foo.latest}" + + volumes { + volume_name = "${docker_volume.foo.name}" + container_path = "/tmp/volume" + read_only = false + } +} +` + +const testAccDockerContainerCustomizedConfig = ` +resource "docker_image" "foo" { + name = "nginx:latest" +} + +resource "docker_container" "foo" { + name = "tf-test" + image = "${docker_image.foo.latest}" + entrypoint = ["/bin/bash", "-c", "ping localhost"] + restart = "on-failure" + max_retry_count = 5 + memory = 512 + memory_swap = 2048 + cpu_shares = 32 + labels { + env = "prod" + role = "test" + } + log_driver = "json-file" + log_opts = { + max-size = "10m" + max-file = 20 + } + network_mode = "bridge" + + host { + host = "testhost" + ip = "10.0.1.0" + } + + host { + host = "testhost2" + ip = "10.0.2.0" + } +} +` diff --git a/builtin/providers/docker/resource_docker_image_test.go b/builtin/providers/docker/resource_docker_image_test.go index b902749d7c..81c5087420 100644 --- a/builtin/providers/docker/resource_docker_image_test.go +++ b/builtin/providers/docker/resource_docker_image_test.go @@ -1,6 +1,7 @@ package docker import ( + "regexp" "testing" "github.com/hashicorp/terraform/helper/resource" @@ -14,17 +15,14 @@ func TestAccDockerImage_basic(t *testing.T) { resource.TestStep{ Config: testAccDockerImageConfig, Check: resource.ComposeTestCheckFunc( - resource.TestCheckResourceAttr( - "docker_image.foo", - "latest", - "d52aff8195301dba95e8e3d14f0c3738a874237afd54233d250a2fc4489bfa83"), + resource.TestMatchResourceAttr("docker_image.foo", "latest", regexp.MustCompile(`\A[a-f0-9]{64}\z`)), ), }, }, }) } -func TestAddDockerImage_private(t *testing.T) { +func TestAccDockerImage_private(t *testing.T) { resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, @@ -32,10 +30,7 @@ func TestAddDockerImage_private(t *testing.T) { resource.TestStep{ Config: testAddDockerPrivateImageConfig, Check: resource.ComposeTestCheckFunc( - resource.TestCheckResourceAttr( - "docker_image.foobar", - "latest", - "2c40b0526b6358710fd09e7b8c022429268cc61703b4777e528ac9d469a07ca1"), + resource.TestMatchResourceAttr("docker_image.foobar", "latest", regexp.MustCompile(`\A[a-f0-9]{64}\z`)), ), }, }, @@ -44,8 +39,8 @@ func TestAddDockerImage_private(t *testing.T) { const testAccDockerImageConfig = ` resource "docker_image" "foo" { - name = "ubuntu:trusty-20150320" - keep_updated = true + name = "alpine:3.1" + keep_updated = false } ` diff --git a/builtin/providers/docker/resource_docker_network.go b/builtin/providers/docker/resource_docker_network.go new file mode 100644 index 0000000000..4c14b2dea0 --- /dev/null +++ b/builtin/providers/docker/resource_docker_network.go @@ -0,0 +1,135 @@ +package docker + +import ( + "bytes" + "fmt" + "sort" + + "github.com/hashicorp/terraform/helper/hashcode" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceDockerNetwork() *schema.Resource { + return &schema.Resource{ + Create: resourceDockerNetworkCreate, + Read: resourceDockerNetworkRead, + Delete: 
resourceDockerNetworkDelete, + + Schema: map[string]*schema.Schema{ + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "check_duplicate": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + ForceNew: true, + }, + + "driver": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Computed: true, + }, + + "options": &schema.Schema{ + Type: schema.TypeMap, + Optional: true, + ForceNew: true, + Computed: true, + }, + + "ipam_driver": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + + "ipam_config": &schema.Schema{ + Type: schema.TypeSet, + Optional: true, + ForceNew: true, + Elem: getIpamConfigElem(), + Set: resourceDockerIpamConfigHash, + }, + + "id": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + + "scope": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + }, + } +} + +func getIpamConfigElem() *schema.Resource { + return &schema.Resource{ + Schema: map[string]*schema.Schema{ + "subnet": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + + "ip_range": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + + "gateway": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + + "aux_address": &schema.Schema{ + Type: schema.TypeMap, + Optional: true, + ForceNew: true, + }, + }, + } +} + +func resourceDockerIpamConfigHash(v interface{}) int { + var buf bytes.Buffer + m := v.(map[string]interface{}) + + if v, ok := m["subnet"]; ok { + buf.WriteString(fmt.Sprintf("%v-", v.(string))) + } + + if v, ok := m["ip_range"]; ok { + buf.WriteString(fmt.Sprintf("%v-", v.(string))) + } + + if v, ok := m["gateway"]; ok { + buf.WriteString(fmt.Sprintf("%v-", v.(string))) + } + + if v, ok := m["aux_address"]; ok { + auxAddress := v.(map[string]interface{}) + + keys := make([]string, len(auxAddress)) + i := 0 + for k, _ := range auxAddress { + keys[i] = k + i++ + } + sort.Strings(keys) + + for _, k := range keys { + buf.WriteString(fmt.Sprintf("%v-%v-", k, auxAddress[k].(string))) + } + } + + return hashcode.String(buf.String()) +} diff --git a/builtin/providers/docker/resource_docker_network_funcs.go b/builtin/providers/docker/resource_docker_network_funcs.go new file mode 100644 index 0000000000..61954f4aff --- /dev/null +++ b/builtin/providers/docker/resource_docker_network_funcs.go @@ -0,0 +1,115 @@ +package docker + +import ( + "fmt" + + dc "github.com/fsouza/go-dockerclient" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceDockerNetworkCreate(d *schema.ResourceData, meta interface{}) error { + client := meta.(*dc.Client) + + createOpts := dc.CreateNetworkOptions{ + Name: d.Get("name").(string), + } + if v, ok := d.GetOk("check_duplicate"); ok { + createOpts.CheckDuplicate = v.(bool) + } + if v, ok := d.GetOk("driver"); ok { + createOpts.Driver = v.(string) + } + if v, ok := d.GetOk("options"); ok { + createOpts.Options = v.(map[string]interface{}) + } + + ipamOpts := dc.IPAMOptions{} + ipamOptsSet := false + if v, ok := d.GetOk("ipam_driver"); ok { + ipamOpts.Driver = v.(string) + ipamOptsSet = true + } + if v, ok := d.GetOk("ipam_config"); ok { + ipamOpts.Config = ipamConfigSetToIpamConfigs(v.(*schema.Set)) + ipamOptsSet = true + } + + if ipamOptsSet { + createOpts.IPAM = ipamOpts + } + + var err error + var retNetwork *dc.Network + if retNetwork, err = client.CreateNetwork(createOpts); err != nil { + return fmt.Errorf("Unable to create 
network: %s", err) + } + if retNetwork == nil { + return fmt.Errorf("Returned network is nil") + } + + d.SetId(retNetwork.ID) + d.Set("name", retNetwork.Name) + d.Set("scope", retNetwork.Scope) + d.Set("driver", retNetwork.Driver) + d.Set("options", retNetwork.Options) + + return nil +} + +func resourceDockerNetworkRead(d *schema.ResourceData, meta interface{}) error { + client := meta.(*dc.Client) + + var err error + var retNetwork *dc.Network + if retNetwork, err = client.NetworkInfo(d.Id()); err != nil { + if _, ok := err.(*dc.NoSuchNetwork); !ok { + return fmt.Errorf("Unable to inspect network: %s", err) + } + } + if retNetwork == nil { + d.SetId("") + return nil + } + + d.Set("scope", retNetwork.Scope) + d.Set("driver", retNetwork.Driver) + d.Set("options", retNetwork.Options) + + return nil +} + +func resourceDockerNetworkDelete(d *schema.ResourceData, meta interface{}) error { + client := meta.(*dc.Client) + + if err := client.RemoveNetwork(d.Id()); err != nil { + if _, ok := err.(*dc.NoSuchNetwork); !ok { + return fmt.Errorf("Error deleting network %s: %s", d.Id(), err) + } + } + + d.SetId("") + return nil +} + +func ipamConfigSetToIpamConfigs(ipamConfigSet *schema.Set) []dc.IPAMConfig { + ipamConfigs := make([]dc.IPAMConfig, ipamConfigSet.Len()) + + for i, ipamConfigInt := range ipamConfigSet.List() { + ipamConfigRaw := ipamConfigInt.(map[string]interface{}) + + ipamConfig := dc.IPAMConfig{} + ipamConfig.Subnet = ipamConfigRaw["subnet"].(string) + ipamConfig.IPRange = ipamConfigRaw["ip_range"].(string) + ipamConfig.Gateway = ipamConfigRaw["gateway"].(string) + + auxAddressRaw := ipamConfigRaw["aux_address"].(map[string]interface{}) + ipamConfig.AuxAddress = make(map[string]string, len(auxAddressRaw)) + for k, v := range auxAddressRaw { + ipamConfig.AuxAddress[k] = v.(string) + } + + ipamConfigs[i] = ipamConfig + } + + return ipamConfigs +} diff --git a/builtin/providers/docker/resource_docker_network_test.go b/builtin/providers/docker/resource_docker_network_test.go new file mode 100644 index 0000000000..6e3bb4e380 --- /dev/null +++ b/builtin/providers/docker/resource_docker_network_test.go @@ -0,0 +1,65 @@ +package docker + +import ( + "fmt" + "testing" + + dc "github.com/fsouza/go-dockerclient" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccDockerNetwork_basic(t *testing.T) { + var n dc.Network + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccDockerNetworkConfig, + Check: resource.ComposeTestCheckFunc( + testAccNetwork("docker_network.foo", &n), + ), + }, + }, + }) +} + +func testAccNetwork(n string, network *dc.Network) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + client := testAccProvider.Meta().(*dc.Client) + networks, err := client.ListNetworks() + if err != nil { + return err + } + + for _, n := range networks { + if n.ID == rs.Primary.ID { + inspected, err := client.NetworkInfo(n.ID) + if err != nil { + return fmt.Errorf("Network could not be obtained: %s", err) + } + *network = *inspected + return nil + } + } + + return fmt.Errorf("Network not found: %s", rs.Primary.ID) + } +} + +const testAccDockerNetworkConfig = ` +resource "docker_network" "foo" { + name = "bar" +} +` diff --git 
a/builtin/providers/docker/resource_docker_volume.go b/builtin/providers/docker/resource_docker_volume.go new file mode 100644 index 0000000000..33c22d581e --- /dev/null +++ b/builtin/providers/docker/resource_docker_volume.go @@ -0,0 +1,102 @@ +package docker + +import ( + "fmt" + + dc "github.com/fsouza/go-dockerclient" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceDockerVolume() *schema.Resource { + return &schema.Resource{ + Create: resourceDockerVolumeCreate, + Read: resourceDockerVolumeRead, + Delete: resourceDockerVolumeDelete, + + Schema: map[string]*schema.Schema{ + "name": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, + "driver": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, + "driver_opts": &schema.Schema{ + Type: schema.TypeMap, + Optional: true, + ForceNew: true, + }, + "mountpoint": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + }, + } +} + +func resourceDockerVolumeCreate(d *schema.ResourceData, meta interface{}) error { + client := meta.(*dc.Client) + + createOpts := dc.CreateVolumeOptions{} + if v, ok := d.GetOk("name"); ok { + createOpts.Name = v.(string) + } + if v, ok := d.GetOk("driver"); ok { + createOpts.Driver = v.(string) + } + if v, ok := d.GetOk("driver_opts"); ok { + createOpts.DriverOpts = mapTypeMapValsToString(v.(map[string]interface{})) + } + + var err error + var retVolume *dc.Volume + if retVolume, err = client.CreateVolume(createOpts); err != nil { + return fmt.Errorf("Unable to create volume: %s", err) + } + if retVolume == nil { + return fmt.Errorf("Returned volume is nil") + } + + d.SetId(retVolume.Name) + d.Set("name", retVolume.Name) + d.Set("driver", retVolume.Driver) + d.Set("mountpoint", retVolume.Mountpoint) + + return nil +} + +func resourceDockerVolumeRead(d *schema.ResourceData, meta interface{}) error { + client := meta.(*dc.Client) + + var err error + var retVolume *dc.Volume + if retVolume, err = client.InspectVolume(d.Id()); err != nil && err != dc.ErrNoSuchVolume { + return fmt.Errorf("Unable to inspect volume: %s", err) + } + if retVolume == nil { + d.SetId("") + return nil + } + + d.Set("name", retVolume.Name) + d.Set("driver", retVolume.Driver) + d.Set("mountpoint", retVolume.Mountpoint) + + return nil +} + +func resourceDockerVolumeDelete(d *schema.ResourceData, meta interface{}) error { + client := meta.(*dc.Client) + + if err := client.RemoveVolume(d.Id()); err != nil && err != dc.ErrNoSuchVolume { + return fmt.Errorf("Error deleting volume %s: %s", d.Id(), err) + } + + d.SetId("") + return nil +} diff --git a/builtin/providers/docker/resource_docker_volume_test.go b/builtin/providers/docker/resource_docker_volume_test.go new file mode 100644 index 0000000000..38fec3c4e6 --- /dev/null +++ b/builtin/providers/docker/resource_docker_volume_test.go @@ -0,0 +1,67 @@ +package docker + +import ( + "fmt" + "testing" + + dc "github.com/fsouza/go-dockerclient" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccDockerVolume_basic(t *testing.T) { + var v dc.Volume + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccDockerVolumeConfig, + Check: resource.ComposeTestCheckFunc( + checkDockerVolume("docker_volume.foo", &v), + resource.TestCheckResourceAttr("docker_volume.foo", "id", 
"testAccDockerVolume_basic"), + resource.TestCheckResourceAttr("docker_volume.foo", "name", "testAccDockerVolume_basic"), + ), + }, + }, + }) +} + +func checkDockerVolume(n string, volume *dc.Volume) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + client := testAccProvider.Meta().(*dc.Client) + volumes, err := client.ListVolumes(dc.ListVolumesOptions{}) + if err != nil { + return err + } + + for _, v := range volumes { + if v.Name == rs.Primary.ID { + inspected, err := client.InspectVolume(v.Name) + if err != nil { + return fmt.Errorf("Volume could not be inspected: %s", err) + } + *volume = *inspected + return nil + } + } + + return fmt.Errorf("Volume not found: %s", rs.Primary.ID) + } +} + +const testAccDockerVolumeConfig = ` +resource "docker_volume" "foo" { + name = "testAccDockerVolume_basic" +} +` diff --git a/builtin/providers/google/compute_operation.go b/builtin/providers/google/compute_operation.go index 987e983b47..ab76895e8f 100644 --- a/builtin/providers/google/compute_operation.go +++ b/builtin/providers/google/compute_operation.go @@ -63,7 +63,7 @@ func (w *ComputeOperationWaiter) RefreshFunc() resource.StateRefreshFunc { func (w *ComputeOperationWaiter) Conf() *resource.StateChangeConf { return &resource.StateChangeConf{ Pending: []string{"PENDING", "RUNNING"}, - Target: "DONE", + Target: []string{"DONE"}, Refresh: w.RefreshFunc(), } } @@ -134,6 +134,10 @@ func computeOperationWaitRegion(config *Config, op *compute.Operation, region, a } func computeOperationWaitZone(config *Config, op *compute.Operation, zone, activity string) error { + return computeOperationWaitZoneTime(config, op, zone, 4, activity) +} + +func computeOperationWaitZoneTime(config *Config, op *compute.Operation, zone string, minutes int, activity string) error { w := &ComputeOperationWaiter{ Service: config.clientCompute, Op: op, @@ -143,7 +147,7 @@ func computeOperationWaitZone(config *Config, op *compute.Operation, zone, activ } state := w.Conf() state.Delay = 10 * time.Second - state.Timeout = 4 * time.Minute + state.Timeout = time.Duration(minutes) * time.Minute state.MinTimeout = 2 * time.Second opRaw, err := state.WaitForState() if err != nil { diff --git a/builtin/providers/google/config.go b/builtin/providers/google/config.go index 218fda06f9..159a57e093 100644 --- a/builtin/providers/google/config.go +++ b/builtin/providers/google/config.go @@ -16,6 +16,7 @@ import ( "google.golang.org/api/compute/v1" "google.golang.org/api/container/v1" "google.golang.org/api/dns/v1" + "google.golang.org/api/pubsub/v1" "google.golang.org/api/sqladmin/v1beta4" "google.golang.org/api/storage/v1" ) @@ -32,6 +33,7 @@ type Config struct { clientDns *dns.Service clientStorage *storage.Service clientSqlAdmin *sqladmin.Service + clientPubsub *pubsub.Service } func (c *Config) loadAndValidate() error { @@ -128,6 +130,13 @@ func (c *Config) loadAndValidate() error { } c.clientSqlAdmin.UserAgent = userAgent + log.Printf("[INFO] Instatiating Google Pubsub Client...") + c.clientPubsub, err = pubsub.New(client) + if err != nil { + return err + } + c.clientPubsub.UserAgent = userAgent + return nil } diff --git a/builtin/providers/google/dns_change.go b/builtin/providers/google/dns_change.go index a1facdd992..38a34135e2 100644 --- a/builtin/providers/google/dns_change.go +++ b/builtin/providers/google/dns_change.go @@ -32,7 +32,7 @@ func (w 
*DnsChangeWaiter) RefreshFunc() resource.StateRefreshFunc { func (w *DnsChangeWaiter) Conf() *resource.StateChangeConf { return &resource.StateChangeConf{ Pending: []string{"pending"}, - Target: "done", + Target: []string{"done"}, Refresh: w.RefreshFunc(), } } diff --git a/builtin/providers/google/metadata.go b/builtin/providers/google/metadata.go index e75c450228..e2ebd18a3d 100644 --- a/builtin/providers/google/metadata.go +++ b/builtin/providers/google/metadata.go @@ -60,11 +60,13 @@ func MetadataUpdate(oldMDMap map[string]interface{}, newMDMap map[string]interfa } // Format metadata from the server data format -> schema data format -func MetadataFormatSchema(md *compute.Metadata) map[string]interface{} { +func MetadataFormatSchema(curMDMap map[string]interface{}, md *compute.Metadata) map[string]interface{} { newMD := make(map[string]interface{}) for _, kv := range md.Items { - newMD[kv.Key] = *kv.Value + if _, ok := curMDMap[kv.Key]; ok { + newMD[kv.Key] = *kv.Value + } } return newMD diff --git a/builtin/providers/google/provider.go b/builtin/providers/google/provider.go index b2d083bc25..2c29501019 100644 --- a/builtin/providers/google/provider.go +++ b/builtin/providers/google/provider.go @@ -70,6 +70,9 @@ func Provider() terraform.ResourceProvider { "google_dns_record_set": resourceDnsRecordSet(), "google_sql_database": resourceSqlDatabase(), "google_sql_database_instance": resourceSqlDatabaseInstance(), + "google_sql_user": resourceSqlUser(), + "google_pubsub_topic": resourcePubsubTopic(), + "google_pubsub_subscription": resourcePubsubSubscription(), "google_storage_bucket": resourceStorageBucket(), "google_storage_bucket_acl": resourceStorageBucketAcl(), "google_storage_bucket_object": resourceStorageBucketObject(), diff --git a/builtin/providers/google/provider_test.go b/builtin/providers/google/provider_test.go index 827a7f5753..51654a6688 100644 --- a/builtin/providers/google/provider_test.go +++ b/builtin/providers/google/provider_test.go @@ -1,6 +1,7 @@ package google import ( + "io/ioutil" "os" "testing" @@ -29,6 +30,14 @@ func TestProvider_impl(t *testing.T) { } func testAccPreCheck(t *testing.T) { + if v := os.Getenv("GOOGLE_CREDENTIALS_FILE"); v != "" { + creds, err := ioutil.ReadFile(v) + if err != nil { + t.Fatalf("Error reading GOOGLE_CREDENTIALS_FILE path: %s", err) + } + os.Setenv("GOOGLE_CREDENTIALS", string(creds)) + } + if v := os.Getenv("GOOGLE_CREDENTIALS"); v == "" { t.Fatal("GOOGLE_CREDENTIALS must be set for acceptance tests") } diff --git a/builtin/providers/google/resource_compute_address.go b/builtin/providers/google/resource_compute_address.go index 0027df230f..15fa132723 100644 --- a/builtin/providers/google/resource_compute_address.go +++ b/builtin/providers/google/resource_compute_address.go @@ -82,6 +82,7 @@ func resourceComputeAddressRead(d *schema.ResourceData, meta interface{}) error if err != nil { if gerr, ok := err.(*googleapi.Error); ok && gerr.Code == 404 { // The resource doesn't exist anymore + log.Printf("[WARN] Removing Address %q because it's gone", d.Get("name").(string)) d.SetId("") return nil diff --git a/builtin/providers/google/resource_compute_address_test.go b/builtin/providers/google/resource_compute_address_test.go index 90988bb2ce..e15d11dcf5 100644 --- a/builtin/providers/google/resource_compute_address_test.go +++ b/builtin/providers/google/resource_compute_address_test.go @@ -4,6 +4,7 @@ import ( "fmt" "testing" + "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" 
"github.com/hashicorp/terraform/terraform" "google.golang.org/api/compute/v1" @@ -75,7 +76,7 @@ func testAccCheckComputeAddressExists(n string, addr *compute.Address) resource. } } -const testAccComputeAddress_basic = ` +var testAccComputeAddress_basic = fmt.Sprintf(` resource "google_compute_address" "foobar" { - name = "terraform-test" -}` + name = "address-test-%s" +}`, acctest.RandString(10)) diff --git a/builtin/providers/google/resource_compute_autoscaler.go b/builtin/providers/google/resource_compute_autoscaler.go index 8539c62b30..89cc41b075 100644 --- a/builtin/providers/google/resource_compute_autoscaler.go +++ b/builtin/providers/google/resource_compute_autoscaler.go @@ -240,6 +240,7 @@ func resourceComputeAutoscalerRead(d *schema.ResourceData, meta interface{}) err if err != nil { if gerr, ok := err.(*googleapi.Error); ok && gerr.Code == 404 { // The resource doesn't exist anymore + log.Printf("[WARN] Removing Autoscalar %q because it's gone", d.Get("name").(string)) d.SetId("") return nil diff --git a/builtin/providers/google/resource_compute_autoscaler_test.go b/builtin/providers/google/resource_compute_autoscaler_test.go index 7dba5520db..4cdaa90198 100644 --- a/builtin/providers/google/resource_compute_autoscaler_test.go +++ b/builtin/providers/google/resource_compute_autoscaler_test.go @@ -4,6 +4,7 @@ import ( "fmt" "testing" + "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" "google.golang.org/api/compute/v1" @@ -130,9 +131,9 @@ func testAccCheckAutoscalerUpdated(n string, max int64) resource.TestCheckFunc { } } -const testAccAutoscaler_basic = ` +var testAccAutoscaler_basic = fmt.Sprintf(` resource "google_compute_instance_template" "foobar" { - name = "terraform-test-template-foobar" + name = "ascaler-test-%s" machine_type = "n1-standard-1" can_ip_forward = false tags = ["foo", "bar"] @@ -158,13 +159,13 @@ resource "google_compute_instance_template" "foobar" { resource "google_compute_target_pool" "foobar" { description = "Resource created for Terraform acceptance testing" - name = "terraform-test-tpool-foobar" + name = "ascaler-test-%s" session_affinity = "CLIENT_IP_PROTO" } resource "google_compute_instance_group_manager" "foobar" { description = "Terraform test instance group manager" - name = "terraform-test-groupmanager" + name = "ascaler-test-%s" instance_template = "${google_compute_instance_template.foobar.self_link}" target_pools = ["${google_compute_target_pool.foobar.self_link}"] base_instance_name = "foobar" @@ -173,7 +174,7 @@ resource "google_compute_instance_group_manager" "foobar" { resource "google_compute_autoscaler" "foobar" { description = "Resource created for Terraform acceptance testing" - name = "terraform-test-ascaler" + name = "ascaler-test-%s" zone = "us-central1-a" target = "${google_compute_instance_group_manager.foobar.self_link}" autoscaling_policy = { @@ -185,11 +186,11 @@ resource "google_compute_autoscaler" "foobar" { } } -}` +}`, acctest.RandString(10), acctest.RandString(10), acctest.RandString(10), acctest.RandString(10)) -const testAccAutoscaler_update = ` +var testAccAutoscaler_update = fmt.Sprintf(` resource "google_compute_instance_template" "foobar" { - name = "terraform-test-template-foobar" + name = "ascaler-test-%s" machine_type = "n1-standard-1" can_ip_forward = false tags = ["foo", "bar"] @@ -215,13 +216,13 @@ resource "google_compute_instance_template" "foobar" { resource "google_compute_target_pool" "foobar" { description = 
"Resource created for Terraform acceptance testing" - name = "terraform-test-tpool-foobar" + name = "ascaler-test-%s" session_affinity = "CLIENT_IP_PROTO" } resource "google_compute_instance_group_manager" "foobar" { description = "Terraform test instance group manager" - name = "terraform-test-groupmanager" + name = "ascaler-test-%s" instance_template = "${google_compute_instance_template.foobar.self_link}" target_pools = ["${google_compute_target_pool.foobar.self_link}"] base_instance_name = "foobar" @@ -230,7 +231,7 @@ resource "google_compute_instance_group_manager" "foobar" { resource "google_compute_autoscaler" "foobar" { description = "Resource created for Terraform acceptance testing" - name = "terraform-test-ascaler" + name = "ascaler-test-%s" zone = "us-central1-a" target = "${google_compute_instance_group_manager.foobar.self_link}" autoscaling_policy = { @@ -242,4 +243,4 @@ resource "google_compute_autoscaler" "foobar" { } } -}` +}`, acctest.RandString(10), acctest.RandString(10), acctest.RandString(10), acctest.RandString(10)) diff --git a/builtin/providers/google/resource_compute_backend_service.go b/builtin/providers/google/resource_compute_backend_service.go index ead6e24023..e4c1586d7c 100644 --- a/builtin/providers/google/resource_compute_backend_service.go +++ b/builtin/providers/google/resource_compute_backend_service.go @@ -186,6 +186,7 @@ func resourceComputeBackendServiceRead(d *schema.ResourceData, meta interface{}) if err != nil { if gerr, ok := err.(*googleapi.Error); ok && gerr.Code == 404 { // The resource doesn't exist anymore + log.Printf("[WARN] Removing Backend Service %q because it's gone", d.Get("name").(string)) d.SetId("") return nil diff --git a/builtin/providers/google/resource_compute_backend_service_test.go b/builtin/providers/google/resource_compute_backend_service_test.go index 70b420ba4f..174aa3e621 100644 --- a/builtin/providers/google/resource_compute_backend_service_test.go +++ b/builtin/providers/google/resource_compute_backend_service_test.go @@ -4,12 +4,16 @@ import ( "fmt" "testing" + "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" "google.golang.org/api/compute/v1" ) func TestAccComputeBackendService_basic(t *testing.T) { + serviceName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + checkName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + extraCheckName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) var svc compute.BackendService resource.Test(t, resource.TestCase{ @@ -18,14 +22,15 @@ func TestAccComputeBackendService_basic(t *testing.T) { CheckDestroy: testAccCheckComputeBackendServiceDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccComputeBackendService_basic, + Config: testAccComputeBackendService_basic(serviceName, checkName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeBackendServiceExists( "google_compute_backend_service.foobar", &svc), ), }, resource.TestStep{ - Config: testAccComputeBackendService_basicModified, + Config: testAccComputeBackendService_basicModified( + serviceName, checkName, extraCheckName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeBackendServiceExists( "google_compute_backend_service.foobar", &svc), @@ -36,6 +41,10 @@ func TestAccComputeBackendService_basic(t *testing.T) { } func TestAccComputeBackendService_withBackend(t *testing.T) { + serviceName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + igName := fmt.Sprintf("tf-test-%s", 
acctest.RandString(10)) + itName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + checkName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) var svc compute.BackendService resource.Test(t, resource.TestCase{ @@ -44,7 +53,8 @@ func TestAccComputeBackendService_withBackend(t *testing.T) { CheckDestroy: testAccCheckComputeBackendServiceDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccComputeBackendService_withBackend, + Config: testAccComputeBackendService_withBackend( + serviceName, igName, itName, checkName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeBackendServiceExists( "google_compute_backend_service.lipsum", &svc), @@ -111,83 +121,90 @@ func testAccCheckComputeBackendServiceExists(n string, svc *compute.BackendServi } } -const testAccComputeBackendService_basic = ` +func testAccComputeBackendService_basic(serviceName, checkName string) string { + return fmt.Sprintf(` resource "google_compute_backend_service" "foobar" { - name = "blablah" - health_checks = ["${google_compute_http_health_check.zero.self_link}"] + name = "%s" + health_checks = ["${google_compute_http_health_check.zero.self_link}"] } resource "google_compute_http_health_check" "zero" { - name = "tf-test-zero" - request_path = "/" - check_interval_sec = 1 - timeout_sec = 1 + name = "%s" + request_path = "/" + check_interval_sec = 1 + timeout_sec = 1 +} +`, serviceName, checkName) } -` -const testAccComputeBackendService_basicModified = ` +func testAccComputeBackendService_basicModified(serviceName, checkOne, checkTwo string) string { + return fmt.Sprintf(` resource "google_compute_backend_service" "foobar" { - name = "blablah" + name = "%s" health_checks = ["${google_compute_http_health_check.one.self_link}"] } resource "google_compute_http_health_check" "zero" { - name = "tf-test-zero" + name = "%s" request_path = "/" check_interval_sec = 1 timeout_sec = 1 } resource "google_compute_http_health_check" "one" { - name = "tf-test-one" + name = "%s" request_path = "/one" check_interval_sec = 30 timeout_sec = 30 } -` +`, serviceName, checkOne, checkTwo) +} -const testAccComputeBackendService_withBackend = ` +func testAccComputeBackendService_withBackend( + serviceName, igName, itName, checkName string) string { + return fmt.Sprintf(` resource "google_compute_backend_service" "lipsum" { - name = "hello-world-bs" - description = "Hello World 1234" - port_name = "http" - protocol = "HTTP" - timeout_sec = 10 + name = "%s" + description = "Hello World 1234" + port_name = "http" + protocol = "HTTP" + timeout_sec = 10 - backend { - group = "${google_compute_instance_group_manager.foobar.instance_group}" - } + backend { + group = "${google_compute_instance_group_manager.foobar.instance_group}" + } - health_checks = ["${google_compute_http_health_check.default.self_link}"] + health_checks = ["${google_compute_http_health_check.default.self_link}"] } resource "google_compute_instance_group_manager" "foobar" { - name = "terraform-test" - instance_template = "${google_compute_instance_template.foobar.self_link}" - base_instance_name = "foobar" - zone = "us-central1-f" - target_size = 1 + name = "%s" + instance_template = "${google_compute_instance_template.foobar.self_link}" + base_instance_name = "foobar" + zone = "us-central1-f" + target_size = 1 } resource "google_compute_instance_template" "foobar" { - name = "terraform-test" - machine_type = "n1-standard-1" + name = "%s" + machine_type = "n1-standard-1" - network_interface { - network = "default" - } + network_interface { + network = 
"default" + } - disk { - source_image = "debian-7-wheezy-v20140814" - auto_delete = true - boot = true - } + disk { + source_image = "debian-7-wheezy-v20140814" + auto_delete = true + boot = true + } } resource "google_compute_http_health_check" "default" { - name = "test2" - request_path = "/" - check_interval_sec = 1 - timeout_sec = 1 + name = "%s" + request_path = "/" + check_interval_sec = 1 + timeout_sec = 1 +} +`, serviceName, igName, itName, checkName) } -` diff --git a/builtin/providers/google/resource_compute_disk.go b/builtin/providers/google/resource_compute_disk.go index 1118702d6c..1df66b9bb9 100644 --- a/builtin/providers/google/resource_compute_disk.go +++ b/builtin/providers/google/resource_compute_disk.go @@ -141,6 +141,7 @@ func resourceComputeDiskRead(d *schema.ResourceData, meta interface{}) error { config.Project, d.Get("zone").(string), d.Id()).Do() if err != nil { if gerr, ok := err.(*googleapi.Error); ok && gerr.Code == 404 { + log.Printf("[WARN] Removing Disk %q because it's gone", d.Get("name").(string)) // The resource doesn't exist anymore d.SetId("") diff --git a/builtin/providers/google/resource_compute_disk_test.go b/builtin/providers/google/resource_compute_disk_test.go index 659affff8e..c4f5c4daeb 100644 --- a/builtin/providers/google/resource_compute_disk_test.go +++ b/builtin/providers/google/resource_compute_disk_test.go @@ -4,12 +4,14 @@ import ( "fmt" "testing" + "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" "google.golang.org/api/compute/v1" ) func TestAccComputeDisk_basic(t *testing.T) { + diskName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) var disk compute.Disk resource.Test(t, resource.TestCase{ @@ -18,7 +20,7 @@ func TestAccComputeDisk_basic(t *testing.T) { CheckDestroy: testAccCheckComputeDiskDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccComputeDisk_basic, + Config: testAccComputeDisk_basic(diskName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeDiskExists( "google_compute_disk.foobar", &disk), @@ -75,11 +77,13 @@ func testAccCheckComputeDiskExists(n string, disk *compute.Disk) resource.TestCh } } -const testAccComputeDisk_basic = ` +func testAccComputeDisk_basic(diskName string) string { + return fmt.Sprintf(` resource "google_compute_disk" "foobar" { - name = "terraform-test" + name = "%s" image = "debian-7-wheezy-v20140814" size = 50 type = "pd-ssd" zone = "us-central1-a" -}` +}`, diskName) +} diff --git a/builtin/providers/google/resource_compute_firewall.go b/builtin/providers/google/resource_compute_firewall.go index 1cec2c8265..f2f4fa73d2 100644 --- a/builtin/providers/google/resource_compute_firewall.go +++ b/builtin/providers/google/resource_compute_firewall.go @@ -3,6 +3,7 @@ package google import ( "bytes" "fmt" + "log" "sort" "github.com/hashicorp/terraform/helper/hashcode" @@ -150,6 +151,7 @@ func resourceComputeFirewallRead(d *schema.ResourceData, meta interface{}) error if err != nil { if gerr, ok := err.(*googleapi.Error); ok && gerr.Code == 404 { // The resource doesn't exist anymore + log.Printf("[WARN] Removing Firewall %q because it's gone", d.Get("name").(string)) d.SetId("") return nil diff --git a/builtin/providers/google/resource_compute_firewall_test.go b/builtin/providers/google/resource_compute_firewall_test.go index a4a489fba1..3fa6b305b7 100644 --- a/builtin/providers/google/resource_compute_firewall_test.go +++ b/builtin/providers/google/resource_compute_firewall_test.go @@ 
-4,6 +4,7 @@ import ( "fmt" "testing" + "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" "google.golang.org/api/compute/v1" @@ -11,6 +12,8 @@ import ( func TestAccComputeFirewall_basic(t *testing.T) { var firewall compute.Firewall + networkName := fmt.Sprintf("firewall-test-%s", acctest.RandString(10)) + firewallName := fmt.Sprintf("firewall-test-%s", acctest.RandString(10)) resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -18,7 +21,7 @@ func TestAccComputeFirewall_basic(t *testing.T) { CheckDestroy: testAccCheckComputeFirewallDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccComputeFirewall_basic, + Config: testAccComputeFirewall_basic(networkName, firewallName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeFirewallExists( "google_compute_firewall.foobar", &firewall), @@ -30,6 +33,8 @@ func TestAccComputeFirewall_basic(t *testing.T) { func TestAccComputeFirewall_update(t *testing.T) { var firewall compute.Firewall + networkName := fmt.Sprintf("firewall-test-%s", acctest.RandString(10)) + firewallName := fmt.Sprintf("firewall-test-%s", acctest.RandString(10)) resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -37,14 +42,14 @@ func TestAccComputeFirewall_update(t *testing.T) { CheckDestroy: testAccCheckComputeFirewallDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccComputeFirewall_basic, + Config: testAccComputeFirewall_basic(networkName, firewallName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeFirewallExists( "google_compute_firewall.foobar", &firewall), ), }, resource.TestStep{ - Config: testAccComputeFirewall_update, + Config: testAccComputeFirewall_update(networkName, firewallName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeFirewallExists( "google_compute_firewall.foobar", &firewall), @@ -118,37 +123,41 @@ func testAccCheckComputeFirewallPorts( } } -const testAccComputeFirewall_basic = ` -resource "google_compute_network" "foobar" { - name = "terraform-test" - ipv4_range = "10.0.0.0/16" +func testAccComputeFirewall_basic(network, firewall string) string { + return fmt.Sprintf(` + resource "google_compute_network" "foobar" { + name = "firewall-test-%s" + ipv4_range = "10.0.0.0/16" + } + + resource "google_compute_firewall" "foobar" { + name = "firewall-test-%s" + description = "Resource created for Terraform acceptance testing" + network = "${google_compute_network.foobar.name}" + source_tags = ["foo"] + + allow { + protocol = "icmp" + } + }`, network, firewall) } -resource "google_compute_firewall" "foobar" { - name = "terraform-test" - description = "Resource created for Terraform acceptance testing" - network = "${google_compute_network.foobar.name}" - source_tags = ["foo"] - - allow { - protocol = "icmp" +func testAccComputeFirewall_update(network, firewall string) string { + return fmt.Sprintf(` + resource "google_compute_network" "foobar" { + name = "firewall-test-%s" + ipv4_range = "10.0.0.0/16" } -}` -const testAccComputeFirewall_update = ` -resource "google_compute_network" "foobar" { - name = "terraform-test" - ipv4_range = "10.0.0.0/16" + resource "google_compute_firewall" "foobar" { + name = "firewall-test-%s" + description = "Resource created for Terraform acceptance testing" + network = "${google_compute_network.foobar.name}" + source_tags = ["foo"] + + allow { + protocol = "tcp" + ports = ["80-255"] + } + }`, network, firewall) } - 
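The firewall changes above illustrate the pattern this patch applies across the Google provider tests: package-level `const` configs become functions that interpolate a random suffix via `helper/acctest`, so repeated or concurrent acceptance runs no longer collide on fixed names like "terraform-test". A minimal standalone sketch of the pattern, assuming only `fmt` and this repository's `helper/acctest` package (the `exampleNetworkConfig` helper and its HCL body are illustrative, not part of the patch):

package google

import (
	"fmt"

	"github.com/hashicorp/terraform/helper/acctest"
)

// exampleNetworkConfig renders a test config with a unique network name.
// acctest.RandString(10) returns ten random characters, so every call
// produces a distinct name and leftover resources from a failed run
// cannot shadow the next run.
func exampleNetworkConfig() string {
	return fmt.Sprintf(`
resource "google_compute_network" "foobar" {
	name       = "firewall-test-%s"
	ipv4_range = "10.0.0.0/16"
}`, acctest.RandString(10))
}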
-resource "google_compute_firewall" "foobar" { - name = "terraform-test" - description = "Resource created for Terraform acceptance testing" - network = "${google_compute_network.foobar.name}" - source_tags = ["foo"] - - allow { - protocol = "tcp" - ports = ["80-255"] - } -}` diff --git a/builtin/providers/google/resource_compute_forwarding_rule.go b/builtin/providers/google/resource_compute_forwarding_rule.go index ac4851e51b..e1cbdc46c9 100644 --- a/builtin/providers/google/resource_compute_forwarding_rule.go +++ b/builtin/providers/google/resource_compute_forwarding_rule.go @@ -139,6 +139,7 @@ func resourceComputeForwardingRuleRead(d *schema.ResourceData, meta interface{}) config.Project, region, d.Id()).Do() if err != nil { if gerr, ok := err.(*googleapi.Error); ok && gerr.Code == 404 { + log.Printf("[WARN] Removing Forwarding Rule %q because it's gone", d.Get("name").(string)) // The resource doesn't exist anymore d.SetId("") diff --git a/builtin/providers/google/resource_compute_forwarding_rule_test.go b/builtin/providers/google/resource_compute_forwarding_rule_test.go index ee0a000568..08e9fa51e9 100644 --- a/builtin/providers/google/resource_compute_forwarding_rule_test.go +++ b/builtin/providers/google/resource_compute_forwarding_rule_test.go @@ -4,11 +4,14 @@ import ( "fmt" "testing" + "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" ) func TestAccComputeForwardingRule_basic(t *testing.T) { + poolName := fmt.Sprintf("tf-%s", acctest.RandString(10)) + ruleName := fmt.Sprintf("tf-%s", acctest.RandString(10)) resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -16,7 +19,7 @@ func TestAccComputeForwardingRule_basic(t *testing.T) { CheckDestroy: testAccCheckComputeForwardingRuleDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccComputeForwardingRule_basic, + Config: testAccComputeForwardingRule_basic(poolName, ruleName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeForwardingRuleExists( "google_compute_forwarding_rule.foobar"), @@ -27,6 +30,9 @@ func TestAccComputeForwardingRule_basic(t *testing.T) { } func TestAccComputeForwardingRule_ip(t *testing.T) { + addrName := fmt.Sprintf("tf-%s", acctest.RandString(10)) + poolName := fmt.Sprintf("tf-%s", acctest.RandString(10)) + ruleName := fmt.Sprintf("tf-%s", acctest.RandString(10)) resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -34,7 +40,7 @@ func TestAccComputeForwardingRule_ip(t *testing.T) { CheckDestroy: testAccCheckComputeForwardingRuleDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccComputeForwardingRule_ip, + Config: testAccComputeForwardingRule_ip(addrName, poolName, ruleName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeForwardingRuleExists( "google_compute_forwarding_rule.foobar"), @@ -89,36 +95,40 @@ func testAccCheckComputeForwardingRuleExists(n string) resource.TestCheckFunc { } } -const testAccComputeForwardingRule_basic = ` +func testAccComputeForwardingRule_basic(poolName, ruleName string) string { + return fmt.Sprintf(` resource "google_compute_target_pool" "foobar-tp" { - description = "Resource created for Terraform acceptance testing" - instances = ["us-central1-a/foo", "us-central1-b/bar"] - name = "terraform-test" + description = "Resource created for Terraform acceptance testing" + instances = ["us-central1-a/foo", "us-central1-b/bar"] + name = "%s" } resource "google_compute_forwarding_rule" 
"foobar" { - description = "Resource created for Terraform acceptance testing" - ip_protocol = "UDP" - name = "terraform-test" - port_range = "80-81" - target = "${google_compute_target_pool.foobar-tp.self_link}" + description = "Resource created for Terraform acceptance testing" + ip_protocol = "UDP" + name = "%s" + port_range = "80-81" + target = "${google_compute_target_pool.foobar-tp.self_link}" +} +`, poolName, ruleName) } -` -const testAccComputeForwardingRule_ip = ` +func testAccComputeForwardingRule_ip(addrName, poolName, ruleName string) string { + return fmt.Sprintf(` resource "google_compute_address" "foo" { - name = "foo" + name = "%s" } resource "google_compute_target_pool" "foobar-tp" { - description = "Resource created for Terraform acceptance testing" - instances = ["us-central1-a/foo", "us-central1-b/bar"] - name = "terraform-test" + description = "Resource created for Terraform acceptance testing" + instances = ["us-central1-a/foo", "us-central1-b/bar"] + name = "%s" } resource "google_compute_forwarding_rule" "foobar" { - description = "Resource created for Terraform acceptance testing" - ip_address = "${google_compute_address.foo.address}" - ip_protocol = "TCP" - name = "terraform-test" - port_range = "80-81" - target = "${google_compute_target_pool.foobar-tp.self_link}" + description = "Resource created for Terraform acceptance testing" + ip_address = "${google_compute_address.foo.address}" + ip_protocol = "TCP" + name = "%s" + port_range = "80-81" + target = "${google_compute_target_pool.foobar-tp.self_link}" +} +`, addrName, poolName, ruleName) } -` diff --git a/builtin/providers/google/resource_compute_global_address.go b/builtin/providers/google/resource_compute_global_address.go index 74c0633cdd..58d3f5e8e7 100644 --- a/builtin/providers/google/resource_compute_global_address.go +++ b/builtin/providers/google/resource_compute_global_address.go @@ -64,6 +64,7 @@ func resourceComputeGlobalAddressRead(d *schema.ResourceData, meta interface{}) config.Project, d.Id()).Do() if err != nil { if gerr, ok := err.(*googleapi.Error); ok && gerr.Code == 404 { + log.Printf("[WARN] Removing Global Address %q because it's gone", d.Get("name").(string)) // The resource doesn't exist anymore d.SetId("") diff --git a/builtin/providers/google/resource_compute_global_address_test.go b/builtin/providers/google/resource_compute_global_address_test.go index 2ef7b97ea7..9ed49d836d 100644 --- a/builtin/providers/google/resource_compute_global_address_test.go +++ b/builtin/providers/google/resource_compute_global_address_test.go @@ -4,6 +4,7 @@ import ( "fmt" "testing" + "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" "google.golang.org/api/compute/v1" @@ -75,7 +76,7 @@ func testAccCheckComputeGlobalAddressExists(n string, addr *compute.Address) res } } -const testAccComputeGlobalAddress_basic = ` +var testAccComputeGlobalAddress_basic = fmt.Sprintf(` resource "google_compute_global_address" "foobar" { - name = "terraform-test" -}` + name = "address-test-%s" +}`, acctest.RandString(10)) diff --git a/builtin/providers/google/resource_compute_global_forwarding_rule.go b/builtin/providers/google/resource_compute_global_forwarding_rule.go index f4d3c21bfb..ce987f7165 100644 --- a/builtin/providers/google/resource_compute_global_forwarding_rule.go +++ b/builtin/providers/google/resource_compute_global_forwarding_rule.go @@ -131,6 +131,7 @@ func resourceComputeGlobalForwardingRuleRead(d 
*schema.ResourceData, meta interf config.Project, d.Id()).Do() if err != nil { if gerr, ok := err.(*googleapi.Error); ok && gerr.Code == 404 { + log.Printf("[WARN] Removing Global Forwarding Rule %q because it's gone", d.Get("name").(string)) // The resource doesn't exist anymore d.SetId("") diff --git a/builtin/providers/google/resource_compute_global_forwarding_rule_test.go b/builtin/providers/google/resource_compute_global_forwarding_rule_test.go index 58f65c25d4..f81361c7b8 100644 --- a/builtin/providers/google/resource_compute_global_forwarding_rule_test.go +++ b/builtin/providers/google/resource_compute_global_forwarding_rule_test.go @@ -4,18 +4,26 @@ import ( "fmt" "testing" + "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" ) func TestAccComputeGlobalForwardingRule_basic(t *testing.T) { + fr := fmt.Sprintf("forwardrule-test-%s", acctest.RandString(10)) + proxy1 := fmt.Sprintf("forwardrule-test-%s", acctest.RandString(10)) + proxy2 := fmt.Sprintf("forwardrule-test-%s", acctest.RandString(10)) + backend := fmt.Sprintf("forwardrule-test-%s", acctest.RandString(10)) + hc := fmt.Sprintf("forwardrule-test-%s", acctest.RandString(10)) + urlmap := fmt.Sprintf("forwardrule-test-%s", acctest.RandString(10)) + resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckComputeGlobalForwardingRuleDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccComputeGlobalForwardingRule_basic1, + Config: testAccComputeGlobalForwardingRule_basic1(fr, proxy1, proxy2, backend, hc, urlmap), Check: resource.ComposeTestCheckFunc( testAccCheckComputeGlobalForwardingRuleExists( "google_compute_global_forwarding_rule.foobar"), @@ -26,13 +34,20 @@ func TestAccComputeGlobalForwardingRule_basic(t *testing.T) { } func TestAccComputeGlobalForwardingRule_update(t *testing.T) { + fr := fmt.Sprintf("forwardrule-test-%s", acctest.RandString(10)) + proxy1 := fmt.Sprintf("forwardrule-test-%s", acctest.RandString(10)) + proxy2 := fmt.Sprintf("forwardrule-test-%s", acctest.RandString(10)) + backend := fmt.Sprintf("forwardrule-test-%s", acctest.RandString(10)) + hc := fmt.Sprintf("forwardrule-test-%s", acctest.RandString(10)) + urlmap := fmt.Sprintf("forwardrule-test-%s", acctest.RandString(10)) + resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckComputeGlobalForwardingRuleDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccComputeGlobalForwardingRule_basic1, + Config: testAccComputeGlobalForwardingRule_basic1(fr, proxy1, proxy2, backend, hc, urlmap), Check: resource.ComposeTestCheckFunc( testAccCheckComputeGlobalForwardingRuleExists( "google_compute_global_forwarding_rule.foobar"), @@ -40,7 +55,7 @@ func TestAccComputeGlobalForwardingRule_update(t *testing.T) { }, resource.TestStep{ - Config: testAccComputeGlobalForwardingRule_basic2, + Config: testAccComputeGlobalForwardingRule_basic2(fr, proxy1, proxy2, backend, hc, urlmap), Check: resource.ComposeTestCheckFunc( testAccCheckComputeGlobalForwardingRuleExists( "google_compute_global_forwarding_rule.foobar"), @@ -95,114 +110,116 @@ func testAccCheckComputeGlobalForwardingRuleExists(n string) resource.TestCheckF } } -const testAccComputeGlobalForwardingRule_basic1 = ` -resource "google_compute_global_forwarding_rule" "foobar" { - description = "Resource created for Terraform acceptance testing" - 
ip_protocol = "TCP" - name = "terraform-test" - port_range = "80" - target = "${google_compute_target_http_proxy.foobar1.self_link}" -} - -resource "google_compute_target_http_proxy" "foobar1" { - description = "Resource created for Terraform acceptance testing" - name = "terraform-test1" - url_map = "${google_compute_url_map.foobar.self_link}" -} - -resource "google_compute_target_http_proxy" "foobar2" { - description = "Resource created for Terraform acceptance testing" - name = "terraform-test2" - url_map = "${google_compute_url_map.foobar.self_link}" -} - -resource "google_compute_backend_service" "foobar" { - name = "service" - health_checks = ["${google_compute_http_health_check.zero.self_link}"] -} - -resource "google_compute_http_health_check" "zero" { - name = "tf-test-zero" - request_path = "/" - check_interval_sec = 1 - timeout_sec = 1 -} - -resource "google_compute_url_map" "foobar" { - name = "myurlmap" - default_service = "${google_compute_backend_service.foobar.self_link}" - host_rule { - hosts = ["mysite.com", "myothersite.com"] - path_matcher = "boop" +func testAccComputeGlobalForwardingRule_basic1(fr, proxy1, proxy2, backend, hc, urlmap string) string { + return fmt.Sprintf(` + resource "google_compute_global_forwarding_rule" "foobar" { + description = "Resource created for Terraform acceptance testing" + ip_protocol = "TCP" + name = "%s" + port_range = "80" + target = "${google_compute_target_http_proxy.foobar1.self_link}" } - path_matcher { + + resource "google_compute_target_http_proxy" "foobar1" { + description = "Resource created for Terraform acceptance testing" + name = "%s" + url_map = "${google_compute_url_map.foobar.self_link}" + } + + resource "google_compute_target_http_proxy" "foobar2" { + description = "Resource created for Terraform acceptance testing" + name = "%s" + url_map = "${google_compute_url_map.foobar.self_link}" + } + + resource "google_compute_backend_service" "foobar" { + name = "%s" + health_checks = ["${google_compute_http_health_check.zero.self_link}"] + } + + resource "google_compute_http_health_check" "zero" { + name = "%s" + request_path = "/" + check_interval_sec = 1 + timeout_sec = 1 + } + + resource "google_compute_url_map" "foobar" { + name = "%s" default_service = "${google_compute_backend_service.foobar.self_link}" - name = "boop" - path_rule { - paths = ["/*"] + host_rule { + hosts = ["mysite.com", "myothersite.com"] + path_matcher = "boop" + } + path_matcher { + default_service = "${google_compute_backend_service.foobar.self_link}" + name = "boop" + path_rule { + paths = ["/*"] + service = "${google_compute_backend_service.foobar.self_link}" + } + } + test { + host = "mysite.com" + path = "/*" service = "${google_compute_backend_service.foobar.self_link}" } + }`, fr, proxy1, proxy2, backend, hc, urlmap) +} + +func testAccComputeGlobalForwardingRule_basic2(fr, proxy1, proxy2, backend, hc, urlmap string) string { + return fmt.Sprintf(` + resource "google_compute_global_forwarding_rule" "foobar" { + description = "Resource created for Terraform acceptance testing" + ip_protocol = "TCP" + name = "%s" + port_range = "80" + target = "${google_compute_target_http_proxy.foobar2.self_link}" } - test { - host = "mysite.com" - path = "/*" - service = "${google_compute_backend_service.foobar.self_link}" + + resource "google_compute_target_http_proxy" "foobar1" { + description = "Resource created for Terraform acceptance testing" + name = "%s" + url_map = "${google_compute_url_map.foobar.self_link}" } -} -` -const 
testAccComputeGlobalForwardingRule_basic2 = ` -resource "google_compute_global_forwarding_rule" "foobar" { - description = "Resource created for Terraform acceptance testing" - ip_protocol = "TCP" - name = "terraform-test" - port_range = "80" - target = "${google_compute_target_http_proxy.foobar2.self_link}" -} - -resource "google_compute_target_http_proxy" "foobar1" { - description = "Resource created for Terraform acceptance testing" - name = "terraform-test1" - url_map = "${google_compute_url_map.foobar.self_link}" -} - -resource "google_compute_target_http_proxy" "foobar2" { - description = "Resource created for Terraform acceptance testing" - name = "terraform-test2" - url_map = "${google_compute_url_map.foobar.self_link}" -} - -resource "google_compute_backend_service" "foobar" { - name = "service" - health_checks = ["${google_compute_http_health_check.zero.self_link}"] -} - -resource "google_compute_http_health_check" "zero" { - name = "tf-test-zero" - request_path = "/" - check_interval_sec = 1 - timeout_sec = 1 -} - -resource "google_compute_url_map" "foobar" { - name = "myurlmap" - default_service = "${google_compute_backend_service.foobar.self_link}" - host_rule { - hosts = ["mysite.com", "myothersite.com"] - path_matcher = "boop" + resource "google_compute_target_http_proxy" "foobar2" { + description = "Resource created for Terraform acceptance testing" + name = "%s" + url_map = "${google_compute_url_map.foobar.self_link}" } - path_matcher { + + resource "google_compute_backend_service" "foobar" { + name = "%s" + health_checks = ["${google_compute_http_health_check.zero.self_link}"] + } + + resource "google_compute_http_health_check" "zero" { + name = "%s" + request_path = "/" + check_interval_sec = 1 + timeout_sec = 1 + } + + resource "google_compute_url_map" "foobar" { + name = "%s" default_service = "${google_compute_backend_service.foobar.self_link}" - name = "boop" - path_rule { - paths = ["/*"] + host_rule { + hosts = ["mysite.com", "myothersite.com"] + path_matcher = "boop" + } + path_matcher { + default_service = "${google_compute_backend_service.foobar.self_link}" + name = "boop" + path_rule { + paths = ["/*"] + service = "${google_compute_backend_service.foobar.self_link}" + } + } + test { + host = "mysite.com" + path = "/*" service = "${google_compute_backend_service.foobar.self_link}" } - } - test { - host = "mysite.com" - path = "/*" - service = "${google_compute_backend_service.foobar.self_link}" - } + }`, fr, proxy1, proxy2, backend, hc, urlmap) } -` diff --git a/builtin/providers/google/resource_compute_http_health_check.go b/builtin/providers/google/resource_compute_http_health_check.go index c53267afda..8ddae0b70f 100644 --- a/builtin/providers/google/resource_compute_http_health_check.go +++ b/builtin/providers/google/resource_compute_http_health_check.go @@ -187,6 +187,7 @@ func resourceComputeHttpHealthCheckRead(d *schema.ResourceData, meta interface{} if err != nil { if gerr, ok := err.(*googleapi.Error); ok && gerr.Code == 404 { // The resource doesn't exist anymore + log.Printf("[WARN] Removing HTTP Health Check %q because it's gone", d.Get("name").(string)) d.SetId("") return nil diff --git a/builtin/providers/google/resource_compute_http_health_check_test.go b/builtin/providers/google/resource_compute_http_health_check_test.go index c37c770bb1..7734ab28f4 100644 --- a/builtin/providers/google/resource_compute_http_health_check_test.go +++ b/builtin/providers/google/resource_compute_http_health_check_test.go @@ -4,6 +4,7 @@ import ( "fmt" "testing" + 
"github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" "google.golang.org/api/compute/v1" @@ -137,35 +138,35 @@ func testAccCheckComputeHttpHealthCheckThresholds(healthy, unhealthy int64, heal } } -const testAccComputeHttpHealthCheck_basic = ` +var testAccComputeHttpHealthCheck_basic = fmt.Sprintf(` resource "google_compute_http_health_check" "foobar" { check_interval_sec = 3 description = "Resource created for Terraform acceptance testing" healthy_threshold = 3 host = "foobar" - name = "terraform-test" + name = "httphealth-test-%s" port = "80" request_path = "/health_check" timeout_sec = 2 unhealthy_threshold = 3 } -` +`, acctest.RandString(10)) -const testAccComputeHttpHealthCheck_update1 = ` +var testAccComputeHttpHealthCheck_update1 = fmt.Sprintf(` resource "google_compute_http_health_check" "foobar" { - name = "terraform-test" + name = "httphealth-test-%s" description = "Resource created for Terraform acceptance testing" request_path = "/not_default" } -` +`, acctest.RandString(10)) /* Change description, restore request_path to default, and change * thresholds from defaults */ -const testAccComputeHttpHealthCheck_update2 = ` +var testAccComputeHttpHealthCheck_update2 = fmt.Sprintf(` resource "google_compute_http_health_check" "foobar" { - name = "terraform-test" + name = "httphealth-test-%s" description = "Resource updated for Terraform acceptance testing" healthy_threshold = 10 unhealthy_threshold = 10 } -` +`, acctest.RandString(10)) diff --git a/builtin/providers/google/resource_compute_https_health_check.go b/builtin/providers/google/resource_compute_https_health_check.go index 32a8dfb381..46affdd9e3 100644 --- a/builtin/providers/google/resource_compute_https_health_check.go +++ b/builtin/providers/google/resource_compute_https_health_check.go @@ -186,6 +186,7 @@ func resourceComputeHttpsHealthCheckRead(d *schema.ResourceData, meta interface{ config.Project, d.Id()).Do() if err != nil { if gerr, ok := err.(*googleapi.Error); ok && gerr.Code == 404 { + log.Printf("[WARN] Removing HTTPS Health Check %q because it's gone", d.Get("name").(string)) // The resource doesn't exist anymore d.SetId("") diff --git a/builtin/providers/google/resource_compute_https_health_check_test.go b/builtin/providers/google/resource_compute_https_health_check_test.go index d263bfd881..c7510c325c 100644 --- a/builtin/providers/google/resource_compute_https_health_check_test.go +++ b/builtin/providers/google/resource_compute_https_health_check_test.go @@ -4,6 +4,7 @@ import ( "fmt" "testing" + "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" "google.golang.org/api/compute/v1" @@ -137,35 +138,35 @@ func testAccCheckComputeHttpsHealthCheckThresholds(healthy, unhealthy int64, hea } } -const testAccComputeHttpsHealthCheck_basic = ` +var testAccComputeHttpsHealthCheck_basic = fmt.Sprintf(` resource "google_compute_https_health_check" "foobar" { check_interval_sec = 3 description = "Resource created for Terraform acceptance testing" healthy_threshold = 3 host = "foobar" - name = "terraform-test" + name = "httpshealth-test-%s" port = "80" request_path = "/health_check" timeout_sec = 2 unhealthy_threshold = 3 } -` +`, acctest.RandString(10)) -const testAccComputeHttpsHealthCheck_update1 = ` +var testAccComputeHttpsHealthCheck_update1 = fmt.Sprintf(` resource "google_compute_https_health_check" "foobar" { - name = "terraform-test" + name = 
"httpshealth-test-%s" description = "Resource created for Terraform acceptance testing" request_path = "/not_default" } -` +`, acctest.RandString(10)) /* Change description, restore request_path to default, and change * thresholds from defaults */ -const testAccComputeHttpsHealthCheck_update2 = ` +var testAccComputeHttpsHealthCheck_update2 = fmt.Sprintf(` resource "google_compute_https_health_check" "foobar" { - name = "terraform-test" + name = "httpshealth-test-%s" description = "Resource updated for Terraform acceptance testing" healthy_threshold = 10 unhealthy_threshold = 10 } -` +`, acctest.RandString(10)) diff --git a/builtin/providers/google/resource_compute_instance.go b/builtin/providers/google/resource_compute_instance.go index 808c5de789..8c7f6318b2 100644 --- a/builtin/providers/google/resource_compute_instance.go +++ b/builtin/providers/google/resource_compute_instance.go @@ -3,6 +3,7 @@ package google import ( "fmt" "log" + "strings" "github.com/hashicorp/terraform/helper/hashcode" "github.com/hashicorp/terraform/helper/schema" @@ -136,9 +137,13 @@ func resourceComputeInstance() *schema.Resource { Schema: map[string]*schema.Schema{ "nat_ip": &schema.Schema{ Type: schema.TypeString, - Computed: true, Optional: true, }, + + "assigned_nat_ip": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, }, }, }, @@ -284,10 +289,12 @@ func getInstance(config *Config, d *schema.ResourceData) (*compute.Instance, err config.Project, d.Get("zone").(string), d.Id()).Do() if err != nil { if gerr, ok := err.(*googleapi.Error); ok && gerr.Code == 404 { + log.Printf("[WARN] Removing Instance %q because it's gone", d.Get("name").(string)) // The resource doesn't exist anymore + id := d.Id() d.SetId("") - return nil, fmt.Errorf("Resource %s no longer exists", config.Project) + return nil, fmt.Errorf("Resource %s no longer exists", id) } return nil, fmt.Errorf("Error reading instance: %s", err) @@ -547,15 +554,20 @@ func resourceComputeInstanceCreate(d *schema.ResourceData, meta interface{}) err func resourceComputeInstanceRead(d *schema.ResourceData, meta interface{}) error { config := meta.(*Config) + id := d.Id() instance, err := getInstance(config, d) if err != nil { + if strings.Contains(err.Error(), "no longer exists") { + log.Printf("[WARN] Google Compute Instance (%s) not found", id) + return nil + } return err } // Synch metadata md := instance.Metadata - _md := MetadataFormatSchema(md) + _md := MetadataFormatSchema(d.Get("metadata").(map[string]interface{}), md) delete(_md, "startup-script") if script, scriptExists := d.GetOk("metadata_startup_script"); scriptExists { @@ -629,9 +641,10 @@ func resourceComputeInstanceRead(d *schema.ResourceData, meta interface{}) error var natIP string accessConfigs := make( []map[string]interface{}, 0, len(iface.AccessConfigs)) - for _, config := range iface.AccessConfigs { + for j, config := range iface.AccessConfigs { accessConfigs = append(accessConfigs, map[string]interface{}{ - "nat_ip": config.NatIP, + "nat_ip": d.Get(fmt.Sprintf("network_interface.%d.access_config.%d.nat_ip", i, j)), + "assigned_nat_ip": config.NatIP, }) if natIP == "" { diff --git a/builtin/providers/google/resource_compute_instance_group_manager.go b/builtin/providers/google/resource_compute_instance_group_manager.go index b0186b7070..df88a96392 100644 --- a/builtin/providers/google/resource_compute_instance_group_manager.go +++ b/builtin/providers/google/resource_compute_instance_group_manager.go @@ -53,6 +53,31 @@ func resourceComputeInstanceGroupManager() 
*schema.Resource { Required: true, }, + "named_port": &schema.Schema{ + Type: schema.TypeList, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "port": &schema.Schema{ + Type: schema.TypeInt, + Required: true, + }, + }, + }, + }, + + "update_strategy": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Default: "RESTART", + }, + "target_pools": &schema.Schema{ Type: schema.TypeSet, Optional: true, @@ -82,6 +107,18 @@ func resourceComputeInstanceGroupManager() *schema.Resource { } } +func getNamedPorts(nps []interface{}) []*compute.NamedPort { + namedPorts := make([]*compute.NamedPort, 0, len(nps)) + for _, v := range nps { + np := v.(map[string]interface{}) + namedPorts = append(namedPorts, &compute.NamedPort{ + Name: np["name"].(string), + Port: int64(np["port"].(int)), + }) + } + return namedPorts +} + func resourceComputeInstanceGroupManagerCreate(d *schema.ResourceData, meta interface{}) error { config := meta.(*Config) @@ -104,6 +141,10 @@ func resourceComputeInstanceGroupManagerCreate(d *schema.ResourceData, meta inte manager.Description = v.(string) } + if v, ok := d.GetOk("named_port"); ok { + manager.NamedPorts = getNamedPorts(v.([]interface{})) + } + if attr := d.Get("target_pools").(*schema.Set); attr.Len() > 0 { var s []string for _, v := range attr.List() { @@ -112,6 +153,11 @@ func resourceComputeInstanceGroupManagerCreate(d *schema.ResourceData, meta inte manager.TargetPools = s } + updateStrategy := d.Get("update_strategy").(string) + if !(updateStrategy == "NONE" || updateStrategy == "RESTART") { + return fmt.Errorf("Update strategy must be \"NONE\" or \"RESTART\"") + } + log.Printf("[DEBUG] InstanceGroupManager insert request: %#v", manager) op, err := config.clientCompute.InstanceGroupManagers.Insert( config.Project, d.Get("zone").(string), manager).Do() @@ -138,6 +184,7 @@ func resourceComputeInstanceGroupManagerRead(d *schema.ResourceData, meta interf config.Project, d.Get("zone").(string), d.Id()).Do() if err != nil { if gerr, ok := err.(*googleapi.Error); ok && gerr.Code == 404 { + log.Printf("[WARN] Removing Instance Group Manager %q because it's gone", d.Get("name").(string)) // The resource doesn't exist anymore d.SetId("") @@ -148,6 +195,7 @@ func resourceComputeInstanceGroupManagerRead(d *schema.ResourceData, meta interf } // Set computed fields + d.Set("named_port", manager.NamedPorts) d.Set("fingerprint", manager.Fingerprint) d.Set("instance_group", manager.InstanceGroup) d.Set("target_size", manager.TargetSize) @@ -209,9 +257,67 @@ func resourceComputeInstanceGroupManagerUpdate(d *schema.ResourceData, meta inte return err } + if d.Get("update_strategy").(string) == "RESTART" { + managedInstances, err := config.clientCompute.InstanceGroupManagers.ListManagedInstances( + config.Project, d.Get("zone").(string), d.Id()).Do() + + if err != nil { + return fmt.Errorf("Error getting instance group managers instances: %s", err) + } + + managedInstanceCount := len(managedInstances.ManagedInstances) + instances := make([]string, managedInstanceCount) + for i, v := range managedInstances.ManagedInstances { + instances[i] = v.Instance + } + + recreateInstances := &compute.InstanceGroupManagersRecreateInstancesRequest{ + Instances: instances, + } + + op, err = config.clientCompute.InstanceGroupManagers.RecreateInstances( + config.Project, d.Get("zone").(string), d.Id(), recreateInstances).Do() + + if err != nil { + return fmt.Errorf("Error restarting instance group managers instances: %s", err) + } + + // Wait for the operation to complete + err = 
computeOperationWaitZoneTime(config, op, d.Get("zone").(string), + managedInstanceCount*4, "Restarting InstanceGroupManagers instances") + if err != nil { + return err + } + } + d.SetPartial("instance_template") } + // If named_port changes then update: + if d.HasChange("named_port") { + + // Build the parameters for a "SetNamedPorts" request: + namedPorts := getNamedPorts(d.Get("named_port").([]interface{})) + setNamedPorts := &compute.InstanceGroupsSetNamedPortsRequest{ + NamedPorts: namedPorts, + } + + // Make the request: + op, err := config.clientCompute.InstanceGroups.SetNamedPorts( + config.Project, d.Get("zone").(string), d.Id(), setNamedPorts).Do() + if err != nil { + return fmt.Errorf("Error updating InstanceGroupManager: %s", err) + } + + // Wait for the operation to complete: + err = computeOperationWaitZone(config, op, d.Get("zone").(string), "Updating InstanceGroupManager") + if err != nil { + return err + } + + d.SetPartial("named_port") + } + // If size changes trigger a resize if d.HasChange("target_size") { if v, ok := d.GetOk("target_size"); ok { diff --git a/builtin/providers/google/resource_compute_instance_group_manager_test.go b/builtin/providers/google/resource_compute_instance_group_manager_test.go index 4d5bd7c131..c0b466b7ea 100644 --- a/builtin/providers/google/resource_compute_instance_group_manager_test.go +++ b/builtin/providers/google/resource_compute_instance_group_manager_test.go @@ -6,6 +6,7 @@ import ( "google.golang.org/api/compute/v1" + "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" ) @@ -13,13 +14,18 @@ import ( func TestAccInstanceGroupManager_basic(t *testing.T) { var manager compute.InstanceGroupManager + template := fmt.Sprintf("igm-test-%s", acctest.RandString(10)) + target := fmt.Sprintf("igm-test-%s", acctest.RandString(10)) + igm1 := fmt.Sprintf("igm-test-%s", acctest.RandString(10)) + igm2 := fmt.Sprintf("igm-test-%s", acctest.RandString(10)) + resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckInstanceGroupManagerDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccInstanceGroupManager_basic, + Config: testAccInstanceGroupManager_basic(template, target, igm1, igm2), Check: resource.ComposeTestCheckFunc( testAccCheckInstanceGroupManagerExists( "google_compute_instance_group_manager.igm-basic", &manager), @@ -34,26 +40,39 @@ func TestAccInstanceGroupManager_basic(t *testing.T) { func TestAccInstanceGroupManager_update(t *testing.T) { var manager compute.InstanceGroupManager + template1 := fmt.Sprintf("igm-test-%s", acctest.RandString(10)) + target := fmt.Sprintf("igm-test-%s", acctest.RandString(10)) + template2 := fmt.Sprintf("igm-test-%s", acctest.RandString(10)) + igm := fmt.Sprintf("igm-test-%s", acctest.RandString(10)) + resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckInstanceGroupManagerDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccInstanceGroupManager_update, + Config: testAccInstanceGroupManager_update(template1, target, igm), Check: resource.ComposeTestCheckFunc( testAccCheckInstanceGroupManagerExists( "google_compute_instance_group_manager.igm-update", &manager), + testAccCheckInstanceGroupManagerNamedPorts( + "google_compute_instance_group_manager.igm-update", + map[string]int64{"customhttp": 8080}, + &manager), ), }, 
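Before the test changes that follow, a standalone sketch of the `named_port` conversion added above, using the `google.golang.org/api/compute/v1` types this resource already imports (the `demoNamedPorts` wrapper and its literal input are illustrative only, not part of the patch):

package google

import (
	"google.golang.org/api/compute/v1"
)

// demoNamedPorts mirrors getNamedPorts above: each named_port block from
// the schema (a map carrying "name" and "port") becomes a *compute.NamedPort
// on the API request.
func demoNamedPorts() []*compute.NamedPort {
	blocks := []interface{}{
		map[string]interface{}{"name": "customhttp", "port": 8080},
		map[string]interface{}{"name": "customhttps", "port": 8443},
	}
	out := make([]*compute.NamedPort, 0, len(blocks))
	for _, v := range blocks {
		np := v.(map[string]interface{})
		out = append(out, &compute.NamedPort{
			Name: np["name"].(string),
			Port: int64(np["port"].(int)),
		})
	}
	return out
}

The same slice is sent on create via manager.NamedPorts and, when named_port changes on update, via an InstanceGroupsSetNamedPortsRequest, which is why the conversion lives in a shared helper.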
resource.TestStep{ - Config: testAccInstanceGroupManager_update2, + Config: testAccInstanceGroupManager_update2(template1, target, template2, igm), Check: resource.ComposeTestCheckFunc( testAccCheckInstanceGroupManagerExists( "google_compute_instance_group_manager.igm-update", &manager), testAccCheckInstanceGroupManagerUpdated( "google_compute_instance_group_manager.igm-update", 3, - "google_compute_target_pool.igm-update", "terraform-test-igm-update2"), + "google_compute_target_pool.igm-update", template2), + testAccCheckInstanceGroupManagerNamedPorts( + "google_compute_instance_group_manager.igm-update", + map[string]int64{"customhttp": 8080, "customhttps": 8443}, + &manager), ), }, }, @@ -69,7 +88,7 @@ func testAccCheckInstanceGroupManagerDestroy(s *terraform.State) error { } _, err := config.clientCompute.InstanceGroupManagers.Get( config.Project, rs.Primary.Attributes["zone"], rs.Primary.ID).Do() - if err != nil { + if err == nil { return fmt.Errorf("InstanceGroupManager still exists") } } @@ -146,164 +165,218 @@ func testAccCheckInstanceGroupManagerUpdated(n string, size int64, targetPool st } } -const testAccInstanceGroupManager_basic = ` -resource "google_compute_instance_template" "igm-basic" { - name = "terraform-test-igm-basic" - machine_type = "n1-standard-1" - can_ip_forward = false - tags = ["foo", "bar"] +func testAccCheckInstanceGroupManagerNamedPorts(n string, np map[string]int64, instanceGroupManager *compute.InstanceGroupManager) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } - disk { - source_image = "debian-cloud/debian-7-wheezy-v20140814" - auto_delete = true - boot = true - } + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } - network_interface { - network = "default" - } + config := testAccProvider.Meta().(*Config) - metadata { - foo = "bar" - } + manager, err := config.clientCompute.InstanceGroupManagers.Get( + config.Project, rs.Primary.Attributes["zone"], rs.Primary.ID).Do() + if err != nil { + return err + } - service_account { - scopes = ["userinfo-email", "compute-ro", "storage-ro"] + var found bool + for _, namedPort := range manager.NamedPorts { + found = false + for name, port := range np { + if namedPort.Name == name && namedPort.Port == port { + found = true + } + } + if !found { + return fmt.Errorf("named port incorrect") + } + } + + return nil } } -resource "google_compute_target_pool" "igm-basic" { - description = "Resource created for Terraform acceptance testing" - name = "terraform-test-igm-basic" - session_affinity = "CLIENT_IP_PROTO" -} +func testAccInstanceGroupManager_basic(template, target, igm1, igm2 string) string { + return fmt.Sprintf(` + resource "google_compute_instance_template" "igm-basic" { + name = "%s" + machine_type = "n1-standard-1" + can_ip_forward = false + tags = ["foo", "bar"] -resource "google_compute_instance_group_manager" "igm-basic" { - description = "Terraform test instance group manager" - name = "terraform-test-igm-basic" - instance_template = "${google_compute_instance_template.igm-basic.self_link}" - target_pools = ["${google_compute_target_pool.igm-basic.self_link}"] - base_instance_name = "igm-basic" - zone = "us-central1-c" - target_size = 2 -} + disk { + source_image = "debian-cloud/debian-7-wheezy-v20140814" + auto_delete = true + boot = true + } -resource "google_compute_instance_group_manager" "igm-no-tp" { - description = "Terraform test instance group manager" - name = 
"terraform-test-igm-no-tp" - instance_template = "${google_compute_instance_template.igm-basic.self_link}" - base_instance_name = "igm-no-tp" - zone = "us-central1-c" - target_size = 2 -} -` + network_interface { + network = "default" + } -const testAccInstanceGroupManager_update = ` -resource "google_compute_instance_template" "igm-update" { - name = "terraform-test-igm-update" - machine_type = "n1-standard-1" - can_ip_forward = false - tags = ["foo", "bar"] + metadata { + foo = "bar" + } - disk { - source_image = "debian-cloud/debian-7-wheezy-v20140814" - auto_delete = true - boot = true + service_account { + scopes = ["userinfo-email", "compute-ro", "storage-ro"] + } } - network_interface { - network = "default" + resource "google_compute_target_pool" "igm-basic" { + description = "Resource created for Terraform acceptance testing" + name = "%s" + session_affinity = "CLIENT_IP_PROTO" } - metadata { - foo = "bar" + resource "google_compute_instance_group_manager" "igm-basic" { + description = "Terraform test instance group manager" + name = "%s" + instance_template = "${google_compute_instance_template.igm-basic.self_link}" + target_pools = ["${google_compute_target_pool.igm-basic.self_link}"] + base_instance_name = "igm-basic" + zone = "us-central1-c" + target_size = 2 } - service_account { - scopes = ["userinfo-email", "compute-ro", "storage-ro"] + resource "google_compute_instance_group_manager" "igm-no-tp" { + description = "Terraform test instance group manager" + name = "%s" + instance_template = "${google_compute_instance_template.igm-basic.self_link}" + base_instance_name = "igm-no-tp" + zone = "us-central1-c" + target_size = 2 } + `, template, target, igm1, igm2) } -resource "google_compute_target_pool" "igm-update" { - description = "Resource created for Terraform acceptance testing" - name = "terraform-test-igm-update" - session_affinity = "CLIENT_IP_PROTO" -} +func testAccInstanceGroupManager_update(template, target, igm string) string { + return fmt.Sprintf(` + resource "google_compute_instance_template" "igm-update" { + name = "%s" + machine_type = "n1-standard-1" + can_ip_forward = false + tags = ["foo", "bar"] -resource "google_compute_instance_group_manager" "igm-update" { - description = "Terraform test instance group manager" - name = "terraform-test-igm-update" - instance_template = "${google_compute_instance_template.igm-update.self_link}" - target_pools = ["${google_compute_target_pool.igm-update.self_link}"] - base_instance_name = "igm-update" - zone = "us-central1-c" - target_size = 2 -}` + disk { + source_image = "debian-cloud/debian-7-wheezy-v20140814" + auto_delete = true + boot = true + } + + network_interface { + network = "default" + } + + metadata { + foo = "bar" + } + + service_account { + scopes = ["userinfo-email", "compute-ro", "storage-ro"] + } + } + + resource "google_compute_target_pool" "igm-update" { + description = "Resource created for Terraform acceptance testing" + name = "%s" + session_affinity = "CLIENT_IP_PROTO" + } + + resource "google_compute_instance_group_manager" "igm-update" { + description = "Terraform test instance group manager" + name = "%s" + instance_template = "${google_compute_instance_template.igm-update.self_link}" + target_pools = ["${google_compute_target_pool.igm-update.self_link}"] + base_instance_name = "igm-update" + zone = "us-central1-c" + target_size = 2 + named_port { + name = "customhttp" + port = 8080 + } + }`, template, target, igm) +} // Change IGM's instance template and target size -const 
testAccInstanceGroupManager_update2 = ` -resource "google_compute_instance_template" "igm-update" { - name = "terraform-test-igm-update" - machine_type = "n1-standard-1" - can_ip_forward = false - tags = ["foo", "bar"] +func testAccInstanceGroupManager_update2(template1, target, template2, igm string) string { + return fmt.Sprintf(` + resource "google_compute_instance_template" "igm-update" { + name = "%s" + machine_type = "n1-standard-1" + can_ip_forward = false + tags = ["foo", "bar"] - disk { - source_image = "debian-cloud/debian-7-wheezy-v20140814" - auto_delete = true - boot = true + disk { + source_image = "debian-cloud/debian-7-wheezy-v20140814" + auto_delete = true + boot = true + } + + network_interface { + network = "default" + } + + metadata { + foo = "bar" + } + + service_account { + scopes = ["userinfo-email", "compute-ro", "storage-ro"] + } } - network_interface { - network = "default" + resource "google_compute_target_pool" "igm-update" { + description = "Resource created for Terraform acceptance testing" + name = "%s" + session_affinity = "CLIENT_IP_PROTO" } - metadata { - foo = "bar" + resource "google_compute_instance_template" "igm-update2" { + name = "%s" + machine_type = "n1-standard-1" + can_ip_forward = false + tags = ["foo", "bar"] + + disk { + source_image = "debian-cloud/debian-7-wheezy-v20140814" + auto_delete = true + boot = true + } + + network_interface { + network = "default" + } + + metadata { + foo = "bar" + } + + service_account { + scopes = ["userinfo-email", "compute-ro", "storage-ro"] + } } - service_account { - scopes = ["userinfo-email", "compute-ro", "storage-ro"] - } + resource "google_compute_instance_group_manager" "igm-update" { + description = "Terraform test instance group manager" + name = "%s" + instance_template = "${google_compute_instance_template.igm-update2.self_link}" + target_pools = ["${google_compute_target_pool.igm-update.self_link}"] + base_instance_name = "igm-update" + zone = "us-central1-c" + target_size = 3 + named_port { + name = "customhttp" + port = 8080 + } + named_port { + name = "customhttps" + port = 8443 + } + }`, template1, target, template2, igm) } - -resource "google_compute_target_pool" "igm-update" { - description = "Resource created for Terraform acceptance testing" - name = "terraform-test-igm-update" - session_affinity = "CLIENT_IP_PROTO" -} - -resource "google_compute_instance_template" "igm-update2" { - name = "terraform-test-igm-update2" - machine_type = "n1-standard-1" - can_ip_forward = false - tags = ["foo", "bar"] - - disk { - source_image = "debian-cloud/debian-7-wheezy-v20140814" - auto_delete = true - boot = true - } - - network_interface { - network = "default" - } - - metadata { - foo = "bar" - } - - service_account { - scopes = ["userinfo-email", "compute-ro", "storage-ro"] - } -} - -resource "google_compute_instance_group_manager" "igm-update" { - description = "Terraform test instance group manager" - name = "terraform-test-igm-update" - instance_template = "${google_compute_instance_template.igm-update2.self_link}" - target_pools = ["${google_compute_target_pool.igm-update.self_link}"] - base_instance_name = "igm-update" - zone = "us-central1-c" - target_size = 3 -}` diff --git a/builtin/providers/google/resource_compute_instance_template.go b/builtin/providers/google/resource_compute_instance_template.go index 48be445cbb..07bcb5f4c0 100644 --- a/builtin/providers/google/resource_compute_instance_template.go +++ b/builtin/providers/google/resource_compute_instance_template.go @@ -2,6 +2,7 @@ 
package google import ( "fmt" + "log" "github.com/hashicorp/terraform/helper/hashcode" "github.com/hashicorp/terraform/helper/schema" @@ -466,6 +467,7 @@ func resourceComputeInstanceTemplateRead(d *schema.ResourceData, meta interface{ config.Project, d.Id()).Do() if err != nil { if gerr, ok := err.(*googleapi.Error); ok && gerr.Code == 404 { + log.Printf("[WARN] Removing Instance Template %q because it's gone", d.Get("name").(string)) // The resource doesn't exist anymore d.SetId("") diff --git a/builtin/providers/google/resource_compute_instance_template_test.go b/builtin/providers/google/resource_compute_instance_template_test.go index 82f88b4ac7..a36987b2ca 100644 --- a/builtin/providers/google/resource_compute_instance_template_test.go +++ b/builtin/providers/google/resource_compute_instance_template_test.go @@ -4,6 +4,7 @@ import ( "fmt" "testing" + "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" "google.golang.org/api/compute/v1" @@ -201,9 +202,9 @@ func testAccCheckComputeInstanceTemplateTag(instanceTemplate *compute.InstanceTe } } -const testAccComputeInstanceTemplate_basic = ` +var testAccComputeInstanceTemplate_basic = fmt.Sprintf(` resource "google_compute_instance_template" "foobar" { - name = "terraform-test" + name = "instancet-test-%s" machine_type = "n1-standard-1" can_ip_forward = false tags = ["foo", "bar"] @@ -230,15 +231,15 @@ resource "google_compute_instance_template" "foobar" { service_account { scopes = ["userinfo-email", "compute-ro", "storage-ro"] } -}` +}`, acctest.RandString(10)) -const testAccComputeInstanceTemplate_ip = ` +var testAccComputeInstanceTemplate_ip = fmt.Sprintf(` resource "google_compute_address" "foo" { - name = "foo" + name = "instancet-test-%s" } resource "google_compute_instance_template" "foobar" { - name = "terraform-test" + name = "instancet-test-%s" machine_type = "n1-standard-1" tags = ["foo", "bar"] @@ -256,11 +257,11 @@ resource "google_compute_instance_template" "foobar" { metadata { foo = "bar" } -}` +}`, acctest.RandString(10), acctest.RandString(10)) -const testAccComputeInstanceTemplate_disks = ` +var testAccComputeInstanceTemplate_disks = fmt.Sprintf(` resource "google_compute_disk" "foobar" { - name = "terraform-test-foobar" + name = "instancet-test-%s" image = "debian-7-wheezy-v20140814" size = 10 type = "pd-ssd" @@ -268,7 +269,7 @@ resource "google_compute_disk" "foobar" { } resource "google_compute_instance_template" "foobar" { - name = "terraform-test" + name = "instancet-test-%s" machine_type = "n1-standard-1" disk { @@ -291,4 +292,4 @@ resource "google_compute_instance_template" "foobar" { metadata { foo = "bar" } -}` +}`, acctest.RandString(10), acctest.RandString(10)) diff --git a/builtin/providers/google/resource_compute_instance_test.go b/builtin/providers/google/resource_compute_instance_test.go index 4cee16a51b..9a2c3a7879 100644 --- a/builtin/providers/google/resource_compute_instance_test.go +++ b/builtin/providers/google/resource_compute_instance_test.go @@ -5,6 +5,7 @@ import ( "strings" "testing" + "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" "google.golang.org/api/compute/v1" @@ -12,6 +13,7 @@ import ( func TestAccComputeInstance_basic_deprecated_network(t *testing.T) { var instance compute.Instance + var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) resource.Test(t, resource.TestCase{ PreCheck: func() { 
testAccPreCheck(t) }, @@ -19,13 +21,13 @@ func TestAccComputeInstance_basic_deprecated_network(t *testing.T) { CheckDestroy: testAccCheckComputeInstanceDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccComputeInstance_basic_deprecated_network, + Config: testAccComputeInstance_basic_deprecated_network(instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceTag(&instance, "foo"), testAccCheckComputeInstanceMetadata(&instance, "foo", "bar"), - testAccCheckComputeInstanceDisk(&instance, "terraform-test", true, true), + testAccCheckComputeInstanceDisk(&instance, instanceName, true, true), ), }, }, @@ -34,6 +36,7 @@ func TestAccComputeInstance_basic_deprecated_network(t *testing.T) { func TestAccComputeInstance_basic1(t *testing.T) { var instance compute.Instance + var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -41,14 +44,14 @@ func TestAccComputeInstance_basic1(t *testing.T) { CheckDestroy: testAccCheckComputeInstanceDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccComputeInstance_basic, + Config: testAccComputeInstance_basic(instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceTag(&instance, "foo"), testAccCheckComputeInstanceMetadata(&instance, "foo", "bar"), testAccCheckComputeInstanceMetadata(&instance, "baz", "qux"), - testAccCheckComputeInstanceDisk(&instance, "terraform-test", true, true), + testAccCheckComputeInstanceDisk(&instance, instanceName, true, true), ), }, }, @@ -57,6 +60,7 @@ func TestAccComputeInstance_basic1(t *testing.T) { func TestAccComputeInstance_basic2(t *testing.T) { var instance compute.Instance + var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -64,13 +68,13 @@ func TestAccComputeInstance_basic2(t *testing.T) { CheckDestroy: testAccCheckComputeInstanceDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccComputeInstance_basic2, + Config: testAccComputeInstance_basic2(instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceTag(&instance, "foo"), testAccCheckComputeInstanceMetadata(&instance, "foo", "bar"), - testAccCheckComputeInstanceDisk(&instance, "terraform-test", true, true), + testAccCheckComputeInstanceDisk(&instance, instanceName, true, true), ), }, }, @@ -79,6 +83,7 @@ func TestAccComputeInstance_basic2(t *testing.T) { func TestAccComputeInstance_basic3(t *testing.T) { var instance compute.Instance + var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -86,13 +91,13 @@ func TestAccComputeInstance_basic3(t *testing.T) { CheckDestroy: testAccCheckComputeInstanceDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccComputeInstance_basic3, + Config: testAccComputeInstance_basic3(instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceTag(&instance, "foo"), testAccCheckComputeInstanceMetadata(&instance, "foo", "bar"), - 
testAccCheckComputeInstanceDisk(&instance, "terraform-test", true, true), + testAccCheckComputeInstanceDisk(&instance, instanceName, true, true), ), }, }, @@ -101,6 +106,8 @@ func TestAccComputeInstance_basic3(t *testing.T) { func TestAccComputeInstance_IP(t *testing.T) { var instance compute.Instance + var ipName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -108,7 +115,7 @@ func TestAccComputeInstance_IP(t *testing.T) { CheckDestroy: testAccCheckComputeInstanceDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccComputeInstance_ip, + Config: testAccComputeInstance_ip(ipName, instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( "google_compute_instance.foobar", &instance), @@ -121,6 +128,8 @@ func TestAccComputeInstance_IP(t *testing.T) { func TestAccComputeInstance_disks(t *testing.T) { var instance compute.Instance + var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + var diskName = fmt.Sprintf("instance-testd-%s", acctest.RandString(10)) resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -128,12 +137,12 @@ func TestAccComputeInstance_disks(t *testing.T) { CheckDestroy: testAccCheckComputeInstanceDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccComputeInstance_disks, + Config: testAccComputeInstance_disks(diskName, instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( "google_compute_instance.foobar", &instance), - testAccCheckComputeInstanceDisk(&instance, "terraform-test", true, true), - testAccCheckComputeInstanceDisk(&instance, "terraform-test-disk", false, false), + testAccCheckComputeInstanceDisk(&instance, instanceName, true, true), + testAccCheckComputeInstanceDisk(&instance, diskName, false, false), ), }, }, @@ -142,6 +151,7 @@ func TestAccComputeInstance_disks(t *testing.T) { func TestAccComputeInstance_local_ssd(t *testing.T) { var instance compute.Instance + var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -149,11 +159,11 @@ func TestAccComputeInstance_local_ssd(t *testing.T) { CheckDestroy: testAccCheckComputeInstanceDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccComputeInstance_local_ssd, + Config: testAccComputeInstance_local_ssd(instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( "google_compute_instance.local-ssd", &instance), - testAccCheckComputeInstanceDisk(&instance, "terraform-test", true, true), + testAccCheckComputeInstanceDisk(&instance, instanceName, true, true), ), }, }, @@ -162,6 +172,7 @@ func TestAccComputeInstance_local_ssd(t *testing.T) { func TestAccComputeInstance_update_deprecated_network(t *testing.T) { var instance compute.Instance + var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -169,14 +180,14 @@ func TestAccComputeInstance_update_deprecated_network(t *testing.T) { CheckDestroy: testAccCheckComputeInstanceDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccComputeInstance_basic_deprecated_network, + Config: testAccComputeInstance_basic_deprecated_network(instanceName), Check: resource.ComposeTestCheckFunc( 
testAccCheckComputeInstanceExists( "google_compute_instance.foobar", &instance), ), }, resource.TestStep{ - Config: testAccComputeInstance_update_deprecated_network, + Config: testAccComputeInstance_update_deprecated_network(instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( "google_compute_instance.foobar", &instance), @@ -191,6 +202,7 @@ func TestAccComputeInstance_update_deprecated_network(t *testing.T) { func TestAccComputeInstance_forceNewAndChangeMetadata(t *testing.T) { var instance compute.Instance + var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -198,14 +210,14 @@ func TestAccComputeInstance_forceNewAndChangeMetadata(t *testing.T) { CheckDestroy: testAccCheckComputeInstanceDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccComputeInstance_basic, + Config: testAccComputeInstance_basic(instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( "google_compute_instance.foobar", &instance), ), }, resource.TestStep{ - Config: testAccComputeInstance_forceNewAndChangeMetadata, + Config: testAccComputeInstance_forceNewAndChangeMetadata(instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( "google_compute_instance.foobar", &instance), @@ -219,6 +231,7 @@ func TestAccComputeInstance_forceNewAndChangeMetadata(t *testing.T) { func TestAccComputeInstance_update(t *testing.T) { var instance compute.Instance + var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -226,14 +239,14 @@ func TestAccComputeInstance_update(t *testing.T) { CheckDestroy: testAccCheckComputeInstanceDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccComputeInstance_basic, + Config: testAccComputeInstance_basic(instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( "google_compute_instance.foobar", &instance), ), }, resource.TestStep{ - Config: testAccComputeInstance_update, + Config: testAccComputeInstance_update(instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( "google_compute_instance.foobar", &instance), @@ -249,6 +262,7 @@ func TestAccComputeInstance_update(t *testing.T) { func TestAccComputeInstance_service_account(t *testing.T) { var instance compute.Instance + var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -256,7 +270,7 @@ func TestAccComputeInstance_service_account(t *testing.T) { CheckDestroy: testAccCheckComputeInstanceDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccComputeInstance_service_account, + Config: testAccComputeInstance_service_account(instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( "google_compute_instance.foobar", &instance), @@ -274,6 +288,7 @@ func TestAccComputeInstance_service_account(t *testing.T) { func TestAccComputeInstance_scheduling(t *testing.T) { var instance compute.Instance + var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -281,7 +296,7 @@ func TestAccComputeInstance_scheduling(t *testing.T) { CheckDestroy: testAccCheckComputeInstanceDestroy, Steps: []resource.TestStep{ resource.TestStep{ - 
Config: testAccComputeInstance_scheduling, + Config: testAccComputeInstance_scheduling(instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( "google_compute_instance.foobar", &instance), @@ -436,276 +451,300 @@ func testAccCheckComputeInstanceServiceAccount(instance *compute.Instance, scope } } -const testAccComputeInstance_basic_deprecated_network = ` -resource "google_compute_instance" "foobar" { - name = "terraform-test" - machine_type = "n1-standard-1" - zone = "us-central1-a" - can_ip_forward = false - tags = ["foo", "bar"] +func testAccComputeInstance_basic_deprecated_network(instance string) string { + return fmt.Sprintf(` + resource "google_compute_instance" "foobar" { + name = "%s" + machine_type = "n1-standard-1" + zone = "us-central1-a" + can_ip_forward = false + tags = ["foo", "bar"] - disk { - image = "debian-7-wheezy-v20140814" - } + disk { + image = "debian-7-wheezy-v20140814" + } - network { - source = "default" - } + network { + source = "default" + } - metadata { - foo = "bar" - } -}` + metadata { + foo = "bar" + } + }`, instance) +} -const testAccComputeInstance_update_deprecated_network = ` -resource "google_compute_instance" "foobar" { - name = "terraform-test" - machine_type = "n1-standard-1" - zone = "us-central1-a" - tags = ["baz"] +func testAccComputeInstance_update_deprecated_network(instance string) string { + return fmt.Sprintf(` + resource "google_compute_instance" "foobar" { + name = "%s" + machine_type = "n1-standard-1" + zone = "us-central1-a" + tags = ["baz"] - disk { - image = "debian-7-wheezy-v20140814" - } + disk { + image = "debian-7-wheezy-v20140814" + } - network { - source = "default" - } + network { + source = "default" + } - metadata { - bar = "baz" - } -}` + metadata { + bar = "baz" + } + }`, instance) +} -const testAccComputeInstance_basic = ` -resource "google_compute_instance" "foobar" { - name = "terraform-test" - machine_type = "n1-standard-1" - zone = "us-central1-a" - can_ip_forward = false - tags = ["foo", "bar"] +func testAccComputeInstance_basic(instance string) string { + return fmt.Sprintf(` + resource "google_compute_instance" "foobar" { + name = "%s" + machine_type = "n1-standard-1" + zone = "us-central1-a" + can_ip_forward = false + tags = ["foo", "bar"] - disk { - image = "debian-7-wheezy-v20140814" - } + disk { + image = "debian-7-wheezy-v20140814" + } - network_interface { - network = "default" - } + network_interface { + network = "default" + } - metadata { - foo = "bar" - baz = "qux" - } + metadata { + foo = "bar" + baz = "qux" + } - metadata_startup_script = "echo Hello" -}` + metadata_startup_script = "echo Hello" + }`, instance) +} -const testAccComputeInstance_basic2 = ` -resource "google_compute_instance" "foobar" { - name = "terraform-test" - machine_type = "n1-standard-1" - zone = "us-central1-a" - can_ip_forward = false - tags = ["foo", "bar"] +func testAccComputeInstance_basic2(instance string) string { + return fmt.Sprintf(` + resource "google_compute_instance" "foobar" { + name = "%s" + machine_type = "n1-standard-1" + zone = "us-central1-a" + can_ip_forward = false + tags = ["foo", "bar"] - disk { - image = "debian-cloud/debian-7-wheezy-v20140814" - } + disk { + image = "debian-cloud/debian-7-wheezy-v20140814" + } - network_interface { - network = "default" - } + network_interface { + network = "default" + } - metadata { - foo = "bar" - } -}` + metadata { + foo = "bar" + } + }`, instance) +} -const testAccComputeInstance_basic3 = ` -resource "google_compute_instance" "foobar" { - 
name = "terraform-test" - machine_type = "n1-standard-1" - zone = "us-central1-a" - can_ip_forward = false - tags = ["foo", "bar"] +func testAccComputeInstance_basic3(instance string) string { + return fmt.Sprintf(` + resource "google_compute_instance" "foobar" { + name = "%s" + machine_type = "n1-standard-1" + zone = "us-central1-a" + can_ip_forward = false + tags = ["foo", "bar"] - disk { - image = "https://www.googleapis.com/compute/v1/projects/debian-cloud/global/images/debian-7-wheezy-v20140814" - } + disk { + image = "https://www.googleapis.com/compute/v1/projects/debian-cloud/global/images/debian-7-wheezy-v20140814" + } - network_interface { - network = "default" - } + network_interface { + network = "default" + } - metadata { - foo = "bar" - } -}` + metadata { + foo = "bar" + } + }`, instance) +} // Update zone to ForceNew, and change metadata k/v entirely // Generates diff mismatch -const testAccComputeInstance_forceNewAndChangeMetadata = ` -resource "google_compute_instance" "foobar" { - name = "terraform-test" - machine_type = "n1-standard-1" - zone = "us-central1-a" - zone = "us-central1-b" - tags = ["baz"] +func testAccComputeInstance_forceNewAndChangeMetadata(instance string) string { + return fmt.Sprintf(` + resource "google_compute_instance" "foobar" { + name = "%s" + machine_type = "n1-standard-1" + zone = "us-central1-a" + zone = "us-central1-b" + tags = ["baz"] - disk { - image = "debian-7-wheezy-v20140814" - } + disk { + image = "debian-7-wheezy-v20140814" + } - network_interface { - network = "default" - access_config { } - } + network_interface { + network = "default" + access_config { } + } - metadata { - qux = "true" - } -}` + metadata { + qux = "true" + } + }`, instance) +} // Update metadata, tags, and network_interface -const testAccComputeInstance_update = ` -resource "google_compute_instance" "foobar" { - name = "terraform-test" - machine_type = "n1-standard-1" - zone = "us-central1-a" - tags = ["baz"] +func testAccComputeInstance_update(instance string) string { + return fmt.Sprintf(` + resource "google_compute_instance" "foobar" { + name = "%s" + machine_type = "n1-standard-1" + zone = "us-central1-a" + tags = ["baz"] - disk { - image = "debian-7-wheezy-v20140814" - } - - network_interface { - network = "default" - access_config { } - } - - metadata { - bar = "baz" - } -}` - -const testAccComputeInstance_ip = ` -resource "google_compute_address" "foo" { - name = "foo" -} - -resource "google_compute_instance" "foobar" { - name = "terraform-test" - machine_type = "n1-standard-1" - zone = "us-central1-a" - tags = ["foo", "bar"] - - disk { - image = "debian-7-wheezy-v20140814" - } - - network_interface { - network = "default" - access_config { - nat_ip = "${google_compute_address.foo.address}" + disk { + image = "debian-7-wheezy-v20140814" } - } - metadata { - foo = "bar" - } -}` + network_interface { + network = "default" + access_config { } + } -const testAccComputeInstance_disks = ` -resource "google_compute_disk" "foobar" { - name = "terraform-test-disk" - size = 10 - type = "pd-ssd" - zone = "us-central1-a" + metadata { + bar = "baz" + } + }`, instance) } -resource "google_compute_instance" "foobar" { - name = "terraform-test" - machine_type = "n1-standard-1" - zone = "us-central1-a" - - disk { - image = "debian-7-wheezy-v20140814" +func testAccComputeInstance_ip(ip, instance string) string { + return fmt.Sprintf(` + resource "google_compute_address" "foo" { + name = "%s" } - disk { - disk = "${google_compute_disk.foobar.name}" - auto_delete = false + 
resource "google_compute_instance" "foobar" { + name = "%s" + machine_type = "n1-standard-1" + zone = "us-central1-a" + tags = ["foo", "bar"] + + disk { + image = "debian-7-wheezy-v20140814" + } + + network_interface { + network = "default" + access_config { + nat_ip = "${google_compute_address.foo.address}" + } + } + + metadata { + foo = "bar" + } + }`, ip, instance) +} + +func testAccComputeInstance_disks(disk, instance string) string { + return fmt.Sprintf(` + resource "google_compute_disk" "foobar" { + name = "%s" + size = 10 + type = "pd-ssd" + zone = "us-central1-a" } - network_interface { - network = "default" - } + resource "google_compute_instance" "foobar" { + name = "%s" + machine_type = "n1-standard-1" + zone = "us-central1-a" - metadata { - foo = "bar" - } -}` + disk { + image = "debian-7-wheezy-v20140814" + } -const testAccComputeInstance_local_ssd = ` -resource "google_compute_instance" "local-ssd" { - name = "terraform-test" - machine_type = "n1-standard-1" - zone = "us-central1-a" + disk { + disk = "${google_compute_disk.foobar.name}" + auto_delete = false + } - disk { - image = "debian-7-wheezy-v20140814" - } + network_interface { + network = "default" + } - disk { - type = "local-ssd" - scratch = true - } + metadata { + foo = "bar" + } + }`, disk, instance) +} - network_interface { - network = "default" - } +func testAccComputeInstance_local_ssd(instance string) string { + return fmt.Sprintf(` + resource "google_compute_instance" "local-ssd" { + name = "%s" + machine_type = "n1-standard-1" + zone = "us-central1-a" -}` + disk { + image = "debian-7-wheezy-v20140814" + } -const testAccComputeInstance_service_account = ` -resource "google_compute_instance" "foobar" { - name = "terraform-test" - machine_type = "n1-standard-1" - zone = "us-central1-a" + disk { + type = "local-ssd" + scratch = true + } - disk { - image = "debian-7-wheezy-v20140814" - } + network_interface { + network = "default" + } - network_interface { - network = "default" - } + }`, instance) +} - service_account { - scopes = [ - "userinfo-email", - "compute-ro", - "storage-ro", - ] - } -}` +func testAccComputeInstance_service_account(instance string) string { + return fmt.Sprintf(` + resource "google_compute_instance" "foobar" { + name = "%s" + machine_type = "n1-standard-1" + zone = "us-central1-a" -const testAccComputeInstance_scheduling = ` -resource "google_compute_instance" "foobar" { - name = "terraform-test" - machine_type = "n1-standard-1" - zone = "us-central1-a" + disk { + image = "debian-7-wheezy-v20140814" + } - disk { - image = "debian-7-wheezy-v20140814" - } + network_interface { + network = "default" + } - network_interface { - network = "default" - } + service_account { + scopes = [ + "userinfo-email", + "compute-ro", + "storage-ro", + ] + } + }`, instance) +} - scheduling { - } -}` +func testAccComputeInstance_scheduling(instance string) string { + return fmt.Sprintf(` + resource "google_compute_instance" "foobar" { + name = "%s" + machine_type = "n1-standard-1" + zone = "us-central1-a" + + disk { + image = "debian-7-wheezy-v20140814" + } + + network_interface { + network = "default" + } + + scheduling { + } + }`, instance) +} diff --git a/builtin/providers/google/resource_compute_network.go b/builtin/providers/google/resource_compute_network.go index 5a61f2ad65..a3c72aa114 100644 --- a/builtin/providers/google/resource_compute_network.go +++ b/builtin/providers/google/resource_compute_network.go @@ -74,6 +74,7 @@ func resourceComputeNetworkRead(d *schema.ResourceData, meta interface{}) 
error config.Project, d.Id()).Do() if err != nil { if gerr, ok := err.(*googleapi.Error); ok && gerr.Code == 404 { + log.Printf("[WARN] Removing Network %q because it's gone", d.Get("name").(string)) // The resource doesn't exist anymore d.SetId("") diff --git a/builtin/providers/google/resource_compute_network_test.go b/builtin/providers/google/resource_compute_network_test.go index 89827f5762..4337bf7f71 100644 --- a/builtin/providers/google/resource_compute_network_test.go +++ b/builtin/providers/google/resource_compute_network_test.go @@ -4,6 +4,7 @@ import ( "fmt" "testing" + "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" "google.golang.org/api/compute/v1" @@ -75,8 +76,8 @@ func testAccCheckComputeNetworkExists(n string, network *compute.Network) resour } } -const testAccComputeNetwork_basic = ` +var testAccComputeNetwork_basic = fmt.Sprintf(` resource "google_compute_network" "foobar" { - name = "terraform-test" + name = "network-test-%s" ipv4_range = "10.0.0.0/16" -}` +}`, acctest.RandString(10)) diff --git a/builtin/providers/google/resource_compute_project_metadata.go b/builtin/providers/google/resource_compute_project_metadata.go index c2f8a4a5fa..c2508c8f31 100644 --- a/builtin/providers/google/resource_compute_project_metadata.go +++ b/builtin/providers/google/resource_compute_project_metadata.go @@ -4,10 +4,9 @@ import ( "fmt" "log" - // "github.com/hashicorp/terraform/helper/hashcode" "github.com/hashicorp/terraform/helper/schema" "google.golang.org/api/compute/v1" - // "google.golang.org/api/googleapi" + "google.golang.org/api/googleapi" ) func resourceComputeProjectMetadata() *schema.Resource { @@ -85,12 +84,20 @@ func resourceComputeProjectMetadataRead(d *schema.ResourceData, meta interface{} log.Printf("[DEBUG] Loading project service: %s", config.Project) project, err := config.clientCompute.Projects.Get(config.Project).Do() if err != nil { + if gerr, ok := err.(*googleapi.Error); ok && gerr.Code == 404 { + log.Printf("[WARN] Removing Project Metadata because it's gone") + // The resource doesn't exist anymore + d.SetId("") + + return nil + } + return fmt.Errorf("Error loading project '%s': %s", config.Project, err) } md := project.CommonInstanceMetadata - if err = d.Set("metadata", MetadataFormatSchema(md)); err != nil { + if err = d.Set("metadata", MetadataFormatSchema(d.Get("metadata").(map[string]interface{}), md)); err != nil { return fmt.Errorf("Error setting metadata: %s", err) } diff --git a/builtin/providers/google/resource_compute_project_metadata_test.go b/builtin/providers/google/resource_compute_project_metadata_test.go index 2644433864..7be3dfb263 100644 --- a/builtin/providers/google/resource_compute_project_metadata_test.go +++ b/builtin/providers/google/resource_compute_project_metadata_test.go @@ -193,7 +193,7 @@ resource "google_compute_project_metadata" "fizzbuzz" { const testAccComputeProject_basic1_metadata = ` resource "google_compute_project_metadata" "fizzbuzz" { metadata { - kiwi = "papaya" + kiwi = "papaya" finches = "darwinism" } }` @@ -201,7 +201,7 @@ resource "google_compute_project_metadata" "fizzbuzz" { const testAccComputeProject_modify0_metadata = ` resource "google_compute_project_metadata" "fizzbuzz" { metadata { - paper = "pen" + paper = "pen" genghis_khan = "french bread" happy = "smiling" } @@ -210,7 +210,7 @@ resource "google_compute_project_metadata" "fizzbuzz" { const testAccComputeProject_modify1_metadata = ` resource 
"google_compute_project_metadata" "fizzbuzz" { metadata { - paper = "pen" + paper = "pen" paris = "french bread" happy = "laughing" } diff --git a/builtin/providers/google/resource_compute_route.go b/builtin/providers/google/resource_compute_route.go index 82b43d3580..9b5b5292fa 100644 --- a/builtin/providers/google/resource_compute_route.go +++ b/builtin/providers/google/resource_compute_route.go @@ -185,6 +185,7 @@ func resourceComputeRouteRead(d *schema.ResourceData, meta interface{}) error { config.Project, d.Id()).Do() if err != nil { if gerr, ok := err.(*googleapi.Error); ok && gerr.Code == 404 { + log.Printf("[WARN] Removing Route %q because it's gone", d.Get("name").(string)) // The resource doesn't exist anymore d.SetId("") diff --git a/builtin/providers/google/resource_compute_route_test.go b/builtin/providers/google/resource_compute_route_test.go index e4b8627e93..dff2ed0037 100644 --- a/builtin/providers/google/resource_compute_route_test.go +++ b/builtin/providers/google/resource_compute_route_test.go @@ -4,6 +4,7 @@ import ( "fmt" "testing" + "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" "google.golang.org/api/compute/v1" @@ -75,16 +76,16 @@ func testAccCheckComputeRouteExists(n string, route *compute.Route) resource.Tes } } -const testAccComputeRoute_basic = ` +var testAccComputeRoute_basic = fmt.Sprintf(` resource "google_compute_network" "foobar" { - name = "terraform-test" + name = "route-test-%s" ipv4_range = "10.0.0.0/16" } resource "google_compute_route" "foobar" { - name = "terraform-test" + name = "route-test-%s" dest_range = "15.0.0.0/24" network = "${google_compute_network.foobar.name}" next_hop_ip = "10.0.1.5" priority = 100 -}` +}`, acctest.RandString(10), acctest.RandString(10)) diff --git a/builtin/providers/google/resource_compute_ssl_certificate.go b/builtin/providers/google/resource_compute_ssl_certificate.go index 05de350fac..a80bc2fb24 100644 --- a/builtin/providers/google/resource_compute_ssl_certificate.go +++ b/builtin/providers/google/resource_compute_ssl_certificate.go @@ -2,6 +2,7 @@ package google import ( "fmt" + "log" "strconv" "github.com/hashicorp/terraform/helper/schema" @@ -91,6 +92,7 @@ func resourceComputeSslCertificateRead(d *schema.ResourceData, meta interface{}) config.Project, d.Id()).Do() if err != nil { if gerr, ok := err.(*googleapi.Error); ok && gerr.Code == 404 { + log.Printf("[WARN] Removing SSL Certificate %q because it's gone", d.Get("name").(string)) // The resource doesn't exist anymore d.SetId("") diff --git a/builtin/providers/google/resource_compute_ssl_certificate_test.go b/builtin/providers/google/resource_compute_ssl_certificate_test.go index a237bea165..373e0ab303 100644 --- a/builtin/providers/google/resource_compute_ssl_certificate_test.go +++ b/builtin/providers/google/resource_compute_ssl_certificate_test.go @@ -4,6 +4,7 @@ import ( "fmt" "testing" + "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" ) @@ -70,11 +71,11 @@ func testAccCheckComputeSslCertificateExists(n string) resource.TestCheckFunc { } } -const testAccComputeSslCertificate_basic = ` +var testAccComputeSslCertificate_basic = fmt.Sprintf(` resource "google_compute_ssl_certificate" "foobar" { - name = "terraform-test" + name = "sslcert-test-%s" description = "very descriptive" private_key = "${file("test-fixtures/ssl_cert/test.key")}" certificate = 
"${file("test-fixtures/ssl_cert/test.crt")}" } -` +`, acctest.RandString(10)) diff --git a/builtin/providers/google/resource_compute_target_http_proxy.go b/builtin/providers/google/resource_compute_target_http_proxy.go index 6cf2ccf5d0..72644fb017 100644 --- a/builtin/providers/google/resource_compute_target_http_proxy.go +++ b/builtin/providers/google/resource_compute_target_http_proxy.go @@ -111,6 +111,7 @@ func resourceComputeTargetHttpProxyRead(d *schema.ResourceData, meta interface{} config.Project, d.Id()).Do() if err != nil { if gerr, ok := err.(*googleapi.Error); ok && gerr.Code == 404 { + log.Printf("[WARN] Removing Target HTTP Proxy %q because it's gone", d.Get("name").(string)) // The resource doesn't exist anymore d.SetId("") diff --git a/builtin/providers/google/resource_compute_target_http_proxy_test.go b/builtin/providers/google/resource_compute_target_http_proxy_test.go index 6337ada57f..591a3eaa55 100644 --- a/builtin/providers/google/resource_compute_target_http_proxy_test.go +++ b/builtin/providers/google/resource_compute_target_http_proxy_test.go @@ -4,11 +4,17 @@ import ( "fmt" "testing" + "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" ) func TestAccComputeTargetHttpProxy_basic(t *testing.T) { + target := fmt.Sprintf("thttp-test-%s", acctest.RandString(10)) + backend := fmt.Sprintf("thttp-test-%s", acctest.RandString(10)) + hc := fmt.Sprintf("thttp-test-%s", acctest.RandString(10)) + urlmap1 := fmt.Sprintf("thttp-test-%s", acctest.RandString(10)) + urlmap2 := fmt.Sprintf("thttp-test-%s", acctest.RandString(10)) resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -16,7 +22,7 @@ func TestAccComputeTargetHttpProxy_basic(t *testing.T) { CheckDestroy: testAccCheckComputeTargetHttpProxyDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccComputeTargetHttpProxy_basic1, + Config: testAccComputeTargetHttpProxy_basic1(target, backend, hc, urlmap1, urlmap2), Check: resource.ComposeTestCheckFunc( testAccCheckComputeTargetHttpProxyExists( "google_compute_target_http_proxy.foobar"), @@ -27,6 +33,11 @@ func TestAccComputeTargetHttpProxy_basic(t *testing.T) { } func TestAccComputeTargetHttpProxy_update(t *testing.T) { + target := fmt.Sprintf("thttp-test-%s", acctest.RandString(10)) + backend := fmt.Sprintf("thttp-test-%s", acctest.RandString(10)) + hc := fmt.Sprintf("thttp-test-%s", acctest.RandString(10)) + urlmap1 := fmt.Sprintf("thttp-test-%s", acctest.RandString(10)) + urlmap2 := fmt.Sprintf("thttp-test-%s", acctest.RandString(10)) resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -34,7 +45,7 @@ func TestAccComputeTargetHttpProxy_update(t *testing.T) { CheckDestroy: testAccCheckComputeTargetHttpProxyDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccComputeTargetHttpProxy_basic1, + Config: testAccComputeTargetHttpProxy_basic1(target, backend, hc, urlmap1, urlmap2), Check: resource.ComposeTestCheckFunc( testAccCheckComputeTargetHttpProxyExists( "google_compute_target_http_proxy.foobar"), @@ -42,7 +53,7 @@ func TestAccComputeTargetHttpProxy_update(t *testing.T) { }, resource.TestStep{ - Config: testAccComputeTargetHttpProxy_basic2, + Config: testAccComputeTargetHttpProxy_basic2(target, backend, hc, urlmap1, urlmap2), Check: resource.ComposeTestCheckFunc( testAccCheckComputeTargetHttpProxyExists( "google_compute_target_http_proxy.foobar"), @@ -97,130 +108,134 @@ func 
testAccCheckComputeTargetHttpProxyExists(n string) resource.TestCheckFunc { } } -const testAccComputeTargetHttpProxy_basic1 = ` -resource "google_compute_target_http_proxy" "foobar" { - description = "Resource created for Terraform acceptance testing" - name = "terraform-test" - url_map = "${google_compute_url_map.foobar1.self_link}" -} - -resource "google_compute_backend_service" "foobar" { - name = "service" - health_checks = ["${google_compute_http_health_check.zero.self_link}"] -} - -resource "google_compute_http_health_check" "zero" { - name = "tf-test-zero" - request_path = "/" - check_interval_sec = 1 - timeout_sec = 1 -} - -resource "google_compute_url_map" "foobar1" { - name = "myurlmap1" - default_service = "${google_compute_backend_service.foobar.self_link}" - host_rule { - hosts = ["mysite.com", "myothersite.com"] - path_matcher = "boop" +func testAccComputeTargetHttpProxy_basic1(target, backend, hc, urlmap1, urlmap2 string) string { + return fmt.Sprintf(` + resource "google_compute_target_http_proxy" "foobar" { + description = "Resource created for Terraform acceptance testing" + name = "%s" + url_map = "${google_compute_url_map.foobar1.self_link}" } - path_matcher { + + resource "google_compute_backend_service" "foobar" { + name = "%s" + health_checks = ["${google_compute_http_health_check.zero.self_link}"] + } + + resource "google_compute_http_health_check" "zero" { + name = "%s" + request_path = "/" + check_interval_sec = 1 + timeout_sec = 1 + } + + resource "google_compute_url_map" "foobar1" { + name = "%s" default_service = "${google_compute_backend_service.foobar.self_link}" - name = "boop" - path_rule { - paths = ["/*"] + host_rule { + hosts = ["mysite.com", "myothersite.com"] + path_matcher = "boop" + } + path_matcher { + default_service = "${google_compute_backend_service.foobar.self_link}" + name = "boop" + path_rule { + paths = ["/*"] + service = "${google_compute_backend_service.foobar.self_link}" + } + } + test { + host = "mysite.com" + path = "/*" service = "${google_compute_backend_service.foobar.self_link}" } } - test { - host = "mysite.com" - path = "/*" - service = "${google_compute_backend_service.foobar.self_link}" - } -} -resource "google_compute_url_map" "foobar2" { - name = "myurlmap2" - default_service = "${google_compute_backend_service.foobar.self_link}" - host_rule { - hosts = ["mysite.com", "myothersite.com"] - path_matcher = "boop" - } - path_matcher { + resource "google_compute_url_map" "foobar2" { + name = "%s" default_service = "${google_compute_backend_service.foobar.self_link}" - name = "boop" - path_rule { - paths = ["/*"] + host_rule { + hosts = ["mysite.com", "myothersite.com"] + path_matcher = "boop" + } + path_matcher { + default_service = "${google_compute_backend_service.foobar.self_link}" + name = "boop" + path_rule { + paths = ["/*"] + service = "${google_compute_backend_service.foobar.self_link}" + } + } + test { + host = "mysite.com" + path = "/*" service = "${google_compute_backend_service.foobar.self_link}" } } - test { - host = "mysite.com" - path = "/*" - service = "${google_compute_backend_service.foobar.self_link}" + `, target, backend, hc, urlmap1, urlmap2) +} + +func testAccComputeTargetHttpProxy_basic2(target, backend, hc, urlmap1, urlmap2 string) string { + return fmt.Sprintf(` + resource "google_compute_target_http_proxy" "foobar" { + description = "Resource created for Terraform acceptance testing" + name = "%s" + url_map = "${google_compute_url_map.foobar2.self_link}" } -} -` -const testAccComputeTargetHttpProxy_basic2 
= ` -resource "google_compute_target_http_proxy" "foobar" { - description = "Resource created for Terraform acceptance testing" - name = "terraform-test" - url_map = "${google_compute_url_map.foobar2.self_link}" -} - -resource "google_compute_backend_service" "foobar" { - name = "service" - health_checks = ["${google_compute_http_health_check.zero.self_link}"] -} - -resource "google_compute_http_health_check" "zero" { - name = "tf-test-zero" - request_path = "/" - check_interval_sec = 1 - timeout_sec = 1 -} - -resource "google_compute_url_map" "foobar1" { - name = "myurlmap1" - default_service = "${google_compute_backend_service.foobar.self_link}" - host_rule { - hosts = ["mysite.com", "myothersite.com"] - path_matcher = "boop" + resource "google_compute_backend_service" "foobar" { + name = "%s" + health_checks = ["${google_compute_http_health_check.zero.self_link}"] } - path_matcher { + + resource "google_compute_http_health_check" "zero" { + name = "%s" + request_path = "/" + check_interval_sec = 1 + timeout_sec = 1 + } + + resource "google_compute_url_map" "foobar1" { + name = "%s" default_service = "${google_compute_backend_service.foobar.self_link}" - name = "boop" - path_rule { - paths = ["/*"] + host_rule { + hosts = ["mysite.com", "myothersite.com"] + path_matcher = "boop" + } + path_matcher { + default_service = "${google_compute_backend_service.foobar.self_link}" + name = "boop" + path_rule { + paths = ["/*"] + service = "${google_compute_backend_service.foobar.self_link}" + } + } + test { + host = "mysite.com" + path = "/*" service = "${google_compute_backend_service.foobar.self_link}" } } - test { - host = "mysite.com" - path = "/*" - service = "${google_compute_backend_service.foobar.self_link}" - } -} -resource "google_compute_url_map" "foobar2" { - name = "myurlmap2" - default_service = "${google_compute_backend_service.foobar.self_link}" - host_rule { - hosts = ["mysite.com", "myothersite.com"] - path_matcher = "boop" - } - path_matcher { + resource "google_compute_url_map" "foobar2" { + name = "%s" default_service = "${google_compute_backend_service.foobar.self_link}" - name = "boop" - path_rule { - paths = ["/*"] + host_rule { + hosts = ["mysite.com", "myothersite.com"] + path_matcher = "boop" + } + path_matcher { + default_service = "${google_compute_backend_service.foobar.self_link}" + name = "boop" + path_rule { + paths = ["/*"] + service = "${google_compute_backend_service.foobar.self_link}" + } + } + test { + host = "mysite.com" + path = "/*" service = "${google_compute_backend_service.foobar.self_link}" } } - test { - host = "mysite.com" - path = "/*" - service = "${google_compute_backend_service.foobar.self_link}" - } + `, target, backend, hc, urlmap1, urlmap2) } -` diff --git a/builtin/providers/google/resource_compute_target_https_proxy.go b/builtin/providers/google/resource_compute_target_https_proxy.go index 1ea8444414..b30fd1eab8 100644 --- a/builtin/providers/google/resource_compute_target_https_proxy.go +++ b/builtin/providers/google/resource_compute_target_https_proxy.go @@ -186,6 +186,7 @@ func resourceComputeTargetHttpsProxyRead(d *schema.ResourceData, meta interface{ config.Project, d.Id()).Do() if err != nil { if gerr, ok := err.(*googleapi.Error); ok && gerr.Code == 404 { + log.Printf("[WARN] Removing Target HTTPS Proxy %q because it's gone", d.Get("name").(string)) // The resource doesn't exist anymore d.SetId("") diff --git a/builtin/providers/google/resource_compute_target_https_proxy_test.go 
b/builtin/providers/google/resource_compute_target_https_proxy_test.go index af3704d3e0..f8d731f080 100644 --- a/builtin/providers/google/resource_compute_target_https_proxy_test.go +++ b/builtin/providers/google/resource_compute_target_https_proxy_test.go @@ -4,6 +4,7 @@ import ( "fmt" "testing" + "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" ) @@ -97,28 +98,28 @@ func testAccCheckComputeTargetHttpsProxyExists(n string) resource.TestCheckFunc } } -const testAccComputeTargetHttpsProxy_basic1 = ` +var testAccComputeTargetHttpsProxy_basic1 = fmt.Sprintf(` resource "google_compute_target_https_proxy" "foobar" { description = "Resource created for Terraform acceptance testing" - name = "terraform-test" + name = "httpsproxy-test-%s" url_map = "${google_compute_url_map.foobar.self_link}" ssl_certificates = ["${google_compute_ssl_certificate.foobar1.self_link}"] } resource "google_compute_backend_service" "foobar" { - name = "service" + name = "httpsproxy-test-%s" health_checks = ["${google_compute_http_health_check.zero.self_link}"] } resource "google_compute_http_health_check" "zero" { - name = "tf-test-zero" + name = "httpsproxy-test-%s" request_path = "/" check_interval_sec = 1 timeout_sec = 1 } resource "google_compute_url_map" "foobar" { - name = "myurlmap" + name = "httpsproxy-test-%s" default_service = "${google_compute_backend_service.foobar.self_link}" host_rule { hosts = ["mysite.com", "myothersite.com"] @@ -140,42 +141,43 @@ resource "google_compute_url_map" "foobar" { } resource "google_compute_ssl_certificate" "foobar1" { - name = "terraform-test1" + name = "httpsproxy-test-%s" description = "very descriptive" private_key = "${file("test-fixtures/ssl_cert/test.key")}" certificate = "${file("test-fixtures/ssl_cert/test.crt")}" } resource "google_compute_ssl_certificate" "foobar2" { - name = "terraform-test2" + name = "httpsproxy-test-%s" description = "very descriptive" private_key = "${file("test-fixtures/ssl_cert/test.key")}" certificate = "${file("test-fixtures/ssl_cert/test.crt")}" } -` +`, acctest.RandString(10), acctest.RandString(10), acctest.RandString(10), + acctest.RandString(10), acctest.RandString(10), acctest.RandString(10)) -const testAccComputeTargetHttpsProxy_basic2 = ` +var testAccComputeTargetHttpsProxy_basic2 = fmt.Sprintf(` resource "google_compute_target_https_proxy" "foobar" { description = "Resource created for Terraform acceptance testing" - name = "terraform-test" + name = "httpsproxy-test-%s" url_map = "${google_compute_url_map.foobar.self_link}" ssl_certificates = ["${google_compute_ssl_certificate.foobar1.self_link}"] } resource "google_compute_backend_service" "foobar" { - name = "service" + name = "httpsproxy-test-%s" health_checks = ["${google_compute_http_health_check.zero.self_link}"] } resource "google_compute_http_health_check" "zero" { - name = "tf-test-zero" + name = "httpsproxy-test-%s" request_path = "/" check_interval_sec = 1 timeout_sec = 1 } resource "google_compute_url_map" "foobar" { - name = "myurlmap" + name = "httpsproxy-test-%s" default_service = "${google_compute_backend_service.foobar.self_link}" host_rule { hosts = ["mysite.com", "myothersite.com"] @@ -197,16 +199,17 @@ resource "google_compute_url_map" "foobar" { } resource "google_compute_ssl_certificate" "foobar1" { - name = "terraform-test1" + name = "httpsproxy-test-%s" description = "very descriptive" private_key = "${file("test-fixtures/ssl_cert/test.key")}" certificate = 
"${file("test-fixtures/ssl_cert/test.crt")}" } resource "google_compute_ssl_certificate" "foobar2" { - name = "terraform-test2" + name = "httpsproxy-test-%s" description = "very descriptive" private_key = "${file("test-fixtures/ssl_cert/test.key")}" certificate = "${file("test-fixtures/ssl_cert/test.crt")}" } -` +`, acctest.RandString(10), acctest.RandString(10), acctest.RandString(10), + acctest.RandString(10), acctest.RandString(10), acctest.RandString(10)) diff --git a/builtin/providers/google/resource_compute_target_pool.go b/builtin/providers/google/resource_compute_target_pool.go index 91e83a46aa..fa25a1b720 100644 --- a/builtin/providers/google/resource_compute_target_pool.go +++ b/builtin/providers/google/resource_compute_target_pool.go @@ -330,6 +330,7 @@ func resourceComputeTargetPoolRead(d *schema.ResourceData, meta interface{}) err config.Project, region, d.Id()).Do() if err != nil { if gerr, ok := err.(*googleapi.Error); ok && gerr.Code == 404 { + log.Printf("[WARN] Removing Target Pool %q because it's gone", d.Get("name").(string)) // The resource doesn't exist anymore d.SetId("") diff --git a/builtin/providers/google/resource_compute_target_pool_test.go b/builtin/providers/google/resource_compute_target_pool_test.go index 4a65eaac65..2ab48d319c 100644 --- a/builtin/providers/google/resource_compute_target_pool_test.go +++ b/builtin/providers/google/resource_compute_target_pool_test.go @@ -4,6 +4,7 @@ import ( "fmt" "testing" + "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" ) @@ -71,10 +72,10 @@ func testAccCheckComputeTargetPoolExists(n string) resource.TestCheckFunc { } } -const testAccComputeTargetPool_basic = ` +var testAccComputeTargetPool_basic = fmt.Sprintf(` resource "google_compute_target_pool" "foobar" { description = "Resource created for Terraform acceptance testing" instances = ["us-central1-a/foo", "us-central1-b/bar"] - name = "terraform-test" + name = "tpool-test-%s" session_affinity = "CLIENT_IP_PROTO" -}` +}`, acctest.RandString(10)) diff --git a/builtin/providers/google/resource_compute_url_map.go b/builtin/providers/google/resource_compute_url_map.go index 4b29c4360d..47a38431fd 100644 --- a/builtin/providers/google/resource_compute_url_map.go +++ b/builtin/providers/google/resource_compute_url_map.go @@ -2,10 +2,12 @@ package google import ( "fmt" + "log" "strconv" "github.com/hashicorp/terraform/helper/schema" "google.golang.org/api/compute/v1" + "google.golang.org/api/googleapi" ) func resourceComputeUrlMap() *schema.Resource { @@ -292,6 +294,14 @@ func resourceComputeUrlMapRead(d *schema.ResourceData, meta interface{}) error { urlMap, err := config.clientCompute.UrlMaps.Get(config.Project, name).Do() if err != nil { + if gerr, ok := err.(*googleapi.Error); ok && gerr.Code == 404 { + log.Printf("[WARN] Removing URL Map %q because it's gone", d.Get("name").(string)) + // The resource doesn't exist anymore + d.SetId("") + + return nil + } + return fmt.Errorf("Error, failed to get Url Map %s: %s", name, err) } diff --git a/builtin/providers/google/resource_compute_url_map_test.go b/builtin/providers/google/resource_compute_url_map_test.go index ac2f08b135..0f43df5f4e 100644 --- a/builtin/providers/google/resource_compute_url_map_test.go +++ b/builtin/providers/google/resource_compute_url_map_test.go @@ -4,6 +4,7 @@ import ( "fmt" "testing" + "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" 
"github.com/hashicorp/terraform/terraform" ) @@ -119,21 +120,21 @@ func testAccCheckComputeUrlMapExists(n string) resource.TestCheckFunc { } } -const testAccComputeUrlMap_basic1 = ` +var testAccComputeUrlMap_basic1 = fmt.Sprintf(` resource "google_compute_backend_service" "foobar" { - name = "service" + name = "urlmap-test-%s" health_checks = ["${google_compute_http_health_check.zero.self_link}"] } resource "google_compute_http_health_check" "zero" { - name = "tf-test-zero" + name = "urlmap-test-%s" request_path = "/" check_interval_sec = 1 timeout_sec = 1 } resource "google_compute_url_map" "foobar" { - name = "myurlmap" + name = "urlmap-test-%s" default_service = "${google_compute_backend_service.foobar.self_link}" host_rule { @@ -156,23 +157,23 @@ resource "google_compute_url_map" "foobar" { service = "${google_compute_backend_service.foobar.self_link}" } } -` +`, acctest.RandString(10), acctest.RandString(10), acctest.RandString(10)) -const testAccComputeUrlMap_basic2 = ` +var testAccComputeUrlMap_basic2 = fmt.Sprintf(` resource "google_compute_backend_service" "foobar" { - name = "service" + name = "urlmap-test-%s" health_checks = ["${google_compute_http_health_check.zero.self_link}"] } resource "google_compute_http_health_check" "zero" { - name = "tf-test-zero" + name = "urlmap-test-%s" request_path = "/" check_interval_sec = 1 timeout_sec = 1 } resource "google_compute_url_map" "foobar" { - name = "myurlmap" + name = "urlmap-test-%s" default_service = "${google_compute_backend_service.foobar.self_link}" host_rule { @@ -195,23 +196,23 @@ resource "google_compute_url_map" "foobar" { service = "${google_compute_backend_service.foobar.self_link}" } } -` +`, acctest.RandString(10), acctest.RandString(10), acctest.RandString(10)) -const testAccComputeUrlMap_advanced1 = ` +var testAccComputeUrlMap_advanced1 = fmt.Sprintf(` resource "google_compute_backend_service" "foobar" { - name = "service" + name = "urlmap-test-%s" health_checks = ["${google_compute_http_health_check.zero.self_link}"] } resource "google_compute_http_health_check" "zero" { - name = "tf-test-zero" + name = "urlmap-test-%s" request_path = "/" check_interval_sec = 1 timeout_sec = 1 } resource "google_compute_url_map" "foobar" { - name = "myurlmap" + name = "urlmap-test-%s" default_service = "${google_compute_backend_service.foobar.self_link}" host_rule { @@ -242,23 +243,23 @@ resource "google_compute_url_map" "foobar" { } } } -` +`, acctest.RandString(10), acctest.RandString(10), acctest.RandString(10)) -const testAccComputeUrlMap_advanced2 = ` +var testAccComputeUrlMap_advanced2 = fmt.Sprintf(` resource "google_compute_backend_service" "foobar" { - name = "service" + name = "urlmap-test-%s" health_checks = ["${google_compute_http_health_check.zero.self_link}"] } resource "google_compute_http_health_check" "zero" { - name = "tf-test-zero" + name = "urlmap-test-%s" request_path = "/" check_interval_sec = 1 timeout_sec = 1 } resource "google_compute_url_map" "foobar" { - name = "myurlmap" + name = "urlmap-test-%s" default_service = "${google_compute_backend_service.foobar.self_link}" host_rule { @@ -308,4 +309,4 @@ resource "google_compute_url_map" "foobar" { } } } -` +`, acctest.RandString(10), acctest.RandString(10), acctest.RandString(10)) diff --git a/builtin/providers/google/resource_compute_vpn_gateway.go b/builtin/providers/google/resource_compute_vpn_gateway.go index bd5350b9c3..697ec8b649 100644 --- a/builtin/providers/google/resource_compute_vpn_gateway.go +++ 
b/builtin/providers/google/resource_compute_vpn_gateway.go @@ -2,10 +2,12 @@ package google import ( "fmt" + "log" "github.com/hashicorp/terraform/helper/schema" "google.golang.org/api/compute/v1" + "google.golang.org/api/googleapi" ) func resourceComputeVpnGateway() *schema.Resource { @@ -88,6 +90,14 @@ func resourceComputeVpnGatewayRead(d *schema.ResourceData, meta interface{}) err vpnGateway, err := vpnGatewaysService.Get(project, region, name).Do() if err != nil { + if gerr, ok := err.(*googleapi.Error); ok && gerr.Code == 404 { + log.Printf("[WARN] Removing VPN Gateway %q because it's gone", d.Get("name").(string)) + // The resource doesn't exist anymore + d.SetId("") + + return nil + } + return fmt.Errorf("Error Reading VPN Gateway %s: %s", name, err) } diff --git a/builtin/providers/google/resource_compute_vpn_gateway_test.go b/builtin/providers/google/resource_compute_vpn_gateway_test.go index 1d62704239..1011808a89 100644 --- a/builtin/providers/google/resource_compute_vpn_gateway_test.go +++ b/builtin/providers/google/resource_compute_vpn_gateway_test.go @@ -4,6 +4,7 @@ import ( "fmt" "testing" + "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" @@ -79,13 +80,13 @@ func testAccCheckComputeVpnGatewayExists(n string) resource.TestCheckFunc { } } -const testAccComputeVpnGateway_basic = ` +var testAccComputeVpnGateway_basic = fmt.Sprintf(` resource "google_compute_network" "foobar" { - name = "tf-test-network" + name = "gateway-test-%s" ipv4_range = "10.0.0.0/16" } resource "google_compute_vpn_gateway" "foobar" { - name = "tf-test-vpn-gateway" + name = "gateway-test-%s" network = "${google_compute_network.foobar.self_link}" region = "us-central1" -} ` +}`, acctest.RandString(10), acctest.RandString(10)) diff --git a/builtin/providers/google/resource_compute_vpn_tunnel.go b/builtin/providers/google/resource_compute_vpn_tunnel.go index 172f96a907..f6290504b8 100644 --- a/builtin/providers/google/resource_compute_vpn_tunnel.go +++ b/builtin/providers/google/resource_compute_vpn_tunnel.go @@ -2,10 +2,12 @@ package google import ( "fmt" + "log" "github.com/hashicorp/terraform/helper/schema" "google.golang.org/api/compute/v1" + "google.golang.org/api/googleapi" ) func resourceComputeVpnTunnel() *schema.Resource { @@ -118,6 +120,14 @@ func resourceComputeVpnTunnelRead(d *schema.ResourceData, meta interface{}) erro vpnTunnel, err := vpnTunnelsService.Get(project, region, name).Do() if err != nil { + if gerr, ok := err.(*googleapi.Error); ok && gerr.Code == 404 { + log.Printf("[WARN] Removing VPN Tunnel %q because it's gone", d.Get("name").(string)) + // The resource doesn't exist anymore + d.SetId("") + + return nil + } + return fmt.Errorf("Error Reading VPN Tunnel %s: %s", name, err) } diff --git a/builtin/providers/google/resource_compute_vpn_tunnel_test.go b/builtin/providers/google/resource_compute_vpn_tunnel_test.go index 4bb666879b..007441eeba 100644 --- a/builtin/providers/google/resource_compute_vpn_tunnel_test.go +++ b/builtin/providers/google/resource_compute_vpn_tunnel_test.go @@ -4,6 +4,7 @@ import ( "fmt" "testing" + "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" @@ -79,29 +80,29 @@ func testAccCheckComputeVpnTunnelExists(n string) resource.TestCheckFunc { } } -const testAccComputeVpnTunnel_basic = ` +var testAccComputeVpnTunnel_basic = fmt.Sprintf(` resource "google_compute_network" "foobar" { - 
name = "tf-test-network" + name = "tunnel-test-%s" ipv4_range = "10.0.0.0/16" } resource "google_compute_address" "foobar" { - name = "tf-test-static-ip" + name = "tunnel-test-%s" region = "us-central1" } resource "google_compute_vpn_gateway" "foobar" { - name = "tf-test-vpn-gateway" + name = "tunnel-test-%s" network = "${google_compute_network.foobar.self_link}" region = "${google_compute_address.foobar.region}" } resource "google_compute_forwarding_rule" "foobar_esp" { - name = "tf-test-fr-esp" + name = "tunnel-test-%s" region = "${google_compute_vpn_gateway.foobar.region}" ip_protocol = "ESP" ip_address = "${google_compute_address.foobar.address}" target = "${google_compute_vpn_gateway.foobar.self_link}" } resource "google_compute_forwarding_rule" "foobar_udp500" { - name = "tf-test-fr-udp500" + name = "tunnel-test-%s" region = "${google_compute_forwarding_rule.foobar_esp.region}" ip_protocol = "UDP" port_range = "500" @@ -109,7 +110,7 @@ resource "google_compute_forwarding_rule" "foobar_udp500" { target = "${google_compute_vpn_gateway.foobar.self_link}" } resource "google_compute_forwarding_rule" "foobar_udp4500" { - name = "tf-test-fr-udp4500" + name = "tunnel-test-%s" region = "${google_compute_forwarding_rule.foobar_udp500.region}" ip_protocol = "UDP" port_range = "4500" @@ -117,9 +118,11 @@ resource "google_compute_forwarding_rule" "foobar_udp4500" { target = "${google_compute_vpn_gateway.foobar.self_link}" } resource "google_compute_vpn_tunnel" "foobar" { - name = "tf-test-vpn-tunnel" + name = "tunnel-test-%s" region = "${google_compute_forwarding_rule.foobar_udp4500.region}" target_vpn_gateway = "${google_compute_vpn_gateway.foobar.self_link}" shared_secret = "unguessable" peer_ip = "0.0.0.0" -}` +}`, acctest.RandString(10), acctest.RandString(10), acctest.RandString(10), + acctest.RandString(10), acctest.RandString(10), acctest.RandString(10), + acctest.RandString(10)) diff --git a/builtin/providers/google/resource_container_cluster.go b/builtin/providers/google/resource_container_cluster.go index 68c0b96ad0..841644015b 100644 --- a/builtin/providers/google/resource_container_cluster.go +++ b/builtin/providers/google/resource_container_cluster.go @@ -10,6 +10,7 @@ import ( "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" "google.golang.org/api/container/v1" + "google.golang.org/api/googleapi" ) func resourceContainerCluster() *schema.Resource { @@ -280,7 +281,7 @@ func resourceContainerClusterCreate(d *schema.ResourceData, meta interface{}) er // Wait until it's created wait := resource.StateChangeConf{ Pending: []string{"PENDING", "RUNNING"}, - Target: "DONE", + Target: []string{"DONE"}, Timeout: 30 * time.Minute, MinTimeout: 3 * time.Second, Refresh: func() (interface{}, string, error) { @@ -312,6 +313,14 @@ func resourceContainerClusterRead(d *schema.ResourceData, meta interface{}) erro cluster, err := config.clientContainer.Projects.Zones.Clusters.Get( config.Project, zoneName, d.Get("name").(string)).Do() if err != nil { + if gerr, ok := err.(*googleapi.Error); ok && gerr.Code == 404 { + log.Printf("[WARN] Removing Container Cluster %q because it's gone", d.Get("name").(string)) + // The resource doesn't exist anymore + d.SetId("") + + return nil + } + return err } @@ -364,7 +373,7 @@ func resourceContainerClusterUpdate(d *schema.ResourceData, meta interface{}) er // Wait until it's updated wait := resource.StateChangeConf{ Pending: []string{"PENDING", "RUNNING"}, - Target: "DONE", + Target: []string{"DONE"}, Timeout: 10 * 
time.Minute, MinTimeout: 2 * time.Second, Refresh: func() (interface{}, string, error) { @@ -404,7 +413,7 @@ func resourceContainerClusterDelete(d *schema.ResourceData, meta interface{}) er // Wait until it's deleted wait := resource.StateChangeConf{ Pending: []string{"PENDING", "RUNNING"}, - Target: "DONE", + Target: []string{"DONE"}, Timeout: 10 * time.Minute, MinTimeout: 3 * time.Second, Refresh: func() (interface{}, string, error) { diff --git a/builtin/providers/google/resource_container_cluster_test.go b/builtin/providers/google/resource_container_cluster_test.go index ea4a5a597b..11cf1378e7 100644 --- a/builtin/providers/google/resource_container_cluster_test.go +++ b/builtin/providers/google/resource_container_cluster_test.go @@ -4,6 +4,7 @@ import ( "fmt" "testing" + "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" ) @@ -89,9 +90,9 @@ func testAccCheckContainerClusterExists(n string) resource.TestCheckFunc { } } -const testAccContainerCluster_basic = ` +var testAccContainerCluster_basic = fmt.Sprintf(` resource "google_container_cluster" "primary" { - name = "terraform-foo-bar-test" + name = "cluster-test-%s" zone = "us-central1-a" initial_node_count = 3 @@ -99,11 +100,11 @@ resource "google_container_cluster" "primary" { username = "mr.yoda" password = "adoy.rm" } -}` +}`, acctest.RandString(10)) -const testAccContainerCluster_withNodeConfig = ` +var testAccContainerCluster_withNodeConfig = fmt.Sprintf(` resource "google_container_cluster" "with_node_config" { - name = "terraform-foo-bar-with-nodeconfig" + name = "cluster-test-%s" zone = "us-central1-f" initial_node_count = 1 @@ -122,4 +123,4 @@ resource "google_container_cluster" "with_node_config" { "https://www.googleapis.com/auth/monitoring" ] } -}` +}`, acctest.RandString(10)) diff --git a/builtin/providers/google/resource_dns_managed_zone.go b/builtin/providers/google/resource_dns_managed_zone.go index 7253297e60..6d76c0c442 100644 --- a/builtin/providers/google/resource_dns_managed_zone.go +++ b/builtin/providers/google/resource_dns_managed_zone.go @@ -81,6 +81,7 @@ func resourceDnsManagedZoneRead(d *schema.ResourceData, meta interface{}) error config.Project, d.Id()).Do() if err != nil { if gerr, ok := err.(*googleapi.Error); ok && gerr.Code == 404 { + log.Printf("[WARN] Removing DNS Managed Zone %q because it's gone", d.Get("name").(string)) // The resource doesn't exist anymore d.SetId("") diff --git a/builtin/providers/google/resource_dns_managed_zone_test.go b/builtin/providers/google/resource_dns_managed_zone_test.go index 2f91dfcc8e..b90fc8697d 100644 --- a/builtin/providers/google/resource_dns_managed_zone_test.go +++ b/builtin/providers/google/resource_dns_managed_zone_test.go @@ -4,6 +4,7 @@ import ( "fmt" "testing" + "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" "google.golang.org/api/dns/v1" @@ -75,9 +76,9 @@ func testAccCheckDnsManagedZoneExists(n string, zone *dns.ManagedZone) resource. } } -const testAccDnsManagedZone_basic = ` +var testAccDnsManagedZone_basic = fmt.Sprintf(` resource "google_dns_managed_zone" "foobar" { - name = "terraform-test" + name = "mzone-test-%s" dns_name = "terraform.test." 
description = "Test Description" -}` +}`, acctest.RandString(10)) diff --git a/builtin/providers/google/resource_dns_record_set.go b/builtin/providers/google/resource_dns_record_set.go index 05fa547f72..49b1fce71b 100644 --- a/builtin/providers/google/resource_dns_record_set.go +++ b/builtin/providers/google/resource_dns_record_set.go @@ -7,6 +7,7 @@ import ( "github.com/hashicorp/terraform/helper/schema" "google.golang.org/api/dns/v1" + "google.golang.org/api/googleapi" ) func resourceDnsRecordSet() *schema.Resource { @@ -114,6 +115,14 @@ func resourceDnsRecordSetRead(d *schema.ResourceData, meta interface{}) error { resp, err := config.clientDns.ResourceRecordSets.List( config.Project, zone).Name(name).Type(dnsType).Do() if err != nil { + if gerr, ok := err.(*googleapi.Error); ok && gerr.Code == 404 { + log.Printf("[WARN] Removing DNS Record Set %q because it's gone", d.Get("name").(string)) + // The resource doesn't exist anymore + d.SetId("") + + return nil + } + return fmt.Errorf("Error reading DNS RecordSet: %#v", err) } if len(resp.Rrsets) == 0 { diff --git a/builtin/providers/google/resource_dns_record_set_test.go b/builtin/providers/google/resource_dns_record_set_test.go index 5ff1233885..94c7fce16b 100644 --- a/builtin/providers/google/resource_dns_record_set_test.go +++ b/builtin/providers/google/resource_dns_record_set_test.go @@ -4,21 +4,23 @@ import ( "fmt" "testing" + "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" ) func TestAccDnsRecordSet_basic(t *testing.T) { + zoneName := fmt.Sprintf("dnszone-test-%s", acctest.RandString(10)) resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckDnsRecordSetDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccDnsRecordSet_basic, + Config: testAccDnsRecordSet_basic(zoneName), Check: resource.ComposeTestCheckFunc( testAccCheckDnsRecordSetExists( - "google_dns_record_set.foobar"), + "google_dns_record_set.foobar", zoneName), ), }, }, @@ -42,11 +44,11 @@ func testAccCheckDnsRecordSetDestroy(s *terraform.State) error { return nil } -func testAccCheckDnsRecordSetExists(name string) resource.TestCheckFunc { +func testAccCheckDnsRecordSetExists(resourceType, resourceName string) resource.TestCheckFunc { return func(s *terraform.State) error { - rs, ok := s.RootModule().Resources[name] + rs, ok := s.RootModule().Resources[resourceType] if !ok { - return fmt.Errorf("Not found: %s", name) + return fmt.Errorf("Not found: %s", resourceName) } dnsName := rs.Primary.Attributes["name"] @@ -59,7 +61,7 @@ func testAccCheckDnsRecordSetExists(name string) resource.TestCheckFunc { config := testAccProvider.Meta().(*Config) resp, err := config.clientDns.ResourceRecordSets.List( - config.Project, "terraform-test-zone").Name(dnsName).Type(dnsType).Do() + config.Project, resourceName).Name(dnsName).Type(dnsType).Do() if err != nil { return fmt.Errorf("Error confirming DNS RecordSet existence: %#v", err) } @@ -76,17 +78,19 @@ func testAccCheckDnsRecordSetExists(name string) resource.TestCheckFunc { } } -const testAccDnsRecordSet_basic = ` -resource "google_dns_managed_zone" "parent-zone" { - name = "terraform-test-zone" - dns_name = "terraform.test." - description = "Test Description" +func testAccDnsRecordSet_basic(zoneName string) string { + return fmt.Sprintf(` + resource "google_dns_managed_zone" "parent-zone" { + name = "%s" + dns_name = "terraform.test." 
+ description = "Test Description" + } + resource "google_dns_record_set" "foobar" { + managed_zone = "${google_dns_managed_zone.parent-zone.name}" + name = "test-record.terraform.test." + type = "A" + rrdatas = ["127.0.0.1", "127.0.0.10"] + ttl = 600 + } + `, zoneName) } -resource "google_dns_record_set" "foobar" { - managed_zone = "${google_dns_managed_zone.parent-zone.name}" - name = "test-record.terraform.test." - type = "A" - rrdatas = ["127.0.0.1", "127.0.0.10"] - ttl = 600 -} -` diff --git a/builtin/providers/google/resource_pubsub_subscription.go b/builtin/providers/google/resource_pubsub_subscription.go new file mode 100644 index 0000000000..03e6f31238 --- /dev/null +++ b/builtin/providers/google/resource_pubsub_subscription.go @@ -0,0 +1,133 @@ +package google + +import ( + "fmt" + + "github.com/hashicorp/terraform/helper/schema" + "google.golang.org/api/pubsub/v1" +) + +func resourcePubsubSubscription() *schema.Resource { + return &schema.Resource{ + Create: resourcePubsubSubscriptionCreate, + Read: resourcePubsubSubscriptionRead, + Delete: resourcePubsubSubscriptionDelete, + + Schema: map[string]*schema.Schema{ + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "ack_deadline_seconds": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + ForceNew: true, + }, + + "push_config": &schema.Schema{ + Type: schema.TypeList, + Optional: true, + ForceNew: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "attributes": &schema.Schema{ + Type: schema.TypeMap, + Optional: true, + ForceNew: true, + Elem: schema.TypeString, + }, + + "push_endpoint": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + }, + }, + }, + + "topic": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + }, + } +} + +func cleanAdditionalArgs(args map[string]interface{}) map[string]string { + cleaned_args := make(map[string]string) + for k, v := range args { + cleaned_args[k] = v.(string) + } + return cleaned_args +} + +func resourcePubsubSubscriptionCreate(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + name := fmt.Sprintf("projects/%s/subscriptions/%s", config.Project, d.Get("name").(string)) + computed_topic_name := fmt.Sprintf("projects/%s/topics/%s", config.Project, d.Get("topic").(string)) + + // process optional parameters + var ackDeadlineSeconds int64 + ackDeadlineSeconds = 10 + if v, ok := d.GetOk("ack_deadline_seconds"); ok { + ackDeadlineSeconds = v.(int64) + } + + var subscription *pubsub.Subscription + if v, ok := d.GetOk("push_config"); ok { + push_configs := v.([]interface{}) + + if len(push_configs) > 1 { + return fmt.Errorf("At most one PushConfig is allowed per subscription!") + } + + push_config := push_configs[0].(map[string]interface{}) + attributes := push_config["attributes"].(map[string]interface{}) + attributesClean := cleanAdditionalArgs(attributes) + pushConfig := &pubsub.PushConfig{Attributes: attributesClean, PushEndpoint: push_config["push_endpoint"].(string)} + subscription = &pubsub.Subscription{AckDeadlineSeconds: ackDeadlineSeconds, Topic: computed_topic_name, PushConfig: pushConfig} + } else { + subscription = &pubsub.Subscription{AckDeadlineSeconds: ackDeadlineSeconds, Topic: computed_topic_name} + } + + call := config.clientPubsub.Projects.Subscriptions.Create(name, subscription) + res, err := call.Do() + if err != nil { + return err + } + + d.SetId(res.Name) + + return nil +} + +func 
resourcePubsubSubscriptionRead(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + name := d.Id() + call := config.clientPubsub.Projects.Subscriptions.Get(name) + _, err := call.Do() + if err != nil { + return err + } + + return nil +} + +func resourcePubsubSubscriptionDelete(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + name := d.Id() + call := config.clientPubsub.Projects.Subscriptions.Delete(name) + _, err := call.Do() + if err != nil { + return err + } + + return nil +} diff --git a/builtin/providers/google/resource_pubsub_subscription_test.go b/builtin/providers/google/resource_pubsub_subscription_test.go new file mode 100644 index 0000000000..9cc0a218b3 --- /dev/null +++ b/builtin/providers/google/resource_pubsub_subscription_test.go @@ -0,0 +1,74 @@ +package google + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccPubsubSubscriptionCreate(t *testing.T) { + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckPubsubSubscriptionDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccPubsubSubscription, + Check: resource.ComposeTestCheckFunc( + testAccPubsubSubscriptionExists( + "google_pubsub_subscription.foobar_sub"), + ), + }, + }, + }) +} + +func testAccCheckPubsubSubscriptionDestroy(s *terraform.State) error { + for _, rs := range s.RootModule().Resources { + if rs.Type != "google_pubsub_subscription" { + continue + } + + config := testAccProvider.Meta().(*Config) + _, err := config.clientPubsub.Projects.Subscriptions.Get(rs.Primary.ID).Do() + if err == nil { + return fmt.Errorf("Subscription still present") + } + } + + return nil +} + +func testAccPubsubSubscriptionExists(n string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + config := testAccProvider.Meta().(*Config) + _, err := config.clientPubsub.Projects.Subscriptions.Get(rs.Primary.ID).Do() + if err != nil { + return fmt.Errorf("Subscription does not exist: %s", err) + } + + return nil + } +} + +var testAccPubsubSubscription = fmt.Sprintf(` +resource "google_pubsub_topic" "foobar_sub" { + name = "pssub-test-%s" +} + +resource "google_pubsub_subscription" "foobar_sub" { + name = "pssub-test-%s" + topic = "${google_pubsub_topic.foobar_sub.name}" +}`, acctest.RandString(10), acctest.RandString(10)) diff --git a/builtin/providers/google/resource_pubsub_topic.go b/builtin/providers/google/resource_pubsub_topic.go new file mode 100644 index 0000000000..9d6a6a8797 --- /dev/null +++ b/builtin/providers/google/resource_pubsub_topic.go @@ -0,0 +1,67 @@ +package google + +import ( + "fmt" + + "github.com/hashicorp/terraform/helper/schema" + "google.golang.org/api/pubsub/v1" +) + +func resourcePubsubTopic() *schema.Resource { + return &schema.Resource{ + Create: resourcePubsubTopicCreate, + Read: resourcePubsubTopicRead, + Delete: resourcePubsubTopicDelete, + + Schema: map[string]*schema.Schema{ + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + }, + } +} + +func resourcePubsubTopicCreate(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + name := fmt.Sprintf("projects/%s/topics/%s", 
config.Project, d.Get("name").(string)) + topic := &pubsub.Topic{} + + call := config.clientPubsub.Projects.Topics.Create(name, topic) + res, err := call.Do() + if err != nil { + return err + } + + d.SetId(res.Name) + + return nil +} + +func resourcePubsubTopicRead(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + name := d.Id() + call := config.clientPubsub.Projects.Topics.Get(name) + _, err := call.Do() + if err != nil { + return err + } + + return nil +} + +func resourcePubsubTopicDelete(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + name := d.Id() + call := config.clientPubsub.Projects.Topics.Delete(name) + _, err := call.Do() + if err != nil { + return err + } + + return nil +} diff --git a/builtin/providers/google/resource_pubsub_topic_test.go b/builtin/providers/google/resource_pubsub_topic_test.go new file mode 100644 index 0000000000..f81b9c21d1 --- /dev/null +++ b/builtin/providers/google/resource_pubsub_topic_test.go @@ -0,0 +1,69 @@ +package google + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccPubsubTopicCreate(t *testing.T) { + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckPubsubTopicDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccPubsubTopic, + Check: resource.ComposeTestCheckFunc( + testAccPubsubTopicExists( + "google_pubsub_topic.foobar"), + ), + }, + }, + }) +} + +func testAccCheckPubsubTopicDestroy(s *terraform.State) error { + for _, rs := range s.RootModule().Resources { + if rs.Type != "google_pubsub_topic" { + continue + } + + config := testAccProvider.Meta().(*Config) + _, err := config.clientPubsub.Projects.Topics.Get(rs.Primary.ID).Do() + if err != nil { + fmt.Errorf("Topic still present") + } + } + + return nil +} + +func testAccPubsubTopicExists(n string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + config := testAccProvider.Meta().(*Config) + _, err := config.clientPubsub.Projects.Topics.Get(rs.Primary.ID).Do() + if err != nil { + fmt.Errorf("Topic still present") + } + + return nil + } +} + +var testAccPubsubTopic = fmt.Sprintf(` +resource "google_pubsub_topic" "foobar" { + name = "pstopic-test-%s" +}`, acctest.RandString(10)) diff --git a/builtin/providers/google/resource_sql_database.go b/builtin/providers/google/resource_sql_database.go index e8715f9b0c..f66d3c5845 100644 --- a/builtin/providers/google/resource_sql_database.go +++ b/builtin/providers/google/resource_sql_database.go @@ -2,9 +2,11 @@ package google import ( "fmt" + "log" "github.com/hashicorp/terraform/helper/schema" + "google.golang.org/api/googleapi" "google.golang.org/api/sqladmin/v1beta4" ) @@ -75,6 +77,14 @@ func resourceSqlDatabaseRead(d *schema.ResourceData, meta interface{}) error { database_name).Do() if err != nil { + if gerr, ok := err.(*googleapi.Error); ok && gerr.Code == 404 { + log.Printf("[WARN] Removing SQL Database %q because it's gone", d.Get("name").(string)) + // The resource doesn't exist anymore + d.SetId("") + + return nil + } + return fmt.Errorf("Error, failed to get"+ "database %s in instance %s: %s", database_name, instance_name, err) diff --git 
a/builtin/providers/google/resource_sql_database_instance.go b/builtin/providers/google/resource_sql_database_instance.go index d684839283..6ca416e88d 100644 --- a/builtin/providers/google/resource_sql_database_instance.go +++ b/builtin/providers/google/resource_sql_database_instance.go @@ -2,9 +2,12 @@ package google import ( "fmt" + "log" + "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" + "google.golang.org/api/googleapi" "google.golang.org/api/sqladmin/v1beta4" ) @@ -18,7 +21,8 @@ func resourceSqlDatabaseInstance() *schema.Resource { Schema: map[string]*schema.Schema{ "name": &schema.Schema{ Type: schema.TypeString, - Required: true, + Optional: true, + Computed: true, ForceNew: true, }, "master_instance_name": &schema.Schema{ @@ -231,7 +235,6 @@ func resourceSqlDatabaseInstance() *schema.Resource { func resourceSqlDatabaseInstanceCreate(d *schema.ResourceData, meta interface{}) error { config := meta.(*Config) - name := d.Get("name").(string) region := d.Get("region").(string) databaseVersion := d.Get("database_version").(string) @@ -376,12 +379,18 @@ func resourceSqlDatabaseInstanceCreate(d *schema.ResourceData, meta interface{}) } instance := &sqladmin.DatabaseInstance{ - Name: name, Region: region, Settings: settings, DatabaseVersion: databaseVersion, } + if v, ok := d.GetOk("name"); ok { + instance.Name = v.(string) + } else { + instance.Name = resource.UniqueId() + d.Set("name", instance.Name) + } + if v, ok := d.GetOk("replica_configuration"); ok { _replicaConfigurationList := v.([]interface{}) if len(_replicaConfigurationList) > 1 { @@ -444,7 +453,11 @@ func resourceSqlDatabaseInstanceCreate(d *schema.ResourceData, meta interface{}) op, err := config.clientSqlAdmin.Instances.Insert(config.Project, instance).Do() if err != nil { - return fmt.Errorf("Error, failed to create instance %s: %s", name, err) + if gerr, ok := err.(*googleapi.Error); ok && gerr.Code == 409 { + return fmt.Errorf("Error, the name %s is unavailable because it was used recently", instance.Name) + } else { + return fmt.Errorf("Error, failed to create instance %s: %s", instance.Name, err) + } } err = sqladminOperationWait(config, op, "Create Instance") @@ -462,6 +475,14 @@ func resourceSqlDatabaseInstanceRead(d *schema.ResourceData, meta interface{}) e d.Get("name").(string)).Do() if err != nil { + if gerr, ok := err.(*googleapi.Error); ok && gerr.Code == 404 { + log.Printf("[WARN] Removing SQL Database %q because it's gone", d.Get("name").(string)) + // The resource doesn't exist anymore + d.SetId("") + + return nil + } + return fmt.Errorf("Error retrieving instance %s: %s", d.Get("name").(string), err) } diff --git a/builtin/providers/google/resource_sql_database_instance_test.go b/builtin/providers/google/resource_sql_database_instance_test.go index c8c32fc6b5..fda17660e8 100644 --- a/builtin/providers/google/resource_sql_database_instance_test.go +++ b/builtin/providers/google/resource_sql_database_instance_test.go @@ -20,6 +20,7 @@ import ( func TestAccGoogleSqlDatabaseInstance_basic(t *testing.T) { var instance sqladmin.DatabaseInstance + databaseID := genRandInt() resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -27,7 +28,29 @@ func TestAccGoogleSqlDatabaseInstance_basic(t *testing.T) { CheckDestroy: testAccGoogleSqlDatabaseInstanceDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testGoogleSqlDatabaseInstance_basic, + Config: fmt.Sprintf( + testGoogleSqlDatabaseInstance_basic, databaseID), + Check: 
resource.ComposeTestCheckFunc( + testAccCheckGoogleSqlDatabaseInstanceExists( + "google_sql_database_instance.instance", &instance), + testAccCheckGoogleSqlDatabaseInstanceEquals( + "google_sql_database_instance.instance", &instance), + ), + }, + }, + }) +} + +func TestAccGoogleSqlDatabaseInstance_basic2(t *testing.T) { + var instance sqladmin.DatabaseInstance + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccGoogleSqlDatabaseInstanceDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testGoogleSqlDatabaseInstance_basic2, Check: resource.ComposeTestCheckFunc( testAccCheckGoogleSqlDatabaseInstanceExists( "google_sql_database_instance.instance", &instance), @@ -41,6 +64,7 @@ func TestAccGoogleSqlDatabaseInstance_basic(t *testing.T) { func TestAccGoogleSqlDatabaseInstance_settings_basic(t *testing.T) { var instance sqladmin.DatabaseInstance + databaseID := genRandInt() resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -48,7 +72,8 @@ func TestAccGoogleSqlDatabaseInstance_settings_basic(t *testing.T) { CheckDestroy: testAccGoogleSqlDatabaseInstanceDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testGoogleSqlDatabaseInstance_settings, + Config: fmt.Sprintf( + testGoogleSqlDatabaseInstance_settings, databaseID), Check: resource.ComposeTestCheckFunc( testAccCheckGoogleSqlDatabaseInstanceExists( "google_sql_database_instance.instance", &instance), @@ -62,6 +87,7 @@ func TestAccGoogleSqlDatabaseInstance_settings_basic(t *testing.T) { func TestAccGoogleSqlDatabaseInstance_settings_upgrade(t *testing.T) { var instance sqladmin.DatabaseInstance + databaseID := genRandInt() resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -69,7 +95,8 @@ func TestAccGoogleSqlDatabaseInstance_settings_upgrade(t *testing.T) { CheckDestroy: testAccGoogleSqlDatabaseInstanceDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testGoogleSqlDatabaseInstance_basic, + Config: fmt.Sprintf( + testGoogleSqlDatabaseInstance_basic, databaseID), Check: resource.ComposeTestCheckFunc( testAccCheckGoogleSqlDatabaseInstanceExists( "google_sql_database_instance.instance", &instance), @@ -78,7 +105,8 @@ func TestAccGoogleSqlDatabaseInstance_settings_upgrade(t *testing.T) { ), }, resource.TestStep{ - Config: testGoogleSqlDatabaseInstance_settings, + Config: fmt.Sprintf( + testGoogleSqlDatabaseInstance_settings, databaseID), Check: resource.ComposeTestCheckFunc( testAccCheckGoogleSqlDatabaseInstanceExists( "google_sql_database_instance.instance", &instance), @@ -92,6 +120,7 @@ func TestAccGoogleSqlDatabaseInstance_settings_upgrade(t *testing.T) { func TestAccGoogleSqlDatabaseInstance_settings_downgrade(t *testing.T) { var instance sqladmin.DatabaseInstance + databaseID := genRandInt() resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -99,7 +128,8 @@ func TestAccGoogleSqlDatabaseInstance_settings_downgrade(t *testing.T) { CheckDestroy: testAccGoogleSqlDatabaseInstanceDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testGoogleSqlDatabaseInstance_settings, + Config: fmt.Sprintf( + testGoogleSqlDatabaseInstance_settings, databaseID), Check: resource.ComposeTestCheckFunc( testAccCheckGoogleSqlDatabaseInstanceExists( "google_sql_database_instance.instance", &instance), @@ -108,7 +138,8 @@ func TestAccGoogleSqlDatabaseInstance_settings_downgrade(t *testing.T) { ), }, resource.TestStep{ - Config: 
testGoogleSqlDatabaseInstance_basic, + Config: fmt.Sprintf( + testGoogleSqlDatabaseInstance_basic, databaseID), Check: resource.ComposeTestCheckFunc( testAccCheckGoogleSqlDatabaseInstanceExists( "google_sql_database_instance.instance", &instance), @@ -319,9 +350,7 @@ func testAccGoogleSqlDatabaseInstanceDestroy(s *terraform.State) error { return nil } -var databaseId = genRandInt() - -var testGoogleSqlDatabaseInstance_basic = fmt.Sprintf(` +var testGoogleSqlDatabaseInstance_basic = ` resource "google_sql_database_instance" "instance" { name = "tf-lw-%d" region = "us-central" @@ -330,9 +359,19 @@ resource "google_sql_database_instance" "instance" { crash_safe_replication = false } } -`, databaseId) +` -var testGoogleSqlDatabaseInstance_settings = fmt.Sprintf(` +var testGoogleSqlDatabaseInstance_basic2 = ` +resource "google_sql_database_instance" "instance" { + region = "us-central" + settings { + tier = "D0" + crash_safe_replication = false + } +} +` + +var testGoogleSqlDatabaseInstance_settings = ` resource "google_sql_database_instance" "instance" { name = "tf-lw-%d" region = "us-central" @@ -361,11 +400,11 @@ resource "google_sql_database_instance" "instance" { activation_policy = "ON_DEMAND" } } -`, databaseId) +` // Note - this test is not feasible to run unless we generate // backups first. -var testGoogleSqlDatabaseInstance_replica = fmt.Sprintf(` +var testGoogleSqlDatabaseInstance_replica = ` resource "google_sql_database_instance" "instance_master" { name = "tf-lw-%d" database_version = "MYSQL_5_6" @@ -406,4 +445,4 @@ resource "google_sql_database_instance" "instance" { verify_server_certificate = false } } -`, genRandInt(), genRandInt()) +` diff --git a/builtin/providers/google/resource_sql_database_test.go b/builtin/providers/google/resource_sql_database_test.go index 70d7e5f056..509fa1de1f 100644 --- a/builtin/providers/google/resource_sql_database_test.go +++ b/builtin/providers/google/resource_sql_database_test.go @@ -4,6 +4,7 @@ import ( "fmt" "testing" + "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" @@ -100,7 +101,7 @@ func testAccGoogleSqlDatabaseDestroy(s *terraform.State) error { var testGoogleSqlDatabase_basic = fmt.Sprintf(` resource "google_sql_database_instance" "instance" { - name = "tf-lw-%d" + name = "sqldatabasetest%s" region = "us-central" settings { tier = "D0" @@ -108,7 +109,7 @@ resource "google_sql_database_instance" "instance" { } resource "google_sql_database" "database" { - name = "database1" + name = "sqldatabasetest%s" instance = "${google_sql_database_instance.instance.name}" } -`, genRandInt()) +`, acctest.RandString(10), acctest.RandString(10)) diff --git a/builtin/providers/google/resource_sql_user.go b/builtin/providers/google/resource_sql_user.go new file mode 100644 index 0000000000..06e76becc9 --- /dev/null +++ b/builtin/providers/google/resource_sql_user.go @@ -0,0 +1,183 @@ +package google + +import ( + "fmt" + "log" + + "github.com/hashicorp/terraform/helper/schema" + + "google.golang.org/api/googleapi" + "google.golang.org/api/sqladmin/v1beta4" +) + +func resourceSqlUser() *schema.Resource { + return &schema.Resource{ + Create: resourceSqlUserCreate, + Read: resourceSqlUserRead, + Update: resourceSqlUserUpdate, + Delete: resourceSqlUserDelete, + + Schema: map[string]*schema.Schema{ + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "password": &schema.Schema{ + Type: schema.TypeString, + Required: true, 
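+ // NOTE: password deliberately has no ForceNew; resourceSqlUserUpdate below changes it in place via the SQL Admin Users.Update call.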
+ }, + + "host": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "instance": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + }, + } +} + +func resourceSqlUserCreate(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + name := d.Get("name").(string) + instance := d.Get("instance").(string) + password := d.Get("password").(string) + host := d.Get("host").(string) + project := config.Project + + user := &sqladmin.User{ + Name: name, + Instance: instance, + Password: password, + Host: host, + } + + op, err := config.clientSqlAdmin.Users.Insert(project, instance, + user).Do() + + if err != nil { + return fmt.Errorf("Error, failed to insert "+ + "user %s into instance %s: %s", name, instance, err) + } + + err = sqladminOperationWait(config, op, "Insert User") + + if err != nil { + return fmt.Errorf("Error, failure waiting for insertion of %s "+ + "into %s: %s", name, instance, err) + } + + return resourceSqlUserRead(d, meta) +} + +func resourceSqlUserRead(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + name := d.Get("name").(string) + instance := d.Get("instance").(string) + project := config.Project + + users, err := config.clientSqlAdmin.Users.List(project, instance).Do() + + if err != nil { + if gerr, ok := err.(*googleapi.Error); ok && gerr.Code == 404 { + log.Printf("[WARN] Removing SQL User %q because it's gone", d.Get("name").(string)) + d.SetId("") + + return nil + } + + return fmt.Errorf("Error, failed to get user %s in instance %s: %s", name, instance, err) + } + + found := false + for _, user := range users.Items { + if user.Name == name { + found = true + break + } + } + + if !found { + log.Printf("[WARN] Removing SQL User %q because it's gone", d.Get("name").(string)) + d.SetId("") + + return nil + } + + d.SetId(name) + + return nil +} + +func resourceSqlUserUpdate(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + if d.HasChange("password") { + name := d.Get("name").(string) + instance := d.Get("instance").(string) + host := d.Get("host").(string) + password := d.Get("password").(string) + project := config.Project + + user := &sqladmin.User{ + Name: name, + Instance: instance, + Password: password, + Host: host, + } + + op, err := config.clientSqlAdmin.Users.Update(project, instance, host, name, + user).Do() + + if err != nil { + return fmt.Errorf("Error, failed to update "+ + "user %s in instance %s: %s", name, instance, err) + } + + err = sqladminOperationWait(config, op, "Update User") + + if err != nil { + return fmt.Errorf("Error, failure waiting for update of %s "+ + "in %s: %s", name, instance, err) + } + + return resourceSqlUserRead(d, meta) + } + + return nil +} + +func resourceSqlUserDelete(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + name := d.Get("name").(string) + instance := d.Get("instance").(string) + host := d.Get("host").(string) + project := config.Project + + op, err := config.clientSqlAdmin.Users.Delete(project, instance, host, name).Do() + + if err != nil { + return fmt.Errorf("Error, failed to delete "+ + "user %s in instance %s: %s", name, + instance, err) + } + + err = sqladminOperationWait(config, op, "Delete User") + + if err != nil { + return fmt.Errorf("Error, failure waiting for deletion of %s "+ + "in %s: %s", name, instance, err) + } + + return nil +} diff --git a/builtin/providers/google/resource_sql_user_test.go 
b/builtin/providers/google/resource_sql_user_test.go new file mode 100644 index 0000000000..0b91b398c3 --- /dev/null +++ b/builtin/providers/google/resource_sql_user_test.go @@ -0,0 +1,146 @@ +package google + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccGoogleSqlUser_basic(t *testing.T) { + user := acctest.RandString(10) + instance := acctest.RandString(10) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccGoogleSqlUserDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testGoogleSqlUser_basic(instance, user), + Check: resource.ComposeTestCheckFunc( + testAccCheckGoogleSqlUserExists("google_sql_user.user"), + ), + }, + }, + }) +} + +func TestAccGoogleSqlUser_update(t *testing.T) { + user := acctest.RandString(10) + instance := acctest.RandString(10) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccGoogleSqlUserDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testGoogleSqlUser_basic(instance, user), + Check: resource.ComposeTestCheckFunc( + testAccCheckGoogleSqlUserExists("google_sql_user.user"), + ), + }, + + resource.TestStep{ + Config: testGoogleSqlUser_basic2(instance, user), + Check: resource.ComposeTestCheckFunc( + testAccCheckGoogleSqlUserExists("google_sql_user.user"), + ), + }, + }, + }) +} + +func testAccCheckGoogleSqlUserExists(n string) resource.TestCheckFunc { + return func(s *terraform.State) error { + config := testAccProvider.Meta().(*Config) + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Resource not found: %s", n) + } + + name := rs.Primary.Attributes["name"] + instance := rs.Primary.Attributes["instance"] + host := rs.Primary.Attributes["host"] + users, err := config.clientSqlAdmin.Users.List(config.Project, + instance).Do() + if err != nil { + return err + } + + for _, user := range users.Items { + if user.Name == name && user.Host == host { + return nil + } + } + + return fmt.Errorf("Not found: %s", n) + } +} + +func testAccGoogleSqlUserDestroy(s *terraform.State) error { + for _, rs := range s.RootModule().Resources { + config := testAccProvider.Meta().(*Config) + if rs.Type != "google_sql_user" { + continue + } + + name := rs.Primary.Attributes["name"] + instance := rs.Primary.Attributes["instance"] + host := rs.Primary.Attributes["host"] + users, err := config.clientSqlAdmin.Users.List(config.Project, + instance).Do() + if err != nil { + continue + } + + for _, user := range users.Items { + if user.Name == name && user.Host == host { + return fmt.Errorf("User %s still exists", name) + } + } + } + + return nil +} + +func testGoogleSqlUser_basic(instance, user string) string { + return fmt.Sprintf(` + resource "google_sql_database_instance" "instance" { + name = "i%s" + region = "us-central" + settings { + tier = "D0" + } + } + + resource "google_sql_user" "user" { + name = "user%s" + instance = 
"${google_sql_database_instance.instance.name}" + host = "google.com" + password = "oops" + } + `, instance, user) +} diff --git a/builtin/providers/google/resource_storage_bucket.go b/builtin/providers/google/resource_storage_bucket.go index 9118119a8f..c4e64244fb 100644 --- a/builtin/providers/google/resource_storage_bucket.go +++ b/builtin/providers/google/resource_storage_bucket.go @@ -7,6 +7,7 @@ import ( "github.com/hashicorp/terraform/helper/schema" + "google.golang.org/api/googleapi" "google.golang.org/api/storage/v1" ) @@ -174,8 +175,15 @@ func resourceStorageBucketRead(d *schema.ResourceData, meta interface{}) error { res, err := config.clientStorage.Buckets.Get(bucket).Do() if err != nil { - fmt.Printf("Error reading bucket %s: %v", bucket, err) - return err + if gerr, ok := err.(*googleapi.Error); ok && gerr.Code == 404 { + log.Printf("[WARN] Removing Bucket %q because it's gone", d.Get("name").(string)) + // The resource doesn't exist anymore + d.SetId("") + + return nil + } + + return fmt.Errorf("Error reading bucket %s: %v", bucket, err) } log.Printf("[DEBUG] Read bucket %v at location %v\n\n", res.Name, res.SelfLink) diff --git a/builtin/providers/google/resource_storage_bucket_acl.go b/builtin/providers/google/resource_storage_bucket_acl.go index 3b866e0ad2..488fd85f45 100644 --- a/builtin/providers/google/resource_storage_bucket_acl.go +++ b/builtin/providers/google/resource_storage_bucket_acl.go @@ -7,6 +7,7 @@ import ( "github.com/hashicorp/terraform/helper/schema" + "google.golang.org/api/googleapi" "google.golang.org/api/storage/v1" ) @@ -166,6 +167,14 @@ func resourceStorageBucketAclRead(d *schema.ResourceData, meta interface{}) erro res, err := config.clientStorage.BucketAccessControls.List(bucket).Do() if err != nil { + if gerr, ok := err.(*googleapi.Error); ok && gerr.Code == 404 { + log.Printf("[WARN] Removing Bucket ACL for bucket %q because it's gone", d.Get("bucket").(string)) + // The resource doesn't exist anymore + d.SetId("") + + return nil + } + return err } diff --git a/builtin/providers/google/resource_storage_bucket_acl_test.go b/builtin/providers/google/resource_storage_bucket_acl_test.go index 6f23d1882e..a8b11e8f62 100644 --- a/builtin/providers/google/resource_storage_bucket_acl_test.go +++ b/builtin/providers/google/resource_storage_bucket_acl_test.go @@ -4,6 +4,7 @@ import ( "fmt" "testing" + "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" @@ -18,19 +19,22 @@ var roleEntityBasic3_owner = "OWNER:user-yetanotheremail@gmail.com" var roleEntityBasic3_reader = "READER:user-yetanotheremail@gmail.com" -var testAclBucketName = fmt.Sprintf("%s-%d", "tf-test-acl-bucket", genRandInt()) +func testAclBucketName() string { + return fmt.Sprintf("%s-%d", "tf-test-acl-bucket", acctest.RandInt()) +} func TestAccGoogleStorageBucketAcl_basic(t *testing.T) { + bucketName := testAclBucketName() resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccGoogleStorageBucketAclDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testGoogleStorageBucketsAclBasic1, + Config: testGoogleStorageBucketsAclBasic1(bucketName), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleStorageBucketAcl(testAclBucketName, roleEntityBasic1), - testAccCheckGoogleStorageBucketAcl(testAclBucketName, roleEntityBasic2), + testAccCheckGoogleStorageBucketAcl(bucketName, roleEntityBasic1), + 
testAccCheckGoogleStorageBucketAcl(bucketName, roleEntityBasic2), ), }, }, @@ -38,33 +42,34 @@ func TestAccGoogleStorageBucketAcl_basic(t *testing.T) { } func TestAccGoogleStorageBucketAcl_upgrade(t *testing.T) { + bucketName := testAclBucketName() resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccGoogleStorageBucketAclDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testGoogleStorageBucketsAclBasic1, + Config: testGoogleStorageBucketsAclBasic1(bucketName), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleStorageBucketAcl(testAclBucketName, roleEntityBasic1), - testAccCheckGoogleStorageBucketAcl(testAclBucketName, roleEntityBasic2), + testAccCheckGoogleStorageBucketAcl(bucketName, roleEntityBasic1), + testAccCheckGoogleStorageBucketAcl(bucketName, roleEntityBasic2), ), }, resource.TestStep{ - Config: testGoogleStorageBucketsAclBasic2, + Config: testGoogleStorageBucketsAclBasic2(bucketName), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleStorageBucketAcl(testAclBucketName, roleEntityBasic2), - testAccCheckGoogleStorageBucketAcl(testAclBucketName, roleEntityBasic3_owner), + testAccCheckGoogleStorageBucketAcl(bucketName, roleEntityBasic2), + testAccCheckGoogleStorageBucketAcl(bucketName, roleEntityBasic3_owner), ), }, resource.TestStep{ - Config: testGoogleStorageBucketsAclBasicDelete, + Config: testGoogleStorageBucketsAclBasicDelete(bucketName), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleStorageBucketAclDelete(testAclBucketName, roleEntityBasic1), - testAccCheckGoogleStorageBucketAclDelete(testAclBucketName, roleEntityBasic2), - testAccCheckGoogleStorageBucketAclDelete(testAclBucketName, roleEntityBasic3_owner), + testAccCheckGoogleStorageBucketAclDelete(bucketName, roleEntityBasic1), + testAccCheckGoogleStorageBucketAclDelete(bucketName, roleEntityBasic2), + testAccCheckGoogleStorageBucketAclDelete(bucketName, roleEntityBasic3_owner), ), }, }, @@ -72,33 +77,34 @@ func TestAccGoogleStorageBucketAcl_upgrade(t *testing.T) { } func TestAccGoogleStorageBucketAcl_downgrade(t *testing.T) { + bucketName := testAclBucketName() resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccGoogleStorageBucketAclDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testGoogleStorageBucketsAclBasic2, + Config: testGoogleStorageBucketsAclBasic2(bucketName), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleStorageBucketAcl(testAclBucketName, roleEntityBasic2), - testAccCheckGoogleStorageBucketAcl(testAclBucketName, roleEntityBasic3_owner), + testAccCheckGoogleStorageBucketAcl(bucketName, roleEntityBasic2), + testAccCheckGoogleStorageBucketAcl(bucketName, roleEntityBasic3_owner), ), }, resource.TestStep{ - Config: testGoogleStorageBucketsAclBasic3, + Config: testGoogleStorageBucketsAclBasic3(bucketName), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleStorageBucketAcl(testAclBucketName, roleEntityBasic2), - testAccCheckGoogleStorageBucketAcl(testAclBucketName, roleEntityBasic3_reader), + testAccCheckGoogleStorageBucketAcl(bucketName, roleEntityBasic2), + testAccCheckGoogleStorageBucketAcl(bucketName, roleEntityBasic3_reader), ), }, resource.TestStep{ - Config: testGoogleStorageBucketsAclBasicDelete, + Config: testGoogleStorageBucketsAclBasicDelete(bucketName), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleStorageBucketAclDelete(testAclBucketName, roleEntityBasic1), - 
testAccCheckGoogleStorageBucketAclDelete(testAclBucketName, roleEntityBasic2), - testAccCheckGoogleStorageBucketAclDelete(testAclBucketName, roleEntityBasic3_owner), + testAccCheckGoogleStorageBucketAclDelete(bucketName, roleEntityBasic1), + testAccCheckGoogleStorageBucketAclDelete(bucketName, roleEntityBasic2), + testAccCheckGoogleStorageBucketAclDelete(bucketName, roleEntityBasic3_owner), ), }, }, @@ -112,7 +118,7 @@ func TestAccGoogleStorageBucketAcl_predefined(t *testing.T) { CheckDestroy: testAccGoogleStorageBucketAclDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testGoogleStorageBucketsAclPredefined, + Config: testGoogleStorageBucketsAclPredefined(bucketName), }, }, }) @@ -172,7 +178,8 @@ func testAccGoogleStorageBucketAclDestroy(s *terraform.State) error { return nil } -var testGoogleStorageBucketsAclBasic1 = fmt.Sprintf(` +func testGoogleStorageBucketsAclBasic1(bucketName string) string { + return fmt.Sprintf(` resource "google_storage_bucket" "bucket" { name = "%s" } @@ -181,9 +188,11 @@ resource "google_storage_bucket_acl" "acl" { bucket = "${google_storage_bucket.bucket.name}" role_entity = ["%s", "%s"] } -`, testAclBucketName, roleEntityBasic1, roleEntityBasic2) +`, bucketName, roleEntityBasic1, roleEntityBasic2) +} -var testGoogleStorageBucketsAclBasic2 = fmt.Sprintf(` +func testGoogleStorageBucketsAclBasic2(bucketName string) string { + return fmt.Sprintf(` resource "google_storage_bucket" "bucket" { name = "%s" } @@ -192,9 +201,11 @@ resource "google_storage_bucket_acl" "acl" { bucket = "${google_storage_bucket.bucket.name}" role_entity = ["%s", "%s"] } -`, testAclBucketName, roleEntityBasic2, roleEntityBasic3_owner) +`, bucketName, roleEntityBasic2, roleEntityBasic3_owner) +} -var testGoogleStorageBucketsAclBasicDelete = fmt.Sprintf(` +func testGoogleStorageBucketsAclBasicDelete(bucketName string) string { + return fmt.Sprintf(` resource "google_storage_bucket" "bucket" { name = "%s" } @@ -203,9 +214,11 @@ resource "google_storage_bucket_acl" "acl" { bucket = "${google_storage_bucket.bucket.name}" role_entity = [] } -`, testAclBucketName) +`, bucketName) +} -var testGoogleStorageBucketsAclBasic3 = fmt.Sprintf(` +func testGoogleStorageBucketsAclBasic3(bucketName string) string { + return fmt.Sprintf(` resource "google_storage_bucket" "bucket" { name = "%s" } @@ -214,9 +227,11 @@ resource "google_storage_bucket_acl" "acl" { bucket = "${google_storage_bucket.bucket.name}" role_entity = ["%s", "%s"] } -`, testAclBucketName, roleEntityBasic2, roleEntityBasic3_reader) +`, bucketName, roleEntityBasic2, roleEntityBasic3_reader) +} -var testGoogleStorageBucketsAclPredefined = fmt.Sprintf(` +func testGoogleStorageBucketsAclPredefined(bucketName string) string { + return fmt.Sprintf(` resource "google_storage_bucket" "bucket" { name = "%s" } @@ -226,4 +241,5 @@ resource "google_storage_bucket_acl" "acl" { predefined_acl = "projectPrivate" default_acl = "projectPrivate" } -`, testAclBucketName) +`, bucketName) +} diff --git a/builtin/providers/google/resource_storage_bucket_object.go b/builtin/providers/google/resource_storage_bucket_object.go index 231153a85c..679c7e74e5 100644 --- a/builtin/providers/google/resource_storage_bucket_object.go +++ b/builtin/providers/google/resource_storage_bucket_object.go @@ -1,11 +1,15 @@ package google import ( + "bytes" "fmt" + "io" + "log" "os" "github.com/hashicorp/terraform/helper/schema" + "google.golang.org/api/googleapi" "google.golang.org/api/storage/v1" ) @@ -21,26 +25,39 @@ func resourceStorageBucketObject() 
*schema.Resource { Required: true, ForceNew: true, }, + "name": &schema.Schema{ Type: schema.TypeString, Required: true, ForceNew: true, }, + "source": &schema.Schema{ - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ConflictsWith: []string{"content"}, }, + + "content": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ConflictsWith: []string{"source"}, + }, + "predefined_acl": &schema.Schema{ Type: schema.TypeString, Deprecated: "Please use resource \"storage_object_acl.predefined_acl\" instead.", Optional: true, ForceNew: true, }, + "md5hash": &schema.Schema{ Type: schema.TypeString, Computed: true, }, + "crc32c": &schema.Schema{ Type: schema.TypeString, Computed: true, @@ -58,11 +75,18 @@ func resourceStorageBucketObjectCreate(d *schema.ResourceData, meta interface{}) bucket := d.Get("bucket").(string) name := d.Get("name").(string) - source := d.Get("source").(string) + var media io.Reader - file, err := os.Open(source) - if err != nil { - return fmt.Errorf("Error opening %s: %s", source, err) + if v, ok := d.GetOk("source"); ok { + var err error + media, err = os.Open(v.(string)) + if err != nil { + return err + } + } else if v, ok := d.GetOk("content"); ok { + media = bytes.NewReader([]byte(v.(string))) + } else { + return fmt.Errorf("Error, either \"content\" or \"source\" must be specified") } objectsService := storage.NewObjectsService(config.clientStorage) @@ -70,15 +94,15 @@ func resourceStorageBucketObjectCreate(d *schema.ResourceData, meta interface{}) insertCall := objectsService.Insert(bucket, object) insertCall.Name(name) - insertCall.Media(file) + insertCall.Media(media) if v, ok := d.GetOk("predefined_acl"); ok { insertCall.PredefinedAcl(v.(string)) } - _, err = insertCall.Do() + _, err := insertCall.Do() if err != nil { - return fmt.Errorf("Error uploading contents of object %s from %s: %s", name, source, err) + return fmt.Errorf("Error uploading object %s: %s", name, err) } return resourceStorageBucketObjectRead(d, meta) @@ -96,6 +120,14 @@ func resourceStorageBucketObjectRead(d *schema.ResourceData, meta interface{}) e res, err := getCall.Do() if err != nil { + if gerr, ok := err.(*googleapi.Error); ok && gerr.Code == 404 { + log.Printf("[WARN] Removing Bucket Object %q because it's gone", d.Get("name").(string)) + // The resource doesn't exist anymore + d.SetId("") + + return nil + } + return fmt.Errorf("Error retrieving contents of object %s: %s", name, err) } diff --git a/builtin/providers/google/resource_storage_bucket_object_test.go b/builtin/providers/google/resource_storage_bucket_object_test.go index e84822fddf..a8fd49c8cc 100644 --- a/builtin/providers/google/resource_storage_bucket_object_test.go +++ b/builtin/providers/google/resource_storage_bucket_object_test.go @@ -16,6 +16,7 @@ import ( var tf, err = ioutil.TempFile("", "tf-gce-test") var bucketName = "tf-gce-bucket-test" var objectName = "tf-gce-test" +var content = "now this is content!" 
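+// bucketName and objectName above are still fixed strings, so concurrent runs
+// of these object tests can collide; the per-run random-suffix idiom applied
+// elsewhere in this change avoids that. A minimal sketch of the idiom
+// (testObjectName is hypothetical, not part of this change, and assumes the
+// helper/acctest import used in the other test files of this commit):
+//
+//	func testObjectName() string {
+//		return fmt.Sprintf("tf-gce-test-%s", acctest.RandString(10))
+//	}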
func TestAccGoogleStorageObject_basic(t *testing.T) { data := []byte("data data data") @@ -42,6 +43,31 @@ func TestAccGoogleStorageObject_basic(t *testing.T) { }) } +func TestAccGoogleStorageObject_content(t *testing.T) { + data := []byte(content) + h := md5.New() + h.Write(data) + data_md5 := base64.StdEncoding.EncodeToString(h.Sum(nil)) + + ioutil.WriteFile(tf.Name(), data, 0644) + resource.Test(t, resource.TestCase{ + PreCheck: func() { + if err != nil { + panic(err) + } + testAccPreCheck(t) + }, + Providers: testAccProviders, + CheckDestroy: testAccGoogleStorageObjectDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testGoogleStorageBucketsObjectContent, + Check: testAccCheckGoogleStorageObject(bucketName, objectName, data_md5), + }, + }, + }) +} + func testAccCheckGoogleStorageObject(bucket, object, md5 string) resource.TestCheckFunc { return func(s *terraform.State) error { config := testAccProvider.Meta().(*Config) @@ -87,6 +113,19 @@ func testAccGoogleStorageObjectDestroy(s *terraform.State) error { return nil } +var testGoogleStorageBucketsObjectContent = fmt.Sprintf(` +resource "google_storage_bucket" "bucket" { + name = "%s" +} + +resource "google_storage_bucket_object" "object" { + name = "%s" + bucket = "${google_storage_bucket.bucket.name}" + content = "%s" + predefined_acl = "projectPrivate" +} +`, bucketName, objectName, content) + var testGoogleStorageBucketsObjectBasic = fmt.Sprintf(` resource "google_storage_bucket" "bucket" { name = "%s" diff --git a/builtin/providers/google/resource_storage_bucket_test.go b/builtin/providers/google/resource_storage_bucket_test.go index a5e7ea6361..35fc8f3081 100644 --- a/builtin/providers/google/resource_storage_bucket_test.go +++ b/builtin/providers/google/resource_storage_bucket_test.go @@ -5,6 +5,7 @@ import ( "fmt" "testing" + "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" @@ -13,7 +14,7 @@ import ( ) func TestAccStorage_basic(t *testing.T) { - var bucketName string + bucketName := fmt.Sprintf("tf-test-acl-bucket-%d", acctest.RandInt()) resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -21,10 +22,10 @@ func TestAccStorage_basic(t *testing.T) { CheckDestroy: testAccGoogleStorageDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testGoogleStorageBucketsReaderDefaults, + Config: testGoogleStorageBucketsReaderDefaults(bucketName), Check: resource.ComposeTestCheckFunc( testAccCheckCloudStorageBucketExists( - "google_storage_bucket.bucket", &bucketName), + "google_storage_bucket.bucket", bucketName), resource.TestCheckResourceAttr( "google_storage_bucket.bucket", "location", "US"), resource.TestCheckResourceAttr( @@ -36,7 +37,7 @@ func TestAccStorage_basic(t *testing.T) { } func TestAccStorageCustomAttributes(t *testing.T) { - var bucketName string + bucketName := fmt.Sprintf("tf-test-acl-bucket-%d", acctest.RandInt()) resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -44,10 +45,10 @@ func TestAccStorageCustomAttributes(t *testing.T) { CheckDestroy: testAccGoogleStorageDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testGoogleStorageBucketsReaderCustomAttributes, + Config: testGoogleStorageBucketsReaderCustomAttributes(bucketName), Check: resource.ComposeTestCheckFunc( testAccCheckCloudStorageBucketExists( - "google_storage_bucket.bucket", &bucketName), + "google_storage_bucket.bucket", bucketName), resource.TestCheckResourceAttr( 
"google_storage_bucket.bucket", "location", "EU"), resource.TestCheckResourceAttr( @@ -59,7 +60,7 @@ func TestAccStorageCustomAttributes(t *testing.T) { } func TestAccStorageBucketUpdate(t *testing.T) { - var bucketName string + bucketName := fmt.Sprintf("tf-test-acl-bucket-%d", acctest.RandInt()) resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -67,10 +68,10 @@ func TestAccStorageBucketUpdate(t *testing.T) { CheckDestroy: testAccGoogleStorageDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testGoogleStorageBucketsReaderDefaults, + Config: testGoogleStorageBucketsReaderDefaults(bucketName), Check: resource.ComposeTestCheckFunc( testAccCheckCloudStorageBucketExists( - "google_storage_bucket.bucket", &bucketName), + "google_storage_bucket.bucket", bucketName), resource.TestCheckResourceAttr( "google_storage_bucket.bucket", "location", "US"), resource.TestCheckResourceAttr( @@ -78,10 +79,10 @@ func TestAccStorageBucketUpdate(t *testing.T) { ), }, resource.TestStep{ - Config: testGoogleStorageBucketsReaderCustomAttributes, + Config: testGoogleStorageBucketsReaderCustomAttributes(bucketName), Check: resource.ComposeTestCheckFunc( testAccCheckCloudStorageBucketExists( - "google_storage_bucket.bucket", &bucketName), + "google_storage_bucket.bucket", bucketName), resource.TestCheckResourceAttr( "google_storage_bucket.bucket", "predefined_acl", "publicReadWrite"), resource.TestCheckResourceAttr( @@ -95,7 +96,7 @@ func TestAccStorageBucketUpdate(t *testing.T) { } func TestAccStorageForceDestroy(t *testing.T) { - var bucketName string + bucketName := fmt.Sprintf("tf-test-acl-bucket-%d", acctest.RandInt()) resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -103,29 +104,29 @@ func TestAccStorageForceDestroy(t *testing.T) { CheckDestroy: testAccGoogleStorageDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testGoogleStorageBucketsReaderCustomAttributes, + Config: testGoogleStorageBucketsReaderCustomAttributes(bucketName), Check: resource.ComposeTestCheckFunc( testAccCheckCloudStorageBucketExists( - "google_storage_bucket.bucket", &bucketName), + "google_storage_bucket.bucket", bucketName), ), }, resource.TestStep{ - Config: testGoogleStorageBucketsReaderCustomAttributes, + Config: testGoogleStorageBucketsReaderCustomAttributes(bucketName), Check: resource.ComposeTestCheckFunc( - testAccCheckCloudStorageBucketPutItem(&bucketName), + testAccCheckCloudStorageBucketPutItem(bucketName), ), }, resource.TestStep{ Config: "", Check: resource.ComposeTestCheckFunc( - testAccCheckCloudStorageBucketMissing(&bucketName), + testAccCheckCloudStorageBucketMissing(bucketName), ), }, }, }) } -func testAccCheckCloudStorageBucketExists(n string, bucketName *string) resource.TestCheckFunc { +func testAccCheckCloudStorageBucketExists(n string, bucketName string) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { @@ -147,12 +148,14 @@ func testAccCheckCloudStorageBucketExists(n string, bucketName *string) resource return fmt.Errorf("Bucket not found") } - *bucketName = found.Name + if found.Name != bucketName { + return fmt.Errorf("expected name %s, got %s", bucketName, found.Name) + } return nil } } -func testAccCheckCloudStorageBucketPutItem(bucketName *string) resource.TestCheckFunc { +func testAccCheckCloudStorageBucketPutItem(bucketName string) resource.TestCheckFunc { return func(s *terraform.State) error { config := testAccProvider.Meta().(*Config) @@ -161,7 +164,7 @@ 
func testAccCheckCloudStorageBucketPutItem(bucketName *string) resource.TestChec object := &storage.Object{Name: "bucketDestroyTestFile"} // This needs to use Media(io.Reader) call, otherwise it does not go to /upload API and fails - if res, err := config.clientStorage.Objects.Insert(*bucketName, object).Media(dataReader).Do(); err == nil { + if res, err := config.clientStorage.Objects.Insert(bucketName, object).Media(dataReader).Do(); err == nil { fmt.Printf("Created object %v at location %v\n\n", res.Name, res.SelfLink) } else { return fmt.Errorf("Objects.Insert failed: %v", err) @@ -171,20 +174,20 @@ func testAccCheckCloudStorageBucketPutItem(bucketName *string) resource.TestChec } } -func testAccCheckCloudStorageBucketMissing(bucketName *string) resource.TestCheckFunc { +func testAccCheckCloudStorageBucketMissing(bucketName string) resource.TestCheckFunc { return func(s *terraform.State) error { config := testAccProvider.Meta().(*Config) - _, err := config.clientStorage.Buckets.Get(*bucketName).Do() + _, err := config.clientStorage.Buckets.Get(bucketName).Do() if err == nil { - return fmt.Errorf("Found %s", *bucketName) + return fmt.Errorf("Found %s", bucketName) } if gerr, ok := err.(*googleapi.Error); ok && gerr.Code == 404 { return nil - } else { - return err } + + return err } } @@ -205,19 +208,21 @@ func testAccGoogleStorageDestroy(s *terraform.State) error { return nil } -var randInt = genRandInt() - -var testGoogleStorageBucketsReaderDefaults = fmt.Sprintf(` +func testGoogleStorageBucketsReaderDefaults(bucketName string) string { + return fmt.Sprintf(` resource "google_storage_bucket" "bucket" { - name = "tf-test-bucket-%d" + name = "%s" +} +`, bucketName) } -`, randInt) -var testGoogleStorageBucketsReaderCustomAttributes = fmt.Sprintf(` +func testGoogleStorageBucketsReaderCustomAttributes(bucketName string) string { + return fmt.Sprintf(` resource "google_storage_bucket" "bucket" { - name = "tf-test-bucket-%d" + name = "%s" predefined_acl = "publicReadWrite" location = "EU" force_destroy = "true" } -`, randInt) +`, bucketName) +} diff --git a/builtin/providers/google/resource_storage_object_acl.go b/builtin/providers/google/resource_storage_object_acl.go index 5212f81db2..e4968265f7 100644 --- a/builtin/providers/google/resource_storage_object_acl.go +++ b/builtin/providers/google/resource_storage_object_acl.go @@ -6,6 +6,7 @@ import ( "github.com/hashicorp/terraform/helper/schema" + "google.golang.org/api/googleapi" "google.golang.org/api/storage/v1" ) @@ -134,6 +135,14 @@ func resourceStorageObjectAclRead(d *schema.ResourceData, meta interface{}) erro res, err := config.clientStorage.ObjectAccessControls.List(bucket, object).Do() if err != nil { + if gerr, ok := err.(*googleapi.Error); ok && gerr.Code == 404 { + log.Printf("[WARN] Removing Storage Object ACL for Bucket %q because it's gone", d.Get("bucket").(string)) + // The resource doesn't exist anymore + d.SetId("") + + return nil + } + return err } diff --git a/builtin/providers/google/resource_storage_object_acl_test.go b/builtin/providers/google/resource_storage_object_acl_test.go index ff14f683c8..5cac86a14b 100644 --- a/builtin/providers/google/resource_storage_object_acl_test.go +++ b/builtin/providers/google/resource_storage_object_acl_test.go @@ -14,10 +14,15 @@ import ( ) var tfObjectAcl, errObjectAcl = ioutil.TempFile("", "tf-gce-test") -var testAclObjectName = fmt.Sprintf("%s-%d", "tf-test-acl-object", - rand.New(rand.NewSource(time.Now().UnixNano())).Int()) + +func testAclObjectName() string { + return 
fmt.Sprintf("%s-%d", "tf-test-acl-object", + rand.New(rand.NewSource(time.Now().UnixNano())).Int()) +} func TestAccGoogleStorageObjectAcl_basic(t *testing.T) { + bucketName := testAclBucketName() + objectName := testAclObjectName() objectData := []byte("data data data") ioutil.WriteFile(tfObjectAcl.Name(), objectData, 0644) resource.Test(t, resource.TestCase{ @@ -31,12 +36,12 @@ func TestAccGoogleStorageObjectAcl_basic(t *testing.T) { CheckDestroy: testAccGoogleStorageObjectAclDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testGoogleStorageObjectsAclBasic1, + Config: testGoogleStorageObjectsAclBasic1(bucketName, objectName), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleStorageObjectAcl(testAclBucketName, - testAclObjectName, roleEntityBasic1), - testAccCheckGoogleStorageObjectAcl(testAclBucketName, - testAclObjectName, roleEntityBasic2), + testAccCheckGoogleStorageObjectAcl(bucketName, + objectName, roleEntityBasic1), + testAccCheckGoogleStorageObjectAcl(bucketName, + objectName, roleEntityBasic2), ), }, }, @@ -44,6 +49,8 @@ func TestAccGoogleStorageObjectAcl_basic(t *testing.T) { } func TestAccGoogleStorageObjectAcl_upgrade(t *testing.T) { + bucketName := testAclBucketName() + objectName := testAclObjectName() objectData := []byte("data data data") ioutil.WriteFile(tfObjectAcl.Name(), objectData, 0644) resource.Test(t, resource.TestCase{ @@ -57,34 +64,34 @@ func TestAccGoogleStorageObjectAcl_upgrade(t *testing.T) { CheckDestroy: testAccGoogleStorageObjectAclDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testGoogleStorageObjectsAclBasic1, + Config: testGoogleStorageObjectsAclBasic1(bucketName, objectName), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleStorageObjectAcl(testAclBucketName, - testAclObjectName, roleEntityBasic1), - testAccCheckGoogleStorageObjectAcl(testAclBucketName, - testAclObjectName, roleEntityBasic2), + testAccCheckGoogleStorageObjectAcl(bucketName, + objectName, roleEntityBasic1), + testAccCheckGoogleStorageObjectAcl(bucketName, + objectName, roleEntityBasic2), ), }, resource.TestStep{ - Config: testGoogleStorageObjectsAclBasic2, + Config: testGoogleStorageObjectsAclBasic2(bucketName, objectName), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleStorageObjectAcl(testAclBucketName, - testAclObjectName, roleEntityBasic2), - testAccCheckGoogleStorageObjectAcl(testAclBucketName, - testAclObjectName, roleEntityBasic3_owner), + testAccCheckGoogleStorageObjectAcl(bucketName, + objectName, roleEntityBasic2), + testAccCheckGoogleStorageObjectAcl(bucketName, + objectName, roleEntityBasic3_owner), ), }, resource.TestStep{ - Config: testGoogleStorageObjectsAclBasicDelete, + Config: testGoogleStorageObjectsAclBasicDelete(bucketName, objectName), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleStorageObjectAclDelete(testAclBucketName, - testAclObjectName, roleEntityBasic1), - testAccCheckGoogleStorageObjectAclDelete(testAclBucketName, - testAclObjectName, roleEntityBasic2), - testAccCheckGoogleStorageObjectAclDelete(testAclBucketName, - testAclObjectName, roleEntityBasic3_reader), + testAccCheckGoogleStorageObjectAclDelete(bucketName, + objectName, roleEntityBasic1), + testAccCheckGoogleStorageObjectAclDelete(bucketName, + objectName, roleEntityBasic2), + testAccCheckGoogleStorageObjectAclDelete(bucketName, + objectName, roleEntityBasic3_reader), ), }, }, @@ -92,6 +99,8 @@ func TestAccGoogleStorageObjectAcl_upgrade(t *testing.T) { } func TestAccGoogleStorageObjectAcl_downgrade(t *testing.T) { + 
bucketName := testAclBucketName() + objectName := testAclObjectName() objectData := []byte("data data data") ioutil.WriteFile(tfObjectAcl.Name(), objectData, 0644) resource.Test(t, resource.TestCase{ @@ -105,34 +114,34 @@ func TestAccGoogleStorageObjectAcl_downgrade(t *testing.T) { CheckDestroy: testAccGoogleStorageObjectAclDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testGoogleStorageObjectsAclBasic2, + Config: testGoogleStorageObjectsAclBasic2(bucketName, objectName), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleStorageObjectAcl(testAclBucketName, - testAclObjectName, roleEntityBasic2), - testAccCheckGoogleStorageObjectAcl(testAclBucketName, - testAclObjectName, roleEntityBasic3_owner), + testAccCheckGoogleStorageObjectAcl(bucketName, + objectName, roleEntityBasic2), + testAccCheckGoogleStorageObjectAcl(bucketName, + objectName, roleEntityBasic3_owner), ), }, resource.TestStep{ - Config: testGoogleStorageObjectsAclBasic3, + Config: testGoogleStorageObjectsAclBasic3(bucketName, objectName), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleStorageObjectAcl(testAclBucketName, - testAclObjectName, roleEntityBasic2), - testAccCheckGoogleStorageObjectAcl(testAclBucketName, - testAclObjectName, roleEntityBasic3_reader), + testAccCheckGoogleStorageObjectAcl(bucketName, + objectName, roleEntityBasic2), + testAccCheckGoogleStorageObjectAcl(bucketName, + objectName, roleEntityBasic3_reader), ), }, resource.TestStep{ - Config: testGoogleStorageObjectsAclBasicDelete, + Config: testGoogleStorageObjectsAclBasicDelete(bucketName, objectName), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleStorageObjectAclDelete(testAclBucketName, - testAclObjectName, roleEntityBasic1), - testAccCheckGoogleStorageObjectAclDelete(testAclBucketName, - testAclObjectName, roleEntityBasic2), - testAccCheckGoogleStorageObjectAclDelete(testAclBucketName, - testAclObjectName, roleEntityBasic3_reader), + testAccCheckGoogleStorageObjectAclDelete(bucketName, + objectName, roleEntityBasic1), + testAccCheckGoogleStorageObjectAclDelete(bucketName, + objectName, roleEntityBasic2), + testAccCheckGoogleStorageObjectAclDelete(bucketName, + objectName, roleEntityBasic3_reader), ), }, }, @@ -140,6 +149,8 @@ func TestAccGoogleStorageObjectAcl_downgrade(t *testing.T) { } func TestAccGoogleStorageObjectAcl_predefined(t *testing.T) { + bucketName := testAclBucketName() + objectName := testAclObjectName() objectData := []byte("data data data") ioutil.WriteFile(tfObjectAcl.Name(), objectData, 0644) resource.Test(t, resource.TestCase{ @@ -153,7 +164,7 @@ func TestAccGoogleStorageObjectAcl_predefined(t *testing.T) { CheckDestroy: testAccGoogleStorageObjectAclDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testGoogleStorageObjectsAclPredefined, + Config: testGoogleStorageObjectsAclPredefined(bucketName, objectName), }, }, }) @@ -216,7 +227,8 @@ func testAccGoogleStorageObjectAclDestroy(s *terraform.State) error { return nil } -var testGoogleStorageObjectsAclBasicDelete = fmt.Sprintf(` +func testGoogleStorageObjectsAclBasicDelete(bucketName string, objectName string) string { + return fmt.Sprintf(` resource "google_storage_bucket" "bucket" { name = "%s" } @@ -232,9 +244,11 @@ resource "google_storage_object_acl" "acl" { bucket = "${google_storage_bucket.bucket.name}" role_entity = [] } -`, testAclBucketName, testAclObjectName, tfObjectAcl.Name()) +`, bucketName, objectName, tfObjectAcl.Name()) +} -var testGoogleStorageObjectsAclBasic1 = fmt.Sprintf(` +func 
testGoogleStorageObjectsAclBasic1(bucketName string, objectName string) string { + return fmt.Sprintf(` resource "google_storage_bucket" "bucket" { name = "%s" } @@ -250,10 +264,12 @@ resource "google_storage_object_acl" "acl" { bucket = "${google_storage_bucket.bucket.name}" role_entity = ["%s", "%s"] } -`, testAclBucketName, testAclObjectName, tfObjectAcl.Name(), - roleEntityBasic1, roleEntityBasic2) +`, bucketName, objectName, tfObjectAcl.Name(), + roleEntityBasic1, roleEntityBasic2) +} -var testGoogleStorageObjectsAclBasic2 = fmt.Sprintf(` +func testGoogleStorageObjectsAclBasic2(bucketName string, objectName string) string { + return fmt.Sprintf(` resource "google_storage_bucket" "bucket" { name = "%s" } @@ -269,10 +285,12 @@ resource "google_storage_object_acl" "acl" { bucket = "${google_storage_bucket.bucket.name}" role_entity = ["%s", "%s"] } -`, testAclBucketName, testAclObjectName, tfObjectAcl.Name(), - roleEntityBasic2, roleEntityBasic3_owner) +`, bucketName, objectName, tfObjectAcl.Name(), + roleEntityBasic2, roleEntityBasic3_owner) +} -var testGoogleStorageObjectsAclBasic3 = fmt.Sprintf(` +func testGoogleStorageObjectsAclBasic3(bucketName string, objectName string) string { + return fmt.Sprintf(` resource "google_storage_bucket" "bucket" { name = "%s" } @@ -288,10 +306,12 @@ resource "google_storage_object_acl" "acl" { bucket = "${google_storage_bucket.bucket.name}" role_entity = ["%s", "%s"] } -`, testAclBucketName, testAclObjectName, tfObjectAcl.Name(), - roleEntityBasic2, roleEntityBasic3_reader) +`, bucketName, objectName, tfObjectAcl.Name(), + roleEntityBasic2, roleEntityBasic3_reader) +} -var testGoogleStorageObjectsAclPredefined = fmt.Sprintf(` +func testGoogleStorageObjectsAclPredefined(bucketName string, objectName string) string { + return fmt.Sprintf(` resource "google_storage_bucket" "bucket" { name = "%s" } @@ -307,4 +327,5 @@ resource "google_storage_object_acl" "acl" { bucket = "${google_storage_bucket.bucket.name}" predefined_acl = "projectPrivate" } -`, testAclBucketName, testAclObjectName, tfObjectAcl.Name()) +`, bucketName, objectName, tfObjectAcl.Name()) +} diff --git a/builtin/providers/google/sqladmin_operation.go b/builtin/providers/google/sqladmin_operation.go index 4fc80204bf..05a2931bde 100644 --- a/builtin/providers/google/sqladmin_operation.go +++ b/builtin/providers/google/sqladmin_operation.go @@ -37,7 +37,7 @@ func (w *SqlAdminOperationWaiter) RefreshFunc() resource.StateRefreshFunc { func (w *SqlAdminOperationWaiter) Conf() *resource.StateChangeConf { return &resource.StateChangeConf{ Pending: []string{"PENDING", "RUNNING"}, - Target: "DONE", + Target: []string{"DONE"}, Refresh: w.RefreshFunc(), } } diff --git a/builtin/providers/heroku/resource_heroku_addon_test.go b/builtin/providers/heroku/resource_heroku_addon_test.go index b4c360e511..c707e0ed63 100644 --- a/builtin/providers/heroku/resource_heroku_addon_test.go +++ b/builtin/providers/heroku/resource_heroku_addon_test.go @@ -5,12 +5,14 @@ import ( "testing" "github.com/cyberdelia/heroku-go/v3" + "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" ) func TestAccHerokuAddon_Basic(t *testing.T) { var addon heroku.Addon + appName := fmt.Sprintf("tftest-%s", acctest.RandString(10)) resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -18,14 +20,14 @@ func TestAccHerokuAddon_Basic(t *testing.T) { CheckDestroy: testAccCheckHerokuAddonDestroy, Steps: []resource.TestStep{ 
resource.TestStep{ - Config: testAccCheckHerokuAddonConfig_basic, + Config: testAccCheckHerokuAddonConfig_basic(appName), Check: resource.ComposeTestCheckFunc( testAccCheckHerokuAddonExists("heroku_addon.foobar", &addon), testAccCheckHerokuAddonAttributes(&addon, "deployhooks:http"), resource.TestCheckResourceAttr( "heroku_addon.foobar", "config.0.url", "http://google.com"), resource.TestCheckResourceAttr( - "heroku_addon.foobar", "app", "terraform-test-app"), + "heroku_addon.foobar", "app", appName), resource.TestCheckResourceAttr( "heroku_addon.foobar", "plan", "deployhooks:http"), ), @@ -37,6 +39,7 @@ func TestAccHerokuAddon_Basic(t *testing.T) { // GH-198 func TestAccHerokuAddon_noPlan(t *testing.T) { var addon heroku.Addon + appName := fmt.Sprintf("tftest-%s", acctest.RandString(10)) resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -44,23 +47,23 @@ func TestAccHerokuAddon_noPlan(t *testing.T) { CheckDestroy: testAccCheckHerokuAddonDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccCheckHerokuAddonConfig_no_plan, + Config: testAccCheckHerokuAddonConfig_no_plan(appName), Check: resource.ComposeTestCheckFunc( testAccCheckHerokuAddonExists("heroku_addon.foobar", &addon), testAccCheckHerokuAddonAttributes(&addon, "memcachier:dev"), resource.TestCheckResourceAttr( - "heroku_addon.foobar", "app", "terraform-test-app"), + "heroku_addon.foobar", "app", appName), resource.TestCheckResourceAttr( "heroku_addon.foobar", "plan", "memcachier"), ), }, resource.TestStep{ - Config: testAccCheckHerokuAddonConfig_no_plan, + Config: testAccCheckHerokuAddonConfig_no_plan(appName), Check: resource.ComposeTestCheckFunc( testAccCheckHerokuAddonExists("heroku_addon.foobar", &addon), testAccCheckHerokuAddonAttributes(&addon, "memcachier:dev"), resource.TestCheckResourceAttr( - "heroku_addon.foobar", "app", "terraform-test-app"), + "heroku_addon.foobar", "app", appName), resource.TestCheckResourceAttr( "heroku_addon.foobar", "plan", "memcachier"), ), @@ -128,9 +131,10 @@ func testAccCheckHerokuAddonExists(n string, addon *heroku.Addon) resource.TestC } } -const testAccCheckHerokuAddonConfig_basic = ` +func testAccCheckHerokuAddonConfig_basic(appName string) string { + return fmt.Sprintf(` resource "heroku_app" "foobar" { - name = "terraform-test-app" + name = "%s" region = "us" } @@ -140,15 +144,18 @@ resource "heroku_addon" "foobar" { config { url = "http://google.com" } -}` +}`, appName) +} -const testAccCheckHerokuAddonConfig_no_plan = ` +func testAccCheckHerokuAddonConfig_no_plan(appName string) string { + return fmt.Sprintf(` resource "heroku_app" "foobar" { - name = "terraform-test-app" + name = "%s" region = "us" } resource "heroku_addon" "foobar" { app = "${heroku_app.foobar.name}" plan = "memcachier" -}` +}`, appName) +} diff --git a/builtin/providers/heroku/resource_heroku_app.go b/builtin/providers/heroku/resource_heroku_app.go index 4c2f3bf97a..b63be836bf 100644 --- a/builtin/providers/heroku/resource_heroku_app.go +++ b/builtin/providers/heroku/resource_heroku_app.go @@ -9,13 +9,27 @@ import ( "github.com/hashicorp/terraform/helper/schema" ) +// herokuApplication is a value type used to hold the details of an +// application. 
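// Aside: a minimal, self-contained sketch of the randomized-name pattern
// the tests above are moving to. Generating a fresh name per run keeps
// acceptance tests from colliding on a shared fixed name when runs overlap
// or an earlier run failed to clean up. The helper name and prefix are
// illustrative, not part of this change:
//
//	package heroku
//
//	import (
//		"fmt"
//
//		"github.com/hashicorp/terraform/helper/acctest"
//	)
//
//	// testName returns a resource name unique to this test run.
//	func testName(prefix string) string {
//		return fmt.Sprintf("%s-%s", prefix, acctest.RandString(10))
//	}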
We use this for common storage of values needed for the +// heroku.App and heroku.OrganizationApp types +type herokuApplication struct { + Name string + Region string + Stack string + GitURL string + WebURL string + OrganizationName string + Locked bool +} + // type application is used to store all the details of a heroku app type application struct { Id string // Id of the resource - App *heroku.App // The heroku application - Client *heroku.Service // Client to interact with the heroku API - Vars map[string]string // The vars on the application + App *herokuApplication // The heroku application + Client *heroku.Service // Client to interact with the heroku API + Vars map[string]string // The vars on the application + Organization bool // is the application an organization app } // Updates the application to have the latest from remote @@ -23,9 +37,37 @@ func (a *application) Update() error { var errs []error var err error - a.App, err = a.Client.AppInfo(a.Id) - if err != nil { - errs = append(errs, err) + if !a.Organization { + app, err := a.Client.AppInfo(a.Id) + if err != nil { + errs = append(errs, err) + } else { + a.App = &herokuApplication{} + a.App.Name = app.Name + a.App.Region = app.Region.Name + a.App.Stack = app.Stack.Name + a.App.GitURL = app.GitURL + a.App.WebURL = app.WebURL + } + } else { + app, err := a.Client.OrganizationAppInfo(a.Id) + if err != nil { + errs = append(errs, err) + } else { + // No inheritance between OrganizationApp and App is killing it :/ + a.App = &herokuApplication{} + a.App.Name = app.Name + a.App.Region = app.Region.Name + a.App.Stack = app.Stack.Name + a.App.GitURL = app.GitURL + a.App.WebURL = app.WebURL + if app.Organization != nil { + a.App.OrganizationName = app.Organization.Name + } else { + log.Println("[DEBUG] Something is wrong: the app is marked as an organization app, but no organization name was returned") + } + a.App.Locked = app.Locked + } } a.Vars, err = retrieveConfigVars(a.Id, a.Client) @@ -95,10 +137,9 @@ func resourceHerokuApp() *schema.Resource { }, "organization": &schema.Schema{ - Description: "Name of Organization to create application in.
Leave blank for personal apps.", - Type: schema.TypeList, - Optional: true, - ForceNew: true, + Type: schema.TypeList, + Optional: true, + ForceNew: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "name": &schema.Schema{ @@ -122,17 +163,17 @@ func resourceHerokuApp() *schema.Resource { } } +func isOrganizationApp(d *schema.ResourceData) bool { + v := d.Get("organization").([]interface{}) + return len(v) > 0 && v[0] != nil +} + func switchHerokuAppCreate(d *schema.ResourceData, meta interface{}) error { - orgCount := d.Get("organization.#").(int) - if orgCount > 1 { - return fmt.Errorf("Error Creating Heroku App: Only 1 Heroku Organization is permitted") + if isOrganizationApp(d) { + return resourceHerokuOrgAppCreate(d, meta) } - if _, ok := d.GetOk("organization.0.name"); ok { - return resourceHerokuOrgAppCreate(d, meta) - } else { - return resourceHerokuAppCreate(d, meta) - } + return resourceHerokuAppCreate(d, meta) } func resourceHerokuAppCreate(d *schema.ResourceData, meta interface{}) error { @@ -181,19 +222,25 @@ func resourceHerokuOrgAppCreate(d *schema.ResourceData, meta interface{}) error // Build up our creation options opts := heroku.OrganizationAppCreateOpts{} - if v := d.Get("organization.0.name"); v != nil { + v := d.Get("organization").([]interface{}) + if len(v) > 1 { + return fmt.Errorf("Error Creating Heroku App: Only 1 Heroku Organization is permitted") + } + orgDetails := v[0].(map[string]interface{}) + + if v := orgDetails["name"]; v != nil { vs := v.(string) log.Printf("[DEBUG] Organization name: %s", vs) opts.Organization = &vs } - if v := d.Get("organization.0.personal"); v != nil { + if v := orgDetails["personal"]; v != nil { vs := v.(bool) log.Printf("[DEBUG] Organization Personal: %t", vs) opts.Personal = &vs } - if v := d.Get("organization.0.locked"); v != nil { + if v := orgDetails["locked"]; v != nil { vs := v.(bool) log.Printf("[DEBUG] Organization locked: %t", vs) opts.Locked = &vs @@ -236,13 +283,7 @@ func resourceHerokuOrgAppCreate(d *schema.ResourceData, meta interface{}) error func resourceHerokuAppRead(d *schema.ResourceData, meta interface{}) error { client := meta.(*heroku.Service) - app, err := resourceHerokuAppRetrieve(d.Id(), client) - if err != nil { - return err - } - // Only set the config_vars that we have set in the configuration. - // The "all_config_vars" field has all of them. configVars := make(map[string]string) care := make(map[string]struct{}) for _, v := range d.Get("config_vars").([]interface{}) { @@ -250,6 +291,16 @@ func resourceHerokuAppRead(d *schema.ResourceData, meta interface{}) error { care[k] = struct{}{} } } + + organizationApp := isOrganizationApp(d) + + // Only set the config_vars that we have set in the configuration. + // The "all_config_vars" field has all of them. 
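// Aside: the "organization" attribute above is modeled as a list holding at
// most one block. A hedged sketch of reading such a one-element block from
// the ResourceData (mirrors isOrganizationApp and the create path; the
// helper name is illustrative):
//
//	import "github.com/hashicorp/terraform/helper/schema"
//
//	// organizationName returns the configured organization name, if any.
//	func organizationName(d *schema.ResourceData) (string, bool) {
//		v := d.Get("organization").([]interface{})
//		if len(v) == 0 || v[0] == nil {
//			return "", false // no organization block configured
//		}
//		org := v[0].(map[string]interface{})
//		name, _ := org["name"].(string)
//		return name, name != ""
//	}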
+ app, err := resourceHerokuAppRetrieve(d.Id(), organizationApp, client) + if err != nil { + return err + } + for k, v := range app.Vars { if _, ok := care[k]; ok { configVars[k] = v @@ -261,12 +312,23 @@ func resourceHerokuAppRead(d *schema.ResourceData, meta interface{}) error { } d.Set("name", app.App.Name) - d.Set("stack", app.App.Stack.Name) - d.Set("region", app.App.Region.Name) + d.Set("stack", app.App.Stack) + d.Set("region", app.App.Region) d.Set("git_url", app.App.GitURL) d.Set("web_url", app.App.WebURL) d.Set("config_vars", configVarsValue) d.Set("all_config_vars", app.Vars) + if organizationApp { + orgDetails := map[string]interface{}{ + "name": app.App.OrganizationName, + "locked": app.App.Locked, + "personal": false, + } + err := d.Set("organization", []interface{}{orgDetails}) + if err != nil { + return err + } + } // We know that the hostname on heroku will be the name+herokuapp.com // You need this to do things like create DNS CNAME records @@ -327,8 +389,8 @@ func resourceHerokuAppDelete(d *schema.ResourceData, meta interface{}) error { return nil } -func resourceHerokuAppRetrieve(id string, client *heroku.Service) (*application, error) { - app := application{Id: id, Client: client} +func resourceHerokuAppRetrieve(id string, organization bool, client *heroku.Service) (*application, error) { + app := application{Id: id, Client: client, Organization: organization} err := app.Update() diff --git a/builtin/providers/heroku/resource_heroku_app_test.go b/builtin/providers/heroku/resource_heroku_app_test.go index 185d4b7d70..38da36e18e 100644 --- a/builtin/providers/heroku/resource_heroku_app_test.go +++ b/builtin/providers/heroku/resource_heroku_app_test.go @@ -2,15 +2,18 @@ package heroku import ( "fmt" + "os" "testing" "github.com/cyberdelia/heroku-go/v3" + "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" ) func TestAccHerokuApp_Basic(t *testing.T) { var app heroku.App + appName := fmt.Sprintf("tftest-%s", acctest.RandString(10)) resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -18,12 +21,12 @@ func TestAccHerokuApp_Basic(t *testing.T) { CheckDestroy: testAccCheckHerokuAppDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccCheckHerokuAppConfig_basic, + Config: testAccCheckHerokuAppConfig_basic(appName), Check: resource.ComposeTestCheckFunc( testAccCheckHerokuAppExists("heroku_app.foobar", &app), - testAccCheckHerokuAppAttributes(&app), + testAccCheckHerokuAppAttributes(&app, appName), resource.TestCheckResourceAttr( - "heroku_app.foobar", "name", "terraform-test-app"), + "heroku_app.foobar", "name", appName), resource.TestCheckResourceAttr( "heroku_app.foobar", "config_vars.0.FOO", "bar"), ), @@ -34,6 +37,8 @@ func TestAccHerokuApp_Basic(t *testing.T) { func TestAccHerokuApp_NameChange(t *testing.T) { var app heroku.App + appName := fmt.Sprintf("tftest-%s", acctest.RandString(10)) + appName2 := fmt.Sprintf("%s-v2", appName) resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -41,23 +46,23 @@ func TestAccHerokuApp_NameChange(t *testing.T) { CheckDestroy: testAccCheckHerokuAppDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccCheckHerokuAppConfig_basic, + Config: testAccCheckHerokuAppConfig_basic(appName), Check: resource.ComposeTestCheckFunc( testAccCheckHerokuAppExists("heroku_app.foobar", &app), - testAccCheckHerokuAppAttributes(&app), + testAccCheckHerokuAppAttributes(&app, appName), 
resource.TestCheckResourceAttr( - "heroku_app.foobar", "name", "terraform-test-app"), + "heroku_app.foobar", "name", appName), resource.TestCheckResourceAttr( "heroku_app.foobar", "config_vars.0.FOO", "bar"), ), }, resource.TestStep{ - Config: testAccCheckHerokuAppConfig_updated, + Config: testAccCheckHerokuAppConfig_updated(appName2), Check: resource.ComposeTestCheckFunc( testAccCheckHerokuAppExists("heroku_app.foobar", &app), - testAccCheckHerokuAppAttributesUpdated(&app), + testAccCheckHerokuAppAttributesUpdated(&app, appName2), resource.TestCheckResourceAttr( - "heroku_app.foobar", "name", "terraform-test-renamed"), + "heroku_app.foobar", "name", appName2), resource.TestCheckResourceAttr( "heroku_app.foobar", "config_vars.0.FOO", "bing"), resource.TestCheckResourceAttr( @@ -70,6 +75,7 @@ func TestAccHerokuApp_NameChange(t *testing.T) { func TestAccHerokuApp_NukeVars(t *testing.T) { var app heroku.App + appName := fmt.Sprintf("tftest-%s", acctest.RandString(10)) resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -77,23 +83,23 @@ func TestAccHerokuApp_NukeVars(t *testing.T) { CheckDestroy: testAccCheckHerokuAppDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccCheckHerokuAppConfig_basic, + Config: testAccCheckHerokuAppConfig_basic(appName), Check: resource.ComposeTestCheckFunc( testAccCheckHerokuAppExists("heroku_app.foobar", &app), - testAccCheckHerokuAppAttributes(&app), + testAccCheckHerokuAppAttributes(&app, appName), resource.TestCheckResourceAttr( - "heroku_app.foobar", "name", "terraform-test-app"), + "heroku_app.foobar", "name", appName), resource.TestCheckResourceAttr( "heroku_app.foobar", "config_vars.0.FOO", "bar"), ), }, resource.TestStep{ - Config: testAccCheckHerokuAppConfig_no_vars, + Config: testAccCheckHerokuAppConfig_no_vars(appName), Check: resource.ComposeTestCheckFunc( testAccCheckHerokuAppExists("heroku_app.foobar", &app), - testAccCheckHerokuAppAttributesNoVars(&app), + testAccCheckHerokuAppAttributesNoVars(&app, appName), resource.TestCheckResourceAttr( - "heroku_app.foobar", "name", "terraform-test-app"), + "heroku_app.foobar", "name", appName), resource.TestCheckResourceAttr( "heroku_app.foobar", "config_vars.0.FOO", ""), ), @@ -102,6 +108,32 @@ func TestAccHerokuApp_NukeVars(t *testing.T) { }) } +func TestAccHerokuApp_Organization(t *testing.T) { + var app heroku.OrganizationApp + appName := fmt.Sprintf("tftest-%s", acctest.RandString(10)) + org := os.Getenv("HEROKU_ORGANIZATION") + + resource.Test(t, resource.TestCase{ + PreCheck: func() { + testAccPreCheck(t) + if org == "" { + t.Skip("HEROKU_ORGANIZATION is not set; skipping test.") + } + }, + Providers: testAccProviders, + CheckDestroy: testAccCheckHerokuAppDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccCheckHerokuAppConfig_organization(appName, org), + Check: resource.ComposeTestCheckFunc( + testAccCheckHerokuAppExistsOrg("heroku_app.foobar", &app), + testAccCheckHerokuAppAttributesOrg(&app, appName, org), + ), + }, + }, + }) +} + func testAccCheckHerokuAppDestroy(s *terraform.State) error { client := testAccProvider.Meta().(*heroku.Service) @@ -120,7 +152,7 @@ func testAccCheckHerokuAppDestroy(s *terraform.State) error { return nil } -func testAccCheckHerokuAppAttributes(app *heroku.App) resource.TestCheckFunc { +func testAccCheckHerokuAppAttributes(app *heroku.App, appName string) resource.TestCheckFunc { return func(s *terraform.State) error { client := testAccProvider.Meta().(*heroku.Service) @@ -132,7 +164,7 @@ func 
testAccCheckHerokuAppAttributes(app *heroku.App) resource.TestCheckFunc { return fmt.Errorf("Bad stack: %s", app.Stack.Name) } - if app.Name != "terraform-test-app" { + if app.Name != appName { return fmt.Errorf("Bad name: %s", app.Name) } @@ -149,11 +181,11 @@ func testAccCheckHerokuAppAttributes(app *heroku.App) resource.TestCheckFunc { } } -func testAccCheckHerokuAppAttributesUpdated(app *heroku.App) resource.TestCheckFunc { +func testAccCheckHerokuAppAttributesUpdated(app *heroku.App, appName string) resource.TestCheckFunc { return func(s *terraform.State) error { client := testAccProvider.Meta().(*heroku.Service) - if app.Name != "terraform-test-renamed" { + if app.Name != appName { return fmt.Errorf("Bad name: %s", app.Name) } @@ -176,11 +208,11 @@ func testAccCheckHerokuAppAttributesUpdated(app *heroku.App) resource.TestCheckF } } -func testAccCheckHerokuAppAttributesNoVars(app *heroku.App) resource.TestCheckFunc { +func testAccCheckHerokuAppAttributesNoVars(app *heroku.App, appName string) resource.TestCheckFunc { return func(s *terraform.State) error { client := testAccProvider.Meta().(*heroku.Service) - if app.Name != "terraform-test-app" { + if app.Name != appName { return fmt.Errorf("Bad name: %s", app.Name) } @@ -197,6 +229,39 @@ func testAccCheckHerokuAppAttributesNoVars(app *heroku.App) resource.TestCheckFu } } +func testAccCheckHerokuAppAttributesOrg(app *heroku.OrganizationApp, appName string, org string) resource.TestCheckFunc { + return func(s *terraform.State) error { + client := testAccProvider.Meta().(*heroku.Service) + + if app.Region.Name != "us" { + return fmt.Errorf("Bad region: %s", app.Region.Name) + } + + if app.Stack.Name != "cedar-14" { + return fmt.Errorf("Bad stack: %s", app.Stack.Name) + } + + if app.Name != appName { + return fmt.Errorf("Bad name: %s", app.Name) + } + + if app.Organization == nil || app.Organization.Name != org { + return fmt.Errorf("Bad org: %v", app.Organization) + } + + vars, err := client.ConfigVarInfo(app.Name) + if err != nil { + return err + } + + if vars["FOO"] != "bar" { + return fmt.Errorf("Bad config vars: %v", vars) + } + + return nil + } +} + func testAccCheckHerokuAppExists(n string, app *heroku.App) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] @@ -227,29 +292,81 @@ func testAccCheckHerokuAppExists(n string, app *heroku.App) resource.TestCheckFu } } -const testAccCheckHerokuAppConfig_basic = ` -resource "heroku_app" "foobar" { - name = "terraform-test-app" - region = "us" +func testAccCheckHerokuAppExistsOrg(n string, app *heroku.OrganizationApp) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] - config_vars { - FOO = "bar" + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No App Name is set") + } + + client := testAccProvider.Meta().(*heroku.Service) + + foundApp, err := client.OrganizationAppInfo(rs.Primary.ID) + + if err != nil { + return err + } + + if foundApp.Name != rs.Primary.ID { + return fmt.Errorf("App not found") + } + + *app = *foundApp + + return nil } -}` +} -const testAccCheckHerokuAppConfig_updated = ` +func testAccCheckHerokuAppConfig_basic(appName string) string { + return fmt.Sprintf(` resource "heroku_app" "foobar" { - name = "terraform-test-renamed" - region = "us" + name = "%s" + region = "us" - config_vars { - FOO = "bing" - BAZ = "bar" - } -}` + config_vars { + FOO = "bar" + } +}`, appName) +} -const 
testAccCheckHerokuAppConfig_no_vars = ` +func testAccCheckHerokuAppConfig_updated(appName string) string { + return fmt.Sprintf(` resource "heroku_app" "foobar" { - name = "terraform-test-app" - region = "us" -}` + name = "%s" + region = "us" + + config_vars { + FOO = "bing" + BAZ = "bar" + } +}`, appName) +} + +func testAccCheckHerokuAppConfig_no_vars(appName string) string { + return fmt.Sprintf(` +resource "heroku_app" "foobar" { + name = "%s" + region = "us" +}`, appName) +} + +func testAccCheckHerokuAppConfig_organization(appName, org string) string { + return fmt.Sprintf(` +resource "heroku_app" "foobar" { + name = "%s" + region = "us" + + organization { + name = "%s" + } + + config_vars { + FOO = "bar" + } +}`, appName, org) +} diff --git a/builtin/providers/heroku/resource_heroku_domain_test.go b/builtin/providers/heroku/resource_heroku_domain_test.go index 344be24ca1..2d600b4e85 100644 --- a/builtin/providers/heroku/resource_heroku_domain_test.go +++ b/builtin/providers/heroku/resource_heroku_domain_test.go @@ -5,12 +5,14 @@ import ( "testing" "github.com/cyberdelia/heroku-go/v3" + "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" ) func TestAccHerokuDomain_Basic(t *testing.T) { var domain heroku.Domain + appName := fmt.Sprintf("tftest-%s", acctest.RandString(10)) resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -18,16 +20,17 @@ func TestAccHerokuDomain_Basic(t *testing.T) { CheckDestroy: testAccCheckHerokuDomainDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccCheckHerokuDomainConfig_basic, + Config: testAccCheckHerokuDomainConfig_basic(appName), Check: resource.ComposeTestCheckFunc( testAccCheckHerokuDomainExists("heroku_domain.foobar", &domain), testAccCheckHerokuDomainAttributes(&domain), resource.TestCheckResourceAttr( "heroku_domain.foobar", "hostname", "terraform.example.com"), resource.TestCheckResourceAttr( - "heroku_domain.foobar", "app", "terraform-test-app"), + "heroku_domain.foobar", "app", appName), resource.TestCheckResourceAttr( - "heroku_domain.foobar", "cname", "terraform-test-app.herokuapp.com"), + "heroku_domain.foobar", "cname", + fmt.Sprintf("%s.herokuapp.com", appName)), ), }, }, @@ -93,13 +96,14 @@ func testAccCheckHerokuDomainExists(n string, Domain *heroku.Domain) resource.Te } } -const testAccCheckHerokuDomainConfig_basic = ` -resource "heroku_app" "foobar" { - name = "terraform-test-app" +func testAccCheckHerokuDomainConfig_basic(appName string) string { + return fmt.Sprintf(`resource "heroku_app" "foobar" { + name = "%s" region = "us" } resource "heroku_domain" "foobar" { app = "${heroku_app.foobar.name}" hostname = "terraform.example.com" -}` +}`, appName) +} diff --git a/builtin/providers/heroku/resource_heroku_drain_test.go b/builtin/providers/heroku/resource_heroku_drain_test.go index e0cf6c7a80..60db1db6ee 100644 --- a/builtin/providers/heroku/resource_heroku_drain_test.go +++ b/builtin/providers/heroku/resource_heroku_drain_test.go @@ -5,12 +5,14 @@ import ( "testing" "github.com/cyberdelia/heroku-go/v3" + "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" ) func TestAccHerokuDrain_Basic(t *testing.T) { var drain heroku.LogDrain + appName := fmt.Sprintf("tftest-%s", acctest.RandString(10)) resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -18,14 +20,14 @@ func TestAccHerokuDrain_Basic(t 
*testing.T) { CheckDestroy: testAccCheckHerokuDrainDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccCheckHerokuDrainConfig_basic, + Config: testAccCheckHerokuDrainConfig_basic(appName), Check: resource.ComposeTestCheckFunc( testAccCheckHerokuDrainExists("heroku_drain.foobar", &drain), testAccCheckHerokuDrainAttributes(&drain), resource.TestCheckResourceAttr( "heroku_drain.foobar", "url", "syslog://terraform.example.com:1234"), resource.TestCheckResourceAttr( - "heroku_drain.foobar", "app", "terraform-test-app"), + "heroku_drain.foobar", "app", appName), ), }, }, @@ -95,13 +97,15 @@ func testAccCheckHerokuDrainExists(n string, Drain *heroku.LogDrain) resource.Te } } -const testAccCheckHerokuDrainConfig_basic = ` +func testAccCheckHerokuDrainConfig_basic(appName string) string { + return fmt.Sprintf(` resource "heroku_app" "foobar" { - name = "terraform-test-app" + name = "%s" region = "us" } resource "heroku_drain" "foobar" { app = "${heroku_app.foobar.name}" url = "syslog://terraform.example.com:1234" -}` +}`, appName) +} diff --git a/builtin/providers/mailgun/resource_mailgun_domain.go b/builtin/providers/mailgun/resource_mailgun_domain.go index 7dd287c90e..fb180bc0c3 100644 --- a/builtin/providers/mailgun/resource_mailgun_domain.go +++ b/builtin/providers/mailgun/resource_mailgun_domain.go @@ -3,7 +3,9 @@ package mailgun import ( "fmt" "log" + "time" + "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" "github.com/pearkes/mailgun" ) @@ -143,7 +145,16 @@ func resourceMailgunDomainDelete(d *schema.ResourceData, meta interface{}) error return fmt.Errorf("Error deleting domain: %s", err) } - return nil + // Give the destroy a chance to take effect + return resource.Retry(1*time.Minute, func() error { + _, err = client.RetrieveDomain(d.Id()) + if err == nil { + log.Printf("[INFO] Retrying until domain disappears...") + return fmt.Errorf("Domain seems to still exist; will check again.") + } + log.Printf("[INFO] Got error looking for domain, seems gone: %s", err) + return nil + }) } func resourceMailgunDomainRead(d *schema.ResourceData, meta interface{}) error { diff --git a/builtin/providers/mailgun/resource_mailgun_domain_test.go b/builtin/providers/mailgun/resource_mailgun_domain_test.go index 7bad19ddb1..0fbea1f584 100644 --- a/builtin/providers/mailgun/resource_mailgun_domain_test.go +++ b/builtin/providers/mailgun/resource_mailgun_domain_test.go @@ -48,10 +48,10 @@ func testAccCheckMailgunDomainDestroy(s *terraform.State) error { continue } - _, err := client.RetrieveDomain(rs.Primary.ID) + resp, err := client.RetrieveDomain(rs.Primary.ID) if err == nil { - return fmt.Errorf("Domain still exists") + return fmt.Errorf("Domain still exists: %#v", resp) } } diff --git a/builtin/providers/mysql/provider.go b/builtin/providers/mysql/provider.go new file mode 100644 index 0000000000..3afd7db4cb --- /dev/null +++ b/builtin/providers/mysql/provider.go @@ -0,0 +1,71 @@ +package mysql + +import ( + "fmt" + "strings" + + mysqlc "github.com/ziutek/mymysql/thrsafe" + + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/terraform" +) + +func Provider() terraform.ResourceProvider { + return &schema.Provider{ + Schema: map[string]*schema.Schema{ + "endpoint": &schema.Schema{ + Type: schema.TypeString, + Required: true, + DefaultFunc: schema.EnvDefaultFunc("MYSQL_ENDPOINT", nil), + }, + + "username": &schema.Schema{ + Type: schema.TypeString, + Required: true, + DefaultFunc: 
schema.EnvDefaultFunc("MYSQL_USERNAME", nil), + }, + + "password": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + DefaultFunc: schema.EnvDefaultFunc("MYSQL_PASSWORD", nil), + }, + }, + + ResourcesMap: map[string]*schema.Resource{ + "mysql_database": resourceDatabase(), + }, + + ConfigureFunc: providerConfigure, + } +} + +func providerConfigure(d *schema.ResourceData) (interface{}, error) { + + var username = d.Get("username").(string) + var password = d.Get("password").(string) + var endpoint = d.Get("endpoint").(string) + + proto := "tcp" + if endpoint[0] == '/' { + proto = "unix" + } + + // mysqlc is the thread-safe implementation of mymysql, so we can + // safely re-use the same connection between multiple parallel + // operations. + conn := mysqlc.New(proto, "", endpoint, username, password) + + err := conn.Connect() + if err != nil { + return nil, err + } + + return conn, nil +} + +var identQuoteReplacer = strings.NewReplacer("`", "``") + +func quoteIdentifier(in string) string { + return fmt.Sprintf("`%s`", identQuoteReplacer.Replace(in)) +} diff --git a/builtin/providers/mysql/provider_test.go b/builtin/providers/mysql/provider_test.go new file mode 100644 index 0000000000..824e2b2be2 --- /dev/null +++ b/builtin/providers/mysql/provider_test.go @@ -0,0 +1,55 @@ +package mysql + +import ( + "os" + "testing" + + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/terraform" +) + +// To run these acceptance tests, you will need access to a MySQL server. +// Amazon RDS is one way to get a MySQL server. If you use RDS, you can +// use the root account credentials you specified when creating an RDS +// instance to get the access necessary to run these tests. (the tests +// assume full access to the server.) +// +// Set the MYSQL_ENDPOINT and MYSQL_USERNAME environment variables before +// running the tests. If the given user has a password then you will also need +// to set MYSQL_PASSWORD. +// +// The tests assume a reasonably-vanilla MySQL configuration. In particular, +// they assume that the "utf8" character set is available and that +// "utf8_bin" is a valid collation that isn't the default for that character +// set. 
+// +// You can run the tests like this: +// make testacc TEST=./builtin/providers/mysql + +var testAccProviders map[string]terraform.ResourceProvider +var testAccProvider *schema.Provider + +func init() { + testAccProvider = Provider().(*schema.Provider) + testAccProviders = map[string]terraform.ResourceProvider{ + "mysql": testAccProvider, + } +} + +func TestProvider(t *testing.T) { + if err := Provider().(*schema.Provider).InternalValidate(); err != nil { + t.Fatalf("err: %s", err) + } +} + +func TestProvider_impl(t *testing.T) { + var _ terraform.ResourceProvider = Provider() +} + +func testAccPreCheck(t *testing.T) { + for _, name := range []string{"MYSQL_ENDPOINT", "MYSQL_USERNAME"} { + if v := os.Getenv(name); v == "" { + t.Fatal("MYSQL_ENDPOINT, MYSQL_USERNAME and optionally MYSQL_PASSWORD must be set for acceptance tests") + } + } +} diff --git a/builtin/providers/mysql/resource_database.go b/builtin/providers/mysql/resource_database.go new file mode 100644 index 0000000000..4aa56e8104 --- /dev/null +++ b/builtin/providers/mysql/resource_database.go @@ -0,0 +1,174 @@ +package mysql + +import ( + "fmt" + "log" + "strings" + + mysqlc "github.com/ziutek/mymysql/mysql" + + "github.com/hashicorp/terraform/helper/schema" +) + +const defaultCharacterSetKeyword = "CHARACTER SET " +const defaultCollateKeyword = "COLLATE " + +func resourceDatabase() *schema.Resource { + return &schema.Resource{ + Create: CreateDatabase, + Update: UpdateDatabase, + Read: ReadDatabase, + Delete: DeleteDatabase, + + Schema: map[string]*schema.Schema{ + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "default_character_set": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Default: "utf8", + }, + + "default_collation": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Default: "utf8_general_ci", + }, + }, + } +} + +func CreateDatabase(d *schema.ResourceData, meta interface{}) error { + conn := meta.(mysqlc.Conn) + + stmtSQL := databaseConfigSQL("CREATE", d) + log.Println("Executing statement:", stmtSQL) + + _, _, err := conn.Query(stmtSQL) + if err != nil { + return err + } + + d.SetId(d.Get("name").(string)) + + return nil +} + +func UpdateDatabase(d *schema.ResourceData, meta interface{}) error { + conn := meta.(mysqlc.Conn) + + stmtSQL := databaseConfigSQL("ALTER", d) + log.Println("Executing statement:", stmtSQL) + + _, _, err := conn.Query(stmtSQL) + if err != nil { + return err + } + + return nil +} + +func ReadDatabase(d *schema.ResourceData, meta interface{}) error { + conn := meta.(mysqlc.Conn) + + // This is kinda flimsy-feeling, since it depends on the formatting + // of the SHOW CREATE DATABASE output... but this data doesn't seem + // to be available any other way, so hopefully MySQL keeps this + // compatible in future releases. 
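// Aside: ReadDatabase below scrapes the defaults out of SHOW CREATE
// DATABASE output, which on a typical server looks roughly like:
//
//	CREATE DATABASE `example` /*!40100 DEFAULT CHARACTER SET utf8 COLLATE utf8_bin */
//
// A hedged sketch of extracting the identifier that follows a keyword in
// such a string. Unlike extractIdentAfter at the bottom of this file, the
// sketch also guards the case where nothing follows the identifier; that
// guard is an addition in this sketch, not part of the change:
//
//	import "strings"
//
//	func identAfter(sql, keyword string) string {
//		i := strings.Index(sql, keyword)
//		if i == -1 {
//			return ""
//		}
//		rest := sql[i+len(keyword):]
//		if sp := strings.IndexRune(rest, ' '); sp != -1 {
//			return rest[:sp]
//		}
//		return rest
//	}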
+ + name := d.Id() + stmtSQL := "SHOW CREATE DATABASE " + quoteIdentifier(name) + + log.Println("Executing query:", stmtSQL) + rows, _, err := conn.Query(stmtSQL) + if err != nil { + if mysqlErr, ok := err.(*mysqlc.Error); ok { + if mysqlErr.Code == mysqlc.ER_BAD_DB_ERROR { + d.SetId("") + return nil + } + } + return err + } + + row := rows[0] + createSQL := string(row[1].([]byte)) + + defaultCharset := extractIdentAfter(createSQL, defaultCharacterSetKeyword) + defaultCollation := extractIdentAfter(createSQL, defaultCollateKeyword) + + if defaultCollation == "" && defaultCharset != "" { + // MySQL doesn't return the collation if it's the default one for + // the charset, so if we don't have a collation we need to go + // hunt for the default. + stmtSQL := "SHOW COLLATION WHERE `Charset` = '%s' AND `Default` = 'Yes'" + rows, _, err := conn.Query(stmtSQL, defaultCharset) + if err != nil { + return fmt.Errorf("Error getting default charset: %s", err) + } + if len(rows) == 0 { + return fmt.Errorf("Charset %s has no default collation", defaultCharset) + } + row := rows[0] + defaultCollation = string(row[0].([]byte)) + } + + d.Set("default_character_set", defaultCharset) + d.Set("default_collation", defaultCollation) + + return nil +} + +func DeleteDatabase(d *schema.ResourceData, meta interface{}) error { + conn := meta.(mysqlc.Conn) + + name := d.Id() + stmtSQL := "DROP DATABASE " + quoteIdentifier(name) + log.Println("Executing statement:", stmtSQL) + + _, _, err := conn.Query(stmtSQL) + if err == nil { + d.SetId("") + } + return err +} + +func databaseConfigSQL(verb string, d *schema.ResourceData) string { + name := d.Get("name").(string) + defaultCharset := d.Get("default_character_set").(string) + defaultCollation := d.Get("default_collation").(string) + + var defaultCharsetClause string + var defaultCollationClause string + + if defaultCharset != "" { + defaultCharsetClause = defaultCharacterSetKeyword + quoteIdentifier(defaultCharset) + } + if defaultCollation != "" { + defaultCollationClause = defaultCollateKeyword + quoteIdentifier(defaultCollation) + } + + return fmt.Sprintf( + "%s DATABASE %s %s %s", + verb, + quoteIdentifier(name), + defaultCharsetClause, + defaultCollationClause, + ) +} + +func extractIdentAfter(sql string, keyword string) string { + charsetIndex := strings.Index(sql, keyword) + if charsetIndex != -1 { + charsetIndex += len(keyword) + remain := sql[charsetIndex:] + spaceIndex := strings.IndexRune(remain, ' ') + return remain[:spaceIndex] + } + + return "" +} diff --git a/builtin/providers/mysql/resource_database_test.go b/builtin/providers/mysql/resource_database_test.go new file mode 100644 index 0000000000..49c44256f9 --- /dev/null +++ b/builtin/providers/mysql/resource_database_test.go @@ -0,0 +1,91 @@ +package mysql + +import ( + "fmt" + "strings" + "testing" + + mysqlc "github.com/ziutek/mymysql/mysql" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccDatabase(t *testing.T) { + var dbName string + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccDatabaseCheckDestroy(dbName), + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccDatabaseConfig_basic, + Check: testAccDatabaseCheck( + "mysql_database.test", &dbName, + ), + }, + }, + }) +} + +func testAccDatabaseCheck(rn string, name *string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[rn] + if 
!ok { + return fmt.Errorf("resource not found: %s", rn) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("database id not set") + } + + conn := testAccProvider.Meta().(mysqlc.Conn) + rows, _, err := conn.Query("SHOW CREATE DATABASE terraform_acceptance_test") + if err != nil { + return fmt.Errorf("error reading database: %s", err) + } + if len(rows) != 1 { + return fmt.Errorf("expected 1 row reading database but got %d", len(rows)) + } + + row := rows[0] + createSQL := string(row[1].([]byte)) + + if strings.Index(createSQL, "CHARACTER SET utf8") == -1 { + return fmt.Errorf("database default charset isn't utf8") + } + if strings.Index(createSQL, "COLLATE utf8_bin") == -1 { + return fmt.Errorf("database default collation isn't utf8_bin") + } + + *name = rs.Primary.ID + + return nil + } +} + +func testAccDatabaseCheckDestroy(name string) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := testAccProvider.Meta().(mysqlc.Conn) + + _, _, err := conn.Query("SHOW CREATE DATABASE terraform_acceptance_test") + if err == nil { + return fmt.Errorf("database still exists after destroy") + } + if mysqlErr, ok := err.(*mysqlc.Error); ok { + if mysqlErr.Code == mysqlc.ER_BAD_DB_ERROR { + return nil + } + } + + return fmt.Errorf("got unexpected error: %s", err) + } +} + +const testAccDatabaseConfig_basic = ` +resource "mysql_database" "test" { + name = "terraform_acceptance_test" + default_character_set = "utf8" + default_collation = "utf8_bin" +} +` diff --git a/builtin/providers/openstack/devstack/deploy.sh b/builtin/providers/openstack/devstack/deploy.sh new file mode 100644 index 0000000000..2225478e1f --- /dev/null +++ b/builtin/providers/openstack/devstack/deploy.sh @@ -0,0 +1,125 @@ +#!/bin/bash + +sudo apt-get update +sudo apt-get install -y git make mercurial + +GOPKG=go1.5.2.linux-amd64.tar.gz +wget https://storage.googleapis.com/golang/$GOPKG +sudo tar -xvf $GOPKG -C /usr/local/ + +mkdir ~/go +echo 'export GOPATH=$HOME/go' >> .bashrc +echo 'export PATH=$PATH:/usr/local/go/bin:$GOPATH/bin' >> .bashrc +source .bashrc +export GOPATH=$HOME/go +export PATH=$PATH:/usr/local/go/bin:$GOPATH/bin + +go get github.com/hashicorp/terraform +cd $GOPATH/src/github.com/hashicorp/terraform +make updatedeps + +cd +git clone https://git.openstack.org/openstack-dev/devstack -b stable/liberty +cd devstack +cat >local.conf <> openrc +echo export OS_IMAGE_ID="$_IMAGE_ID" >> openrc +echo export OS_NETWORK_ID=$_NETWORK_ID >> openrc +echo export OS_POOL_NAME="public" >> openrc +echo export OS_FLAVOR_ID=99 >> openrc +source openrc demo + +cd $GOPATH/src/github.com/hashicorp/terraform +make updatedeps + +# Replace the below lines with the repo/branch you want to test +#git remote add jtopjian https://github.com/jtopjian/terraform +#git fetch jtopjian +#git checkout --track jtopjian/openstack-acctest-fixes +#make testacc TEST=./builtin/providers/openstack TESTARGS='-run=AccBlockStorageV1' +#make testacc TEST=./builtin/providers/openstack TESTARGS='-run=AccCompute' +#make testacc TEST=./builtin/providers/openstack diff --git a/builtin/providers/openstack/resource_openstack_blockstorage_volume_v1.go b/builtin/providers/openstack/resource_openstack_blockstorage_volume_v1.go index f8fde11eff..8fb445c279 100644 --- a/builtin/providers/openstack/resource_openstack_blockstorage_volume_v1.go +++ b/builtin/providers/openstack/resource_openstack_blockstorage_volume_v1.go @@ -137,7 +137,7 @@ func resourceBlockStorageVolumeV1Create(d *schema.ResourceData, meta interface{} stateConf := 
&resource.StateChangeConf{ Pending: []string{"downloading", "creating"}, - Target: "available", + Target: []string{"available"}, Refresh: VolumeV1StateRefreshFunc(blockStorageClient, v.ID), Timeout: 10 * time.Minute, Delay: 10 * time.Second, @@ -243,7 +243,7 @@ func resourceBlockStorageVolumeV1Delete(d *schema.ResourceData, meta interface{} stateConf := &resource.StateChangeConf{ Pending: []string{"in-use", "attaching"}, - Target: "available", + Target: []string{"available"}, Refresh: VolumeV1StateRefreshFunc(blockStorageClient, d.Id()), Timeout: 10 * time.Minute, Delay: 10 * time.Second, @@ -259,9 +259,13 @@ func resourceBlockStorageVolumeV1Delete(d *schema.ResourceData, meta interface{} } } - err = volumes.Delete(blockStorageClient, d.Id()).ExtractErr() - if err != nil { - return fmt.Errorf("Error deleting OpenStack volume: %s", err) + // It's possible that this volume was used as a boot device and is currently + // in a "deleting" state from when the instance was terminated. + // If this is true, just move on. It'll eventually delete. + if v.Status != "deleting" { + if err := volumes.Delete(blockStorageClient, d.Id()).ExtractErr(); err != nil { + return CheckDeleted(d, err, "volume") + } } // Wait for the volume to delete before moving on. @@ -269,7 +273,7 @@ func resourceBlockStorageVolumeV1Delete(d *schema.ResourceData, meta interface{} stateConf := &resource.StateChangeConf{ Pending: []string{"deleting", "downloading", "available"}, - Target: "deleted", + Target: []string{"deleted"}, Refresh: VolumeV1StateRefreshFunc(blockStorageClient, d.Id()), Timeout: 10 * time.Minute, Delay: 10 * time.Second, diff --git a/builtin/providers/openstack/resource_openstack_compute_instance_v2.go b/builtin/providers/openstack/resource_openstack_compute_instance_v2.go index d21e1afedb..44f959088f 100644 --- a/builtin/providers/openstack/resource_openstack_compute_instance_v2.go +++ b/builtin/providers/openstack/resource_openstack_compute_instance_v2.go @@ -176,12 +176,7 @@ func resourceComputeInstanceV2() *schema.Resource { ForceNew: true, }, "block_device": &schema.Schema{ - // TODO: This is a set because we don't support singleton - // sub-resources today. We'll enforce that the set only ever has - // length zero or one below. When TF gains support for - // sub-resources this can be converted. 
- // As referenced in resource_aws_instance.go - Type: schema.TypeSet, + Type: schema.TypeList, Optional: true, ForceNew: true, Elem: &schema.Resource{ @@ -213,10 +208,6 @@ func resourceComputeInstanceV2() *schema.Resource { }, }, }, - Set: func(v interface{}) int { - // there can only be one bootable block device; no need to hash anything - return 0 - }, }, "volume": &schema.Schema{ Type: schema.TypeSet, @@ -284,6 +275,24 @@ func resourceComputeInstanceV2() *schema.Resource { }, Set: resourceComputeSchedulerHintsHash, }, + "personality": &schema.Schema{ + Type: schema.TypeSet, + Optional: true, + ForceNew: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "file": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + "content": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + }, + }, + Set: resourceComputeInstancePersonalityHash, + }, }, } } @@ -343,6 +352,7 @@ func resourceComputeInstanceV2Create(d *schema.ResourceData, meta interface{}) e ConfigDrive: d.Get("config_drive").(bool), AdminPass: d.Get("admin_pass").(string), UserData: []byte(d.Get("user_data").(string)), + Personality: resourceInstancePersonalityV2(d), } if keyName, ok := d.Get("key_pair").(string); ok && keyName != "" { @@ -352,9 +362,8 @@ func resourceComputeInstanceV2Create(d *schema.ResourceData, meta interface{}) e } } - if v, ok := d.GetOk("block_device"); ok { - vL := v.(*schema.Set).List() - for _, v := range vL { + if vL, ok := d.GetOk("block_device"); ok { + for _, v := range vL.([]interface{}) { blockDeviceRaw := v.(map[string]interface{}) blockDevice := resourceInstanceBlockDeviceV2(d, blockDeviceRaw) createOpts = &bootfromvolume.CreateOptsExt{ @@ -402,9 +411,9 @@ func resourceComputeInstanceV2Create(d *schema.ResourceData, meta interface{}) e stateConf := &resource.StateChangeConf{ Pending: []string{"BUILD"}, - Target: "ACTIVE", + Target: []string{"ACTIVE"}, Refresh: ServerV2StateRefreshFunc(computeClient, server.ID), - Timeout: 10 * time.Minute, + Timeout: 30 * time.Minute, Delay: 10 * time.Second, MinTimeout: 3 * time.Second, } @@ -735,7 +744,7 @@ func resourceComputeInstanceV2Update(d *schema.ResourceData, meta interface{}) e stateConf := &resource.StateChangeConf{ Pending: []string{"RESIZE"}, - Target: "VERIFY_RESIZE", + Target: []string{"VERIFY_RESIZE"}, Refresh: ServerV2StateRefreshFunc(computeClient, d.Id()), Timeout: 3 * time.Minute, Delay: 10 * time.Second, @@ -756,7 +765,7 @@ func resourceComputeInstanceV2Update(d *schema.ResourceData, meta interface{}) e stateConf = &resource.StateChangeConf{ Pending: []string{"VERIFY_RESIZE"}, - Target: "ACTIVE", + Target: []string{"ACTIVE"}, Refresh: ServerV2StateRefreshFunc(computeClient, d.Id()), Timeout: 3 * time.Minute, Delay: 10 * time.Second, @@ -789,9 +798,9 @@ func resourceComputeInstanceV2Delete(d *schema.ResourceData, meta interface{}) e stateConf := &resource.StateChangeConf{ Pending: []string{"ACTIVE"}, - Target: "DELETED", + Target: []string{"DELETED"}, Refresh: ServerV2StateRefreshFunc(computeClient, d.Id()), - Timeout: 10 * time.Minute, + Timeout: 30 * time.Minute, Delay: 10 * time.Second, MinTimeout: 3 * time.Second, } @@ -1149,7 +1158,7 @@ func attachVolumesToInstance(computeClient *gophercloud.ServiceClient, blockClie stateConf := &resource.StateChangeConf{ Pending: []string{"attaching", "available"}, - Target: "in-use", + Target: []string{"in-use"}, Refresh: VolumeV1StateRefreshFunc(blockClient, va["volume_id"].(string)), Timeout: 30 * time.Minute, Delay: 5 * time.Second, @@ -1176,7 +1185,7 @@ 
func detachVolumesFromInstance(computeClient *gophercloud.ServiceClient, blockCl stateConf := &resource.StateChangeConf{ Pending: []string{"detaching", "in-use"}, - Target: "available", + Target: []string{"available"}, Refresh: VolumeV1StateRefreshFunc(blockClient, va["volume_id"].(string)), Timeout: 30 * time.Minute, Delay: 5 * time.Second, @@ -1239,12 +1248,42 @@ func checkVolumeConfig(d *schema.ResourceData) error { } } - if v, ok := d.GetOk("block_device"); ok { - vL := v.(*schema.Set).List() - if len(vL) > 1 { + if vL, ok := d.GetOk("block_device"); ok { + if len(vL.([]interface{})) > 1 { return fmt.Errorf("Can only specify one block device to boot from.") } } return nil } + +func resourceComputeInstancePersonalityHash(v interface{}) int { + var buf bytes.Buffer + m := v.(map[string]interface{}) + buf.WriteString(fmt.Sprintf("%s-", m["file"].(string))) + + return hashcode.String(buf.String()) +} + +func resourceInstancePersonalityV2(d *schema.ResourceData) servers.Personality { + var personalities servers.Personality + + if v := d.Get("personality"); v != nil { + personalityList := v.(*schema.Set).List() + if len(personalityList) > 0 { + for _, p := range personalityList { + rawPersonality := p.(map[string]interface{}) + file := servers.File{ + Path: rawPersonality["file"].(string), + Contents: []byte(rawPersonality["content"].(string)), + } + + log.Printf("[DEBUG] OpenStack Compute Instance Personality: %+v", file) + + personalities = append(personalities, &file) + } + } + } + + return personalities +} diff --git a/builtin/providers/openstack/resource_openstack_compute_instance_v2_test.go b/builtin/providers/openstack/resource_openstack_compute_instance_v2_test.go index fa5533508f..574f3b1993 100644 --- a/builtin/providers/openstack/resource_openstack_compute_instance_v2_test.go +++ b/builtin/providers/openstack/resource_openstack_compute_instance_v2_test.go @@ -253,9 +253,9 @@ func TestAccComputeV2Instance_multi_secgroups(t *testing.T) { }) } -func TestAccComputeV2Instance_bootFromVolume(t *testing.T) { +func TestAccComputeV2Instance_bootFromVolumeImage(t *testing.T) { var instance servers.Server - var testAccComputeV2Instance_bootFromVolume = fmt.Sprintf(` + var testAccComputeV2Instance_bootFromVolumeImage = fmt.Sprintf(` resource "openstack_compute_instance_v2" "foo" { name = "terraform-test" security_groups = ["default"] @@ -276,7 +276,7 @@ func TestAccComputeV2Instance_bootFromVolume(t *testing.T) { CheckDestroy: testAccCheckComputeV2InstanceDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccComputeV2Instance_bootFromVolume, + Config: testAccComputeV2Instance_bootFromVolumeImage, Check: resource.ComposeTestCheckFunc( testAccCheckComputeV2InstanceExists(t, "openstack_compute_instance_v2.foo", &instance), testAccCheckComputeV2InstanceBootVolumeAttachment(&instance), @@ -286,6 +286,77 @@ func TestAccComputeV2Instance_bootFromVolume(t *testing.T) { }) } +func TestAccComputeV2Instance_bootFromVolumeVolume(t *testing.T) { + var instance servers.Server + var testAccComputeV2Instance_bootFromVolumeVolume = fmt.Sprintf(` + resource "openstack_blockstorage_volume_v1" "foo" { + name = "terraform-test" + size = 5 + image_id = "%s" + } + + resource "openstack_compute_instance_v2" "foo" { + name = "terraform-test" + security_groups = ["default"] + block_device { + uuid = "${openstack_blockstorage_volume_v1.foo.id}" + source_type = "volume" + volume_size = 5 + boot_index = 0 + destination_type = "volume" + delete_on_termination = true + } + }`, + os.Getenv("OS_IMAGE_ID")) 
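// Aside: the personality blocks below are backed by a schema.Set whose hash
// covers only the file path, so two blocks naming the same file collapse
// into a single entry regardless of content. A tiny illustration of that
// hashing (mirrors resourceComputeInstancePersonalityHash above):
//
//	import (
//		"fmt"
//
//		"github.com/hashicorp/terraform/helper/hashcode"
//	)
//
//	func personalityKey(m map[string]interface{}) int {
//		return hashcode.String(fmt.Sprintf("%s-", m["file"].(string)))
//	}
//
// Entries with file = "/tmp/foobar.txt" but different content hash the
// same, so only one of them survives in the set.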
+ + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckComputeV2InstanceDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccComputeV2Instance_bootFromVolumeVolume, + Check: resource.ComposeTestCheckFunc( + testAccCheckComputeV2InstanceExists(t, "openstack_compute_instance_v2.foo", &instance), + testAccCheckComputeV2InstanceBootVolumeAttachment(&instance), + ), + }, + }, + }) +} + +// TODO: verify the personality really exists on the instance. +func TestAccComputeV2Instance_personality(t *testing.T) { + var instance servers.Server + var testAccComputeV2Instance_personality = fmt.Sprintf(` + resource "openstack_compute_instance_v2" "foo" { + name = "terraform-test" + security_groups = ["default"] + personality { + file = "/tmp/foobar.txt" + content = "happy" + } + personality { + file = "/tmp/barfoo.txt" + content = "angry" + } + }`) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckComputeV2InstanceDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccComputeV2Instance_personality, + Check: resource.ComposeTestCheckFunc( + testAccCheckComputeV2InstanceExists(t, "openstack_compute_instance_v2.foo", &instance), + ), + }, + }, + }) +} + func testAccCheckComputeV2InstanceDestroy(s *terraform.State) error { config := testAccProvider.Meta().(*Config) computeClient, err := config.computeV2Client(OS_REGION_NAME) diff --git a/builtin/providers/openstack/resource_openstack_compute_secgroup_v2.go b/builtin/providers/openstack/resource_openstack_compute_secgroup_v2.go index e3d281b2e1..d8d559b9b0 100644 --- a/builtin/providers/openstack/resource_openstack_compute_secgroup_v2.go +++ b/builtin/providers/openstack/resource_openstack_compute_secgroup_v2.go @@ -93,6 +93,12 @@ func resourceComputeSecGroupV2Create(d *schema.ResourceData, meta interface{}) e return fmt.Errorf("Error creating OpenStack compute client: %s", err) } + // Before creating the security group, make sure all rules are valid. + if err := checkSecGroupV2RulesForErrors(d); err != nil { + return err + } + + // If all rules are valid, proceed with creating the security group.
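// Aside: the validation invoked above, checkSecGroupV2RulesForErrors
// (defined later in this file), requires each rule to name exactly one
// traffic source, so a misconfigured rule fails before the group is
// created rather than leaving a half-configured security group behind.
// Illustrative rule form (HCL; values hypothetical):
//
//	rule {
//		from_port   = 22
//		to_port     = 22
//		ip_protocol = "tcp"
//		cidr        = "0.0.0.0/0" # or: from_group_id = "<sg id>"; or: self = true
//	}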
createOpts := secgroups.CreateOpts{ Name: d.Get("name").(string), Description: d.Get("description").(string), @@ -106,6 +112,7 @@ func resourceComputeSecGroupV2Create(d *schema.ResourceData, meta interface{}) e d.SetId(sg.ID) + // Now that the security group has been created, iterate through each rule and create it createRuleOptsList := resourceSecGroupRulesV2(d) for _, createRuleOpts := range createRuleOptsList { _, err := secgroups.CreateRule(computeClient, createRuleOpts).Extract() @@ -210,7 +217,7 @@ func resourceComputeSecGroupV2Delete(d *schema.ResourceData, meta interface{}) e stateConf := &resource.StateChangeConf{ Pending: []string{"ACTIVE"}, - Target: "DELETED", + Target: []string{"DELETED"}, Refresh: SecGroupV2StateRefreshFunc(computeClient, d), Timeout: 10 * time.Minute, Delay: 10 * time.Second, @@ -251,6 +258,42 @@ func resourceSecGroupRuleCreateOptsV2(d *schema.ResourceData, rawRule interface{ } } +func checkSecGroupV2RulesForErrors(d *schema.ResourceData) error { + rawRules := d.Get("rule").(*schema.Set).List() + for _, rawRule := range rawRules { + rawRuleMap := rawRule.(map[string]interface{}) + + // only one of cidr, from_group_id, or self can be set + cidr := rawRuleMap["cidr"].(string) + groupId := rawRuleMap["from_group_id"].(string) + self := rawRuleMap["self"].(bool) + errorMessage := fmt.Errorf("Only one of cidr, from_group_id, or self can be set.") + + // if cidr is set, from_group_id and self cannot be set + if cidr != "" { + if groupId != "" || self { + return errorMessage + } + } + + // if from_group_id is set, cidr and self cannot be set + if groupId != "" { + if cidr != "" || self { + return errorMessage + } + } + + // if self is set, cidr and from_group_id cannot be set + if self { + if cidr != "" || groupId != "" { + return errorMessage + } + } + } + + return nil +} + func resourceSecGroupRuleV2(d *schema.ResourceData, rawRule interface{}) secgroups.Rule { rawRuleMap := rawRule.(map[string]interface{}) return secgroups.Rule{ diff --git a/builtin/providers/openstack/resource_openstack_compute_secgroup_v2_test.go b/builtin/providers/openstack/resource_openstack_compute_secgroup_v2_test.go index 4cb99fa741..28223fa1bb 100644 --- a/builtin/providers/openstack/resource_openstack_compute_secgroup_v2_test.go +++ b/builtin/providers/openstack/resource_openstack_compute_secgroup_v2_test.go @@ -97,9 +97,9 @@ func TestAccComputeV2SecGroup_self(t *testing.T) { testAccCheckComputeV2SecGroupExists(t, "openstack_compute_secgroup_v2.test_group_1", &secgroup), testAccCheckComputeV2SecGroupGroupIDMatch(t, &secgroup, &secgroup), resource.TestCheckResourceAttr( - "openstack_compute_secgroup_v2.test_group_1", "rule.1118853483.self", "true"), + "openstack_compute_secgroup_v2.test_group_1", "rule.3170486100.self", "true"), resource.TestCheckResourceAttr( - "openstack_compute_secgroup_v2.test_group_1", "rule.1118853483.from_group_id", ""), + "openstack_compute_secgroup_v2.test_group_1", "rule.3170486100.from_group_id", ""), ), }, }, diff --git a/builtin/providers/openstack/resource_openstack_fw_firewall_v1.go b/builtin/providers/openstack/resource_openstack_fw_firewall_v1.go index 2fa505e56e..b4099a7cb0 100644 --- a/builtin/providers/openstack/resource_openstack_fw_firewall_v1.go +++ b/builtin/providers/openstack/resource_openstack_fw_firewall_v1.go @@ -81,7 +81,7 @@ func resourceFWFirewallV1Create(d *schema.ResourceData, meta interface{}) error stateConf := &resource.StateChangeConf{ Pending: []string{"PENDING_CREATE"}, - Target: "ACTIVE", + Target: []string{"ACTIVE"}, Refresh: 
waitForFirewallActive(networkingClient, firewall.ID), Timeout: 30 * time.Second, Delay: 0, @@ -150,7 +150,7 @@ func resourceFWFirewallV1Update(d *schema.ResourceData, meta interface{}) error stateConf := &resource.StateChangeConf{ Pending: []string{"PENDING_CREATE", "PENDING_UPDATE"}, - Target: "ACTIVE", + Target: []string{"ACTIVE"}, Refresh: waitForFirewallActive(networkingClient, d.Id()), Timeout: 30 * time.Second, Delay: 0, @@ -178,7 +178,7 @@ func resourceFWFirewallV1Delete(d *schema.ResourceData, meta interface{}) error stateConf := &resource.StateChangeConf{ Pending: []string{"PENDING_CREATE", "PENDING_UPDATE"}, - Target: "ACTIVE", + Target: []string{"ACTIVE"}, Refresh: waitForFirewallActive(networkingClient, d.Id()), Timeout: 30 * time.Second, Delay: 0, @@ -195,7 +195,7 @@ func resourceFWFirewallV1Delete(d *schema.ResourceData, meta interface{}) error stateConf = &resource.StateChangeConf{ Pending: []string{"DELETING"}, - Target: "DELETED", + Target: []string{"DELETED"}, Refresh: waitForFirewallDeletion(networkingClient, d.Id()), Timeout: 2 * time.Minute, Delay: 0, diff --git a/builtin/providers/openstack/resource_openstack_lb_monitor_v1.go b/builtin/providers/openstack/resource_openstack_lb_monitor_v1.go index 8774dadca0..678c63b1a7 100644 --- a/builtin/providers/openstack/resource_openstack_lb_monitor_v1.go +++ b/builtin/providers/openstack/resource_openstack_lb_monitor_v1.go @@ -4,8 +4,12 @@ import ( "fmt" "log" "strconv" + "time" + "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" + + "github.com/rackspace/gophercloud" "github.com/rackspace/gophercloud/openstack/networking/v2/extensions/lbaas/monitors" ) @@ -108,6 +112,22 @@ func resourceLBMonitorV1Create(d *schema.ResourceData, meta interface{}) error { } log.Printf("[INFO] LB Monitor ID: %s", m.ID) + log.Printf("[DEBUG] Waiting for OpenStack LB Monitor (%s) to become available.", m.ID) + + stateConf := &resource.StateChangeConf{ + Pending: []string{"PENDING"}, + Target: []string{"ACTIVE"}, + Refresh: waitForLBMonitorActive(networkingClient, m.ID), + Timeout: 2 * time.Minute, + Delay: 5 * time.Second, + MinTimeout: 3 * time.Second, + } + + _, err = stateConf.WaitForState() + if err != nil { + return err + } + d.SetId(m.ID) return resourceLBMonitorV1Read(d, meta) @@ -184,7 +204,16 @@ func resourceLBMonitorV1Delete(d *schema.ResourceData, meta interface{}) error { return fmt.Errorf("Error creating OpenStack networking client: %s", err) } - err = monitors.Delete(networkingClient, d.Id()).ExtractErr() + stateConf := &resource.StateChangeConf{ + Pending: []string{"ACTIVE", "PENDING"}, + Target: []string{"DELETED"}, + Refresh: waitForLBMonitorDelete(networkingClient, d.Id()), + Timeout: 2 * time.Minute, + Delay: 5 * time.Second, + MinTimeout: 3 * time.Second, + } + + _, err = stateConf.WaitForState() if err != nil { return fmt.Errorf("Error deleting OpenStack LB Monitor: %s", err) } @@ -192,3 +221,59 @@ func resourceLBMonitorV1Delete(d *schema.ResourceData, meta interface{}) error { d.SetId("") return nil } + +func waitForLBMonitorActive(networkingClient *gophercloud.ServiceClient, monitorId string) resource.StateRefreshFunc { + return func() (interface{}, string, error) { + m, err := monitors.Get(networkingClient, monitorId).Extract() + if err != nil { + return nil, "", err + } + + // The monitor resource has no Status attribute, so a successful Get is the best we can do + log.Printf("[DEBUG] OpenStack LB Monitor: %+v", m) + return m, "ACTIVE", nil + } +} + +func 
waitForLBMonitorDelete(networkingClient *gophercloud.ServiceClient, monitorId string) resource.StateRefreshFunc { + return func() (interface{}, string, error) { + log.Printf("[DEBUG] Attempting to delete OpenStack LB Monitor %s", monitorId) + + m, err := monitors.Get(networkingClient, monitorId).Extract() + if err != nil { + errCode, ok := err.(*gophercloud.UnexpectedResponseCodeError) + if !ok { + return m, "ACTIVE", err + } + if errCode.Actual == 404 { + log.Printf("[DEBUG] Successfully deleted OpenStack LB Monitor %s", monitorId) + return m, "DELETED", nil + } + if errCode.Actual == 409 { + log.Printf("[DEBUG] OpenStack LB Monitor (%s) is waiting for Pool to delete.", monitorId) + return m, "PENDING", nil + } + } + + log.Printf("[DEBUG] OpenStack LB Monitor: %+v", m) + err = monitors.Delete(networkingClient, monitorId).ExtractErr() + if err != nil { + errCode, ok := err.(*gophercloud.UnexpectedResponseCodeError) + if !ok { + return m, "ACTIVE", err + } + if errCode.Actual == 404 { + log.Printf("[DEBUG] Successfully deleted OpenStack LB Monitor %s", monitorId) + return m, "DELETED", nil + } + if errCode.Actual == 409 { + log.Printf("[DEBUG] OpenStack LB Monitor (%s) is waiting for Pool to delete.", monitorId) + return m, "PENDING", nil + } + } + + log.Printf("[DEBUG] OpenStack LB Monitor %s still active.", monitorId) + return m, "ACTIVE", nil + } + +} diff --git a/builtin/providers/openstack/resource_openstack_lb_pool_v1.go b/builtin/providers/openstack/resource_openstack_lb_pool_v1.go index 64e0436dbc..d71fb168d6 100644 --- a/builtin/providers/openstack/resource_openstack_lb_pool_v1.go +++ b/builtin/providers/openstack/resource_openstack_lb_pool_v1.go @@ -4,9 +4,13 @@ import ( "bytes" "fmt" "log" + "time" "github.com/hashicorp/terraform/helper/hashcode" + "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" + + "github.com/rackspace/gophercloud" "github.com/rackspace/gophercloud/openstack/networking/v2/extensions/lbaas/members" "github.com/rackspace/gophercloud/openstack/networking/v2/extensions/lbaas/pools" "github.com/rackspace/gophercloud/pagination" @@ -123,6 +127,21 @@ func resourceLBPoolV1Create(d *schema.ResourceData, meta interface{}) error { } log.Printf("[INFO] LB Pool ID: %s", p.ID) + log.Printf("[DEBUG] Waiting for OpenStack LB pool (%s) to become available.", p.ID) + + stateConf := &resource.StateChangeConf{ + Target: []string{"ACTIVE"}, + Refresh: waitForLBPoolActive(networkingClient, p.ID), + Timeout: 2 * time.Minute, + Delay: 5 * time.Second, + MinTimeout: 3 * time.Second, + } + + _, err = stateConf.WaitForState() + if err != nil { + return err + } + d.SetId(p.ID) if mIDs := resourcePoolMonitorIDsV1(d); mIDs != nil { @@ -273,7 +292,16 @@ func resourceLBPoolV1Delete(d *schema.ResourceData, meta interface{}) error { return fmt.Errorf("Error creating OpenStack networking client: %s", err) } - err = pools.Delete(networkingClient, d.Id()).ExtractErr() + stateConf := &resource.StateChangeConf{ + Pending: []string{"ACTIVE"}, + Target: []string{"DELETED"}, + Refresh: waitForLBPoolDelete(networkingClient, d.Id()), + Timeout: 2 * time.Minute, + Delay: 5 * time.Second, + MinTimeout: 3 * time.Second, + } + + _, err = stateConf.WaitForState() if err != nil { return fmt.Errorf("Error deleting OpenStack LB Pool: %s", err) } @@ -326,3 +354,54 @@ func resourceLBMemberV1Hash(v interface{}) int { return hashcode.String(buf.String()) } + +func waitForLBPoolActive(networkingClient *gophercloud.ServiceClient, poolId string) resource.StateRefreshFunc 
{ + return func() (interface{}, string, error) { + p, err := pools.Get(networkingClient, poolId).Extract() + if err != nil { + return nil, "", err + } + + log.Printf("[DEBUG] OpenStack LB Pool: %+v", p) + if p.Status == "ACTIVE" { + return p, "ACTIVE", nil + } + + return p, p.Status, nil + } +} + +func waitForLBPoolDelete(networkingClient *gophercloud.ServiceClient, poolId string) resource.StateRefreshFunc { + return func() (interface{}, string, error) { + log.Printf("[DEBUG] Attempting to delete OpenStack LB Pool %s", poolId) + + p, err := pools.Get(networkingClient, poolId).Extract() + if err != nil { + errCode, ok := err.(*gophercloud.UnexpectedResponseCodeError) + if !ok { + return p, "ACTIVE", err + } + if errCode.Actual == 404 { + log.Printf("[DEBUG] Successfully deleted OpenStack LB Pool %s", poolId) + return p, "DELETED", nil + } + } + + log.Printf("[DEBUG] OpenStack LB Pool: %+v", p) + err = pools.Delete(networkingClient, poolId).ExtractErr() + if err != nil { + errCode, ok := err.(*gophercloud.UnexpectedResponseCodeError) + if !ok { + return p, "ACTIVE", err + } + if errCode.Actual == 404 { + log.Printf("[DEBUG] Successfully deleted OpenStack LB Pool %s", poolId) + return p, "DELETED", nil + } + } + + log.Printf("[DEBUG] OpenStack LB Pool %s still active.", poolId) + return p, "ACTIVE", nil + } + +} diff --git a/builtin/providers/openstack/resource_openstack_lb_pool_v1_test.go b/builtin/providers/openstack/resource_openstack_lb_pool_v1_test.go index 1889c23845..104e359485 100644 --- a/builtin/providers/openstack/resource_openstack_lb_pool_v1_test.go +++ b/builtin/providers/openstack/resource_openstack_lb_pool_v1_test.go @@ -7,7 +7,13 @@ import ( "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" + "github.com/rackspace/gophercloud/openstack/compute/v2/extensions/secgroups" + "github.com/rackspace/gophercloud/openstack/compute/v2/servers" + "github.com/rackspace/gophercloud/openstack/networking/v2/extensions/lbaas/monitors" "github.com/rackspace/gophercloud/openstack/networking/v2/extensions/lbaas/pools" + "github.com/rackspace/gophercloud/openstack/networking/v2/extensions/lbaas/vips" + "github.com/rackspace/gophercloud/openstack/networking/v2/networks" + "github.com/rackspace/gophercloud/openstack/networking/v2/subnets" ) func TestAccLBV1Pool_basic(t *testing.T) { @@ -34,6 +40,37 @@ func TestAccLBV1Pool_basic(t *testing.T) { }) } +func TestAccLBV1Pool_fullstack(t *testing.T) { + var instance1, instance2 servers.Server + var monitor monitors.Monitor + var network networks.Network + var pool pools.Pool + var secgroup secgroups.SecurityGroup + var subnet subnets.Subnet + var vip vips.VirtualIP + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckLBV1PoolDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccLBV1Pool_fullstack, + Check: resource.ComposeTestCheckFunc( + testAccCheckNetworkingV2NetworkExists(t, "openstack_networking_network_v2.network_1", &network), + testAccCheckNetworkingV2SubnetExists(t, "openstack_networking_subnet_v2.subnet_1", &subnet), + testAccCheckComputeV2SecGroupExists(t, "openstack_compute_secgroup_v2.secgroup_1", &secgroup), + testAccCheckComputeV2InstanceExists(t, "openstack_compute_instance_v2.instance_1", &instance1), + testAccCheckComputeV2InstanceExists(t, "openstack_compute_instance_v2.instance_2", &instance2), + testAccCheckLBV1PoolExists(t, "openstack_lb_pool_v1.pool_1", &pool), + 
testAccCheckLBV1MonitorExists(t, "openstack_lb_monitor_v1.monitor_1", &monitor), + testAccCheckLBV1VIPExists(t, "openstack_lb_vip_v1.vip_1", &vip), + ), + }, + }, + }) +} + func testAccCheckLBV1PoolDestroy(s *terraform.State) error { config := testAccProvider.Meta().(*Config) networkingClient, err := config.networkingV2Client(OS_REGION_NAME) @@ -132,3 +169,86 @@ var testAccLBV1Pool_update = fmt.Sprintf(` lb_method = "ROUND_ROBIN" }`, OS_REGION_NAME, OS_REGION_NAME, OS_REGION_NAME) + +var testAccLBV1Pool_fullstack = fmt.Sprintf(` + resource "openstack_networking_network_v2" "network_1" { + name = "network_1" + admin_state_up = "true" + } + + resource "openstack_networking_subnet_v2" "subnet_1" { + network_id = "${openstack_networking_network_v2.network_1.id}" + cidr = "192.168.199.0/24" + ip_version = 4 + } + + resource "openstack_compute_secgroup_v2" "secgroup_1" { + name = "secgroup_1" + description = "Rules for secgroup_1" + + rule { + from_port = -1 + to_port = -1 + ip_protocol = "icmp" + cidr = "0.0.0.0/0" + } + + rule { + from_port = 80 + to_port = 80 + ip_protocol = "tcp" + cidr = "0.0.0.0/0" + } + } + + resource "openstack_compute_instance_v2" "instance_1" { + name = "instance_1" + security_groups = ["default", "${openstack_compute_secgroup_v2.secgroup_1.name}"] + network { + uuid = "${openstack_networking_network_v2.network_1.id}" + } + } + + resource "openstack_compute_instance_v2" "instance_2" { + name = "instance_2" + security_groups = ["default", "${openstack_compute_secgroup_v2.secgroup_1.name}"] + network { + uuid = "${openstack_networking_network_v2.network_1.id}" + } + } + + resource "openstack_lb_monitor_v1" "monitor_1" { + type = "TCP" + delay = 30 + timeout = 5 + max_retries = 3 + admin_state_up = "true" + } + + resource "openstack_lb_pool_v1" "pool_1" { + name = "pool_1" + protocol = "TCP" + subnet_id = "${openstack_networking_subnet_v2.subnet_1.id}" + lb_method = "ROUND_ROBIN" + monitor_ids = ["${openstack_lb_monitor_v1.monitor_1.id}"] + + member { + address = "${openstack_compute_instance_v2.instance_1.access_ip_v4}" + port = 80 + admin_state_up = "true" + } + + member { + address = "${openstack_compute_instance_v2.instance_2.access_ip_v4}" + port = 80 + admin_state_up = "true" + } + } + + resource "openstack_lb_vip_v1" "vip_1" { + name = "vip_1" + subnet_id = "${openstack_networking_subnet_v2.subnet_1.id}" + protocol = "TCP" + port = 80 + pool_id = "${openstack_lb_pool_v1.pool_1.id}" + }`) diff --git a/builtin/providers/openstack/resource_openstack_lb_vip_v1.go b/builtin/providers/openstack/resource_openstack_lb_vip_v1.go index dd165df772..89d148bdac 100644 --- a/builtin/providers/openstack/resource_openstack_lb_vip_v1.go +++ b/builtin/providers/openstack/resource_openstack_lb_vip_v1.go @@ -3,7 +3,9 @@ package openstack import ( "fmt" "log" + "time" + "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" "github.com/rackspace/gophercloud" "github.com/rackspace/gophercloud/openstack/networking/v2/extensions/layer3/floatingips" @@ -128,6 +130,22 @@ func resourceLBVipV1Create(d *schema.ResourceData, meta interface{}) error { } log.Printf("[INFO] LB VIP ID: %s", p.ID) + log.Printf("[DEBUG] Waiting for OpenStack LB VIP (%s) to become available.", p.ID) + + stateConf := &resource.StateChangeConf{ + Pending: []string{"PENDING_CREATE"}, + Target: []string{"ACTIVE"}, + Refresh: waitForLBVIPActive(networkingClient, p.ID), + Timeout: 2 * time.Minute, + Delay: 5 * time.Second, + MinTimeout: 3 * time.Second, + } + + _, err = 
stateConf.WaitForState() + if err != nil { + return err + } + floatingIP := d.Get("floating_ip").(string) if floatingIP != "" { lbVipV1AssignFloatingIP(floatingIP, p.PortID, networkingClient) @@ -245,7 +263,16 @@ func resourceLBVipV1Delete(d *schema.ResourceData, meta interface{}) error { return fmt.Errorf("Error creating OpenStack networking client: %s", err) } - err = vips.Delete(networkingClient, d.Id()).ExtractErr() + stateConf := &resource.StateChangeConf{ + Pending: []string{"ACTIVE"}, + Target: []string{"DELETED"}, + Refresh: waitForLBVIPDelete(networkingClient, d.Id()), + Timeout: 2 * time.Minute, + Delay: 5 * time.Second, + MinTimeout: 3 * time.Second, + } + + _, err = stateConf.WaitForState() if err != nil { return fmt.Errorf("Error deleting OpenStack LB VIP: %s", err) } @@ -298,3 +325,54 @@ func lbVipV1AssignFloatingIP(floatingIP, portID string, networkingClient *gopher return nil } + +func waitForLBVIPActive(networkingClient *gophercloud.ServiceClient, vipId string) resource.StateRefreshFunc { + return func() (interface{}, string, error) { + p, err := vips.Get(networkingClient, vipId).Extract() + if err != nil { + return nil, "", err + } + + log.Printf("[DEBUG] OpenStack LB VIP: %+v", p) + if p.Status == "ACTIVE" { + return p, "ACTIVE", nil + } + + return p, p.Status, nil + } +} + +func waitForLBVIPDelete(networkingClient *gophercloud.ServiceClient, vipId string) resource.StateRefreshFunc { + return func() (interface{}, string, error) { + log.Printf("[DEBUG] Attempting to delete OpenStack LB VIP %s", vipId) + + p, err := vips.Get(networkingClient, vipId).Extract() + if err != nil { + errCode, ok := err.(*gophercloud.UnexpectedResponseCodeError) + if !ok { + return p, "ACTIVE", err + } + if errCode.Actual == 404 { + log.Printf("[DEBUG] Successfully deleted OpenStack LB VIP %s", vipId) + return p, "DELETED", nil + } + } + + log.Printf("[DEBUG] OpenStack LB VIP: %+v", p) + err = vips.Delete(networkingClient, vipId).ExtractErr() + if err != nil { + errCode, ok := err.(*gophercloud.UnexpectedResponseCodeError) + if !ok { + return p, "ACTIVE", err + } + if errCode.Actual == 404 { + log.Printf("[DEBUG] Successfully deleted OpenStack LB VIP %s", vipId) + return p, "DELETED", nil + } + } + + log.Printf("[DEBUG] OpenStack LB VIP %s still active.", vipId) + return p, "ACTIVE", nil + } + +} diff --git a/builtin/providers/openstack/resource_openstack_networking_floatingip_v2.go b/builtin/providers/openstack/resource_openstack_networking_floatingip_v2.go index 5d393c9ad6..4ec8b0a720 100644 --- a/builtin/providers/openstack/resource_openstack_networking_floatingip_v2.go +++ b/builtin/providers/openstack/resource_openstack_networking_floatingip_v2.go @@ -74,7 +74,7 @@ func resourceNetworkFloatingIPV2Create(d *schema.ResourceData, meta interface{}) log.Printf("[DEBUG] Waiting for OpenStack Neutron Floating IP (%s) to become available.", floatingIP.ID) stateConf := &resource.StateChangeConf{ - Target: "ACTIVE", + Target: []string{"ACTIVE"}, Refresh: waitForFloatingIPActive(networkingClient, floatingIP.ID), Timeout: 2 * time.Minute, Delay: 5 * time.Second, @@ -143,7 +143,7 @@ func resourceNetworkFloatingIPV2Delete(d *schema.ResourceData, meta interface{}) stateConf := &resource.StateChangeConf{ Pending: []string{"ACTIVE"}, - Target: "DELETED", + Target: []string{"DELETED"}, Refresh: waitForFloatingIPDelete(networkingClient, d.Id()), Timeout: 2 * time.Minute, Delay: 5 * time.Second, diff --git a/builtin/providers/openstack/resource_openstack_networking_network_v2.go 
b/builtin/providers/openstack/resource_openstack_networking_network_v2.go index 4073a76121..a4d05cec17 100644 --- a/builtin/providers/openstack/resource_openstack_networking_network_v2.go +++ b/builtin/providers/openstack/resource_openstack_networking_network_v2.go @@ -95,7 +95,7 @@ func resourceNetworkingNetworkV2Create(d *schema.ResourceData, meta interface{}) stateConf := &resource.StateChangeConf{ Pending: []string{"BUILD"}, - Target: "ACTIVE", + Target: []string{"ACTIVE"}, Refresh: waitForNetworkActive(networkingClient, n.ID), Timeout: 2 * time.Minute, Delay: 5 * time.Second, @@ -182,7 +182,7 @@ func resourceNetworkingNetworkV2Delete(d *schema.ResourceData, meta interface{}) stateConf := &resource.StateChangeConf{ Pending: []string{"ACTIVE"}, - Target: "DELETED", + Target: []string{"DELETED"}, Refresh: waitForNetworkDelete(networkingClient, d.Id()), Timeout: 2 * time.Minute, Delay: 5 * time.Second, diff --git a/builtin/providers/openstack/resource_openstack_networking_port_v2.go b/builtin/providers/openstack/resource_openstack_networking_port_v2.go index 0b8d33ad5a..987e1025e1 100644 --- a/builtin/providers/openstack/resource_openstack_networking_port_v2.go +++ b/builtin/providers/openstack/resource_openstack_networking_port_v2.go @@ -127,7 +127,7 @@ func resourceNetworkingPortV2Create(d *schema.ResourceData, meta interface{}) er log.Printf("[DEBUG] Waiting for OpenStack Neutron Port (%s) to become available.", p.ID) stateConf := &resource.StateChangeConf{ - Target: "ACTIVE", + Target: []string{"ACTIVE"}, Refresh: waitForNetworkPortActive(networkingClient, p.ID), Timeout: 2 * time.Minute, Delay: 5 * time.Second, @@ -220,7 +220,7 @@ func resourceNetworkingPortV2Delete(d *schema.ResourceData, meta interface{}) er stateConf := &resource.StateChangeConf{ Pending: []string{"ACTIVE"}, - Target: "DELETED", + Target: []string{"DELETED"}, Refresh: waitForNetworkPortDelete(networkingClient, d.Id()), Timeout: 2 * time.Minute, Delay: 5 * time.Second, @@ -245,8 +245,13 @@ func resourcePortSecurityGroupsV2(d *schema.ResourceData) []string { return groups } -func resourcePortFixedIpsV2(d *schema.ResourceData) []ports.IP { +func resourcePortFixedIpsV2(d *schema.ResourceData) interface{} { rawIP := d.Get("fixed_ip").([]interface{}) + + if len(rawIP) == 0 { + return nil + } + ip := make([]ports.IP, len(rawIP)) for i, raw := range rawIP { rawMap := raw.(map[string]interface{}) @@ -255,8 +260,8 @@ func resourcePortFixedIpsV2(d *schema.ResourceData) []ports.IP { IPAddress: rawMap["ip_address"].(string), } } - return ip + } func resourcePortAdminStateUpV2(d *schema.ResourceData) *bool { diff --git a/builtin/providers/openstack/resource_openstack_networking_router_interface_v2.go b/builtin/providers/openstack/resource_openstack_networking_router_interface_v2.go index 8241a6f446..a744daf073 100644 --- a/builtin/providers/openstack/resource_openstack_networking_router_interface_v2.go +++ b/builtin/providers/openstack/resource_openstack_networking_router_interface_v2.go @@ -68,7 +68,7 @@ func resourceNetworkingRouterInterfaceV2Create(d *schema.ResourceData, meta inte stateConf := &resource.StateChangeConf{ Pending: []string{"BUILD", "PENDING_CREATE", "PENDING_UPDATE"}, - Target: "ACTIVE", + Target: []string{"ACTIVE"}, Refresh: waitForRouterInterfaceActive(networkingClient, n.PortID), Timeout: 2 * time.Minute, Delay: 5 * time.Second, @@ -117,7 +117,7 @@ func resourceNetworkingRouterInterfaceV2Delete(d *schema.ResourceData, meta inte stateConf := &resource.StateChangeConf{ Pending: []string{"ACTIVE"}, - Target: 
"DELETED", + Target: []string{"DELETED"}, Refresh: waitForRouterInterfaceDelete(networkingClient, d), Timeout: 2 * time.Minute, Delay: 5 * time.Second, diff --git a/builtin/providers/openstack/resource_openstack_networking_router_v2.go b/builtin/providers/openstack/resource_openstack_networking_router_v2.go index 9c030eafb2..db488c0316 100644 --- a/builtin/providers/openstack/resource_openstack_networking_router_v2.go +++ b/builtin/providers/openstack/resource_openstack_networking_router_v2.go @@ -87,7 +87,7 @@ func resourceNetworkingRouterV2Create(d *schema.ResourceData, meta interface{}) log.Printf("[DEBUG] Waiting for OpenStack Neutron Router (%s) to become available", n.ID) stateConf := &resource.StateChangeConf{ Pending: []string{"BUILD", "PENDING_CREATE", "PENDING_UPDATE"}, - Target: "ACTIVE", + Target: []string{"ACTIVE"}, Refresh: waitForRouterActive(networkingClient, n.ID), Timeout: 2 * time.Minute, Delay: 5 * time.Second, @@ -167,7 +167,7 @@ func resourceNetworkingRouterV2Delete(d *schema.ResourceData, meta interface{}) stateConf := &resource.StateChangeConf{ Pending: []string{"ACTIVE"}, - Target: "DELETED", + Target: []string{"DELETED"}, Refresh: waitForRouterDelete(networkingClient, d.Id()), Timeout: 2 * time.Minute, Delay: 5 * time.Second, diff --git a/builtin/providers/openstack/resource_openstack_networking_subnet_v2.go b/builtin/providers/openstack/resource_openstack_networking_subnet_v2.go index 74ac91a341..13c9d2fab4 100644 --- a/builtin/providers/openstack/resource_openstack_networking_subnet_v2.go +++ b/builtin/providers/openstack/resource_openstack_networking_subnet_v2.go @@ -146,7 +146,7 @@ func resourceNetworkingSubnetV2Create(d *schema.ResourceData, meta interface{}) log.Printf("[DEBUG] Waiting for Subnet (%s) to become available", s.ID) stateConf := &resource.StateChangeConf{ - Target: "ACTIVE", + Target: []string{"ACTIVE"}, Refresh: waitForSubnetActive(networkingClient, s.ID), Timeout: 2 * time.Minute, Delay: 5 * time.Second, @@ -237,7 +237,7 @@ func resourceNetworkingSubnetV2Delete(d *schema.ResourceData, meta interface{}) stateConf := &resource.StateChangeConf{ Pending: []string{"ACTIVE"}, - Target: "DELETED", + Target: []string{"DELETED"}, Refresh: waitForSubnetDelete(networkingClient, d.Id()), Timeout: 2 * time.Minute, Delay: 5 * time.Second, diff --git a/builtin/providers/packet/config.go b/builtin/providers/packet/config.go index bce54bf48c..92d0c22af8 100644 --- a/builtin/providers/packet/config.go +++ b/builtin/providers/packet/config.go @@ -13,7 +13,7 @@ type Config struct { AuthToken string } -// Client() returns a new client for accessing packet. +// Client() returns a new client for accessing Packet's API. 
func (c *Config) Client() *packngo.Client { return packngo.NewClient(consumerToken, c.AuthToken, cleanhttp.DefaultClient()) } diff --git a/builtin/providers/packet/errors.go b/builtin/providers/packet/errors.go new file mode 100644 index 0000000000..1c19dc4d91 --- /dev/null +++ b/builtin/providers/packet/errors.go @@ -0,0 +1,43 @@ +package packet + +import ( + "net/http" + "strings" + + "github.com/packethost/packngo" +) + +func friendlyError(err error) error { + if e, ok := err.(*packngo.ErrorResponse); ok { + return &ErrorResponse{ + StatusCode: e.Response.StatusCode, + Errors: Errors(e.Errors), + } + } + return err +} + +func isForbidden(err error) bool { + if r, ok := err.(*ErrorResponse); ok { + return r.StatusCode == http.StatusForbidden + } + return false +} + +func isNotFound(err error) bool { + if r, ok := err.(*ErrorResponse); ok { + return r.StatusCode == http.StatusNotFound + } + return false +} + +type Errors []string + +func (e Errors) Error() string { + return strings.Join(e, "; ") +} + +type ErrorResponse struct { + StatusCode int + Errors +} diff --git a/builtin/providers/packet/provider.go b/builtin/providers/packet/provider.go index c1efd6e838..82f7dbf77d 100644 --- a/builtin/providers/packet/provider.go +++ b/builtin/providers/packet/provider.go @@ -5,7 +5,7 @@ import ( "github.com/hashicorp/terraform/terraform" ) -// Provider returns a schema.Provider for Packet. +// Provider returns a schema.Provider for managing Packet infrastructure. func Provider() terraform.ResourceProvider { return &schema.Provider{ Schema: map[string]*schema.Schema{ @@ -31,6 +31,5 @@ func providerConfigure(d *schema.ResourceData) (interface{}, error) { config := Config{ AuthToken: d.Get("auth_token").(string), } - return config.Client(), nil } diff --git a/builtin/providers/packet/resource_packet_device.go b/builtin/providers/packet/resource_packet_device.go index c7cd777a2f..2c6e3de548 100644 --- a/builtin/providers/packet/resource_packet_device.go +++ b/builtin/providers/packet/resource_packet_device.go @@ -1,8 +1,8 @@ package packet import ( + "errors" "fmt" - "log" "time" "github.com/hashicorp/terraform/helper/resource" @@ -146,22 +146,23 @@ func resourcePacketDeviceCreate(d *schema.ResourceData, meta interface{}) error } } - log.Printf("[DEBUG] Device create configuration: %#v", createRequest) - newDevice, _, err := client.Devices.Create(createRequest) if err != nil { - return fmt.Errorf("Error creating device: %s", err) + return friendlyError(err) } - // Assign the device id d.SetId(newDevice.ID) - log.Printf("[INFO] Device ID: %s", d.Id()) - - _, err = WaitForDeviceAttribute(d, "active", []string{"queued", "provisioning"}, "state", meta) + // Wait for the device so we can get the networking attributes that show up after a while. + _, err = waitForDeviceAttribute(d, "active", []string{"queued", "provisioning"}, "state", meta) if err != nil { - return fmt.Errorf( - "Error waiting for device (%s) to become ready: %s", d.Id(), err) + if isForbidden(err) { + // If the device doesn't get to the active state, we can't recover it from here. 
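+ // Clearing the ID tells Terraform the device no longer exists, so a later apply can recreate it.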
+ d.SetId("") + + return errors.New("provisioning time limit exceeded; the Packet team will investigate") + } + return err } return resourcePacketDeviceRead(d, meta) @@ -170,10 +171,17 @@ func resourcePacketDeviceCreate(d *schema.ResourceData, meta interface{}) error func resourcePacketDeviceRead(d *schema.ResourceData, meta interface{}) error { client := meta.(*packngo.Client) - // Retrieve the device properties for updating the state device, _, err := client.Devices.Get(d.Id()) if err != nil { - return fmt.Errorf("Error retrieving device: %s", err) + err = friendlyError(err) + + // If the device somehow already destroyed, mark as succesfully gone. + if isNotFound(err) { + d.SetId("") + return nil + } + + return err } d.Set("name", device.Hostname) @@ -186,35 +194,36 @@ func resourcePacketDeviceRead(d *schema.ResourceData, meta interface{}) error { d.Set("created", device.Created) d.Set("updated", device.Updated) - tags := make([]string, 0) + tags := make([]string, 0, len(device.Tags)) for _, tag := range device.Tags { tags = append(tags, tag) } d.Set("tags", tags) - provisionerAddress := "" - - networks := make([]map[string]interface{}, 0, 1) + var ( + host string + networks = make([]map[string]interface{}, 0, 1) + ) for _, ip := range device.Network { - network := make(map[string]interface{}) - network["address"] = ip.Address - network["gateway"] = ip.Gateway - network["family"] = ip.Family - network["cidr"] = ip.Cidr - network["public"] = ip.Public + network := map[string]interface{}{ + "address": ip.Address, + "gateway": ip.Gateway, + "family": ip.Family, + "cidr": ip.Cidr, + "public": ip.Public, + } networks = append(networks, network) + if ip.Family == 4 && ip.Public == true { - provisionerAddress = ip.Address + host = ip.Address } } d.Set("network", networks) - log.Printf("[DEBUG] Provisioner Address set to %v", provisionerAddress) - - if provisionerAddress != "" { + if host != "" { d.SetConnInfo(map[string]string{ "type": "ssh", - "host": provisionerAddress, + "host": host, }) } @@ -224,19 +233,15 @@ func resourcePacketDeviceRead(d *schema.ResourceData, meta interface{}) error { func resourcePacketDeviceUpdate(d *schema.ResourceData, meta interface{}) error { client := meta.(*packngo.Client) - if d.HasChange("locked") && d.Get("locked").(bool) { - _, err := client.Devices.Lock(d.Id()) - - if err != nil { - return fmt.Errorf( - "Error locking device (%s): %s", d.Id(), err) + if d.HasChange("locked") { + var action func(string) (*packngo.Response, error) + if d.Get("locked").(bool) { + action = client.Devices.Lock + } else { + action = client.Devices.Unlock } - } else if d.HasChange("locked") { - _, err := client.Devices.Unlock(d.Id()) - - if err != nil { - return fmt.Errorf( - "Error unlocking device (%s): %s", d.Id(), err) + if _, err := action(d.Id()); err != nil { + return friendlyError(err) } } @@ -246,51 +251,38 @@ func resourcePacketDeviceUpdate(d *schema.ResourceData, meta interface{}) error func resourcePacketDeviceDelete(d *schema.ResourceData, meta interface{}) error { client := meta.(*packngo.Client) - log.Printf("[INFO] Deleting device: %s", d.Id()) if _, err := client.Devices.Delete(d.Id()); err != nil { - return fmt.Errorf("Error deleting device: %s", err) + return friendlyError(err) } return nil } -func WaitForDeviceAttribute( - d *schema.ResourceData, target string, pending []string, attribute string, meta interface{}) (interface{}, error) { - // Wait for the device so we can get the networking attributes - // that show up after a while - log.Printf( - "[INFO] 
Waiting for device (%s) to have %s of %s", - d.Id(), attribute, target) - +func waitForDeviceAttribute(d *schema.ResourceData, target string, pending []string, attribute string, meta interface{}) (interface{}, error) { stateConf := &resource.StateChangeConf{ Pending: pending, - Target: target, + Target: []string{target}, Refresh: newDeviceStateRefreshFunc(d, attribute, meta), Timeout: 60 * time.Minute, Delay: 10 * time.Second, MinTimeout: 3 * time.Second, } - return stateConf.WaitForState() } -func newDeviceStateRefreshFunc( - d *schema.ResourceData, attribute string, meta interface{}) resource.StateRefreshFunc { +func newDeviceStateRefreshFunc(d *schema.ResourceData, attribute string, meta interface{}) resource.StateRefreshFunc { client := meta.(*packngo.Client) + return func() (interface{}, string, error) { - err := resourcePacketDeviceRead(d, meta) - if err != nil { + if err := resourcePacketDeviceRead(d, meta); err != nil { return nil, "", err } - // See if we can access our attribute if attr, ok := d.GetOk(attribute); ok { - // Retrieve the device properties device, _, err := client.Devices.Get(d.Id()) if err != nil { - return nil, "", fmt.Errorf("Error retrieving device: %s", err) + return nil, "", friendlyError(err) } - return &device, attr.(string), nil } @@ -298,19 +290,14 @@ func newDeviceStateRefreshFunc( } } -// Powers on the device and waits for it to be active +// powerOnAndWait powers on the device and waits for it to be active. func powerOnAndWait(d *schema.ResourceData, meta interface{}) error { client := meta.(*packngo.Client) _, err := client.Devices.PowerOn(d.Id()) if err != nil { - return err + return friendlyError(err) } - // Wait for power on - _, err = WaitForDeviceAttribute(d, "active", []string{"off"}, "state", client) - if err != nil { - return err - } - - return nil + _, err = waitForDeviceAttribute(d, "active", []string{"off"}, "state", client) + return err } diff --git a/builtin/providers/packet/resource_packet_project.go b/builtin/providers/packet/resource_packet_project.go index e41ef1381a..05c739b7aa 100644 --- a/builtin/providers/packet/resource_packet_project.go +++ b/builtin/providers/packet/resource_packet_project.go @@ -1,10 +1,6 @@ package packet import ( - "fmt" - "log" - "strings" - "github.com/hashicorp/terraform/helper/schema" "github.com/packethost/packngo" ) @@ -53,14 +49,12 @@ func resourcePacketProjectCreate(d *schema.ResourceData, meta interface{}) error PaymentMethod: d.Get("payment_method").(string), } - log.Printf("[DEBUG] Project create configuration: %#v", createRequest) project, _, err := client.Projects.Create(createRequest) if err != nil { - return fmt.Errorf("Error creating Project: %s", err) + return friendlyError(err) } d.SetId(project.ID) - log.Printf("[INFO] Project created: %s", project.ID) return resourcePacketProjectRead(d, meta) } @@ -70,14 +64,16 @@ func resourcePacketProjectRead(d *schema.ResourceData, meta interface{}) error { key, _, err := client.Projects.Get(d.Id()) if err != nil { - // If the project somehow already destroyed, mark as - // succesfully gone - if strings.Contains(err.Error(), "404") { + err = friendlyError(err) + + // If the project was somehow already destroyed, mark it as successfully gone.
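+ // isNotFound only matches 404 responses that friendlyError wrapped above.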
+ if isNotFound(err) { d.SetId("") + return nil } - return fmt.Errorf("Error retrieving Project: %s", err) + return err } d.Set("id", key.ID) @@ -100,10 +96,9 @@ func resourcePacketProjectUpdate(d *schema.ResourceData, meta interface{}) error updateRequest.PaymentMethod = attr.(string) } - log.Printf("[DEBUG] Project update: %#v", d.Get("id")) _, _, err := client.Projects.Update(updateRequest) if err != nil { - return fmt.Errorf("Failed to update Project: %s", err) + return friendlyError(err) } return resourcePacketProjectRead(d, meta) @@ -112,10 +107,9 @@ func resourcePacketProjectUpdate(d *schema.ResourceData, meta interface{}) error func resourcePacketProjectDelete(d *schema.ResourceData, meta interface{}) error { client := meta.(*packngo.Client) - log.Printf("[INFO] Deleting Project: %s", d.Id()) _, err := client.Projects.Delete(d.Id()) if err != nil { - return fmt.Errorf("Error deleting SSH key: %s", err) + return friendlyError(err) } d.SetId("") diff --git a/builtin/providers/packet/resource_packet_project_test.go b/builtin/providers/packet/resource_packet_project_test.go index b0179cfbec..ff1b45f7c6 100644 --- a/builtin/providers/packet/resource_packet_project_test.go +++ b/builtin/providers/packet/resource_packet_project_test.go @@ -37,11 +37,8 @@ func testAccCheckPacketProjectDestroy(s *terraform.State) error { if rs.Type != "packet_project" { continue } - - _, _, err := client.Projects.Get(rs.Primary.ID) - - if err == nil { - fmt.Errorf("Project cstill exists") + if _, _, err := client.Projects.Get(rs.Primary.ID); err == nil { + return fmt.Errorf("Project still exists") } } @@ -50,11 +47,9 @@ func testAccCheckPacketProjectAttributes(project *packngo.Project) resource.TestCheckFunc { return func(s *terraform.State) error { - if project.Name != "foobar" { return fmt.Errorf("Bad name: %s", project.Name) } - return nil } } @@ -62,11 +57,9 @@ func testAccCheckPacketProjectExists(n string, project *packngo.Project) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] - if !ok { return fmt.Errorf("Not found: %s", n) } - if rs.Primary.ID == "" { return fmt.Errorf("No Record ID is set") } @@ -74,11 +67,9 @@ func testAccCheckPacketProjectExists(n string, project *packngo.Project) resourc client := testAccProvider.Meta().(*packngo.Client) foundProject, _, err := client.Projects.Get(rs.Primary.ID) - if err != nil { return err } - if foundProject.ID != rs.Primary.ID { return fmt.Errorf("Record not found: %v - %v", rs.Primary.ID, foundProject) } diff --git a/builtin/providers/packet/resource_packet_ssh_key.go b/builtin/providers/packet/resource_packet_ssh_key.go index 95e04bd8ca..a70ed78a28 100644 --- a/builtin/providers/packet/resource_packet_ssh_key.go +++ b/builtin/providers/packet/resource_packet_ssh_key.go @@ -1,10 +1,6 @@ package packet import ( - "fmt" - "log" - "strings" - "github.com/hashicorp/terraform/helper/schema" "github.com/packethost/packngo" ) @@ -59,14 +55,12 @@ func resourcePacketSSHKeyCreate(d *schema.ResourceData, meta interface{}) error Key: d.Get("public_key").(string), } - log.Printf("[DEBUG] SSH Key create configuration: %#v", createRequest) key, _, err := client.SSHKeys.Create(createRequest) if err != nil { - return fmt.Errorf("Error creating SSH Key: %s", err) + return friendlyError(err) } d.SetId(key.ID) - log.Printf("[INFO] SSH Key: %s", key.ID) return 
resourcePacketSSHKeyRead(d, meta) } @@ -76,14 +70,16 @@ func resourcePacketSSHKeyRead(d *schema.ResourceData, meta interface{}) error { key, _, err := client.SSHKeys.Get(d.Id()) if err != nil { + err = friendlyError(err) + // If the key is somehow already destroyed, mark as // succesfully gone - if strings.Contains(err.Error(), "404") { + if isNotFound(err) { d.SetId("") return nil } - return fmt.Errorf("Error retrieving SSH key: %s", err) + return err } d.Set("id", key.ID) @@ -105,10 +101,9 @@ func resourcePacketSSHKeyUpdate(d *schema.ResourceData, meta interface{}) error Key: d.Get("public_key").(string), } - log.Printf("[DEBUG] SSH key update: %#v", d.Get("id")) _, _, err := client.SSHKeys.Update(updateRequest) if err != nil { - return fmt.Errorf("Failed to update SSH key: %s", err) + return friendlyError(err) } return resourcePacketSSHKeyRead(d, meta) @@ -117,10 +112,9 @@ func resourcePacketSSHKeyUpdate(d *schema.ResourceData, meta interface{}) error func resourcePacketSSHKeyDelete(d *schema.ResourceData, meta interface{}) error { client := meta.(*packngo.Client) - log.Printf("[INFO] Deleting SSH key: %s", d.Id()) _, err := client.SSHKeys.Delete(d.Id()) if err != nil { - return fmt.Errorf("Error deleting SSH key: %s", err) + return friendlyError(err) } d.SetId("") diff --git a/builtin/providers/packet/resource_packet_ssh_key_test.go b/builtin/providers/packet/resource_packet_ssh_key_test.go index 765086d4fa..43cd4a54b0 100644 --- a/builtin/providers/packet/resource_packet_ssh_key_test.go +++ b/builtin/providers/packet/resource_packet_ssh_key_test.go @@ -40,11 +40,8 @@ func testAccCheckPacketSSHKeyDestroy(s *terraform.State) error { if rs.Type != "packet_ssh_key" { continue } - - _, _, err := client.SSHKeys.Get(rs.Primary.ID) - - if err == nil { - fmt.Errorf("SSH key still exists") + if _, _, err := client.SSHKeys.Get(rs.Primary.ID); err == nil { + return fmt.Errorf("SSH key still exists") } } @@ -53,11 +50,9 @@ func testAccCheckPacketSSHKeyDestroy(s *terraform.State) error { func testAccCheckPacketSSHKeyAttributes(key *packngo.SSHKey) resource.TestCheckFunc { return func(s *terraform.State) error { - if key.Label != "foobar" { return fmt.Errorf("Bad name: %s", key.Label) } - return nil } } @@ -65,11 +60,9 @@ func testAccCheckPacketSSHKeyAttributes(key *packngo.SSHKey) resource.TestCheckF func testAccCheckPacketSSHKeyExists(n string, key *packngo.SSHKey) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] - if !ok { return fmt.Errorf("Not found: %s", n) } - if rs.Primary.ID == "" { return fmt.Errorf("No Record ID is set") } @@ -77,11 +70,9 @@ func testAccCheckPacketSSHKeyExists(n string, key *packngo.SSHKey) resource.Test client := testAccProvider.Meta().(*packngo.Client) foundKey, _, err := client.SSHKeys.Get(rs.Primary.ID) - if err != nil { return err } - if foundKey.ID != rs.Primary.ID { return fmt.Errorf("SSh Key not found: %v - %v", rs.Primary.ID, foundKey) } diff --git a/builtin/providers/postgresql/config.go b/builtin/providers/postgresql/config.go new file mode 100644 index 0000000000..8bf7b2daa5 --- /dev/null +++ b/builtin/providers/postgresql/config.go @@ -0,0 +1,44 @@ +package postgresql + +import ( + "database/sql" + "fmt" + + _ "github.com/lib/pq" //PostgreSQL db +) + +// Config - provider config +type Config struct { + Host string + Port int + Username string + Password string +} + +// Client struct holding connection string +type Client struct { + username string + connStr string +} + +//NewClient returns new client 
config +func (c *Config) NewClient() (*Client, error) { + connStr := fmt.Sprintf("host=%s port=%d user=%s password=%s dbname=postgres", c.Host, c.Port, c.Username, c.Password) + + client := Client{ + connStr: connStr, + username: c.Username, + } + + return &client, nil +} + +//Connect will manually connect/disconnect to prevent a large number of db connections being made +func (c *Client) Connect() (*sql.DB, error) { + db, err := sql.Open("postgres", c.connStr) + if err != nil { + return nil, fmt.Errorf("Error connecting to postgresql server: %s", err) + } + + return db, nil +} diff --git a/builtin/providers/postgresql/provider.go b/builtin/providers/postgresql/provider.go new file mode 100644 index 0000000000..c048ec3ece --- /dev/null +++ b/builtin/providers/postgresql/provider.go @@ -0,0 +1,63 @@ +package postgresql + +import ( + "fmt" + + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/terraform" +) + +// Provider returns a terraform.ResourceProvider. +func Provider() terraform.ResourceProvider { + return &schema.Provider{ + Schema: map[string]*schema.Schema{ + "host": &schema.Schema{ + Type: schema.TypeString, + Required: true, + DefaultFunc: schema.EnvDefaultFunc("POSTGRESQL_HOST", nil), + Description: "The postgresql server address", + }, + "port": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + Default: 5432, + Description: "The postgresql server port", + }, + "username": &schema.Schema{ + Type: schema.TypeString, + Required: true, + DefaultFunc: schema.EnvDefaultFunc("POSTGRESQL_USERNAME", nil), + Description: "Username for postgresql server connection", + }, + "password": &schema.Schema{ + Type: schema.TypeString, + Required: true, + DefaultFunc: schema.EnvDefaultFunc("POSTGRESQL_PASSWORD", nil), + Description: "Password for postgresql server connection", + }, + }, + + ResourcesMap: map[string]*schema.Resource{ + "postgresql_database": resourcePostgresqlDatabase(), + "postgresql_role": resourcePostgresqlRole(), + }, + + ConfigureFunc: providerConfigure, + } +} + +func providerConfigure(d *schema.ResourceData) (interface{}, error) { + config := Config{ + Host: d.Get("host").(string), + Port: d.Get("port").(int), + Username: d.Get("username").(string), + Password: d.Get("password").(string), + } + + client, err := config.NewClient() + if err != nil { + return nil, fmt.Errorf("Error initializing Postgresql client: %s", err) + } + + return client, nil +} diff --git a/builtin/providers/postgresql/provider_test.go b/builtin/providers/postgresql/provider_test.go new file mode 100644 index 0000000000..19c65cb38b --- /dev/null +++ b/builtin/providers/postgresql/provider_test.go @@ -0,0 +1,41 @@ +package postgresql + +import ( + "os" + "testing" + + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/terraform" +) + +var testAccProviders map[string]terraform.ResourceProvider +var testAccProvider *schema.Provider + +func init() { + testAccProvider = Provider().(*schema.Provider) + testAccProviders = map[string]terraform.ResourceProvider{ + "postgresql": testAccProvider, + } +} + +func TestProvider(t *testing.T) { + if err := Provider().(*schema.Provider).InternalValidate(); err != nil { + t.Fatalf("err: %s", err) + } +} + +func TestProvider_impl(t *testing.T) { + var _ terraform.ResourceProvider = Provider() +} + +func testAccPreCheck(t *testing.T) { + if v := os.Getenv("POSTGRESQL_HOST"); v == "" { + t.Fatal("POSTGRESQL_HOST must be set for acceptance tests") + } + if v := os.Getenv("POSTGRESQL_USERNAME"); v == "" { + 
t.Fatal("POSTGRESQL_USERNAME must be set for acceptance tests") + } + if v := os.Getenv("POSTGRESQL_PASSWORD"); v == "" { + t.Fatal("POSTGRESQL_PASSWORD must be set for acceptance tests") + } +} diff --git a/builtin/providers/postgresql/resource_postgresql_database.go b/builtin/providers/postgresql/resource_postgresql_database.go new file mode 100644 index 0000000000..bf01ae42ea --- /dev/null +++ b/builtin/providers/postgresql/resource_postgresql_database.go @@ -0,0 +1,160 @@ +package postgresql + +import ( + "database/sql" + "fmt" + "strings" + + "github.com/hashicorp/terraform/helper/schema" + "github.com/lib/pq" +) + +func resourcePostgresqlDatabase() *schema.Resource { + return &schema.Resource{ + Create: resourcePostgresqlDatabaseCreate, + Read: resourcePostgresqlDatabaseRead, + Update: resourcePostgresqlDatabaseUpdate, + Delete: resourcePostgresqlDatabaseDelete, + + Schema: map[string]*schema.Schema{ + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "owner": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: false, + Computed: true, + }, + }, + } +} + +func resourcePostgresqlDatabaseCreate(d *schema.ResourceData, meta interface{}) error { + client := meta.(*Client) + conn, err := client.Connect() + if err != nil { + return err + } + defer conn.Close() + + dbName := d.Get("name").(string) + dbOwner := d.Get("owner").(string) + connUsername := client.username + + var dbOwnerCfg string + if dbOwner != "" { + dbOwnerCfg = fmt.Sprintf("WITH OWNER=%s", pq.QuoteIdentifier(dbOwner)) + } else { + dbOwnerCfg = "" + } + + //needed in order to set the owner of the db if the connection user is not a superuser + err = grantRoleMembership(conn, dbOwner, connUsername) + if err != nil { + return err + } + + query := fmt.Sprintf("CREATE DATABASE %s %s", pq.QuoteIdentifier(dbName), dbOwnerCfg) + _, err = conn.Query(query) + if err != nil { + return fmt.Errorf("Error creating postgresql database %s: %s", dbName, err) + } + + d.SetId(dbName) + + return nil +} + +func resourcePostgresqlDatabaseDelete(d *schema.ResourceData, meta interface{}) error { + client := meta.(*Client) + conn, err := client.Connect() + if err != nil { + return err + } + defer conn.Close() + + dbName := d.Get("name").(string) + connUsername := client.username + dbOwner := d.Get("owner").(string) + //needed in order to set the owner of the db if the connection user is not a superuser + err = grantRoleMembership(conn, dbOwner, connUsername) + if err != nil { + return err + } + + query := fmt.Sprintf("DROP DATABASE %s", pq.QuoteIdentifier(dbName)) + _, err = conn.Query(query) + if err != nil { + return err + } + + d.SetId("") + + return nil +} + +func resourcePostgresqlDatabaseRead(d *schema.ResourceData, meta interface{}) error { + client := meta.(*Client) + conn, err := client.Connect() + if err != nil { + return err + } + defer conn.Close() + + dbName := d.Get("name").(string) + + var owner string + err = conn.QueryRow("SELECT pg_catalog.pg_get_userbyid(d.datdba) from pg_database d WHERE datname=$1", dbName).Scan(&owner) + switch { + case err == sql.ErrNoRows: + d.SetId("") + return nil + case err != nil: + return fmt.Errorf("Error reading info about database: %s", err) + default: + d.Set("owner", owner) + return nil + } +} + +func resourcePostgresqlDatabaseUpdate(d *schema.ResourceData, meta interface{}) error { + client := meta.(*Client) + conn, err := client.Connect() + if err != nil { + return err + } + defer conn.Close() + + dbName := d.Get("name").(string) + + 
if d.HasChange("owner") { + owner := d.Get("owner").(string) + if owner != "" { + query := fmt.Sprintf("ALTER DATABASE %s OWNER TO %s", pq.QuoteIdentifier(dbName), pq.QuoteIdentifier(owner)) + _, err := conn.Query(query) + if err != nil { + return fmt.Errorf("Error updating owner for database: %s", err) + } + } + } + + return resourcePostgresqlDatabaseRead(d, meta) +} + +func grantRoleMembership(conn *sql.DB, dbOwner string, connUsername string) error { + if dbOwner != "" && dbOwner != connUsername { + query := fmt.Sprintf("GRANT %s TO %s", pq.QuoteIdentifier(dbOwner), pq.QuoteIdentifier(connUsername)) + _, err := conn.Query(query) + if err != nil { + //is already member or role + if strings.Contains(err.Error(), "duplicate key value violates unique constraint") { + return nil + } + return fmt.Errorf("Error granting membership: %s", err) + } + } + return nil +} diff --git a/builtin/providers/postgresql/resource_postgresql_database_test.go b/builtin/providers/postgresql/resource_postgresql_database_test.go new file mode 100644 index 0000000000..35d2b271c9 --- /dev/null +++ b/builtin/providers/postgresql/resource_postgresql_database_test.go @@ -0,0 +1,144 @@ +package postgresql + +import ( + "database/sql" + "fmt" + "testing" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccPostgresqlDatabase_Basic(t *testing.T) { + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckPostgresqlDatabaseDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccPostgresqlDatabaseConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckPostgresqlDatabaseExists("postgresql_database.mydb", "myrole"), + resource.TestCheckResourceAttr( + "postgresql_database.mydb", "name", "mydb"), + resource.TestCheckResourceAttr( + "postgresql_database.mydb", "owner", "myrole"), + ), + }, + }, + }) +} + +func TestAccPostgresqlDatabase_DefaultOwner(t *testing.T) { + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckPostgresqlDatabaseDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccPostgresqlDatabaseConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckPostgresqlDatabaseExists("postgresql_database.mydb_default_owner", ""), + resource.TestCheckResourceAttr( + "postgresql_database.mydb_default_owner", "name", "mydb_default_owner"), + ), + }, + }, + }) +} + +func testAccCheckPostgresqlDatabaseDestroy(s *terraform.State) error { + client := testAccProvider.Meta().(*Client) + + for _, rs := range s.RootModule().Resources { + if rs.Type != "postgresql_database" { + continue + } + + exists, err := checkDatabaseExists(client, rs.Primary.ID) + + if err != nil { + return fmt.Errorf("Error checking db %s", err) + } + + if exists { + return fmt.Errorf("Db still exists after destroy") + } + } + + return nil +} + +func testAccCheckPostgresqlDatabaseExists(n string, owner string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Resource not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + actualOwner := rs.Primary.Attributes["owner"] + if actualOwner != owner { + return fmt.Errorf("Wrong owner for db expected %s got %s", owner, actualOwner) + } + + client := testAccProvider.Meta().(*Client) + exists, err := 
checkDatabaseExists(client, rs.Primary.ID) + + if err != nil { + return fmt.Errorf("Error checking db %s", err) + } + + if !exists { + return fmt.Errorf("Db not found") + } + + return nil + } +} + +func checkDatabaseExists(client *Client, dbName string) (bool, error) { + conn, err := client.Connect() + if err != nil { + return false, err + } + defer conn.Close() + + var _rez int + err = conn.QueryRow("SELECT 1 from pg_database d WHERE datname=$1", dbName).Scan(&_rez) + switch { + case err == sql.ErrNoRows: + return false, nil + case err != nil: + return false, fmt.Errorf("Error reading info about database: %s", err) + default: + return true, nil + } +} + +var testAccPostgresqlDatabaseConfig = ` +resource "postgresql_role" "myrole" { + name = "myrole" + login = true +} + +resource "postgresql_database" "mydb" { + name = "mydb" + owner = "${postgresql_role.myrole.name}" +} + +resource "postgresql_database" "mydb2" { + name = "mydb2" + owner = "${postgresql_role.myrole.name}" +} + +resource "postgresql_database" "mydb_default_owner" { + name = "mydb_default_owner" +} + +` diff --git a/builtin/providers/postgresql/resource_postgresql_role.go b/builtin/providers/postgresql/resource_postgresql_role.go new file mode 100644 index 0000000000..104b5c9d01 --- /dev/null +++ b/builtin/providers/postgresql/resource_postgresql_role.go @@ -0,0 +1,179 @@ +package postgresql + +import ( + "database/sql" + "fmt" + + "github.com/hashicorp/terraform/helper/schema" + "github.com/lib/pq" +) + +func resourcePostgresqlRole() *schema.Resource { + return &schema.Resource{ + Create: resourcePostgresqlRoleCreate, + Read: resourcePostgresqlRoleRead, + Update: resourcePostgresqlRoleUpdate, + Delete: resourcePostgresqlRoleDelete, + + Schema: map[string]*schema.Schema{ + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "login": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + ForceNew: false, + Default: false, + }, + "password": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: false, + }, + "encrypted": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + ForceNew: false, + Default: false, + }, + }, + } +} + +func resourcePostgresqlRoleCreate(d *schema.ResourceData, meta interface{}) error { + client := meta.(*Client) + conn, err := client.Connect() + if err != nil { + return err + } + defer conn.Close() + + roleName := d.Get("name").(string) + loginAttr := getLoginStr(d.Get("login").(bool)) + password := d.Get("password").(string) + + encryptedCfg := getEncryptedStr(d.Get("encrypted").(bool)) + + query := fmt.Sprintf("CREATE ROLE %s %s %s PASSWORD '%s'", pq.QuoteIdentifier(roleName), loginAttr, encryptedCfg, password) + _, err = conn.Query(query) + if err != nil { + return fmt.Errorf("Error creating role: %s", err) + } + + d.SetId(roleName) + + return nil +} + +func resourcePostgresqlRoleDelete(d *schema.ResourceData, meta interface{}) error { + client := meta.(*Client) + conn, err := client.Connect() + if err != nil { + return err + } + defer conn.Close() + + roleName := d.Get("name").(string) + + query := fmt.Sprintf("DROP ROLE %s", pq.QuoteIdentifier(roleName)) + _, err = conn.Query(query) + if err != nil { + return err + } + + d.SetId("") + + return nil +} + +func resourcePostgresqlRoleRead(d *schema.ResourceData, meta interface{}) error { + client := meta.(*Client) + conn, err := client.Connect() + if err != nil { + return err + } + defer conn.Close() + + roleName := d.Get("name").(string) + + var canLogin bool + err = 
conn.QueryRow("select rolcanlogin from pg_roles where rolname=$1", roleName).Scan(&canLogin) + switch { + case err == sql.ErrNoRows: + d.SetId("") + return nil + case err != nil: + return fmt.Errorf("Error reading info about role: %s", err) + default: + d.Set("login", canLogin) + return nil + } +} + +func resourcePostgresqlRoleUpdate(d *schema.ResourceData, meta interface{}) error { + client := meta.(*Client) + conn, err := client.Connect() + if err != nil { + return err + } + defer conn.Close() + + d.Partial(true) + + roleName := d.Get("name").(string) + + if d.HasChange("login") { + loginAttr := getLoginStr(d.Get("login").(bool)) + query := fmt.Sprintf("ALTER ROLE %s %s", pq.QuoteIdentifier(roleName), pq.QuoteIdentifier(loginAttr)) + _, err := conn.Query(query) + if err != nil { + return fmt.Errorf("Error updating login attribute for role: %s", err) + } + + d.SetPartial("login") + } + + password := d.Get("password").(string) + if d.HasChange("password") { + encryptedCfg := getEncryptedStr(d.Get("encrypted").(bool)) + + query := fmt.Sprintf("ALTER ROLE %s %s PASSWORD '%s'", pq.QuoteIdentifier(roleName), encryptedCfg, password) + _, err := conn.Query(query) + if err != nil { + return fmt.Errorf("Error updating password attribute for role: %s", err) + } + + d.SetPartial("password") + } + + if d.HasChange("encrypted") { + encryptedCfg := getEncryptedStr(d.Get("encrypted").(bool)) + + query := fmt.Sprintf("ALTER ROLE %s %s PASSWORD '%s'", pq.QuoteIdentifier(roleName), encryptedCfg, password) + _, err := conn.Query(query) + if err != nil { + return fmt.Errorf("Error updating encrypted attribute for role: %s", err) + } + + d.SetPartial("encrypted") + } + + d.Partial(false) + return resourcePostgresqlRoleRead(d, meta) +} + +func getLoginStr(canLogin bool) string { + if canLogin { + return "login" + } + return "nologin" +} + +func getEncryptedStr(isEncrypted bool) string { + if isEncrypted { + return "encrypted" + } + return "unencrypted" +} diff --git a/builtin/providers/postgresql/resource_postgresql_role_test.go b/builtin/providers/postgresql/resource_postgresql_role_test.go new file mode 100644 index 0000000000..0839b2ef6c --- /dev/null +++ b/builtin/providers/postgresql/resource_postgresql_role_test.go @@ -0,0 +1,132 @@ +package postgresql + +import ( + "database/sql" + "fmt" + "testing" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccPostgresqlRole_Basic(t *testing.T) { + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckPostgresqlRoleDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccPostgresqlRoleConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckPostgresqlRoleExists("postgresql_role.myrole2", "true"), + resource.TestCheckResourceAttr( + "postgresql_role.myrole2", "name", "myrole2"), + resource.TestCheckResourceAttr( + "postgresql_role.myrole2", "login", "true"), + ), + }, + }, + }) +} + +func testAccCheckPostgresqlRoleDestroy(s *terraform.State) error { + client := testAccProvider.Meta().(*Client) + + for _, rs := range s.RootModule().Resources { + if rs.Type != "postgresql_role" { + continue + } + + exists, err := checkRoleExists(client, rs.Primary.ID) + + if err != nil { + return fmt.Errorf("Error checking role %s", err) + } + + if exists { + return fmt.Errorf("Role still exists after destroy") + } + } + + return nil +} + +func testAccCheckPostgresqlRoleExists(n string, canLogin string) 
resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Resource not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + actualCanLogin := rs.Primary.Attributes["login"] + if actualCanLogin != canLogin { + return fmt.Errorf("Wrong value for login expected %s got %s", canLogin, actualCanLogin) + } + + client := testAccProvider.Meta().(*Client) + exists, err := checkRoleExists(client, rs.Primary.ID) + + if err != nil { + return fmt.Errorf("Error checking role %s", err) + } + + if !exists { + return fmt.Errorf("Role not found") + } + + return nil + } +} + +func checkRoleExists(client *Client, roleName string) (bool, error) { + conn, err := client.Connect() + if err != nil { + return false, err + } + defer conn.Close() + + var _rez int + err = conn.QueryRow("SELECT 1 from pg_roles d WHERE rolname=$1", roleName).Scan(&_rez) + switch { + case err == sql.ErrNoRows: + return false, nil + case err != nil: + return false, fmt.Errorf("Error reading info about role: %s", err) + default: + return true, nil + } +} + +var testAccPostgresqlRoleConfig = ` +resource "postgresql_role" "myrole2" { + name = "myrole2" + login = true +} + +resource "postgresql_role" "role_with_pwd" { + name = "role_with_pwd" + login = true + password = "mypass" +} + +resource "postgresql_role" "role_with_pwd_encr" { + name = "role_with_pwd_encr" + login = true + password = "mypass" + encrypted = true +} + +resource "postgresql_role" "role_with_pwd_no_login" { + name = "role_with_pwd_no_login" + password = "mypass" +} + +resource "postgresql_role" "role_simple" { + name = "role_simple" +} +` diff --git a/builtin/providers/rundeck/resource_job.go b/builtin/providers/rundeck/resource_job.go index c9af25b0b7..5ef863bd24 100644 --- a/builtin/providers/rundeck/resource_job.go +++ b/builtin/providers/rundeck/resource_job.go @@ -463,7 +463,14 @@ func jobToResourceData(job *rundeck.JobDetail, d *schema.ResourceData) error { d.Set("id", job.ID) d.Set("name", job.Name) d.Set("group_name", job.GroupName) - d.Set("project_name", job.ProjectName) + + // The project name is not consistently returned in all rundeck versions, + // so we'll only update it if it's set. Jobs can't move between projects + // anyway, so this is harmless. 
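+ // An empty ProjectName means the server did not report one, not that the job has no project.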
+ if job.ProjectName != "" { + d.Set("project_name", job.ProjectName) + } + d.Set("description", job.Description) d.Set("log_level", job.LogLevel) d.Set("allow_concurrent_executions", job.AllowConcurrentExecutions) diff --git a/builtin/providers/template/provider.go b/builtin/providers/template/provider.go index 7513341bc1..1ebf3ae22a 100644 --- a/builtin/providers/template/provider.go +++ b/builtin/providers/template/provider.go @@ -8,7 +8,8 @@ import ( func Provider() terraform.ResourceProvider { return &schema.Provider{ ResourcesMap: map[string]*schema.Resource{ - "template_file": resource(), + "template_file": resourceFile(), + "template_cloudinit_config": resourceCloudinitConfig(), }, } } diff --git a/builtin/providers/template/resource_cloudinit_config.go b/builtin/providers/template/resource_cloudinit_config.go new file mode 100644 index 0000000000..78efcecf46 --- /dev/null +++ b/builtin/providers/template/resource_cloudinit_config.go @@ -0,0 +1,228 @@ +package template + +import ( + "bytes" + "compress/gzip" + "encoding/base64" + "fmt" + "io" + "net/textproto" + "strconv" + + "github.com/hashicorp/terraform/helper/hashcode" + "github.com/hashicorp/terraform/helper/schema" + + "github.com/sthulb/mime/multipart" +) + +func resourceCloudinitConfig() *schema.Resource { + return &schema.Resource{ + Create: resourceCloudinitConfigCreate, + Delete: resourceCloudinitConfigDelete, + Update: resourceCloudinitConfigCreate, + Exists: resourceCloudinitConfigExists, + Read: resourceCloudinitConfigRead, + + Schema: map[string]*schema.Schema{ + "part": &schema.Schema{ + Type: schema.TypeList, + Required: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "content_type": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ValidateFunc: func(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + + if _, supported := supportedContentTypes[value]; !supported { + errors = append(errors, fmt.Errorf("Part has an unsupported content type: %s", v)) + } + + return + }, + }, + "content": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + "filename": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + "merge_type": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + }, + }, + }, + "gzip": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Default: true, + ForceNew: true, + }, + "base64_encode": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Default: true, + ForceNew: true, + }, + "rendered": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + Description: "rendered cloudinit configuration", + }, + }, + } +} + +func resourceCloudinitConfigCreate(d *schema.ResourceData, meta interface{}) error { + rendered, err := renderCloudinitConfig(d) + if err != nil { + return err + } + + d.Set("rendered", rendered) + d.SetId(strconv.Itoa(hashcode.String(rendered))) + return nil +} + +func resourceCloudinitConfigDelete(d *schema.ResourceData, meta interface{}) error { + d.SetId("") + return nil +} + +func resourceCloudinitConfigExists(d *schema.ResourceData, meta interface{}) (bool, error) { + rendered, err := renderCloudinitConfig(d) + if err != nil { + return false, err + } + + return strconv.Itoa(hashcode.String(rendered)) == d.Id(), nil +} + +func resourceCloudinitConfigRead(d *schema.ResourceData, meta interface{}) error { + return nil +} + +func renderCloudinitConfig(d *schema.ResourceData) (string, error) { + gzipOutput := d.Get("gzip").(bool) + base64Output := 
d.Get("base64_encode").(bool) + + partsValue, hasParts := d.GetOk("part") + if !hasParts { + return "", fmt.Errorf("No parts found in the cloudinit resource declaration") + } + + cloudInitParts := make(cloudInitParts, len(partsValue.([]interface{}))) + for i, v := range partsValue.([]interface{}) { + p := v.(map[string]interface{}) + + part := cloudInitPart{} + if p, ok := p["content_type"]; ok { + part.ContentType = p.(string) + } + if p, ok := p["content"]; ok { + part.Content = p.(string) + } + if p, ok := p["merge_type"]; ok { + part.MergeType = p.(string) + } + if p, ok := p["filename"]; ok { + part.Filename = p.(string) + } + cloudInitParts[i] = part + } + + var buffer bytes.Buffer + + var err error + if gzipOutput { + gzipWriter := gzip.NewWriter(&buffer) + err = renderPartsToWriter(cloudInitParts, gzipWriter) + gzipWriter.Close() + } else { + err = renderPartsToWriter(cloudInitParts, &buffer) + } + if err != nil { + return "", err + } + + output := "" + if base64Output { + output = base64.StdEncoding.EncodeToString(buffer.Bytes()) + } else { + output = buffer.String() + } + + return output, nil +} + +func renderPartsToWriter(parts cloudInitParts, writer io.Writer) error { + mimeWriter := multipart.NewWriter(writer) + defer mimeWriter.Close() + + // we need to set the boundary explictly, otherwise the boundary is random + // and this causes terraform to complain about the resource being different + if err := mimeWriter.SetBoundary("MIMEBOUNDRY"); err != nil { + return err + } + + writer.Write([]byte(fmt.Sprintf("Content-Type: multipart/mixed; boundary=\"%s\"\n", mimeWriter.Boundary()))) + writer.Write([]byte("MIME-Version: 1.0\r\n")) + + for _, part := range parts { + header := textproto.MIMEHeader{} + if part.ContentType == "" { + header.Set("Content-Type", "text/plain") + } else { + header.Set("Content-Type", part.ContentType) + } + + header.Set("MIME-Version", "1.0") + header.Set("Content-Transfer-Encoding", "7bit") + + if part.Filename != "" { + header.Set("Content-Disposition", fmt.Sprintf(`attachment; filename="%s"`, part.Filename)) + } + + if part.MergeType != "" { + header.Set("X-Merge-Type", part.MergeType) + } + + partWriter, err := mimeWriter.CreatePart(header) + if err != nil { + return err + } + + _, err = partWriter.Write([]byte(part.Content)) + if err != nil { + return err + } + } + + return nil +} + +type cloudInitPart struct { + ContentType string + MergeType string + Filename string + Content string +} + +type cloudInitParts []cloudInitPart + +// Support content types as specified by http://cloudinit.readthedocs.org/en/latest/topics/format.html +var supportedContentTypes = map[string]bool{ + "text/x-include-once-url": true, + "text/x-include-url": true, + "text/cloud-config-archive": true, + "text/upstart-job": true, + "text/cloud-config": true, + "text/part-handler": true, + "text/x-shellscript": true, + "text/cloud-boothook": true, +} diff --git a/builtin/providers/template/resource_cloudinit_config_test.go b/builtin/providers/template/resource_cloudinit_config_test.go new file mode 100644 index 0000000000..41cd23214b --- /dev/null +++ b/builtin/providers/template/resource_cloudinit_config_test.go @@ -0,0 +1,133 @@ +package template + +import ( + "testing" + + r "github.com/hashicorp/terraform/helper/resource" +) + +func TestRender(t *testing.T) { + testCases := []struct { + ResourceBlock string + Expected string + }{ + { + `resource "template_cloudinit_config" "foo" { + gzip = false + base64_encode = false + + part { + content_type = "text/x-shellscript" + 
content = "baz" + } + }`, + "Content-Type: multipart/mixed; boundary=\"MIMEBOUNDRY\"\nMIME-Version: 1.0\r\n--MIMEBOUNDRY\r\nContent-Transfer-Encoding: 7bit\r\nContent-Type: text/x-shellscript\r\nMime-Version: 1.0\r\n\r\nbaz\r\n--MIMEBOUNDRY--\r\n", + }, + { + `resource "template_cloudinit_config" "foo" { + gzip = false + base64_encode = false + + part { + content_type = "text/x-shellscript" + content = "baz" + filename = "foobar.sh" + } + }`, + "Content-Type: multipart/mixed; boundary=\"MIMEBOUNDRY\"\nMIME-Version: 1.0\r\n--MIMEBOUNDRY\r\nContent-Disposition: attachment; filename=\"foobar.sh\"\r\nContent-Transfer-Encoding: 7bit\r\nContent-Type: text/x-shellscript\r\nMime-Version: 1.0\r\n\r\nbaz\r\n--MIMEBOUNDRY--\r\n", + }, + { + `resource "template_cloudinit_config" "foo" { + gzip = false + base64_encode = false + + part { + content_type = "text/x-shellscript" + content = "baz" + } + part { + content_type = "text/x-shellscript" + content = "ffbaz" + } + }`, + "Content-Type: multipart/mixed; boundary=\"MIMEBOUNDRY\"\nMIME-Version: 1.0\r\n--MIMEBOUNDRY\r\nContent-Transfer-Encoding: 7bit\r\nContent-Type: text/x-shellscript\r\nMime-Version: 1.0\r\n\r\nbaz\r\n--MIMEBOUNDRY\r\nContent-Transfer-Encoding: 7bit\r\nContent-Type: text/x-shellscript\r\nMime-Version: 1.0\r\n\r\nffbaz\r\n--MIMEBOUNDRY--\r\n", + }, + { + `resource "template_cloudinit_config" "foo" { + gzip = true + base64_encode = false + + part { + content_type = "text/x-shellscript" + content = "baz" + filename = "ah" + } + part { + content_type = "text/x-shellscript" + content = "ffbaz" + } + }`, + "\x1f\x8b\b\x00\x00\tn\x88\x00\xff\xac\xce\xc1J\x031\x10\xc6\xf1{`\xdf!\xe4>VO\u0096^\xb4=xX\x05\xa9\x82\xc7\xd9݉;\x90LB2\x85\xadOo-\x88\x8b\xe2\xadDŽ\x1f\xf3\xfd\xef\x93(\x89\xc2\xfe\x98\xa9\xb5\xf1\x10\x943\x16]E\x9ei\\\xdb>\x1dd\xc4rܸ\xee\xa1\xdb\xdd=\xbd\x03\x00\x00\xff\xffmB\x8c\xeed\x01\x00\x00", + }, + } + + for _, tt := range testCases { + r.Test(t, r.TestCase{ + Providers: testProviders, + Steps: []r.TestStep{ + r.TestStep{ + Config: tt.ResourceBlock, + Check: r.ComposeTestCheckFunc( + r.TestCheckResourceAttr("template_cloudinit_config.foo", "rendered", tt.Expected), + ), + }, + }, + }) + } +} + +func TestCloudConfig_update(t *testing.T) { + r.Test(t, r.TestCase{ + Providers: testProviders, + Steps: []r.TestStep{ + r.TestStep{ + Config: testCloudInitConfig_basic, + Check: r.ComposeTestCheckFunc( + r.TestCheckResourceAttr("template_cloudinit_config.config", "rendered", testCloudInitConfig_basic_expected), + ), + }, + + r.TestStep{ + Config: testCloudInitConfig_update, + Check: r.ComposeTestCheckFunc( + r.TestCheckResourceAttr("template_cloudinit_config.config", "rendered", testCloudInitConfig_update_expected), + ), + }, + }, + }) +} + +var testCloudInitConfig_basic = ` +resource "template_cloudinit_config" "config" { + part { + content_type = "text/x-shellscript" + content = "baz" + } +}` + +var testCloudInitConfig_basic_expected = `Content-Type: multipart/mixed; boundary=\"MIMEBOUNDRY\"\nMIME-Version: 1.0\r\n--MIMEBOUNDRY\r\nContent-Transfer-Encoding: 7bit\r\nContent-Type: text/x-shellscript\r\nMime-Version: 1.0\r\n\r\nbaz\r\n--MIMEBOUNDRY--\r\n` + +var testCloudInitConfig_update = ` +resource "template_cloudinit_config" "config" { + part { + content_type = "text/x-shellscript" + content = "baz" + } + + part { + content_type = "text/x-shellscript" + content = "ffbaz" + } +}` + +var testCloudInitConfig_update_expected = `Content-Type: multipart/mixed; boundary=\"MIMEBOUNDRY\"\nMIME-Version: 
1.0\r\n--MIMEBOUNDRY\r\nContent-Transfer-Encoding: 7bit\r\nContent-Type: text/x-shellscript\r\nMime-Version: 1.0\r\n\r\nbaz\r\n--MIMEBOUNDRY\r\nContent-Transfer-Encoding: 7bit\r\nContent-Type: text/x-shellscript\r\nMime-Version: 1.0\r\n\r\nffbaz\r\n--MIMEBOUNDRY--\r\n` diff --git a/builtin/providers/template/resource.go b/builtin/providers/template/resource_template_file.go similarity index 85% rename from builtin/providers/template/resource.go rename to builtin/providers/template/resource_template_file.go index 8022c064be..554fa18baf 100644 --- a/builtin/providers/template/resource.go +++ b/builtin/providers/template/resource_template_file.go @@ -15,12 +15,12 @@ import ( "github.com/hashicorp/terraform/helper/schema" ) -func resource() *schema.Resource { +func resourceFile() *schema.Resource { return &schema.Resource{ - Create: Create, - Delete: Delete, - Exists: Exists, - Read: Read, + Create: resourceFileCreate, + Delete: resourceFileDelete, + Exists: resourceFileExists, + Read: resourceFileRead, Schema: map[string]*schema.Schema{ "template": &schema.Schema{ @@ -69,8 +69,8 @@ func resource() *schema.Resource { } } -func Create(d *schema.ResourceData, meta interface{}) error { - rendered, err := render(d) +func resourceFileCreate(d *schema.ResourceData, meta interface{}) error { + rendered, err := renderFile(d) if err != nil { return err } @@ -79,13 +79,13 @@ func Create(d *schema.ResourceData, meta interface{}) error { return nil } -func Delete(d *schema.ResourceData, meta interface{}) error { +func resourceFileDelete(d *schema.ResourceData, meta interface{}) error { d.SetId("") return nil } -func Exists(d *schema.ResourceData, meta interface{}) (bool, error) { - rendered, err := render(d) +func resourceFileExists(d *schema.ResourceData, meta interface{}) (bool, error) { + rendered, err := renderFile(d) if err != nil { if _, ok := err.(templateRenderError); ok { log.Printf("[DEBUG] Got error while rendering in Exists: %s", err) @@ -98,7 +98,7 @@ func Exists(d *schema.ResourceData, meta interface{}) (bool, error) { return hash(rendered) == d.Id(), nil } -func Read(d *schema.ResourceData, meta interface{}) error { +func resourceFileRead(d *schema.ResourceData, meta interface{}) error { // Logic is handled in Exists, which only returns true if the rendered // contents haven't changed. That means if we get here there's nothing to // do. 
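(For anyone trying out the `template_cloudinit_config` resource added above, a minimal configuration sketch follows. The names and values are hypothetical; the point is that `part` blocks render in order into one multipart MIME document, `gzip` and `base64_encode` both default to true, and the result is read from the computed `rendered` attribute.)

    resource "template_cloudinit_config" "example" {
      gzip          = false
      base64_encode = false

      part {
        content_type = "text/cloud-config"
        content      = "packages: [nginx]"
      }

      part {
        content_type = "text/x-shellscript"
        filename     = "setup.sh"
        content      = "#!/bin/bash\necho setup"
      }
    }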
@@ -107,7 +107,7 @@ func Read(d *schema.ResourceData, meta interface{}) error { type templateRenderError error -func render(d *schema.ResourceData) (string, error) { +func renderFile(d *schema.ResourceData) (string, error) { template := d.Get("template").(string) filename := d.Get("filename").(string) vars := d.Get("vars").(map[string]interface{}) @@ -155,7 +155,7 @@ func execute(s string, vars map[string]interface{}) (string, error) { cfg := lang.EvalConfig{ GlobalScope: &ast.BasicScope{ VarMap: varmap, - FuncMap: config.Funcs, + FuncMap: config.Funcs(), }, } diff --git a/builtin/providers/template/resource_test.go b/builtin/providers/template/resource_template_file_test.go similarity index 75% rename from builtin/providers/template/resource_test.go rename to builtin/providers/template/resource_template_file_test.go index 91882d9d37..9f54858dde 100644 --- a/builtin/providers/template/resource_test.go +++ b/builtin/providers/template/resource_template_file_test.go @@ -2,6 +2,7 @@ package template import ( "fmt" + "sync" "testing" r "github.com/hashicorp/terraform/helper/resource" @@ -76,6 +77,29 @@ func TestTemplateVariableChange(t *testing.T) { }) } +// This test covers a panic due to config.Funcs formerly being a +// shared map, which caused multiple template_file resources to race +// while accessing it in parallel during their lang.Eval() runs. +// +// Before the fix, this test fails under `go test -race` +func TestTemplateSharedMemoryRace(t *testing.T) { + var wg sync.WaitGroup + for i := 0; i < 100; i++ { + wg.Add(1) // count each goroutine before it starts so Wait cannot return early + go func() { + defer wg.Done() + out, err := execute("don't panic!", map[string]interface{}{}) + if err != nil { + // Errorf rather than Fatalf: Fatalf must not be called from a non-test goroutine + t.Errorf("err: %s", err) + return + } + if out != "don't panic!" { + t.Errorf("bad output: %s", out) + } + }() + } + wg.Wait() +} + +func testTemplateConfig(template, vars string) string { + return fmt.Sprintf(` resource "template_file" "t0" { diff --git a/builtin/providers/tls/provider.go b/builtin/providers/tls/provider.go index 69dfa0dedf..e6c1d61980 100644 --- a/builtin/providers/tls/provider.go +++ b/builtin/providers/tls/provider.go @@ -13,9 +13,10 @@ import ( func Provider() terraform.ResourceProvider { return &schema.Provider{ ResourcesMap: map[string]*schema.Resource{ - "tls_private_key": resourcePrivateKey(), - "tls_self_signed_cert": resourceSelfSignedCert(), - "tls_cert_request": resourceCertRequest(), + "tls_private_key": resourcePrivateKey(), + "tls_locally_signed_cert": resourceLocallySignedCert(), + "tls_self_signed_cert": resourceSelfSignedCert(), + "tls_cert_request": resourceCertRequest(), }, } } diff --git a/builtin/providers/tls/provider_test.go b/builtin/providers/tls/provider_test.go index 31b014733e..7dc7af0d2f 100644 --- a/builtin/providers/tls/provider_test.go +++ b/builtin/providers/tls/provider_test.go @@ -34,3 +34,62 @@ DrUJcPbKUfF4VBqmmwwkpwT938Hr/iCcS6kE3hqXiN9a5XJb4vnk2FdZNPS9hf2J rpxCHbX0xSJh0s8j7exRHMF8W16DHjjkc265YdWPXWo= -----END RSA PRIVATE KEY----- ` + +var testCertRequest = ` +-----BEGIN CERTIFICATE REQUEST----- +MIICYDCCAckCAQAwgcUxFDASBgNVBAMMC2V4YW1wbGUuY29tMQswCQYDVQQGEwJV +UzELMAkGA1UECAwCQ0ExFjAUBgNVBAcMDVBpcmF0ZSBIYXJib3IxGTAXBgNVBAkM +EDU4NzkgQ290dG9uIExpbmsxEzARBgNVBBEMCjk1NTU5LTEyMjcxFTATBgNVBAoM +DEV4YW1wbGUsIEluYzEoMCYGA1UECwwfRGVwYXJ0bWVudCBvZiBUZXJyYWZvcm0g +VGVzdGluZzEKMAgGA1UEBRMBMjCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEA +qLFq7Tpmlt0uDCCn5bA/oTj4v16/pXXaD+Ice2bS4rBH2UUM2gca5U4j8QCxrIxh +91mBvloE4VS5xrIGotAwoMgwK3E2md5kzQJToDve/hm8JNOcms+OAOjfjajPc40e 
++ue9roT8VjWGU0wz7ttQNuao56GXYr5kOpcfiZMs7RcCAwEAAaBaMFgGCSqGSIb3 +DQEJDjFLMEkwLwYDVR0RBCgwJoILZXhhbXBsZS5jb22CC2V4YW1wbGUubmV0hwR/ +AAABhwR/AAACMAkGA1UdEwQCMAAwCwYDVR0PBAQDAgXgMA0GCSqGSIb3DQEBBQUA +A4GBAGEDWUYnGygtnvScamz3o4PuVMFubBfqIdWCu02hBgzL3Hi3/UkOEsV028GM +M3YMB+it7U8eDdT2XjzBDlvpxWT1hXWnmJFu6z6B8N/JFk8fOkaP7U6YjZlG5N9m +L1A4WtQz0SgXcnIujKisqIaymYrvpANnm4IsqTKsnwZD7CsQ +-----END CERTIFICATE REQUEST----- +` + +var testCAPrivateKey = ` +-----BEGIN RSA PRIVATE KEY----- +MIICXAIBAAKBgQC7QNFtw54heoD9KL2s2Qr7utKZFM/8GXYHh3Y5/Zis9USlJ7Mc +Lorbmm9Lopnr5zUBZULAxAgX51X0FbifK8Re3JIZvpFRyxNw8aWYBnOk/sX7UhUH +pI139dSAhkNAMkRQd1ySpDP+4okCptgZPs7h0bXwoYmWMNFKlaRZHuAQLQIDAQAB +AoGAQ/YwjLAU8n2t1zQ0M0nLDLYvvVOqcQskpXLq2/1Irm2OborMHQxfZXjVsBPh +3ZbazBjec2wyq8pQjfhcO5j8+fj9zLtRNDpWEa9t/VDky0MSGezQyLL1J5+htFDJ +JDCkKK441IWKGCMC31hoVP6PvE/3G2+vWAkrkT4U7ekLQVkCQQD1/RKMxDFJ57Qr +Zlu1y72dnGLsGqoxeNaco6G5JXAEEcWTx8qXghKQX0uHxooeRYQRupOGLBo1Js1p +/AZDR8inAkEAwt/J0GDsojV89RbpJ0h7C1kcxNULooCYQZs/rmJcVXSs6pUIIFdI +oYQIEGnRsfQUPo6EUUGMKh8sSEjF6R8nCwJBAMKYuoT7a9aAYwp2RhTSIaW+oo8P +JRZP9s8hr31tPWkqufeHdSBYOOFXUcQObxM1gR4ZUD0zRGRJ1vSB+F5fOj8CQEuG +HZnTpoHrBuWZnnyp+33XaG3kP2EYQ2nRuClmV3CLCmTTo1WdXjmyiMmLqUg1Vw8z +fpZbN+4vLKNLCOCjQScCQDWmNDrie4Omd5wWKV5B+LVZO8/xMlub6IEioZpMfDGZ +q1Ov/Qw2ge3yumfO+6GzKG0k13yYEn1AcatF5lP8BYY= +-----END RSA PRIVATE KEY----- +` + +var testCACert = ` +-----BEGIN CERTIFICATE----- +MIIDVTCCAr6gAwIBAgIJALLsVgWAcCvxMA0GCSqGSIb3DQEBBQUAMHsxCzAJBgNV +BAYTAlVTMQswCQYDVQQIEwJDQTEWMBQGA1UEBxMNUGlyYXRlIEhhcmJvcjEVMBMG +A1UEChMMRXhhbXBsZSwgSW5jMSEwHwYDVQQLExhEZXBhcnRtZW50IG9mIENBIFRl +c3RpbmcxDTALBgNVBAMTBHJvb3QwHhcNMTUxMTE0MTY1MTQ0WhcNMTUxMjE0MTY1 +MTQ0WjB7MQswCQYDVQQGEwJVUzELMAkGA1UECBMCQ0ExFjAUBgNVBAcTDVBpcmF0 +ZSBIYXJib3IxFTATBgNVBAoTDEV4YW1wbGUsIEluYzEhMB8GA1UECxMYRGVwYXJ0 +bWVudCBvZiBDQSBUZXN0aW5nMQ0wCwYDVQQDEwRyb290MIGfMA0GCSqGSIb3DQEB +AQUAA4GNADCBiQKBgQC7QNFtw54heoD9KL2s2Qr7utKZFM/8GXYHh3Y5/Zis9USl +J7McLorbmm9Lopnr5zUBZULAxAgX51X0FbifK8Re3JIZvpFRyxNw8aWYBnOk/sX7 +UhUHpI139dSAhkNAMkRQd1ySpDP+4okCptgZPs7h0bXwoYmWMNFKlaRZHuAQLQID +AQABo4HgMIHdMB0GA1UdDgQWBBQyrsMhTd85ATqm9vNybTtAbwnGkDCBrQYDVR0j +BIGlMIGigBQyrsMhTd85ATqm9vNybTtAbwnGkKF/pH0wezELMAkGA1UEBhMCVVMx +CzAJBgNVBAgTAkNBMRYwFAYDVQQHEw1QaXJhdGUgSGFyYm9yMRUwEwYDVQQKEwxF +eGFtcGxlLCBJbmMxITAfBgNVBAsTGERlcGFydG1lbnQgb2YgQ0EgVGVzdGluZzEN +MAsGA1UEAxMEcm9vdIIJALLsVgWAcCvxMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcN +AQEFBQADgYEAuJ7JGZlSzbQOuAFz2t3c1pQzUIiS74blFbg6RPvNPSSjoBg3Ly61 +FbliR8P3qiSWA/X03/XSMTH1XkHU8re+P0uILUzLJkKBkdHJfdwfk8kifDjdO14+ +tffPaqAEFUkwhbiQUoj9aeTOOS6kEjbMV6+o7fsz5pPUHbj/l4idys0= +-----END CERTIFICATE----- +` diff --git a/builtin/providers/tls/resource_cert_request.go b/builtin/providers/tls/resource_cert_request.go index ac1f70071f..7dd1430c6b 100644 --- a/builtin/providers/tls/resource_cert_request.go +++ b/builtin/providers/tls/resource_cert_request.go @@ -10,6 +10,8 @@ import ( "github.com/hashicorp/terraform/helper/schema" ) +const pemCertReqType = "CERTIFICATE REQUEST" + func resourceCertRequest() *schema.Resource { return &schema.Resource{ Create: CreateCertRequest, @@ -71,19 +73,9 @@ func resourceCertRequest() *schema.Resource { } func CreateCertRequest(d *schema.ResourceData, meta interface{}) error { - keyAlgoName := d.Get("key_algorithm").(string) - var keyFunc keyParser - var ok bool - if keyFunc, ok = keyParsers[keyAlgoName]; !ok { - return fmt.Errorf("invalid key_algorithm %#v", keyAlgoName) - } - keyBlock, _ := pem.Decode([]byte(d.Get("private_key_pem").(string))) - if keyBlock == nil { - return fmt.Errorf("no PEM 
block found in private_key_pem") - } - key, err := keyFunc(keyBlock.Bytes) + key, err := parsePrivateKey(d, "private_key_pem", "key_algorithm") if err != nil { - return fmt.Errorf("failed to decode private_key_pem: %s", err) + return err } subjectConfs := d.Get("subject").([]interface{}) @@ -117,7 +109,7 @@ func CreateCertRequest(d *schema.ResourceData, meta interface{}) error { if err != nil { fmt.Errorf("Error creating certificate request: %s", err) } - certReqPem := string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE REQUEST", Bytes: certReqBytes})) + certReqPem := string(pem.EncodeToMemory(&pem.Block{Type: pemCertReqType, Bytes: certReqBytes})) d.SetId(hashForState(string(certReqBytes))) d.Set("cert_request_pem", certReqPem) diff --git a/builtin/providers/tls/resource_certificate.go b/builtin/providers/tls/resource_certificate.go new file mode 100644 index 0000000000..bfdc6eea7f --- /dev/null +++ b/builtin/providers/tls/resource_certificate.go @@ -0,0 +1,210 @@ +package tls + +import ( + "crypto" + "crypto/ecdsa" + "crypto/elliptic" + "crypto/rand" + "crypto/rsa" + "crypto/sha1" + "crypto/x509" + "encoding/asn1" + "encoding/pem" + "errors" + "fmt" + "math/big" + "time" + + "github.com/hashicorp/terraform/helper/schema" +) + +const pemCertType = "CERTIFICATE" + +var keyUsages map[string]x509.KeyUsage = map[string]x509.KeyUsage{ + "digital_signature": x509.KeyUsageDigitalSignature, + "content_commitment": x509.KeyUsageContentCommitment, + "key_encipherment": x509.KeyUsageKeyEncipherment, + "data_encipherment": x509.KeyUsageDataEncipherment, + "key_agreement": x509.KeyUsageKeyAgreement, + "cert_signing": x509.KeyUsageCertSign, + "crl_signing": x509.KeyUsageCRLSign, + "encipher_only": x509.KeyUsageEncipherOnly, + "decipher_only": x509.KeyUsageDecipherOnly, +} + +var extKeyUsages map[string]x509.ExtKeyUsage = map[string]x509.ExtKeyUsage{ + "any_extended": x509.ExtKeyUsageAny, + "server_auth": x509.ExtKeyUsageServerAuth, + "client_auth": x509.ExtKeyUsageClientAuth, + "code_signing": x509.ExtKeyUsageCodeSigning, + "email_protection": x509.ExtKeyUsageEmailProtection, + "ipsec_end_system": x509.ExtKeyUsageIPSECEndSystem, + "ipsec_tunnel": x509.ExtKeyUsageIPSECTunnel, + "ipsec_user": x509.ExtKeyUsageIPSECUser, + "timestamping": x509.ExtKeyUsageTimeStamping, + "ocsp_signing": x509.ExtKeyUsageOCSPSigning, + "microsoft_server_gated_crypto": x509.ExtKeyUsageMicrosoftServerGatedCrypto, + "netscape_server_gated_crypto": x509.ExtKeyUsageNetscapeServerGatedCrypto, +} + +// rsaPublicKey reflects the ASN.1 structure of a PKCS#1 public key. +type rsaPublicKey struct { + N *big.Int + E int +} + +// generateSubjectKeyID generates a SHA-1 hash of the subject public key. 
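+// RFC 5280 section 4.2.1.2, method (1), is the intent here: hash the encoding
+// of the key itself (PKCS#1 DER for RSA, the uncompressed curve point for
+// ECDSA) rather than the whole SubjectPublicKeyInfo. A hypothetical caller:
+//
+//     ski, err := generateSubjectKeyID(pub) // 20-byte SHA-1 digest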
+func generateSubjectKeyID(pub crypto.PublicKey) ([]byte, error) { + var publicKeyBytes []byte + var err error + + switch pub := pub.(type) { + case *rsa.PublicKey: + publicKeyBytes, err = asn1.Marshal(rsaPublicKey{N: pub.N, E: pub.E}) + if err != nil { + return nil, err + } + case *ecdsa.PublicKey: + publicKeyBytes = elliptic.Marshal(pub.Curve, pub.X, pub.Y) + default: + return nil, errors.New("only RSA and ECDSA public keys supported") + } + + hash := sha1.Sum(publicKeyBytes) + return hash[:], nil +} + +func resourceCertificateCommonSchema() map[string]*schema.Schema { + return map[string]*schema.Schema{ + "validity_period_hours": &schema.Schema{ + Type: schema.TypeInt, + Required: true, + Description: "Number of hours that the certificate will remain valid for", + ForceNew: true, + }, + + "early_renewal_hours": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + Default: 0, + Description: "Number of hours before the certificate's expiry when a new certificate will be generated", + ForceNew: true, + }, + + "is_ca_certificate": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Description: "Whether the generated certificate will be usable as a CA certificate", + ForceNew: true, + }, + + "allowed_uses": &schema.Schema{ + Type: schema.TypeList, + Required: true, + Description: "Uses that are allowed for the certificate", + ForceNew: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + }, + + "cert_pem": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + + "validity_start_time": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + + "validity_end_time": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + } +} + +func createCertificate(d *schema.ResourceData, template, parent *x509.Certificate, pub crypto.PublicKey, priv interface{}) error { + var err error + + template.NotBefore = time.Now() + template.NotAfter = template.NotBefore.Add(time.Duration(d.Get("validity_period_hours").(int)) * time.Hour) + + serialNumberLimit := new(big.Int).Lsh(big.NewInt(1), 128) + template.SerialNumber, err = rand.Int(rand.Reader, serialNumberLimit) + if err != nil { + return fmt.Errorf("failed to generate serial number: %s", err) + } + + keyUsesI := d.Get("allowed_uses").([]interface{}) + for _, keyUseI := range keyUsesI { + keyUse := keyUseI.(string) + if usage, ok := keyUsages[keyUse]; ok { + template.KeyUsage |= usage + } + if usage, ok := extKeyUsages[keyUse]; ok { + template.ExtKeyUsage = append(template.ExtKeyUsage, usage) + } + } + + if d.Get("is_ca_certificate").(bool) { + template.IsCA = true + + template.SubjectKeyId, err = generateSubjectKeyID(pub) + if err != nil { + return fmt.Errorf("failed to set subject key identifier: %s", err) + } + } + + certBytes, err := x509.CreateCertificate(rand.Reader, template, parent, pub, priv) + if err != nil { + return fmt.Errorf("error creating certificate: %s", err) + } + certPem := string(pem.EncodeToMemory(&pem.Block{Type: pemCertType, Bytes: certBytes})) + + validFromBytes, err := template.NotBefore.MarshalText() + if err != nil { + return fmt.Errorf("error serializing validity_start_time: %s", err) + } + validToBytes, err := template.NotAfter.MarshalText() + if err != nil { + return fmt.Errorf("error serializing validity_end_time: %s", err) + } + + d.SetId(template.SerialNumber.String()) + d.Set("cert_pem", certPem) + d.Set("validity_start_time", string(validFromBytes)) + d.Set("validity_end_time", string(validToBytes)) + + return nil +} + +func DeleteCertificate(d *schema.ResourceData, 
meta interface{}) error { + d.SetId("") + return nil +} + +func ReadCertificate(d *schema.ResourceData, meta interface{}) error { + + endTimeStr := d.Get("validity_end_time").(string) + endTime := time.Now() + err := endTime.UnmarshalText([]byte(endTimeStr)) + if err != nil { + // If end time is invalid then we'll just throw away the whole + // thing so we can generate a new one. + d.SetId("") + return nil + } + + earlyRenewalPeriod := time.Duration(-d.Get("early_renewal_hours").(int)) * time.Hour + endTime = endTime.Add(earlyRenewalPeriod) + + if time.Now().After(endTime) { + // Treat an expired certificate as not existing, so we'll generate + // a new one with the next plan. + d.SetId("") + } + + return nil +} diff --git a/builtin/providers/tls/resource_locally_signed_cert.go b/builtin/providers/tls/resource_locally_signed_cert.go new file mode 100644 index 0000000000..39c90022f8 --- /dev/null +++ b/builtin/providers/tls/resource_locally_signed_cert.go @@ -0,0 +1,79 @@ +package tls + +import ( + "crypto/x509" + + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceLocallySignedCert() *schema.Resource { + s := resourceCertificateCommonSchema() + + s["cert_request_pem"] = &schema.Schema{ + Type: schema.TypeString, + Required: true, + Description: "PEM-encoded certificate request", + ForceNew: true, + StateFunc: func(v interface{}) string { + return hashForState(v.(string)) + }, + } + + s["ca_key_algorithm"] = &schema.Schema{ + Type: schema.TypeString, + Required: true, + Description: "Name of the algorithm used to generate the certificate's private key", + ForceNew: true, + } + + s["ca_private_key_pem"] = &schema.Schema{ + Type: schema.TypeString, + Required: true, + Description: "PEM-encoded CA private key used to sign the certificate", + ForceNew: true, + StateFunc: func(v interface{}) string { + return hashForState(v.(string)) + }, + } + + s["ca_cert_pem"] = &schema.Schema{ + Type: schema.TypeString, + Required: true, + Description: "PEM-encoded CA certificate", + ForceNew: true, + StateFunc: func(v interface{}) string { + return hashForState(v.(string)) + }, + } + + return &schema.Resource{ + Create: CreateLocallySignedCert, + Delete: DeleteCertificate, + Read: ReadCertificate, + Schema: s, + } +} + +func CreateLocallySignedCert(d *schema.ResourceData, meta interface{}) error { + certReq, err := parseCertificateRequest(d, "cert_request_pem") + if err != nil { + return err + } + caKey, err := parsePrivateKey(d, "ca_private_key_pem", "ca_key_algorithm") + if err != nil { + return err + } + caCert, err := parseCertificate(d, "ca_cert_pem") + if err != nil { + return err + } + + cert := x509.Certificate{ + Subject: certReq.Subject, + DNSNames: certReq.DNSNames, + IPAddresses: certReq.IPAddresses, + BasicConstraintsValid: true, + } + + return createCertificate(d, &cert, caCert, certReq.PublicKey, caKey) +} diff --git a/builtin/providers/tls/resource_locally_signed_cert_test.go b/builtin/providers/tls/resource_locally_signed_cert_test.go new file mode 100644 index 0000000000..7e9688d121 --- /dev/null +++ b/builtin/providers/tls/resource_locally_signed_cert_test.go @@ -0,0 +1,162 @@ +package tls + +import ( + "bytes" + "crypto/x509" + "encoding/pem" + "errors" + "fmt" + "strings" + "testing" + "time" + + r "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestLocallySignedCert(t *testing.T) { + r.Test(t, r.TestCase{ + Providers: testProviders, + Steps: []r.TestStep{ + r.TestStep{ + Config: fmt.Sprintf(` + resource 
"tls_locally_signed_cert" "test" { + cert_request_pem = < (2 * time.Minute) { + return fmt.Errorf("certificate validity begins more than two minutes in the past") + } + if cert.NotAfter.Sub(cert.NotBefore) != time.Hour { + return fmt.Errorf("certificate validity is not one hour") + } + + caBlock, _ := pem.Decode([]byte(testCACert)) + caCert, err := x509.ParseCertificate(caBlock.Bytes) + if err != nil { + return fmt.Errorf("error parsing ca cert: %s", err) + } + certPool := x509.NewCertPool() + + // Verify certificate + _, err = cert.Verify(x509.VerifyOptions{Roots: certPool}) + if err == nil { + return errors.New("incorrectly verified certificate") + } else if _, ok := err.(x509.UnknownAuthorityError); !ok { + return fmt.Errorf("incorrect verify error: expected UnknownAuthorityError, got %v", err) + } + certPool.AddCert(caCert) + if _, err = cert.Verify(x509.VerifyOptions{Roots: certPool}); err != nil { + return fmt.Errorf("verify failed: %s", err) + } + + return nil + }, + }, + }, + }) +} diff --git a/builtin/providers/tls/resource_private_key.go b/builtin/providers/tls/resource_private_key.go index f3fdd3f9bc..8270cc624f 100644 --- a/builtin/providers/tls/resource_private_key.go +++ b/builtin/providers/tls/resource_private_key.go @@ -9,6 +9,8 @@ import ( "encoding/pem" "fmt" + "golang.org/x/crypto/ssh" + "github.com/hashicorp/terraform/helper/schema" ) @@ -80,6 +82,16 @@ func resourcePrivateKey() *schema.Resource { Type: schema.TypeString, Computed: true, }, + + "public_key_pem": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + + "public_key_openssh": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, }, } } @@ -100,25 +112,47 @@ func CreatePrivateKey(d *schema.ResourceData, meta interface{}) error { var keyPemBlock *pem.Block switch k := key.(type) { case *rsa.PrivateKey: - keyPemBlock = &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(k)} + keyPemBlock = &pem.Block{ + Type: "RSA PRIVATE KEY", + Bytes: x509.MarshalPKCS1PrivateKey(k), + } case *ecdsa.PrivateKey: - b, err := x509.MarshalECPrivateKey(k) + keyBytes, err := x509.MarshalECPrivateKey(k) if err != nil { return fmt.Errorf("error encoding key to PEM: %s", err) } - keyPemBlock = &pem.Block{Type: "EC PRIVATE KEY", Bytes: b} + keyPemBlock = &pem.Block{ + Type: "EC PRIVATE KEY", + Bytes: keyBytes, + } default: return fmt.Errorf("unsupported private key type") } keyPem := string(pem.EncodeToMemory(keyPemBlock)) - pubKeyBytes, err := x509.MarshalPKIXPublicKey(publicKey(key)) + pubKey := publicKey(key) + pubKeyBytes, err := x509.MarshalPKIXPublicKey(pubKey) if err != nil { return fmt.Errorf("failed to marshal public key: %s", err) } + pubKeyPemBlock := &pem.Block{ + Type: "PUBLIC KEY", + Bytes: pubKeyBytes, + } d.SetId(hashForState(string((pubKeyBytes)))) d.Set("private_key_pem", keyPem) + d.Set("public_key_pem", string(pem.EncodeToMemory(pubKeyPemBlock))) + + sshPubKey, err := ssh.NewPublicKey(pubKey) + if err == nil { + // Not all EC types can be SSH keys, so we'll produce this only + // if an appropriate type was selected. 
+ sshPubKeyBytes := ssh.MarshalAuthorizedKey(sshPubKey) + d.Set("public_key_openssh", string(sshPubKeyBytes)) + } else { + d.Set("public_key_openssh", "") + } return nil } diff --git a/builtin/providers/tls/resource_private_key_test.go b/builtin/providers/tls/resource_private_key_test.go index e0bcf44c49..00fc8abbd6 100644 --- a/builtin/providers/tls/resource_private_key_test.go +++ b/builtin/providers/tls/resource_private_key_test.go @@ -18,18 +18,35 @@ func TestPrivateKeyRSA(t *testing.T) { resource "tls_private_key" "test" { algorithm = "RSA" } - output "key_pem" { + output "private_key_pem" { value = "${tls_private_key.test.private_key_pem}" } + output "public_key_pem" { + value = "${tls_private_key.test.public_key_pem}" + } + output "public_key_openssh" { + value = "${tls_private_key.test.public_key_openssh}" + } `, Check: func(s *terraform.State) error { - got := s.RootModule().Outputs["key_pem"] - if !strings.HasPrefix(got, "-----BEGIN RSA PRIVATE KEY----") { - return fmt.Errorf("key is missing RSA key PEM preamble") + gotPrivate := s.RootModule().Outputs["private_key_pem"] + if !strings.HasPrefix(gotPrivate, "-----BEGIN RSA PRIVATE KEY----") { + return fmt.Errorf("private key is missing RSA key PEM preamble") } - if len(got) > 1700 { - return fmt.Errorf("key PEM looks too long for a 2048-bit key (got %v characters)", len(got)) + if len(gotPrivate) > 1700 { + return fmt.Errorf("private key PEM looks too long for a 2048-bit key (got %v characters)", len(gotPrivate)) } + + gotPublic := s.RootModule().Outputs["public_key_pem"] + if !strings.HasPrefix(gotPublic, "-----BEGIN PUBLIC KEY----") { + return fmt.Errorf("public key is missing public key PEM preamble") + } + + gotPublicSSH := s.RootModule().Outputs["public_key_openssh"] + if !strings.HasPrefix(gotPublicSSH, "ssh-rsa ") { + return fmt.Errorf("SSH public key is missing ssh-rsa prefix") + } + return nil }, }, @@ -67,15 +84,67 @@ func TestPrivateKeyECDSA(t *testing.T) { resource "tls_private_key" "test" { algorithm = "ECDSA" } - output "key_pem" { + output "private_key_pem" { value = "${tls_private_key.test.private_key_pem}" } + output "public_key_pem" { + value = "${tls_private_key.test.public_key_pem}" + } + output "public_key_openssh" { + value = "${tls_private_key.test.public_key_openssh}" + } `, Check: func(s *terraform.State) error { - got := s.RootModule().Outputs["key_pem"] - if !strings.HasPrefix(got, "-----BEGIN EC PRIVATE KEY----") { - return fmt.Errorf("Key is missing EC key PEM preamble") + gotPrivate := s.RootModule().Outputs["private_key_pem"] + if !strings.HasPrefix(gotPrivate, "-----BEGIN EC PRIVATE KEY----") { + return fmt.Errorf("Private key is missing EC key PEM preamble") } + + gotPublic := s.RootModule().Outputs["public_key_pem"] + if !strings.HasPrefix(gotPublic, "-----BEGIN PUBLIC KEY----") { + return fmt.Errorf("public key is missing public key PEM preamble") + } + + gotPublicSSH := s.RootModule().Outputs["public_key_openssh"] + if gotPublicSSH != "" { + return fmt.Errorf("P224 EC key should not generate OpenSSH public key") + } + + return nil + }, + }, + r.TestStep{ + Config: ` + resource "tls_private_key" "test" { + algorithm = "ECDSA" + ecdsa_curve = "P256" + } + output "private_key_pem" { + value = "${tls_private_key.test.private_key_pem}" + } + output "public_key_pem" { + value = "${tls_private_key.test.public_key_pem}" + } + output "public_key_openssh" { + value = "${tls_private_key.test.public_key_openssh}" + } + `, + Check: func(s *terraform.State) error { + gotPrivate := 
s.RootModule().Outputs["private_key_pem"] + if !strings.HasPrefix(gotPrivate, "-----BEGIN EC PRIVATE KEY----") { + return fmt.Errorf("Private key is missing EC key PEM preamble") + } + + gotPublic := s.RootModule().Outputs["public_key_pem"] + if !strings.HasPrefix(gotPublic, "-----BEGIN PUBLIC KEY----") { + return fmt.Errorf("public key is missing public key PEM preamble") + } + + gotPublicSSH := s.RootModule().Outputs["public_key_openssh"] + if !strings.HasPrefix(gotPublicSSH, "ecdsa-sha2-nistp256 ") { + return fmt.Errorf("P256 SSH public key is missing ecdsa prefix") + } + return nil }, }, diff --git a/builtin/providers/tls/resource_self_signed_cert.go b/builtin/providers/tls/resource_self_signed_cert.go index 4055352453..29e04154db 100644 --- a/builtin/providers/tls/resource_self_signed_cert.go +++ b/builtin/providers/tls/resource_self_signed_cert.go @@ -1,169 +1,72 @@ package tls import ( - "crypto/rand" "crypto/x509" - "encoding/pem" "fmt" - "math/big" "net" - "time" "github.com/hashicorp/terraform/helper/schema" ) -var keyUsages map[string]x509.KeyUsage = map[string]x509.KeyUsage{ - "digital_signature": x509.KeyUsageDigitalSignature, - "content_commitment": x509.KeyUsageContentCommitment, - "key_encipherment": x509.KeyUsageKeyEncipherment, - "data_encipherment": x509.KeyUsageDataEncipherment, - "key_agreement": x509.KeyUsageKeyAgreement, - "cert_signing": x509.KeyUsageCertSign, - "crl_signing": x509.KeyUsageCRLSign, - "encipher_only": x509.KeyUsageEncipherOnly, - "decipher_only": x509.KeyUsageDecipherOnly, -} - -var extKeyUsages map[string]x509.ExtKeyUsage = map[string]x509.ExtKeyUsage{ - "any_extended": x509.ExtKeyUsageAny, - "server_auth": x509.ExtKeyUsageServerAuth, - "client_auth": x509.ExtKeyUsageClientAuth, - "code_signing": x509.ExtKeyUsageCodeSigning, - "email_protection": x509.ExtKeyUsageEmailProtection, - "ipsec_end_system": x509.ExtKeyUsageIPSECEndSystem, - "ipsec_tunnel": x509.ExtKeyUsageIPSECTunnel, - "ipsec_user": x509.ExtKeyUsageIPSECUser, - "timestamping": x509.ExtKeyUsageTimeStamping, - "ocsp_signing": x509.ExtKeyUsageOCSPSigning, - "microsoft_server_gated_crypto": x509.ExtKeyUsageMicrosoftServerGatedCrypto, - "netscape_server_gated_crypto": x509.ExtKeyUsageNetscapeServerGatedCrypto, -} - func resourceSelfSignedCert() *schema.Resource { + s := resourceCertificateCommonSchema() + + s["subject"] = &schema.Schema{ + Type: schema.TypeList, + Required: true, + Elem: nameSchema, + ForceNew: true, + } + + s["dns_names"] = &schema.Schema{ + Type: schema.TypeList, + Optional: true, + Description: "List of DNS names to use as subjects of the certificate", + ForceNew: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + } + + s["ip_addresses"] = &schema.Schema{ + Type: schema.TypeList, + Optional: true, + Description: "List of IP addresses to use as subjects of the certificate", + ForceNew: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + } + + s["key_algorithm"] = &schema.Schema{ + Type: schema.TypeString, + Required: true, + Description: "Name of the algorithm to use to generate the certificate's private key", + ForceNew: true, + } + + s["private_key_pem"] = &schema.Schema{ + Type: schema.TypeString, + Required: true, + Description: "PEM-encoded private key that the certificate will belong to", + ForceNew: true, + StateFunc: func(v interface{}) string { + return hashForState(v.(string)) + }, + } + return &schema.Resource{ Create: CreateSelfSignedCert, - Delete: DeleteSelfSignedCert, - Read: ReadSelfSignedCert, - - Schema: map[string]*schema.Schema{ - 
- "dns_names": &schema.Schema{ - Type: schema.TypeList, - Optional: true, - Description: "List of DNS names to use as subjects of the certificate", - ForceNew: true, - Elem: &schema.Schema{ - Type: schema.TypeString, - }, - }, - - "ip_addresses": &schema.Schema{ - Type: schema.TypeList, - Optional: true, - Description: "List of IP addresses to use as subjects of the certificate", - ForceNew: true, - Elem: &schema.Schema{ - Type: schema.TypeString, - }, - }, - - "validity_period_hours": &schema.Schema{ - Type: schema.TypeInt, - Required: true, - Description: "Number of hours that the certificate will remain valid for", - ForceNew: true, - }, - - "early_renewal_hours": &schema.Schema{ - Type: schema.TypeInt, - Optional: true, - Default: 0, - Description: "Number of hours before the certificates expiry when a new certificate will be generated", - ForceNew: true, - }, - - "is_ca_certificate": &schema.Schema{ - Type: schema.TypeBool, - Optional: true, - Description: "Whether the generated certificate will be usable as a CA certificate", - ForceNew: true, - }, - - "allowed_uses": &schema.Schema{ - Type: schema.TypeList, - Required: true, - Description: "Uses that are allowed for the certificate", - ForceNew: true, - Elem: &schema.Schema{ - Type: schema.TypeString, - }, - }, - - "key_algorithm": &schema.Schema{ - Type: schema.TypeString, - Required: true, - Description: "Name of the algorithm to use to generate the certificate's private key", - ForceNew: true, - }, - - "private_key_pem": &schema.Schema{ - Type: schema.TypeString, - Required: true, - Description: "PEM-encoded private key that the certificate will belong to", - ForceNew: true, - StateFunc: func(v interface{}) string { - return hashForState(v.(string)) - }, - }, - - "subject": &schema.Schema{ - Type: schema.TypeList, - Required: true, - Elem: nameSchema, - ForceNew: true, - }, - - "cert_pem": &schema.Schema{ - Type: schema.TypeString, - Computed: true, - }, - - "validity_start_time": &schema.Schema{ - Type: schema.TypeString, - Computed: true, - }, - - "validity_end_time": &schema.Schema{ - Type: schema.TypeString, - Computed: true, - }, - }, + Delete: DeleteCertificate, + Read: ReadCertificate, + Schema: s, } } func CreateSelfSignedCert(d *schema.ResourceData, meta interface{}) error { - keyAlgoName := d.Get("key_algorithm").(string) - var keyFunc keyParser - var ok bool - if keyFunc, ok = keyParsers[keyAlgoName]; !ok { - return fmt.Errorf("invalid key_algorithm %#v", keyAlgoName) - } - keyBlock, _ := pem.Decode([]byte(d.Get("private_key_pem").(string))) - if keyBlock == nil { - return fmt.Errorf("no PEM block found in private_key_pem") - } - key, err := keyFunc(keyBlock.Bytes) + key, err := parsePrivateKey(d, "private_key_pem", "key_algorithm") if err != nil { - return fmt.Errorf("failed to decode private_key_pem: %s", err) - } - - notBefore := time.Now() - notAfter := notBefore.Add(time.Duration(d.Get("validity_period_hours").(int)) * time.Hour) - - serialNumberLimit := new(big.Int).Lsh(big.NewInt(1), 128) - serialNumber, err := rand.Int(rand.Reader, serialNumberLimit) - if err != nil { - return fmt.Errorf("failed to generate serial number: %s", err) + return err } subjectConfs := d.Get("subject").([]interface{}) @@ -177,24 +80,10 @@ func CreateSelfSignedCert(d *schema.ResourceData, meta interface{}) error { } cert := x509.Certificate{ - SerialNumber: serialNumber, Subject: *subject, - NotBefore: notBefore, - NotAfter: notAfter, BasicConstraintsValid: true, } - keyUsesI := d.Get("allowed_uses").([]interface{}) - for _, keyUseI := 
range keyUsesI { - keyUse := keyUseI.(string) - if usage, ok := keyUsages[keyUse]; ok { - cert.KeyUsage |= usage - } - if usage, ok := extKeyUsages[keyUse]; ok { - cert.ExtKeyUsage = append(cert.ExtKeyUsage, usage) - } - } - dnsNamesI := d.Get("dns_names").([]interface{}) for _, nameI := range dnsNamesI { cert.DNSNames = append(cert.DNSNames, nameI.(string)) @@ -208,58 +97,5 @@ func CreateSelfSignedCert(d *schema.ResourceData, meta interface{}) error { cert.IPAddresses = append(cert.IPAddresses, ip) } - if d.Get("is_ca_certificate").(bool) { - cert.IsCA = true - } - - certBytes, err := x509.CreateCertificate(rand.Reader, &cert, &cert, publicKey(key), key) - if err != nil { - fmt.Errorf("Error creating certificate: %s", err) - } - certPem := string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: certBytes})) - - validFromBytes, err := notBefore.MarshalText() - if err != nil { - return fmt.Errorf("error serializing validity_start_time: %s", err) - } - validToBytes, err := notAfter.MarshalText() - if err != nil { - return fmt.Errorf("error serializing validity_end_time: %s", err) - } - - d.SetId(serialNumber.String()) - d.Set("cert_pem", certPem) - d.Set("validity_start_time", string(validFromBytes)) - d.Set("validity_end_time", string(validToBytes)) - - return nil -} - -func DeleteSelfSignedCert(d *schema.ResourceData, meta interface{}) error { - d.SetId("") - return nil -} - -func ReadSelfSignedCert(d *schema.ResourceData, meta interface{}) error { - - endTimeStr := d.Get("validity_end_time").(string) - endTime := time.Now() - err := endTime.UnmarshalText([]byte(endTimeStr)) - if err != nil { - // If end time is invalid then we'll just throw away the whole - // thing so we can generate a new one. - d.SetId("") - return nil - } - - earlyRenewalPeriod := time.Duration(-d.Get("early_renewal_hours").(int)) * time.Hour - endTime = endTime.Add(earlyRenewalPeriod) - - if time.Now().After(endTime) { - // Treat an expired certificate as not existing, so we'll generate - // a new one with the next plan. 
- d.SetId("") - } - - return nil + return createCertificate(d, &cert, &cert, publicKey(key), key) } diff --git a/builtin/providers/tls/util.go b/builtin/providers/tls/util.go new file mode 100644 index 0000000000..b1ff32e5b0 --- /dev/null +++ b/builtin/providers/tls/util.go @@ -0,0 +1,76 @@ +package tls + +import ( + "crypto/x509" + "encoding/pem" + "fmt" + + "github.com/hashicorp/terraform/helper/schema" +) + +func decodePEM(d *schema.ResourceData, pemKey, pemType string) (*pem.Block, error) { + block, _ := pem.Decode([]byte(d.Get(pemKey).(string))) + if block == nil { + return nil, fmt.Errorf("no PEM block found in %s", pemKey) + } + if pemType != "" && block.Type != pemType { + return nil, fmt.Errorf("invalid PEM type in %s: %s", pemKey, block.Type) + } + + return block, nil +} + +func parsePrivateKey(d *schema.ResourceData, pemKey, algoKey string) (interface{}, error) { + algoName := d.Get(algoKey).(string) + + keyFunc, ok := keyParsers[algoName] + if !ok { + return nil, fmt.Errorf("invalid %s: %#v", algoKey, algoName) + } + + block, err := decodePEM(d, pemKey, "") + if err != nil { + return nil, err + } + + key, err := keyFunc(block.Bytes) + if err != nil { + return nil, fmt.Errorf("failed to decode %s: %s", pemKey, err) + } + + return key, nil +} + +func parseCertificate(d *schema.ResourceData, pemKey string) (*x509.Certificate, error) { + block, err := decodePEM(d, pemKey, "") + if err != nil { + return nil, err + } + + certs, err := x509.ParseCertificates(block.Bytes) + if err != nil { + return nil, fmt.Errorf("failed to parse %s: %s", pemKey, err) + } + if len(certs) < 1 { + return nil, fmt.Errorf("no certificates found in %s", pemKey) + } + if len(certs) > 1 { + return nil, fmt.Errorf("multiple certificates found in %s", pemKey) + } + + return certs[0], nil +} + +func parseCertificateRequest(d *schema.ResourceData, pemKey string) (*x509.CertificateRequest, error) { + block, err := decodePEM(d, pemKey, pemCertReqType) + if err != nil { + return nil, err + } + + certReq, err := x509.ParseCertificateRequest(block.Bytes) + if err != nil { + return nil, fmt.Errorf("failed to parse %s: %s", pemKey, err) + } + + return certReq, nil +} diff --git a/builtin/providers/vcd/config.go b/builtin/providers/vcd/config.go new file mode 100644 index 0000000000..44403146e4 --- /dev/null +++ b/builtin/providers/vcd/config.go @@ -0,0 +1,40 @@ +package vcd + +import ( + "fmt" + "net/url" + + "github.com/hmrc/vmware-govcd" +) + +type Config struct { + User string + Password string + Org string + Href string + VDC string + MaxRetryTimeout int +} + +type VCDClient struct { + *govcd.VCDClient + MaxRetryTimeout int +} + +func (c *Config) Client() (*VCDClient, error) { + u, err := url.ParseRequestURI(c.Href) + if err != nil { + return nil, fmt.Errorf("Something went wrong: %s", err) + } + + vcdclient := &VCDClient{ + govcd.NewVCDClient(*u), + c.MaxRetryTimeout} + org, vcd, err := vcdclient.Authenticate(c.User, c.Password, c.Org, c.VDC) + if err != nil { + return nil, fmt.Errorf("Something went wrong: %s", err) + } + vcdclient.Org = org + vcdclient.OrgVdc = vcd + return vcdclient, nil +} diff --git a/builtin/providers/vcd/provider.go b/builtin/providers/vcd/provider.go new file mode 100644 index 0000000000..6ba1a07a6e --- /dev/null +++ b/builtin/providers/vcd/provider.go @@ -0,0 +1,78 @@ +package vcd + +import ( + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/terraform" +) + +// Provider returns a terraform.ResourceProvider. 
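+// A typical provider block wiring these settings together might look like the
+// following (values hypothetical; every field can also be supplied through
+// the VCD_* environment variables declared below):
+//
+//     provider "vcd" {
+//       user     = "admin"
+//       password = "secret"
+//       org      = "my-org"
+//       url      = "https://vcd.example.com/api"
+//       vdc      = "my-vdc"
+//     }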
+func Provider() terraform.ResourceProvider { + return &schema.Provider{ + Schema: map[string]*schema.Schema{ + "user": &schema.Schema{ + Type: schema.TypeString, + Required: true, + DefaultFunc: schema.EnvDefaultFunc("VCD_USER", nil), + Description: "The user name for vcd API operations.", + }, + + "password": &schema.Schema{ + Type: schema.TypeString, + Required: true, + DefaultFunc: schema.EnvDefaultFunc("VCD_PASSWORD", nil), + Description: "The user password for vcd API operations.", + }, + + "org": &schema.Schema{ + Type: schema.TypeString, + Required: true, + DefaultFunc: schema.EnvDefaultFunc("VCD_ORG", nil), + Description: "The vcd org for API operations", + }, + + "url": &schema.Schema{ + Type: schema.TypeString, + Required: true, + DefaultFunc: schema.EnvDefaultFunc("VCD_URL", nil), + Description: "The vcd url for vcd API operations.", + }, + + "vdc": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + DefaultFunc: schema.EnvDefaultFunc("VCD_VDC", ""), + Description: "The name of the VDC to run operations on", + }, + + "maxRetryTimeout": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + DefaultFunc: schema.EnvDefaultFunc("VCD_MAX_RETRY_TIMEOUT", 60), + Description: "Max num seconds to wait for successful response when operating on resources within vCloud (defaults to 60)", + }, + }, + + ResourcesMap: map[string]*schema.Resource{ + "vcd_network": resourceVcdNetwork(), + "vcd_vapp": resourceVcdVApp(), + "vcd_firewall_rules": resourceVcdFirewallRules(), + "vcd_dnat": resourceVcdDNAT(), + "vcd_snat": resourceVcdSNAT(), + }, + + ConfigureFunc: providerConfigure, + } +} + +func providerConfigure(d *schema.ResourceData) (interface{}, error) { + config := Config{ + User: d.Get("user").(string), + Password: d.Get("password").(string), + Org: d.Get("org").(string), + Href: d.Get("url").(string), + VDC: d.Get("vdc").(string), + MaxRetryTimeout: d.Get("maxRetryTimeout").(int), + } + + return config.Client() +} diff --git a/builtin/providers/vcd/provider_test.go b/builtin/providers/vcd/provider_test.go new file mode 100644 index 0000000000..48ee207219 --- /dev/null +++ b/builtin/providers/vcd/provider_test.go @@ -0,0 +1,50 @@ +package vcd + +import ( + "os" + "testing" + + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/terraform" +) + +var testAccProviders map[string]terraform.ResourceProvider +var testAccProvider *schema.Provider + +func init() { + testAccProvider = Provider().(*schema.Provider) + testAccProviders = map[string]terraform.ResourceProvider{ + "vcd": testAccProvider, + } +} + +func TestProvider(t *testing.T) { + if err := Provider().(*schema.Provider).InternalValidate(); err != nil { + t.Fatalf("err: %s", err) + } +} + +func TestProvider_impl(t *testing.T) { + var _ terraform.ResourceProvider = Provider() +} + +func testAccPreCheck(t *testing.T) { + if v := os.Getenv("VCD_USER"); v == "" { + t.Fatal("VCD_USER must be set for acceptance tests") + } + if v := os.Getenv("VCD_PASSWORD"); v == "" { + t.Fatal("VCD_PASSWORD must be set for acceptance tests") + } + if v := os.Getenv("VCD_ORG"); v == "" { + t.Fatal("VCD_ORG must be set for acceptance tests") + } + if v := os.Getenv("VCD_URL"); v == "" { + t.Fatal("VCD_URL must be set for acceptance tests") + } + if v := os.Getenv("VCD_EDGE_GATEWAY"); v == "" { + t.Fatal("VCD_EDGE_GATEWAY must be set for acceptance tests") + } + if v := os.Getenv("VCD_VDC"); v == "" { + t.Fatal("VCD_VDC must be set for acceptance tests") + } +} diff --git a/builtin/providers/vcd/resource_vcd_dnat.go 
b/builtin/providers/vcd/resource_vcd_dnat.go new file mode 100644 index 0000000000..b764e13ba7 --- /dev/null +++ b/builtin/providers/vcd/resource_vcd_dnat.go @@ -0,0 +1,136 @@ +package vcd + +import ( + "fmt" + + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceVcdDNAT() *schema.Resource { + return &schema.Resource{ + Create: resourceVcdDNATCreate, + Delete: resourceVcdDNATDelete, + Read: resourceVcdDNATRead, + + Schema: map[string]*schema.Schema{ + "edge_gateway": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "external_ip": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "port": &schema.Schema{ + Type: schema.TypeInt, + Required: true, + ForceNew: true, + }, + + "internal_ip": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + }, + } +} + +func resourceVcdDNATCreate(d *schema.ResourceData, meta interface{}) error { + vcdClient := meta.(*VCDClient) + // Multiple VCD components need to run operations on the Edge Gateway; it + // throws an error if it is already performing an operation, so we must + // wait until we can acquire a lock on the client + vcdClient.Mutex.Lock() + defer vcdClient.Mutex.Unlock() + portString := getPortString(d.Get("port").(int)) + + edgeGateway, err := vcdClient.OrgVdc.FindEdgeGateway(d.Get("edge_gateway").(string)) + + if err != nil { + return fmt.Errorf("Unable to find edge gateway: %#v", err) + } + + // Retry in a loop for further protection against the edge gateway being + // busy for reasons our lock cannot guard against, e.g. another person using + // another client. If the edge gateway returns a busy error, wait and then + // try again, continuing until a non-busy error or success + + err = retryCall(vcdClient.MaxRetryTimeout, func() error { + task, err := edgeGateway.AddNATMapping("DNAT", d.Get("external_ip").(string), + d.Get("internal_ip").(string), + portString) + if err != nil { + return fmt.Errorf("Error setting DNAT rules: %#v", err) + } + + return task.WaitTaskCompletion() + }) + + if err != nil { + return fmt.Errorf("Error completing tasks: %#v", err) + } + + d.SetId(d.Get("external_ip").(string) + "_" + portString) + return nil +} + +func resourceVcdDNATRead(d *schema.ResourceData, meta interface{}) error { + vcdClient := meta.(*VCDClient) + e, err := vcdClient.OrgVdc.FindEdgeGateway(d.Get("edge_gateway").(string)) + + if err != nil { + return fmt.Errorf("Unable to find edge gateway: %#v", err) + } + + var found bool + + for _, r := range e.EdgeGateway.Configuration.EdgeGatewayServiceConfiguration.NatService.NatRule { + if r.RuleType == "DNAT" && + r.GatewayNatRule.OriginalIP == d.Get("external_ip").(string) && + r.GatewayNatRule.OriginalPort == getPortString(d.Get("port").(int)) { + found = true + d.Set("internal_ip", r.GatewayNatRule.TranslatedIP) + } + } + + if !found { + d.SetId("") + } + + return nil +} + +func resourceVcdDNATDelete(d *schema.ResourceData, meta interface{}) error { + vcdClient := meta.(*VCDClient) + // Multiple VCD components need to run operations on the Edge Gateway; it + // throws an error if it is already performing an operation, so we must + // wait until we can acquire a lock on the client + vcdClient.Mutex.Lock() + defer vcdClient.Mutex.Unlock() + portString := getPortString(d.Get("port").(int)) + + edgeGateway, err := vcdClient.OrgVdc.FindEdgeGateway(d.Get("edge_gateway").(string)) + + if err != nil { + return 
fmt.Errorf("Unable to find edge gateway: %#v", err) + } + err = retryCall(vcdClient.MaxRetryTimeout, func() error { + task, err := edgeGateway.RemoveNATMapping("DNAT", d.Get("external_ip").(string), + d.Get("internal_ip").(string), + portString) + if err != nil { + return fmt.Errorf("Error setting DNAT rules: %#v", err) + } + + return task.WaitTaskCompletion() + }) + if err != nil { + return fmt.Errorf("Error completing tasks: %#v", err) + } + return nil +} diff --git a/builtin/providers/vcd/resource_vcd_dnat_test.go b/builtin/providers/vcd/resource_vcd_dnat_test.go new file mode 100644 index 0000000000..759d9d16b8 --- /dev/null +++ b/builtin/providers/vcd/resource_vcd_dnat_test.go @@ -0,0 +1,120 @@ +package vcd + +import ( + "fmt" + "os" + "testing" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" + "github.com/hmrc/vmware-govcd" +) + +func TestAccVcdDNAT_Basic(t *testing.T) { + if v := os.Getenv("VCD_EXTERNAL_IP"); v == "" { + t.Skip("Environment variable VCD_EXTERNAL_IP must be set to run DNAT tests") + return + } + + var e govcd.EdgeGateway + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckVcdDNATDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: fmt.Sprintf(testAccCheckVcdDnat_basic, os.Getenv("VCD_EDGE_GATWEWAY"), os.Getenv("VCD_EXTERNAL_IP")), + Check: resource.ComposeTestCheckFunc( + testAccCheckVcdDNATExists("vcd_dnat.bar", &e), + resource.TestCheckResourceAttr( + "vcd_dnat.bar", "external_ip", os.Getenv("VCD_EXTERNAL_IP")), + resource.TestCheckResourceAttr( + "vcd_dnat.bar", "port", "77"), + resource.TestCheckResourceAttr( + "vcd_dnat.bar", "internal_ip", "10.10.102.60"), + ), + }, + }, + }) +} + +func testAccCheckVcdDNATExists(n string, gateway *govcd.EdgeGateway) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No DNAT ID is set") + } + + conn := testAccProvider.Meta().(*VCDClient) + + gatewayName := rs.Primary.Attributes["edge_gateway"] + edgeGateway, err := conn.OrgVdc.FindEdgeGateway(gatewayName) + + if err != nil { + return fmt.Errorf("Could not find edge gateway") + } + + var found bool + for _, v := range edgeGateway.EdgeGateway.Configuration.EdgeGatewayServiceConfiguration.NatService.NatRule { + if v.RuleType == "DNAT" && + v.GatewayNatRule.OriginalIP == os.Getenv("VCD_EXTERNAL_IP") && + v.GatewayNatRule.OriginalPort == "77" && + v.GatewayNatRule.TranslatedIP == "10.10.102.60" { + found = true + } + } + if !found { + return fmt.Errorf("DNAT rule was not found") + } + + *gateway = edgeGateway + + return nil + } +} + +func testAccCheckVcdDNATDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*VCDClient) + for _, rs := range s.RootModule().Resources { + if rs.Type != "vcd_dnat" { + continue + } + + gatewayName := rs.Primary.Attributes["edge_gateway"] + edgeGateway, err := conn.OrgVdc.FindEdgeGateway(gatewayName) + + if err != nil { + return fmt.Errorf("Could not find edge gateway") + } + + var found bool + for _, v := range edgeGateway.EdgeGateway.Configuration.EdgeGatewayServiceConfiguration.NatService.NatRule { + if v.RuleType == "DNAT" && + v.GatewayNatRule.OriginalIP == os.Getenv("VCD_EXTERNAL_IP") && + v.GatewayNatRule.OriginalPort == "77" && + v.GatewayNatRule.TranslatedIP == "10.10.102.60" { + found = true + } + } + + if found { + 
return fmt.Errorf("DNAT rule still exists.") + } + } + + return nil +} + +const testAccCheckVcdDnat_basic = ` +resource "vcd_dnat" "bar" { + edge_gateway = "%s" + external_ip = "%s" + port = 77 + internal_ip = "10.10.102.60" +} +` diff --git a/builtin/providers/vcd/resource_vcd_firewall_rules.go b/builtin/providers/vcd/resource_vcd_firewall_rules.go new file mode 100644 index 0000000000..325af24cd3 --- /dev/null +++ b/builtin/providers/vcd/resource_vcd_firewall_rules.go @@ -0,0 +1,198 @@ +package vcd + +import ( + "fmt" + "log" + "strings" + + "github.com/hashicorp/terraform/helper/schema" + types "github.com/hmrc/vmware-govcd/types/v56" +) + +func resourceVcdFirewallRules() *schema.Resource { + return &schema.Resource{ + Create: resourceVcdFirewallRulesCreate, + Delete: resourceFirewallRulesDelete, + Read: resourceFirewallRulesRead, + + Schema: map[string]*schema.Schema{ + "edge_gateway": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "default_action": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "rule": &schema.Schema{ + Type: schema.TypeList, + Optional: true, + ForceNew: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "id": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + + "description": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "policy": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "protocol": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "destination_port": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "destination_ip": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "source_port": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "source_ip": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + }, + }, + }, + }, + } +} + +func resourceVcdFirewallRulesCreate(d *schema.ResourceData, meta interface{}) error { + vcdClient := meta.(*VCDClient) + vcdClient.Mutex.Lock() + defer vcdClient.Mutex.Unlock() + + edgeGateway, err := vcdClient.OrgVdc.FindEdgeGateway(d.Get("edge_gateway").(string)) + if err != nil { + return fmt.Errorf("Unable to find edge gateway: %s", err) + } + + err = retryCall(vcdClient.MaxRetryTimeout, func() error { + edgeGateway.Refresh() + firewallRules, _ := expandFirewallRules(d, edgeGateway.EdgeGateway) + task, err := edgeGateway.CreateFirewallRules(d.Get("default_action").(string), firewallRules) + if err != nil { + log.Printf("[INFO] Error setting firewall rules: %s", err) + return fmt.Errorf("Error setting firewall rules: %#v", err) + } + + return task.WaitTaskCompletion() + }) + if err != nil { + return fmt.Errorf("Error completing tasks: %#v", err) + } + + d.SetId(d.Get("edge_gateway").(string)) + + return resourceFirewallRulesRead(d, meta) +} + +func resourceFirewallRulesDelete(d *schema.ResourceData, meta interface{}) error { + vcdClient := meta.(*VCDClient) + vcdClient.Mutex.Lock() + defer vcdClient.Mutex.Unlock() + + edgeGateway, err := vcdClient.OrgVdc.FindEdgeGateway(d.Get("edge_gateway").(string)) + + firewallRules := deleteFirewallRules(d, edgeGateway.EdgeGateway) + defaultAction := edgeGateway.EdgeGateway.Configuration.EdgeGatewayServiceConfiguration.FirewallService.DefaultAction + task, err := edgeGateway.CreateFirewallRules(defaultAction, firewallRules) + if err != nil { + return fmt.Errorf("Error deleting firewall rules: %#v", err) + } + + err = 
task.WaitTaskCompletion() + if err != nil { + return fmt.Errorf("Error completing tasks: %#v", err) + } + + return nil +} + +func resourceFirewallRulesRead(d *schema.ResourceData, meta interface{}) error { + vcdClient := meta.(*VCDClient) + + edgeGateway, err := vcdClient.OrgVdc.FindEdgeGateway(d.Get("edge_gateway").(string)) + if err != nil { + return fmt.Errorf("Error finding edge gateway: %#v", err) + } + ruleList := d.Get("rule").([]interface{}) + firewallRules := *edgeGateway.EdgeGateway.Configuration.EdgeGatewayServiceConfiguration.FirewallService + rulesCount := d.Get("rule.#").(int) + for i := 0; i < rulesCount; i++ { + prefix := fmt.Sprintf("rule.%d", i) + if d.Get(prefix+".id").(string) == "" { + log.Printf("[INFO] Rule %d has no id. Searching...", i) + ruleid, err := matchFirewallRule(d, prefix, firewallRules.FirewallRule) + if err == nil { + currentRule := ruleList[i].(map[string]interface{}) + currentRule["id"] = ruleid + ruleList[i] = currentRule + } + } + } + d.Set("rule", ruleList) + d.Set("default_action", firewallRules.DefaultAction) + + return nil +} + +func deleteFirewallRules(d *schema.ResourceData, gateway *types.EdgeGateway) []*types.FirewallRule { + firewallRules := gateway.Configuration.EdgeGatewayServiceConfiguration.FirewallService.FirewallRule + rulesCount := d.Get("rule.#").(int) + fwrules := make([]*types.FirewallRule, 0, len(firewallRules)-rulesCount) + + for _, f := range firewallRules { + keep := true + for i := 0; i < rulesCount; i++ { + if d.Get(fmt.Sprintf("rule.%d.id", i)).(string) != f.ID { + continue + } + keep = false + } + if keep { + fwrules = append(fwrules, f) + } + } + return fwrules +} + +func matchFirewallRule(d *schema.ResourceData, prefix string, rules []*types.FirewallRule) (string, error) { + + for _, m := range rules { + if d.Get(prefix+".description").(string) == m.Description && + d.Get(prefix+".policy").(string) == m.Policy && + strings.ToLower(d.Get(prefix+".protocol").(string)) == getProtocol(*m.Protocols) && + strings.ToLower(d.Get(prefix+".destination_port").(string)) == getPortString(m.Port) && + strings.ToLower(d.Get(prefix+".destination_ip").(string)) == strings.ToLower(m.DestinationIP) && + strings.ToLower(d.Get(prefix+".source_port").(string)) == getPortString(m.SourcePort) && + strings.ToLower(d.Get(prefix+".source_ip").(string)) == strings.ToLower(m.SourceIP) { + return m.ID, nil + } + } + return "", fmt.Errorf("Unable to find rule") +} diff --git a/builtin/providers/vcd/resource_vcd_firewall_rules_test.go b/builtin/providers/vcd/resource_vcd_firewall_rules_test.go new file mode 100644 index 0000000000..1cb2d1e3ad --- /dev/null +++ b/builtin/providers/vcd/resource_vcd_firewall_rules_test.go @@ -0,0 +1,108 @@ +package vcd + +import ( + "fmt" + "log" + "os" + "testing" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" + "github.com/hmrc/vmware-govcd" +) + +func TestAccVcdFirewallRules_basic(t *testing.T) { + + var existingRules, fwRules govcd.EdgeGateway + newConfig := createFirewallRulesConfigs(&existingRules) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: newConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckVcdFirewallRulesExists("vcd_firewall_rules.bar", &fwRules), + testAccCheckVcdFirewallRulesAttributes(&fwRules, &existingRules), + ), + }, + }, + }) + +} + +func testAccCheckVcdFirewallRulesExists(n string, gateway *govcd.EdgeGateway) 
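The deleteFirewallRules function above rebuilds the rule set without the rules this resource manages and re-pushes the remainder. A standalone sketch of that keep/drop filter, illustrative only and using bare rule IDs instead of *types.FirewallRule:

```go
// Illustrative only: keep every existing gateway rule whose ID is not
// managed by this Terraform resource.
func dropManagedRules(existing []string, managed map[string]bool) []string {
    kept := make([]string, 0, len(existing))
    for _, id := range existing {
        if !managed[id] {
            kept = append(kept, id) // unmanaged rule: leave it in place
        }
    }
    return kept
}
```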
resource.TestCheckFunc {
+    return func(s *terraform.State) error {
+        rs, ok := s.RootModule().Resources[n]
+
+        if !ok {
+            return fmt.Errorf("Not found: %s", n)
+        }
+
+        if rs.Primary.ID == "" {
+            return fmt.Errorf("No Record ID is set")
+        }
+
+        conn := testAccProvider.Meta().(*VCDClient)
+
+        resp, err := conn.OrgVdc.FindEdgeGateway(rs.Primary.ID)
+        if err != nil {
+            return fmt.Errorf("Edge Gateway does not exist.")
+        }
+
+        *gateway = resp
+
+        return nil
+    }
+}
+
+func testAccCheckVcdFirewallRulesAttributes(newRules, existingRules *govcd.EdgeGateway) resource.TestCheckFunc {
+    return func(s *terraform.State) error {
+
+        if len(newRules.EdgeGateway.Configuration.EdgeGatewayServiceConfiguration.FirewallService.FirewallRule) != len(existingRules.EdgeGateway.Configuration.EdgeGatewayServiceConfiguration.FirewallService.FirewallRule)+1 {
+            return fmt.Errorf("New firewall rule not added: %d != %d",
+                len(newRules.EdgeGateway.Configuration.EdgeGatewayServiceConfiguration.FirewallService.FirewallRule),
+                len(existingRules.EdgeGateway.Configuration.EdgeGatewayServiceConfiguration.FirewallService.FirewallRule)+1)
+        }
+
+        return nil
+    }
+}
+
+func createFirewallRulesConfigs(existingRules *govcd.EdgeGateway) string {
+    config := Config{
+        User:            os.Getenv("VCD_USER"),
+        Password:        os.Getenv("VCD_PASSWORD"),
+        Org:             os.Getenv("VCD_ORG"),
+        Href:            os.Getenv("VCD_URL"),
+        VDC:             os.Getenv("VCD_VDC"),
+        MaxRetryTimeout: 240,
+    }
+    conn, err := config.Client()
+    if err != nil {
+        return fmt.Sprintf(testAccCheckVcdFirewallRules_add, "", "")
+    }
+    edgeGateway, _ := conn.OrgVdc.FindEdgeGateway(os.Getenv("VCD_EDGE_GATEWAY"))
+    *existingRules = edgeGateway
+    log.Printf("[DEBUG] Edge gateway: %#v", edgeGateway)
+    firewallRules := *edgeGateway.EdgeGateway.Configuration.EdgeGatewayServiceConfiguration.FirewallService
+    return fmt.Sprintf(testAccCheckVcdFirewallRules_add, os.Getenv("VCD_EDGE_GATEWAY"), firewallRules.DefaultAction)
+}
+
+const testAccCheckVcdFirewallRules_add = `
+resource "vcd_firewall_rules" "bar" {
+    edge_gateway = "%s"
+    default_action = "%s"
+
+    rule {
+        description = "Test rule"
+        policy = "allow"
+        protocol = "any"
+        destination_port = "any"
+        destination_ip = "any"
+        source_port = "any"
+        source_ip = "any"
+    }
+}
+`
diff --git a/builtin/providers/vcd/resource_vcd_network.go b/builtin/providers/vcd/resource_vcd_network.go
new file mode 100644
index 0000000000..389f37b6a0
--- /dev/null
+++ b/builtin/providers/vcd/resource_vcd_network.go
@@ -0,0 +1,265 @@
+package vcd
+
+import (
+    "log"
+
+    "bytes"
+    "fmt"
+    "strings"
+
+    "github.com/hashicorp/terraform/helper/hashcode"
+    "github.com/hashicorp/terraform/helper/schema"
+    types "github.com/hmrc/vmware-govcd/types/v56"
+)
+
+func resourceVcdNetwork() *schema.Resource {
+    return &schema.Resource{
+        Create: resourceVcdNetworkCreate,
+        Read:   resourceVcdNetworkRead,
+        Delete: resourceVcdNetworkDelete,
+
+        Schema: map[string]*schema.Schema{
+            "name": &schema.Schema{
+                Type:     schema.TypeString,
+                Required: true,
+                ForceNew: true,
+            },
+
+            "fence_mode": &schema.Schema{
+                Type:     schema.TypeString,
+                Optional: true,
+                ForceNew: true,
+                Default:  "natRouted",
+            },
+
+            "edge_gateway": &schema.Schema{
+                Type:     schema.TypeString,
+                Required: true,
+                ForceNew: true,
+            },
+
+            "netmask": &schema.Schema{
+                Type:     schema.TypeString,
+                Optional: true,
+                ForceNew: true,
+                Default:  "255.255.255.0",
+            },
+
+            "gateway": &schema.Schema{
+                Type:     schema.TypeString,
+                Required: true,
+                ForceNew: true,
+            },
+
+            "dns1": &schema.Schema{
+                Type:     schema.TypeString,
+                Optional: true,
ForceNew: true, + Default: "8.8.8.8", + }, + + "dns2": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Default: "8.8.4.4", + }, + + "dns_suffix": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + + "href": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, + + "dhcp_pool": &schema.Schema{ + Type: schema.TypeSet, + Optional: true, + ForceNew: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "start_address": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "end_address": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + }, + }, + Set: resourceVcdNetworkIPAddressHash, + }, + "static_ip_pool": &schema.Schema{ + Type: schema.TypeSet, + Optional: true, + ForceNew: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "start_address": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "end_address": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + }, + }, + Set: resourceVcdNetworkIPAddressHash, + }, + }, + } +} + +func resourceVcdNetworkCreate(d *schema.ResourceData, meta interface{}) error { + vcdClient := meta.(*VCDClient) + log.Printf("[TRACE] CLIENT: %#v", vcdClient) + vcdClient.Mutex.Lock() + defer vcdClient.Mutex.Unlock() + + edgeGateway, err := vcdClient.OrgVdc.FindEdgeGateway(d.Get("edge_gateway").(string)) + + ipRanges := expandIPRange(d.Get("static_ip_pool").(*schema.Set).List()) + + newnetwork := &types.OrgVDCNetwork{ + Xmlns: "http://www.vmware.com/vcloud/v1.5", + Name: d.Get("name").(string), + Configuration: &types.NetworkConfiguration{ + FenceMode: d.Get("fence_mode").(string), + IPScopes: &types.IPScopes{ + IPScope: types.IPScope{ + IsInherited: false, + Gateway: d.Get("gateway").(string), + Netmask: d.Get("netmask").(string), + DNS1: d.Get("dns1").(string), + DNS2: d.Get("dns2").(string), + DNSSuffix: d.Get("dns_suffix").(string), + IPRanges: &ipRanges, + }, + }, + BackwardCompatibilityMode: true, + }, + EdgeGateway: &types.Reference{ + HREF: edgeGateway.EdgeGateway.HREF, + }, + IsShared: false, + } + + log.Printf("[INFO] NETWORK: %#v", newnetwork) + + err = retryCall(vcdClient.MaxRetryTimeout, func() error { + return vcdClient.OrgVdc.CreateOrgVDCNetwork(newnetwork) + }) + if err != nil { + return fmt.Errorf("Error: %#v", err) + } + + err = vcdClient.OrgVdc.Refresh() + if err != nil { + return fmt.Errorf("Error refreshing vdc: %#v", err) + } + + network, err := vcdClient.OrgVdc.FindVDCNetwork(d.Get("name").(string)) + if err != nil { + return fmt.Errorf("Error finding network: %#v", err) + } + + if dhcp, ok := d.GetOk("dhcp_pool"); ok { + err = retryCall(vcdClient.MaxRetryTimeout, func() error { + task, err := edgeGateway.AddDhcpPool(network.OrgVDCNetwork, dhcp.(*schema.Set).List()) + if err != nil { + return fmt.Errorf("Error adding DHCP pool: %#v", err) + } + + return task.WaitTaskCompletion() + }) + if err != nil { + return fmt.Errorf("Error completing tasks: %#v", err) + } + + } + + d.SetId(d.Get("name").(string)) + + return resourceVcdNetworkRead(d, meta) +} + +func resourceVcdNetworkRead(d *schema.ResourceData, meta interface{}) error { + vcdClient := meta.(*VCDClient) + log.Printf("[DEBUG] VCD Client configuration: %#v", vcdClient) + log.Printf("[DEBUG] VCD Client configuration: %#v", vcdClient.OrgVdc) + + err := vcdClient.OrgVdc.Refresh() + if err != nil { + return fmt.Errorf("Error refreshing vdc: %#v", err) + } + + network, err := 
vcdClient.OrgVdc.FindVDCNetwork(d.Id())
+    if err != nil {
+        log.Printf("[DEBUG] Network no longer exists. Removing from tfstate")
+        d.SetId("")
+        return nil
+    }
+
+    d.Set("name", network.OrgVDCNetwork.Name)
+    d.Set("href", network.OrgVDCNetwork.HREF)
+    if c := network.OrgVDCNetwork.Configuration; c != nil {
+        d.Set("fence_mode", c.FenceMode)
+        if c.IPScopes != nil {
+            d.Set("gateway", c.IPScopes.IPScope.Gateway)
+            d.Set("netmask", c.IPScopes.IPScope.Netmask)
+            d.Set("dns1", c.IPScopes.IPScope.DNS1)
+            d.Set("dns2", c.IPScopes.IPScope.DNS2)
+        }
+    }
+
+    return nil
+}
+
+func resourceVcdNetworkDelete(d *schema.ResourceData, meta interface{}) error {
+    vcdClient := meta.(*VCDClient)
+    vcdClient.Mutex.Lock()
+    defer vcdClient.Mutex.Unlock()
+    err := vcdClient.OrgVdc.Refresh()
+    if err != nil {
+        return fmt.Errorf("Error refreshing vdc: %#v", err)
+    }
+
+    network, err := vcdClient.OrgVdc.FindVDCNetwork(d.Id())
+    if err != nil {
+        return fmt.Errorf("Error finding network: %#v", err)
+    }
+
+    err = retryCall(vcdClient.MaxRetryTimeout, func() error {
+        task, err := network.Delete()
+        if err != nil {
+            return fmt.Errorf("Error Deleting Network: %#v", err)
+        }
+        return task.WaitTaskCompletion()
+    })
+    if err != nil {
+        return err
+    }
+
+    return nil
+}
+
+func resourceVcdNetworkIPAddressHash(v interface{}) int {
+    var buf bytes.Buffer
+    m := v.(map[string]interface{})
+    buf.WriteString(fmt.Sprintf("%s-",
+        strings.ToLower(m["start_address"].(string))))
+    buf.WriteString(fmt.Sprintf("%s-",
+        strings.ToLower(m["end_address"].(string))))
+
+    return hashcode.String(buf.String())
+}
diff --git a/builtin/providers/vcd/resource_vcd_network_test.go b/builtin/providers/vcd/resource_vcd_network_test.go
new file mode 100644
index 0000000000..fa59d177b7
--- /dev/null
+++ b/builtin/providers/vcd/resource_vcd_network_test.go
@@ -0,0 +1,107 @@
+package vcd
+
+import (
+    "fmt"
+    "os"
+    "regexp"
+    "testing"
+
+    "github.com/hashicorp/terraform/helper/resource"
+    "github.com/hashicorp/terraform/terraform"
+    "github.com/hmrc/vmware-govcd"
+)
+
+func TestAccVcdNetwork_Basic(t *testing.T) {
+    var network govcd.OrgVDCNetwork
+    generatedHrefRegexp := regexp.MustCompile("^https://")
+
+    resource.Test(t, resource.TestCase{
+        PreCheck:     func() { testAccPreCheck(t) },
+        Providers:    testAccProviders,
+        CheckDestroy: testAccCheckVcdNetworkDestroy,
+        Steps: []resource.TestStep{
+            resource.TestStep{
+                Config: fmt.Sprintf(testAccCheckVcdNetwork_basic, os.Getenv("VCD_EDGE_GATEWAY")),
+                Check: resource.ComposeTestCheckFunc(
+                    testAccCheckVcdNetworkExists("vcd_network.foonet", &network),
+                    testAccCheckVcdNetworkAttributes(&network),
+                    resource.TestCheckResourceAttr(
+                        "vcd_network.foonet", "name", "foonet"),
+                    resource.TestCheckResourceAttr(
+                        "vcd_network.foonet", "static_ip_pool.#", "1"),
+                    resource.TestCheckResourceAttr(
+                        "vcd_network.foonet", "gateway", "10.10.102.1"),
+                    resource.TestMatchResourceAttr(
+                        "vcd_network.foonet", "href", generatedHrefRegexp),
+                ),
+            },
+        },
+    })
+}
+
+func testAccCheckVcdNetworkExists(n string, network *govcd.OrgVDCNetwork) resource.TestCheckFunc {
+    return func(s *terraform.State) error {
+        rs, ok := s.RootModule().Resources[n]
+        if !ok {
+            return fmt.Errorf("Not found: %s", n)
+        }
+
+        if rs.Primary.ID == "" {
+            return fmt.Errorf("No network ID is set")
+        }
+
+        conn := testAccProvider.Meta().(*VCDClient)
+
+        resp, err := conn.OrgVdc.FindVDCNetwork(rs.Primary.ID)
+        if err != nil {
+            return fmt.Errorf("Network does not exist.")
+        }
+
+        *network = resp
+
+        return nil
+    }
+}
+
+func testAccCheckVcdNetworkDestroy(s *terraform.State) error {
+    conn := testAccProvider.Meta().(*VCDClient)
+
+    for _, rs := range s.RootModule().Resources {
+        if rs.Type != "vcd_network" {
+            continue
+        }
+
+        _, err := conn.OrgVdc.FindVDCNetwork(rs.Primary.ID)
+
+        if err == nil {
+            return fmt.Errorf("Network still exists.")
+        }
+
+        return nil
+    }
+
+    return nil
+}
+
+func testAccCheckVcdNetworkAttributes(network *govcd.OrgVDCNetwork) resource.TestCheckFunc {
+    return func(s *terraform.State) error {
+
+        if network.OrgVDCNetwork.Name != "foonet" {
+            return fmt.Errorf("Bad name: %s", network.OrgVDCNetwork.Name)
+        }
+
+        return nil
+    }
+}
+
+const testAccCheckVcdNetwork_basic = `
+resource "vcd_network" "foonet" {
+    name = "foonet"
+    edge_gateway = "%s"
+    gateway = "10.10.102.1"
+    static_ip_pool {
+        start_address = "10.10.102.2"
+        end_address = "10.10.102.254"
+    }
+}
+`
diff --git a/builtin/providers/vcd/resource_vcd_snat.go b/builtin/providers/vcd/resource_vcd_snat.go
new file mode 100644
index 0000000000..4ad018c863
--- /dev/null
+++ b/builtin/providers/vcd/resource_vcd_snat.go
@@ -0,0 +1,123 @@
+package vcd
+
+import (
+    "fmt"
+
+    "github.com/hashicorp/terraform/helper/schema"
+)
+
+func resourceVcdSNAT() *schema.Resource {
+    return &schema.Resource{
+        Create: resourceVcdSNATCreate,
+        Delete: resourceVcdSNATDelete,
+        Read:   resourceVcdSNATRead,
+
+        Schema: map[string]*schema.Schema{
+            "edge_gateway": &schema.Schema{
+                Type:     schema.TypeString,
+                Required: true,
+                ForceNew: true,
+            },
+
+            "external_ip": &schema.Schema{
+                Type:     schema.TypeString,
+                Required: true,
+                ForceNew: true,
+            },
+
+            "internal_ip": &schema.Schema{
+                Type:     schema.TypeString,
+                Required: true,
+                ForceNew: true,
+            },
+        },
+    }
+}
+
+func resourceVcdSNATCreate(d *schema.ResourceData, meta interface{}) error {
+    vcdClient := meta.(*VCDClient)
+    // Multiple VCD components need to run operations on the Edge Gateway; as
+    // the edge gateway will throw back an error if it is already performing an
+    // operation, we must wait until we can acquire a lock on the client.
+    vcdClient.Mutex.Lock()
+    defer vcdClient.Mutex.Unlock()
+
+    // Retry in a loop to offer further protection from the edge gateway
+    // erroring because it is busy, e.g. when another person using another
+    // client is not constrained by our lock. If the edge gateway returns a
+    // busy error, wait 3 seconds and then try again. Continue until a
+    // non-busy error or success.
+    edgeGateway, err := vcdClient.OrgVdc.FindEdgeGateway(d.Get("edge_gateway").(string))
+    if err != nil {
+        return fmt.Errorf("Unable to find edge gateway: %#v", err)
+    }
+
+    err = retryCall(vcdClient.MaxRetryTimeout, func() error {
+        task, err := edgeGateway.AddNATMapping("SNAT", d.Get("internal_ip").(string),
+            d.Get("external_ip").(string),
+            "any")
+        if err != nil {
+            return fmt.Errorf("Error setting SNAT rules: %#v", err)
+        }
+        return task.WaitTaskCompletion()
+    })
+    if err != nil {
+        return err
+    }
+
+    d.SetId(d.Get("internal_ip").(string))
+    return nil
+}
+
+func resourceVcdSNATRead(d *schema.ResourceData, meta interface{}) error {
+    vcdClient := meta.(*VCDClient)
+    e, err := vcdClient.OrgVdc.FindEdgeGateway(d.Get("edge_gateway").(string))
+
+    if err != nil {
+        return fmt.Errorf("Unable to find edge gateway: %#v", err)
+    }
+
+    var found bool
+
+    for _, r := range e.EdgeGateway.Configuration.EdgeGatewayServiceConfiguration.NatService.NatRule {
+        if r.RuleType == "SNAT" &&
+            r.GatewayNatRule.OriginalIP == d.Id() {
+            found = true
+            d.Set("external_ip", r.GatewayNatRule.TranslatedIP)
+        }
+    }
+
+    if !found {
+        d.SetId("")
+    }
+
+    return nil
+}
+
+func resourceVcdSNATDelete(d *schema.ResourceData, meta interface{}) error {
+    vcdClient := meta.(*VCDClient)
+    // Multiple VCD components need to run operations on the Edge Gateway; as
+    // the edge gateway will throw back an error if it is already performing an
+    // operation, we must wait until we can acquire a lock on the client.
+    vcdClient.Mutex.Lock()
+    defer vcdClient.Mutex.Unlock()
+
+    edgeGateway, err := vcdClient.OrgVdc.FindEdgeGateway(d.Get("edge_gateway").(string))
+    if err != nil {
+        return fmt.Errorf("Unable to find edge gateway: %#v", err)
+    }
+
+    err = retryCall(vcdClient.MaxRetryTimeout, func() error {
+        task, err := edgeGateway.RemoveNATMapping("SNAT", d.Get("internal_ip").(string),
+            d.Get("external_ip").(string),
+            "")
+        if err != nil {
+            return fmt.Errorf("Error removing SNAT rule: %#v", err)
+        }
+        return task.WaitTaskCompletion()
+    })
+    if err != nil {
+        return err
+    }
+
+    return nil
+}
diff --git a/builtin/providers/vcd/resource_vcd_snat_test.go b/builtin/providers/vcd/resource_vcd_snat_test.go
new file mode 100644
index 0000000000..87c2702a31
--- /dev/null
+++ b/builtin/providers/vcd/resource_vcd_snat_test.go
@@ -0,0 +1,119 @@
+package vcd
+
+import (
+    "fmt"
+    "os"
+    "testing"
+
+    "github.com/hashicorp/terraform/helper/resource"
+    "github.com/hashicorp/terraform/terraform"
+    "github.com/hmrc/vmware-govcd"
+)
+
+func TestAccVcdSNAT_Basic(t *testing.T) {
+    if v := os.Getenv("VCD_EXTERNAL_IP"); v == "" {
+        t.Skip("Environment variable VCD_EXTERNAL_IP must be set to run SNAT tests")
+        return
+    }
+
+    var e govcd.EdgeGateway
+
+    resource.Test(t, resource.TestCase{
+        PreCheck:     func() { testAccPreCheck(t) },
+        Providers:    testAccProviders,
+        CheckDestroy: testAccCheckVcdSNATDestroy,
+        Steps: []resource.TestStep{
+            resource.TestStep{
+                Config: fmt.Sprintf(testAccCheckVcdSnat_basic, os.Getenv("VCD_EDGE_GATEWAY"), os.Getenv("VCD_EXTERNAL_IP")),
+                Check: resource.ComposeTestCheckFunc(
+                    testAccCheckVcdSNATExists("vcd_snat.bar", &e),
+                    resource.TestCheckResourceAttr(
+                        "vcd_snat.bar", "external_ip", os.Getenv("VCD_EXTERNAL_IP")),
+                    resource.TestCheckResourceAttr(
+                        "vcd_snat.bar", "internal_ip", "10.10.102.0/24"),
+                ),
+            },
+        },
+    })
+}
+
+func testAccCheckVcdSNATExists(n string, gateway *govcd.EdgeGateway) resource.TestCheckFunc {
+    return func(s *terraform.State) error {
+        rs, ok :=
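The SNAT read path above identifies a rule purely by its original (internal) IP, which doubles as the resource ID. A hedged sketch, not part of the provider, of that lookup as a standalone function reusing the govcd types from the code above:

```go
// Hedged sketch: return the translated (external) IP for the SNAT rule
// whose original IP matches, mirroring the loop in resourceVcdSNATRead.
func findSNATRule(e govcd.EdgeGateway, internalIP string) (string, bool) {
    for _, r := range e.EdgeGateway.Configuration.EdgeGatewayServiceConfiguration.NatService.NatRule {
        if r.RuleType == "SNAT" && r.GatewayNatRule.OriginalIP == internalIP {
            return r.GatewayNatRule.TranslatedIP, true // rule found
        }
    }
    return "", false // no SNAT rule for this internal IP
}
```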
s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + //return fmt.Errorf("Check this: %#v", rs.Primary) + + if rs.Primary.ID == "" { + return fmt.Errorf("No SNAT ID is set") + } + + conn := testAccProvider.Meta().(*VCDClient) + + gatewayName := rs.Primary.Attributes["edge_gateway"] + edgeGateway, err := conn.OrgVdc.FindEdgeGateway(gatewayName) + + if err != nil { + return fmt.Errorf("Could not find edge gateway") + } + + var found bool + for _, v := range edgeGateway.EdgeGateway.Configuration.EdgeGatewayServiceConfiguration.NatService.NatRule { + if v.RuleType == "SNAT" && + v.GatewayNatRule.OriginalIP == "10.10.102.0/24" && + v.GatewayNatRule.OriginalPort == "" && + v.GatewayNatRule.TranslatedIP == os.Getenv("VCD_EXTERNAL_IP") { + found = true + } + } + if !found { + return fmt.Errorf("SNAT rule was not found") + } + + *gateway = edgeGateway + + return nil + } +} + +func testAccCheckVcdSNATDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*VCDClient) + for _, rs := range s.RootModule().Resources { + if rs.Type != "vcd_snat" { + continue + } + + gatewayName := rs.Primary.Attributes["edge_gateway"] + edgeGateway, err := conn.OrgVdc.FindEdgeGateway(gatewayName) + + if err != nil { + return fmt.Errorf("Could not find edge gateway") + } + + var found bool + for _, v := range edgeGateway.EdgeGateway.Configuration.EdgeGatewayServiceConfiguration.NatService.NatRule { + if v.RuleType == "SNAT" && + v.GatewayNatRule.OriginalIP == "10.10.102.0/24" && + v.GatewayNatRule.OriginalPort == "" && + v.GatewayNatRule.TranslatedIP == os.Getenv("VCD_EXTERNAL_IP") { + found = true + } + } + + if found { + return fmt.Errorf("SNAT rule still exists.") + } + } + + return nil +} + +const testAccCheckVcdSnat_basic = ` +resource "vcd_snat" "bar" { + edge_gateway = "%s" + external_ip = "%s" + internal_ip = "10.10.102.0/24" +} +` diff --git a/builtin/providers/vcd/resource_vcd_vapp.go b/builtin/providers/vcd/resource_vcd_vapp.go new file mode 100644 index 0000000000..8c98ecf21e --- /dev/null +++ b/builtin/providers/vcd/resource_vcd_vapp.go @@ -0,0 +1,342 @@ +package vcd + +import ( + "fmt" + "log" + + "github.com/hashicorp/terraform/helper/schema" + types "github.com/hmrc/vmware-govcd/types/v56" +) + +func resourceVcdVApp() *schema.Resource { + return &schema.Resource{ + Create: resourceVcdVAppCreate, + Update: resourceVcdVAppUpdate, + Read: resourceVcdVAppRead, + Delete: resourceVcdVAppDelete, + + Schema: map[string]*schema.Schema{ + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "template_name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "catalog_name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "network_href": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + + "network_name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "memory": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + }, + "cpus": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + }, + "ip": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + "initscript": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + "metadata": &schema.Schema{ + Type: schema.TypeMap, + Optional: true, + }, + "href": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + "power_on": &schema.Schema{ + Type: schema.TypeBool, + Optional: 
true,
+                Default:  true,
+            },
+        },
+    }
+}
+
+func resourceVcdVAppCreate(d *schema.ResourceData, meta interface{}) error {
+    vcdClient := meta.(*VCDClient)
+
+    catalog, err := vcdClient.Org.FindCatalog(d.Get("catalog_name").(string))
+    if err != nil {
+        return fmt.Errorf("Error finding catalog: %#v", err)
+    }
+
+    catalogitem, err := catalog.FindCatalogItem(d.Get("template_name").(string))
+    if err != nil {
+        return fmt.Errorf("Error finding catalog item: %#v", err)
+    }
+
+    vapptemplate, err := catalogitem.GetVAppTemplate()
+    if err != nil {
+        return fmt.Errorf("Error finding VAppTemplate: %#v", err)
+    }
+
+    log.Printf("[DEBUG] VAppTemplate: %#v", vapptemplate)
+    var networkHref string
+    net, err := vcdClient.OrgVdc.FindVDCNetwork(d.Get("network_name").(string))
+    if err != nil {
+        return fmt.Errorf("Error finding OrgVCD Network: %#v", err)
+    }
+    if attr, ok := d.GetOk("network_href"); ok {
+        networkHref = attr.(string)
+    } else {
+        networkHref = net.OrgVDCNetwork.HREF
+    }
+
+    createvapp := &types.InstantiateVAppTemplateParams{
+        Ovf:   "http://schemas.dmtf.org/ovf/envelope/1",
+        Xmlns: "http://www.vmware.com/vcloud/v1.5",
+        Name:  d.Get("name").(string),
+        InstantiationParams: &types.InstantiationParams{
+            NetworkConfigSection: &types.NetworkConfigSection{
+                Info: "Configuration parameters for logical networks",
+                NetworkConfig: &types.VAppNetworkConfiguration{
+                    NetworkName: d.Get("network_name").(string),
+                    Configuration: &types.NetworkConfiguration{
+                        ParentNetwork: &types.Reference{
+                            HREF: networkHref,
+                        },
+                        FenceMode: "bridged",
+                    },
+                },
+            },
+        },
+        Source: &types.Reference{
+            HREF: vapptemplate.VAppTemplate.HREF,
+        },
+    }
+
+    err = retryCall(vcdClient.MaxRetryTimeout, func() error {
+        e := vcdClient.OrgVdc.InstantiateVAppTemplate(createvapp)
+
+        if e != nil {
+            return fmt.Errorf("Error: %#v", e)
+        }
+
+        e = vcdClient.OrgVdc.Refresh()
+        if e != nil {
+            return fmt.Errorf("Error: %#v", e)
+        }
+        return nil
+    })
+    if err != nil {
+        return err
+    }
+
+    vapp, err := vcdClient.OrgVdc.FindVAppByName(d.Get("name").(string))
+    if err != nil {
+        return fmt.Errorf("Error finding VApp: %#v", err)
+    }
+
+    err = retryCall(vcdClient.MaxRetryTimeout, func() error {
+        task, err := vapp.ChangeVMName(d.Get("name").(string))
+        if err != nil {
+            return fmt.Errorf("Error with vm name change: %#v", err)
+        }
+
+        return task.WaitTaskCompletion()
+    })
+    if err != nil {
+        return fmt.Errorf("Error changing vmname: %#v", err)
+    }
+
+    err = retryCall(vcdClient.MaxRetryTimeout, func() error {
+        task, err := vapp.ChangeNetworkConfig(d.Get("network_name").(string), d.Get("ip").(string))
+        if err != nil {
+            return fmt.Errorf("Error with Networking change: %#v", err)
+        }
+        return task.WaitTaskCompletion()
+    })
+    if err != nil {
+        return fmt.Errorf("Error changing network: %#v", err)
+    }
+
+    if initscript, ok := d.GetOk("initscript"); ok {
+        err = retryCall(vcdClient.MaxRetryTimeout, func() error {
+            task, err := vapp.RunCustomizationScript(d.Get("name").(string), initscript.(string))
+            if err != nil {
+                return fmt.Errorf("Error with setting init script: %#v", err)
+            }
+            return task.WaitTaskCompletion()
+        })
+        if err != nil {
+            return fmt.Errorf("Error completing tasks: %#v", err)
+        }
+    }
+
+    d.SetId(d.Get("name").(string))
+
+    return resourceVcdVAppUpdate(d, meta)
+}
+
+func resourceVcdVAppUpdate(d *schema.ResourceData, meta interface{}) error {
+    vcdClient := meta.(*VCDClient)
+    vapp, err := vcdClient.OrgVdc.FindVAppByName(d.Id())
+
+    if err != nil {
+        return fmt.Errorf("Error finding VApp: %#v", err)
+    }
+
+    status, err :=
vapp.GetStatus() + if err != nil { + return fmt.Errorf("Error getting VApp status: %#v", err) + } + + if d.HasChange("metadata") { + oraw, nraw := d.GetChange("metadata") + metadata := oraw.(map[string]interface{}) + for k := range metadata { + task, err := vapp.DeleteMetadata(k) + if err != nil { + return fmt.Errorf("Error deleting metadata: %#v", err) + } + err = task.WaitTaskCompletion() + if err != nil { + return fmt.Errorf("Error completing tasks: %#v", err) + } + } + metadata = nraw.(map[string]interface{}) + for k, v := range metadata { + task, err := vapp.AddMetadata(k, v.(string)) + if err != nil { + return fmt.Errorf("Error adding metadata: %#v", err) + } + err = task.WaitTaskCompletion() + if err != nil { + return fmt.Errorf("Error completing tasks: %#v", err) + } + } + + } + + if d.HasChange("memory") || d.HasChange("cpus") || d.HasChange("power_on") { + if status != "POWERED_OFF" { + task, err := vapp.PowerOff() + if err != nil { + return fmt.Errorf("Error Powering Off: %#v", err) + } + err = task.WaitTaskCompletion() + if err != nil { + return fmt.Errorf("Error completing tasks: %#v", err) + } + } + + if d.HasChange("memory") { + err = retryCall(vcdClient.MaxRetryTimeout, func() error { + task, err := vapp.ChangeMemorySize(d.Get("memory").(int)) + if err != nil { + return fmt.Errorf("Error changing memory size: %#v", err) + } + + return task.WaitTaskCompletion() + }) + if err != nil { + return err + } + } + + if d.HasChange("cpus") { + err = retryCall(vcdClient.MaxRetryTimeout, func() error { + task, err := vapp.ChangeCPUcount(d.Get("cpus").(int)) + if err != nil { + return fmt.Errorf("Error changing cpu count: %#v", err) + } + + return task.WaitTaskCompletion() + }) + if err != nil { + return fmt.Errorf("Error completing task: %#v", err) + } + } + + if d.Get("power_on").(bool) { + task, err := vapp.PowerOn() + if err != nil { + return fmt.Errorf("Error Powering Up: %#v", err) + } + err = task.WaitTaskCompletion() + if err != nil { + return fmt.Errorf("Error completing tasks: %#v", err) + } + } + + } + + return resourceVcdVAppRead(d, meta) +} + +func resourceVcdVAppRead(d *schema.ResourceData, meta interface{}) error { + vcdClient := meta.(*VCDClient) + + err := vcdClient.OrgVdc.Refresh() + if err != nil { + return fmt.Errorf("Error refreshing vdc: %#v", err) + } + + vapp, err := vcdClient.OrgVdc.FindVAppByName(d.Id()) + if err != nil { + log.Printf("[DEBUG] Unable to find vapp. 
Removing from tfstate") + d.SetId("") + return nil + } + d.Set("ip", vapp.VApp.Children.VM[0].NetworkConnectionSection.NetworkConnection.IPAddress) + + return nil +} + +func resourceVcdVAppDelete(d *schema.ResourceData, meta interface{}) error { + vcdClient := meta.(*VCDClient) + vapp, err := vcdClient.OrgVdc.FindVAppByName(d.Id()) + + if err != nil { + return fmt.Errorf("error finding vapp: %s", err) + } + + if err != nil { + return fmt.Errorf("Error getting VApp status: %#v", err) + } + + _ = retryCall(vcdClient.MaxRetryTimeout, func() error { + task, err := vapp.Undeploy() + if err != nil { + return fmt.Errorf("Error undeploying: %#v", err) + } + + return task.WaitTaskCompletion() + }) + + err = retryCall(vcdClient.MaxRetryTimeout, func() error { + task, err := vapp.Delete() + if err != nil { + return fmt.Errorf("Error deleting: %#v", err) + } + + return task.WaitTaskCompletion() + }) + + return err +} diff --git a/builtin/providers/vcd/resource_vcd_vapp_test.go b/builtin/providers/vcd/resource_vcd_vapp_test.go new file mode 100644 index 0000000000..38162a64a2 --- /dev/null +++ b/builtin/providers/vcd/resource_vcd_vapp_test.go @@ -0,0 +1,180 @@ +package vcd + +import ( + "fmt" + "os" + "testing" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" + "github.com/hmrc/vmware-govcd" +) + +func TestAccVcdVApp_PowerOff(t *testing.T) { + var vapp govcd.VApp + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckVcdVAppDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: fmt.Sprintf(testAccCheckVcdVApp_basic, os.Getenv("VCD_EDGE_GATWEWAY")), + Check: resource.ComposeTestCheckFunc( + testAccCheckVcdVAppExists("vcd_vapp.foobar", &vapp), + testAccCheckVcdVAppAttributes(&vapp), + resource.TestCheckResourceAttr( + "vcd_vapp.foobar", "name", "foobar"), + resource.TestCheckResourceAttr( + "vcd_vapp.foobar", "ip", "10.10.102.160"), + resource.TestCheckResourceAttr( + "vcd_vapp.foobar", "power_on", "true"), + ), + }, + resource.TestStep{ + Config: fmt.Sprintf(testAccCheckVcdVApp_powerOff, os.Getenv("VCD_EDGE_GATWEWAY")), + Check: resource.ComposeTestCheckFunc( + testAccCheckVcdVAppExists("vcd_vapp.foobar", &vapp), + testAccCheckVcdVAppAttributes_off(&vapp), + resource.TestCheckResourceAttr( + "vcd_vapp.foobar", "name", "foobar"), + resource.TestCheckResourceAttr( + "vcd_vapp.foobar", "ip", "10.10.102.160"), + resource.TestCheckResourceAttr( + "vcd_vapp.foobar", "power_on", "false"), + ), + }, + }, + }) +} + +func testAccCheckVcdVAppExists(n string, vapp *govcd.VApp) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No VAPP ID is set") + } + + conn := testAccProvider.Meta().(*VCDClient) + + resp, err := conn.OrgVdc.FindVAppByName(rs.Primary.ID) + if err != nil { + return err + } + + *vapp = resp + + return nil + } +} + +func testAccCheckVcdVAppDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*VCDClient) + + for _, rs := range s.RootModule().Resources { + if rs.Type != "vcd_vapp" { + continue + } + + _, err := conn.OrgVdc.FindVAppByName(rs.Primary.ID) + + if err == nil { + return fmt.Errorf("VPCs still exist") + } + + return nil + } + + return nil +} + +func testAccCheckVcdVAppAttributes(vapp *govcd.VApp) resource.TestCheckFunc { + return func(s *terraform.State) error { + + if 
vapp.VApp.Name != "foobar" { + return fmt.Errorf("Bad name: %s", vapp.VApp.Name) + } + + if vapp.VApp.Name != vapp.VApp.Children.VM[0].Name { + return fmt.Errorf("VApp and VM names do not match. %s != %s", + vapp.VApp.Name, vapp.VApp.Children.VM[0].Name) + } + + status, _ := vapp.GetStatus() + if status != "POWERED_ON" { + return fmt.Errorf("VApp is not powered on") + } + + return nil + } +} + +func testAccCheckVcdVAppAttributes_off(vapp *govcd.VApp) resource.TestCheckFunc { + return func(s *terraform.State) error { + + if vapp.VApp.Name != "foobar" { + return fmt.Errorf("Bad name: %s", vapp.VApp.Name) + } + + if vapp.VApp.Name != vapp.VApp.Children.VM[0].Name { + return fmt.Errorf("VApp and VM names do not match. %s != %s", + vapp.VApp.Name, vapp.VApp.Children.VM[0].Name) + } + + status, _ := vapp.GetStatus() + if status != "POWERED_OFF" { + return fmt.Errorf("VApp is still powered on") + } + + return nil + } +} + +const testAccCheckVcdVApp_basic = ` +resource "vcd_network" "foonet" { + name = "foonet" + edge_gateway = "%s" + gateway = "10.10.102.1" + static_ip_pool { + start_address = "10.10.102.2" + end_address = "10.10.102.254" + } +} + +resource "vcd_vapp" "foobar" { + name = "foobar" + template_name = "base-centos-7.0-x86_64_v-0.1_b-74" + catalog_name = "NubesLab" + network_name = "${vcd_network.foonet.name}" + memory = 1024 + cpus = 1 + ip = "10.10.102.160" +} +` + +const testAccCheckVcdVApp_powerOff = ` +resource "vcd_network" "foonet" { + name = "foonet" + edge_gateway = "%s" + gateway = "10.10.102.1" + static_ip_pool { + start_address = "10.10.102.2" + end_address = "10.10.102.254" + } +} + +resource "vcd_vapp" "foobar" { + name = "foobar" + template_name = "base-centos-7.0-x86_64_v-0.1_b-74" + catalog_name = "NubesLab" + network_name = "${vcd_network.foonet.name}" + memory = 1024 + cpus = 1 + ip = "10.10.102.160" + power_on = false +} +` diff --git a/builtin/providers/vcd/structure.go b/builtin/providers/vcd/structure.go new file mode 100644 index 0000000000..6a15f0c65b --- /dev/null +++ b/builtin/providers/vcd/structure.go @@ -0,0 +1,113 @@ +package vcd + +import ( + "fmt" + "strconv" + "time" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" + types "github.com/hmrc/vmware-govcd/types/v56" +) + +func expandIPRange(configured []interface{}) types.IPRanges { + ipRange := make([]*types.IPRange, 0, len(configured)) + + for _, ipRaw := range configured { + data := ipRaw.(map[string]interface{}) + + ip := types.IPRange{ + StartAddress: data["start_address"].(string), + EndAddress: data["end_address"].(string), + } + + ipRange = append(ipRange, &ip) + } + + ipRanges := types.IPRanges{ + IPRange: ipRange, + } + + return ipRanges +} + +func expandFirewallRules(d *schema.ResourceData, gateway *types.EdgeGateway) ([]*types.FirewallRule, error) { + //firewallRules := make([]*types.FirewallRule, 0, len(configured)) + firewallRules := gateway.Configuration.EdgeGatewayServiceConfiguration.FirewallService.FirewallRule + + rulesCount := d.Get("rule.#").(int) + for i := 0; i < rulesCount; i++ { + prefix := fmt.Sprintf("rule.%d", i) + + var protocol *types.FirewallRuleProtocols + switch d.Get(prefix + ".protocol").(string) { + case "tcp": + protocol = &types.FirewallRuleProtocols{ + TCP: true, + } + case "udp": + protocol = &types.FirewallRuleProtocols{ + UDP: true, + } + case "icmp": + protocol = &types.FirewallRuleProtocols{ + ICMP: true, + } + default: + protocol = &types.FirewallRuleProtocols{ + Any: true, + } + } + rule := 
&types.FirewallRule{ + //ID: strconv.Itoa(len(configured) - i), + IsEnabled: true, + MatchOnTranslate: false, + Description: d.Get(prefix + ".description").(string), + Policy: d.Get(prefix + ".policy").(string), + Protocols: protocol, + Port: getNumericPort(d.Get(prefix + ".destination_port")), + DestinationPortRange: d.Get(prefix + ".destination_port").(string), + DestinationIP: d.Get(prefix + ".destination_ip").(string), + SourcePort: getNumericPort(d.Get(prefix + ".source_port")), + SourcePortRange: d.Get(prefix + ".source_port").(string), + SourceIP: d.Get(prefix + ".source_ip").(string), + EnableLogging: false, + } + firewallRules = append(firewallRules, rule) + } + + return firewallRules, nil +} + +func getProtocol(protocol types.FirewallRuleProtocols) string { + if protocol.TCP { + return "tcp" + } + if protocol.UDP { + return "udp" + } + if protocol.ICMP { + return "icmp" + } + return "any" +} + +func getNumericPort(portrange interface{}) int { + i, err := strconv.Atoi(portrange.(string)) + if err != nil { + return -1 + } + return i +} + +func getPortString(port int) string { + if port == -1 { + return "any" + } + portstring := strconv.Itoa(port) + return portstring +} + +func retryCall(seconds int, f resource.RetryFunc) error { + return resource.Retry(time.Duration(seconds)*time.Second, f) +} diff --git a/builtin/providers/vsphere/config.go b/builtin/providers/vsphere/config.go index 1f6af7ffd6..07ec95d002 100644 --- a/builtin/providers/vsphere/config.go +++ b/builtin/providers/vsphere/config.go @@ -9,26 +9,23 @@ import ( "golang.org/x/net/context" ) -const ( - defaultInsecureFlag = true -) - type Config struct { User string Password string - VCenterServer string + VSphereServer string + InsecureFlag bool } // Client() returns a new client for accessing VMWare vSphere. 
func (c *Config) Client() (*govmomi.Client, error) { - u, err := url.Parse("https://" + c.VCenterServer + "/sdk") + u, err := url.Parse("https://" + c.VSphereServer + "/sdk") if err != nil { return nil, fmt.Errorf("Error parse url: %s", err) } u.User = url.UserPassword(c.User, c.Password) - client, err := govmomi.NewClient(context.TODO(), u, defaultInsecureFlag) + client, err := govmomi.NewClient(context.TODO(), u, c.InsecureFlag) if err != nil { return nil, fmt.Errorf("Error setting up client: %s", err) } diff --git a/builtin/providers/vsphere/provider.go b/builtin/providers/vsphere/provider.go index 4dce81a9d6..5c98d31c01 100644 --- a/builtin/providers/vsphere/provider.go +++ b/builtin/providers/vsphere/provider.go @@ -1,6 +1,8 @@ package vsphere import ( + "fmt" + "github.com/hashicorp/terraform/helper/schema" "github.com/hashicorp/terraform/terraform" ) @@ -23,15 +25,28 @@ func Provider() terraform.ResourceProvider { Description: "The user password for vSphere API operations.", }, + "vsphere_server": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + DefaultFunc: schema.EnvDefaultFunc("VSPHERE_SERVER", nil), + Description: "The vSphere Server name for vSphere API operations.", + }, + "allow_unverified_ssl": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + DefaultFunc: schema.EnvDefaultFunc("VSPHERE_ALLOW_UNVERIFIED_SSL", false), + Description: "If set, VMware vSphere client will permit unverifiable SSL certificates.", + }, "vcenter_server": &schema.Schema{ Type: schema.TypeString, - Required: true, + Optional: true, DefaultFunc: schema.EnvDefaultFunc("VSPHERE_VCENTER", nil), - Description: "The vCenter Server name for vSphere API operations.", + Deprecated: "This field has been renamed to vsphere_server.", }, }, ResourcesMap: map[string]*schema.Resource{ + "vsphere_folder": resourceVSphereFolder(), "vsphere_virtual_machine": resourceVSphereVirtualMachine(), }, @@ -40,10 +55,25 @@ func Provider() terraform.ResourceProvider { } func providerConfigure(d *schema.ResourceData) (interface{}, error) { + // Handle backcompat support for vcenter_server; once that is removed, + // vsphere_server can just become a Required field that is referenced inline + // in Config below. 
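A hedged example of wiring the new fields together; the server name and credentials are placeholders, not values from this changeset:

```go
// allow_unverified_ssl = true in the provider block becomes InsecureFlag.
c := &Config{
    User:          "administrator@vsphere.local", // placeholder
    Password:      "password",                    // placeholder
    VSphereServer: "vcenter.example.com",         // placeholder
    InsecureFlag:  true, // skip certificate verification, e.g. lab setups
}
client, err := c.Client() // error handling elided in this sketch
```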
+ server := d.Get("vsphere_server").(string) + + if server == "" { + server = d.Get("vcenter_server").(string) + } + + if server == "" { + return nil, fmt.Errorf( + "One of vsphere_server or [deprecated] vcenter_server must be provided.") + } + config := Config{ User: d.Get("user").(string), Password: d.Get("password").(string), - VCenterServer: d.Get("vcenter_server").(string), + InsecureFlag: d.Get("allow_unverified_ssl").(bool), + VSphereServer: server, } return config.Client() diff --git a/builtin/providers/vsphere/provider_test.go b/builtin/providers/vsphere/provider_test.go index bb8e4dc55f..ee6995ed87 100644 --- a/builtin/providers/vsphere/provider_test.go +++ b/builtin/providers/vsphere/provider_test.go @@ -37,7 +37,7 @@ func testAccPreCheck(t *testing.T) { t.Fatal("VSPHERE_PASSWORD must be set for acceptance tests") } - if v := os.Getenv("VSPHERE_VCENTER"); v == "" { - t.Fatal("VSPHERE_VCENTER must be set for acceptance tests") + if v := os.Getenv("VSPHERE_SERVER"); v == "" { + t.Fatal("VSPHERE_SERVER must be set for acceptance tests") } } diff --git a/builtin/providers/vsphere/resource_vsphere_folder.go b/builtin/providers/vsphere/resource_vsphere_folder.go new file mode 100644 index 0000000000..82289f3cfb --- /dev/null +++ b/builtin/providers/vsphere/resource_vsphere_folder.go @@ -0,0 +1,231 @@ +package vsphere + +import ( + "fmt" + "log" + "path" + "strings" + + "github.com/hashicorp/terraform/helper/schema" + "github.com/vmware/govmomi" + "github.com/vmware/govmomi/find" + "github.com/vmware/govmomi/object" + "golang.org/x/net/context" +) + +type folder struct { + datacenter string + existingPath string + path string +} + +func resourceVSphereFolder() *schema.Resource { + return &schema.Resource{ + Create: resourceVSphereFolderCreate, + Read: resourceVSphereFolderRead, + Delete: resourceVSphereFolderDelete, + + Schema: map[string]*schema.Schema{ + "datacenter": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + + "path": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "existing_path": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + }, + } +} + +func resourceVSphereFolderCreate(d *schema.ResourceData, meta interface{}) error { + + client := meta.(*govmomi.Client) + + f := folder{ + path: strings.TrimRight(d.Get("path").(string), "/"), + } + + if v, ok := d.GetOk("datacenter"); ok { + f.datacenter = v.(string) + } + + createFolder(client, &f) + + d.Set("existing_path", f.existingPath) + d.SetId(fmt.Sprintf("%v/%v", f.datacenter, f.path)) + log.Printf("[INFO] Created folder: %s", f.path) + + return resourceVSphereFolderRead(d, meta) +} + +func createFolder(client *govmomi.Client, f *folder) error { + + finder := find.NewFinder(client.Client, true) + + dc, err := finder.Datacenter(context.TODO(), f.datacenter) + if err != nil { + return fmt.Errorf("error %s", err) + } + finder = finder.SetDatacenter(dc) + si := object.NewSearchIndex(client.Client) + + dcFolders, err := dc.Folders(context.TODO()) + if err != nil { + return fmt.Errorf("error %s", err) + } + + folder := dcFolders.VmFolder + var workingPath string + + pathParts := strings.Split(f.path, "/") + for _, pathPart := range pathParts { + if len(workingPath) > 0 { + workingPath += "/" + } + workingPath += pathPart + subfolder, err := si.FindByInventoryPath( + context.TODO(), fmt.Sprintf("%v/vm/%v", f.datacenter, workingPath)) + + if err != nil { + return fmt.Errorf("error %s", err) + } else if subfolder == nil { + 
log.Printf("[DEBUG] folder not found; creating: %s", workingPath) + folder, err = folder.CreateFolder(context.TODO(), pathPart) + if err != nil { + return fmt.Errorf("Failed to create folder at %s; %s", workingPath, err) + } + } else { + log.Printf("[DEBUG] folder already exists: %s", workingPath) + f.existingPath = workingPath + folder = subfolder.(*object.Folder) + } + } + return nil +} + +func resourceVSphereFolderRead(d *schema.ResourceData, meta interface{}) error { + + log.Printf("[DEBUG] reading folder: %#v", d) + client := meta.(*govmomi.Client) + + dc, err := getDatacenter(client, d.Get("datacenter").(string)) + if err != nil { + return err + } + + finder := find.NewFinder(client.Client, true) + finder = finder.SetDatacenter(dc) + + folder, err := object.NewSearchIndex(client.Client).FindByInventoryPath( + context.TODO(), fmt.Sprintf("%v/vm/%v", d.Get("datacenter").(string), + d.Get("path").(string))) + + if err != nil { + return err + } + + if folder == nil { + d.SetId("") + } + + return nil +} + +func resourceVSphereFolderDelete(d *schema.ResourceData, meta interface{}) error { + + f := folder{ + path: strings.TrimRight(d.Get("path").(string), "/"), + existingPath: d.Get("existing_path").(string), + } + + if v, ok := d.GetOk("datacenter"); ok { + f.datacenter = v.(string) + } + + client := meta.(*govmomi.Client) + + deleteFolder(client, &f) + + d.SetId("") + return nil +} + +func deleteFolder(client *govmomi.Client, f *folder) error { + dc, err := getDatacenter(client, f.datacenter) + if err != nil { + return err + } + var folder *object.Folder + currentPath := f.path + + finder := find.NewFinder(client.Client, true) + finder = finder.SetDatacenter(dc) + si := object.NewSearchIndex(client.Client) + + folderRef, err := si.FindByInventoryPath( + context.TODO(), fmt.Sprintf("%v/vm/%v", f.datacenter, f.path)) + + if err != nil { + return fmt.Errorf("[ERROR] Could not locate folder %s: %v", f.path, err) + } else { + folder = folderRef.(*object.Folder) + } + + log.Printf("[INFO] Deleting empty sub-folders of existing path: %s", f.existingPath) + for currentPath != f.existingPath { + log.Printf("[INFO] Deleting folder: %s", currentPath) + children, err := folder.Children(context.TODO()) + if err != nil { + return err + } + + if len(children) > 0 { + return fmt.Errorf("Folder %s is non-empty and will not be deleted", currentPath) + } else { + log.Printf("[DEBUG] current folder: %#v", folder) + currentPath = path.Dir(currentPath) + if currentPath == "." 
{ + currentPath = "" + } + log.Printf("[INFO] parent path of %s is calculated as %s", f.path, currentPath) + task, err := folder.Destroy(context.TODO()) + if err != nil { + return err + } + err = task.Wait(context.TODO()) + if err != nil { + return err + } + folderRef, err = si.FindByInventoryPath( + context.TODO(), fmt.Sprintf("%v/vm/%v", f.datacenter, currentPath)) + + if err != nil { + return err + } else if folderRef != nil { + folder = folderRef.(*object.Folder) + } + } + } + return nil +} + +// getDatacenter gets datacenter object +func getDatacenter(c *govmomi.Client, dc string) (*object.Datacenter, error) { + finder := find.NewFinder(c.Client, true) + if dc != "" { + d, err := finder.Datacenter(context.TODO(), dc) + return d, err + } else { + d, err := finder.DefaultDatacenter(context.TODO()) + return d, err + } +} diff --git a/builtin/providers/vsphere/resource_vsphere_folder_test.go b/builtin/providers/vsphere/resource_vsphere_folder_test.go new file mode 100644 index 0000000000..dfd81bbcce --- /dev/null +++ b/builtin/providers/vsphere/resource_vsphere_folder_test.go @@ -0,0 +1,276 @@ +package vsphere + +import ( + "fmt" + "os" + "testing" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" + "github.com/vmware/govmomi" + "github.com/vmware/govmomi/find" + "github.com/vmware/govmomi/object" + "golang.org/x/net/context" +) + +// Basic top-level folder creation +func TestAccVSphereFolder_basic(t *testing.T) { + var f folder + datacenter := os.Getenv("VSPHERE_DATACENTER") + testMethod := "basic" + resourceName := "vsphere_folder." + testMethod + path := "tf_test_basic" + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckVSphereFolderDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: fmt.Sprintf( + testAccCheckVSphereFolderConfig, + testMethod, + path, + datacenter, + ), + Check: resource.ComposeTestCheckFunc( + testAccCheckVSphereFolderExists(resourceName, &f), + resource.TestCheckResourceAttr( + resourceName, "path", path), + resource.TestCheckResourceAttr( + resourceName, "existing_path", ""), + ), + }, + }, + }) +} + +func TestAccVSphereFolder_nested(t *testing.T) { + + var f folder + datacenter := os.Getenv("VSPHERE_DATACENTER") + testMethod := "nested" + resourceName := "vsphere_folder." + testMethod + path := "tf_test_nested/tf_test_folder" + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckVSphereFolderDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: fmt.Sprintf( + testAccCheckVSphereFolderConfig, + testMethod, + path, + datacenter, + ), + Check: resource.ComposeTestCheckFunc( + testAccCheckVSphereFolderExists(resourceName, &f), + resource.TestCheckResourceAttr( + resourceName, "path", path), + resource.TestCheckResourceAttr( + resourceName, "existing_path", ""), + ), + }, + }, + }) +} + +func TestAccVSphereFolder_dontDeleteExisting(t *testing.T) { + + var f folder + datacenter := os.Getenv("VSPHERE_DATACENTER") + testMethod := "dontDeleteExisting" + resourceName := "vsphere_folder." 
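To make the two traversals concrete: createFolder probes the inventory top-down and creates whichever segment FindByInventoryPath misses, while deleteFolder destroys bottom-up until it reaches existing_path. The upward step is just path.Dir; a worked example with a hypothetical path:

```go
// create: for "a/b/c", the probes are "<dc>/vm/a", "<dc>/vm/a/b", "<dc>/vm/a/b/c".
// delete: walk back up one level at a time.
fmt.Println(path.Dir("a/b/c")) // "a/b"
fmt.Println(path.Dir("a/b"))   // "a"
fmt.Println(path.Dir("a"))     // ".", which the code above normalizes to ""
```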
+ testMethod + existingPath := "tf_test_dontDeleteExisting/tf_existing" + path := existingPath + "/tf_nested/tf_test" + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: resource.ComposeTestCheckFunc( + assertVSphereFolderExists(datacenter, existingPath), + removeVSphereFolder(datacenter, existingPath, ""), + ), + Steps: []resource.TestStep{ + resource.TestStep{ + PreConfig: func() { + createVSphereFolder(datacenter, existingPath) + }, + Config: fmt.Sprintf( + testAccCheckVSphereFolderConfig, + testMethod, + path, + datacenter, + ), + Check: resource.ComposeTestCheckFunc( + testAccCheckVSphereFolderExistingPathExists(resourceName, &f), + resource.TestCheckResourceAttr( + resourceName, "path", path), + resource.TestCheckResourceAttr( + resourceName, "existing_path", existingPath), + ), + }, + }, + }) +} + +func testAccCheckVSphereFolderDestroy(s *terraform.State) error { + client := testAccProvider.Meta().(*govmomi.Client) + finder := find.NewFinder(client.Client, true) + + for _, rs := range s.RootModule().Resources { + if rs.Type != "vsphere_folder" { + continue + } + + dc, err := finder.Datacenter(context.TODO(), rs.Primary.Attributes["datacenter"]) + if err != nil { + return fmt.Errorf("error %s", err) + } + + dcFolders, err := dc.Folders(context.TODO()) + if err != nil { + return fmt.Errorf("error %s", err) + } + + _, err = object.NewSearchIndex(client.Client).FindChild(context.TODO(), dcFolders.VmFolder, rs.Primary.Attributes["path"]) + if err == nil { + return fmt.Errorf("Record still exists") + } + } + + return nil +} + +func testAccCheckVSphereFolderExists(n string, f *folder) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Resource not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + client := testAccProvider.Meta().(*govmomi.Client) + finder := find.NewFinder(client.Client, true) + + dc, err := finder.Datacenter(context.TODO(), rs.Primary.Attributes["datacenter"]) + if err != nil { + return fmt.Errorf("error %s", err) + } + + dcFolders, err := dc.Folders(context.TODO()) + if err != nil { + return fmt.Errorf("error %s", err) + } + + _, err = object.NewSearchIndex(client.Client).FindChild(context.TODO(), dcFolders.VmFolder, rs.Primary.Attributes["path"]) + + *f = folder{ + path: rs.Primary.Attributes["path"], + } + + return nil + } +} + +func testAccCheckVSphereFolderExistingPathExists(n string, f *folder) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Resource %s not found in %#v", n, s.RootModule().Resources) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + client := testAccProvider.Meta().(*govmomi.Client) + finder := find.NewFinder(client.Client, true) + + dc, err := finder.Datacenter(context.TODO(), rs.Primary.Attributes["datacenter"]) + if err != nil { + return fmt.Errorf("error %s", err) + } + + dcFolders, err := dc.Folders(context.TODO()) + if err != nil { + return fmt.Errorf("error %s", err) + } + + _, err = object.NewSearchIndex(client.Client).FindChild(context.TODO(), dcFolders.VmFolder, rs.Primary.Attributes["existing_path"]) + + *f = folder{ + path: rs.Primary.Attributes["path"], + } + + return nil + } +} + +func assertVSphereFolderExists(datacenter string, folder_name string) resource.TestCheckFunc { + + return func(s *terraform.State) error 
{ + client := testAccProvider.Meta().(*govmomi.Client) + folder, err := object.NewSearchIndex(client.Client).FindByInventoryPath( + context.TODO(), fmt.Sprintf("%v/vm/%v", datacenter, folder_name)) + if err != nil { + return fmt.Errorf("Error: %s", err) + } else if folder == nil { + return fmt.Errorf("Folder %s does not exist!", folder_name) + } + + return nil + } +} + +func createVSphereFolder(datacenter string, folder_name string) error { + + client := testAccProvider.Meta().(*govmomi.Client) + + f := folder{path: folder_name, datacenter: datacenter} + + folder, err := object.NewSearchIndex(client.Client).FindByInventoryPath( + context.TODO(), fmt.Sprintf("%v/vm/%v", datacenter, folder_name)) + if err != nil { + return fmt.Errorf("error %s", err) + } + + if folder == nil { + createFolder(client, &f) + } else { + return fmt.Errorf("Folder %s already exists", folder_name) + } + + return nil +} + +func removeVSphereFolder(datacenter string, folder_name string, existing_path string) resource.TestCheckFunc { + + f := folder{path: folder_name, datacenter: datacenter, existingPath: existing_path} + + return func(s *terraform.State) error { + + client := testAccProvider.Meta().(*govmomi.Client) + // finder := find.NewFinder(client.Client, true) + + folder, _ := object.NewSearchIndex(client.Client).FindByInventoryPath( + context.TODO(), fmt.Sprintf("%v/vm/%v", datacenter, folder_name)) + if folder != nil { + deleteFolder(client, &f) + } + + return nil + } +} + +const testAccCheckVSphereFolderConfig = ` +resource "vsphere_folder" "%s" { + path = "%s" + datacenter = "%s" +} +` diff --git a/builtin/providers/vsphere/resource_vsphere_virtual_machine.go b/builtin/providers/vsphere/resource_vsphere_virtual_machine.go index 98a5234883..c89c0b8ddd 100644 --- a/builtin/providers/vsphere/resource_vsphere_virtual_machine.go +++ b/builtin/providers/vsphere/resource_vsphere_virtual_machine.go @@ -28,11 +28,13 @@ var DefaultDNSServers = []string{ } type networkInterface struct { - deviceName string - label string - ipAddress string - subnetMask string - adapterType string // TODO: Make "adapter_type" argument + deviceName string + label string + ipv4Address string + ipv4PrefixLength int + ipv6Address string + ipv6PrefixLength int + adapterType string // TODO: Make "adapter_type" argument } type hardDisk struct { @@ -41,21 +43,35 @@ type hardDisk struct { } type virtualMachine struct { - name string - datacenter string - cluster string - resourcePool string - datastore string - vcpu int - memoryMb int64 - template string - networkInterfaces []networkInterface - hardDisks []hardDisk - gateway string - domain string - timeZone string - dnsSuffixes []string - dnsServers []string + name string + folder string + datacenter string + cluster string + resourcePool string + datastore string + vcpu int + memoryMb int64 + template string + networkInterfaces []networkInterface + hardDisks []hardDisk + gateway string + domain string + timeZone string + dnsSuffixes []string + dnsServers []string + customConfigurations map[string](types.AnyType) +} + +func (v virtualMachine) Path() string { + return vmPath(v.folder, v.name) +} + +func vmPath(folder string, name string) string { + var path string + if len(folder) > 0 { + path += folder + "/" + } + return path + name } func resourceVSphereVirtualMachine() *schema.Resource { @@ -71,6 +87,12 @@ func resourceVSphereVirtualMachine() *schema.Resource { ForceNew: true, }, + "folder": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + "vcpu": 
&schema.Schema{ Type: schema.TypeInt, Required: true, @@ -135,6 +157,12 @@ func resourceVSphereVirtualMachine() *schema.Resource { ForceNew: true, }, + "custom_configuration_parameters": &schema.Schema{ + Type: schema.TypeMap, + Optional: true, + ForceNew: true, + }, + "network_interface": &schema.Schema{ Type: schema.TypeList, Required: true, @@ -148,15 +176,40 @@ }, "ip_address": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + Deprecated: "Please use ipv4_address", + }, + + "subnet_mask": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + Deprecated: "Please use ipv4_prefix_length", + }, + + "ipv4_address": &schema.Schema{ Type: schema.TypeString, Optional: true, Computed: true, + }, + + "ipv4_prefix_length": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + Computed: true, + }, + + // TODO: Make the ipv6 parameters optional + "ipv6_address": &schema.Schema{ + Type: schema.TypeString, + Computed: true, ForceNew: true, }, - "subnet_mask": &schema.Schema{ - Type: schema.TypeString, - Optional: true, + "ipv6_prefix_length": &schema.Schema{ + Type: schema.TypeInt, Computed: true, ForceNew: true, }, @@ -221,6 +274,10 @@ func resourceVSphereVirtualMachineCreate(d *schema.ResourceData, meta interface{ memoryMb: int64(d.Get("memory").(int)), } + if v, ok := d.GetOk("folder"); ok { + vm.folder = v.(string) + } + if v, ok := d.GetOk("datacenter"); ok { vm.datacenter = v.(string) } @@ -261,16 +318,40 @@ func resourceVSphereVirtualMachineCreate(d *schema.ResourceData, meta interface{ vm.dnsServers = DefaultDNSServers } + if vL, ok := d.GetOk("custom_configuration_parameters"); ok { + if custom_configs, ok := vL.(map[string]interface{}); ok { + custom := make(map[string]types.AnyType) + for k, v := range custom_configs { + custom[k] = v + } + vm.customConfigurations = custom + log.Printf("[DEBUG] custom_configuration_parameters init: %v", vm.customConfigurations) + } + } + if vL, ok := d.GetOk("network_interface"); ok { networks := make([]networkInterface, len(vL.([]interface{}))) for i, v := range vL.([]interface{}) { network := v.(map[string]interface{}) networks[i].label = network["label"].(string) if v, ok := network["ip_address"].(string); ok && v != "" { - networks[i].ipAddress = v + networks[i].ipv4Address = v } if v, ok := network["subnet_mask"].(string); ok && v != "" { - networks[i].subnetMask = v + ip := net.ParseIP(v).To4() + if ip != nil { + mask := net.IPv4Mask(ip[0], ip[1], ip[2], ip[3]) + pl, _ := mask.Size() + networks[i].ipv4PrefixLength = pl + } else { + return fmt.Errorf("subnet_mask parameter is invalid") + } + } + if v, ok := network["ipv4_address"].(string); ok && v != "" { + networks[i].ipv4Address = v + } + if v, ok := network["ipv4_prefix_length"].(int); ok && v != 0 { + networks[i].ipv4PrefixLength = v } } vm.networkInterfaces = networks @@ -321,12 +402,12 @@ func resourceVSphereVirtualMachineCreate(d *schema.ResourceData, meta interface{ } } - if _, ok := d.GetOk("network_interface.0.ip_address"); !ok { + if _, ok := d.GetOk("network_interface.0.ipv4_address"); !ok { if v, ok := d.GetOk("boot_delay"); ok { stateConf := &resource.StateChangeConf{ Pending: []string{"pending"}, - Target: "active", - Refresh: waitForNetworkingActive(client, vm.datacenter, vm.name), + Target: []string{"active"}, + Refresh: waitForNetworkingActive(client, vm.datacenter, vm.Path()), Timeout: 600 * time.Second, Delay: time.Duration(v.(int)) * time.Second, 
MinTimeout: 2 * time.Second, @@ -338,13 +419,15 @@ func resourceVSphereVirtualMachineCreate(d *schema.ResourceData, meta interface{ } } } - d.SetId(vm.name) + d.SetId(vm.Path()) log.Printf("[INFO] Created virtual machine: %s", d.Id()) return resourceVSphereVirtualMachineRead(d, meta) } func resourceVSphereVirtualMachineRead(d *schema.ResourceData, meta interface{}) error { + + log.Printf("[DEBUG] reading virtual machine: %#v", d) client := meta.(*govmomi.Client) dc, err := getDatacenter(client, d.Get("datacenter").(string)) if err != nil { @@ -353,9 +436,8 @@ func resourceVSphereVirtualMachineRead(d *schema.ResourceData, meta interface{}) finder := find.NewFinder(client.Client, true) finder = finder.SetDatacenter(dc) - vm, err := finder.VirtualMachine(context.TODO(), d.Get("name").(string)) + vm, err := finder.VirtualMachine(context.TODO(), d.Id()) if err != nil { - log.Printf("[ERROR] Virtual machine not found: %s", d.Get("name").(string)) d.SetId("") return nil } @@ -377,15 +459,22 @@ func resourceVSphereVirtualMachineRead(d *schema.ResourceData, meta interface{}) log.Printf("[DEBUG] %#v", v.Network) networkInterface := make(map[string]interface{}) networkInterface["label"] = v.Network - if len(v.IpAddress) > 0 { - log.Printf("[DEBUG] %#v", v.IpAddress[0]) - networkInterface["ip_address"] = v.IpAddress[0] - - m := net.CIDRMask(v.IpConfig.IpAddress[0].PrefixLength, 32) - subnetMask := net.IPv4(m[0], m[1], m[2], m[3]) - networkInterface["subnet_mask"] = subnetMask.String() - log.Printf("[DEBUG] %#v", subnetMask.String()) + for _, ip := range v.IpConfig.IpAddress { + p := net.ParseIP(ip.IpAddress) + if p.To4() != nil { + log.Printf("[DEBUG] %#v", p.String()) + log.Printf("[DEBUG] %#v", ip.PrefixLength) + networkInterface["ipv4_address"] = p.String() + networkInterface["ipv4_prefix_length"] = ip.PrefixLength + } else if p.To16() != nil { + log.Printf("[DEBUG] %#v", p.String()) + log.Printf("[DEBUG] %#v", ip.PrefixLength) + networkInterface["ipv6_address"] = p.String() + networkInterface["ipv6_prefix_length"] = ip.PrefixLength + } + log.Printf("[DEBUG] networkInterface: %#v", networkInterface) } + log.Printf("[DEBUG] networkInterface: %#v", networkInterface) networkInterfaces = append(networkInterfaces, networkInterface) } } @@ -420,14 +509,6 @@ func resourceVSphereVirtualMachineRead(d *schema.ResourceData, meta interface{}) d.Set("cpu", mvm.Summary.Config.NumCpu) d.Set("datastore", rootDatastore) - // Initialize the connection info - if len(networkInterfaces) > 0 { - d.SetConnInfo(map[string]string{ - "type": "ssh", - "host": networkInterfaces[0]["ip_address"].(string), - }) - } - return nil } @@ -440,7 +521,7 @@ func resourceVSphereVirtualMachineDelete(d *schema.ResourceData, meta interface{ finder := find.NewFinder(client.Client, true) finder = finder.SetDatacenter(dc) - vm, err := finder.VirtualMachine(context.TODO(), d.Get("name").(string)) + vm, err := finder.VirtualMachine(context.TODO(), vmPath(d.Get("folder").(string), d.Get("name").(string))) if err != nil { return err } @@ -504,18 +585,6 @@ func waitForNetworkingActive(client *govmomi.Client, datacenter, name string) re } } -// getDatacenter gets datacenter object -func getDatacenter(c *govmomi.Client, dc string) (*object.Datacenter, error) { - finder := find.NewFinder(c.Client, true) - if dc != "" { - d, err := finder.Datacenter(context.TODO(), dc) - return d, err - } else { - d, err := finder.DefaultDatacenter(context.TODO()) - return d, err - } -} - // addHardDisk adds a new Hard Disk to the VirtualMachine. 
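// (Assumed semantics, inferred from the call sites in this diff rather than from the elided function body: size and iops are taken directly from the hardDisk struct, and the deploy path below passes "eager_zeroed" as diskType, so the helper is expected to attach one additional virtual disk with the requested capacity, IOPS budget and provisioning policy.)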
func addHardDisk(vm *object.VirtualMachine, size, iops int64, diskType string) error { devices, err := vm.Device(context.TODO()) @@ -745,9 +814,10 @@ func findDatastore(c *govmomi.Client, sps types.StoragePlacementSpec) (*object.D return datastore, nil } -// createVirtualMchine creates a new VirtualMachine. +// createVirtualMachine creates a new VirtualMachine. func (vm *virtualMachine) createVirtualMachine(c *govmomi.Client) error { dc, err := getDatacenter(c, vm.datacenter) + if err != nil { return err } @@ -780,6 +850,21 @@ func (vm *virtualMachine) createVirtualMachine(c *govmomi.Client) error { return err } + log.Printf("[DEBUG] folder: %#v", vm.folder) + folder := dcFolders.VmFolder + if len(vm.folder) > 0 { + si := object.NewSearchIndex(c.Client) + folderRef, err := si.FindByInventoryPath( + context.TODO(), fmt.Sprintf("%v/vm/%v", vm.datacenter, vm.folder)) + if err != nil { + return fmt.Errorf("Error reading folder %s: %s", vm.folder, err) + } else if folderRef == nil { + return fmt.Errorf("Cannot find folder %s", vm.folder) + } else { + folder = folderRef.(*object.Folder) + } + } + // network networkDevices := []types.BaseVirtualDeviceConfigSpec{} for _, network := range vm.networkInterfaces { @@ -802,6 +887,24 @@ func (vm *virtualMachine) createVirtualMachine(c *govmomi.Client) error { } log.Printf("[DEBUG] virtual machine config spec: %v", configSpec) + // make ExtraConfig + log.Printf("[DEBUG] virtual machine Extra Config spec start") + if len(vm.customConfigurations) > 0 { + var ov []types.BaseOptionValue + for k, v := range vm.customConfigurations { + key := k + value := v + o := types.OptionValue{ + Key: key, + Value: &value, + } + log.Printf("[DEBUG] virtual machine Extra Config spec: %s,%s", k, v) + ov = append(ov, &o) + } + configSpec.ExtraConfig = ov + log.Printf("[DEBUG] virtual machine Extra Config spec: %v", configSpec.ExtraConfig) + } + var datastore *object.Datastore if vm.datastore == "" { datastore, err = finder.DefaultDatastore(context.TODO()) @@ -850,7 +953,7 @@ func (vm *virtualMachine) createVirtualMachine(c *govmomi.Client) error { }) configSpec.Files = &types.VirtualMachineFileInfo{VmPathName: fmt.Sprintf("[%s]", mds.Name)} - task, err := dcFolders.VmFolder.CreateVM(context.TODO(), configSpec, resourcePool, nil) + task, err := folder.CreateVM(context.TODO(), configSpec, resourcePool, nil) if err != nil { log.Printf("[ERROR] %s", err) } @@ -860,7 +963,7 @@ func (vm *virtualMachine) createVirtualMachine(c *govmomi.Client) error { log.Printf("[ERROR] %s", err) } - newVM, err := finder.VirtualMachine(context.TODO(), vm.name) + newVM, err := finder.VirtualMachine(context.TODO(), vm.Path()) if err != nil { return err } @@ -878,7 +981,7 @@ func (vm *virtualMachine) createVirtualMachine(c *govmomi.Client) error { return nil } -// deployVirtualMchine deploys a new VirtualMachine. +// deployVirtualMachine deploys a new VirtualMachine. 
func (vm *virtualMachine) deployVirtualMachine(c *govmomi.Client) error { dc, err := getDatacenter(c, vm.datacenter) if err != nil { @@ -919,6 +1022,21 @@ func (vm *virtualMachine) deployVirtualMachine(c *govmomi.Client) error { return err } + log.Printf("[DEBUG] folder: %#v", vm.folder) + folder := dcFolders.VmFolder + if len(vm.folder) > 0 { + si := object.NewSearchIndex(c.Client) + folderRef, err := si.FindByInventoryPath( + context.TODO(), fmt.Sprintf("%v/vm/%v", vm.datacenter, vm.folder)) + if err != nil { + return fmt.Errorf("Error reading folder %s: %s", vm.folder, err) + } else if folderRef == nil { + return fmt.Errorf("Cannot find folder %s", vm.folder) + } else { + folder = folderRef.(*object.Folder) + } + } + var datastore *object.Datastore if vm.datastore == "" { datastore, err = finder.DefaultDatastore(context.TODO()) @@ -967,23 +1085,31 @@ func (vm *virtualMachine) deployVirtualMachine(c *govmomi.Client) error { } networkDevices = append(networkDevices, nd) + // TODO: IPv6 support var ipSetting types.CustomizationIPSettings - if network.ipAddress == "" { + if network.ipv4Address == "" { ipSetting = types.CustomizationIPSettings{ Ip: &types.CustomizationDhcpIpGenerator{}, } } else { + if network.ipv4PrefixLength == 0 { + return fmt.Errorf("Error: ipv4_prefix_length argument is empty.") + } + m := net.CIDRMask(network.ipv4PrefixLength, 32) + sm := net.IPv4(m[0], m[1], m[2], m[3]) + subnetMask := sm.String() log.Printf("[DEBUG] gateway: %v", vm.gateway) - log.Printf("[DEBUG] ip address: %v", network.ipAddress) - log.Printf("[DEBUG] subnet mask: %v", network.subnetMask) + log.Printf("[DEBUG] ipv4 address: %v", network.ipv4Address) + log.Printf("[DEBUG] ipv4 prefix length: %v", network.ipv4PrefixLength) + log.Printf("[DEBUG] ipv4 subnet mask: %v", subnetMask) ipSetting = types.CustomizationIPSettings{ Gateway: []string{ vm.gateway, }, Ip: &types.CustomizationFixedIp{ - IpAddress: network.ipAddress, + IpAddress: network.ipv4Address, }, - SubnetMask: network.subnetMask, + SubnetMask: subnetMask, } } @@ -1003,7 +1129,25 @@ func (vm *virtualMachine) deployVirtualMachine(c *govmomi.Client) error { } log.Printf("[DEBUG] virtual machine config spec: %v", configSpec) - // build CustomizationSpec + log.Printf("[DEBUG] starting extra custom config spec: %v", vm.customConfigurations) + + // make ExtraConfig + if len(vm.customConfigurations) > 0 { + var ov []types.BaseOptionValue + for k, v := range vm.customConfigurations { + key := k + value := v + o := types.OptionValue{ + Key: key, + Value: &value, + } + ov = append(ov, &o) + } + configSpec.ExtraConfig = ov + log.Printf("[DEBUG] virtual machine Extra Config spec: %v", configSpec.ExtraConfig) + } + + // create CustomizationSpec customSpec := types.CustomizationSpec{ Identity: &types.CustomizationLinuxPrep{ HostName: &types.CustomizationFixedName{ @@ -1030,7 +1174,7 @@ func (vm *virtualMachine) deployVirtualMachine(c *govmomi.Client) error { } log.Printf("[DEBUG] clone spec: %v", cloneSpec) - task, err := template.Clone(context.TODO(), dcFolders.VmFolder, vm.name, cloneSpec) + task, err := template.Clone(context.TODO(), folder, vm.name, cloneSpec) if err != nil { return err } @@ -1040,7 +1184,7 @@ func (vm *virtualMachine) deployVirtualMachine(c *govmomi.Client) error { return err } - newVM, err := finder.VirtualMachine(context.TODO(), vm.name) + newVM, err := finder.VirtualMachine(context.TODO(), vm.Path()) if err != nil { return err } @@ -1081,6 +1225,14 @@ func (vm *virtualMachine) deployVirtualMachine(c *govmomi.Client) error { } 
log.Printf("[DEBUG]VM customization finished") + for i := 1; i < len(vm.hardDisks); i++ { + err = addHardDisk(newVM, vm.hardDisks[i].size, vm.hardDisks[i].iops, "eager_zeroed") + if err != nil { + return err + } + } + log.Printf("[DEBUG] virtual machine config spec: %v", configSpec) + newVM.PowerOn(context.TODO()) ip, err := newVM.WaitForIP(context.TODO()) @@ -1089,11 +1241,5 @@ func (vm *virtualMachine) deployVirtualMachine(c *govmomi.Client) error { } log.Printf("[DEBUG] ip address: %v", ip) - for i := 1; i < len(vm.hardDisks); i++ { - err = addHardDisk(newVM, vm.hardDisks[i].size, vm.hardDisks[i].iops, "eager_zeroed") - if err != nil { - return err - } - } return nil } diff --git a/builtin/providers/vsphere/resource_vsphere_virtual_machine_test.go b/builtin/providers/vsphere/resource_vsphere_virtual_machine_test.go index 66d6ea44f8..97973efb52 100644 --- a/builtin/providers/vsphere/resource_vsphere_virtual_machine_test.go +++ b/builtin/providers/vsphere/resource_vsphere_virtual_machine_test.go @@ -10,6 +10,9 @@ import ( "github.com/vmware/govmomi" "github.com/vmware/govmomi/find" "github.com/vmware/govmomi/object" + "github.com/vmware/govmomi/property" + "github.com/vmware/govmomi/vim25/mo" + "github.com/vmware/govmomi/vim25/types" "golang.org/x/net/context" ) @@ -127,6 +130,201 @@ func TestAccVSphereVirtualMachine_dhcp(t *testing.T) { }) } +func TestAccVSphereVirtualMachine_custom_configs(t *testing.T) { + var vm virtualMachine + var locationOpt string + var datastoreOpt string + + if v := os.Getenv("VSPHERE_DATACENTER"); v != "" { + locationOpt += fmt.Sprintf(" datacenter = \"%s\"\n", v) + } + if v := os.Getenv("VSPHERE_CLUSTER"); v != "" { + locationOpt += fmt.Sprintf(" cluster = \"%s\"\n", v) + } + if v := os.Getenv("VSPHERE_RESOURCE_POOL"); v != "" { + locationOpt += fmt.Sprintf(" resource_pool = \"%s\"\n", v) + } + if v := os.Getenv("VSPHERE_DATASTORE"); v != "" { + datastoreOpt = fmt.Sprintf(" datastore = \"%s\"\n", v) + } + template := os.Getenv("VSPHERE_TEMPLATE") + label := os.Getenv("VSPHERE_NETWORK_LABEL_DHCP") + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckVSphereVirtualMachineDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: fmt.Sprintf( + testAccCheckVSphereVirtualMachineConfig_custom_configs, + locationOpt, + label, + datastoreOpt, + template, + ), + Check: resource.ComposeTestCheckFunc( + testAccCheckVSphereVirtualMachineExistsHasCustomConfig("vsphere_virtual_machine.car", &vm), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.car", "name", "terraform-test-custom"), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.car", "vcpu", "2"), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.car", "memory", "4096"), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.car", "disk.#", "1"), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.car", "disk.0.template", template), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.car", "network_interface.#", "1"), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.car", "custom_configuration_parameters.foo", "bar"), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.car", "custom_configuration_parameters.car", "ferrari"), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.car", "custom_configuration_parameters.num", "42"), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.car", "network_interface.0.label", label), + ), 
+ }, + }, + }) +} + +func TestAccVSphereVirtualMachine_createInExistingFolder(t *testing.T) { + var vm virtualMachine + var locationOpt string + var datastoreOpt string + var datacenter string + + folder := "tf_test_createInExistingFolder" + + if v := os.Getenv("VSPHERE_DATACENTER"); v != "" { + locationOpt += fmt.Sprintf(" datacenter = \"%s\"\n", v) + datacenter = v + } + if v := os.Getenv("VSPHERE_CLUSTER"); v != "" { + locationOpt += fmt.Sprintf(" cluster = \"%s\"\n", v) + } + if v := os.Getenv("VSPHERE_RESOURCE_POOL"); v != "" { + locationOpt += fmt.Sprintf(" resource_pool = \"%s\"\n", v) + } + if v := os.Getenv("VSPHERE_DATASTORE"); v != "" { + datastoreOpt = fmt.Sprintf(" datastore = \"%s\"\n", v) + } + template := os.Getenv("VSPHERE_TEMPLATE") + label := os.Getenv("VSPHERE_NETWORK_LABEL_DHCP") + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: resource.ComposeTestCheckFunc( + testAccCheckVSphereVirtualMachineDestroy, + removeVSphereFolder(datacenter, folder, ""), + ), + Steps: []resource.TestStep{ + resource.TestStep{ + PreConfig: func() { createVSphereFolder(datacenter, folder) }, + Config: fmt.Sprintf( + testAccCheckVSphereVirtualMachineConfig_createInFolder, + folder, + locationOpt, + label, + datastoreOpt, + template, + ), + Check: resource.ComposeTestCheckFunc( + testAccCheckVSphereVirtualMachineExists("vsphere_virtual_machine.folder", &vm), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.folder", "name", "terraform-test-folder"), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.folder", "folder", folder), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.folder", "vcpu", "2"), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.folder", "memory", "4096"), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.folder", "disk.#", "1"), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.folder", "disk.0.template", template), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.folder", "network_interface.#", "1"), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.folder", "network_interface.0.label", label), + ), + }, + }, + }) +} + +func TestAccVSphereVirtualMachine_createWithFolder(t *testing.T) { + var vm virtualMachine + var f folder + var locationOpt string + var folderLocationOpt string + var datastoreOpt string + + folder := "tf_test_createWithFolder" + + if v := os.Getenv("VSPHERE_DATACENTER"); v != "" { + folderLocationOpt = fmt.Sprintf(" datacenter = \"%s\"\n", v) + locationOpt += folderLocationOpt + } + if v := os.Getenv("VSPHERE_CLUSTER"); v != "" { + locationOpt += fmt.Sprintf(" cluster = \"%s\"\n", v) + } + if v := os.Getenv("VSPHERE_RESOURCE_POOL"); v != "" { + locationOpt += fmt.Sprintf(" resource_pool = \"%s\"\n", v) + } + if v := os.Getenv("VSPHERE_DATASTORE"); v != "" { + datastoreOpt = fmt.Sprintf(" datastore = \"%s\"\n", v) + } + template := os.Getenv("VSPHERE_TEMPLATE") + label := os.Getenv("VSPHERE_NETWORK_LABEL_DHCP") + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: resource.ComposeTestCheckFunc( + testAccCheckVSphereVirtualMachineDestroy, + testAccCheckVSphereFolderDestroy, + ), + Steps: []resource.TestStep{ + resource.TestStep{ + Config: fmt.Sprintf( + testAccCheckVSphereVirtualMachineConfig_createWithFolder, + folder, + folderLocationOpt, + locationOpt, + label, + datastoreOpt, + template, + ), + Check: 
resource.ComposeTestCheckFunc( + testAccCheckVSphereVirtualMachineExists("vsphere_virtual_machine.with_folder", &vm), + testAccCheckVSphereFolderExists("vsphere_folder.with_folder", &f), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.with_folder", "name", "terraform-test-with-folder"), + // resource.TestCheckResourceAttr( + // "vsphere_virtual_machine.with_folder", "folder", folder), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.with_folder", "vcpu", "2"), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.with_folder", "memory", "4096"), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.with_folder", "disk.#", "1"), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.with_folder", "disk.0.template", template), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.with_folder", "network_interface.#", "1"), + resource.TestCheckResourceAttr( + "vsphere_virtual_machine.with_folder", "network_interface.0.label", label), + ), + }, + }, + }) +} + func testAccCheckVSphereVirtualMachineDestroy(s *terraform.State) error { client := testAccProvider.Meta().(*govmomi.Client) finder := find.NewFinder(client.Client, true) @@ -146,7 +344,20 @@ func testAccCheckVSphereVirtualMachineDestroy(s *terraform.State) error { return fmt.Errorf("error %s", err) } - _, err = object.NewSearchIndex(client.Client).FindChild(context.TODO(), dcFolders.VmFolder, rs.Primary.Attributes["name"]) + folder := dcFolders.VmFolder + if len(rs.Primary.Attributes["folder"]) > 0 { + si := object.NewSearchIndex(client.Client) + folderRef, err := si.FindByInventoryPath( + context.TODO(), fmt.Sprintf("%v/vm/%v", rs.Primary.Attributes["datacenter"], rs.Primary.Attributes["folder"])) + if err != nil { + return err + } else if folderRef != nil { + folder = folderRef.(*object.Folder) + } + } + + _, err = object.NewSearchIndex(client.Client).FindChild(context.TODO(), folder, rs.Primary.Attributes["name"]) + if err == nil { return fmt.Errorf("Record still exists") } @@ -155,6 +366,92 @@ func testAccCheckVSphereVirtualMachineDestroy(s *terraform.State) error { return nil } +func testAccCheckVSphereVirtualMachineExistsHasCustomConfig(n string, vm *virtualMachine) resource.TestCheckFunc { + return func(s *terraform.State) error { + + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + client := testAccProvider.Meta().(*govmomi.Client) + finder := find.NewFinder(client.Client, true) + + dc, err := finder.Datacenter(context.TODO(), rs.Primary.Attributes["datacenter"]) + if err != nil { + return fmt.Errorf("error %s", err) + } + + dcFolders, err := dc.Folders(context.TODO()) + if err != nil { + return fmt.Errorf("error %s", err) + } + + _, err = object.NewSearchIndex(client.Client).FindChild(context.TODO(), dcFolders.VmFolder, rs.Primary.Attributes["name"]) + if err != nil { + return fmt.Errorf("error %s", err) + } + + finder = finder.SetDatacenter(dc) + instance, err := finder.VirtualMachine(context.TODO(), rs.Primary.Attributes["name"]) + if err != nil { + return fmt.Errorf("error %s", err) + } + + var mvm mo.VirtualMachine + + collector := property.DefaultCollector(client.Client) + + if err := collector.RetrieveOne(context.TODO(), instance.Reference(), []string{"config.extraConfig"}, &mvm); err != nil { + return fmt.Errorf("error %s", err) + } + + var configMap = make(map[string]types.AnyType) + if mvm.Config != nil && mvm.Config.ExtraConfig != nil && 
len(mvm.Config.ExtraConfig) > 0 { + for _, v := range mvm.Config.ExtraConfig { + value := v.GetOptionValue() + configMap[value.Key] = value.Value + } + } else { + return fmt.Errorf("error no ExtraConfig") + } + + if configMap["foo"] == nil { + return fmt.Errorf("error no ExtraConfig for 'foo'") + } + + if configMap["foo"] != "bar" { + return fmt.Errorf("error ExtraConfig 'foo' != bar") + } + + if configMap["car"] == nil { + return fmt.Errorf("error no ExtraConfig for 'car'") + } + + if configMap["car"] != "ferrari" { + return fmt.Errorf("error ExtraConfig 'car' != ferrari") + } + + if configMap["num"] == nil { + return fmt.Errorf("error no ExtraConfig for 'num'") + } + + // todo this should be an int, getting back a string + if configMap["num"] != "42" { + return fmt.Errorf("error ExtraConfig 'num' != 42") + } + *vm = virtualMachine{ + name: rs.Primary.ID, + } + + return nil + } +} + func testAccCheckVSphereVirtualMachineExists(n string, vm *virtualMachine) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] @@ -179,7 +476,19 @@ func testAccCheckVSphereVirtualMachineExists(n string, vm *virtualMachine) resou return fmt.Errorf("error %s", err) } - _, err = object.NewSearchIndex(client.Client).FindChild(context.TODO(), dcFolders.VmFolder, rs.Primary.Attributes["name"]) + folder := dcFolders.VmFolder + if len(rs.Primary.Attributes["folder"]) > 0 { + si := object.NewSearchIndex(client.Client) + folderRef, err := si.FindByInventoryPath( + context.TODO(), fmt.Sprintf("%v/vm/%v", rs.Primary.Attributes["datacenter"], rs.Primary.Attributes["folder"])) + if err != nil { + return err + } else if folderRef != nil { + folder = folderRef.(*object.Folder) + } + } + + _, err = object.NewSearchIndex(client.Client).FindChild(context.TODO(), folder, rs.Primary.Attributes["name"]) *vm = virtualMachine{ name: rs.Primary.ID, @@ -198,8 +507,8 @@ resource "vsphere_virtual_machine" "foo" { gateway = "%s" network_interface { label = "%s" - ip_address = "%s" - subnet_mask = "255.255.255.0" + ipv4_address = "%s" + ipv4_prefix_length = 24 } disk { %s @@ -212,7 +521,6 @@ resource "vsphere_virtual_machine" "foo" { } } ` - const testAccCheckVSphereVirtualMachineConfig_dhcp = ` resource "vsphere_virtual_machine" "bar" { name = "terraform-test" @@ -228,3 +536,62 @@ resource "vsphere_virtual_machine" "bar" { } } ` + +const testAccCheckVSphereVirtualMachineConfig_custom_configs = ` +resource "vsphere_virtual_machine" "car" { + name = "terraform-test-custom" +%s + vcpu = 2 + memory = 4096 + network_interface { + label = "%s" + } + custom_configuration_parameters { + "foo" = "bar" + "car" = "ferrari" + "num" = 42 + } + disk { +%s + template = "%s" + } +} +` + +const testAccCheckVSphereVirtualMachineConfig_createInFolder = ` +resource "vsphere_virtual_machine" "folder" { + name = "terraform-test-folder" + folder = "%s" +%s + vcpu = 2 + memory = 4096 + network_interface { + label = "%s" + } + disk { +%s + template = "%s" + } +} +` + +const testAccCheckVSphereVirtualMachineConfig_createWithFolder = ` +resource "vsphere_folder" "with_folder" { + path = "%s" +%s +} +resource "vsphere_virtual_machine" "with_folder" { + name = "terraform-test-with-folder" + folder = "${vsphere_folder.with_folder.path}" +%s + vcpu = 2 + memory = 4096 + network_interface { + label = "%s" + } + disk { +%s + template = "%s" + } +} +` diff --git a/builtin/provisioners/chef/linux_provisioner.go b/builtin/provisioners/chef/linux_provisioner.go index b7a3b2813a..ebfe729795 100644 --- 
a/builtin/provisioners/chef/linux_provisioner.go +++ b/builtin/provisioners/chef/linux_provisioner.go @@ -10,6 +10,7 @@ import ( ) const ( + chmod = "find %s -maxdepth 1 -type f -exec /bin/chmod %d {} +" installURL = "https://www.chef.io/chef/install.sh" ) @@ -58,6 +59,9 @@ func (p *Provisioner) linuxCreateConfigFiles( if err := p.runCommand(o, comm, "chmod 777 "+linuxConfDir); err != nil { return err } + if err := p.runCommand(o, comm, fmt.Sprintf(chmod, linuxConfDir, 666)); err != nil { + return err + } } if err := p.deployConfigFiles(o, comm, linuxConfDir); err != nil { @@ -76,6 +80,9 @@ func (p *Provisioner) linuxCreateConfigFiles( if err := p.runCommand(o, comm, "chmod 777 "+hintsDir); err != nil { return err } + if err := p.runCommand(o, comm, fmt.Sprintf(chmod, hintsDir, 666)); err != nil { + return err + } } if err := p.deployOhaiHints(o, comm, hintsDir); err != nil { @@ -87,6 +94,9 @@ func (p *Provisioner) linuxCreateConfigFiles( if err := p.runCommand(o, comm, "chmod 755 "+hintsDir); err != nil { return err } + if err := p.runCommand(o, comm, fmt.Sprintf(chmod, hintsDir, 600)); err != nil { + return err + } if err := p.runCommand(o, comm, "chown -R root.root "+hintsDir); err != nil { return err } @@ -98,6 +108,9 @@ func (p *Provisioner) linuxCreateConfigFiles( if err := p.runCommand(o, comm, "chmod 755 "+linuxConfDir); err != nil { return err } + if err := p.runCommand(o, comm, fmt.Sprintf(chmod, linuxConfDir, 600)); err != nil { + return err + } if err := p.runCommand(o, comm, "chown -R root.root "+linuxConfDir); err != nil { return err } diff --git a/builtin/provisioners/chef/linux_provisioner_test.go b/builtin/provisioners/chef/linux_provisioner_test.go index 6c57bfef0c..89fae2b3f0 100644 --- a/builtin/provisioners/chef/linux_provisioner_test.go +++ b/builtin/provisioners/chef/linux_provisioner_test.go @@ -1,6 +1,7 @@ package chef import ( + "fmt" "path" "testing" @@ -163,14 +164,18 @@ func TestResourceProvider_linuxCreateConfigFiles(t *testing.T) { }), Commands: map[string]bool{ - "sudo mkdir -p " + linuxConfDir: true, - "sudo chmod 777 " + linuxConfDir: true, - "sudo mkdir -p " + path.Join(linuxConfDir, "ohai/hints"): true, - "sudo chmod 777 " + path.Join(linuxConfDir, "ohai/hints"): true, - "sudo chmod 755 " + path.Join(linuxConfDir, "ohai/hints"): true, - "sudo chown -R root.root " + path.Join(linuxConfDir, "ohai/hints"): true, - "sudo chmod 755 " + linuxConfDir: true, - "sudo chown -R root.root " + linuxConfDir: true, + "sudo mkdir -p " + linuxConfDir: true, + "sudo chmod 777 " + linuxConfDir: true, + "sudo " + fmt.Sprintf(chmod, linuxConfDir, 666): true, + "sudo mkdir -p " + path.Join(linuxConfDir, "ohai/hints"): true, + "sudo chmod 777 " + path.Join(linuxConfDir, "ohai/hints"): true, + "sudo " + fmt.Sprintf(chmod, path.Join(linuxConfDir, "ohai/hints"), 666): true, + "sudo chmod 755 " + path.Join(linuxConfDir, "ohai/hints"): true, + "sudo " + fmt.Sprintf(chmod, path.Join(linuxConfDir, "ohai/hints"), 600): true, + "sudo chown -R root.root " + path.Join(linuxConfDir, "ohai/hints"): true, + "sudo chmod 755 " + linuxConfDir: true, + "sudo " + fmt.Sprintf(chmod, linuxConfDir, 600): true, + "sudo chown -R root.root " + linuxConfDir: true, }, Uploads: map[string]string{ @@ -323,4 +328,6 @@ ENV['https_proxy'] = "https://proxy.local" ENV['HTTPS_PROXY'] = "https://proxy.local" -no_proxy "http://local.local,https://local.local"` + +no_proxy "http://local.local,https://local.local" +ENV['no_proxy'] = "http://local.local,https://local.local"` diff --git 
a/builtin/provisioners/chef/resource_provisioner.go b/builtin/provisioners/chef/resource_provisioner.go index 68ae6256a4..34c6a5f1f0 100644 --- a/builtin/provisioners/chef/resource_provisioner.go +++ b/builtin/provisioners/chef/resource_provisioner.go @@ -60,13 +60,23 @@ ENV['https_proxy'] = "{{ .HTTPSProxy }}" ENV['HTTPS_PROXY'] = "{{ .HTTPSProxy }}" {{ end }} -{{ if .NOProxy }}no_proxy "{{ join .NOProxy "," }}"{{ end }} +{{ if .NOProxy }} +no_proxy "{{ join .NOProxy "," }}" +ENV['no_proxy'] = "{{ join .NOProxy "," }}" +{{ end }} + {{ if .SSLVerifyMode }}ssl_verify_mode {{ .SSLVerifyMode }}{{ end }} + +{{ if .DisableReporting }}enable_reporting false{{ end }} + +{{ if .ClientOptions }}{{ join .ClientOptions "\n" }}{{ end }} ` // Provisioner represents a specifically configured chef provisioner type Provisioner struct { Attributes interface{} `mapstructure:"attributes"` + ClientOptions []string `mapstructure:"client_options"` + DisableReporting bool `mapstructure:"disable_reporting"` Environment string `mapstructure:"environment"` LogToFile bool `mapstructure:"log_to_file"` UsePolicyfile bool `mapstructure:"use_policyfile"` diff --git a/builtin/provisioners/chef/windows_provisioner_test.go b/builtin/provisioners/chef/windows_provisioner_test.go index 11e61d8883..13604d6c92 100644 --- a/builtin/provisioners/chef/windows_provisioner_test.go +++ b/builtin/provisioners/chef/windows_provisioner_test.go @@ -355,4 +355,6 @@ ENV['https_proxy'] = "https://proxy.local" ENV['HTTPS_PROXY'] = "https://proxy.local" -no_proxy "http://local.local,https://local.local"` + +no_proxy "http://local.local,https://local.local" +ENV['no_proxy'] = "http://local.local,https://local.local"` diff --git a/command/graph.go b/command/graph.go index 5719a8900e..6c7bfae044 100644 --- a/command/graph.go +++ b/command/graph.go @@ -99,7 +99,7 @@ Options: This helps when diagnosing cycle errors. -module-depth=n The maximum depth to expand modules. By default this is - zero, which will not expand modules at all. + -1, which will expand resources within all modules. -verbose Generate a verbose, "worst-case" graph, with all nodes for potential operations in place. diff --git a/command/init.go b/command/init.go index 1b92c0806c..198456dd20 100644 --- a/command/init.go +++ b/command/init.go @@ -4,6 +4,7 @@ import ( "flag" "fmt" "os" + "path/filepath" "strings" "github.com/hashicorp/go-getter" @@ -52,6 +53,11 @@ func (c *InitCommand) Run(args []string) int { } } + // Set the state out path to be the path requested for the module + // to be copied. This ensures any remote state gets set up in the + // proper directory. 
+ c.Meta.dataDir = filepath.Join(path, DefaultDataDirectory) + source := args[0] // Get our pwd since we need it diff --git a/command/init_test.go b/command/init_test.go index 304c040094..d12501707a 100644 --- a/command/init_test.go +++ b/command/init_test.go @@ -179,6 +179,42 @@ func TestInit_remoteState(t *testing.T) { } } +func TestInit_remoteStateSubdir(t *testing.T) { + tmp, cwd := testCwd(t) + defer testFixCwd(t, tmp, cwd) + subdir := filepath.Join(tmp, "subdir") + + s := terraform.NewState() + conf, srv := testRemoteState(t, s, 200) + defer srv.Close() + + ui := new(cli.MockUi) + c := &InitCommand{ + Meta: Meta{ + ContextOpts: testCtxConfig(testProvider()), + Ui: ui, + }, + } + + args := []string{ + "-backend", "http", + "-backend-config", "address=" + conf.Config["address"], + testFixturePath("init"), + subdir, + } + if code := c.Run(args); code != 0 { + t.Fatalf("bad: \n%s", ui.ErrorWriter.String()) + } + + if _, err := os.Stat(filepath.Join(subdir, "hello.tf")); err != nil { + t.Fatalf("err: %s", err) + } + + if _, err := os.Stat(filepath.Join(subdir, DefaultDataDir, DefaultStateFilename)); err != nil { + t.Fatalf("missing state: %s", err) + } +} + func TestInit_remoteStateWithLocal(t *testing.T) { tmp, cwd := testCwd(t) defer testFixCwd(t, tmp, cwd) diff --git a/command/meta.go b/command/meta.go index 3a12de02f7..bd4855964c 100644 --- a/command/meta.go +++ b/command/meta.go @@ -407,12 +407,17 @@ func (m *Meta) uiHook() *UiHook { } const ( - // The name of the environment variable that can be used to set module depth. + // ModuleDepthDefault is the default value for + // module depth, which can be overridden by flag + // or env var + ModuleDepthDefault = -1 + + // ModuleDepthEnvVar is the name of the environment variable that can be used to set module depth. ModuleDepthEnvVar = "TF_MODULE_DEPTH" ) func (m *Meta) addModuleDepthFlag(flags *flag.FlagSet, moduleDepth *int) { - flags.IntVar(moduleDepth, "module-depth", 0, "module-depth") + flags.IntVar(moduleDepth, "module-depth", ModuleDepthDefault, "module-depth") if envVar := os.Getenv(ModuleDepthEnvVar); envVar != "" { if md, err := strconv.Atoi(envVar); err == nil { *moduleDepth = md diff --git a/command/meta_test.go b/command/meta_test.go index b3022c64a1..781c664dc3 100644 --- a/command/meta_test.go +++ b/command/meta_test.go @@ -244,12 +244,12 @@ func TestMeta_addModuleDepthFlag(t *testing.T) { "invalid envvar is ignored": { EnvVar: "-#", Args: []string{}, - Expected: 0, + Expected: ModuleDepthDefault, }, "empty envvar is okay too": { EnvVar: "", Args: []string{}, - Expected: 0, + Expected: ModuleDepthDefault, }, } diff --git a/command/plan.go b/command/plan.go index 8c5fda5cc9..5582ddf4f4 100644 --- a/command/plan.go +++ b/command/plan.go @@ -102,15 +102,6 @@ func (c *PlanCommand) Run(args []string) int { return 1 } - if plan.Diff.Empty() { - c.Ui.Output( - "No changes. Infrastructure is up-to-date. This means that Terraform\n" + - "could not detect any differences between your configuration and\n" + - "the real physical resources that exist. As a result, Terraform\n" + - "doesn't need to do anything.") - return 0 - } - if outPath != "" { log.Printf("[INFO] Writing plan output to: %s", outPath) f, err := os.Create(outPath) @@ -124,6 +115,15 @@ func (c *PlanCommand) Run(args []string) int { } } + if plan.Diff.Empty() { + c.Ui.Output( + "No changes. Infrastructure is up-to-date. This means that Terraform\n" + + "could not detect any differences between your configuration and\n" + + "the real physical resources that exist. 
As a result, Terraform\n" + + "doesn't need to do anything.") + return 0 + } + if outPath == "" { c.Ui.Output(strings.TrimSpace(planHeaderNoOutput) + "\n") } else { @@ -181,7 +181,7 @@ Options: -module-depth=n Specifies the depth of modules to show in the output. This does not affect the plan itself, only the output - shown. By default, this is zero. -1 will expand all. + shown. By default, this is -1, which will expand all. -no-color If specified, output won't contain any color. diff --git a/command/plan_test.go b/command/plan_test.go index d0d14bc567..9b89018bf4 100644 --- a/command/plan_test.go +++ b/command/plan_test.go @@ -177,6 +177,55 @@ func TestPlan_outPath(t *testing.T) { } } +func TestPlan_outPathNoChange(t *testing.T) { + originalState := &terraform.State{ + Modules: []*terraform.ModuleState{ + &terraform.ModuleState{ + Path: []string{"root"}, + Resources: map[string]*terraform.ResourceState{ + "test_instance.foo": &terraform.ResourceState{ + Type: "test_instance", + Primary: &terraform.InstanceState{ + ID: "bar", + }, + }, + }, + }, + }, + } + statePath := testStateFile(t, originalState) + + tf, err := ioutil.TempFile("", "tf") + if err != nil { + t.Fatalf("err: %s", err) + } + outPath := tf.Name() + os.Remove(tf.Name()) + + p := testProvider() + ui := new(cli.MockUi) + c := &PlanCommand{ + Meta: Meta{ + ContextOpts: testCtxConfig(p), + Ui: ui, + }, + } + + args := []string{ + "-out", outPath, + "-state", statePath, + testFixturePath("plan"), + } + if code := c.Run(args); code != 0 { + t.Fatalf("bad: %d\n\n%s", code, ui.ErrorWriter.String()) + } + + plan := testReadPlan(t, outPath) + if !plan.Diff.Empty() { + t.Fatalf("Expected empty plan to be written to plan file, got: %s", plan) + } +} + func TestPlan_refresh(t *testing.T) { p := testProvider() ui := new(cli.MockUi) diff --git a/command/show.go b/command/show.go index 624ee659db..8a32c4a8d5 100644 --- a/command/show.go +++ b/command/show.go @@ -118,7 +118,7 @@ Usage: terraform show [options] [path] Options: -module-depth=n Specifies the depth of modules to show in the output. - By default this is zero. -1 will expand all. + By default this is -1, which will expand all. -no-color If specified, output won't contain any color. 
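A minimal standalone sketch of the control-flow change to command/plan.go above (stand-in types and names, not Terraform's own: plan, writePlan and run here are illustrative): moving the planfile write ahead of the empty-diff early return is what lets an empty plan still produce a plan file, which is what the new TestPlan_outPathNoChange asserts.

package main

import "fmt"

// plan stands in for terraform.Plan; empty mirrors plan.Diff.Empty().
type plan struct{ changes int }

func (p *plan) empty() bool { return p.changes == 0 }

// writePlan stands in for the os.Create + terraform.WritePlan sequence.
func writePlan(p *plan, path string) error {
	fmt.Printf("wrote plan (%d changes) to %s\n", p.changes, path)
	return nil
}

// run mirrors the reordered section of PlanCommand.Run: persist first,
// then short-circuit on an empty diff, so the planfile exists either way.
func run(p *plan, outPath string) int {
	if outPath != "" {
		if err := writePlan(p, outPath); err != nil {
			return 1
		}
	}
	if p.empty() {
		fmt.Println("No changes. Infrastructure is up-to-date.")
		return 0
	}
	fmt.Println("Plan contains changes.")
	return 0
}

func main() {
	run(&plan{}, "tfplan") // a planfile is written even though the plan is empty
}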
diff --git a/communicator/ssh/provisioner.go b/communicator/ssh/provisioner.go index f9f889037e..48eaafe388 100644 --- a/communicator/ssh/provisioner.go +++ b/communicator/ssh/provisioner.go @@ -11,6 +11,7 @@ import ( "github.com/hashicorp/terraform/helper/pathorcontents" "github.com/hashicorp/terraform/terraform" "github.com/mitchellh/mapstructure" + "github.com/xanzy/ssh-agent" "golang.org/x/crypto/ssh" "golang.org/x/crypto/ssh/agent" ) @@ -245,22 +246,17 @@ func connectToAgent(connInfo *connectionInfo) (*sshAgent, error) { return nil, nil } - sshAuthSock := os.Getenv("SSH_AUTH_SOCK") - - if sshAuthSock == "" { - return nil, fmt.Errorf("SSH Requested but SSH_AUTH_SOCK not-specified") - } - - conn, err := net.Dial("unix", sshAuthSock) + agent, conn, err := sshagent.New() if err != nil { - return nil, fmt.Errorf("Error connecting to SSH_AUTH_SOCK: %v", err) + return nil, err } // connection close is handled over in Communicator return &sshAgent{ - agent: agent.NewClient(conn), + agent: agent, conn: conn, }, nil + } // A tiny wrapper around an agent.Agent to expose the ability to close its @@ -271,6 +267,10 @@ type sshAgent struct { } func (a *sshAgent) Close() error { + if a.conn == nil { + return nil + } + return a.conn.Close() } diff --git a/communicator/winrm/communicator.go b/communicator/winrm/communicator.go index 27ba31a833..30998e0b14 100644 --- a/communicator/winrm/communicator.go +++ b/communicator/winrm/communicator.go @@ -7,6 +7,7 @@ import ( "math/rand" "strconv" "strings" + "sync" "time" "github.com/hashicorp/terraform/communicator/remote" @@ -148,10 +149,20 @@ func (c *Communicator) Start(rc *remote.Cmd) error { func runCommand(shell *winrm.Shell, cmd *winrm.Command, rc *remote.Cmd) { defer shell.Close() - go io.Copy(rc.Stdout, cmd.Stdout) - go io.Copy(rc.Stderr, cmd.Stderr) + var wg sync.WaitGroup + wg.Add(1) + go func() { + io.Copy(rc.Stdout, cmd.Stdout) + wg.Done() + }() + wg.Add(1) + go func() { + io.Copy(rc.Stderr, cmd.Stderr) + wg.Done() + }() cmd.Wait() + wg.Wait() rc.SetExited(cmd.ExitCode()) } diff --git a/config/config.go b/config/config.go index d31777f6e8..d0177c9550 100644 --- a/config/config.go +++ b/config/config.go @@ -500,16 +500,23 @@ func (c *Config) Validate() error { // Check that all outputs are valid for _, o := range c.Outputs { - invalid := false - for k, _ := range o.RawConfig.Raw { - if k != "value" { - invalid = true - break + var invalidKeys []string + valueKeyFound := false + for k := range o.RawConfig.Raw { + if k == "value" { + valueKeyFound = true + } else { + invalidKeys = append(invalidKeys, k) } } - if invalid { + if len(invalidKeys) > 0 { errs = append(errs, fmt.Errorf( - "%s: output should only have 'value' field", o.Name)) + "%s: output has invalid keys: %s", + o.Name, strings.Join(invalidKeys, ", "))) + } + if !valueKeyFound { + errs = append(errs, fmt.Errorf( + "%s: output is missing required 'value' key", o.Name)) } for _, v := range o.RawConfig.Variables { diff --git a/config/config_test.go b/config/config_test.go index 2c97918773..a2a15d4bcc 100644 --- a/config/config_test.go +++ b/config/config_test.go @@ -196,6 +196,13 @@ func TestConfigValidate_outputBadField(t *testing.T) { } } +func TestConfigValidate_outputMissingEquals(t *testing.T) { + c := testConfig(t, "validate-output-missing-equals") + if err := c.Validate(); err == nil { + t.Fatal("should not be valid") + } +} + func TestConfigValidate_pathVar(t *testing.T) { c := testConfig(t, "validate-path-var") if err := c.Validate(); err != nil { diff --git 
a/config/interpolate_funcs.go b/config/interpolate_funcs.go index 5538763c0c..a0ceeeec4a 100644 --- a/config/interpolate_funcs.go +++ b/config/interpolate_funcs.go @@ -2,7 +2,10 @@ package config import ( "bytes" + "crypto/sha1" + "crypto/sha256" "encoding/base64" + "encoding/hex" "errors" "fmt" "io/ioutil" @@ -18,10 +21,8 @@ import ( ) // Funcs is the mapping of built-in functions for configuration. -var Funcs map[string]ast.Function - -func init() { - Funcs = map[string]ast.Function{ +func Funcs() map[string]ast.Function { + return map[string]ast.Function{ "cidrhost": interpolationFuncCidrHost(), "cidrnetmask": interpolationFuncCidrNetmask(), "cidrsubnet": interpolationFuncCidrSubnet(), @@ -38,6 +39,8 @@ func init() { "lower": interpolationFuncLower(), "replace": interpolationFuncReplace(), "split": interpolationFuncSplit(), + "sha1": interpolationFuncSha1(), + "sha256": interpolationFuncSha256(), "base64encode": interpolationFuncBase64Encode(), "base64decode": interpolationFuncBase64Decode(), "upper": interpolationFuncUpper(), @@ -586,3 +589,31 @@ func interpolationFuncUpper() ast.Function { }, } } + +func interpolationFuncSha1() ast.Function { + return ast.Function{ + ArgTypes: []ast.Type{ast.TypeString}, + ReturnType: ast.TypeString, + Callback: func(args []interface{}) (interface{}, error) { + s := args[0].(string) + h := sha1.New() + h.Write([]byte(s)) + hash := hex.EncodeToString(h.Sum(nil)) + return hash, nil + }, + } +} + +func interpolationFuncSha256() ast.Function { + return ast.Function{ + ArgTypes: []ast.Type{ast.TypeString}, + ReturnType: ast.TypeString, + Callback: func(args []interface{}) (interface{}, error) { + s := args[0].(string) + h := sha256.New() + h.Write([]byte(s)) + hash := hex.EncodeToString(h.Sum(nil)) + return hash, nil + }, + } +} diff --git a/config/interpolate_funcs_test.go b/config/interpolate_funcs_test.go index 3aeb50db17..78139aa2da 100644 --- a/config/interpolate_funcs_test.go +++ b/config/interpolate_funcs_test.go @@ -834,6 +834,30 @@ func TestInterpolateFuncUpper(t *testing.T) { }) } +func TestInterpolateFuncSha1(t *testing.T) { + testFunction(t, testFunctionConfig{ + Cases: []testFunctionCase{ + { + `${sha1("test")}`, + "a94a8fe5ccb19ba61c4c0873d391e987982fbbd3", + false, + }, + }, + }) +} + +func TestInterpolateFuncSha256(t *testing.T) { + testFunction(t, testFunctionConfig{ + Cases: []testFunctionCase{ + { + `${sha256("test")}`, + "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08", + false, + }, + }, + }) } + type testFunctionConfig struct { Cases []testFunctionCase Vars map[string]ast.Variable diff --git a/config/interpolate_walk.go b/config/interpolate_walk.go index adcb5e32cf..0753558317 100644 --- a/config/interpolate_walk.go +++ b/config/interpolate_walk.go @@ -118,9 +118,15 @@ func (w *interpolationWalker) Primitive(v reflect.Value) error { return err } - // If the AST we got is just a literal string value, then we ignore it - if _, ok := astRoot.(*ast.LiteralNode); ok { - return nil + // If the AST we got is just a literal string value with the same + // value then we ignore it. We have to check if it's the same value + // because it is possible to input a string, get out a string, and + // have it be different. 
For example: "foo-$${bar}" turns into + // "foo-${bar}" + if n, ok := astRoot.(*ast.LiteralNode); ok { + if s, ok := n.Value.(string); ok && s == v.String() { + return nil + } } if w.ContextF != nil { diff --git a/config/interpolate_walk_test.go b/config/interpolate_walk_test.go index fc7c8b5492..b5c9a3a923 100644 --- a/config/interpolate_walk_test.go +++ b/config/interpolate_walk_test.go @@ -18,7 +18,9 @@ func TestInterpolationWalker_detect(t *testing.T) { Input: map[string]interface{}{ "foo": "$${var.foo}", }, - Result: nil, + Result: []string{ + "Literal(TypeString, ${var.foo})", + }, }, { @@ -114,7 +116,7 @@ func TestInterpolationWalker_replace(t *testing.T) { "foo": "$${var.foo}", }, Output: map[string]interface{}{ - "foo": "$${var.foo}", + "foo": "bar", }, Value: "bar", }, diff --git a/config/lang/ast/unary_arithmetic.go b/config/lang/ast/unary_arithmetic.go new file mode 100644 index 0000000000..d6b65b3652 --- /dev/null +++ b/config/lang/ast/unary_arithmetic.go @@ -0,0 +1,42 @@ +package ast + +import ( + "fmt" +) + +// UnaryArithmetic represents a node where the result is arithmetic of +// one operands +type UnaryArithmetic struct { + Op ArithmeticOp + Expr Node + Posx Pos +} + +func (n *UnaryArithmetic) Accept(v Visitor) Node { + n.Expr = n.Expr.Accept(v) + + return v(n) +} + +func (n *UnaryArithmetic) Pos() Pos { + return n.Posx +} + +func (n *UnaryArithmetic) GoString() string { + return fmt.Sprintf("*%#v", *n) +} + +func (n *UnaryArithmetic) String() string { + var sign rune + switch n.Op { + case ArithmeticOpAdd: + sign = '+' + case ArithmeticOpSub: + sign = '-' + } + return fmt.Sprintf("%c%s", sign, n.Expr) +} + +func (n *UnaryArithmetic) Type(Scope) (Type, error) { + return TypeInt, nil +} diff --git a/config/lang/builtins.go b/config/lang/builtins.go index bf918c9c75..457a5ef372 100644 --- a/config/lang/builtins.go +++ b/config/lang/builtins.go @@ -24,11 +24,53 @@ func registerBuiltins(scope *ast.BasicScope) *ast.BasicScope { scope.FuncMap["__builtin_StringToInt"] = builtinStringToInt() // Math operations + scope.FuncMap["__builtin_UnaryIntMath"] = builtinUnaryIntMath() + scope.FuncMap["__builtin_UnaryFloatMath"] = builtinUnaryFloatMath() scope.FuncMap["__builtin_IntMath"] = builtinIntMath() scope.FuncMap["__builtin_FloatMath"] = builtinFloatMath() return scope } +func builtinUnaryIntMath() ast.Function { + return ast.Function{ + ArgTypes: []ast.Type{ast.TypeInt}, + Variadic: false, + ReturnType: ast.TypeInt, + Callback: func(args []interface{}) (interface{}, error) { + op := args[0].(ast.ArithmeticOp) + result := args[1].(int) + switch op { + case ast.ArithmeticOpAdd: + result = result + case ast.ArithmeticOpSub: + result = -result + } + + return result, nil + }, + } +} + +func builtinUnaryFloatMath() ast.Function { + return ast.Function{ + ArgTypes: []ast.Type{ast.TypeFloat}, + Variadic: false, + ReturnType: ast.TypeFloat, + Callback: func(args []interface{}) (interface{}, error) { + op := args[0].(ast.ArithmeticOp) + result := args[1].(float64) + switch op { + case ast.ArithmeticOpAdd: + result = result + case ast.ArithmeticOpSub: + result = -result + } + + return result, nil + }, + } +} + func builtinFloatMath() ast.Function { return ast.Function{ ArgTypes: []ast.Type{ast.TypeInt}, diff --git a/config/lang/check_types.go b/config/lang/check_types.go index 4fbcd731ad..0ff6ac93ba 100644 --- a/config/lang/check_types.go +++ b/config/lang/check_types.go @@ -55,6 +55,9 @@ func (v *TypeCheck) visit(raw ast.Node) ast.Node { var result ast.Node var err error switch n := 
raw.(type) { + case *ast.UnaryArithmetic: + tc := &typeCheckUnaryArithmetic{n} + result, err = tc.TypeCheck(v) case *ast.Arithmetic: tc := &typeCheckArithmetic{n} result, err = tc.TypeCheck(v) @@ -89,6 +92,48 @@ func (v *TypeCheck) visit(raw ast.Node) ast.Node { return result } +type typeCheckUnaryArithmetic struct { + n *ast.UnaryArithmetic +} + +func (tc *typeCheckUnaryArithmetic) TypeCheck(v *TypeCheck) (ast.Node, error) { + // Only support + or - as unary op + if tc.n.Op != ast.ArithmeticOpAdd && tc.n.Op != ast.ArithmeticOpSub { + return nil, fmt.Errorf("only + or - supported as unary operator") + } + expr := v.StackPop() + + mathFunc := "__builtin_UnaryIntMath" + mathType := ast.TypeInt + switch expr { + case ast.TypeInt: + mathFunc = "__builtin_UnaryIntMath" + mathType = expr + case ast.TypeFloat: + mathFunc = "__builtin_UnaryFloatMath" + mathType = expr + } + + // Return type + v.StackPush(mathType) + + args := make([]ast.Node, 2) + args[0] = &ast.LiteralNode{ + Value: tc.n.Op, + Typex: ast.TypeInt, + Posx: tc.n.Pos(), + } + args[1] = tc.n.Expr + // Replace our node with a call to the proper function. This isn't + // type checked but we already verified types. + return &ast.Call{ + Func: mathFunc, + Args: args, + Posx: tc.n.Pos(), + }, nil +} + type typeCheckArithmetic struct { n *ast.Arithmetic } diff --git a/config/lang/eval_test.go b/config/lang/eval_test.go index 122f44d1f4..001963014f 100644 --- a/config/lang/eval_test.go +++ b/config/lang/eval_test.go @@ -24,6 +24,14 @@ func TestEval(t *testing.T) { ast.TypeString, }, + { + "foo $${bar}", + nil, + false, + "foo ${bar}", + ast.TypeString, + }, + { "foo ${bar}", &ast.BasicScope{ @@ -251,6 +259,60 @@ func TestEval(t *testing.T) { "foo 43", ast.TypeString, }, + + { + "foo ${-46}", + nil, + false, + "foo -46", + ast.TypeString, + }, + + { + "foo ${-46 + 5}", + nil, + false, + "foo -41", + ast.TypeString, + }, + + { + "foo ${46 + -5}", + nil, + false, + "foo 41", + ast.TypeString, + }, + + { + "foo ${-bar}", + &ast.BasicScope{ + VarMap: map[string]ast.Variable{ + "bar": ast.Variable{ + Value: 41, + Type: ast.TypeInt, + }, + }, + }, + false, + "foo -41", + ast.TypeString, + }, + + { + "foo ${5 + -bar}", + &ast.BasicScope{ + VarMap: map[string]ast.Variable{ + "bar": ast.Variable{ + Value: 41, + Type: ast.TypeInt, + }, + }, + }, + false, + "foo -36", + ast.TypeString, + }, } for _, tc := range cases { diff --git a/config/lang/lang.y b/config/lang/lang.y index c531860e51..f55f7bf982 100644 --- a/config/lang/lang.y +++ b/config/lang/lang.y @@ -130,6 +130,14 @@ expr: Posx: $1.Pos(), } } +| ARITH_OP expr + { + $$ = &ast.UnaryArithmetic{ + Op: $1.Value.(ast.ArithmeticOp), + Expr: $2, + Posx: $1.Pos, + } + } | IDENTIFIER { $$ = &ast.VariableAccess{Name: $1.Value.(string), Posx: $1.Pos} diff --git a/config/lang/lex_test.go b/config/lang/lex_test.go index 5341e594a6..572aa0f532 100644 --- a/config/lang/lex_test.go +++ b/config/lang/lex_test.go @@ -63,6 +63,20 @@ func TestLex(t *testing.T) { PROGRAM_BRACKET_RIGHT, lexEOF}, }, + { + "${bar(-42)}", + []int{PROGRAM_BRACKET_LEFT, + IDENTIFIER, PAREN_LEFT, ARITH_OP, INTEGER, PAREN_RIGHT, + PROGRAM_BRACKET_RIGHT, lexEOF}, + }, + + { + "${bar(-42.0)}", + []int{PROGRAM_BRACKET_LEFT, + IDENTIFIER, PAREN_LEFT, ARITH_OP, FLOAT, PAREN_RIGHT, + PROGRAM_BRACKET_RIGHT, lexEOF}, + }, + { "${bar(42+1)}", []int{PROGRAM_BRACKET_LEFT, @@ -72,6 +86,15 @@ func TestLex(t *testing.T) { PROGRAM_BRACKET_RIGHT, lexEOF}, }, + { + "${bar(42+-1)}", + []int{PROGRAM_BRACKET_LEFT, + IDENTIFIER, 
PAREN_LEFT, + INTEGER, ARITH_OP, ARITH_OP, INTEGER, + PAREN_RIGHT, + PROGRAM_BRACKET_RIGHT, lexEOF}, + }, + { "${bar(3.14159)}", []int{PROGRAM_BRACKET_LEFT, diff --git a/config/lang/y.go b/config/lang/y.go index fd0693f151..faffd55d31 100644 --- a/config/lang/y.go +++ b/config/lang/y.go @@ -53,7 +53,7 @@ const parserEofCode = 1 const parserErrCode = 2 const parserMaxDepth = 200 -//line lang.y:165 +//line lang.y:173 //line yacctab:1 var parserExca = [...]int{ @@ -62,51 +62,52 @@ var parserExca = [...]int{ -2, 0, } -const parserNprod = 19 +const parserNprod = 20 const parserPrivate = 57344 var parserTokenNames []string var parserStates []string -const parserLast = 30 +const parserLast = 34 var parserAct = [...]int{ - 9, 20, 16, 16, 7, 7, 3, 18, 10, 8, - 1, 17, 14, 12, 13, 6, 6, 19, 8, 22, - 15, 23, 24, 11, 2, 25, 16, 21, 4, 5, + 9, 7, 3, 16, 22, 8, 17, 17, 20, 17, + 1, 18, 6, 23, 8, 19, 25, 26, 21, 11, + 2, 24, 7, 4, 5, 0, 10, 27, 0, 14, + 15, 12, 13, 6, } var parserPact = [...]int{ - 1, -1000, 1, -1000, -1000, -1000, -1000, 0, -1000, 15, - 0, 1, -1000, -1000, -1, -1000, 0, -8, 0, -1000, - -1000, 12, -9, -1000, 0, -9, + -3, -1000, -3, -1000, -1000, -1000, -1000, 18, -1000, -2, + 18, -3, -1000, -1000, 18, 0, -1000, 18, -5, -1000, + 18, -1000, -1000, 7, -4, -1000, 18, -4, } var parserPgo = [...]int{ - 0, 0, 29, 28, 23, 6, 27, 10, + 0, 0, 24, 23, 19, 2, 13, 10, } var parserR1 = [...]int{ 0, 7, 7, 4, 4, 5, 5, 2, 1, 1, - 1, 1, 1, 1, 1, 6, 6, 6, 3, + 1, 1, 1, 1, 1, 1, 6, 6, 6, 3, } var parserR2 = [...]int{ 0, 0, 1, 1, 2, 1, 1, 3, 3, 1, - 1, 1, 3, 1, 4, 0, 3, 1, 1, + 1, 1, 3, 2, 1, 4, 0, 3, 1, 1, } var parserChk = [...]int{ -1000, -7, -4, -5, -3, -2, 15, 4, -5, -1, - 8, -4, 13, 14, 12, 5, 11, -1, 8, -1, - 9, -6, -1, 9, 10, -1, + 8, -4, 13, 14, 11, 12, 5, 11, -1, -1, + 8, -1, 9, -6, -1, 9, 10, -1, } var parserDef = [...]int{ - 1, -2, 2, 3, 5, 6, 18, 0, 4, 0, - 0, 9, 10, 11, 13, 7, 0, 0, 15, 12, - 8, 0, 17, 14, 0, 16, + 1, -2, 2, 3, 5, 6, 19, 0, 4, 0, + 0, 9, 10, 11, 0, 14, 7, 0, 0, 13, + 16, 12, 8, 0, 18, 15, 0, 17, } var parserTok1 = [...]int{ @@ -577,38 +578,48 @@ parserdefault: } } case 13: - parserDollar = parserS[parserpt-1 : parserpt+1] + parserDollar = parserS[parserpt-2 : parserpt+1] //line lang.y:134 + { + parserVAL.node = &ast.UnaryArithmetic{ + Op: parserDollar[1].token.Value.(ast.ArithmeticOp), + Expr: parserDollar[2].node, + Posx: parserDollar[1].token.Pos, + } + } + case 14: + parserDollar = parserS[parserpt-1 : parserpt+1] + //line lang.y:142 { parserVAL.node = &ast.VariableAccess{Name: parserDollar[1].token.Value.(string), Posx: parserDollar[1].token.Pos} } - case 14: + case 15: parserDollar = parserS[parserpt-4 : parserpt+1] - //line lang.y:138 + //line lang.y:146 { parserVAL.node = &ast.Call{Func: parserDollar[1].token.Value.(string), Args: parserDollar[3].nodeList, Posx: parserDollar[1].token.Pos} } - case 15: + case 16: parserDollar = parserS[parserpt-0 : parserpt+1] - //line lang.y:143 + //line lang.y:151 { parserVAL.nodeList = nil } - case 16: + case 17: parserDollar = parserS[parserpt-3 : parserpt+1] - //line lang.y:147 + //line lang.y:155 { parserVAL.nodeList = append(parserDollar[1].nodeList, parserDollar[3].node) } - case 17: + case 18: parserDollar = parserS[parserpt-1 : parserpt+1] - //line lang.y:151 + //line lang.y:159 { parserVAL.nodeList = append(parserVAL.nodeList, parserDollar[1].node) } - case 18: + case 19: parserDollar = parserS[parserpt-1 : parserpt+1] - //line lang.y:157 + //line lang.y:165 { parserVAL.node = &ast.LiteralNode{ Value: 
parserDollar[1].token.Value.(string), diff --git a/config/lang/y.output b/config/lang/y.output index 17352390dd..998d2673cc 100644 --- a/config/lang/y.output +++ b/config/lang/y.output @@ -51,9 +51,9 @@ state 5 state 6 - literal: STRING. (18) + literal: STRING. (19) - . reduce 18 (src line 155) + . reduce 19 (src line 163) state 7 @@ -61,7 +61,8 @@ state 7 PROGRAM_BRACKET_LEFT shift 7 PAREN_LEFT shift 10 - IDENTIFIER shift 14 + ARITH_OP shift 14 + IDENTIFIER shift 15 INTEGER shift 12 FLOAT shift 13 STRING shift 6 @@ -83,8 +84,8 @@ state 9 interpolation: PROGRAM_BRACKET_LEFT expr.PROGRAM_BRACKET_RIGHT expr: expr.ARITH_OP expr - PROGRAM_BRACKET_RIGHT shift 15 - ARITH_OP shift 16 + PROGRAM_BRACKET_RIGHT shift 16 + ARITH_OP shift 17 . error @@ -93,13 +94,14 @@ state 10 PROGRAM_BRACKET_LEFT shift 7 PAREN_LEFT shift 10 - IDENTIFIER shift 14 + ARITH_OP shift 14 + IDENTIFIER shift 15 INTEGER shift 12 FLOAT shift 13 STRING shift 6 . error - expr goto 17 + expr goto 18 interpolation goto 5 literal goto 4 literalModeTop goto 11 @@ -130,25 +132,12 @@ state 13 state 14 - expr: IDENTIFIER. (13) - expr: IDENTIFIER.PAREN_LEFT args PAREN_RIGHT - - PAREN_LEFT shift 18 - . reduce 13 (src line 133) - - -state 15 - interpolation: PROGRAM_BRACKET_LEFT expr PROGRAM_BRACKET_RIGHT. (7) - - . reduce 7 (src line 94) - - -state 16 - expr: expr ARITH_OP.expr + expr: ARITH_OP.expr PROGRAM_BRACKET_LEFT shift 7 PAREN_LEFT shift 10 - IDENTIFIER shift 14 + ARITH_OP shift 14 + IDENTIFIER shift 15 INTEGER shift 12 FLOAT shift 13 STRING shift 6 @@ -160,104 +149,145 @@ state 16 literalModeTop goto 11 literalModeValue goto 3 +state 15 + expr: IDENTIFIER. (14) + expr: IDENTIFIER.PAREN_LEFT args PAREN_RIGHT + + PAREN_LEFT shift 20 + . reduce 14 (src line 141) + + +state 16 + interpolation: PROGRAM_BRACKET_LEFT expr PROGRAM_BRACKET_RIGHT. (7) + + . reduce 7 (src line 94) + + state 17 - expr: PAREN_LEFT expr.PAREN_RIGHT - expr: expr.ARITH_OP expr - - PAREN_RIGHT shift 20 - ARITH_OP shift 16 - . error - - -state 18 - expr: IDENTIFIER PAREN_LEFT.args PAREN_RIGHT - args: . (15) + expr: expr ARITH_OP.expr PROGRAM_BRACKET_LEFT shift 7 PAREN_LEFT shift 10 - IDENTIFIER shift 14 + ARITH_OP shift 14 + IDENTIFIER shift 15 INTEGER shift 12 FLOAT shift 13 STRING shift 6 - . reduce 15 (src line 142) + . error - expr goto 22 + expr goto 21 interpolation goto 5 literal goto 4 literalModeTop goto 11 literalModeValue goto 3 - args goto 21 + +state 18 + expr: PAREN_LEFT expr.PAREN_RIGHT + expr: expr.ARITH_OP expr + + PAREN_RIGHT shift 22 + ARITH_OP shift 17 + . error + state 19 + expr: expr.ARITH_OP expr + expr: ARITH_OP expr. (13) + + . reduce 13 (src line 133) + + +state 20 + expr: IDENTIFIER PAREN_LEFT.args PAREN_RIGHT + args: . (16) + + PROGRAM_BRACKET_LEFT shift 7 + PAREN_LEFT shift 10 + ARITH_OP shift 14 + IDENTIFIER shift 15 + INTEGER shift 12 + FLOAT shift 13 + STRING shift 6 + . reduce 16 (src line 150) + + expr goto 24 + interpolation goto 5 + literal goto 4 + literalModeTop goto 11 + literalModeValue goto 3 + args goto 23 + +state 21 expr: expr.ARITH_OP expr expr: expr ARITH_OP expr. (12) . reduce 12 (src line 125) -state 20 +state 22 expr: PAREN_LEFT expr PAREN_RIGHT. (8) . reduce 8 (src line 100) -state 21 +state 23 expr: IDENTIFIER PAREN_LEFT args.PAREN_RIGHT args: args.COMMA expr - PAREN_RIGHT shift 23 - COMMA shift 24 + PAREN_RIGHT shift 25 + COMMA shift 26 . error -state 22 - expr: expr.ARITH_OP expr - args: expr. (17) - - ARITH_OP shift 16 - . reduce 17 (src line 150) - - -state 23 - expr: IDENTIFIER PAREN_LEFT args PAREN_RIGHT. 
(14) - - . reduce 14 (src line 137) - - state 24 + expr: expr.ARITH_OP expr + args: expr. (18) + + ARITH_OP shift 17 + . reduce 18 (src line 158) + + +state 25 + expr: IDENTIFIER PAREN_LEFT args PAREN_RIGHT. (15) + + . reduce 15 (src line 145) + + +state 26 args: args COMMA.expr PROGRAM_BRACKET_LEFT shift 7 PAREN_LEFT shift 10 - IDENTIFIER shift 14 + ARITH_OP shift 14 + IDENTIFIER shift 15 INTEGER shift 12 FLOAT shift 13 STRING shift 6 . error - expr goto 25 + expr goto 27 interpolation goto 5 literal goto 4 literalModeTop goto 11 literalModeValue goto 3 -state 25 +state 27 expr: expr.ARITH_OP expr - args: args COMMA expr. (16) + args: args COMMA expr. (17) - ARITH_OP shift 16 - . reduce 16 (src line 146) + ARITH_OP shift 17 + . reduce 17 (src line 154) 15 terminals, 8 nonterminals -19 grammar rules, 26/2000 states +20 grammar rules, 28/2000 states 0 shift/reduce, 0 reduce/reduce conflicts reported 57 working sets used -memory: parser 35/30000 -21 extra closures -45 shift entries, 1 exceptions -14 goto entries -23 entries saved by goto default -Optimizer space used: output 30/30000 -30 table entries, 0 zero -maximum spread: 15, maximum offset: 24 +memory: parser 40/30000 +23 extra closures +57 shift entries, 1 exceptions +15 goto entries +27 entries saved by goto default +Optimizer space used: output 34/30000 +34 table entries, 2 zero +maximum spread: 15, maximum offset: 26 diff --git a/config/loader_hcl.go b/config/loader_hcl.go index c62ca37314..7868cc8ecf 100644 --- a/config/loader_hcl.go +++ b/config/loader_hcl.go @@ -4,6 +4,7 @@ import ( "fmt" "io/ioutil" + "github.com/hashicorp/go-multierror" "github.com/hashicorp/hcl" "github.com/hashicorp/hcl/hcl/ast" "github.com/mitchellh/mapstructure" @@ -405,9 +406,19 @@ func loadResourcesHcl(list *ast.ObjectList) ([]*Resource, error) { // Now go over all the types and their children in order to get // all of the actual resources. for _, item := range list.Items { + // GH-4385: We detect a pure provisioner resource and give the user + // an error about how to do it cleanly. + if len(item.Keys) == 4 && item.Keys[2].Token.Value().(string) == "provisioner" { + return nil, fmt.Errorf( + "position %s: provisioners in a resource should be wrapped in a list\n\n"+ + "Example: \"provisioner\": [ { \"local-exec\": ... 
} ]", + item.Pos()) + } + if len(item.Keys) != 2 { - // TODO: bad error message - return nil, fmt.Errorf("resource needs exactly 2 names") + return nil, fmt.Errorf( + "position %s: resource must be followed by exactly two strings, a type and a name", + item.Pos()) } t := item.Keys[0].Token.Value().(string) @@ -523,6 +534,13 @@ func loadResourcesHcl(list *ast.ObjectList) ([]*Resource, error) { // destroying the existing instance var lifecycle ResourceLifecycle if o := listVal.Filter("lifecycle"); len(o.Items) > 0 { + // Check for invalid keys + valid := []string{"create_before_destroy", "ignore_changes", "prevent_destroy"} + if err := checkHCLKeys(o.Items[0].Val, valid); err != nil { + return nil, multierror.Prefix(err, fmt.Sprintf( + "%s[%s]:", t, k)) + } + var raw map[string]interface{} if err = hcl.DecodeObject(&raw, o.Items[0].Val); err != nil { return nil, fmt.Errorf( @@ -644,3 +662,31 @@ func hclObjectMap(os *hclobj.Object) map[string]ast.ListNode { return objects } */ + +func checkHCLKeys(node ast.Node, valid []string) error { + var list *ast.ObjectList + switch n := node.(type) { + case *ast.ObjectList: + list = n + case *ast.ObjectType: + list = n.List + default: + return fmt.Errorf("cannot check HCL keys of type %T", n) + } + + validMap := make(map[string]struct{}, len(valid)) + for _, v := range valid { + validMap[v] = struct{}{} + } + + var result error + for _, item := range list.Items { + key := item.Keys[0].Token.Value().(string) + if _, ok := validMap[key]; !ok { + result = multierror.Append(result, fmt.Errorf( + "invalid key: %s", key)) + } + } + + return result +} diff --git a/config/loader_test.go b/config/loader_test.go index 19745adaf6..bfe21cfb2d 100644 --- a/config/loader_test.go +++ b/config/loader_test.go @@ -45,6 +45,56 @@ func TestLoadFile_badType(t *testing.T) { } } +func TestLoadFile_lifecycleKeyCheck(t *testing.T) { + _, err := LoadFile(filepath.Join(fixtureDir, "lifecycle_cbd_typo.tf")) + if err == nil { + t.Fatal("should have error") + } + + t.Logf("err: %s", err) +} + +func TestLoadFile_resourceArityMistake(t *testing.T) { + _, err := LoadFile(filepath.Join(fixtureDir, "resource-arity-mistake.tf")) + if err == nil { + t.Fatal("should have error") + } + expected := "Error loading test-fixtures/resource-arity-mistake.tf: position 2:10: resource must be followed by exactly two strings, a type and a name" + if err.Error() != expected { + t.Fatalf("expected:\n%s\ngot:\n%s", expected, err) + } +} + +func TestLoadFileWindowsLineEndings(t *testing.T) { + testFile := filepath.Join(fixtureDir, "windows-line-endings.tf") + + contents, err := ioutil.ReadFile(testFile) + if err != nil { + t.Fatalf("err: %s", err) + } + if !strings.Contains(string(contents), "\r\n") { + t.Fatalf("Windows line endings test file %s contains no windows line endings - this may be an autocrlf related issue.", testFile) + } + + c, err := LoadFile(testFile) + if err != nil { + t.Fatalf("err: %s", err) + } + + if c == nil { + t.Fatal("config should not be nil") + } + + if c.Dir != "" { + t.Fatalf("bad: %#v", c.Dir) + } + + actual := resourcesStr(c.Resources) + if actual != strings.TrimSpace(windowsHeredocResourcesStr) { + t.Fatalf("bad:\n%s", actual) + } +} + func TestLoadFileHeredoc(t *testing.T) { c, err := LoadFile(filepath.Join(fixtureDir, "heredoc.tf")) if err != nil { @@ -673,6 +723,11 @@ cloudstack_firewall[test] (x1) rule ` +const windowsHeredocResourcesStr = ` +aws_instance[test] (x1) + user_data +` + const heredocProvidersStr = ` aws access_key @@ -685,6 +740,11 @@ 
aws_iam_policy[policy] (x1) name path policy +aws_instance[heredocwithnumbers] (x1) + ami + provisioners + local-exec + command aws_instance[test] (x1) ami provisioners diff --git a/config/module/test-fixtures/validate-child-good/child/main.tf b/config/module/test-fixtures/validate-child-good/child/main.tf index 2cfd2a80f5..09f869acad 100644 --- a/config/module/test-fixtures/validate-child-good/child/main.tf +++ b/config/module/test-fixtures/validate-child-good/child/main.tf @@ -1,3 +1,3 @@ variable "memory" {} -output "result" {} +output "result" { value = "foo" } diff --git a/config/raw_config.go b/config/raw_config.go index ebb9f18dc6..2a66288d1d 100644 --- a/config/raw_config.go +++ b/config/raw_config.go @@ -300,7 +300,7 @@ type gobRawConfig struct { // langEvalConfig returns the evaluation configuration we use to execute. func langEvalConfig(vs map[string]ast.Variable) *lang.EvalConfig { funcMap := make(map[string]ast.Function) - for k, v := range Funcs { + for k, v := range Funcs() { funcMap[k] = v } funcMap["lookup"] = interpolationFuncLookup(vs) diff --git a/config/raw_config_test.go b/config/raw_config_test.go index 1b5de3e16a..81d3f20c1e 100644 --- a/config/raw_config_test.go +++ b/config/raw_config_test.go @@ -114,6 +114,38 @@ func TestRawConfig_double(t *testing.T) { } } +func TestRawConfigInterpolate_escaped(t *testing.T) { + raw := map[string]interface{}{ + "foo": "bar-$${baz}", + } + + rc, err := NewRawConfig(raw) + if err != nil { + t.Fatalf("err: %s", err) + } + + // Before interpolate, Config() should be the raw + if !reflect.DeepEqual(rc.Config(), raw) { + t.Fatalf("bad: %#v", rc.Config()) + } + + if err := rc.Interpolate(nil); err != nil { + t.Fatalf("err: %s", err) + } + + actual := rc.Config() + expected := map[string]interface{}{ + "foo": "bar-${baz}", + } + + if !reflect.DeepEqual(actual, expected) { + t.Fatalf("bad: %#v", actual) + } + if len(rc.UnknownKeys()) != 0 { + t.Fatalf("bad: %#v", rc.UnknownKeys()) + } +} + func TestRawConfig_merge(t *testing.T) { raw1 := map[string]interface{}{ "foo": "${var.foo}", diff --git a/config/test-fixtures/.gitattributes b/config/test-fixtures/.gitattributes new file mode 100644 index 0000000000..23c56cad51 --- /dev/null +++ b/config/test-fixtures/.gitattributes @@ -0,0 +1 @@ +windows-line-endings.tf eol=crlf diff --git a/config/test-fixtures/heredoc.tf b/config/test-fixtures/heredoc.tf index 323d1d4e06..c43fd08106 100644 --- a/config/test-fixtures/heredoc.tf +++ b/config/test-fixtures/heredoc.tf @@ -37,3 +37,15 @@ EOT ] } } + +resource "aws_instance" "heredocwithnumbers" { + ami = "foo" + + provisioner "local-exec" { + command = < Checking that code complies with gofmt requirements..." +gofmt_files=$(gofmt -l .) +if [[ -n ${gofmt_files} ]]; then + echo 'gofmt needs running on the following files:' + echo "${gofmt_files}" + echo "You can use the command: \`make fmt\` to reformat code." + exit 1 +fi + +exit 0 diff --git a/scripts/travis.sh b/scripts/travis.sh new file mode 100755 index 0000000000..90e5cb9fff --- /dev/null +++ b/scripts/travis.sh @@ -0,0 +1,14 @@ +#!/bin/bash + + +# Consistent output so travis does not think we're dead during long running +# tests. +export PING_SLEEP=30 +bash -c "while true; do echo \$(date) - building ...; sleep $PING_SLEEP; done" & +PING_LOOP_PID=$! + +make testacc +TEST_OUTPUT=$? 
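+# Capture the make exit status before killing the keepalive loop below;
+# otherwise $? would reflect the kill rather than the acceptance tests.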
+ +kill $PING_LOOP_PID +exit $TEST_OUTPUT diff --git a/scripts/website_push.sh b/scripts/website_push.sh deleted file mode 100755 index 53ed59777c..0000000000 --- a/scripts/website_push.sh +++ /dev/null @@ -1,43 +0,0 @@ -#!/bin/bash - -# Switch to the stable-website branch -git checkout stable-website - -# Set the tmpdir -if [ -z "$TMPDIR" ]; then - TMPDIR="/tmp" -fi - -# Create a temporary build dir and make sure we clean it up. For -# debugging, comment out the trap line. -DEPLOY=`mktemp -d $TMPDIR/terraform-www-XXXXXX` -trap "rm -rf $DEPLOY" INT TERM EXIT - -# Get the parent directory of where this script is. -SOURCE="${BASH_SOURCE[0]}" -while [ -h "$SOURCE" ] ; do SOURCE="$(readlink "$SOURCE")"; done -DIR="$( cd -P "$( dirname "$SOURCE" )/.." && pwd )" - -# Copy into tmpdir -shopt -s dotglob -cp -r $DIR/website/* $DEPLOY/ - -# Change into that directory -pushd $DEPLOY &>/dev/null - -# Ignore some stuff -touch .gitignore -echo ".sass-cache" >> .gitignore -echo "build" >> .gitignore -echo "vendor" >> .gitignore - -# Add everything -git init -q . -git add . -git commit -q -m "Deploy by $USER" - -git remote add heroku git@heroku.com:terraform-www.git -git push -f heroku master - -# Go back to our root -popd &>/dev/null diff --git a/state/remote/artifactory.go b/state/remote/artifactory.go new file mode 100644 index 0000000000..727e9faf03 --- /dev/null +++ b/state/remote/artifactory.go @@ -0,0 +1,117 @@ +package remote + +import ( + "crypto/md5" + "fmt" + "os" + "strings" + + artifactory "github.com/lusis/go-artifactory/src/artifactory.v401" +) + +const ARTIF_TFSTATE_NAME = "terraform.tfstate" + +func artifactoryFactory(conf map[string]string) (Client, error) { + userName, ok := conf["username"] + if !ok { + userName = os.Getenv("ARTIFACTORY_USERNAME") + if userName == "" { + return nil, fmt.Errorf( + "missing 'username' configuration or ARTIFACTORY_USERNAME environment variable") + } + } + password, ok := conf["password"] + if !ok { + password = os.Getenv("ARTIFACTORY_PASSWORD") + if password == "" { + return nil, fmt.Errorf( + "missing 'password' configuration or ARTIFACTORY_PASSWORD environment variable") + } + } + url, ok := conf["url"] + if !ok { + url = os.Getenv("ARTIFACTORY_URL") + if url == "" { + return nil, fmt.Errorf( + "missing 'url' configuration or ARTIFACTORY_URL environment variable") + } + } + repo, ok := conf["repo"] + if !ok { + return nil, fmt.Errorf( + "missing 'repo' configuration") + } + subpath, ok := conf["subpath"] + if !ok { + return nil, fmt.Errorf( + "missing 'subpath' configuration") + } + + clientConf := &artifactory.ClientConfig{ + BaseURL: url, + Username: userName, + Password: password, + } + nativeClient := artifactory.NewClient(clientConf) + + return &ArtifactoryClient{ + nativeClient: &nativeClient, + userName: userName, + password: password, + url: url, + repo: repo, + subpath: subpath, + }, nil + +} + +type ArtifactoryClient struct { + nativeClient *artifactory.ArtifactoryClient + userName string + password string + url string + repo string + subpath string +} + +func (c *ArtifactoryClient) Get() (*Payload, error) { + p := fmt.Sprintf("%s/%s/%s", c.repo, c.subpath, ARTIF_TFSTATE_NAME) + output, err := c.nativeClient.Get(p, make(map[string]string)) + if err != nil { + if strings.Contains(err.Error(), "404") { + return nil, nil + } + return nil, err + } + + // TODO: migrate to using X-Checksum-Md5 header from artifactory + // needs to be exposed by go-artifactory first + + hash := md5.Sum(output) + payload := &Payload{ + Data: output, + MD5: 
hash[:md5.Size], + } + + // If there was no data, then return nil + if len(payload.Data) == 0 { + return nil, nil + } + + return payload, nil +} + +func (c *ArtifactoryClient) Put(data []byte) error { + p := fmt.Sprintf("%s/%s/%s", c.repo, c.subpath, ARTIF_TFSTATE_NAME) + if _, err := c.nativeClient.Put(p, string(data), make(map[string]string)); err == nil { + return nil + } else { + return fmt.Errorf("Failed to upload state: %v", err) + } +} + +func (c *ArtifactoryClient) Delete() error { + p := fmt.Sprintf("%s/%s/%s", c.repo, c.subpath, ARTIF_TFSTATE_NAME) + err := c.nativeClient.Delete(p) + return err +} diff --git a/state/remote/artifactory_test.go b/state/remote/artifactory_test.go new file mode 100644 index 0000000000..74197fa916 --- /dev/null +++ b/state/remote/artifactory_test.go @@ -0,0 +1,55 @@ +package remote + +import ( + "testing" +) + +func TestArtifactoryClient_impl(t *testing.T) { + var _ Client = new(ArtifactoryClient) +} + +func TestArtifactoryFactory(t *testing.T) { + // This test just instantiates the client. Shouldn't make any actual + // requests nor incur any costs. + + config := make(map[string]string) + + // Empty config is an error + _, err := artifactoryFactory(config) + if err == nil { + t.Fatalf("Empty config should be error") + } + + config["url"] = "http://artifactory.local:8081/artifactory" + config["repo"] = "terraform-repo" + config["subpath"] = "myproject" + + // For this test we'll provide the credentials as config. The + // acceptance tests implicitly test passing credentials as + // environment variables. + config["username"] = "test" + config["password"] = "testpass" + + client, err := artifactoryFactory(config) + if err != nil { + t.Fatalf("Error for valid config") + } + + artifactoryClient := client.(*ArtifactoryClient) + + if artifactoryClient.nativeClient.Config.BaseURL != "http://artifactory.local:8081/artifactory" { + t.Fatalf("Incorrect url was populated") + } + if artifactoryClient.nativeClient.Config.Username != "test" { + t.Fatalf("Incorrect username was populated") + } + if artifactoryClient.nativeClient.Config.Password != "testpass" { + t.Fatalf("Incorrect password was populated") + } + if artifactoryClient.repo != "terraform-repo" { + t.Fatalf("Incorrect repo was populated") + } + if artifactoryClient.subpath != "myproject" { + t.Fatalf("Incorrect subpath was populated") + } +} diff --git a/state/remote/atlas.go b/state/remote/atlas.go index f33f407cec..82b57d6c57 100644 --- a/state/remote/atlas.go +++ b/state/remote/atlas.go @@ -13,7 +13,7 @@ import ( "path" "strings" - "github.com/hashicorp/go-cleanhttp" + "github.com/hashicorp/go-retryablehttp" "github.com/hashicorp/terraform/terraform" ) @@ -77,14 +77,14 @@ type AtlasClient struct { Name string AccessToken string RunId string - HTTPClient *http.Client + HTTPClient *retryablehttp.Client conflictHandlingAttempted bool } func (c *AtlasClient) Get() (*Payload, error) { // Make the HTTP request - req, err := http.NewRequest("GET", c.url().String(), nil) + req, err := retryablehttp.NewRequest("GET", c.url().String(), nil) if err != nil { return nil, fmt.Errorf("Failed to make HTTP request: %v", err) } @@ -158,7 +158,7 @@ func (c *AtlasClient) Put(state []byte) error { b64 := base64.StdEncoding.EncodeToString(hash[:]) // Make the HTTP client and request - req, err := http.NewRequest("PUT", base.String(), bytes.NewReader(state)) + req, err := retryablehttp.NewRequest("PUT", base.String(), bytes.NewReader(state)) if err != nil { return fmt.Errorf("Failed to make HTTP request: %v", err) } @@ 
-191,7 +191,7 @@ func (c *AtlasClient) Put(state []byte) error { func (c *AtlasClient) Delete() error { // Make the HTTP request - req, err := http.NewRequest("DELETE", c.url().String(), nil) + req, err := retryablehttp.NewRequest("DELETE", c.url().String(), nil) if err != nil { return fmt.Errorf("Failed to make HTTP request: %v", err) } @@ -249,11 +249,11 @@ func (c *AtlasClient) url() *url.URL { } } -func (c *AtlasClient) http() *http.Client { +func (c *AtlasClient) http() *retryablehttp.Client { if c.HTTPClient != nil { return c.HTTPClient } - return cleanhttp.DefaultClient() + return retryablehttp.NewClient() } // Atlas returns an HTTP 409 - Conflict if the pushed state reports the same diff --git a/state/remote/consul.go b/state/remote/consul.go index 791f4dca37..6a3894b686 100644 --- a/state/remote/consul.go +++ b/state/remote/consul.go @@ -3,6 +3,7 @@ package remote import ( "crypto/md5" "fmt" + "strings" consulapi "github.com/hashicorp/consul/api" ) @@ -23,6 +24,17 @@ func consulFactory(conf map[string]string) (Client, error) { if scheme, ok := conf["scheme"]; ok && scheme != "" { config.Scheme = scheme } + if auth, ok := conf["http_auth"]; ok && auth != "" { + var username, password string + if strings.Contains(auth, ":") { + split := strings.SplitN(auth, ":", 2) + username = split[0] + password = split[1] + } else { + username = auth + } + config.HttpAuth = &consulapi.HttpBasicAuth{username, password} + } client, err := consulapi.NewClient(config) if err != nil { diff --git a/state/remote/remote.go b/state/remote/remote.go index 5337ad7b7b..4074c2c64e 100644 --- a/state/remote/remote.go +++ b/state/remote/remote.go @@ -36,12 +36,13 @@ func NewClient(t string, conf map[string]string) (Client, error) { // BuiltinClients is the list of built-in clients that can be used with // NewClient. var BuiltinClients = map[string]Factory{ - "atlas": atlasFactory, - "consul": consulFactory, - "etcd": etcdFactory, - "http": httpFactory, - "s3": s3Factory, - "swift": swiftFactory, + "atlas": atlasFactory, + "consul": consulFactory, + "etcd": etcdFactory, + "http": httpFactory, + "s3": s3Factory, + "swift": swiftFactory, + "artifactory": artifactoryFactory, // This is used for development purposes only. 
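	// (The _local client reads and writes a plain state file on local disk.)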
"_local": fileFactory, diff --git a/state/remote/s3.go b/state/remote/s3.go index 28bb7b5f9e..af322cb6f1 100644 --- a/state/remote/s3.go +++ b/state/remote/s3.go @@ -29,6 +29,11 @@ func s3Factory(conf map[string]string) (Client, error) { return nil, fmt.Errorf("missing 'key' configuration") } + endpoint, ok := conf["endpoint"] + if !ok { + endpoint = os.Getenv("AWS_S3_ENDPOINT") + } + regionName, ok := conf["region"] if !ok { regionName = os.Getenv("AWS_DEFAULT_REGION") @@ -53,6 +58,7 @@ func s3Factory(conf map[string]string) (Client, error) { if raw, ok := conf["acl"]; ok { acl = raw } + kmsKeyID := conf["kms_key_id"] accessKeyId := conf["access_key"] secretAccessKey := conf["secret_key"] @@ -77,6 +83,7 @@ func s3Factory(conf map[string]string) (Client, error) { awsConfig := &aws.Config{ Credentials: credentialsProvider, + Endpoint: aws.String(endpoint), Region: aws.String(regionName), HTTPClient: cleanhttp.DefaultClient(), } @@ -89,6 +96,7 @@ func s3Factory(conf map[string]string) (Client, error) { keyName: keyName, serverSideEncryption: serverSideEncryption, acl: acl, + kmsKeyID: kmsKeyID, }, nil } @@ -98,6 +106,7 @@ type S3Client struct { keyName string serverSideEncryption bool acl string + kmsKeyID string } func (c *S3Client) Get() (*Payload, error) { @@ -150,7 +159,12 @@ func (c *S3Client) Put(data []byte) error { } if c.serverSideEncryption { - i.ServerSideEncryption = aws.String("AES256") + if c.kmsKeyID != "" { + i.SSEKMSKeyId = &c.kmsKeyID + i.ServerSideEncryption = aws.String("aws:kms") + } else { + i.ServerSideEncryption = aws.String("AES256") + } } if c.acl != "" { diff --git a/terraform/context_apply_test.go b/terraform/context_apply_test.go index b6fea26fc2..d88be4c7d9 100644 --- a/terraform/context_apply_test.go +++ b/terraform/context_apply_test.go @@ -3359,6 +3359,60 @@ aws_instance.bar: `) } +// https://github.com/hashicorp/terraform/issues/4462 +func TestContext2Apply_targetedDestroyModule(t *testing.T) { + m := testModule(t, "apply-targeted-module") + p := testProvider("aws") + p.ApplyFn = testApplyFn + p.DiffFn = testDiffFn + ctx := testContext2(t, &ContextOpts{ + Module: m, + Providers: map[string]ResourceProviderFactory{ + "aws": testProviderFuncFixed(p), + }, + State: &State{ + Modules: []*ModuleState{ + &ModuleState{ + Path: rootModulePath, + Resources: map[string]*ResourceState{ + "aws_instance.foo": resourceState("aws_instance", "i-bcd345"), + "aws_instance.bar": resourceState("aws_instance", "i-abc123"), + }, + }, + &ModuleState{ + Path: []string{"root", "child"}, + Resources: map[string]*ResourceState{ + "aws_instance.foo": resourceState("aws_instance", "i-bcd345"), + "aws_instance.bar": resourceState("aws_instance", "i-abc123"), + }, + }, + }, + }, + Targets: []string{"module.child.aws_instance.foo"}, + Destroy: true, + }) + + if _, err := ctx.Plan(); err != nil { + t.Fatalf("err: %s", err) + } + + state, err := ctx.Apply() + if err != nil { + t.Fatalf("err: %s", err) + } + + checkStateString(t, state, ` +aws_instance.bar: + ID = i-abc123 +aws_instance.foo: + ID = i-bcd345 + +module.child: + aws_instance.bar: + ID = i-abc123 + `) +} + func TestContext2Apply_targetedDestroyCountIndex(t *testing.T) { m := testModule(t, "apply-targeted-count") p := testProvider("aws") diff --git a/terraform/context_plan_test.go b/terraform/context_plan_test.go index e91fc77472..c667109be6 100644 --- a/terraform/context_plan_test.go +++ b/terraform/context_plan_test.go @@ -104,6 +104,29 @@ func TestContext2Plan_emptyDiff(t *testing.T) { } } +func TestContext2Plan_escapedVar(t 
*testing.T) { + m := testModule(t, "plan-escaped-var") + p := testProvider("aws") + p.DiffFn = testDiffFn + ctx := testContext2(t, &ContextOpts{ + Module: m, + Providers: map[string]ResourceProviderFactory{ + "aws": testProviderFuncFixed(p), + }, + }) + + plan, err := ctx.Plan() + if err != nil { + t.Fatalf("err: %s", err) + } + + actual := strings.TrimSpace(plan.String()) + expected := strings.TrimSpace(testTerraformPlanEscapedVarStr) + if actual != expected { + t.Fatalf("bad:\n%s", actual) + } +} + func TestContext2Plan_minimal(t *testing.T) { m := testModule(t, "plan-empty") p := testProvider("aws") @@ -1647,6 +1670,12 @@ func TestContext2Plan_targetedOrphan(t *testing.T) { ID: "i-789xyz", }, }, + "aws_instance.nottargeted": &ResourceState{ + Type: "aws_instance", + Primary: &InstanceState{ + ID: "i-abc123", + }, + }, }, }, }, @@ -1667,8 +1696,150 @@ DESTROY: aws_instance.orphan STATE: +aws_instance.nottargeted: + ID = i-abc123 aws_instance.orphan: - ID = i-789xyz`) + ID = i-789xyz +`) + if actual != expected { + t.Fatalf("expected:\n%s\n\ngot:\n%s", expected, actual) + } +} + +// https://github.com/hashicorp/terraform/issues/2538 +func TestContext2Plan_targetedModuleOrphan(t *testing.T) { + m := testModule(t, "plan-targeted-module-orphan") + p := testProvider("aws") + p.DiffFn = testDiffFn + ctx := testContext2(t, &ContextOpts{ + Module: m, + Providers: map[string]ResourceProviderFactory{ + "aws": testProviderFuncFixed(p), + }, + State: &State{ + Modules: []*ModuleState{ + &ModuleState{ + Path: []string{"root", "child"}, + Resources: map[string]*ResourceState{ + "aws_instance.orphan": &ResourceState{ + Type: "aws_instance", + Primary: &InstanceState{ + ID: "i-789xyz", + }, + }, + "aws_instance.nottargeted": &ResourceState{ + Type: "aws_instance", + Primary: &InstanceState{ + ID: "i-abc123", + }, + }, + }, + }, + }, + }, + Destroy: true, + Targets: []string{"module.child.aws_instance.orphan"}, + }) + + plan, err := ctx.Plan() + if err != nil { + t.Fatalf("err: %s", err) + } + + actual := strings.TrimSpace(plan.String()) + expected := strings.TrimSpace(`DIFF: + +module.child: + DESTROY: aws_instance.orphan + +STATE: + +module.child: + aws_instance.nottargeted: + ID = i-abc123 + aws_instance.orphan: + ID = i-789xyz +`) + if actual != expected { + t.Fatalf("expected:\n%s\n\ngot:\n%s", expected, actual) + } +} + +// https://github.com/hashicorp/terraform/issues/4515 +func TestContext2Plan_targetedOverTen(t *testing.T) { + m := testModule(t, "plan-targeted-over-ten") + p := testProvider("aws") + p.DiffFn = testDiffFn + + resources := make(map[string]*ResourceState) + var expectedState []string + for i := 0; i < 13; i++ { + key := fmt.Sprintf("aws_instance.foo.%d", i) + id := fmt.Sprintf("i-abc%d", i) + resources[key] = &ResourceState{ + Type: "aws_instance", + Primary: &InstanceState{ID: id}, + } + expectedState = append(expectedState, + fmt.Sprintf("%s:\n ID = %s\n", key, id)) + } + ctx := testContext2(t, &ContextOpts{ + Module: m, + Providers: map[string]ResourceProviderFactory{ + "aws": testProviderFuncFixed(p), + }, + State: &State{ + Modules: []*ModuleState{ + &ModuleState{ + Path: rootModulePath, + Resources: resources, + }, + }, + }, + Targets: []string{"aws_instance.foo[1]"}, + }) + + plan, err := ctx.Plan() + if err != nil { + t.Fatalf("err: %s", err) + } + + actual := strings.TrimSpace(plan.String()) + sort.Strings(expectedState) + expected := strings.TrimSpace(` +DIFF: + + + +STATE: + +aws_instance.foo.0: + ID = i-abc0 +aws_instance.foo.1: + ID = i-abc1 +aws_instance.foo.10: + ID 
= i-abc10 +aws_instance.foo.11: + ID = i-abc11 +aws_instance.foo.12: + ID = i-abc12 +aws_instance.foo.2: + ID = i-abc2 +aws_instance.foo.3: + ID = i-abc3 +aws_instance.foo.4: + ID = i-abc4 +aws_instance.foo.5: + ID = i-abc5 +aws_instance.foo.6: + ID = i-abc6 +aws_instance.foo.7: + ID = i-abc7 +aws_instance.foo.8: + ID = i-abc8 +aws_instance.foo.9: + ID = i-abc9 + `) if actual != expected { t.Fatalf("expected:\n%s\n\ngot:\n%s", expected, actual) } diff --git a/terraform/diff.go b/terraform/diff.go index 87ae84ccf6..f1a41efb2e 100644 --- a/terraform/diff.go +++ b/terraform/diff.go @@ -466,6 +466,13 @@ func (d *InstanceDiff) Same(d2 *InstanceDiff) (bool, string) { ok = true } + // Similarly, in a RequiresNew scenario, a list that shows up in the plan + // diff can disappear from the apply diff, which is calculated from an + // empty state. + if d.RequiresNew() && strings.HasSuffix(k, ".#") { + ok = true + } + if !ok { return false, fmt.Sprintf("attribute mismatch: %s", k) } diff --git a/terraform/diff_test.go b/terraform/diff_test.go index 4eeb8d3879..6dbdd505e8 100644 --- a/terraform/diff_test.go +++ b/terraform/diff_test.go @@ -553,6 +553,43 @@ func TestInstanceDiffSame(t *testing.T) { true, "", }, + + // Another thing that can occur in DESTROY/CREATE scenarios is that list + // values that are going to zero have diffs that show up at plan time but + // are gone at apply time. The NewRemoved handling catches the fields and + // treats them as OK, but it also needs to treat the .# field itself as + // okay to be present in the old diff but not in the new one. + { + &InstanceDiff{ + Attributes: map[string]*ResourceAttrDiff{ + "reqnew": &ResourceAttrDiff{ + Old: "old", + New: "new", + RequiresNew: true, + }, + "somemap.#": &ResourceAttrDiff{ + Old: "1", + New: "0", + }, + "somemap.oldkey": &ResourceAttrDiff{ + Old: "long ago", + New: "", + NewRemoved: true, + }, + }, + }, + &InstanceDiff{ + Attributes: map[string]*ResourceAttrDiff{ + "reqnew": &ResourceAttrDiff{ + Old: "", + New: "new", + RequiresNew: true, + }, + }, + }, + true, + "", + }, } for i, tc := range cases { diff --git a/terraform/graph_config_node_resource.go b/terraform/graph_config_node_resource.go index 9fc696c2a3..1d36274282 100644 --- a/terraform/graph_config_node_resource.go +++ b/terraform/graph_config_node_resource.go @@ -163,9 +163,8 @@ func (n *GraphNodeConfigResource) DynamicExpand(ctx EvalContext) (*Graph, error) // expand orphans, which have all the same semantics in a destroy // as a primary. 
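// NOTE: per-orphan target filtering by string prefix used to happen at
// this point; it now lives in the TargetsTransformer appended below,
// which compares fully parsed resource addresses instead.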
steps = append(steps, &OrphanTransformer{ - State: state, - View: n.Resource.Id(), - Targets: n.Targets, + State: state, + View: n.Resource.Id(), }) steps = append(steps, &DeposedTransformer{ @@ -181,6 +180,12 @@ func (n *GraphNodeConfigResource) DynamicExpand(ctx EvalContext) (*Graph, error) }) } + // We always want to apply targeting + steps = append(steps, &TargetsTransformer{ + ParsedTargets: n.Targets, + Destroy: n.DestroyMode != DestroyNone, + }) + // Always end with the root being added steps = append(steps, &RootTransformer{}) diff --git a/terraform/graph_walk_context.go b/terraform/graph_walk_context.go index 50f119be69..c6197223f0 100644 --- a/terraform/graph_walk_context.go +++ b/terraform/graph_walk_context.go @@ -96,7 +96,7 @@ func (w *ContextGraphWalker) EnterPath(path []string) EvalContext { } func (w *ContextGraphWalker) EnterEvalTree(v dag.Vertex, n EvalNode) EvalNode { - log.Printf("[INFO] Entering eval tree: %s", dag.VertexName(v)) + log.Printf("[TRACE] Entering eval tree: %s", dag.VertexName(v)) // Acquire a lock on the semaphore w.Context.parallelSem.Acquire() @@ -108,7 +108,7 @@ func (w *ContextGraphWalker) EnterEvalTree(v dag.Vertex, n EvalNode) EvalNode { func (w *ContextGraphWalker) ExitEvalTree( v dag.Vertex, output interface{}, err error) error { - log.Printf("[INFO] Exiting eval tree: %s", dag.VertexName(v)) + log.Printf("[TRACE] Exiting eval tree: %s", dag.VertexName(v)) // Release the semaphore w.Context.parallelSem.Release() diff --git a/terraform/resource_address_test.go b/terraform/resource_address_test.go index 0b10a24fde..03e762a742 100644 --- a/terraform/resource_address_test.go +++ b/terraform/resource_address_test.go @@ -28,6 +28,15 @@ func TestParseResourceAddress(t *testing.T) { Index: 2, }, }, + "implicit primary, explicit index over ten": { + Input: "aws_instance.foo[12]", + Expected: &ResourceAddress{ + Type: "aws_instance", + Name: "foo", + InstanceType: TypePrimary, + Index: 12, + }, + }, "explicit primary, explicit index": { Input: "aws_instance.foo.primary[2]", Expected: &ResourceAddress{ @@ -184,6 +193,21 @@ func TestResourceAddressEquals(t *testing.T) { }, Expect: true, }, + "index over ten": { + Address: &ResourceAddress{ + Type: "aws_instance", + Name: "foo", + InstanceType: TypePrimary, + Index: 1, + }, + Other: &ResourceAddress{ + Type: "aws_instance", + Name: "foo", + InstanceType: TypePrimary, + Index: 13, + }, + Expect: false, + }, "different type": { Address: &ResourceAddress{ Type: "aws_instance", diff --git a/terraform/state.go b/terraform/state.go index 8734cfc178..a20fd7ba49 100644 --- a/terraform/state.go +++ b/terraform/state.go @@ -9,6 +9,7 @@ import ( "log" "reflect" "sort" + "strconv" "strings" "github.com/hashicorp/terraform/config" @@ -661,6 +662,65 @@ func (m *ModuleState) String() string { return buf.String() } +// ResourceStateKey is a structured representation of the key used for the +// ModuleState.Resources mapping +type ResourceStateKey struct { + Name string + Type string + Index int +} + +// Equal determines whether two ResourceStateKeys are the same +func (rsk *ResourceStateKey) Equal(other *ResourceStateKey) bool { + if rsk == nil || other == nil { + return false + } + if rsk.Type != other.Type { + return false + } + if rsk.Name != other.Name { + return false + } + if rsk.Index != other.Index { + return false + } + return true +} + +func (rsk *ResourceStateKey) String() string { + if rsk == nil { + return "" + } + if rsk.Index == -1 { + return fmt.Sprintf("%s.%s", rsk.Type, rsk.Name) + } + return 
fmt.Sprintf("%s.%s.%d", rsk.Type, rsk.Name, rsk.Index) +} + +// ParseResourceStateKey accepts a key in the format used by +// ModuleState.Resources and returns a resource name and resource index. In the +// state, a resource has the format "type.name.index" or "type.name". In the +// latter case, the index is returned as -1. +func ParseResourceStateKey(k string) (*ResourceStateKey, error) { + parts := strings.Split(k, ".") + if len(parts) < 2 || len(parts) > 3 { + return nil, fmt.Errorf("Malformed resource state key: %s", k) + } + rsk := &ResourceStateKey{ + Type: parts[0], + Name: parts[1], + Index: -1, + } + if len(parts) == 3 { + index, err := strconv.Atoi(parts[2]) + if err != nil { + return nil, fmt.Errorf("Malformed resource state key index: %s", k) + } + rsk.Index = index + } + return rsk, nil +} + // ResourceState holds the state of a resource that is used so that // a provider can find and manage an existing resource as well as for // storing attributes that are used to populate variables of child diff --git a/terraform/state_test.go b/terraform/state_test.go index 8d24a8e75c..c3bfb18df3 100644 --- a/terraform/state_test.go +++ b/terraform/state_test.go @@ -895,3 +895,57 @@ func TestUpgradeV1State(t *testing.T) { t.Fatalf("bad: %#v", bt) } } + +func TestParseResourceStateKey(t *testing.T) { + cases := []struct { + Input string + Expected *ResourceStateKey + ExpectedErr bool + }{ + { + Input: "aws_instance.foo.3", + Expected: &ResourceStateKey{ + Type: "aws_instance", + Name: "foo", + Index: 3, + }, + }, + { + Input: "aws_instance.foo.0", + Expected: &ResourceStateKey{ + Type: "aws_instance", + Name: "foo", + Index: 0, + }, + }, + { + Input: "aws_instance.foo", + Expected: &ResourceStateKey{ + Type: "aws_instance", + Name: "foo", + Index: -1, + }, + }, + { + Input: "aws_instance.foo.malformed", + ExpectedErr: true, + }, + { + Input: "aws_instance.foo.malformedwithnumber.123", + ExpectedErr: true, + }, + { + Input: "malformed", + ExpectedErr: true, + }, + } + for _, tc := range cases { + rsk, err := ParseResourceStateKey(tc.Input) + if rsk != nil && tc.Expected != nil && !rsk.Equal(tc.Expected) { + t.Fatalf("%s: expected %s, got %s", tc.Input, tc.Expected, rsk) + } + if (err != nil) != tc.ExpectedErr { + t.Fatalf("%s: expected err: %t, got %s", tc.Input, tc.ExpectedErr, err) + } + } +} diff --git a/terraform/terraform_test.go b/terraform/terraform_test.go index 3b1653f431..0fc0b71fec 100644 --- a/terraform/terraform_test.go +++ b/terraform/terraform_test.go @@ -983,6 +983,18 @@ STATE: ` +const testTerraformPlanEscapedVarStr = ` +DIFF: + +CREATE: aws_instance.foo + foo: "" => "bar-${baz}" + type: "" => "aws_instance" + +STATE: + + +` + const testTerraformPlanModulesStr = ` DIFF: diff --git a/terraform/test-fixtures/plan-escaped-var/main.tf b/terraform/test-fixtures/plan-escaped-var/main.tf new file mode 100644 index 0000000000..5a017207cc --- /dev/null +++ b/terraform/test-fixtures/plan-escaped-var/main.tf @@ -0,0 +1,3 @@ +resource "aws_instance" "foo" { + foo = "bar-$${baz}" +} diff --git a/terraform/test-fixtures/plan-targeted-module-orphan/main.tf b/terraform/test-fixtures/plan-targeted-module-orphan/main.tf new file mode 100644 index 0000000000..2b33fedaed --- /dev/null +++ b/terraform/test-fixtures/plan-targeted-module-orphan/main.tf @@ -0,0 +1,6 @@ +# Once opon a time, there was a child module here +/* +module "child" { + source = "./child" +} +*/ diff --git a/terraform/test-fixtures/plan-targeted-over-ten/main.tf b/terraform/test-fixtures/plan-targeted-over-ten/main.tf new 
file mode 100644 index 0000000000..1c7bc8769e --- /dev/null +++ b/terraform/test-fixtures/plan-targeted-over-ten/main.tf @@ -0,0 +1,3 @@ +resource "aws_instance" "foo" { + count = 13 +} diff --git a/terraform/transform_orphan.go b/terraform/transform_orphan.go index 13e8fbf941..d221e57db5 100644 --- a/terraform/transform_orphan.go +++ b/terraform/transform_orphan.go @@ -2,7 +2,6 @@ package terraform import ( "fmt" - "strings" "github.com/hashicorp/terraform/config" "github.com/hashicorp/terraform/config/module" @@ -26,11 +25,6 @@ type OrphanTransformer struct { // using the graph path. Module *module.Tree - // Targets are user-specified resources to target. We need to be aware of - // these so we don't improperly identify orphans when they've just been - // filtered out of the graph via targeting. - Targets []ResourceAddress - // View, if non-nil will set a view on the module state. View string } @@ -68,22 +62,6 @@ func (t *OrphanTransformer) Transform(g *Graph) error { } resourceOrphans := state.Orphans(config) - if len(t.Targets) > 0 { - var targetedOrphans []string - for _, o := range resourceOrphans { - targeted := false - for _, t := range t.Targets { - prefix := fmt.Sprintf("%s.%s.%d", t.Type, t.Name, t.Index) - if strings.HasPrefix(o, prefix) { - targeted = true - } - } - if targeted { - targetedOrphans = append(targetedOrphans, o) - } - } - resourceOrphans = targetedOrphans - } resourceVertexes = make([]dag.Vertex, len(resourceOrphans)) for i, k := range resourceOrphans { @@ -95,11 +73,15 @@ func (t *OrphanTransformer) Transform(g *Graph) error { rs := state.Resources[k] + rsk, err := ParseResourceStateKey(k) + if err != nil { + return err + } resourceVertexes[i] = g.Add(&graphNodeOrphanResource{ - ResourceName: k, - ResourceType: rs.Type, - Provider: rs.Provider, - dependentOn: rs.Dependencies, + Path: g.Path, + ResourceKey: rsk, + Provider: rs.Provider, + dependentOn: rs.Dependencies, }) } } @@ -175,15 +157,25 @@ func (n *graphNodeOrphanModule) Expand(b GraphBuilder) (GraphNodeSubgraph, error // graphNodeOrphanResource is the graph vertex representing an orphan resource.. type graphNodeOrphanResource struct { - ResourceName string - ResourceType string - Provider string + Path []string + ResourceKey *ResourceStateKey + Provider string dependentOn []string } +func (n *graphNodeOrphanResource) ConfigType() GraphNodeConfigType { + return GraphNodeConfigTypeResource +} + func (n *graphNodeOrphanResource) ResourceAddress() *ResourceAddress { - return n.ResourceAddress() + return &ResourceAddress{ + Index: n.ResourceKey.Index, + InstanceType: TypePrimary, + Name: n.ResourceKey.Name, + Path: n.Path[1:], + Type: n.ResourceKey.Type, + } } func (n *graphNodeOrphanResource) DependableName() []string { @@ -202,11 +194,11 @@ func (n *graphNodeOrphanResource) Flatten(p []string) (dag.Vertex, error) { } func (n *graphNodeOrphanResource) Name() string { - return fmt.Sprintf("%s (orphan)", n.ResourceName) + return fmt.Sprintf("%s (orphan)", n.ResourceKey) } func (n *graphNodeOrphanResource) ProvidedBy() []string { - return []string{resourceProvider(n.ResourceName, n.Provider)} + return []string{resourceProvider(n.ResourceKey.Type, n.Provider)} } // GraphNodeEvalable impl. 
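As a quick illustration of the ParseResourceStateKey helper added above, here is a hypothetical Example-style test (a sketch only, assumed to live in the terraform package with fmt imported; it is not part of this change):

```go
// Sketch: demonstrates round-tripping of state keys with and without
// a count index.
func ExampleParseResourceStateKey() {
	for _, k := range []string{"aws_instance.foo", "aws_instance.foo.3"} {
		rsk, err := ParseResourceStateKey(k)
		if err != nil {
			panic(err)
		}
		// Index is -1 when the key has no count suffix, and String()
		// round-trips both forms losslessly.
		fmt.Printf("%s -> %s %s %d\n", rsk, rsk.Type, rsk.Name, rsk.Index)
	}
	// Output:
	// aws_instance.foo -> aws_instance foo -1
	// aws_instance.foo.3 -> aws_instance foo 3
}
```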
@@ -217,7 +209,7 @@ func (n *graphNodeOrphanResource) EvalTree() EvalNode { seq := &EvalSequence{Nodes: make([]EvalNode, 0, 5)} // Build instance info - info := &InstanceInfo{Id: n.ResourceName, Type: n.ResourceType} + info := &InstanceInfo{Id: n.ResourceKey.String(), Type: n.ResourceKey.Type} seq.Nodes = append(seq.Nodes, &EvalInstanceInfo{Info: info}) // Refresh the resource @@ -230,7 +222,7 @@ func (n *graphNodeOrphanResource) EvalTree() EvalNode { Output: &provider, }, &EvalReadState{ - Name: n.ResourceName, + Name: n.ResourceKey.String(), Output: &state, }, &EvalRefresh{ @@ -240,8 +232,8 @@ func (n *graphNodeOrphanResource) EvalTree() EvalNode { Output: &state, }, &EvalWriteState{ - Name: n.ResourceName, - ResourceType: n.ResourceType, + Name: n.ResourceKey.String(), + ResourceType: n.ResourceKey.Type, Provider: n.Provider, Dependencies: n.DependentOn(), State: &state, @@ -257,7 +249,7 @@ func (n *graphNodeOrphanResource) EvalTree() EvalNode { Node: &EvalSequence{ Nodes: []EvalNode{ &EvalReadState{ - Name: n.ResourceName, + Name: n.ResourceKey.String(), Output: &state, }, &EvalDiffDestroy{ @@ -266,7 +258,7 @@ func (n *graphNodeOrphanResource) EvalTree() EvalNode { Output: &diff, }, &EvalWriteDiff{ - Name: n.ResourceName, + Name: n.ResourceKey.String(), Diff: &diff, }, }, @@ -280,7 +272,7 @@ func (n *graphNodeOrphanResource) EvalTree() EvalNode { Node: &EvalSequence{ Nodes: []EvalNode{ &EvalReadDiff{ - Name: n.ResourceName, + Name: n.ResourceKey.String(), Diff: &diff, }, &EvalGetProvider{ @@ -288,7 +280,7 @@ func (n *graphNodeOrphanResource) EvalTree() EvalNode { Output: &provider, }, &EvalReadState{ - Name: n.ResourceName, + Name: n.ResourceKey.String(), Output: &state, }, &EvalApply{ @@ -300,8 +292,8 @@ func (n *graphNodeOrphanResource) EvalTree() EvalNode { Error: &err, }, &EvalWriteState{ - Name: n.ResourceName, - ResourceType: n.ResourceType, + Name: n.ResourceKey.String(), + ResourceType: n.ResourceKey.Type, Provider: n.Provider, Dependencies: n.DependentOn(), State: &state, @@ -320,7 +312,7 @@ func (n *graphNodeOrphanResource) EvalTree() EvalNode { } func (n *graphNodeOrphanResource) dependableName() string { - return n.ResourceName + return n.ResourceKey.String() } // GraphNodeDestroyable impl. 
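The practical payoff of carrying a parsed ResourceStateKey instead of a raw name string is that the orphan node can now report a structured ResourceAddress, making -target matching exact rather than prefix-based. A minimal sketch, with invented values, again assuming code inside the terraform package:

```go
// Hypothetical values, for illustration only.
n := &graphNodeOrphanResource{
	Path:        []string{"root", "child"},
	ResourceKey: &ResourceStateKey{Type: "aws_instance", Name: "foo", Index: 12},
}

addr := n.ResourceAddress()
fmt.Println(addr.Type, addr.Name, addr.Index) // aws_instance foo 12

// Under the old scheme, a target of aws_instance.foo.1 string-prefix-matched
// the key "aws_instance.foo.12" as well; comparing ResourceAddress indexes
// (1 != 12) is what fixes targeting for counts over ten.
```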
diff --git a/terraform/transform_orphan_test.go b/terraform/transform_orphan_test.go index 85bb4637bb..76fbaa6a1b 100644 --- a/terraform/transform_orphan_test.go +++ b/terraform/transform_orphan_test.go @@ -333,17 +333,18 @@ func TestGraphNodeOrphanResource_impl(t *testing.T) { var _ dag.Vertex = new(graphNodeOrphanResource) var _ dag.NamedVertex = new(graphNodeOrphanResource) var _ GraphNodeProviderConsumer = new(graphNodeOrphanResource) + var _ GraphNodeAddressable = new(graphNodeOrphanResource) } func TestGraphNodeOrphanResource_ProvidedBy(t *testing.T) { - n := &graphNodeOrphanResource{ResourceName: "aws_instance.foo"} + n := &graphNodeOrphanResource{ResourceKey: &ResourceStateKey{Type: "aws_instance"}} if v := n.ProvidedBy(); v[0] != "aws" { t.Fatalf("bad: %#v", v) } } func TestGraphNodeOrphanResource_ProvidedBy_alias(t *testing.T) { - n := &graphNodeOrphanResource{ResourceName: "aws_instance.foo", Provider: "aws.bar"} + n := &graphNodeOrphanResource{ResourceKey: &ResourceStateKey{Type: "aws_instance"}, Provider: "aws.bar"} if v := n.ProvidedBy(); v[0] != "aws.bar" { t.Fatalf("bad: %#v", v) } diff --git a/terraform/transform_targets.go b/terraform/transform_targets.go index cab8c8b1ec..db577b361f 100644 --- a/terraform/transform_targets.go +++ b/terraform/transform_targets.go @@ -13,20 +13,25 @@ type TargetsTransformer struct { // List of targeted resource names specified by the user Targets []string + // List of parsed targets, provided by callers like ResourceCountTransform + // that already have the targets parsed + ParsedTargets []ResourceAddress + // Set to true when we're in a `terraform destroy` or a // `terraform plan -destroy` Destroy bool } func (t *TargetsTransformer) Transform(g *Graph) error { - if len(t.Targets) > 0 { - // TODO: duplicated in OrphanTransformer; pull up parsing earlier + if len(t.Targets) > 0 && len(t.ParsedTargets) == 0 { addrs, err := t.parseTargetAddresses() if err != nil { return err } - - targetedNodes, err := t.selectTargetedNodes(g, addrs) + t.ParsedTargets = addrs + } + if len(t.ParsedTargets) > 0 { + targetedNodes, err := t.selectTargetedNodes(g, t.ParsedTargets) if err != nil { return err } diff --git a/terraform/version.go b/terraform/version.go index 043fdcf052..210eb87ffb 100644 --- a/terraform/version.go +++ b/terraform/version.go @@ -1,7 +1,7 @@ package terraform // The main version number that is being run at the moment. -const Version = "0.6.8" +const Version = "0.6.10" // A pre-release marker for the version. If this is "" (empty string) // then it means that it is a final release. 
Otherwise, this is a pre-release diff --git a/website/.buildpacks b/website/.buildpacks deleted file mode 100644 index f85b304c33..0000000000 --- a/website/.buildpacks +++ /dev/null @@ -1,2 +0,0 @@ -https://github.com/heroku/heroku-buildpack-ruby.git -https://github.com/hashicorp/heroku-buildpack-middleman.git diff --git a/website/.bundle/config b/website/.bundle/config new file mode 100644 index 0000000000..df11c7518e --- /dev/null +++ b/website/.bundle/config @@ -0,0 +1,2 @@ +--- +BUNDLE_DISABLE_SHARED_GEMS: '1' diff --git a/website/Gemfile.lock b/website/Gemfile.lock index 725b16df37..6beb2d721b 100644 --- a/website/Gemfile.lock +++ b/website/Gemfile.lock @@ -1,6 +1,6 @@ GIT remote: https://github.com/hashicorp/middleman-hashicorp - revision: 15cbda0cf1d963fa71292dee921229e7ee618272 + revision: 953baf8762b915cf57553bcc82bc946ad777056f specs: middleman-hashicorp (0.2.0) bootstrap-sass (~> 3.3) @@ -21,18 +21,18 @@ GIT GEM remote: https://rubygems.org/ specs: - activesupport (4.2.4) + activesupport (4.2.5) i18n (~> 0.7) json (~> 1.7, >= 1.7.7) minitest (~> 5.1) thread_safe (~> 0.3, >= 0.3.4) tzinfo (~> 1.1) - autoprefixer-rails (6.0.3) + autoprefixer-rails (6.3.0) execjs json - bootstrap-sass (3.3.5.1) - autoprefixer-rails (>= 5.0.0.1) - sass (>= 3.3.0) + bootstrap-sass (3.3.6) + autoprefixer-rails (>= 5.2.1) + sass (>= 3.3.4) builder (3.2.2) capybara (2.4.4) mime-types (>= 1.16) @@ -40,11 +40,11 @@ GEM rack (>= 1.0.0) rack-test (>= 0.5.4) xpath (~> 2.0) - chunky_png (1.3.4) + chunky_png (1.3.5) coffee-script (2.4.1) coffee-script-source execjs - coffee-script-source (1.9.1.1) + coffee-script-source (1.10.0) commonjs (0.2.7) compass (1.0.3) chunky_png (~> 1.2) @@ -63,7 +63,7 @@ GEM eventmachine (>= 0.12.9) http_parser.rb (~> 0.6.0) erubis (2.7.0) - eventmachine (1.0.8) + eventmachine (1.0.9.1) execjs (2.6.0) ffi (1.9.10) git-version-bump (0.15.1) @@ -80,21 +80,21 @@ GEM less (2.6.0) commonjs (~> 0.2.7) libv8 (3.16.14.13) - listen (3.0.3) + listen (3.0.5) rb-fsevent (>= 0.9.3) rb-inotify (>= 0.9) - middleman (3.4.0) + middleman (3.4.1) coffee-script (~> 2.2) compass (>= 1.0.0, < 2.0.0) compass-import-once (= 1.0.5) execjs (~> 2.0) haml (>= 4.0.5) kramdown (~> 1.2) - middleman-core (= 3.4.0) + middleman-core (= 3.4.1) middleman-sprockets (>= 3.1.2) sass (>= 3.4.0, < 4.0) uglifier (~> 2.5) - middleman-core (3.4.0) + middleman-core (3.4.1) activesupport (~> 4.1) bundler (~> 1.1) capybara (~> 2.4.4) @@ -106,7 +106,7 @@ GEM rack (>= 1.4.5, < 2.0) thor (>= 0.15.2, < 2.0) tilt (~> 1.4.1, < 2.0) - middleman-livereload (3.4.3) + middleman-livereload (3.4.6) em-websocket (~> 0.5.1) middleman-core (>= 3.3) rack-livereload (~> 0.3.15) @@ -118,15 +118,17 @@ GEM sprockets (~> 2.12.1) sprockets-helpers (~> 1.1.0) sprockets-sass (~> 1.3.0) - middleman-syntax (2.0.0) - middleman-core (~> 3.2) + middleman-syntax (2.1.0) + middleman-core (>= 3.2) rouge (~> 1.0) - mime-types (2.6.2) - mini_portile (0.6.2) - minitest (5.8.1) + mime-types (3.0) + mime-types-data (~> 3.2015) + mime-types-data (3.2015.1120) + mini_portile2 (2.0.0) + minitest (5.8.3) multi_json (1.11.2) - nokogiri (1.6.6.2) - mini_portile (~> 0.6.0) + nokogiri (1.6.7.1) + mini_portile2 (~> 2.0.0.rc2) padrino-helpers (0.12.5) i18n (~> 0.6, >= 0.6.7) padrino-support (= 0.12.5) @@ -145,13 +147,13 @@ GEM rack-ssl-enforcer (0.2.9) rack-test (0.6.3) rack (>= 1.0) - rb-fsevent (0.9.6) + rb-fsevent (0.9.7) rb-inotify (0.9.5) ffi (>= 0.5.0) - redcarpet (3.3.3) + redcarpet (3.3.4) ref (2.0.0) rouge (1.10.1) - sass (3.4.19) + sass (3.4.21) sprockets (2.12.4) 
hike (~> 1.2) multi_json (~> 1.0) @@ -186,3 +188,6 @@ PLATFORMS DEPENDENCIES middleman-hashicorp! + +BUNDLED WITH + 1.10.6 diff --git a/website/Procfile b/website/Procfile deleted file mode 100644 index 58361e473f..0000000000 --- a/website/Procfile +++ /dev/null @@ -1 +0,0 @@ -web: bundle exec thin start -p $PORT diff --git a/website/Vagrantfile b/website/Vagrantfile index 4bfc410e20..6507bea16a 100644 --- a/website/Vagrantfile +++ b/website/Vagrantfile @@ -28,7 +28,7 @@ bundle exec middleman server & SCRIPT Vagrant.configure(VAGRANTFILE_API_VERSION) do |config| - config.vm.box = "chef/ubuntu-12.04" + config.vm.box = "bento/ubuntu-12.04" config.vm.network "private_network", ip: "33.33.30.10" config.vm.provision "shell", inline: $script, privileged: false config.vm.synced_folder ".", "/vagrant", type: "rsync" diff --git a/website/config.rb b/website/config.rb index 236bbceb88..ea34906d47 100644 --- a/website/config.rb +++ b/website/config.rb @@ -2,6 +2,6 @@ set :base_url, "https://www.terraform.io/" activate :hashicorp do |h| h.name = "terraform" - h.version = "0.6.7" + h.version = "0.6.9" h.github_slug = "hashicorp/terraform" end diff --git a/website/packer.json b/website/packer.json new file mode 100644 index 0000000000..b230c7e510 --- /dev/null +++ b/website/packer.json @@ -0,0 +1,41 @@ +{ + "variables": { + "aws_access_key_id": "{{ env `AWS_ACCESS_KEY_ID` }}", + "aws_secret_access_key": "{{ env `AWS_SECRET_ACCESS_KEY` }}", + "aws_region": "{{ env `AWS_REGION` }}", + "fastly_api_key": "{{ env `FASTLY_API_KEY` }}" + }, + "builders": [ + { + "type": "docker", + "image": "ruby:2.3-slim", + "commit": "true" + } + ], + "provisioners": [ + { + "type": "file", + "source": ".", + "destination": "/app" + }, + { + "type": "shell", + "environment_vars": [ + "AWS_ACCESS_KEY_ID={{ user `aws_access_key_id` }}", + "AWS_SECRET_ACCESS_KEY={{ user `aws_secret_access_key` }}", + "AWS_REGION={{ user `aws_region` }}", + "FASTLY_API_KEY={{ user `fastly_api_key` }}" + ], + "inline": [ + "apt-get update", + "apt-get install -y build-essential curl git libffi-dev s3cmd wget", + "cd /app", + + "bundle check || bundle install --jobs 7", + "bundle exec middleman build", + + "/bin/bash ./scripts/deploy.sh" + ] + } + ] +} diff --git a/website/scripts/deploy.sh b/website/scripts/deploy.sh new file mode 100755 index 0000000000..9376c39cdf --- /dev/null +++ b/website/scripts/deploy.sh @@ -0,0 +1,88 @@ +#!/bin/bash +set -e + +PROJECT="terraform" +PROJECT_URL="www.terraform.io" +FASTLY_SERVICE_ID="7GrxRJP3PVBuqQbyxYQ0MV" + +# Ensure the proper AWS environment variables are set +if [ -z "$AWS_ACCESS_KEY_ID" ]; then + echo "Missing AWS_ACCESS_KEY_ID!" + exit 1 +fi + +if [ -z "$AWS_SECRET_ACCESS_KEY" ]; then + echo "Missing AWS_SECRET_ACCESS_KEY!" + exit 1 +fi + +# Ensure the proper Fastly keys are set +if [ -z "$FASTLY_API_KEY" ]; then + echo "Missing FASTLY_API_KEY!" + exit 1 +fi + +# Ensure we have s3cmd installed +if ! command -v "s3cmd" >/dev/null 2>&1; then + echo "Missing s3cmd!" + exit 1 +fi + +# Get the parent directory of where this script is and change into our website +# directory +SOURCE="${BASH_SOURCE[0]}" +while [ -h "$SOURCE" ] ; do SOURCE="$(readlink "$SOURCE")"; done +DIR="$(cd -P "$( dirname "$SOURCE" )/.." && pwd)" + +# Delete any .DS_Store files for our OS X friends. 
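+# (Without this, the s3cmd sync below would upload any stray .DS_Store
+# files along with the built site.)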
+find "$DIR" -type f -name '.DS_Store' -delete + +# Upload the files to S3 - we disable mime-type detection by the python library +# and just guess from the file extension because it's surprisingly more +# accurate, especially for CSS and javascript. We also tag the uploaded files +# with the proper Surrogate-Key, which we will later purge in our API call to +# Fastly. +if [ -z "$NO_UPLOAD" ]; then + echo "Uploading to S3..." + + # Check that the site has been built + if [ ! -d "$DIR/build" ]; then + echo "Missing compiled website! Run 'make build' to compile!" + exit 1 + fi + + s3cmd \ + --quiet \ + --delete-removed \ + --guess-mime-type \ + --no-mime-magic \ + --acl-public \ + --recursive \ + --add-header="Cache-Control: max-age=31536000" \ + --add-header="x-amz-meta-surrogate-key: site-$PROJECT" \ + sync "$DIR/build/" "s3://hc-sites/$PROJECT/latest/" +fi + +# Perform a soft-purge of the surrogate key. +if [ -z "$NO_PURGE" ]; then + echo "Purging Fastly cache..." + curl \ + --fail \ + --silent \ + --output /dev/null \ + --request "POST" \ + --header "Accept: application/json" \ + --header "Fastly-Key: $FASTLY_API_KEY" \ + --header "Fastly-Soft-Purge: 1" \ + "https://api.fastly.com/service/$FASTLY_SERVICE_ID/purge/site-$PROJECT" +fi + +# Warm the cache with recursive wget. +if [ -z "$NO_WARM" ]; then + echo "Warming Fastly cache..." + wget \ + --recursive \ + --delete-after \ + --quiet \ + "https://$PROJECT_URL/" +fi diff --git a/website/source/assets/images/bg-galaxy.jpg b/website/source/assets/images/bg-galaxy.jpg index 894a5b6ea4..587004afa3 100644 Binary files a/website/source/assets/images/bg-galaxy.jpg and b/website/source/assets/images/bg-galaxy.jpg differ diff --git a/website/source/assets/images/docs/atlas-workflow.png b/website/source/assets/images/docs/atlas-workflow.png index e519ee0042..144e2cedc8 100644 Binary files a/website/source/assets/images/docs/atlas-workflow.png and b/website/source/assets/images/docs/atlas-workflow.png differ diff --git a/website/source/assets/images/docs/module_graph.png b/website/source/assets/images/docs/module_graph.png index 196746116f..482f4bb55c 100644 Binary files a/website/source/assets/images/docs/module_graph.png and b/website/source/assets/images/docs/module_graph.png differ diff --git a/website/source/assets/images/docs/module_graph_expand.png b/website/source/assets/images/docs/module_graph_expand.png index d9a8631935..32459c873d 100644 Binary files a/website/source/assets/images/docs/module_graph_expand.png and b/website/source/assets/images/docs/module_graph_expand.png differ diff --git a/website/source/assets/images/favicon.png b/website/source/assets/images/favicon.png index 01eb4f927f..14d04676c1 100644 Binary files a/website/source/assets/images/favicon.png and b/website/source/assets/images/favicon.png differ diff --git a/website/source/assets/images/feature-iterate-bg.png b/website/source/assets/images/feature-iterate-bg.png index 1eb2280098..d208badbf5 100644 Binary files a/website/source/assets/images/feature-iterate-bg.png and b/website/source/assets/images/feature-iterate-bg.png differ diff --git a/website/source/assets/images/feature-iterate-bg@2x.png b/website/source/assets/images/feature-iterate-bg@2x.png index 950e3e8c2e..77666d9451 100644 Binary files a/website/source/assets/images/feature-iterate-bg@2x.png and b/website/source/assets/images/feature-iterate-bg@2x.png differ diff --git a/website/source/assets/images/footer-hashicorp-logo.png b/website/source/assets/images/footer-hashicorp-logo.png index 
706fd1b4f6..96ac858144 100644 Binary files a/website/source/assets/images/footer-hashicorp-logo.png and b/website/source/assets/images/footer-hashicorp-logo.png differ diff --git a/website/source/assets/images/footer-hashicorp-logo@2x.png b/website/source/assets/images/footer-hashicorp-logo@2x.png index 4aeacf2f49..34699040a6 100644 Binary files a/website/source/assets/images/footer-hashicorp-logo@2x.png and b/website/source/assets/images/footer-hashicorp-logo@2x.png differ diff --git a/website/source/assets/images/footer-hashicorp-white-logo.png b/website/source/assets/images/footer-hashicorp-white-logo.png index 52122b23ed..9fddc9779b 100644 Binary files a/website/source/assets/images/footer-hashicorp-white-logo.png and b/website/source/assets/images/footer-hashicorp-white-logo.png differ diff --git a/website/source/assets/images/footer-hashicorp-white-logo@2x.png b/website/source/assets/images/footer-hashicorp-white-logo@2x.png index bd014b159f..a0b4797990 100644 Binary files a/website/source/assets/images/footer-hashicorp-white-logo@2x.png and b/website/source/assets/images/footer-hashicorp-white-logo@2x.png differ diff --git a/website/source/assets/images/graph-example.png b/website/source/assets/images/graph-example.png index 99ab088c50..0dbd6060a6 100644 Binary files a/website/source/assets/images/graph-example.png and b/website/source/assets/images/graph-example.png differ diff --git a/website/source/assets/images/header-download-icon.png b/website/source/assets/images/header-download-icon.png index 56becb7ab1..bface93345 100644 Binary files a/website/source/assets/images/header-download-icon.png and b/website/source/assets/images/header-download-icon.png differ diff --git a/website/source/assets/images/header-github-icon.png b/website/source/assets/images/header-github-icon.png index 1d4c90159b..5bef9cb891 100644 Binary files a/website/source/assets/images/header-github-icon.png and b/website/source/assets/images/header-github-icon.png differ diff --git a/website/source/assets/images/header-github-icon@2x.png b/website/source/assets/images/header-github-icon@2x.png index 72a9e6fc27..ab15e12dfa 100644 Binary files a/website/source/assets/images/header-github-icon@2x.png and b/website/source/assets/images/header-github-icon@2x.png differ diff --git a/website/source/assets/images/hero-bg.png b/website/source/assets/images/hero-bg.png index 09f4ae91f6..61cac81727 100644 Binary files a/website/source/assets/images/hero-bg.png and b/website/source/assets/images/hero-bg.png differ diff --git a/website/source/assets/images/logo-header-black@2x.png b/website/source/assets/images/logo-header-black@2x.png index 74521a6646..312f8692e2 100644 Binary files a/website/source/assets/images/logo-header-black@2x.png and b/website/source/assets/images/logo-header-black@2x.png differ diff --git a/website/source/assets/images/logo-header.png b/website/source/assets/images/logo-header.png index 8e1956a742..7054f3dbbf 100644 Binary files a/website/source/assets/images/logo-header.png and b/website/source/assets/images/logo-header.png differ diff --git a/website/source/assets/images/logo-header@2x.png b/website/source/assets/images/logo-header@2x.png index a96feef3a8..417086a932 100644 Binary files a/website/source/assets/images/logo-header@2x.png and b/website/source/assets/images/logo-header@2x.png differ diff --git a/website/source/assets/images/logo-static.png b/website/source/assets/images/logo-static.png index f98abbf107..7cfecc36a2 100644 Binary files a/website/source/assets/images/logo-static.png 
and b/website/source/assets/images/logo-static.png differ diff --git a/website/source/assets/images/readme.png b/website/source/assets/images/readme.png index 620f407425..bec5aa7f50 100644 Binary files a/website/source/assets/images/readme.png and b/website/source/assets/images/readme.png differ diff --git a/website/source/assets/images/sidebar-wire.png b/website/source/assets/images/sidebar-wire.png index c219189886..87ccdf351b 100644 Binary files a/website/source/assets/images/sidebar-wire.png and b/website/source/assets/images/sidebar-wire.png differ diff --git a/website/source/assets/stylesheets/_docs.scss b/website/source/assets/stylesheets/_docs.scss index c626631996..348f84273a 100755 --- a/website/source/assets/stylesheets/_docs.scss +++ b/website/source/assets/stylesheets/_docs.scss @@ -9,6 +9,8 @@ body.page-sub{ body.layout-atlas, body.layout-aws, body.layout-azure, +body.layout-chef, +body.layout-azurerm, body.layout-cloudflare, body.layout-cloudstack, body.layout-consul, @@ -16,19 +18,25 @@ body.layout-digitalocean, body.layout-dme, body.layout-dnsimple, body.layout-docker, +body.layout-dyn, body.layout-google, body.layout-heroku, body.layout-mailgun, +body.layout-mysql, body.layout-openstack, body.layout-packet, +body.layout-postgresql, body.layout-rundeck, body.layout-statuscake, body.layout-template, body.layout-tls, +body.layout-vcd, body.layout-vsphere, body.layout-docs, body.layout-downloads, body.layout-inner, +body.layout-remotestate, +body.layout-terraform, body.layout-intro{ background: $light-black image-url('sidebar-wire.png') left 62px no-repeat; @@ -198,7 +206,9 @@ body.layout-intro{ h1{ color: $purple; + font-size: 36px; text-transform: uppercase; + word-wrap: break-word; padding-bottom: 24px; margin-top: 40px; margin-bottom: 24px; @@ -215,7 +225,6 @@ body.layout-intro{ } } - @media (max-width: 992px) { body.layout-docs, body.layout-inner, @@ -276,6 +285,7 @@ body.layout-intro{ .bs-docs-section{ h1{ + font-size: 32px; padding-top: 24px; border-top: 1px solid #eeeeee; } @@ -289,7 +299,7 @@ body.layout-intro{ } h1{ - font-size: 32px; + font-size: 28px; } } } diff --git a/website/source/assets/stylesheets/_footer.scss b/website/source/assets/stylesheets/_footer.scss index 3c2c08e4fd..2bf21204f0 100644 --- a/website/source/assets/stylesheets/_footer.scss +++ b/website/source/assets/stylesheets/_footer.scss @@ -2,6 +2,17 @@ body.page-sub{ #footer{ padding: 40px 0; margin-top: 0; + + .hashicorp-project{ + margin-top: 24px; + &:hover{ + svg{ + .svg-bg-line{ + opacity: .4; + } + } + } + } } } diff --git a/website/source/assets/stylesheets/_header.scss b/website/source/assets/stylesheets/_header.scss index 68e50f3683..5b2980ebb8 100755 --- a/website/source/assets/stylesheets/_header.scss +++ b/website/source/assets/stylesheets/_header.scss @@ -28,7 +28,7 @@ body.page-sub{ .by-hashicorp{ &:hover{ svg{ - line{ + .svg-bg-line{ opacity: .4; } } @@ -41,12 +41,6 @@ body.page-sub{ ul.navbar-nav{ li { - // &:hover{ - // svg path{ - // fill: $purple; - // } - // } - svg path{ fill: $white; } diff --git a/website/source/assets/stylesheets/hashicorp-shared/_hashicorp-header.scss b/website/source/assets/stylesheets/hashicorp-shared/_hashicorp-header.scss index e9bbe501e7..699a2d073d 100755 --- a/website/source/assets/stylesheets/hashicorp-shared/_hashicorp-header.scss +++ b/website/source/assets/stylesheets/hashicorp-shared/_hashicorp-header.scss @@ -171,12 +171,10 @@ font-weight: 300; svg{ path, - polygon{ + polygon, + rect{ fill: white; } - line{ - stroke: white; - } } &:focus, @@ 
-212,15 +210,13 @@ path, polygon{ - fill: black; @include transition(all 300ms ease-in); &:hover{ @include transition(all 300ms ease-in); } } - line{ - stroke: black; + .svg-bg-line{ @include transition(all 300ms ease-in); &:hover{ @@ -243,26 +239,22 @@ color: white; svg{ path, + rect, polygon{ fill: white; } - line{ - stroke: white; - } } } - &:focus{ + &:focus, + &:hover{ text-decoration: none; } &:hover{ - text-decoration: none; svg{ - &.svg-by{ - line{ - stroke: $purple; - } + .svg-bg-line{ + fill: $purple; } } } @@ -295,7 +287,13 @@ path, line{ - fill: $black; + @include transition(all 300ms ease-in); + + &:hover{ + @include transition(all 300ms ease-in); + } + } + .svg-bg-line{ @include transition(all 300ms ease-in); &:hover{ diff --git a/website/source/community.html.erb b/website/source/community.html.erb index 9f7a608213..3bf5550ceb 100644 --- a/website/source/community.html.erb +++ b/website/source/community.html.erb @@ -33,7 +33,7 @@ disappear from this list as contributors come and go.

- +
Mitchell Hashimoto (@mitchellh)

@@ -48,7 +48,7 @@ disappear from this list as contributors come and go.

- +
Armon Dadgar (@armon)

@@ -64,7 +64,7 @@ disappear from this list as contributors come and go.

- +
Jack Pearkes (@pearkes)
diff --git a/website/source/docs/commands/graph.html.markdown b/website/source/docs/commands/graph.html.markdown
index d24005fcb9..611738ec02 100644
--- a/website/source/docs/commands/graph.html.markdown
+++ b/website/source/docs/commands/graph.html.markdown
@@ -31,7 +31,7 @@ Options:
  This helps when diagnosing cycle errors.

* `-module-depth=n` - The maximum depth to expand modules. By default this is
-  zero, which will not expand modules at all.
+  -1, which will expand all modules.

* `-verbose` - Generate a verbose, "worst-case" graph, with all nodes for
  potential operations in place.
diff --git a/website/source/docs/commands/init.html.markdown b/website/source/docs/commands/init.html.markdown
index 803d937d75..2f87e168dc 100644
--- a/website/source/docs/commands/init.html.markdown
+++ b/website/source/docs/commands/init.html.markdown
@@ -34,7 +34,7 @@ The command-line flags are all optional. The list of available flags are:
* `-backend=atlas` - Specifies the type of remote backend. Must be one of
  Atlas, Consul, S3, or HTTP. Defaults to Atlas.

-* `-backend-config="k=v"` - Specify a configuration variable for a backend. This is how you set the required variables for the selected backend (as detailed in the [remote command documentation](/docs/command/remote.html).
+* `-backend-config="k=v"` - Specify a configuration variable for a backend. This is how you set the required variables for the selected backend (as detailed in the [remote command documentation](/docs/commands/remote.html)).

## Example: Consul
diff --git a/website/source/docs/commands/plan.html.markdown b/website/source/docs/commands/plan.html.markdown
index e4a48ab5ba..cf6a780171 100644
--- a/website/source/docs/commands/plan.html.markdown
+++ b/website/source/docs/commands/plan.html.markdown
@@ -39,7 +39,7 @@ The command-line flags are all optional. The list of available flags are:
* `-module-depth=n` - Specifies the depth of modules to show in the output.
  This does not affect the plan itself, only the output shown. By default,
-  this is zero. -1 will expand all.
+  this is -1, which will expand all.

* `-no-color` - Disables output with coloring.
diff --git a/website/source/docs/commands/push.html.markdown b/website/source/docs/commands/push.html.markdown
index 11130a8cb1..f6dda9e645 100644
--- a/website/source/docs/commands/push.html.markdown
+++ b/website/source/docs/commands/push.html.markdown
@@ -96,8 +96,8 @@ don't already exist on Atlas. If you want to force push a certain variable
value to update it, use the `-overwrite` flag.

All the variable values stored on Atlas are encrypted and secured
-using [Vault](https://vaultproject.io). We blogged about the
-[architecture of our secure storage system](https://hashicorp.com/blog/how-atlas-uses-vault-for-managing-secrets.html) if you want more detail.
+using [Vault](https://www.vaultproject.io). We blogged about the
+[architecture of our secure storage system](https://www.hashicorp.com/blog/how-atlas-uses-vault-for-managing-secrets.html) if you want more detail.

The variable values can be updated using the `-overwrite` flag or via
the [Atlas website](https://atlas.hashicorp.com). An example of updating
diff --git a/website/source/docs/commands/remote-config.html.markdown b/website/source/docs/commands/remote-config.html.markdown
index ad31021134..8f2e55c7ab 100644
--- a/website/source/docs/commands/remote-config.html.markdown
+++ b/website/source/docs/commands/remote-config.html.markdown
@@ -40,54 +40,16 @@ below this section for more details.
When remote storage is disabled, the existing remote state is migrated to a local file. This defaults to the `-state` path during restore. -The following backends are supported: - -* Atlas - Stores the state in Atlas. Requires the `name` and `access_token` - variables. The `address` variable can optionally be provided. - -* Consul - Stores the state in the KV store at a given path. Requires the - `path` variable. Supports the `CONSUL_HTTP_TOKEN` environment variable - for specifying access credentials, or the `access_token` variable may - be provided, but this is not recommended since it would be included in - cleartext inside the persisted, shard state. Other supported parameters - include: - * `address` - DNS name and port of your Consul endpoint specified in the - format `dnsname:port`. Defaults to the local agent HTTP listener. This - may also be specified using the `CONSUL_HTTP_ADDR` environment variable. - * `scheme` - Specifies what protocol to use when talking to the given - `address`, either `http` or `https`. SSL support can also be triggered - by setting then environment variable `CONSUL_HTTP_SSL` to `true`. - -* Etcd - Stores the state in etcd at a given path. - Requires the `path` and `endpoints` variables. The `username` and `password` - variables can optionally be provided. `endpoints` is assumed to be a - space-separated list of etcd endpoints. - -* S3 - Stores the state as a given key in a given bucket on Amazon S3. - Requires the `bucket` and `key` variables. Supports and honors the standard - AWS environment variables `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY` - and `AWS_DEFAULT_REGION`. These can optionally be provided as parameters - in the `access_key`, `secret_key` and `region` variables - respectively, but passing credentials this way is not recommended since they - will be included in cleartext inside the persisted state. - Other supported parameters include: - * `bucket` - the name of the S3 bucket - * `key` - path where to place/look for state file inside the bucket - * `encrypt` - whether to enable [server side encryption](http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html) - of the state file - * `acl` - [Canned ACL](http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl) - to be applied to the state file. - -* HTTP - Stores the state using a simple REST client. State will be fetched - via GET, updated via POST, and purged with DELETE. Requires the `address` variable. +Supported storage backends and supported features of those +are documented in the [Remote State](/docs/state/remote/index.html) section. The command-line flags are all optional. The list of available flags are: -* `-backend=Atlas` - The remote backend to use. Must be one of the above +* `-backend=Atlas` - The remote backend to use. Must be one of the supported backends. * `-backend-config="k=v"` - Specify a configuration variable for a backend. - This is how you set the required variables for the backends above. + This is how you set the required variables for the backend. * `-backup=path` - Path to backup the existing state file before modifying. Defaults to the "-state" path with ".backup" extension. diff --git a/website/source/docs/commands/show.html.markdown b/website/source/docs/commands/show.html.markdown index 703c3b09c8..8b88ccf29a 100644 --- a/website/source/docs/commands/show.html.markdown +++ b/website/source/docs/commands/show.html.markdown @@ -23,7 +23,7 @@ file. If no path is specified, the current state will be shown. 
The command-line flags are all optional. The list of available flags are:

* `-module-depth=n` - Specifies the depth of modules to show in the output.
-  By default this is zero. -1 will expand all.
+  By default this is -1, which will expand all.

* `-no-color` - Disables output with coloring
diff --git a/website/source/docs/configuration/environment-variables.html.md b/website/source/docs/configuration/environment-variables.html.md
index 1bb1f955a1..3c6eb89907 100644
--- a/website/source/docs/configuration/environment-variables.html.md
+++ b/website/source/docs/configuration/environment-variables.html.md
@@ -44,10 +44,10 @@ export TF_INPUT=0

## TF_MODULE_DEPTH

-When given a value, causes terraform commands to behave as if the `-module=depth=VALUE` flag was specified. Modules are treated like a black box and terraform commands do not show what resources within the module will be created. By setting this to -1, for example, you enable commands such as [plan](/docs/commands/plan.html) and [graph](/docs/commands/graph.html) to display more detailed information.
+When given a value, causes terraform commands to behave as if the `-module-depth=VALUE` flag was specified. By setting this to 0, for example, you enable commands such as [plan](/docs/commands/plan.html) and [graph](/docs/commands/graph.html) to display more compressed information.

```
-export TF_MODULE_DEPTH=-1
+export TF_MODULE_DEPTH=0
```

For more information regarding modules, check out the section on [Using Modules](/docs/modules/usage.html).
diff --git a/website/source/docs/configuration/interpolation.html.md b/website/source/docs/configuration/interpolation.html.md
index bcabe58ff1..77c40e59e6 100644
--- a/website/source/docs/configuration/interpolation.html.md
+++ b/website/source/docs/configuration/interpolation.html.md
@@ -80,6 +80,14 @@ The supported built-in functions are:
  * `base64encode(string)` - Returns a base64-encoded representation of the
    given string.

+  * `sha1(string)` - Returns a SHA-1 hash representation of the
+    given string.
+    Example: `"${sha1(concat(aws_vpc.default.tags.customer, "-s3-bucket"))}"`
+
+  * `sha256(string)` - Returns a SHA-256 hash representation of the
+    given string.
+    Example: `"${sha256(concat(aws_vpc.default.tags.customer, "-s3-bucket"))}"`
+
  * `cidrhost(iprange, hostnum)` - Takes an IP address range in CIDR notation
    and creates an IP address with the given host number. For example,
    ``cidrhost("10.0.0.0/8", 2)`` returns ``10.0.0.2``.
@@ -95,7 +103,7 @@
    CIDR notation (like ``10.0.0.0/8``) and extends its prefix to include an
    additional subnet number. For example, ``cidrsubnet("10.0.0.0/8", 8, 2)``
    returns ``10.2.0.0/16``.
-
+
  * `coalesce(string1, string2, ...)` - Returns the first non-empty value from
    the given arguments. At least two arguments must be provided.
@@ -120,7 +128,7 @@ The supported built-in functions are:
  * `format(format, args...)` - Formats a string according to the given
    format. The syntax for the format is standard `sprintf` syntax.
-    Good documentation for the syntax can be [found here](http://golang.org/pkg/fmt/).
+    Good documentation for the syntax can be [found here](https://golang.org/pkg/fmt/).
    Example to zero-prefix a count, used commonly for naming servers:
    `format("web-%03d", count.index + 1)`.
@@ -150,7 +158,7 @@ The supported built-in functions are:
    variable. The `map` parameter should be another variable, such
    as `var.amis`.

-  * `lower(string)` - returns a copy of the string with all Unicode letters mapped to their lower case.
+ * `lower(string)` - Returns a copy of the string with all Unicode letters mapped to their lower case. * `replace(string, search, replace)` - Does a search and replace on the given string. All instances of `search` are replaced with the value @@ -168,7 +176,7 @@ The supported built-in functions are: `a_resource_param = ["${split(",", var.CSV_STRING)}"]`. Example: `split(",", module.amod.server_ids)` - * `upper(string)` - returns a copy of the string with all Unicode letters mapped to their upper case. + * `upper(string)` - Returns a copy of the string with all Unicode letters mapped to their upper case. ## Templates @@ -254,8 +262,8 @@ resource "aws_instance" "web" { The supported operations are: -- *Add*, *Subtract*, *Multiply*, and *Divide* for **float** types -- *Add*, *Subtract*, *Multiply*, *Divide*, and *Modulo* for **integer** types +- *Add* (`+`), *Subtract* (`-`), *Multiply* (`*`), and *Divide* (`/`) for **float** types +- *Add* (`+`), *Subtract* (`-`), *Multiply* (`*`), *Divide* (`/`), and *Modulo* (`%`) for **integer** types -> **Note:** Since Terraform allows hyphens in resource and variable names, it's best to use spaces between math operators to prevent confusion or unexpected diff --git a/website/source/docs/configuration/load.html.md b/website/source/docs/configuration/load.html.md index 3ccbcf7b65..101ac3fec5 100644 --- a/website/source/docs/configuration/load.html.md +++ b/website/source/docs/configuration/load.html.md @@ -31,6 +31,6 @@ which do merge. The order of variables, resources, etc. defined within the configuration doesn't matter. Terraform configurations are -[declarative](http://en.wikipedia.org/wiki/Declarative_programming), +[declarative](https://en.wikipedia.org/wiki/Declarative_programming), so references to other resources and variables do not depend on the order they're defined. diff --git a/website/source/docs/configuration/override.html.md b/website/source/docs/configuration/override.html.md index a667adcd58..b3dbb1dbc5 100644 --- a/website/source/docs/configuration/override.html.md +++ b/website/source/docs/configuration/override.html.md @@ -37,7 +37,7 @@ If you have a Terraform configuration `example.tf` with the contents: ``` resource "aws_instance" "web" { - ami = "ami-1234567" + ami = "ami-408c7f28" } ``` diff --git a/website/source/docs/configuration/resources.html.md b/website/source/docs/configuration/resources.html.md index d5e087fec4..11fb9a9c5f 100644 --- a/website/source/docs/configuration/resources.html.md +++ b/website/source/docs/configuration/resources.html.md @@ -25,8 +25,8 @@ A resource configuration looks like the following: ``` resource "aws_instance" "web" { - ami = "ami-123456" - instance_type = "m1.small" + ami = "ami-408c7f28" + instance_type = "t1.micro" } ``` diff --git a/website/source/docs/internals/graph.html.md b/website/source/docs/internals/graph.html.md index 6b74283bac..684156433e 100644 --- a/website/source/docs/internals/graph.html.md +++ b/website/source/docs/internals/graph.html.md @@ -9,7 +9,7 @@ description: |- # Resource Graph Terraform builds a -[dependency graph](http://en.wikipedia.org/wiki/Dependency_graph) +[dependency graph](https://en.wikipedia.org/wiki/Dependency_graph) from the Terraform configurations, and walks this graph to generate plans, refresh state, and more. 
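To make the dependency-graph idea above concrete, here is a minimal sketch of two resources where an interpolation reference creates a graph edge; the resource names and AMI are placeholders, not from this patch:

```
resource "aws_instance" "web" {
  ami           = "ami-408c7f28"
  instance_type = "t1.micro"
}

resource "aws_eip" "web" {
  # Referencing aws_instance.web.id below adds an edge to the resource
  # graph, so the instance is created before the EIP is attached.
  instance = "${aws_instance.web.id}"
}
```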
This page documents the details of what are contained in this graph, what types diff --git a/website/source/docs/modules/usage.html.markdown b/website/source/docs/modules/usage.html.markdown index 3e7ae2477a..cc331012ce 100644 --- a/website/source/docs/modules/usage.html.markdown +++ b/website/source/docs/modules/usage.html.markdown @@ -46,7 +46,7 @@ $ terraform get This command will download the modules if they haven't been already. By default, the command will not check for updates, so it is safe (and fast) -to run multiple times. You can use the `-u` flag to check and download +to run multiple times. You can use the `-update` flag to check and download updates. ## Configuration @@ -87,9 +87,9 @@ For example: ``` resource "aws_instance" "client" { - ami = "ami-123456" - instance_type = "m1.small" - availability_zone = "${module.consul.server_availability_zone}" + ami = "ami-408c7f28" + instance_type = "t1.micro" + availability_zone = "${module.consul.server_availability_zone}" } ``` @@ -104,28 +104,26 @@ resource to the module, so the module will be built first. With modules, commands such as the [plan command](/docs/commands/plan.html) and -[graph command](/docs/commands/graph.html) will show the module as a single -unit by default. You can use the `-module-depth` parameter to expand this -graph further. +[graph command](/docs/commands/graph.html) will expand modules by default. You +can use the `-module-depth` parameter to limit the graph. For example, with a configuration similar to what we've built above, here is what the graph output looks like by default:

-![Terraform Module Graph](docs/module_graph.png) +![Terraform Expanded Module Graph](docs/module_graph_expand.png)
-But if we set `-module-depth=-1`, the graph will look like this: +But if we set `-module-depth=0`, the graph will look like this:
-![Terraform Expanded Module Graph](docs/module_graph_expand.png) +![Terraform Module Graph](docs/module_graph.png)
Other commands work similarly with modules. Note that the `-module-depth` flag is purely a formatting flag; it doesn't affect what modules are created or not. - ## Tainting resources within a module The [taint command](/docs/commands/taint.html) can be used to _taint_ diff --git a/website/source/docs/plugins/provider.html.md b/website/source/docs/plugins/provider.html.md index 1ca41655b4..57b7ccfaad 100644 --- a/website/source/docs/plugins/provider.html.md +++ b/website/source/docs/plugins/provider.html.md @@ -58,14 +58,14 @@ the framework beforehand, but it goes to show how expressive the framework can be. The GoDoc for `helper/schema` can be -[found here](http://godoc.org/github.com/hashicorp/terraform/helper/schema). +[found here](https://godoc.org/github.com/hashicorp/terraform/helper/schema). This is API-level documentation but will be extremely important for you going forward. ## Provider The first thing to do in your plugin is to create the -[schema.Provider](http://godoc.org/github.com/hashicorp/terraform/helper/schema#Provider) structure. +[schema.Provider](https://godoc.org/github.com/hashicorp/terraform/helper/schema#Provider) structure. This structure implements the `ResourceProvider` interface. We recommend creating this structure in a function to make testing easier later. Example: @@ -86,13 +86,13 @@ are documented within the godoc, but a brief overview is here as well: * `ResourcesMap` - The map of resources that this provider supports. All keys are resource names and the values are the - [schema.Resource](http://godoc.org/github.com/hashicorp/terraform/helper/schema#Resource) structures implementing this resource. + [schema.Resource](https://godoc.org/github.com/hashicorp/terraform/helper/schema#Resource) structures implementing this resource. * `ConfigureFunc` - This function callback is used to configure the provider. This function should do things such as initialize any API clients, validate API keys, etc. The `interface{}` return value of this function is the `meta` parameter that will be passed into all - resource [CRUD](http://en.wikipedia.org/wiki/Create,_read,_update_and_delete) + resource [CRUD](https://en.wikipedia.org/wiki/Create,_read,_update_and_delete) functions. In general, the returned value is a configuration structure or a client. @@ -127,7 +127,7 @@ func resourceComputeAddress() *schema.Resource { ``` Resources are described using the -[schema.Resource](http://godoc.org/github.com/hashicorp/terraform/helper/schema#Resource) +[schema.Resource](https://godoc.org/github.com/hashicorp/terraform/helper/schema#Resource) structure. This structure has the following fields: * `Schema` - The configuration schema for this resource. Schemas are @@ -189,7 +189,7 @@ best practices. A good starting place is the The parameter to provider configuration as well as all the CRUD operations on a resource is a -[schema.ResourceData](http://godoc.org/github.com/hashicorp/terraform/helper/schema#ResourceData). +[schema.ResourceData](https://godoc.org/github.com/hashicorp/terraform/helper/schema#ResourceData). This structure is used to query configurations as well as to set information about the resource such as its ID, connection information, and computed attributes. 
diff --git a/website/source/docs/providers/aws/index.html.markdown b/website/source/docs/providers/aws/index.html.markdown
index 7199111c2b..e110e9f7c4 100644
--- a/website/source/docs/providers/aws/index.html.markdown
+++ b/website/source/docs/providers/aws/index.html.markdown
@@ -34,14 +34,26 @@ resource "aws_instance" "web" {
The following arguments are supported in the `provider` block:

-* `access_key` - (Required) This is the AWS access key. It must be provided, but
-  it can also be sourced from the `AWS_ACCESS_KEY_ID` environment variable.
+* `access_key` - (Optional) This is the AWS access key. It must be provided, but
+  it can also be sourced from the `AWS_ACCESS_KEY_ID` environment variable, or via
+  a shared credentials file if `profile` is specified.

-* `secret_key` - (Required) This is the AWS secret key. It must be provided, but
-  it can also be sourced from the `AWS_SECRET_ACCESS_KEY` environment variable.
+* `secret_key` - (Optional) This is the AWS secret key. It must be provided, but
+  it can also be sourced from the `AWS_SECRET_ACCESS_KEY` environment variable, or
+  via a shared credentials file if `profile` is specified.

* `region` - (Required) This is the AWS region. It must be provided, but
-  it can also be sourced from the `AWS_DEFAULT_REGION` environment variables.
+  it can also be sourced from the `AWS_DEFAULT_REGION` environment variable, or
+  via a shared credentials file if `profile` is specified.
+
+* `profile` - (Optional) This is the AWS profile name as set in the shared credentials
+  file.
+
+* `shared_credentials_file` - (Optional) This is the path to the shared credentials file.
+  If this is not set and a profile is specified, ~/.aws/credentials will be used.
+
+* `token` - (Optional) Use this to set an MFA token. It can also be sourced
+  from the `AWS_SECURITY_TOKEN` environment variable.

* `max_retries` - (Optional) This is the maximum number of times an API call is
  being retried in case requests are being throttled or experience transient failures.
@@ -55,8 +67,10 @@ The following arguments are supported in the `provider` block:
  to prevent you mistakenly using a wrong one (and end up destroying live environment).
  Conflicts with `allowed_account_ids`.

-* `dynamodb_endpoint` - (Optional) Use this to override the default endpoint URL constructed from the `region`. It's typically used to connect to dynamodb-local.
+* `dynamodb_endpoint` - (Optional) Use this to override the default endpoint
+  URL constructed from the `region`. It's typically used to connect to
+  dynamodb-local.

-* `kinesis_endpoint` - (Optional) Use this to override the default endpoint URL constructed from the `region`. It's typically used to connect to kinesalite.
+* `kinesis_endpoint` - (Optional) Use this to override the default endpoint URL
+  constructed from the `region`. It's typically used to connect to kinesalite.

-* `token` - (Optional) Use this to set an MFA token. It can also be sourced from the `AWS_SECURITY_TOKEN` environment variable.
diff --git a/website/source/docs/providers/aws/r/autoscaling_group.html.markdown b/website/source/docs/providers/aws/r/autoscaling_group.html.markdown
index 6f2b0e5112..553365d3d0 100644
--- a/website/source/docs/providers/aws/r/autoscaling_group.html.markdown
+++ b/website/source/docs/providers/aws/r/autoscaling_group.html.markdown
@@ -13,6 +13,11 @@ Provides an AutoScaling Group resource.
## Example Usage

```
+resource "aws_placement_group" "test" {
+  name     = "test"
+  strategy = "cluster"
+}
+
resource "aws_autoscaling_group" "bar" {
  availability_zones = ["us-east-1a"]
  name = "foobar3-terraform-test"
@@ -22,6 +27,7 @@ resource "aws_autoscaling_group" "bar" {
  health_check_type = "ELB"
  desired_capacity = 4
  force_delete = true
+  placement_group = "${aws_placement_group.test.id}"
  launch_configuration = "${aws_launch_configuration.foobar.name}"

  tag {
@@ -48,14 +54,11 @@ The following arguments are supported:

* `availability_zones` - (Optional) A list of AZs to launch resources in.
  Required only if you do not specify any `vpc_zone_identifier`
* `launch_configuration` - (Required) The name of the launch configuration to use.
-* `health_check_grace_period` - (Optional) Time after instance comes into service before checking health.
+* `health_check_grace_period` - (Optional) Time after instance comes into service before checking health.
* `health_check_type` - (Optional) "EC2" or "ELB". Controls how health checking is done.
* `desired_capacity` - (Optional) The number of Amazon EC2 instances that
  should be running in the group. (See also [Waiting for Capacity](#waiting-for-capacity) below.)
-* `min_elb_capacity` - (Optional) Setting this will cause Terraform to wait
-  for this number of healthy instances all attached load balancers.
-  (See also [Waiting for Capacity](#waiting-for-capacity) below.)
* `force_delete` - (Optional) Allows deleting the autoscaling group without waiting
  for all instances in the pool to terminate. You can force an autoscaling group to delete
  even if it's in the process of scaling a resource. Normally, Terraform
@@ -66,11 +69,15 @@ The following arguments are supported:
* `vpc_zone_identifier` (Optional) A list of subnet IDs to launch resources in.
* `termination_policies` (Optional) A list of policies to decide how the instances in the auto scale group should be terminated.
* `tag` (Optional) A list of tag blocks. Tags documented below.
+* `placement_group` (Optional) The name of the placement group into which you'll launch your instances, if any.
* `wait_for_capacity_timeout` (Default: "10m") A maximum
  [duration](https://golang.org/pkg/time/#ParseDuration) that Terraform should
  wait for ASG instances to be healthy before timing out. (See also [Waiting
  for Capacity](#waiting-for-capacity) below.) Setting this to "0" causes
  Terraform to skip all Capacity Waiting behavior.
+* `wait_for_elb_capacity` - (Optional) Setting this will cause Terraform to wait
+  for this number of healthy instances in all attached load balancers.
+  (See also [Waiting for Capacity](#waiting-for-capacity) below.)

Tags support the following:

@@ -79,6 +86,10 @@ Tags support the following:
* `propagate_at_launch` - (Required) Enables propagation of the tag to
  Amazon EC2 instances launched via this ASG

+The following fields are deprecated:
+
+* `min_elb_capacity` - Please use `wait_for_elb_capacity` instead.
+
## Attributes Reference

The following attributes are exported:
@@ -96,7 +107,7 @@ The following attributes are exported:
* `vpc_zone_identifier` - The VPC zone identifier
* `load_balancers` (Optional) The load balancer names associated with the
  autoscaling group.
-
+
~> **NOTE:** When using `ELB` as the health_check_type, `health_check_grace_period` is required.

@@ -115,6 +126,10 @@ The first is default behavior. Terraform waits after ASG creation for
`min_size` (or `desired_capacity`, if specified) healthy instances to show up
in the ASG before continuing.
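A minimal sketch of the capacity-waiting behavior described above, combining `wait_for_elb_capacity` with a custom `wait_for_capacity_timeout`; the launch configuration and ELB it references are hypothetical:

```
resource "aws_autoscaling_group" "web" {
  availability_zones        = ["us-east-1a"]
  name                      = "web-asg"
  min_size                  = 2
  max_size                  = 4
  health_check_type         = "ELB"
  health_check_grace_period = 300
  launch_configuration      = "${aws_launch_configuration.web.name}"
  load_balancers            = ["${aws_elb.web.name}"]

  # Wait until two instances are InService in the attached ELB,
  # giving up after 15 minutes instead of the default 10.
  wait_for_elb_capacity     = 2
  wait_for_capacity_timeout = "15m"
}
```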
+If `min_size` or `desired_capacity` are changed in a subsequent update,
+Terraform will also wait for the correct number of healthy instances before
+continuing.
+
Terraform considers an instance "healthy" when the ASG reports `HealthStatus:
"Healthy"` and `LifecycleState: "InService"`. See the [AWS AutoScaling
Docs](https://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/AutoScalingGroupLifecycle.html)
@@ -130,9 +145,9 @@ Setting `wait_for_capacity_timeout` to `"0"` disables ASG Capacity waiting.

#### Waiting for ELB Capacity

The second mechanism is optional, and affects ASGs with attached Load
-Balancers. If `min_elb_capacity` is set, Terraform will wait for that number of
-Instances to be `"InService"` in all attached `load_balancers`. This can be
-used to ensure that service is being provided before Terraform moves on.
+Balancers. If `wait_for_elb_capacity` is set, Terraform will wait for that
+number of Instances to be `"InService"` in all attached `load_balancers`. This
+can be used to ensure that service is being provided before Terraform moves on.

As with ASG Capacity, Terraform will wait for up to `wait_for_capacity_timeout`
for `"InService"` instances. If ASG creation takes more than a few minutes,
diff --git a/website/source/docs/providers/aws/r/autoscaling_lifecycle_hooks.html.markdown b/website/source/docs/providers/aws/r/autoscaling_lifecycle_hooks.html.markdown
index a753c864b4..5d001f1ac7 100644
--- a/website/source/docs/providers/aws/r/autoscaling_lifecycle_hooks.html.markdown
+++ b/website/source/docs/providers/aws/r/autoscaling_lifecycle_hooks.html.markdown
@@ -47,9 +47,9 @@ The following arguments are supported:

* `name` - (Required) The name of the lifecycle hook.
* `autoscaling_group_name` - (Required) The name of the Auto Scaling group to which you want to assign the lifecycle hook
-* `default_result` - (Optional) Defines the action the Auto Scaling group should take when the lifecycle hook timeout elapses or if an unexpected failure occurs.
+* `default_result` - (Optional) Defines the action the Auto Scaling group should take when the lifecycle hook timeout elapses or if an unexpected failure occurs. The value for this parameter can be either CONTINUE or ABANDON. The default value for this parameter is ABANDON.
* `heartbeat_timeout` - (Optional) Defines the amount of time, in seconds, that can elapse before the lifecycle hook times out. When the lifecycle hook times out, Auto Scaling performs the action defined in the DefaultResult parameter
-* `lifecycle_transition` - (Optional) The instance state to which you want to attach the lifecycle hook. For a list of lifecycle hook types, see [describe-lifecycle-hook-types](http://docs.aws.amazon.com/cli/latest/reference/autoscaling/describe-lifecycle-hook-types.html#examples)
+* `lifecycle_transition` - (Optional) The instance state to which you want to attach the lifecycle hook. For a list of lifecycle hook types, see [describe-lifecycle-hook-types](https://docs.aws.amazon.com/cli/latest/reference/autoscaling/describe-lifecycle-hook-types.html#examples)
* `notification_metadata` - (Optional) Contains additional information that you want to include any time Auto Scaling sends a message to the notification target.
* `notification_target_arn` - (Required) The ARN of the notification target that Auto Scaling will use to notify you when an instance is in the transition state for the lifecycle hook. This ARN target can be either an SQS queue or an SNS topic.
* `role_arn` - (Required) The ARN of the IAM role that allows the Auto Scaling group to publish to the specified notification target. \ No newline at end of file diff --git a/website/source/docs/providers/aws/r/autoscaling_notification.html.markdown b/website/source/docs/providers/aws/r/autoscaling_notification.html.markdown index e47d337c64..4ebb6fc6c8 100644 --- a/website/source/docs/providers/aws/r/autoscaling_notification.html.markdown +++ b/website/source/docs/providers/aws/r/autoscaling_notification.html.markdown @@ -64,5 +64,5 @@ The following attributes are exported: * `topic_arn` -[1]: http://docs.aws.amazon.com/AutoScaling/latest/APIReference/API_NotificationConfiguration.html -[2]: http://docs.aws.amazon.com/AutoScaling/latest/APIReference/API_DescribeNotificationConfigurations.html +[1]: https://docs.aws.amazon.com/AutoScaling/latest/APIReference/API_NotificationConfiguration.html +[2]: https://docs.aws.amazon.com/AutoScaling/latest/APIReference/API_DescribeNotificationConfigurations.html diff --git a/website/source/docs/providers/aws/r/autoscaling_policy.html.markdown b/website/source/docs/providers/aws/r/autoscaling_policy.html.markdown index 2543c0220d..82671e698c 100644 --- a/website/source/docs/providers/aws/r/autoscaling_policy.html.markdown +++ b/website/source/docs/providers/aws/r/autoscaling_policy.html.markdown @@ -12,8 +12,8 @@ Provides an AutoScaling Scaling Policy resource. ~> **NOTE:** You may want to omit `desired_capacity` attribute from attached `aws_autoscaling_group` when using autoscaling policies. It's good practice to pick either -[manual](http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/as-manual-scaling.html) -or [dynamic](http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/as-scale-based-on-demand.html) +[manual](https://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/as-manual-scaling.html) +or [dynamic](https://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/as-scale-based-on-demand.html) (policy-based) scaling. ## Example Usage diff --git a/website/source/docs/providers/aws/r/autoscaling_schedule.html.markdown b/website/source/docs/providers/aws/r/autoscaling_schedule.html.markdown new file mode 100644 index 0000000000..4ad9472932 --- /dev/null +++ b/website/source/docs/providers/aws/r/autoscaling_schedule.html.markdown @@ -0,0 +1,55 @@ +--- +layout: "aws" +page_title: "AWS: aws_autoscaling_schedule" +sidebar_current: "docs-aws-resource-autoscaling-schedule" +description: |- + Provides an AutoScaling Schedule resource. +--- + +# aws\_autoscaling\_schedule + +Provides an AutoScaling Schedule resource. + +## Example Usage +``` +resource "aws_autoscaling_group" "foobar" { + availability_zones = ["us-west-2a"] + name = "terraform-test-foobar5" + max_size = 1 + min_size = 1 + health_check_grace_period = 300 + health_check_type = "ELB" + force_delete = true + termination_policies = ["OldestInstance"] +} + +resource "aws_autoscaling_schedule" "foobar" { + scheduled_action_name = "foobar" + min_size = 0 + max_size = 1 + desired_capacity = 0 + start_time = "2016-12-11T18:00:00Z" + end_time = "2016-12-12T06:00:00Z" + autoscaling_group_name = "${aws_autoscaling_group.foobar.name}" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `autoscaling_group_name` - (Required) The name or Amazon Resource Name (ARN) of the Auto Scaling group. +* `scheduled_action_name` - (Required) The name of this scaling action. 
+* `start_time` - (Optional) The time for this action to start, in "YYYY-MM-DDThh:mm:ssZ" format in UTC/GMT only (for example, 2014-06-01T00:00:00Z).
+  If you try to schedule your action in the past, Auto Scaling returns an error message.
+* `end_time` - (Optional) The time for this action to end, in "YYYY-MM-DDThh:mm:ssZ" format in UTC/GMT only (for example, 2014-06-01T00:00:00Z).
+  If you try to schedule your action in the past, Auto Scaling returns an error message.
+* `recurrence` - (Optional) The time when recurring future actions will start. Start time is specified by the user following the Unix cron syntax format.
+* `min_size` - (Optional) The minimum size for the Auto Scaling group.
+* `max_size` - (Optional) The maximum size for the Auto Scaling group.
+* `desired_capacity` - (Optional) The number of EC2 instances that should be running in the group.
+
+~> **NOTE:** When `start_time` and `end_time` are specified with `recurrence`, they form the boundaries of when the recurring action will start and stop.
+
+## Attributes Reference
+* `arn` - The ARN assigned by AWS to the autoscaling schedule.
\ No newline at end of file
diff --git a/website/source/docs/providers/aws/r/cloudtrail.html.markdown b/website/source/docs/providers/aws/r/cloudtrail.html.markdown
index 6bffee09e6..aa7314ee11 100644
--- a/website/source/docs/providers/aws/r/cloudtrail.html.markdown
+++ b/website/source/docs/providers/aws/r/cloudtrail.html.markdown
@@ -29,14 +29,18 @@ resource "aws_s3_bucket" "foo" {
    {
      "Sid": "AWSCloudTrailAclCheck",
      "Effect": "Allow",
-      "Principal": "*",
+      "Principal": {
+        "Service": "cloudtrail.amazonaws.com"
+      },
      "Action": "s3:GetBucketAcl",
      "Resource": "arn:aws:s3:::tf-test-trail"
    },
    {
      "Sid": "AWSCloudTrailWrite",
      "Effect": "Allow",
-      "Principal": "*",
+      "Principal": {
+        "Service": "cloudtrail.amazonaws.com"
+      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::tf-test-trail/*",
      "Condition": {
diff --git a/website/source/docs/providers/aws/r/cloudwatch_metric_alarm.html.markdown b/website/source/docs/providers/aws/r/cloudwatch_metric_alarm.html.markdown
index 6c6cab23b7..ab627eab48 100644
--- a/website/source/docs/providers/aws/r/cloudwatch_metric_alarm.html.markdown
+++ b/website/source/docs/providers/aws/r/cloudwatch_metric_alarm.html.markdown
@@ -54,7 +54,7 @@ resource "aws_cloudwatch_metric_alarm" "bat" {
```

## Argument Reference
-See [related part of AWS Docs](http://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_PutMetricAlarm.html)
+See [related part of AWS Docs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_PutMetricAlarm.html)
for details about valid values.

The following arguments are supported:
@@ -63,7 +63,9 @@
* `comparison_operator` - (Required) The arithmetic operation to use when comparing the specified Statistic and Threshold. The specified Statistic value is used as the first operand. Either of the following is supported: `GreaterThanOrEqualToThreshold`, `GreaterThanThreshold`, `LessThanThreshold`, `LessThanOrEqualToThreshold`.
* `evaluation_periods` - (Required) The number of periods over which data is compared to the specified threshold.
* `metric_name` - (Required) The name for the alarm's associated metric.
-  See docs for [supported metrics](http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/CW_Support_For_AWS.html).
+  See docs for [supported metrics](https://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/CW_Support_For_AWS.html).
+* `namespace` - (Required) The namespace for the alarm's associated metric. See docs for the [list of namespaces](https://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/aws-namespaces.html).
+  See docs for [supported metrics](https://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/CW_Support_For_AWS.html).
* `namespace` - (Required) The namespace for the alarm's associated metric.
* `period` - (Required) The period in seconds over which the specified `statistic` is applied.
* `statistic` - (Required) The statistic to apply to the alarm's associated metric.
diff --git a/website/source/docs/providers/aws/r/codedeploy_deployment_group.html.markdown b/website/source/docs/providers/aws/r/codedeploy_deployment_group.html.markdown
index c157065357..3b295aeafd 100644
--- a/website/source/docs/providers/aws/r/codedeploy_deployment_group.html.markdown
+++ b/website/source/docs/providers/aws/r/codedeploy_deployment_group.html.markdown
@@ -88,7 +88,7 @@ The following arguments are supported:
* `autoscaling_groups` - (Optional) Autoscaling groups associated with the deployment group.
* `deployment_config_name` - (Optional) The name of the group's deployment config. The default is "CodeDeployDefault.OneAtATime".
* `ec2_tag_filter` - (Optional) Tag filters associated with the group. See the AWS docs for details.
-* `on_premises_instance_tag_filter" - (Optional) On premise tag filters associated with the group. See the AWS docs for details.
+* `on_premises_instance_tag_filter` - (Optional) On premise tag filters associated with the group. See the AWS docs for details.

Both ec2_tag_filter and on_premises_tag_filter blocks support the following:
diff --git a/website/source/docs/providers/aws/r/db_instance.html.markdown b/website/source/docs/providers/aws/r/db_instance.html.markdown
index 55d13e250f..0b8178477b 100644
--- a/website/source/docs/providers/aws/r/db_instance.html.markdown
+++ b/website/source/docs/providers/aws/r/db_instance.html.markdown
@@ -8,7 +8,21 @@ description: |-

# aws\_db\_instance

-Provides an RDS instance resource.
+Provides an RDS instance resource. A DB instance is an isolated database
+environment in the cloud. A DB instance can contain multiple user-created
+databases.
+
+Changes to a DB instance can occur when you manually change a
+parameter, such as `allocated_storage`, and are reflected in the next maintenance
+window. Because of this, Terraform may report a difference in its planning
+phase because a modification has not yet taken place. You can use the
+`apply_immediately` flag to instruct the service to apply the change immediately
+(see documentation below).
+
+~> **Note:** Using `apply_immediately` can result in a
+brief downtime as the server reboots. See the AWS Docs on [RDS Maintenance][2]
+for more information.
+

## Example Usage

@@ -30,12 +44,12 @@ resource "aws_db_instance" "default" {

## Argument Reference

For more detailed documentation about each argument, refer to
-the [AWS official documentation](http://docs.aws.amazon.com/AmazonRDS/latest/CommandLineReference/CLIReference-cmd-ModifyDBInstance.html).
+the [AWS official documentation](https://docs.aws.amazon.com/AmazonRDS/latest/CommandLineReference/CLIReference-cmd-ModifyDBInstance.html).

The following arguments are supported:

-* `allocated_storage` - (Required) The allocated storage in gigabytes.
-* `engine` - (Required) The database engine to use.
+* `allocated_storage` - (Required unless a `snapshot_identifier` or `replicate_source_db` is provided) The allocated storage in gigabytes.
+* `engine` - (Required unless a `snapshot_identifier` or `replicate_source_db` is provided) The database engine to use. * `engine_version` - (Optional) The engine version to use. * `identifier` - (Required) The name of the RDS instance * `instance_class` - (Required) The instance type of the RDS instance. @@ -45,14 +59,15 @@ The following arguments are supported: * `final_snapshot_identifier` - (Optional) The name of your final DB snapshot when this DB instance is deleted. If omitted, no final snapshot will be made. +* `skip_final_snapshot` - (Optional) Determines whether a final DB snapshot is created before the DB instance is deleted. If true is specified, no DBSnapshot is created. If false is specified, a DB snapshot is created before the DB instance is deleted. Default is true. * `copy_tags_to_snapshot` – (Optional, boolean) On delete, copy all Instance `tags` to the final snapshot (if `final_snapshot_identifier` is specified). Default `false` * `name` - (Optional) The DB name to create. If omitted, no database is created initially. -* `password` - (Required) Password for the master DB user. Note that this may +* `password` - (Required unless a `snapshot_identifier` or `replicate_source_db` is provided) Password for the master DB user. Note that this may show up in logs, and it will be stored in the state file. -* `username` - (Required) Username for the master DB user. +* `username` - (Required unless a `snapshot_identifier` or `replicate_source_db` is provided) Username for the master DB user. * `availability_zone` - (Optional) The AZ for the RDS instance. * `backup_retention_period` - (Optional) The days to retain backups for. Must be `1` or greater to be a source for a [Read Replica][1]. @@ -61,27 +76,29 @@ the final snapshot (if `final_snapshot_identifier` is specified). Default storage_type of "io1". * `maintenance_window` - (Optional) The window to perform maintenance in. Syntax: "ddd:hh24:mi-ddd:hh24:mi". Eg: "Mon:00:00-Mon:03:00". - See [RDS Maintenance Window docs](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/AdjustingTheMaintenanceWindow.html) for more. + See [RDS Maintenance Window docs](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/AdjustingTheMaintenanceWindow.html) for more. * `multi_az` - (Optional) Specifies if the RDS instance is multi-AZ * `port` - (Optional) The port on which the DB accepts connections. * `publicly_accessible` - (Optional) Bool to control if instance is publicly accessible. * `vpc_security_group_ids` - (Optional) List of VPC security groups to associate. * `security_group_names` - (Optional/Deprecated) List of DB Security Groups to associate. - Only used for [DB Instances on the _EC2-Classic_ Platform](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.html#USER_VPC.FindDefaultVPC). + Only used for [DB Instances on the _EC2-Classic_ Platform](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.html#USER_VPC.FindDefaultVPC). * `db_subnet_group_name` - (Optional) Name of DB subnet group. DB instance will be created in the VPC associated with the DB subnet group. If unspecified, will be created in the `default` VPC, or in EC2 Classic, if available. * `parameter_group_name` - (Optional) Name of the DB parameter group to associate. * `storage_encrypted` - (Optional) Specifies whether the DB instance is encrypted. The default is `false` if not specified. * `apply_immediately` - (Optional) Specifies whether any database modifications are applied immediately, or during the next maintenance window. 
Default is - `false`. See [Amazon RDS Documentation for more information.](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.DBInstance.Modifying.html) + `false`. See [Amazon RDS Documentation for more information.](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.DBInstance.Modifying.html) * `replicate_source_db` - (Optional) Specifies that this resource is a Replicate database, and to use this value as the source database. This correlates to the `identifier` of another Amazon RDS Database to replicate. See [DB Instance Replication][1] and -[Working with PostgreSQL and MySQL Read Replicas](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html) for +[Working with PostgreSQL and MySQL Read Replicas](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html) for more information on using Replication. * `snapshot_identifier` - (Optional) Specifies whether or not to create this database from a snapshot. This correlates to the snapshot ID you'd find in the RDS console, e.g: rds:production-2015-06-26-06-05. * `license_model` - (Optional, but required for some DB engines, i.e. Oracle SE1) License model information for this DB instance. +* `auto_minor_version_upgrade` - (Optional) Indicates that minor engine upgrades will be applied automatically to the DB instance during the maintenance window. Defaults to true. +* `allow_major_version_upgrade` - (Optional) Indicates that major version upgrades are allowed. Changing this parameter does not result in an outage and the change is asynchronously applied as soon as possible. ~> **NOTE:** Removing the `replicate_source_db` attribute from an existing RDS Replicate database managed by Terraform will promote the database to a fully @@ -93,6 +110,7 @@ The following attributes are exported: * `id` - The RDS instance ID. * `address` - The address of the RDS instance. +* `arn` - The ARN of the RDS instance. * `allocated_storage` - The amount of allocated storage * `availability_zone` - The availability zone of the instance * `backup_retention_period` - The backup retention period @@ -109,4 +127,5 @@ The following attributes are exported: * `username` - The master username for the database * `storage_encrypted` - Specifies whether the DB instance is encrypted -[1]: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Replication.html +[1]: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Replication.html +[2]: https://docs.aws.amazon.com/fr_fr/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.Maintenance.html diff --git a/website/source/docs/providers/aws/r/db_parameter_group.html.markdown b/website/source/docs/providers/aws/r/db_parameter_group.html.markdown index 41e2f7b860..8fa5b3b6c5 100644 --- a/website/source/docs/providers/aws/r/db_parameter_group.html.markdown +++ b/website/source/docs/providers/aws/r/db_parameter_group.html.markdown @@ -36,6 +36,7 @@ The following arguments are supported: * `family` - (Required) The family of the DB parameter group. * `description` - (Required) The description of the DB parameter group. * `parameter` - (Optional) A list of DB parameters to apply. +* `tags` - (Optional) A mapping of tags to assign to the resource. Parameter blocks support the following: @@ -50,3 +51,4 @@ Parameter blocks support the following: The following attributes are exported: * `id` - The db parameter group name. +* `arn` - The ARN of the db parameter group. 
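Since the hunk above adds `tags` and an `arn` attribute to `aws_db_parameter_group`, here is a minimal sketch of a tagged group; the parameter and tag values are illustrative:

```
resource "aws_db_parameter_group" "default" {
  name        = "mysql-params"
  family      = "mysql5.6"
  description = "Managed by Terraform"

  parameter {
    name  = "character_set_server"
    value = "utf8"
  }

  # Tags are newly supported per the change above.
  tags {
    Environment = "staging"
  }
}
```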
diff --git a/website/source/docs/providers/aws/r/db_security_group.html.markdown b/website/source/docs/providers/aws/r/db_security_group.html.markdown index 7a92426778..72b969bee7 100644 --- a/website/source/docs/providers/aws/r/db_security_group.html.markdown +++ b/website/source/docs/providers/aws/r/db_security_group.html.markdown @@ -32,7 +32,8 @@ The following arguments are supported: * `name` - (Required) The name of the DB security group. * `description` - (Required) The description of the DB security group. -* `ingress` - (Optional) A list of ingress rules. +* `ingress` - (Required) A list of ingress rules. +* `tags` - (Optional) A mapping of tags to assign to the resource. Ingress blocks support the following: @@ -47,4 +48,5 @@ Ingress blocks support the following: The following attributes are exported: * `id` - The db security group ID. +* `arn` - The ARN of the DB security group. diff --git a/website/source/docs/providers/aws/r/db_subnet_group.html.markdown b/website/source/docs/providers/aws/r/db_subnet_group.html.markdown index e3dcd18ed9..1a539ffa2b 100644 --- a/website/source/docs/providers/aws/r/db_subnet_group.html.markdown +++ b/website/source/docs/providers/aws/r/db_subnet_group.html.markdown @@ -37,4 +37,5 @@ The following arguments are supported: The following attributes are exported: * `id` - The db subnet group name. +* `arn` - The ARN of the db subnet group. diff --git a/website/source/docs/providers/aws/r/directory_service_directory.html.markdown b/website/source/docs/providers/aws/r/directory_service_directory.html.markdown index 04049ee553..83f07649b1 100644 --- a/website/source/docs/providers/aws/r/directory_service_directory.html.markdown +++ b/website/source/docs/providers/aws/r/directory_service_directory.html.markdown @@ -8,7 +8,7 @@ description: |- # aws\_directory\_service\_directory -Provides a directory in AWS Directory Service. +Provides a Simple or Managed Microsoft directory in AWS Directory Service. ## Example Usage @@ -45,24 +45,32 @@ resource "aws_subnet" "bar" { The following arguments are supported: * `name` - (Required) The fully qualified name for the directory, such as `corp.example.com` -* `password` - (Required) The password for the directory administrator. -* `size` - (Required) The size of the directory (`Small` or `Large` are accepted values). -* `vpc_settings` - (Required) VPC related information about the directory. Fields documented below. +* `password` - (Required) The password for the directory administrator or connector user. +* `size` - (Required for `SimpleAD` and `ADConnector`) The size of the directory (`Small` or `Large` are accepted values). +* `vpc_settings` - (Required for `SimpleAD` and `MicrosoftAD`) VPC related information about the directory. Fields documented below. +* `connect_settings` - (Required for `ADConnector`) Connector related information about the directory. Fields documented below. * `alias` - (Optional) The alias for the directory (must be unique amongst all aliases in AWS). Required for `enable_sso`. * `description` - (Optional) A textual description for the directory. * `short_name` - (Optional) The short name of the directory, such as `CORP`. * `enable_sso` - (Optional) Whether to enable single-sign on for the directory. Requires `alias`. Defaults to `false`. +* `type` - (Optional) The directory type (`SimpleAD` or `MicrosoftAD` are accepted values). Defaults to `SimpleAD`. **vpc\_settings** supports the following: * `subnet_ids` - (Required) The identifiers of the subnets for the directory servers (min.
2 subnets in 2 different AZs). * `vpc_id` - (Required) The identifier of the VPC that the directory is in. +**connect\_settings** supports the following: + +* `customer_username` - (Required) The username corresponding to the password provided. +* `customer_dns_ips` - (Required) The DNS IP addresses of the domain to connect to. +* `subnet_ids` - (Required) The identifiers of the subnets for the directory servers (min. 2 subnets in 2 different AZs). +* `vpc_id` - (Required) The identifier of the VPC that the directory is in. + ## Attributes Reference The following attributes are exported: * `id` - The directory identifier. * `access_url` - The access URL for the directory, such as `http://alias.awsapps.com`. -* `dns_ip_addresses` - A list of IP addresses of the DNS servers for the directory. -* `type` - The directory type. +* `dns_ip_addresses` - A list of IP addresses of the DNS servers for the directory or connector. diff --git a/website/source/docs/providers/aws/r/dynamodb_table.html.markdown b/website/source/docs/providers/aws/r/dynamodb_table.html.markdown index 5bab941974..cf84fcc033 100644 --- a/website/source/docs/providers/aws/r/dynamodb_table.html.markdown +++ b/website/source/docs/providers/aws/r/dynamodb_table.html.markdown @@ -13,7 +13,7 @@ Provides a DynamoDB table resource ## Example Usage The following dynamodb table description models the table and GSI shown -in the [AWS SDK example documentation](http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GSI.html) +in the [AWS SDK example documentation](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GSI.html) ``` resource "aws_dynamodb_table" "basic-dynamodb-table" { @@ -72,8 +72,9 @@ For both `local_secondary_index` and `global_secondary_index` objects, the following properties are supported: * `name` - (Required) The name of the LSI or GSI -* `hash_key` - (Required) The name of the hash key in the index; must be - defined as an attribute in the resource +* `hash_key` - (Required for GSI) The name of the hash key in the index; must be +defined as an attribute in the resource. Only applies to + `global_secondary_index` * `range_key` - (Required) The name of the range key; must be defined * `projection_type` - (Required) One of "ALL", "INCLUDE" or "KEYS_ONLY" where *ALL* projects every attribute into the index, *KEYS_ONLY* @@ -83,6 +84,8 @@ parameter. * `non_key_attributes` - (Optional) Only required with *INCLUDE* as a projection type; a list of attributes to project into the index. These do not need to be defined as attributes on the table. +* `stream_enabled` - (Optional) Indicates whether Streams is to be enabled (true) or disabled (false). +* `stream_view_type` - (Optional) When an item in the table is modified, StreamViewType determines what information is written to the table's stream. Valid values are KEYS_ONLY, NEW_IMAGE, OLD_IMAGE, NEW_AND_OLD_IMAGES. For `global_secondary_index` objects only, you need to specify `write_capacity` and `read_capacity` in the same way you would for the diff --git a/website/source/docs/providers/aws/r/ebs_volume.html.md b/website/source/docs/providers/aws/r/ebs_volume.html.md index 00bb639a6a..78d902b3e0 100644 --- a/website/source/docs/providers/aws/r/ebs_volume.html.md +++ b/website/source/docs/providers/aws/r/ebs_volume.html.md @@ -14,7 +14,7 @@ Manages a single EBS volume. 
``` resource "aws_ebs_volume" "example" { - availability_zone = "us-west-1a" + availability_zone = "us-west-2a" size = 40 tags { Name = "HelloWorld" @@ -31,7 +31,7 @@ The following arguments are supported: * `iops` - (Optional) The amount of IOPS to provision for the disk. * `size` - (Optional) The size of the drive in GB. * `snapshot_id` (Optional) A snapshot to base the EBS volume off of. -* `type` - (Optional) The type of EBS volume. +* `type` - (Optional) The type of EBS volume. Can be "standard", "gp2", or "io1". (Default: "standard"). * `kms_key_id` - (Optional) The KMS key ID for the volume. * `tags` - (Optional) A mapping of tags to assign to the resource. diff --git a/website/source/docs/providers/aws/r/ecr_repository.html.markdown b/website/source/docs/providers/aws/r/ecr_repository.html.markdown new file mode 100644 index 0000000000..e90b796da5 --- /dev/null +++ b/website/source/docs/providers/aws/r/ecr_repository.html.markdown @@ -0,0 +1,39 @@ +--- +layout: "aws" +page_title: "AWS: aws_ecr_repository" +sidebar_current: "docs-aws-resource-ecr-repository" +description: |- + Provides an EC2 Container Registry Repository. +--- + +# aws\_ecr\_repository + +Provides an EC2 Container Registry Repository. + +~> **NOTE on ECR Availability**: The EC2 Container Registry has an [initial +launch region of +`us-east-1`](https://aws.amazon.com/blogs/aws/ec2-container-registry-now-generally-available/). +As more regions become available, they will be listed [in the AWS +Docs](https://docs.aws.amazon.com/general/latest/gr/rande.html#ecr_region) + +## Example Usage + +``` +resource "aws_ecr_repository" "foo" { + name = "bar" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `name` - (Required) Name of the repository. + +## Attributes Reference + +The following attributes are exported: + +* `arn` - Full ARN of the repository. +* `name` - The name of the repository. +* `registry_id` - The registry ID where the repository was created. diff --git a/website/source/docs/providers/aws/r/ecr_repository_policy.html.markdown b/website/source/docs/providers/aws/r/ecr_repository_policy.html.markdown new file mode 100644 index 0000000000..59bb6fb066 --- /dev/null +++ b/website/source/docs/providers/aws/r/ecr_repository_policy.html.markdown @@ -0,0 +1,73 @@ +--- +layout: "aws" +page_title: "AWS: aws_ecr_repository_policy" +sidebar_current: "docs-aws-resource-ecr-repository-policy" +description: |- + Provides an ECR Repository Policy. +--- + +# aws\_ecr\_repository\_policy + +Provides an ECR repository policy. + +Note that currently only one policy may be applied to a repository. + +~> **NOTE on ECR Availability**: The EC2 Container Registry has an [initial +launch region of +`us-east-1`](https://aws.amazon.com/blogs/aws/ec2-container-registry-now-generally-available/). +As more regions become available, they will be listed [in the AWS +Docs](https://docs.aws.amazon.com/general/latest/gr/rande.html#ecr_region) + +## Example Usage + +``` +resource "aws_ecr_repository" "foo" { + name = "bar" +} + +resource "aws_ecr_repository_policy" "foopolicy" { + repository = "${aws_ecr_repository.foo.name}" + policy = < **NOTE:** You can specify either the `instance` ID or the `network_interface` ID, +~> **NOTE:** You can specify either the `instance` ID or the `network_interface` ID, but not both. Including both will **not** return an error from the AWS API, but will have undefined behavior. See the relevant [AssociateAddress API Call][1] for more information. 
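A short sketch of that either/or rule, assuming an `aws_instance.web` and an `aws_network_interface.eni` defined elsewhere; each EIP sets exactly one of the two arguments:

```
# Associate the EIP with an instance...
resource "aws_eip" "for_instance" {
  vpc      = true
  instance = "${aws_instance.web.id}"
}

# ...or with a network interface, but never both on the same resource.
resource "aws_eip" "for_eni" {
  vpc               = true
  network_interface = "${aws_network_interface.eni.id}"
}
```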
@@ -36,10 +36,10 @@ more information. The following attributes are exported: +* `id` - Contains the EIP allocation ID. * `private_ip` - Contains the private IP address (if in VPC). * `public_ip` - Contains the public IP address. * `instance` - Contains the ID of the attached instance. * `network_interface` - Contains the ID of the attached network interface. - -[1]: http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_AssociateAddress.html +[1]: https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_AssociateAddress.html diff --git a/website/source/docs/providers/aws/r/elasticache_cluster.html.markdown b/website/source/docs/providers/aws/r/elasticache_cluster.html.markdown index e39d6172a7..5578f19e8d 100644 --- a/website/source/docs/providers/aws/r/elasticache_cluster.html.markdown +++ b/website/source/docs/providers/aws/r/elasticache_cluster.html.markdown @@ -10,6 +10,17 @@ description: |- Provides an ElastiCache Cluster resource. +Changes to a Cache Cluster can occur when you manually change a +parameter, such as `node_type`, and are reflected in the next maintenance +window. Because of this, Terraform may report a difference in its planning +phase because a modification has not yet taken place. You can use the +`apply_immediately` flag to instruct the service to apply the change immediately +(see documentation below). + +~> **Note:** using `apply_immediately` can result in a +brief downtime as the server reboots. See the AWS Docs on +[Modifying an ElastiCache Cache Cluster][2] for more information. + ## Example Usage ``` @@ -34,20 +45,21 @@ The following arguments are supported: Valid values for this parameter are `memcached` or `redis` * `engine_version` – (Optional) Version number of the cache engine to be used. -See [Selecting a Cache Engine and Version](http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/SelectEngine.html) +See [Selecting a Cache Engine and Version](https://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/SelectEngine.html) in the AWS Documentation center for supported versions -* `maintenance_window` – (Optional) Specifies the weekly time range which maintenance -on the cache cluster is performed. The format is `ddd:hh24:mi-ddd:hh24:mi` (24H Clock UTC). +* `maintenance_window` – (Optional) Specifies the weekly time range which maintenance +on the cache cluster is performed. The format is `ddd:hh24:mi-ddd:hh24:mi` (24H Clock UTC). The minimum maintenance window is a 60 minute period. Example: `sun:05:00-sun:09:00` * `node_type` – (Required) The compute and memory capacity of the nodes. See -[Available Cache Node Types](http://aws.amazon.com/elasticache/details#Available_Cache_Node_Types) for +[Available Cache Node Types](https://aws.amazon.com/elasticache/details#Available_Cache_Node_Types) for supported node types * `num_cache_nodes` – (Required) The initial number of cache nodes that the cache cluster will have. For Redis, this value must be 1. For Memcache, this -value must be between 1 and 20 +value must be between 1 and 20. If this number is reduced on subsequent runs, +the highest numbered nodes will be removed. * `parameter_group_name` – (Required) Name of the parameter group to associate with this cache cluster @@ -69,23 +81,29 @@ names to associate with this cache cluster `false`. See [Amazon ElastiCache Documentation for more information.][1] (Available since v0.6.0) -* `snapshot_arns` – (Optional) A single-element string list containing an -Amazon Resource Name (ARN) of a Redis RDB snapshot file stored in Amazon S3.
+* `snapshot_arns` – (Optional) A single-element string list containing an +Amazon Resource Name (ARN) of a Redis RDB snapshot file stored in Amazon S3. Example: `arn:aws:s3:::my_bucket/snapshot1.rdb` -* `snapshot_window` - (Optional) The daily time range (in UTC) during which ElastiCache will +* `snapshot_window` - (Optional) The daily time range (in UTC) during which ElastiCache will begin taking a daily snapshot of your cache cluster. Can only be used for the Redis engine. Example: 05:00-09:00 -* `snapshow_retention_limit` - (Optional) The number of days for which ElastiCache will -retain automatic cache cluster snapshots before deleting them. For example, if you set -SnapshotRetentionLimit to 5, then a snapshot that was taken today will be retained for 5 days -before being deleted. If the value of SnapshotRetentionLimit is set to zero (0), backups are turned off. +* `snapshot_retention_limit` - (Optional) The number of days for which ElastiCache will +retain automatic cache cluster snapshots before deleting them. For example, if you set +SnapshotRetentionLimit to 5, then a snapshot that was taken today will be retained for 5 days +before being deleted. If the value of SnapshotRetentionLimit is set to zero (0), backups are turned off. Can only be used for the Redis engine. -* `notification_topic_arn` – (Optional) An Amazon Resource Name (ARN) of an -SNS topic to send ElastiCache notifications to. Example: +* `notification_topic_arn` – (Optional) An Amazon Resource Name (ARN) of an +SNS topic to send ElastiCache notifications to. Example: `arn:aws:sns:us-east-1:012345678999:my_sns_topic` +* `az_mode` - (Optional, Memcached only) Specifies whether the nodes in this Memcached node group are created in a single Availability Zone or created across multiple Availability Zones in the cluster's region. Valid values for this parameter are `single-az` or `cross-az`, default is `single-az`. If you want to choose `cross-az`, `num_cache_nodes` must be greater than `1`. + +* `availability_zone` - (Optional) The AZ for the cache cluster. If you want to create cache nodes in multi-az, use `availability_zones`. + +* `availability_zones` - (Optional, Memcached only) List of AZs in which the cache nodes will be created. If you want to create cache nodes in single-az, use `availability_zone`. + * `tags` - (Optional) A mapping of tags to assign to the resource. ~> **NOTE:** Snapshotting functionality is not compatible with t2 instance types. @@ -94,9 +112,10 @@ SNS topic to send ElastiCache notifications to. Example: The following attributes are exported: -* `cache_nodes` - List of node objects including `id`, `address` and `port`. +* `cache_nodes` - List of node objects including `id`, `address`, `port` and `availability_zone`. Referenceable e.g.
as `${aws_elasticache_cluster.bar.cache_nodes.0.address}` - + * `configuration_endpoint` - (Memcached only) The configuration endpoint to allow host discovery -[1]: http://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_ModifyCacheCluster.html +[1]: https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_ModifyCacheCluster.html +[2]: https://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/Clusters.Modify.html diff --git a/website/source/docs/providers/aws/r/elb.html.markdown b/website/source/docs/providers/aws/r/elb.html.markdown index dde90e54d7..997d7274d1 100644 --- a/website/source/docs/providers/aws/r/elb.html.markdown +++ b/website/source/docs/providers/aws/r/elb.html.markdown @@ -120,5 +120,5 @@ The following attributes are exported: instances. Use this for Classic or Default VPC only. * `source_security_group_id` - The ID of the security group that you can use as part of your inbound rules for your load balancer's back-end application - instances. Only available on ELBs launch in a VPC. + instances. Only available on ELBs launched in a VPC. * `zone_id` - The canonical hosted zone ID of the ELB (to be used in a Route 53 Alias record) diff --git a/website/source/docs/providers/aws/r/glacier_vault.html.markdown b/website/source/docs/providers/aws/r/glacier_vault.html.markdown index d783c02263..c0b6d8685e 100644 --- a/website/source/docs/providers/aws/r/glacier_vault.html.markdown +++ b/website/source/docs/providers/aws/r/glacier_vault.html.markdown @@ -8,7 +8,7 @@ description: |- # aws\_glacier\_vault -Provides a Glacier Vault Resource. You can refer to the [Glacier Developer Guide](http://docs.aws.amazon.com/amazonglacier/latest/dev/working-with-vaults.html) for a full explanation of the Glacier Vault functionality +Provides a Glacier Vault Resource. You can refer to the [Glacier Developer Guide](https://docs.aws.amazon.com/amazonglacier/latest/dev/working-with-vaults.html) for a full explanation of the Glacier Vault functionality ~> **NOTE:** When removing a Glacier Vault, the Vault must be empty. diff --git a/website/source/docs/providers/aws/r/iam_group.html.markdown b/website/source/docs/providers/aws/r/iam_group.html.markdown index 458234ac71..692dc3d498 100644 --- a/website/source/docs/providers/aws/r/iam_group.html.markdown +++ b/website/source/docs/providers/aws/r/iam_group.html.markdown @@ -36,4 +36,4 @@ The following attributes are exported: * `path` - The path of the group in IAM. * `unique_id` - The [unique ID][1] assigned by AWS.
- [1]: http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html#GUIDs + [1]: https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html#GUIDs diff --git a/website/source/docs/providers/aws/r/iam_role.html.markdown b/website/source/docs/providers/aws/r/iam_role.html.markdown index d7292a9a73..7a5d0df171 100644 --- a/website/source/docs/providers/aws/r/iam_role.html.markdown +++ b/website/source/docs/providers/aws/r/iam_role.html.markdown @@ -40,7 +40,7 @@ The following arguments are supported: * `name` - (Required) The name of the role. * `assume_role_policy` - (Required) The policy that grants an entity permission to assume the role. * `path` - (Optional) The path to the role. - See [IAM Identifiers](http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html) for more information. + See [IAM Identifiers](https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html) for more information. ## Attributes Reference diff --git a/website/source/docs/providers/aws/r/iam_server_certificate.html.markdown b/website/source/docs/providers/aws/r/iam_server_certificate.html.markdown index 820bc6f896..61b49a96be 100644 --- a/website/source/docs/providers/aws/r/iam_server_certificate.html.markdown +++ b/website/source/docs/providers/aws/r/iam_server_certificate.html.markdown @@ -91,6 +91,8 @@ The following arguments are supported: AWS CloudFront, the path must be in format `/cloudfront/your_path_here`. See [IAM Identifiers][1] for more details on IAM Paths. +~> **NOTE:** AWS performs behind-the-scenes modifications to some certificate files if they do not adhere to a specific format. These modifications will result in Terraform forever believing that it needs to update the resources since the local and AWS file contents will not match after these modifications occur. In order to prevent this from happening you must ensure that all your PEM-encoded files use UNIX line-breaks and that `certificate_body` contains only one certificate. All other certificates should go in `certificate_chain`. It is common for some Certificate Authorities to issue certificate files that have DOS line-breaks and that are actually multiple certificates concatenated together in order to form a full certificate chain. + ## Attributes Reference * `id` - The unique Server Certificate name @@ -98,5 +100,5 @@ The following arguments are supported: * `arn` - The Amazon Resource Name (ARN) specifying the server certificate. -[1]: http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html -[2]: http://docs.aws.amazon.com/IAM/latest/UserGuide/ManagingServerCerts.html +[1]: https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html +[2]: https://docs.aws.amazon.com/IAM/latest/UserGuide/ManagingServerCerts.html diff --git a/website/source/docs/providers/aws/r/iam_user.html.markdown b/website/source/docs/providers/aws/r/iam_user.html.markdown index a8b21c29a8..ef53316e80 100644 --- a/website/source/docs/providers/aws/r/iam_user.html.markdown +++ b/website/source/docs/providers/aws/r/iam_user.html.markdown @@ -56,4 +56,4 @@ The following attributes are exported: * `unique_id` - The [unique ID][1] assigned by AWS. * `arn` - The ARN assigned by AWS for this user.
- [1]: http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html#GUIDs + [1]: https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html#GUIDs diff --git a/website/source/docs/providers/aws/r/instance.html.markdown b/website/source/docs/providers/aws/r/instance.html.markdown index e9e8356454..2cca6e8196 100644 --- a/website/source/docs/providers/aws/r/instance.html.markdown +++ b/website/source/docs/providers/aws/r/instance.html.markdown @@ -14,11 +14,15 @@ and deleted. Instances also support [provisioning](/docs/provisioners/index.html ## Example Usage ``` -# Create a new instance of the ami-1234 on an m1.small node -# with an AWS Tag naming it "HelloWorld" +# Create a new instance of the `ami-408c7f28` (Ubuntu 14.04) on a +# t1.micro node with an AWS Tag naming it "HelloWorld" +provider "aws" { + region = "us-east-1" +} + resource "aws_instance" "web" { - ami = "ami-1234" - instance_type = "m1.small" + ami = "ami-408c7f28" + instance_type = "t1.micro" tags { Name = "HelloWorld" } @@ -40,7 +44,7 @@ The following arguments are supported: * `instance_initiated_shutdown_behavior` - (Optional) Shutdown behavior for the instance. Amazon defaults this to `stop` for EBS-backed instances and `terminate` for instance-store instances. Cannot be set on instance-store -instances. See [Shutdown Behavior](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/terminating-instances.html#Using_ChangingInstanceInitiatedShutdownBehavior) for more information. +instances. See [Shutdown Behavior](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/terminating-instances.html#Using_ChangingInstanceInitiatedShutdownBehavior) for more information. * `instance_type` - (Required) The type of instance to start * `key_name` - (Optional) The key name to use for the instance. * `monitoring` - (Optional) If true, the launched EC2 instance will have detailed monitoring enabled. (Available since v0.6.0) @@ -70,7 +74,7 @@ instances. See [Shutdown Behavior](http://docs.aws.amazon.com/AWSEC2/latest/User Each of the `*_block_device` attributes controls a portion of the AWS Instance's "Block Device Mapping". It's a good idea to familiarize yourself with [AWS's Block Device -Mapping docs](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/block-device-mapping-concepts.html) +Mapping docs](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/block-device-mapping-concepts.html) to understand the implications of using these attributes. The `root_block_device` mapping supports the following: @@ -79,7 +83,7 @@ The `root_block_device` mapping supports the following: or `"io1"`. (Default: `"standard"`). * `volume_size` - (Optional) The size of the volume in gigabytes. * `iops` - (Optional) The amount of provisioned - [IOPS](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-io-characteristics.html). + [IOPS](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-io-characteristics.html). This must be set with a `volume_type` of `"io1"`. * `delete_on_termination` - (Optional) Whether the volume should be destroyed on instance termination (Default: `true`). @@ -95,12 +99,12 @@ Each `ebs_block_device` supports the following: or `"io1"`. (Default: `"standard"`). * `volume_size` - (Optional) The size of the volume in gigabytes. * `iops` - (Optional) The amount of provisioned - [IOPS](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-io-characteristics.html). + [IOPS](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-io-characteristics.html). This must be set with a `volume_type` of `"io1"`.
* `delete_on_termination` - (Optional) Whether the volume should be destroyed on instance termination (Default: `true`). * `encrypted` - (Optional) Enables [EBS - encryption](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html) + encryption](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html) on the volume (Default: `false`). Cannot be used with `snapshot_id`. Modifying any `ebs_block_device` currently requires resource replacement. @@ -109,12 +113,12 @@ Each `ephemeral_block_device` supports the following: * `device_name` - The name of the block device to mount on the instance. * `virtual_name` - The [Instance Store Device - Name](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html#InstanceStoreDeviceNames) + Name](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html#InstanceStoreDeviceNames) (e.g. `"ephemeral0"`) Each AWS Instance type has a different set of Instance Store block devices available for attachment. AWS [publishes a -list](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html#StorageOnInstanceTypes) +list](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html#StorageOnInstanceTypes) of which ephemeral devices are available on each type. The devices are always identified by the `virtual_name` in the format `"ephemeral{0..N}"`. diff --git a/website/source/docs/providers/aws/r/key_pair.html.markdown b/website/source/docs/providers/aws/r/key_pair.html.markdown index abd54ea111..df2e3200f4 100644 --- a/website/source/docs/providers/aws/r/key_pair.html.markdown +++ b/website/source/docs/providers/aws/r/key_pair.html.markdown @@ -8,11 +8,11 @@ description: |- # aws\_key\_pair -Provides an [EC2 key pair](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html) resource. A key pair is used to control login access to EC2 instances. +Provides an [EC2 key pair](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html) resource. A key pair is used to control login access to EC2 instances. Currently this resource only supports importing an existing key pair, not creating a new key pair. -When importing an existing key pair the public key material may be in any format supported by AWS. Supported formats (per the [AWS documentation](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html#how-to-generate-your-own-key-and-import-it-to-aws)) are: +When importing an existing key pair the public key material may be in any format supported by AWS. Supported formats (per the [AWS documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html#how-to-generate-your-own-key-and-import-it-to-aws)) are: * OpenSSH public key format (the format in ~/.ssh/authorized_keys) * Base64 encoded DER format diff --git a/website/source/docs/providers/aws/r/kinesis_firehose_delivery_stream.html.markdown b/website/source/docs/providers/aws/r/kinesis_firehose_delivery_stream.html.markdown index 61a6c62b59..3cfa4c5258 100644 --- a/website/source/docs/providers/aws/r/kinesis_firehose_delivery_stream.html.markdown +++ b/website/source/docs/providers/aws/r/kinesis_firehose_delivery_stream.html.markdown @@ -53,7 +53,7 @@ resource "aws_kinesis_firehose_delivery_stream" "test_stream" { The following arguments are supported: -* `name` - (Required) A name to identify the stream. This is unique to the +* `name` - (Required) A name to identify the stream. This is unique to the AWS account and region the Stream is created in. 
* `destination` – (Required) This is the destination to where the data is delivered. The only options are `s3` & `redshift` * `role_arn` - (Required) The ARN of the AWS credentials. @@ -62,11 +62,11 @@ AWS account and region the Stream is created in. * `s3_buffer_size` - (Optional) Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5. We recommend setting SizeInMBs to a value greater than the amount of data you typically ingest into the delivery stream in 10 seconds. For example, if you typically ingest data at 1 MB/sec set SizeInMBs to be 10 MB or higher * `s3_buffer_interval` - (Optional) Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 300 -* `s3_data_compression` - (Optional) The compression format. If no value is specified, the default is NOCOMPRESSION. Other supported values are GZIP, ZIP & Snappy +* `s3_data_compression` - (Optional) The compression format. If no value is specified, the default is NOCOMPRESSION. Other supported values are GZIP, ZIP & Snappy ## Attributes Reference * `arn` - The Amazon Resource Name (ARN) specifying the Stream -[1]: http://aws.amazon.com/documentation/firehose/ +[1]: https://aws.amazon.com/documentation/firehose/ diff --git a/website/source/docs/providers/aws/r/kinesis_stream.html.markdown b/website/source/docs/providers/aws/r/kinesis_stream.html.markdown index b1ef962997..b46752a00f 100644 --- a/website/source/docs/providers/aws/r/kinesis_stream.html.markdown +++ b/website/source/docs/providers/aws/r/kinesis_stream.html.markdown @@ -44,5 +44,5 @@ when creating a Kinesis stream. See [Amazon Kinesis Streams][2] for more. * `arn` - The Amazon Resource Name (ARN) specifying the Stream -[1]: http://aws.amazon.com/documentation/kinesis/ -[2]: http://docs.aws.amazon.com/kinesis/latest/dev/amazon-kinesis-streams.html +[1]: https://aws.amazon.com/documentation/kinesis/ +[2]: https://docs.aws.amazon.com/kinesis/latest/dev/amazon-kinesis-streams.html diff --git a/website/source/docs/providers/aws/r/lambda_alias.html.markdown b/website/source/docs/providers/aws/r/lambda_alias.html.markdown new file mode 100644 index 0000000000..9c259f1642 --- /dev/null +++ b/website/source/docs/providers/aws/r/lambda_alias.html.markdown @@ -0,0 +1,35 @@ +--- +layout: "aws" +page_title: "AWS: aws_lambda_alias" +sidebar_current: "docs-aws-resource-aws-lambda-alias" +description: |- + Creates a Lambda function alias. +--- + +# aws\_lambda\_alias + +Creates a Lambda function alias that points to the specified Lambda function version. + +For information about Lambda and how to use it, see [What is AWS Lambda?][1] +For information about function aliases, see [CreateAlias][2] in the API docs. + +## Example Usage + +``` +resource "aws_lambda_alias" "test_alias" { + name = "testalias" + description = "a sample description" + function_name = "${aws_lambda_function.lambda_function_test.arn}" + function_version = "$LATEST" +} +``` + +## Argument Reference + +* `name` - (Required) Name for the alias you are creating. Pattern: `(?!^[0-9]+$)([a-zA-Z0-9-_]+)` +* `description` - (Optional) Description of the alias. +* `function_name` - (Required) The function ARN of the Lambda function for which you want to create an alias. +* `function_version` - (Required) Lambda function version for which you are creating the alias. Pattern: `(\$LATEST|[0-9]+)`.
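Besides tracking `$LATEST` as in the example above, the `(\$LATEST|[0-9]+)` pattern also allows pinning an alias to a published, immutable version. A hypothetical sketch, assuming version `3` of the function has been published:

```
resource "aws_lambda_alias" "prod" {
  name             = "prod"
  description      = "points at a published version rather than $LATEST"
  function_name    = "${aws_lambda_function.lambda_function_test.arn}"
  function_version = "3"
}
```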
+ +[1]: http://docs.aws.amazon.com/lambda/latest/dg/welcome.html +[2]: http://docs.aws.amazon.com/lambda/latest/dg/API_CreateAlias.html diff --git a/website/source/docs/providers/aws/r/lambda_event_source_mapping.html.markdown b/website/source/docs/providers/aws/r/lambda_event_source_mapping.html.markdown new file mode 100644 index 0000000000..cf31e4b13c --- /dev/null +++ b/website/source/docs/providers/aws/r/lambda_event_source_mapping.html.markdown @@ -0,0 +1,47 @@ +--- +layout: "aws" +page_title: "AWS: aws_lambda_event_source_mapping" +sidebar_current: "docs-aws-resource-aws-lambda-event-source-mapping" +description: |- + Provides a Lambda event source mapping. This allows Lambda functions to get events from Kinesis and DynamoDB. +--- + +# aws\_lambda\_event\_source\_mapping + +Provides a Lambda event source mapping. This allows Lambda functions to get events from Kinesis and DynamoDB. + +For information about Lambda and how to use it, see [What is AWS Lambda?][1] +For information about event source mappings, see [CreateEventSourceMapping][2] in the API docs. + +## Example Usage + +``` +resource "aws_lambda_event_source_mapping" "event_source_mapping" { + batch_size = 100 + event_source_arn = "arn:aws:kinesis:REGION:123456789012:stream/stream_name" + enabled = true + function_name = "arn:aws:lambda:REGION:123456789012:function:function_name" + starting_position = "TRIM_HORIZON|LATEST" +} +``` + +## Argument Reference + +* `batch_size` - (Optional) The largest number of records that Lambda will retrieve from your event source at the time of invocation. Defaults to `100`. +* `event_source_arn` - (Required) The event source ARN - can either be a Kinesis or DynamoDB stream. +* `enabled` - (Optional) Determines if the mapping will be enabled on creation. Defaults to `true`. +* `function_name` - (Required) The name or the ARN of the Lambda function that will be subscribing to events. +* `starting_position` - (Required) The position in the stream where AWS Lambda should start reading. Can be either `TRIM_HORIZON` or `LATEST`. + +## Attributes Reference + +* `function_arn` - The ARN of the Lambda function the event source mapping is sending events to. (Note: this is a computed value that differs from `function_name` above.) +* `last_modified` - The date this resource was last modified. +* `last_processing_result` - The result of the last AWS Lambda invocation of your Lambda function. +* `state` - The state of the event source mapping. +* `state_transition_reason` - The reason the event source mapping is in its current state. +* `uuid` - The UUID of the created event source mapping. + + +[1]: http://docs.aws.amazon.com/lambda/latest/dg/welcome.html +[2]: http://docs.aws.amazon.com/lambda/latest/dg/API_CreateEventSourceMapping.html diff --git a/website/source/docs/providers/aws/r/lambda_function.html.markdown b/website/source/docs/providers/aws/r/lambda_function.html.markdown index f9c1ea4a3f..5bc79c0fc5 100644 --- a/website/source/docs/providers/aws/r/lambda_function.html.markdown +++ b/website/source/docs/providers/aws/r/lambda_function.html.markdown @@ -53,7 +53,7 @@ resource "aws_lambda_function" "test_lambda" { * `role` - (Required) IAM role attached to the Lambda Function. This governs both who / what can invoke your Lambda Function, as well as what resources our Lambda Function has access to. See [Lambda Permission Model][4] for more details. * `description` - (Optional) Description of what your Lambda Function does.
* `memory_size` - (Optional) Amount of memory in MB your Lambda Function can use at runtime. Defaults to `128`. See [Limits][5] -* `runtime` - (Optional) Defaults to `nodejs`. +* `runtime` - (Optional) Defaults to `nodejs`. See [Runtimes][6] for valid values. * `timeout` - (Optional) The amount of time your Lambda Function has to run in seconds. Defaults to `3`. See [Limits][5] ## Attributes Reference @@ -61,9 +61,9 @@ resource "aws_lambda_function" "test_lambda" { * `arn` - The Amazon Resource Name (ARN) identifying your Lambda Function. * `last_modified` - The date this resource was last modified. - -[1]: http://docs.aws.amazon.com/lambda/latest/dg/welcome.html -[2]: http://docs.aws.amazon.com/lambda/latest/dg/walkthrough-s3-events-adminuser-create-test-function-create-function.html -[3]: http://docs.aws.amazon.com/lambda/latest/dg/walkthrough-custom-events-create-test-function.html -[4]: http://docs.aws.amazon.com/lambda/latest/dg/intro-permission-model.html -[5]: http://docs.aws.amazon.com/lambda/latest/dg/limits.html +[1]: https://docs.aws.amazon.com/lambda/latest/dg/welcome.html +[2]: https://docs.aws.amazon.com/lambda/latest/dg/walkthrough-s3-events-adminuser-create-test-function-create-function.html +[3]: https://docs.aws.amazon.com/lambda/latest/dg/walkthrough-custom-events-create-test-function.html +[4]: https://docs.aws.amazon.com/lambda/latest/dg/intro-permission-model.html +[5]: https://docs.aws.amazon.com/lambda/latest/dg/limits.html +[6]: https://docs.aws.amazon.com/lambda/latest/dg/API_CreateFunction.html#API_CreateFunction_RequestBody diff --git a/website/source/docs/providers/aws/r/launch_configuration.html.markdown b/website/source/docs/providers/aws/r/launch_configuration.html.markdown index 413f1b4a1e..46c26a5bf1 100644 --- a/website/source/docs/providers/aws/r/launch_configuration.html.markdown +++ b/website/source/docs/providers/aws/r/launch_configuration.html.markdown @@ -15,8 +15,8 @@ Provides a resource to create a new launch configuration, used for autoscaling g ``` resource "aws_launch_configuration" "as_conf" { name = "web_config" - image_id = "ami-1234" - instance_type = "m1.small" + image_id = "ami-408c7f28" + instance_type = "t1.micro" } ``` @@ -33,8 +33,8 @@ with `name_prefix`. Example: ``` resource "aws_launch_configuration" "as_conf" { name_prefix = "terraform-lc-example-" - image_id = "ami-1234" - instance_type = "m1.small" + image_id = "ami-408c7f28" + instance_type = "t1.micro" lifecycle { create_before_destroy = true @@ -66,8 +66,8 @@ for more information or how to launch [Spot Instances][3] with Terraform. ``` resource "aws_launch_configuration" "as_conf" { - image_id = "ami-1234" - instance_type = "m1.small" + image_id = "ami-408c7f28" + instance_type = "t1.micro" spot_price = "0.001" lifecycle { create_before_destroy = true @@ -77,10 +77,6 @@ resource "aws_launch_configuration" "as_conf" { resource "aws_autoscaling_group" "bar" { name = "terraform-asg-example" launch_configuration = "${aws_launch_configuration.as_conf.name}" - - lifecycle { - create_before_destroy = true - } } ``` @@ -115,7 +111,7 @@ The following arguments are supported: Each of the `*_block_device` attributes controls a portion of the AWS Launch Configuration's "Block Device Mapping". 
It's a good idea to familiarize yourself with [AWS's Block Device -Mapping docs](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/block-device-mapping-concepts.html) +Mapping docs](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/block-device-mapping-concepts.html) to understand the implications of using these attributes. The `root_block_device` mapping supports the following: @@ -124,7 +120,7 @@ The `root_block_device` mapping supports the following: or `"io1"`. (Default: `"standard"`). * `volume_size` - (Optional) The size of the volume in gigabytes. * `iops` - (Optional) The amount of provisioned - [IOPS](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-io-characteristics.html). + [IOPS](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-io-characteristics.html). This must be set with a `volume_type` of `"io1"`. * `delete_on_termination` - (Optional) Whether the volume should be destroyed on instance termination (Default: `true`). @@ -140,10 +136,11 @@ Each `ebs_block_device` supports the following: or `"io1"`. (Default: `"standard"`). * `volume_size` - (Optional) The size of the volume in gigabytes. * `iops` - (Optional) The amount of provisioned - [IOPS](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-io-characteristics.html). + [IOPS](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-io-characteristics.html). This must be set with a `volume_type` of `"io1"`. * `delete_on_termination` - (Optional) Whether the volume should be destroyed on instance termination (Default: `true`). +* `encryption` - (Optional) Whether the volume should be encrypted or not. Do not use this option if you are using `snapshot_id` as the encryption flag will be determined by the snapshot. (Default: `false`). Modifying any `ebs_block_device` currently requires resource replacement. @@ -151,12 +148,12 @@ Each `ephemeral_block_device` supports the following: * `device_name` - The name of the block device to mount on the instance. * `virtual_name` - The [Instance Store Device - Name](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html#InstanceStoreDeviceNames) + Name](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html#InstanceStoreDeviceNames) (e.g. `"ephemeral0"`) Each AWS Instance type has a different set of Instance Store block devices available for attachment. AWS [publishes a -list](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html#StorageOnInstanceTypes) +list](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html#StorageOnInstanceTypes) of which ephemeral devices are available on each type. The devices are always identified by the `virtual_name` in the format `"ephemeral{0..N}"`. diff --git a/website/source/docs/providers/aws/r/nat_gateway.html.markdown b/website/source/docs/providers/aws/r/nat_gateway.html.markdown new file mode 100644 index 0000000000..5f831c043d --- /dev/null +++ b/website/source/docs/providers/aws/r/nat_gateway.html.markdown @@ -0,0 +1,51 @@ +--- +layout: "aws" +page_title: "AWS: aws_nat_gateway" +sidebar_current: "docs-aws-resource-nat-gateway" +description: |- + Provides a resource to create a VPC NAT Gateway. +--- + +# aws\_nat\_gateway + +Provides a resource to create a VPC NAT Gateway. 
+ +## Example Usage + +``` +resource "aws_nat_gateway" "gw" { + allocation_id = "${aws_eip.nat.id}" + subnet_id = "${aws_subnet.public.id}" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `allocation_id` - (Required) The Allocation ID of the Elastic IP address for the gateway. +* `subnet_id` - (Required) The Subnet ID of the subnet in which to place the gateway. + +-> **Note:** It's recommended to denote that the NAT Gateway depends on the Internet Gateway for the VPC in which the NAT Gateway's subnet is located. For example: + + resource "aws_internet_gateway" "gw" { + vpc_id = "${aws_vpc.main.id}" + } + + resource "aws_nat_gateway" "gw" { + //other arguments + + depends_on = ["aws_internet_gateway.gw"] + } + + +## Attributes Reference + +The following attributes are exported: + +* `id` - The ID of the NAT Gateway. +* `allocation_id` - The Allocation ID of the Elastic IP address for the gateway. +* `subnet_id` - The Subnet ID of the subnet in which the NAT gateway is placed. +* `network_interface_id` - The ENI ID of the network interface created by the NAT gateway. +* `private_ip` - The private IP address of the NAT Gateway. +* `public_ip` - The public IP address of the NAT Gateway. diff --git a/website/source/docs/providers/aws/r/network_acl_rule.html.markdown b/website/source/docs/providers/aws/r/network_acl_rule.html.markdown new file mode 100644 index 0000000000..e5766756fe --- /dev/null +++ b/website/source/docs/providers/aws/r/network_acl_rule.html.markdown @@ -0,0 +1,53 @@ +--- +layout: "aws" +page_title: "AWS: aws_network_acl_rule" +sidebar_current: "docs-aws-resource-network-acl-rule" +description: |- + Provides a network ACL Rule resource. +--- + +# aws\_network\_acl\_rule + +Creates an entry (a rule) in a network ACL with the specified rule number. + +## Example Usage + +``` +resource "aws_network_acl" "bar" { + vpc_id = "${aws_vpc.foo.id}" +} +resource "aws_network_acl_rule" "bar" { + network_acl_id = "${aws_network_acl.bar.id}" + rule_number = 200 + egress = false + protocol = "tcp" + rule_action = "allow" + cidr_block = "0.0.0.0/0" + from_port = 22 + to_port = 22 +} +``` + +## Argument Reference + +The following arguments are supported: + +* `network_acl_id` - (Required) The ID of the network ACL. +* `rule_number` - (Required) The rule number for the entry (for example, 100). ACL entries are processed in ascending order by rule number. +* `egress` - (Optional, bool) Indicates whether this is an egress rule (rule is applied to traffic leaving the subnet). Default `false`. +* `protocol` - (Required) The protocol. A value of -1 means all protocols. +* `rule_action` - (Required) Indicates whether to allow or deny the traffic that matches the rule. Accepted values: `allow` | `deny` +* `cidr_block` - (Required) The network range to allow or deny, in CIDR notation (for example, 172.16.0.0/24). +* `from_port` - (Optional) The from port to match. +* `to_port` - (Optional) The to port to match. +* `icmp_type` - (Optional) ICMP protocol: The ICMP type. Required if specifying ICMP for the protocol. e.g. -1 +* `icmp_code` - (Optional) ICMP protocol: The ICMP code. Required if specifying ICMP for the protocol. e.g.
-1 + +~> Note: For more information on ICMP types and codes, see here: http://www.nthelp.com/icmp.html + +## Attributes Reference + +The following attributes are exported: + +* `id` - The ID of the network ACL Rule + diff --git a/website/source/docs/providers/aws/r/placement_group.html.markdown b/website/source/docs/providers/aws/r/placement_group.html.markdown index e4ad98df8e..1e7c024fac 100644 --- a/website/source/docs/providers/aws/r/placement_group.html.markdown +++ b/website/source/docs/providers/aws/r/placement_group.html.markdown @@ -9,7 +9,7 @@ description: |- # aws\_placement\_group Provides an EC2 placement group. Read more about placement groups -in [AWS Docs](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html). +in [AWS Docs](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html). ## Example Usage diff --git a/website/source/docs/providers/aws/r/rds_cluster.html.markdown b/website/source/docs/providers/aws/r/rds_cluster.html.markdown index c60e6ef294..cd843a16eb 100644 --- a/website/source/docs/providers/aws/r/rds_cluster.html.markdown +++ b/website/source/docs/providers/aws/r/rds_cluster.html.markdown @@ -15,6 +15,17 @@ database engine. For more information on Amazon Aurora, see [Aurora on Amazon RDS][2] in the Amazon RDS User Guide. +Changes to an RDS Cluster can occur when you manually change a +parameter, such as `port`, and are reflected in the next maintenance +window. Because of this, Terraform may report a difference in its planning +phase because a modification has not yet taken place. You can use the +`apply_immediately` flag to instruct the service to apply the change immediately +(see documentation below). + +~> **Note:** using `apply_immediately` can result in a +brief downtime as the server reboots. See the AWS Docs on [RDS Maintenance][4] +for more information. + ## Example Usage ``` @@ -35,7 +46,7 @@ RDS Cluster Instances do not currently display in the AWS Console. ## Argument Reference For more detailed documentation about each argument, refer to -the [AWS official documentation](http://docs.aws.amazon.com/AmazonRDS/latest/CommandLineReference/CLIReference-cmd-ModifyDBInstance.html). +the [AWS official documentation](https://docs.aws.amazon.com/AmazonRDS/latest/CommandLineReference/CLIReference-cmd-ModifyDBInstance.html). The following arguments are supported: @@ -62,7 +73,7 @@ Default: A 30-minute window selected at random from an 8-hour block of time per with the Cluster * `apply_immediately` - (Optional) Specifies whether any cluster modifications are applied immediately, or during the next maintenance window. Default is - `false`. See [Amazon RDS Documentation for more information.](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.DBInstance.Modifying.html) + `false`. See [Amazon RDS Documentation for more information.](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.DBInstance.Modifying.html) * `db_subnet_group_name` - (Optional) A DB subnet group to associate with this DB instance.
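A minimal sketch of the `apply_immediately` behavior described above (the identifier and credentials are placeholders); without the flag, a change such as `port` would wait for the next maintenance window:

```
resource "aws_rds_cluster" "default" {
  cluster_identifier = "aurora-cluster-demo"
  master_username    = "foo"
  master_password    = "mustbeeightchars"
  port               = 3306
  apply_immediately  = true   # apply the modification now instead of at the maintenance window
}
```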
## Attributes Reference @@ -89,7 +100,8 @@ The following attributes are exported: * `storage_encrypted` - Specifies whether the DB instance is encrypted * `preferred_backup_window` - The daily time range during which the backups happen -[1]: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Replication.html +[1]: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Replication.html -[2]: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Aurora.html +[2]: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Aurora.html [3]: /docs/providers/aws/r/rds_cluster_instance.html +[4]: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.Maintenance.html diff --git a/website/source/docs/providers/aws/r/rds_cluster_instance.html.markdown b/website/source/docs/providers/aws/r/rds_cluster_instance.html.markdown index 893c96d889..2580ff71cc 100644 --- a/website/source/docs/providers/aws/r/rds_cluster_instance.html.markdown +++ b/website/source/docs/providers/aws/r/rds_cluster_instance.html.markdown @@ -43,7 +43,7 @@ resource "aws_rds_cluster" "default" { ## Argument Reference For more detailed documentation about each argument, refer to -the [AWS official documentation](http://docs.aws.amazon.com/AmazonRDS/latest/CommandLineReference/CLIReference-cmd-ModifyDBInstance.html). +the [AWS official documentation](https://docs.aws.amazon.com/AmazonRDS/latest/CommandLineReference/CLIReference-cmd-ModifyDBInstance.html). The following arguments are supported: @@ -86,8 +86,8 @@ this instance is a read replica * `port` - The database port * `status` - The RDS instance status -[2]: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Aurora.html +[2]: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Aurora.html [3]: /docs/providers/aws/r/rds_cluster.html -[4]: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Aurora.Managing.html +[4]: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Aurora.Managing.html [5]: /docs/configuration/resources.html#count -[6]: http://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBInstance.html +[6]: https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBInstance.html diff --git a/website/source/docs/providers/aws/r/redshift_cluster.html.markdown b/website/source/docs/providers/aws/r/redshift_cluster.html.markdown new file mode 100644 index 0000000000..ac04e6e75d --- /dev/null +++ b/website/source/docs/providers/aws/r/redshift_cluster.html.markdown @@ -0,0 +1,82 @@ +--- +layout: "aws" +page_title: "AWS: aws_redshift_cluster" +sidebar_current: "docs-aws-resource-redshift-cluster" +--- + +# aws\_redshift\_cluster + +Provides a Redshift Cluster Resource. + +## Example Usage + +``` +resource "aws_redshift_cluster" "default" { + cluster_identifier = "tf-redshift-cluster" + database_name = "mydb" + master_username = "foo" + master_password = "Mustbe8characters" + node_type = "dc1.large" + cluster_type = "single-node" +} +``` + +## Argument Reference + +For more detailed documentation about each argument, refer to +the [AWS official documentation](http://docs.aws.amazon.com/cli/latest/reference/redshift/index.html#cli-aws-redshift). + +The following arguments are supported: + +* `cluster_identifier` - (Required) The Cluster Identifier. Must be a lower case +string. +* `database_name` - (Optional) The name of the first database to be created when the cluster is created. + If you do not provide a name, Amazon Redshift will create a default database called `dev`.
+* `cluster_type` - (Required) The type of the cluster. Valid values are `multi-node` and `single-node` +* `node_type` - (Required) The node type to be provisioned for the cluster. +* `master_password` - (Required) Password for the master DB user. Note that this may + show up in logs, and it will be stored in the state file +* `master_username` - (Required) Username for the master DB user +* `cluster_security_groups` - (Optional) A list of security groups to be associated with this cluster. +* `vpc_security_group_ids` - (Optional) A list of Virtual Private Cloud (VPC) security groups to be associated with the cluster. +* `cluster_subnet_group_name` - (Optional) The name of a cluster subnet group to be associated with this cluster. If this parameter is not provided, the resulting cluster will be deployed outside of a virtual private cloud (VPC). +* `availability_zone` - (Optional) The EC2 Availability Zone (AZ) in which you want Amazon Redshift to provision the cluster. For example, if you have several EC2 instances running in a specific Availability Zone, then you might want the cluster to be provisioned in the same zone in order to decrease network latency. +* `preferred_maintenance_window` - (Optional) The weekly time range (in UTC) during which automated cluster maintenance can occur. + Format: ddd:hh24:mi-ddd:hh24:mi +* `cluster_parameter_group_name` - (Optional) The name of the parameter group to be associated with this cluster. +* `automated_snapshot_retention_period` - (Optional) The number of days that automated snapshots are retained. If the value is 0, automated snapshots are disabled. Even if automated snapshots are disabled, you can still create manual snapshots when you want with create-cluster-snapshot. +* `port` - (Optional) The port number on which the cluster accepts incoming connections. + The cluster is accessible only via the JDBC and ODBC connection strings. Part of the connection string requires the port on which the cluster will listen for incoming connections. Default port is 5439. +* `cluster_version` - (Optional) The version of the Amazon Redshift engine software that you want to deploy on the cluster. + The version selected runs on all the nodes in the cluster. +* `allow_version_upgrade` - (Optional) If true, major version upgrades can be applied during the maintenance window to the Amazon Redshift engine that is running on the cluster. Default is true. +* `number_of_nodes` - (Optional) The number of compute nodes in the cluster. This parameter is required when the ClusterType parameter is specified as multi-node. Default is 1. +* `publicly_accessible` - (Optional) If true, the cluster can be accessed from a public network. +* `encrypted` - (Optional) If true, the data in the cluster is encrypted at rest. +* `elastic_ip` - (Optional) The Elastic IP (EIP) address for the cluster. +* `skip_final_snapshot` - (Optional) Determines whether a final snapshot of the cluster is created before Amazon Redshift deletes the cluster. If true, a final cluster snapshot is not created. If false, a final cluster snapshot is created before the cluster is deleted. Default is false. +* `final_snapshot_identifier` - (Optional) The identifier of the final snapshot that is to be created immediately before deleting the cluster. If this parameter is provided, `skip_final_snapshot` must be false. + +## Attributes Reference + +The following attributes are exported: + +* `id` - The Redshift Cluster ID.
+* `cluster_identifier` - The Cluster Identifier +* `cluster_type` - The cluster type +* `node_type` - The type of nodes in the cluster +* `database_name` - The name of the default database in the Cluster +* `availability_zone` - The availability zone of the Cluster +* `automated_snapshot_retention_period` - The backup retention period +* `preferred_maintenance_window` - The maintenance window +* `endpoint` - The connection endpoint +* `encrypted` - Whether the data in the cluster is encrypted +* `cluster_security_groups` - The security groups associated with the cluster +* `vpc_security_group_ids` - The VPC security group IDs associated with the cluster +* `port` - The port the cluster responds on +* `cluster_version` - The version of Redshift engine software +* `cluster_parameter_group_name` - The name of the parameter group to be associated with this cluster +* `cluster_subnet_group_name` - The name of a cluster subnet group to be associated with this cluster +* `cluster_public_key` - The public key for the cluster +* `cluster_revision_number` - The specific revision number of the database in the cluster + diff --git a/website/source/docs/providers/aws/r/redshift_parameter_group.html.markdown b/website/source/docs/providers/aws/r/redshift_parameter_group.html.markdown new file mode 100644 index 0000000000..d7a869520d --- /dev/null +++ b/website/source/docs/providers/aws/r/redshift_parameter_group.html.markdown @@ -0,0 +1,53 @@ +--- +layout: "aws" +page_title: "AWS: aws_redshift_parameter_group" +sidebar_current: "docs-aws-resource-redshift-parameter-group" +--- + +# aws\_redshift\_parameter\_group + +Provides a Redshift Cluster parameter group resource. + +## Example Usage + +``` +resource "aws_redshift_parameter_group" "bar" { + name = "parameter-group-test-terraform" + family = "redshift-1.0" + description = "Test parameter group for terraform" + parameter { + name = "require_ssl" + value = "true" + } + parameter { + name = "query_group" + value = "example" + } + parameter { + name = "enable_user_activity_logging" + value = "true" + } +} +``` + +## Argument Reference + +The following arguments are supported: + +* `name` - (Required) The name of the Redshift parameter group. +* `family` - (Required) The family of the Redshift parameter group. +* `description` - (Required) The description of the Redshift parameter group. +* `parameter` - (Optional) A list of Redshift parameters to apply. + +Parameter blocks support the following: + +* `name` - (Required) The name of the Redshift parameter. +* `value` - (Required) The value of the Redshift parameter. + +You can read more about the parameters that Redshift supports in the [documentation](https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-parameter-groups.html). + +## Attributes Reference + +The following attributes are exported: + +* `id` - The Redshift parameter group name. diff --git a/website/source/docs/providers/aws/r/redshift_security_group.html.markdown b/website/source/docs/providers/aws/r/redshift_security_group.html.markdown new file mode 100644 index 0000000000..ebdcc92c7e --- /dev/null +++ b/website/source/docs/providers/aws/r/redshift_security_group.html.markdown @@ -0,0 +1,46 @@ +--- +layout: "aws" +page_title: "AWS: aws_redshift_security_group" +sidebar_current: "docs-aws-resource-redshift-security-group" +description: |- + Provides a Redshift security group resource. +--- + +# aws\_redshift\_security\_group + +Creates a new Amazon Redshift security group.
You use security groups to control access to non-VPC clusters. + +## Example Usage + +``` +resource "aws_redshift_security_group" "default" { + name = "redshift_sg" + description = "Redshift Example security group" + + ingress { + cidr = "10.0.0.0/24" + } +} +``` + +## Argument Reference + +The following arguments are supported: + +* `name` - (Required) The name of the Redshift security group. +* `description` - (Required) The description of the Redshift security group. +* `ingress` - (Optional) A list of ingress rules. + +Ingress blocks support the following: + +* `cidr` - The CIDR block to accept. +* `security_group_name` - The name of the security group to authorize. +* `security_group_owner_id` - The owner ID of the security group provided + by `security_group_name`. + +## Attributes Reference + +The following attributes are exported: + +* `id` - The Redshift security group ID. + diff --git a/website/source/docs/providers/aws/r/redshift_subnet_group.html.markdown b/website/source/docs/providers/aws/r/redshift_subnet_group.html.markdown new file mode 100644 index 0000000000..6354c32baa --- /dev/null +++ b/website/source/docs/providers/aws/r/redshift_subnet_group.html.markdown @@ -0,0 +1,59 @@ +--- +layout: "aws" +page_title: "AWS: aws_redshift_subnet_group" +sidebar_current: "docs-aws-resource-redshift-subnet-group" +description: |- + Provides a Redshift Subnet Group resource. +--- + +# aws\_redshift\_subnet\_group + +Creates a new Amazon Redshift subnet group. You must provide a list of one or more subnets in your existing Amazon Virtual Private Cloud (Amazon VPC) when creating an Amazon Redshift subnet group. + +## Example Usage + +``` +resource "aws_vpc" "foo" { + cidr_block = "10.1.0.0/16" +} + +resource "aws_subnet" "foo" { + cidr_block = "10.1.1.0/24" + availability_zone = "us-west-2a" + vpc_id = "${aws_vpc.foo.id}" + tags { + Name = "tf-dbsubnet-test-1" + } +} + +resource "aws_subnet" "bar" { + cidr_block = "10.1.2.0/24" + availability_zone = "us-west-2b" + vpc_id = "${aws_vpc.foo.id}" + tags { + Name = "tf-dbsubnet-test-2" + } +} + +resource "aws_redshift_subnet_group" "foo" { + name = "foo" + description = "foo description" + subnet_ids = ["${aws_subnet.foo.id}", "${aws_subnet.bar.id}"] +} +``` + +## Argument Reference + +The following arguments are supported: + +* `name` - (Required) The name of the Redshift Subnet group. +* `description` - (Required) The description of the Redshift Subnet group. +* `subnet_ids` - (Required) An array of VPC subnet IDs. + +## Attributes Reference + +The following attributes are exported: + +* `id` - The Redshift Subnet group ID. + diff --git a/website/source/docs/providers/aws/r/route.html.markdown b/website/source/docs/providers/aws/r/route.html.markdown index 3606555e6f..299e526fd1 100644 --- a/website/source/docs/providers/aws/r/route.html.markdown +++ b/website/source/docs/providers/aws/r/route.html.markdown @@ -35,12 +35,14 @@ The following arguments are supported: * `destination_cidr_block` - (Required) The destination CIDR block. * `vpc_peering_connection_id` - (Optional) An ID of a VPC peering connection. * `gateway_id` - (Optional) An ID of a VPC internet gateway or a virtual private gateway. +* `nat_gateway_id` - (Optional) An ID of a VPC NAT gateway. * `instance_id` - (Optional) An ID of a NAT instance. * `network_interface_id` - (Optional) An ID of a network interface. -Each route must contain either a `gateway_id`, an `instance_id` or a `vpc_peering_connection_id` -or a `network_interface_id`.
Note that the default route, mapping the VPC's CIDR block to "local", -is created implicitly and cannot be specified. +Each route must contain one of a `gateway_id`, a `nat_gateway_id`, an +`instance_id`, a `vpc_peering_connection_id`, or a `network_interface_id`. +Note that the default route, mapping the VPC's CIDR block to "local", is +created implicitly and cannot be specified. ## Attributes Reference @@ -53,5 +55,6 @@ will be exported as an attribute once the resource is created. * `destination_cidr_block` - The destination CIDR block. * `vpc_peering_connection_id` - An ID of a VPC peering connection. * `gateway_id` - An ID of a VPC internet gateway or a virtual private gateway. +* `nat_gateway_id` - An ID of a VPC NAT gateway. * `instance_id` - An ID of a NAT instance. * `network_interface_id` - An ID of a network interface. diff --git a/website/source/docs/providers/aws/r/route53_delegation_set.html.markdown b/website/source/docs/providers/aws/r/route53_delegation_set.html.markdown index 907000077b..cf6ddb59e9 100644 --- a/website/source/docs/providers/aws/r/route53_delegation_set.html.markdown +++ b/website/source/docs/providers/aws/r/route53_delegation_set.html.markdown @@ -8,7 +8,7 @@ description: |- # aws\_route53\_delegation_set -Provides a [Route53 Delegation Set](http://docs.aws.amazon.com/Route53/latest/APIReference/actions-on-reusable-delegation-sets.html) resource. +Provides a [Route53 Delegation Set](https://docs.aws.amazon.com/Route53/latest/APIReference/actions-on-reusable-delegation-sets.html) resource. ## Example Usage diff --git a/website/source/docs/providers/aws/r/route53_health_check.html.markdown b/website/source/docs/providers/aws/r/route53_health_check.html.markdown index 07f8dc751a..357972eea1 100644 --- a/website/source/docs/providers/aws/r/route53_health_check.html.markdown +++ b/website/source/docs/providers/aws/r/route53_health_check.html.markdown @@ -12,7 +12,7 @@ Provides a Route53 health check. ## Example Usage ``` -resource "aws_route53_health_check" "foo" { +resource "aws_route53_health_check" "child1" { fqdn = "foobar.terraform.com" port = 80 type = "HTTP" @@ -24,6 +24,16 @@ Name = "tf-test-health-check" } } + +resource "aws_route53_health_check" "foo" { + type = "CALCULATED" + child_health_threshold = 1 + child_healthchecks = ["${aws_route53_health_check.child1.id}"] + + tags = { + Name = "tf-test-calculated-health-check" + } +} ``` ## Argument Reference @@ -32,11 +42,17 @@ The following arguments are supported: * `fqdn` - (Optional) The fully qualified domain name of the endpoint to be checked. * `ip_address` - (Optional) The IP address of the endpoint to be checked. +* `port` - (Optional) The port of the endpoint to be checked. +* `type` - (Required) The protocol to use when performing health checks. Valid values are `HTTP`, `HTTPS`, `HTTP_STR_MATCH`, `HTTPS_STR_MATCH`, `TCP` and `CALCULATED`. * `failure_threshold` - (Required) The number of consecutive health checks that an endpoint must pass or fail. * `request_interval` - (Required) The number of seconds between the time that Amazon Route 53 gets a response from your endpoint and the time that it sends the next health-check request. * `resource_path` - (Optional) The path that you want Amazon Route 53 to request when performing health checks. -* `search_string` - (Optional) String searched in respoonse body for check to considered healthy.
+* `search_string` - (Optional) String searched in the first 5120 bytes of the response body for the check to be considered healthy. +* `measure_latency` - (Optional) A Boolean value that indicates whether you want Route 53 to measure the latency between health checkers in multiple AWS regions and your endpoint and to display CloudWatch latency graphs in the Route 53 console. +* `invert_healthcheck` - (Optional) A boolean value that indicates whether the status of the health check should be inverted. For example, if a health check is healthy but Inverted is True, then Route 53 considers the health check to be unhealthy. +* `child_healthchecks` - (Optional) For a specified parent health check, a list of HealthCheckId values for the associated child health checks. +* `child_health_threshold` - (Optional) The minimum number of child health checks that must be healthy for Route 53 to consider the parent health check to be healthy. Valid values are integers between 0 and 256, inclusive. * `tags` - (Optional) A mapping of tags to assign to the health check. -Exactly one of `fqdn` or `ip_address` must be specified. +At least one of `fqdn` or `ip_address` must be specified. diff --git a/website/source/docs/providers/aws/r/route53_record.html.markdown b/website/source/docs/providers/aws/r/route53_record.html.markdown index e7455ba13f..e099eb9cd7 100644 --- a/website/source/docs/providers/aws/r/route53_record.html.markdown +++ b/website/source/docs/providers/aws/r/route53_record.html.markdown @@ -25,7 +25,7 @@ resource "aws_route53_record" "www" { ``` ### Weighted routing policy -See [AWS Route53 Developer Guide](http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html#routing-policy-weighted) for details. +See [AWS Route53 Developer Guide](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html#routing-policy-weighted) for details. ``` resource "aws_route53_record" "www-dev" { @@ -50,10 +50,10 @@ resource "aws_route53_record" "www-live" { ``` ### Alias record -See [related part of AWS Route53 Developer Guide](http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-choosing-alias-non-alias.html) +See [related part of AWS Route53 Developer Guide](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-choosing-alias-non-alias.html) to understand differences between alias and non-alias records. -TTL for all alias records is [60 seconds](http://aws.amazon.com/route53/faqs/#dns_failover_do_i_need_to_adjust), +TTL for all alias records is [60 seconds](https://aws.amazon.com/route53/faqs/#dns_failover_do_i_need_to_adjust), you cannot change this, therefore `ttl` has to be omitted in alias records. ``` @@ -99,15 +99,21 @@ record from one another. Required for each weighted record. * `alias` - (Optional) An alias block. Conflicts with `ttl` & `records`. Alias record documented below. +~> **Note:** The `weight` attribute uses a special sentinel value of `-1` for a +default in Terraform. This allows Terraform to distinguish between a `0` value +and an empty value in the configuration (none specified). As a result, a +`weight` of `-1` will be present in the statefile if `weight` is omitted in the +configuration. + Exactly one of `records` or `alias` must be specified: this determines whether it's an alias record. Alias records support the following: * `name` - (Required) DNS domain name for a CloudFront distribution, S3 bucket, ELB, or another resource record set in this hosted zone.
* `zone_id` - (Required) Hosted zone ID for a CloudFront distribution, S3 bucket, ELB, or Route 53 hosted zone. See [`resource_elb.zone_id`](/docs/providers/aws/r/elb.html#zone_id) for example. -* `evaluate_target_health` - (Required) Set to `true` if you want Route 53 to determine whether to respond to DNS queries using this resource record set by checking the health of the resource record set. Some resources have special requirements, see [related part of documentation](http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-values.html#rrsets-values-alias-evaluate-target-health). +* `evaluate_target_health` - (Required) Set to `true` if you want Route 53 to determine whether to respond to DNS queries using this resource record set by checking the health of the resource record set. Some resources have special requirements, see [related part of documentation](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-values.html#rrsets-values-alias-evaluate-target-health). ## Attributes Reference -* `fqdn` - [FQDN](http://en.wikipedia.org/wiki/Fully_qualified_domain_name) built using the zone domain and `name` +* `fqdn` - [FQDN](https://en.wikipedia.org/wiki/Fully_qualified_domain_name) built using the zone domain and `name` diff --git a/website/source/docs/providers/aws/r/route53_zone.html.markdown b/website/source/docs/providers/aws/r/route53_zone.html.markdown index 2533a76c27..ed8ad5416e 100644 --- a/website/source/docs/providers/aws/r/route53_zone.html.markdown +++ b/website/source/docs/providers/aws/r/route53_zone.html.markdown @@ -66,4 +66,4 @@ The following attributes are exported: * `zone_id` - The Hosted Zone ID. This can be referenced by zone records. * `name_servers` - A list of name servers in associated (or default) delegation set. - Find more about delegation sets in [AWS docs](http://docs.aws.amazon.com/Route53/latest/APIReference/actions-on-reusable-delegation-sets.html). + Find more about delegation sets in [AWS docs](https://docs.aws.amazon.com/Route53/latest/APIReference/actions-on-reusable-delegation-sets.html). diff --git a/website/source/docs/providers/aws/r/route_table.html.markdown b/website/source/docs/providers/aws/r/route_table.html.markdown index e751b71933..0b9c036c1c 100644 --- a/website/source/docs/providers/aws/r/route_table.html.markdown +++ b/website/source/docs/providers/aws/r/route_table.html.markdown @@ -45,13 +45,14 @@ Each route supports the following: * `cidr_block` - (Required) The CIDR block of the route. * `gateway_id` - (Optional) The Internet Gateway ID. +* `nat_gateway_id` - (Optional) The NAT Gateway ID. * `instance_id` - (Optional) The EC2 instance ID. * `vpc_peering_connection_id` - (Optional) The VPC Peering ID. * `network_interface_id` - (Optional) The ID of the elastic network interface (eni) to use. -Each route must contain either a `gateway_id`, an `instance_id` or a `vpc_peering_connection_id` -or a `network_interface_id`. Note that the default route, mapping the VPC's CIDR block to "local", -is created implicitly and cannot be specified. +Each route must contain one of a `gateway_id`, an `instance_id`, a `nat_gateway_id`, a +`vpc_peering_connection_id`, or a `network_interface_id`. Note that the default route, mapping +the VPC's CIDR block to "local", is created implicitly and cannot be specified. A NAT +gateway route is sketched below.
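+For example, a private route table that sends all outbound traffic through a NAT
+gateway might look like the following minimal sketch (the `aws_vpc.main` and
+`aws_nat_gateway.nat` resources are assumed to be defined elsewhere in the
+configuration):
+
+```
+# Route all outbound traffic from associated subnets through the NAT gateway.
+resource "aws_route_table" "private" {
+  vpc_id = "${aws_vpc.main.id}"
+
+  route {
+    cidr_block     = "0.0.0.0/0"
+    nat_gateway_id = "${aws_nat_gateway.nat.id}"
+  }
+}
+```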
## Attributes Reference diff --git a/website/source/docs/providers/aws/r/s3_bucket.html.markdown b/website/source/docs/providers/aws/r/s3_bucket.html.markdown index 5ef365b7b3..1e923c462f 100644 --- a/website/source/docs/providers/aws/r/s3_bucket.html.markdown +++ b/website/source/docs/providers/aws/r/s3_bucket.html.markdown @@ -70,27 +70,45 @@ resource "aws_s3_bucket" "b" { } ``` +### Enable Logging + +``` +resource "aws_s3_bucket" "log_bucket" { + bucket = "my_tf_log_bucket" + acl = "log-delivery-write" +} +resource "aws_s3_bucket" "b" { + bucket = "my_tf_test_bucket" + acl = "private" + logging { + target_bucket = "${aws_s3_bucket.log_bucket.id}" + target_prefix = "log/" + } +} +``` + ## Argument Reference The following arguments are supported: * `bucket` - (Required) The name of the bucket. -* `acl` - (Optional) The [canned ACL](http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl) to apply. Defaults to "private". -* `policy` - (Optional) A valid [bucket policy](http://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html) JSON document. Note that if the policy document is not specific enough (but still valid), Terraform may view the policy as constantly changing in a `terraform plan`. In this case, please make sure you use the verbose/specific version of the policy. +* `acl` - (Optional) The [canned ACL](https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl) to apply. Defaults to "private". +* `policy` - (Optional) A valid [bucket policy](https://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html) JSON document. Note that if the policy document is not specific enough (but still valid), Terraform may view the policy as constantly changing in a `terraform plan`. In this case, please make sure you use the verbose/specific version of the policy. * `tags` - (Optional) A mapping of tags to assign to the bucket. * `force_destroy` - (Optional, Default: false) A boolean that indicates all objects should be deleted from the bucket so that the bucket can be destroyed without error. These objects are *not* recoverable. * `website` - (Optional) A website object (documented below). -* `cors_rule` - (Optional) A rule of [Cross-Origin Resource Sharing](http://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html) (documented below). -* `versioning` - (Optional) A state of [versioning](http://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html) (documented below) +* `cors_rule` - (Optional) A rule of [Cross-Origin Resource Sharing](https://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html) (documented below). +* `versioning` - (Optional) A state of [versioning](https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html) (documented below). +* `logging` - (Optional) The settings for [bucket logging](https://docs.aws.amazon.com/AmazonS3/latest/UG/ManagingBucketLogging.html) (documented below). -The website object supports the following: +The `website` object supports the following: * `index_document` - (Required, unless using `redirect_all_requests_to`) Amazon S3 returns this index document when requests are made to the root domain or any of the subfolders. * `error_document` - (Optional) An absolute path to the document to return in case of a 4XX error. -* `redirect_all_requests_to` - (Optional) A hostname to redirect all website requests for this bucket to. +* `redirect_all_requests_to` - (Optional) A hostname to redirect all website requests for this bucket to.
Hostname can optionally be prefixed with a protocol (`http://` or `https://`) to use when redirecting requests. The default is the protocol that is used in the original request. -The CORS supports the following: +The `cors_rule` object supports the following: * `allowed_headers` (Optional) Specifies which headers are allowed. * `allowed_methods` (Required) Specifies which methods are allowed. Can be `GET`, `PUT`, `POST`, `DELETE` or `HEAD`. @@ -98,17 +116,22 @@ The CORS supports the following: * `expose_headers` (Optional) Specifies expose header in the response. * `max_age_seconds` (Optional) Specifies time in seconds that browser can cache the response for a preflight request. -The versioning supports the following: +The `versioning` object supports the following: * `enabled` - (Optional) Enable versioning. Once you version-enable a bucket, it can never return to an unversioned state. You can, however, suspend versioning on that bucket. +The `logging` object supports the following: + +* `target_bucket` - (Required) The name of the bucket that will receive the log objects. +* `target_prefix` - (Optional) A key prefix for log objects. + ## Attributes Reference The following attributes are exported: * `id` - The name of the bucket. * `arn` - The ARN of the bucket. Will be of format `arn:aws:s3:::bucketname` -* `hosted_zone_id` - The [Route 53 Hosted Zone ID](http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_website_region_endpoints) for this bucket's region. +* `hosted_zone_id` - The [Route 53 Hosted Zone ID](https://docs.aws.amazon.com/general/latest/gr/rande.html#s3_website_region_endpoints) for this bucket's region. * `region` - The AWS region this bucket resides in. * `website_endpoint` - The website endpoint, if the bucket is configured with a website. If not, this will be an empty string. * `website_domain` - The domain of the website endpoint, if the bucket is configured with a website. If not, this will be an empty string. This is used to create Route 53 alias records. diff --git a/website/source/docs/providers/aws/r/security_group.html.markdown b/website/source/docs/providers/aws/r/security_group.html.markdown index ebd21bc732..860d6a4b9c 100644 --- a/website/source/docs/providers/aws/r/security_group.html.markdown +++ b/website/source/docs/providers/aws/r/security_group.html.markdown @@ -66,7 +66,10 @@ resource "aws_security_group" "allow_all" { The following arguments are supported: -* `name` - (Required) The name of the security group +* `name` - (Optional) The name of the security group. If omitted, Terraform will +assign a random, unique name. +* `name_prefix` - (Optional) Creates a unique name beginning with the specified + prefix. Conflicts with `name`. See the sketch below. * `description` - (Optional) The security group description. Defaults to "Managed by Terraform". Cannot be "". * `ingress` - (Optional) Can be specified multiple times for each ingress rule. Each ingress block supports fields documented below.
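+For illustration, a minimal sketch of a security group that relies on
+`name_prefix` (the `web-` prefix and the HTTPS rule values are illustrative):
+
+```
+# Terraform appends a unique suffix to the "web-" prefix at create time,
+# so several groups can be stamped out from the same configuration.
+resource "aws_security_group" "web" {
+  name_prefix = "web-"
+  description = "Allow inbound HTTPS"
+
+  ingress {
+    from_port   = 443
+    to_port     = 443
+    protocol    = "tcp"
+    cidr_blocks = ["0.0.0.0/0"]
+  }
+}
+```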
diff --git a/website/source/docs/providers/aws/r/sns_topic.html.markdown b/website/source/docs/providers/aws/r/sns_topic.html.markdown index 62a3c23f74..b17d5536fd 100644 --- a/website/source/docs/providers/aws/r/sns_topic.html.markdown +++ b/website/source/docs/providers/aws/r/sns_topic.html.markdown @@ -23,6 +23,7 @@ resource "aws_sns_topic" "user_updates" { The following arguments are supported: * `name` - (Required) The friendly name for the SNS topic +* `display_name` - (Optional) The display name for the SNS topic * `policy` - (Optional) The fully-formed AWS policy as JSON * `delivery_policy` - (Optional) The SNS delivery policy diff --git a/website/source/docs/providers/aws/r/sns_topic_subscription.html.markdown b/website/source/docs/providers/aws/r/sns_topic_subscription.html.markdown index eeb504ae8b..6e13bea718 100644 --- a/website/source/docs/providers/aws/r/sns_topic_subscription.html.markdown +++ b/website/source/docs/providers/aws/r/sns_topic_subscription.html.markdown @@ -49,7 +49,7 @@ resource "aws_sns_topic_subscription" "user_updates_sqs_target" { The following arguments are supported: * `topic_arn` - (Required) The ARN of the SNS topic to subscribe to -* `protocol` - (Required) The protocol to use. The possible values for this are: `sqs`, `http`, `https`, `sms`, or `application`. (`email` is an option but unsupported, see below) +* `protocol` - (Required) The protocol to use. The possible values for this are: `sqs`, `lambda`, or `application`. (`email`, `http`, `https`, and `sms` are options but unsupported; see below) * `endpoint` - (Required) The endpoint to send data to, the contents will vary with the protocol. (see below for more information) * `raw_message_delivery` - (Optional) Boolean indicating whether or not to enable raw message delivery (the original message is directly passed, not wrapped in JSON with the original message in the message property). @@ -57,10 +57,7 @@ The following arguments are supported: Supported SNS protocols include: -* `http` -- delivery of JSON-encoded message via HTTP POST -* `https` -- delivery of JSON-encoded message via HTTPS POST * `lambda` -- delivery of JSON-encoded message to a lambda function -* `sms` -- delivery of message via SMS * `sqs` -- delivery of JSON-encoded message to an Amazon SQS queue * `application` -- delivery of JSON-encoded message to an EndpointArn for a mobile app and device @@ -68,16 +65,18 @@ Unsupported protocols include the following: * `email` -- delivery of message via SMTP * `email-json` -- delivery of JSON-encoded message via SMTP +* `http` -- delivery via HTTP +* `https` -- delivery via HTTPS +* `sms` -- delivery of message via SMS -These are unsupported because the email address needs to be authorized and does not generate an ARN until the target email address has been validated. This breaks +These are unsupported because the endpoint needs to be authorized and does not +generate an ARN until the target endpoint has been validated. This breaks the Terraform model and as a result they are not currently supported. ### Specifying endpoints Endpoints have different format requirements according to the protocol that is chosen. -* HTTP/HTTPS endpoints will require a URL to POST data to -* SMS endpoints are mobile numbers that are capable of receiving an SMS * SQS endpoints come in the form of the SQS queue's ARN (not the URL of the queue) e.g: `arn:aws:sqs:us-west-2:432981146916:terraform-queue-too` * Application endpoints are also the endpoint ARN for the mobile app and device.
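+As a concrete sketch, a `lambda` subscription pairs the topic ARN with the
+function ARN (the `aws_sns_topic.user_updates` and `aws_lambda_function.example`
+resources are assumed to exist elsewhere in the configuration):
+
+```
+# Deliver JSON-encoded messages from the topic to a Lambda function.
+resource "aws_sns_topic_subscription" "user_updates_lambda_target" {
+  topic_arn = "${aws_sns_topic.user_updates.arn}"
+  protocol  = "lambda"
+  endpoint  = "${aws_lambda_function.example.arn}"
+}
+```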
diff --git a/website/source/docs/providers/aws/r/spot_instance_request.html.markdown b/website/source/docs/providers/aws/r/spot_instance_request.html.markdown index 7f8fd9f0f2..8570464a3b 100644 --- a/website/source/docs/providers/aws/r/spot_instance_request.html.markdown +++ b/website/source/docs/providers/aws/r/spot_instance_request.html.markdown @@ -68,7 +68,7 @@ These attributes are exported, but they are expected to change over time and so should only be used for informational purposes, not for resource dependencies: * `spot_bid_status` - The current [bid - status](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-bid-status.html) + status](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-bid-status.html) of the Spot Instance Request. * `spot_request_state` - The current [request state](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-requests.html#creating-spot-request-status) diff --git a/website/source/docs/providers/aws/r/sqs_queue.html.markdown b/website/source/docs/providers/aws/r/sqs_queue.html.markdown index 62666b188c..3f33b6dfec 100644 --- a/website/source/docs/providers/aws/r/sqs_queue.html.markdown +++ b/website/source/docs/providers/aws/r/sqs_queue.html.markdown @@ -26,13 +26,13 @@ resource "aws_sqs_queue" "terraform_queue" { The following arguments are supported: * `name` - (Required) This is the human-readable name of the queue -* `visibility_timeout_seconds` - (Optional) The visibility timeout for the queue. An integer from 0 to 43200 (12 hours). The default for this attribute is 30. For more information about visibility timeout, see [AWS docs](http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/AboutVT.html). +* `visibility_timeout_seconds` - (Optional) The visibility timeout for the queue. An integer from 0 to 43200 (12 hours). The default for this attribute is 30. For more information about visibility timeout, see [AWS docs](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/AboutVT.html). * `message_retention_seconds` - (Optional) The number of seconds Amazon SQS retains a message. Integer representing seconds, from 60 (1 minute) to 1209600 (14 days). The default for this attribute is 345600 (4 days). * `max_message_size` - (Optional) The limit of how many bytes a message can contain before Amazon SQS rejects it. An integer from 1024 bytes (1 KiB) up to 262144 bytes (256 KiB). The default for this attribute is 262144 (256 KiB). * `delay_seconds` - (Optional) The time in seconds that the delivery of all messages in the queue will be delayed. An integer from 0 to 900 (15 minutes). The default for this attribute is 0 seconds. * `receive_wait_time_seconds` - (Optional) The time for which a ReceiveMessage call will wait for a message to arrive (long polling) before returning. An integer from 0 to 20 (seconds). The default for this attribute is 0, meaning that the call will return immediately. * `policy` - (Optional) The JSON policy for the SQS queue -* `redrive_policy` - (Optional) The JSON policy to set up the Dead Letter Queue, see [AWS docs](http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/SQSDeadLetterQueue.html). +* `redrive_policy` - (Optional) The JSON policy to set up the Dead Letter Queue, see [AWS docs](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/SQSDeadLetterQueue.html).
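+For example, a dead-letter setup wires `redrive_policy` to a second queue's ARN,
+as in this minimal sketch (queue names are illustrative, and the policy is
+passed as an escaped JSON string):
+
+```
+resource "aws_sqs_queue" "terraform_queue_deadletter" {
+  name = "terraform-deadletter-queue"
+}
+
+# Messages received more than 4 times are moved to the dead-letter queue.
+resource "aws_sqs_queue" "terraform_queue" {
+  name           = "terraform-queue"
+  redrive_policy = "{\"deadLetterTargetArn\":\"${aws_sqs_queue.terraform_queue_deadletter.arn}\",\"maxReceiveCount\":4}"
+}
+```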
## Attributes Reference diff --git a/website/source/docs/providers/aws/r/volume_attachment.html.markdown b/website/source/docs/providers/aws/r/volume_attachment.html.markdown index d3421dc8cd..ccc6821903 100644 --- a/website/source/docs/providers/aws/r/volume_attachment.html.markdown +++ b/website/source/docs/providers/aws/r/volume_attachment.html.markdown @@ -54,4 +54,4 @@ as a last resort, as this can result in **data loss**. See * `instance_id` - ID of the Instance * `volume_id` - ID of the Volume -[1]: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-detaching-volume.html +[1]: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-detaching-volume.html diff --git a/website/source/docs/providers/aws/r/vpc.html.markdown b/website/source/docs/providers/aws/r/vpc.html.markdown index 48e56d340d..0e88ea3098 100644 --- a/website/source/docs/providers/aws/r/vpc.html.markdown +++ b/website/source/docs/providers/aws/r/vpc.html.markdown @@ -41,6 +41,7 @@ The following arguments are supported: * `instance_tenancy` - (Optional) A tenancy option for instances launched into the VPC * `enable_dns_support` - (Optional) A boolean flag to enable/disable DNS support in the VPC. Defaults true. * `enable_dns_hostnames` - (Optional) A boolean flag to enable/disable DNS hostnames in the VPC. Defaults false. +* `enable_classiclink` - (Optional) A boolean flag to enable/disable ClassicLink for the VPC. Defaults false. * `tags` - (Optional) A mapping of tags to assign to the resource. ## Attributes Reference @@ -52,6 +53,7 @@ The following attributes are exported: * `instance_tenancy` - Tenancy of instances spin up within VPC. * `enable_dns_support` - Whether or not the VPC has DNS support * `enable_dns_hostnames` - Whether or not the VPC has DNS hostname support +* `enable_classiclink` - Whether or not the VPC has ClassicLink enabled * `main_route_table_id` - The ID of the main route table associated with this VPC. Note that you can change a VPC's main route table by using an [`aws_main_route_table_association`](/docs/providers/aws/r/main_route_table_assoc.html). diff --git a/website/source/docs/providers/aws/r/vpc_dhcp_options.html.markdown b/website/source/docs/providers/aws/r/vpc_dhcp_options.html.markdown index c2e05743f4..a890ebdc75 100644 --- a/website/source/docs/providers/aws/r/vpc_dhcp_options.html.markdown +++ b/website/source/docs/providers/aws/r/vpc_dhcp_options.html.markdown @@ -60,4 +60,4 @@ The following attributes are exported: * `id` - The ID of the DHCP Options Set. You can find more technical documentation about DHCP Options Set in the -official [AWS User Guide](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_DHCP_Options.html). +official [AWS User Guide](https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_DHCP_Options.html). diff --git a/website/source/docs/providers/azure/r/affinity_group.html.markdown b/website/source/docs/providers/azure/r/affinity_group.html.markdown index 906f730707..31b001558d 100644 --- a/website/source/docs/providers/azure/r/affinity_group.html.markdown +++ b/website/source/docs/providers/azure/r/affinity_group.html.markdown @@ -29,7 +29,7 @@ The following arguments are supported: Azure subscription. * `location` - (Required) The location where the affinity group should be created. - For a list of all Azure locations, please consult [this link](http://azure.microsoft.com/en-us/regions/). + For a list of all Azure locations, please consult [this link](https://azure.microsoft.com/en-us/regions/).
* `label` - (Required) A label to be used for tracking purposes. diff --git a/website/source/docs/providers/azure/r/hosted_service.html.markdown b/website/source/docs/providers/azure/r/hosted_service.html.markdown index 56a9463150..04d5ea8171 100644 --- a/website/source/docs/providers/azure/r/hosted_service.html.markdown +++ b/website/source/docs/providers/azure/r/hosted_service.html.markdown @@ -29,7 +29,7 @@ The following arguments are supported: * `name` - (Required) The name of the hosted service. Must be unique on Azure. * `location` - (Required) The location where the hosted service should be created. - For a list of all Azure locations, please consult [this link](http://azure.microsoft.com/en-us/regions/). + For a list of all Azure locations, please consult [this link](https://azure.microsoft.com/en-us/regions/). * `ephemeral_contents` - (Required) A boolean value (true|false), specifying whether all the resources present in the hosted service should be diff --git a/website/source/docs/providers/azure/r/sql_database_server.html.markdown b/website/source/docs/providers/azure/r/sql_database_server.html.markdown index c038731813..0974a3e3df 100644 --- a/website/source/docs/providers/azure/r/sql_database_server.html.markdown +++ b/website/source/docs/providers/azure/r/sql_database_server.html.markdown @@ -31,7 +31,7 @@ The following arguments are supported: creation as it is randomly-generated per server. * `location` - (Required) The location where the database server should be created. - For a list of all Azure locations, please consult [this link](http://azure.microsoft.com/en-us/regions/). + For a list of all Azure locations, please consult [this link](https://azure.microsoft.com/en-us/regions/). * `username` - (Required) The username for the administrator of the database server. diff --git a/website/source/docs/providers/azure/r/storage_service.html.markdown b/website/source/docs/providers/azure/r/storage_service.html.markdown index 213965f999..bdc1cdd697 100644 --- a/website/source/docs/providers/azure/r/storage_service.html.markdown +++ b/website/source/docs/providers/azure/r/storage_service.html.markdown @@ -29,7 +29,7 @@ The following arguments are supported: lowercase-only characters or digits. Must be unique on Azure. * `location` - (Required) The location where the storage service should be created. - For a list of all Azure locations, please consult [this link](http://azure.microsoft.com/en-us/regions/). + For a list of all Azure locations, please consult [this link](https://azure.microsoft.com/en-us/regions/). * `account_type` - (Required) The type of storage account to be created. Available options include `Standard_LRS`, `Standard_ZRS`, `Standard_GRS`, diff --git a/website/source/docs/providers/azurerm/index.html.markdown b/website/source/docs/providers/azurerm/index.html.markdown new file mode 100644 index 0000000000..0655a48433 --- /dev/null +++ b/website/source/docs/providers/azurerm/index.html.markdown @@ -0,0 +1,80 @@ +--- +layout: "azurerm" +page_title: "Provider: Azure Resource Manager" +sidebar_current: "docs-azurerm-index" +description: |- + The Azure Resource Manager provider is used to interact with the many resources supported by Azure, via the ARM API. This supersedes the Azure provider, which interacts with Azure using the Service Management API. The provider needs to be configured with a credentials file, or credentials needed to generate OAuth tokens for the ARM API.
+--- + +# Azure Resource Manager Provider + +The Azure Resource Manager provider is used to interact with the many resources +supported by Azure, via the ARM API. This supersedes the Azure provider, which +interacts with Azure using the Service Management API. The provider needs to be +configured with the credentials needed to generate OAuth tokens for the ARM API. + +Use the navigation to the left to read about the available resources. + +## Example Usage + +``` +# Configure the Azure Resource Manager Provider +provider "azurerm" { + subscription_id = "..." + client_id = "..." + client_secret = "..." + tenant_id = "..." +} + +# Create a resource group +resource "azurerm_resource_group" "production" { + name = "production" + location = "West US" +} + +# Create a virtual network in the web_servers resource group +resource "azurerm_virtual_network" "network" { + name = "productionNetwork" + address_space = ["10.0.0.0/16"] + location = "West US" + resource_group_name = "${azurerm_resource_group.production.name}" + + subnet { + name = "subnet1" + address_prefix = "10.0.1.0/24" + } + + subnet { + name = "subnet2" + address_prefix = "10.0.2.0/24" + } + + subnet { + name = "subnet3" + address_prefix = "10.0.3.0/24" + } +} + +``` + +## Argument Reference + +The following arguments are supported: + +* `subscription_id` - (Optional) The subscription ID to use. It can also + be sourced from the `ARM_SUBSCRIPTION_ID` environment variable. + +* `client_id` - (Optional) The client ID to use. It can also be sourced from + the `ARM_CLIENT_ID` environment variable. + +* `client_secret` - (Optional) The client secret to use. It can also be sourced from + the `ARM_CLIENT_SECRET` environment variable. + +* `tenant_id` - (Optional) The tenant ID to use. It can also be sourced from the + `ARM_TENANT_ID` environment variable. + +## Testing + +Credentials must be provided via the `ARM_SUBSCRIPTION_ID`, `ARM_CLIENT_ID`, +`ARM_CLIENT_SECRET` and `ARM_TENANT_ID` environment variables in order to run +acceptance tests. diff --git a/website/source/docs/providers/azurerm/r/availability_set.html.markdown b/website/source/docs/providers/azurerm/r/availability_set.html.markdown new file mode 100644 index 0000000000..cd60368d07 --- /dev/null +++ b/website/source/docs/providers/azurerm/r/availability_set.html.markdown @@ -0,0 +1,53 @@ +--- +layout: "azurerm" +page_title: "Azure Resource Manager: azurerm_availability_set" +sidebar_current: "docs-azurerm-resource-virtualmachine-availability-set" +description: |- + Create an availability set for virtual machines. +--- + +# azurerm\_availability\_set + +Create an availability set for virtual machines. + +## Example Usage + +``` +resource "azurerm_resource_group" "test" { + name = "resourceGroup1" + location = "West US" +} + +resource "azurerm_availability_set" "test" { + name = "acceptanceTestAvailabilitySet1" + location = "West US" + resource_group_name = "${azurerm_resource_group.test.name}" + + tags { + environment = "Production" + } +} +``` + +## Argument Reference + +The following arguments are supported: + +* `name` - (Required) Specifies the name of the availability set. Changing this forces a + new resource to be created. + +* `resource_group_name` - (Required) The name of the resource group in which to + create the availability set. + +* `location` - (Required) Specifies the supported Azure location where the resource exists. Changing this forces a new resource to be created.
+ +* `platform_update_domain_count` - (Optional) Specifies the number of update domains that are used. Defaults to 5. + +* `platform_fault_domain_count` - (Optional) Specifies the number of fault domains that are used. Defaults to 3. +* `tags` - (Optional) A mapping of tags to assign to the resource. + +## Attributes Reference + +The following attributes are exported: + +* `id` - The Availability Set ID. \ No newline at end of file diff --git a/website/source/docs/providers/azurerm/r/cdn_endpoint.html.markdown b/website/source/docs/providers/azurerm/r/cdn_endpoint.html.markdown new file mode 100644 index 0000000000..9be153d077 --- /dev/null +++ b/website/source/docs/providers/azurerm/r/cdn_endpoint.html.markdown @@ -0,0 +1,87 @@ +--- +layout: "azurerm" +page_title: "Azure Resource Manager: azurerm_cdn_endpoint" +sidebar_current: "docs-azurerm-resource-cdn-endpoint" +description: |- + Create a CDN Endpoint entity. +--- + +# azurerm\_cdn\_endpoint + +A CDN Endpoint is the entity within a CDN Profile containing configuration information regarding caching behaviors and origins. The CDN Endpoint is exposed using the URL format `<endpointname>.azureedge.net` by default, but custom domains can also be created. + +## Example Usage + +``` +resource "azurerm_resource_group" "test" { + name = "acceptanceTestResourceGroup1" + location = "West US" +} +resource "azurerm_cdn_profile" "test" { + name = "acceptanceTestCdnProfile1" + location = "West US" + resource_group_name = "${azurerm_resource_group.test.name}" + sku = "Standard" +} + +resource "azurerm_cdn_endpoint" "test" { + name = "acceptanceTestCdnEndpoint1" + profile_name = "${azurerm_cdn_profile.test.name}" + location = "West US" + resource_group_name = "${azurerm_resource_group.test.name}" + + origin { + name = "acceptanceTestCdnOrigin1" + host_name = "www.example.com" + } +} +``` + +## Argument Reference + +The following arguments are supported: + +* `name` - (Required) Specifies the name of the CDN Endpoint. Changing this forces a + new resource to be created. + +* `resource_group_name` - (Required) The name of the resource group in which to + create the CDN Endpoint. + +* `profile_name` - (Required) The CDN Profile to which to attach the CDN Endpoint. + +* `location` - (Required) Specifies the supported Azure location where the resource exists. Changing this forces a new resource to be created. + +* `origin_host_header` - (Optional) The host header CDN provider will send along with content requests to origins. Defaults to the host name of the origin. + +* `is_http_allowed` - (Optional) Specifies whether HTTP traffic is allowed on the endpoint. Defaults to `true`. + +* `is_https_allowed` - (Optional) Specifies whether HTTPS traffic is allowed on the endpoint. Defaults to `true`. + +* `origin` - (Optional) The set of origins of the CDN endpoint. When multiple origins exist, the first origin will be used as primary and the rest will be used as failover options. +Each `origin` block supports fields documented below. + +* `origin_path` - (Optional) The path used for origin requests. + +* `querystring_caching_behaviour` - (Optional) Sets query string caching behavior. Allowed values are `IgnoreQueryString`, `BypassCaching` and `UseQueryString`. Defaults to `IgnoreQueryString`. + +* `content_types_to_compress` - (Optional) An array of strings that indicates the content types on which compression will be applied. The value for the elements should be MIME types. + +* `is_compression_enabled` - (Optional) Indicates whether compression is to be enabled. Defaults to false. + +* `tags` - (Optional) A mapping of tags to assign to the resource.
+ +The `origin` block supports: + +* `name` - (Required) The name of the origin. This is an arbitrary value. However, this value needs to be unique under the endpoint. + +* `host_name` - (Required) A string that determines the hostname/IP address of the origin server. This string could be a domain name, IPv4 address or IPv6 address. + +* `http_port` - (Optional) The HTTP port of the origin. Defaults to null. When null, 80 will be used for HTTP. + +* `https_port` - (Optional) The HTTPS port of the origin. Defaults to null. When null, 443 will be used for HTTPS. + +## Attributes Reference + +The following attributes are exported: + +* `id` - The CDN Endpoint ID. \ No newline at end of file diff --git a/website/source/docs/providers/azurerm/r/cdn_profile.html.markdown b/website/source/docs/providers/azurerm/r/cdn_profile.html.markdown new file mode 100644 index 0000000000..fb5c010595 --- /dev/null +++ b/website/source/docs/providers/azurerm/r/cdn_profile.html.markdown @@ -0,0 +1,54 @@ +--- +layout: "azurerm" +page_title: "Azure Resource Manager: azurerm_cdn_profile" +sidebar_current: "docs-azurerm-resource-cdn-profile" +description: |- + Create a CDN Profile to create a collection of CDN Endpoints. +--- + +# azurerm\_cdn\_profile + +Create a CDN Profile to create a collection of CDN Endpoints. + +## Example Usage + +``` +resource "azurerm_resource_group" "test" { + name = "resourceGroup1" + location = "West US" +} + +resource "azurerm_cdn_profile" "test" { + name = "acceptanceTestCdnProfile1" + location = "West US" + resource_group_name = "${azurerm_resource_group.test.name}" + sku = "Standard" + + tags { + environment = "Production" + cost_center = "MSFT" + } +} +``` + +## Argument Reference + +The following arguments are supported: + +* `name` - (Required) Specifies the name of the CDN Profile. Changing this forces a + new resource to be created. + +* `resource_group_name` - (Required) The name of the resource group in which to + create the CDN Profile. + +* `location` - (Required) Specifies the supported Azure location where the resource exists. Changing this forces a new resource to be created. + +* `sku` - (Required) The pricing-related information of the current CDN profile. Accepted values are `Standard` or `Premium`. + +* `tags` - (Optional) A mapping of tags to assign to the resource. + +## Attributes Reference + +The following attributes are exported: + +* `id` - The CDN Profile ID. \ No newline at end of file diff --git a/website/source/docs/providers/azurerm/r/local_network_gateway.html.markdown b/website/source/docs/providers/azurerm/r/local_network_gateway.html.markdown new file mode 100644 index 0000000000..1f2bbe0f98 --- /dev/null +++ b/website/source/docs/providers/azurerm/r/local_network_gateway.html.markdown @@ -0,0 +1,48 @@ +--- +layout: "azurerm" +page_title: "Azure Resource Manager: azurerm_local_network_gateway" +sidebar_current: "docs-azurerm-resource-local-network-gateway" +description: |- + Creates a new local network gateway, over which specific connections can be configured. +--- + +# azurerm\_local\_network\_gateway + +Creates a new local network gateway, over which specific connections can be configured.
+ +## Example Usage + +``` +resource "azurerm_local_network_gateway" "home" { + name = "backHome" + resource_group_name = "${azurerm_resource_group.test.name}" + location = "${azurerm_resource_group.test.location}" + gateway_address = "12.13.14.15" + address_space = ["10.0.0.0/16"] +} +``` + +## Argument Reference + +The following arguments are supported: + +* `name` - (Required) The name of the local network gateway. Changing this + forces a new resource to be created. + +* `resource_group_name` - (Required) The name of the resource group in which to + create the local network gateway. + +* `location` - (Required) The location/region where the local network gateway is + created. Changing this forces a new resource to be created. + +* `gateway_address` - (Required) The IP address of the gateway to which to + connect. + +* `address_space` - (Required) The list of string CIDRs representing the + address spaces the gateway exposes. + +## Attributes Reference + +The following attributes are exported: + +* `id` - The local network gateway unique ID within Azure. diff --git a/website/source/docs/providers/azurerm/r/network_interface.html.markdown b/website/source/docs/providers/azurerm/r/network_interface.html.markdown new file mode 100644 index 0000000000..5dfd4fc576 --- /dev/null +++ b/website/source/docs/providers/azurerm/r/network_interface.html.markdown @@ -0,0 +1,92 @@ +--- +layout: "azurerm" +page_title: "Azure Resource Manager: azurerm_network_interface" +sidebar_current: "docs-azurerm-resource-network-interface" +description: |- + Creates a new network interface. +--- + +# azurerm\_network\_interface + +Creates a new network interface. + +## Example Usage + +``` +resource "azurerm_network_interface" "test" { + name = "acceptanceTestNetworkInterface1" + location = "West US" + resource_group_name = "${azurerm_resource_group.test.name}" + + ip_configuration { + name = "testConfiguration1" + subnet_id = "${azurerm_subnet.test.id}" + private_ip_address_allocation = "dynamic" + } + + tags { + environment = "Production" + } +} +``` + +## Argument Reference + +The following arguments are supported: + +* `name` - (Required) The name of the network interface. Changing this forces a + new resource to be created. + +* `resource_group_name` - (Required) The name of the resource group in which to + create the network interface. + +* `location` - (Required) The location/region where the network interface is + created. Changing this forces a new resource to be created. + +* `network_security_group_id` - (Optional) The ID of the Network Security Group to associate with + the network interface. + +* `internal_dns_name_label` - (Optional) Relative DNS name for this NIC used for internal communications between VMs in the same VNet. + +* `dns_servers` - (Optional) List of DNS server IP addresses to use for this NIC, overrides the VNet-level server list. + +* `ip_configuration` - (Optional) Collection of ipConfigurations associated with this NIC. Each `ip_configuration` block supports fields documented below. + +* `tags` - (Optional) A mapping of tags to assign to the resource. + +The `ip_configuration` block supports: + +* `name` - (Required) User-defined name of the IP.
+ +* `subnet_id` - (Required) Reference to a subnet in which this NIC has been created. + +* `private_ip_address` - (Optional) Static IP Address. + +* `private_ip_address_allocation` - (Required) Defines how a private IP address is assigned. Options are Static or Dynamic. + +* `public_ip_address_id` - (Optional) Reference to a Public IP Address to associate with this NIC. + +* `load_balancer_backend_address_pools_ids` - (Optional) List of Load Balancer Backend Address Pool ID references to which this NIC belongs. + +* `load_balancer_inbound_nat_rules_ids` - (Optional) List of Load Balancer Inbound NAT Rule IDs involving this NIC. + +## Attributes Reference + +The following attributes are exported: + +* `id` - The network interface ID. +* `mac_address` - +* `virtual_machine_id` - +* `applied_dns_servers` - +* `internal_fqdn` - diff --git a/website/source/docs/providers/azurerm/r/network_security_group.html.markdown b/website/source/docs/providers/azurerm/r/network_security_group.html.markdown new file mode 100644 index 0000000000..69f9361222 --- /dev/null +++ b/website/source/docs/providers/azurerm/r/network_security_group.html.markdown @@ -0,0 +1,90 @@ +--- +layout: "azurerm" +page_title: "Azure Resource Manager: azurerm_network_security_group" +sidebar_current: "docs-azurerm-resource-network-security-group" +description: |- + Create a network security group that contains a list of network security rules. Network security groups enable inbound or outbound traffic to be enabled or denied. +--- + +# azurerm\_network\_security\_group + +Create a network security group that contains a list of network security rules. + +## Example Usage + +``` +resource "azurerm_resource_group" "test" { + name = "acceptanceTestResourceGroup1" + location = "West US" +} + +resource "azurerm_network_security_group" "test" { + name = "acceptanceTestSecurityGroup1" + location = "West US" + resource_group_name = "${azurerm_resource_group.test.name}" + + security_rule { + name = "test123" + priority = 100 + direction = "Inbound" + access = "Allow" + protocol = "Tcp" + source_port_range = "*" + destination_port_range = "*" + source_address_prefix = "*" + destination_address_prefix = "*" + } + + tags { + environment = "Production" + } +} + +``` + +## Argument Reference + +The following arguments are supported: + +* `name` - (Required) Specifies the name of the network security group. Changing this forces a + new resource to be created. + +* `resource_group_name` - (Required) The name of the resource group in which to + create the network security group. + +* `location` - (Required) Specifies the supported Azure location where the resource exists. Changing this forces a new resource to be created. + +* `security_rule` - (Optional) Can be specified multiple times to define multiple + security rules. Each `security_rule` block supports fields documented below. + +* `tags` - (Optional) A mapping of tags to assign to the resource. + + +The `security_rule` block supports: + +* `name` - (Required) The name of the security rule. + +* `description` - (Optional) A description for this rule. Restricted to 140 characters. + +* `protocol` - (Required) Network protocol this rule applies to. Can be `Tcp`, `Udp` or `*` to match both. + +* `source_port_range` - (Required) Source Port or Range. Integer or range between 0 and 65535 or `*` to match any. + +* `destination_port_range` - (Required) Destination Port or Range. Integer or range between 0 and 65535 or `*` to match any.
+ +* `source_address_prefix` - (Required) CIDR or source IP range or `*` to match any IP. Tags such as `VirtualNetwork`, `AzureLoadBalancer` and `Internet` can also be used. + +* `destination_address_prefix` - (Required) CIDR or destination IP range or `*` to match any IP. Tags such as `VirtualNetwork`, `AzureLoadBalancer` and `Internet` can also be used. + +* `access` - (Required) Specifies whether network traffic is allowed or denied. Possible values are `Allow` and `Deny`. + +* `priority` - (Required) Specifies the priority of the rule. The value can be between 100 and 4096. The priority number must be unique for each rule in the collection. The lower the priority number, the higher the priority of the rule. + +* `direction` - (Required) The direction specifies if the rule will be evaluated on incoming or outgoing traffic. Possible values are `Inbound` and `Outbound`. + + +## Attributes Reference + +The following attributes are exported: + +* `id` - The Network Security Group ID. \ No newline at end of file diff --git a/website/source/docs/providers/azurerm/r/network_security_rule.html.markdown b/website/source/docs/providers/azurerm/r/network_security_rule.html.markdown new file mode 100644 index 0000000000..061175fa40 --- /dev/null +++ b/website/source/docs/providers/azurerm/r/network_security_rule.html.markdown @@ -0,0 +1,75 @@ +--- +layout: "azurerm" +page_title: "Azure Resource Manager: azurerm_network_security_rule" +sidebar_current: "docs-azurerm-resource-network-security-rule" +description: |- + Create a Network Security Rule. +--- + +# azurerm\_network\_security\_rule + +Create a Network Security Rule. + +## Example Usage + +``` +resource "azurerm_resource_group" "test" { + name = "acceptanceTestResourceGroup1" + location = "West US" +} + +resource "azurerm_network_security_group" "test" { + name = "acceptanceTestSecurityGroup1" + location = "West US" + resource_group_name = "${azurerm_resource_group.test.name}" +} + +resource "azurerm_network_security_rule" "test" { + name = "test123" + priority = 100 + direction = "Outbound" + access = "Allow" + protocol = "Tcp" + source_port_range = "*" + destination_port_range = "*" + source_address_prefix = "*" + destination_address_prefix = "*" + resource_group_name = "${azurerm_resource_group.test.name}" + network_security_group_name = "${azurerm_network_security_group.test.name}" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `name` - (Required) The name of the security rule. + +* `resource_group_name` - (Required) The name of the resource group in which to + create the Network Security Rule. + +* `network_security_group_name` - (Required) The name of the Network Security Group to which the rule should be attached. + +* `description` - (Optional) A description for this rule. Restricted to 140 characters. + +* `protocol` - (Required) Network protocol this rule applies to. Can be `Tcp`, `Udp` or `*` to match both. + +* `source_port_range` - (Required) Source Port or Range. Integer or range between 0 and 65535 or `*` to match any. + +* `destination_port_range` - (Required) Destination Port or Range. Integer or range between 0 and 65535 or `*` to match any.
+
+* `access` - (Required) Specifies whether network traffic is allowed or denied. Possible values are "Allow" and "Deny".
+
+* `priority` - (Required) Specifies the priority of the rule. The value can be between 100 and 4096. The priority number must be unique for each rule in the collection. The lower the priority number, the higher the priority of the rule.
+
+* `direction` - (Required) Specifies whether the rule will be evaluated on incoming or outgoing traffic. Possible values are "Inbound" and "Outbound".
+
+## Attributes Reference
+
+The following attributes are exported:
+
+* `id` - The Network Security Rule ID.
\ No newline at end of file
diff --git a/website/source/docs/providers/azurerm/r/public_ip.html.markdown b/website/source/docs/providers/azurerm/r/public_ip.html.markdown
new file mode 100644
index 0000000000..62c1cb1496
--- /dev/null
+++ b/website/source/docs/providers/azurerm/r/public_ip.html.markdown
@@ -0,0 +1,61 @@
+---
+layout: "azurerm"
+page_title: "Azure Resource Manager: azurerm_public_ip"
+sidebar_current: "docs-azurerm-resource-network-public-ip"
+description: |-
+  Create a Public IP Address.
+---
+
+# azurerm\_public\_ip
+
+Create a Public IP Address.
+
+## Example Usage
+
+```
+resource "azurerm_resource_group" "test" {
+    name = "resourceGroup1"
+    location = "West US"
+}
+
+resource "azurerm_public_ip" "test" {
+    name = "acceptanceTestPublicIp1"
+    location = "West US"
+    resource_group_name = "${azurerm_resource_group.test.name}"
+    public_ip_address_allocation = "static"
+
+    tags {
+        environment = "Production"
+    }
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `name` - (Required) Specifies the name of the Public IP resource. Changing this forces a
+  new resource to be created.
+
+* `resource_group_name` - (Required) The name of the resource group in which to
+  create the Public IP.
+
+* `location` - (Required) Specifies the supported Azure location where the resource exists. Changing this forces a new resource to be created.
+
+* `public_ip_address_allocation` - (Required) Defines whether the IP address is static or dynamic. Options are Static or Dynamic.
+
+* `idle_timeout_in_minutes` - (Optional) Specifies the timeout for the TCP idle connection. The value can be set between 4 and 30 minutes.
+
+* `domain_name_label` - (Optional) Label for the Domain Name. Will be used to make up the FQDN. If a domain name label is specified, an A DNS record is created for the public IP in the Microsoft Azure DNS system.
+
+* `reverse_fqdn` - (Optional) A fully qualified domain name that resolves to this public IP address. If the reverseFqdn is specified, then a PTR DNS record is created pointing from the IP address in the in-addr.arpa domain to the reverse FQDN.
+
+* `tags` - (Optional) A mapping of tags to assign to the resource.
+
+## Attributes Reference
+
+The following attributes are exported:
+
+* `id` - The Public IP ID.
+* `ip_address` - The IP address value that was allocated.
+* `fqdn` - Fully qualified domain name of the A DNS record associated with the public IP.
+  This is the concatenation of the domainNameLabel and the regionalized DNS zone.
\ No newline at end of file
diff --git a/website/source/docs/providers/azurerm/r/resource_group.html.markdown b/website/source/docs/providers/azurerm/r/resource_group.html.markdown
new file mode 100644
index 0000000000..0ba13feed3
--- /dev/null
+++ b/website/source/docs/providers/azurerm/r/resource_group.html.markdown
@@ -0,0 +1,42 @@
+---
+layout: "azurerm"
+page_title: "Azure Resource Manager: azurerm_resource_group"
+sidebar_current: "docs-azurerm-resource-resource-group"
+description: |-
+  Creates a new resource group on Azure.
+---
+
+# azurerm\_resource\_group
+
+Creates a new resource group on Azure.
+
+## Example Usage
+
+```
+resource "azurerm_resource_group" "test" {
+    name = "testResourceGroup1"
+    location = "West US"
+
+    tags {
+        environment = "Production"
+    }
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `name` - (Required) The name of the resource group. Must be unique on your
+  Azure subscription.
+
+* `location` - (Required) The location where the resource group should be created.
+  For a list of all Azure locations, please consult [this link](http://azure.microsoft.com/en-us/regions/).
+
+* `tags` - (Optional) A mapping of tags to assign to the resource.
+
+## Attributes Reference
+
+The following attributes are exported:
+
+* `id` - The resource group ID.
diff --git a/website/source/docs/providers/azurerm/r/route.html.markdown b/website/source/docs/providers/azurerm/r/route.html.markdown
new file mode 100644
index 0000000000..82f1ae3783
--- /dev/null
+++ b/website/source/docs/providers/azurerm/r/route.html.markdown
@@ -0,0 +1,61 @@
+---
+layout: "azurerm"
+page_title: "Azure Resource Manager: azurerm_route"
+sidebar_current: "docs-azurerm-resource-network-route"
+description: |-
+  Creates a new Route resource.
+---
+
+# azurerm\_route
+
+Creates a new Route resource.
+
+## Example Usage
+
+```
+resource "azurerm_resource_group" "test" {
+    name = "acceptanceTestResourceGroup1"
+    location = "West US"
+}
+
+resource "azurerm_route_table" "test" {
+    name = "acceptanceTestRouteTable1"
+    location = "West US"
+    resource_group_name = "${azurerm_resource_group.test.name}"
+}
+
+resource "azurerm_route" "test" {
+    name = "acceptanceTestRoute1"
+    resource_group_name = "${azurerm_resource_group.test.name}"
+    route_table_name = "${azurerm_route_table.test.name}"
+
+    address_prefix = "10.1.0.0/16"
+    next_hop_type = "vnetlocal"
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `name` - (Required) The name of the route. Changing this forces a
+  new resource to be created.
+
+* `resource_group_name` - (Required) The name of the resource group in which to
+  create the route.
+
+* `route_table_name` - (Required) The name of the route table within which the route should be created.
+
+* `address_prefix` - (Required) The destination CIDR to which the route applies, such as 10.1.0.0/16.
+
+* `next_hop_type` - (Required) The type of Azure hop the packet should be sent to.
+  Possible values are VirtualNetworkGateway, VnetLocal, Internet, VirtualAppliance and None.
+
+* `next_hop_in_ip_address` - (Optional) Contains the IP address packets should be forwarded to. Next hop values are only allowed in routes where the next hop type is VirtualAppliance (see the example below).
+
+## Attributes Reference
+
+The following attributes are exported:
+
+* `id` - The Route ID.
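+
+For example, a default route that forwards all traffic through a virtual
+appliance might look like the following. This is a minimal sketch building on
+the resources above; the `10.1.1.4` next hop address is a hypothetical
+appliance IP:
+
+```
+resource "azurerm_route" "appliance" {
+    name = "acceptanceTestRoute2"
+    resource_group_name = "${azurerm_resource_group.test.name}"
+    route_table_name = "${azurerm_route_table.test.name}"
+
+    address_prefix = "0.0.0.0/0"
+    next_hop_type = "VirtualAppliance"
+    next_hop_in_ip_address = "10.1.1.4"
+}
+```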
diff --git a/website/source/docs/providers/azurerm/r/route_table.html.markdown b/website/source/docs/providers/azurerm/r/route_table.html.markdown
new file mode 100644
index 0000000000..8dd38a3366
--- /dev/null
+++ b/website/source/docs/providers/azurerm/r/route_table.html.markdown
@@ -0,0 +1,71 @@
+---
+layout: "azurerm"
+page_title: "Azure Resource Manager: azurerm_route_table"
+sidebar_current: "docs-azurerm-resource-network-route-table"
+description: |-
+  Creates a new Route Table resource.
+---
+
+# azurerm\_route\_table
+
+Creates a new Route Table resource.
+
+## Example Usage
+
+```
+resource "azurerm_resource_group" "test" {
+    name = "acceptanceTestResourceGroup1"
+    location = "West US"
+}
+
+resource "azurerm_route_table" "test" {
+    name = "acceptanceTestRouteTable1"
+    location = "West US"
+    resource_group_name = "${azurerm_resource_group.test.name}"
+
+    route {
+        name = "route1"
+        address_prefix = "*"
+        next_hop_type = "internet"
+    }
+
+    tags {
+        environment = "Production"
+    }
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `name` - (Required) The name of the route table. Changing this forces a
+  new resource to be created.
+
+* `resource_group_name` - (Required) The name of the resource group in which to
+  create the route table.
+
+* `location` - (Required) Specifies the supported Azure location where the resource exists. Changing this forces a new resource to be created.
+
+* `route` - (Optional) Can be specified multiple times to define multiple
+  routes. Each `route` block supports fields documented below.
+
+* `tags` - (Optional) A mapping of tags to assign to the resource.
+
+The `route` block supports:
+
+* `name` - (Required) The name of the route.
+
+* `address_prefix` - (Required) The destination CIDR to which the route applies, such as 10.1.0.0/16.
+
+* `next_hop_type` - (Required) The type of Azure hop the packet should be sent to.
+  Possible values are VirtualNetworkGateway, VnetLocal, Internet, VirtualAppliance and None.
+
+* `next_hop_in_ip_address` - (Optional) Contains the IP address packets should be forwarded to. Next hop values are only allowed in routes where the next hop type is VirtualAppliance.
+
+## Attributes Reference
+
+The following attributes are exported:
+
+* `id` - The Route Table ID.
+* `subnets` - The collection of Subnets associated with this route table.
diff --git a/website/source/docs/providers/azurerm/r/storage_account.html.markdown b/website/source/docs/providers/azurerm/r/storage_account.html.markdown
new file mode 100644
index 0000000000..93d756144a
--- /dev/null
+++ b/website/source/docs/providers/azurerm/r/storage_account.html.markdown
@@ -0,0 +1,72 @@
+---
+layout: "azurerm"
+page_title: "Azure Resource Manager: azurerm_storage_account"
+sidebar_current: "docs-azurerm-resource-storage-account"
+description: |-
+  Create an Azure Storage Account.
+---
+
+# azurerm\_storage\_account
+
+Create an Azure Storage Account.
+
+## Example Usage
+
+```
+resource "azurerm_resource_group" "testrg" {
+    name = "resourceGroupName"
+    location = "westus"
+}
+
+resource "azurerm_storage_account" "testsa" {
+    name = "storageaccountname"
+    resource_group_name = "${azurerm_resource_group.testrg.name}"
+
+    location = "westus"
+    account_type = "Standard_GRS"
+
+    tags {
+        environment = "staging"
+    }
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `name` - (Required) Specifies the name of the storage account. Changing this forces a
+  new resource to be created.
+  This must be unique across the entire Azure service,
+  not just within the resource group.
+
+* `resource_group_name` - (Required) The name of the resource group in which to
+  create the storage account. Changing this forces a new resource to be created.
+
+* `location` - (Required) Specifies the supported Azure location where the
+  resource exists. Changing this forces a new resource to be created.
+
+* `account_type` - (Required) Defines the type of storage account to be
+  created. Valid options are `Standard_LRS`, `Standard_ZRS`, `Standard_GRS`,
+  `Standard_RAGRS`, `Premium_LRS`. Changing this is sometimes valid - see the Azure
+  documentation for more information on which types of accounts can be converted
+  into other types.
+
+* `tags` - (Optional) A mapping of tags to assign to the resource.
+
+Note that although the Azure API supports setting custom domain names for
+storage accounts, this is not currently supported by this resource.
+
+## Attributes Reference
+
+The following attributes are exported in addition to the arguments listed above:
+
+* `id` - The storage account Resource ID.
+* `primary_location` - The primary location of the storage account.
+* `secondary_location` - The secondary location of the storage account.
+* `primary_blob_endpoint` - The endpoint URL for blob storage in the primary location.
+* `secondary_blob_endpoint` - The endpoint URL for blob storage in the secondary location.
+* `primary_queue_endpoint` - The endpoint URL for queue storage in the primary location.
+* `secondary_queue_endpoint` - The endpoint URL for queue storage in the secondary location.
+* `primary_table_endpoint` - The endpoint URL for table storage in the primary location.
+* `secondary_table_endpoint` - The endpoint URL for table storage in the secondary location.
+* `primary_file_endpoint` - The endpoint URL for file storage in the primary location.
diff --git a/website/source/docs/providers/azurerm/r/subnet.html.markdown b/website/source/docs/providers/azurerm/r/subnet.html.markdown
new file mode 100644
index 0000000000..b75f1ba95b
--- /dev/null
+++ b/website/source/docs/providers/azurerm/r/subnet.html.markdown
@@ -0,0 +1,61 @@
+---
+layout: "azurerm"
+page_title: "Azure Resource Manager: azurerm_subnet"
+sidebar_current: "docs-azurerm-resource-network-subnet"
+description: |-
+  Creates a new subnet. Subnets represent network segments within the IP space defined by the virtual network.
+---
+
+# azurerm\_subnet
+
+Creates a new subnet. Subnets represent network segments within the IP space
+defined by the virtual network.
+
+## Example Usage
+
+```
+resource "azurerm_resource_group" "test" {
+    name = "acceptanceTestResourceGroup1"
+    location = "West US"
+}
+
+resource "azurerm_virtual_network" "test" {
+    name = "acceptanceTestVirtualNetwork1"
+    address_space = ["10.0.0.0/16"]
+    location = "West US"
+    resource_group_name = "${azurerm_resource_group.test.name}"
+}
+
+resource "azurerm_subnet" "test" {
+    name = "testsubnet"
+    resource_group_name = "${azurerm_resource_group.test.name}"
+    virtual_network_name = "${azurerm_virtual_network.test.name}"
+    address_prefix = "10.0.1.0/24"
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `name` - (Required) The name of the subnet. Changing this forces a
+  new resource to be created.
+
+* `resource_group_name` - (Required) The name of the resource group in which to
+  create the subnet.
+
+* `virtual_network_name` - (Required) The name of the virtual network to which to attach the subnet.
+
+* `address_prefix` - (Required) The address prefix to use for the subnet.
+
+* `network_security_group_id` - (Optional) The ID of the Network Security Group to associate with
+  the subnet.
+
+* `route_table_id` - (Optional) The ID of the Route Table to associate with
+  the subnet.
+
+## Attributes Reference
+
+The following attributes are exported:
+
+* `id` - The subnet ID.
+* `ip_configurations` - The collection of IP Configurations with IPs within this subnet.
diff --git a/website/source/docs/providers/azurerm/r/virtual_network.html.markdown b/website/source/docs/providers/azurerm/r/virtual_network.html.markdown
new file mode 100644
index 0000000000..e9a4eb2ffa
--- /dev/null
+++ b/website/source/docs/providers/azurerm/r/virtual_network.html.markdown
@@ -0,0 +1,82 @@
+---
+layout: "azurerm"
+page_title: "Azure Resource Manager: azurerm_virtual_network"
+sidebar_current: "docs-azurerm-resource-network-virtual-network"
+description: |-
+  Creates a new virtual network including any configured subnets. Each subnet can optionally be configured with a security group to be associated with the subnet.
+---
+
+# azurerm\_virtual\_network
+
+Creates a new virtual network including any configured subnets. Each subnet can
+optionally be configured with a security group to be associated with the subnet.
+
+## Example Usage
+
+```
+resource "azurerm_virtual_network" "test" {
+    name = "virtualNetwork1"
+    resource_group_name = "${azurerm_resource_group.test.name}"
+    address_space = ["10.0.0.0/16"]
+    location = "West US"
+
+    subnet {
+        name = "subnet1"
+        address_prefix = "10.0.1.0/24"
+    }
+
+    subnet {
+        name = "subnet2"
+        address_prefix = "10.0.2.0/24"
+    }
+
+    subnet {
+        name = "subnet3"
+        address_prefix = "10.0.3.0/24"
+    }
+
+    tags {
+        environment = "Production"
+    }
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `name` - (Required) The name of the virtual network. Changing this forces a
+  new resource to be created.
+
+* `resource_group_name` - (Required) The name of the resource group in which to
+  create the virtual network.
+
+* `address_space` - (Required) The address space that is used by the virtual
+  network. You can supply more than one address space. Changing this forces
+  a new resource to be created.
+
+* `location` - (Required) The location/region where the virtual network is
+  created. Changing this forces a new resource to be created.
+
+* `dns_servers` - (Optional) List of names of DNS servers previously registered
+  on Azure.
+
+* `subnet` - (Optional) Can be specified multiple times to define multiple
+  subnets. Each `subnet` block supports fields documented below.
+
+* `tags` - (Optional) A mapping of tags to assign to the resource.
+
+The `subnet` block supports:
+
+* `name` - (Required) The name of the subnet.
+
+* `address_prefix` - (Required) The address prefix to use for the subnet.
+
+* `security_group` - (Optional) The Network Security Group to associate with
+  the subnet.
+
+## Attributes Reference
+
+The following attributes are exported:
+
+* `id` - The virtual network configuration ID.
diff --git a/website/source/docs/providers/chef/index.html.markdown b/website/source/docs/providers/chef/index.html.markdown
new file mode 100644
index 0000000000..57b9e7d387
--- /dev/null
+++ b/website/source/docs/providers/chef/index.html.markdown
@@ -0,0 +1,60 @@
+---
+layout: "chef"
+page_title: "Provider: Chef"
+sidebar_current: "docs-chef-index"
+description: |-
+  Chef is a systems and cloud infrastructure automation framework.
+---
+
+# Chef Provider
+
+[Chef](https://www.chef.io/) is a systems and cloud infrastructure automation
+framework. The Chef provider allows Terraform to manage various resources
+that exist within [Chef Server](http://docs.chef.io/chef_server.html).
+
+Use the navigation to the left to read about the available resources.
+
+## Example Usage
+
+```
+# Configure the Chef provider
+provider "chef" {
+    server_url = "https://api.opscode.com/organizations/example/"
+
+    // You can set up a "Client" within the Chef Server management console.
+    client_name = "terraform"
+    private_key_pem = "${file(\"chef-terraform.pem\")}"
+}
+
+# Create a Chef Environment
+resource "chef_environment" "production" {
+    name = "production"
+}
+
+# Create a Chef Role
+resource "chef_role" "app_server" {
+    name = "app_server"
+    run_list = [
+        "recipe[terraform]"
+    ]
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `server_url` - (Required) The HTTP(S) API URL of the Chef server to use. If
+  the target Chef server supports organizations, use the full URL of the
+  organization you wish to configure. May be provided instead via the
+  ``CHEF_SERVER_URL`` environment variable.
+* `client_name` - (Required) The name of the client account to use when making
+  requests. This must have been already configured on the Chef server.
+  May be provided instead via the ``CHEF_CLIENT_NAME`` environment variable.
+* `private_key_pem` - (Required) The PEM-formatted private key belonging to
+  the configured client. This is issued by the server when a new client object
+  is created. May be provided instead in a file whose path is in the
+  ``CHEF_PRIVATE_KEY_FILE`` environment variable.
+* `allow_unverified_ssl` - (Optional) Boolean indicating whether to make
+  requests to a Chef server whose SSL certificate cannot be verified. Defaults
+  to ``false``.
diff --git a/website/source/docs/providers/chef/r/data_bag.html.markdown b/website/source/docs/providers/chef/r/data_bag.html.markdown
new file mode 100644
index 0000000000..6df60d84f5
--- /dev/null
+++ b/website/source/docs/providers/chef/r/data_bag.html.markdown
@@ -0,0 +1,38 @@
+---
+layout: "chef"
+page_title: "Chef: chef_data_bag"
+sidebar_current: "docs-chef-resource-data-bag"
+description: |-
+  Creates and manages a data bag in Chef Server.
+---
+
+# chef\_data\_bag
+
+A [data bag](http://docs.chef.io/data_bags.html) is a collection of
+configuration objects that are stored as JSON in Chef Server and can be
+retrieved and used in Chef recipes.
+
+This resource creates the data bag itself. Inside each data bag is a collection
+of items which can be created using the ``chef_data_bag_item`` resource (see
+the combined example below).
+
+## Example Usage
+
+```
+resource "chef_data_bag" "example" {
+    name = "example-data-bag"
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `name` - (Required) The unique name to assign to the data bag. This is the
+  name that other server clients will use to find and retrieve data from the
+  data bag.
+
+## Attributes Reference
+
+The following attributes are exported:
+
+* `api_url` - The URL representing this data bag in the Chef server API.
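+
+As an illustration, a data bag and a single item within it might be declared
+together as follows. This is a minimal sketch: the ``chef_data_bag_item``
+resource is documented separately, and the JSON content shown here is
+hypothetical, though data bag items always require a unique `id` attribute.
+
+```
+resource "chef_data_bag" "example" {
+    name = "example-data-bag"
+}
+
+resource "chef_data_bag_item" "example_item" {
+    data_bag_name = "${chef_data_bag.example.name}"
+
+    content_json = <<EOT
+{
+    "id": "example_item",
+    "any_arbitrary_data": true
+}
+EOT
+}
+```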
diff --git a/website/source/docs/providers/chef/r/data_bag_item.html.markdown b/website/source/docs/providers/chef/r/data_bag_item.html.markdown
new file mode 100644
index 0000000000..2265c16e4f
--- /dev/null
+++ b/website/source/docs/providers/chef/r/data_bag_item.html.markdown
@@ -0,0 +1,48 @@
+---
+layout: "chef"
+page_title: "Chef: chef_data_bag_item"
+sidebar_current: "docs-chef-resource-data-bag-item"
+description: |-
+  Creates and manages an object within a data bag in Chef Server.
+---
+
+# chef\_data\_bag\_item
+
+A [data bag](http://docs.chef.io/data_bags.html) is a collection of
+configuration objects that are stored as JSON in Chef Server and can be
+retrieved and used in Chef recipes.
+
+This resource creates objects within an existing data bag. To create the
+data bag itself, use the ``chef_data_bag`` resource.
+
+## Example Usage
+
+```
+resource "chef_data_bag_item" "example" {
+    data_bag_name = "example-data-bag"
+    content_json = <<EOT
+{
+    "id": "example_item",
+    "any_arbitrary_data": true
+}
+EOT
+}
+```
 
 ## Ports
@@ -64,6 +85,19 @@ the following:
 
 * `protocol` - (Optional, string) Protocol that can be used over this port,
   defaults to TCP.
+
+## Extra Hosts
+
+`host_entry` is a block within the configuration that can be repeated to specify
+the extra host mappings for the container. Each `host_entry` block supports
+the following:
+
+* `host` - (Required, string) Hostname to add.
+* `ip` - (Required, string) IP address this hostname should resolve to.
+
+This is equivalent to using the `--add-host` option when using the `run`
+command of the Docker CLI.
+
 
 ## Volumes
@@ -73,12 +107,16 @@ the following:
 
 * `from_container` - (Optional, string) The container where the volume is
   coming from.
-* `container_path` - (Optional, string) The path in the container where the
-  volume will be mounted.
 * `host_path` - (Optional, string) The path on the host where the volume is
   coming from.
+* `volume_name` - (Optional, string) The name of the docker volume which
+  should be mounted.
+* `container_path` - (Optional, string) The path in the container where the
+  volume will be mounted.
 * `read_only` - (Optional, bool) If true, this volume will be readonly.
   Defaults to false.
+
+One of `from_container`, `host_path` or `volume_name` must be set.
 
 ## Attributes Reference
diff --git a/website/source/docs/providers/docker/r/network.html.markdown b/website/source/docs/providers/docker/r/network.html.markdown
new file mode 100644
index 0000000000..77d4d02f17
--- /dev/null
+++ b/website/source/docs/providers/docker/r/network.html.markdown
@@ -0,0 +1,49 @@
+---
+layout: "docker"
+page_title: "Docker: docker_network"
+sidebar_current: "docs-docker-resource-network"
+description: |-
+  Manages a Docker Network.
+---
+
+# docker\_network
+
+Manages a Docker Network. This can be used alongside
+[docker\_container](/docs/providers/docker/r/container.html)
+to create virtual networks within the docker environment.
+
+## Example Usage
+
+```
+# Create a new docker network.
+resource "docker_network" "private_network" {
+    name = "my_network"
+}
+
+# Reference the network elsewhere with ${docker_network.private_network.name}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `name` - (Required, string) The name of the Docker network.
+* `check_duplicate` - (Optional, boolean) Requests the daemon to check for networks with the same name.
+* `driver` - (Optional, string) Name of the network driver to use. Defaults to the `bridge` driver.
+* `options` - (Optional, map of strings) Network specific options to be used by the drivers.
+* `ipam_driver` - (Optional, string) Driver used by the custom IP scheme of the network.
+* `ipam_config` - (Optional, block) Configuration of the custom IP scheme of the network.
+
+The `ipam_config` block supports:
+
+* `subnet` - (Optional, string) The subnet, in CIDR form.
+* `ip_range` - (Optional, string) The range of IPs, in CIDR form, from which to allocate addresses.
+* `gateway` - (Optional, string) The gateway address for the subnet.
+* `aux_address` - (Optional, map of string) Auxiliary addresses used by the network driver.
+
+## Attributes Reference
+
+The following attributes are exported in addition to the above configuration:
+
+* `id` (string) - The ID of the network.
+* `scope` (string) - The scope of the network.
diff --git a/website/source/docs/providers/docker/r/volume.html.markdown b/website/source/docs/providers/docker/r/volume.html.markdown
new file mode 100644
index 0000000000..5b13efc022
--- /dev/null
+++ b/website/source/docs/providers/docker/r/volume.html.markdown
@@ -0,0 +1,38 @@
+---
+layout: "docker"
+page_title: "Docker: docker_volume"
+sidebar_current: "docs-docker-resource-volume"
+description: |-
+  Creates and destroys docker volumes.
+---
+
+# docker\_volume
+
+Creates and destroys a volume in Docker. This can be used alongside
+[docker\_container](/docs/providers/docker/r/container.html)
+to prepare volumes that can be shared across containers.
+
+## Example Usage
+
+```
+# Creates a docker volume "shared_volume".
+resource "docker_volume" "shared_volume" {
+    name = "shared_volume"
+}
+
+# Reference the volume with ${docker_volume.shared_volume.name}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `name` - (Optional, string) The name of the Docker volume (generated if not provided).
+* `driver` - (Optional, string) Driver type for the volume (defaults to local).
+* `driver_opts` - (Optional, map of strings) Options specific to the driver.
+
+## Attributes Reference
+
+The following attributes are exported in addition to the above configuration:
+
+* `mountpoint` (string) - The mountpoint of the volume.
diff --git a/website/source/docs/providers/google/index.html.markdown b/website/source/docs/providers/google/index.html.markdown
index 14a208d6a2..2dfc69cfde 100644
--- a/website/source/docs/providers/google/index.html.markdown
+++ b/website/source/docs/providers/google/index.html.markdown
@@ -73,7 +73,10 @@ the process more straightforwarded, it is documented here:
 
 1. Log into the [Google Developers Console](https://console.developers.google.com) and select a project.
 
-2. Under the "APIs & Auth" section, click "Credentials."
+2. Click the menu button in the top left corner, and navigate to "Permissions",
+   then "Service accounts", and finally "Create service account".
 
-3. Create a new OAuth client ID and select "Service account" as the type
-   of account. Once created, and after a P12 key is downloaded, a JSON file should be downloaded. This is your _account file_.
+3. Provide a name and ID in the corresponding fields, select
+   "Furnish a new private key", and select "JSON" as the key type.
+
+4. Clicking "Create" will download your `credentials`.
diff --git a/website/source/docs/providers/google/r/compute_instance.html.markdown b/website/source/docs/providers/google/r/compute_instance.html.markdown
index c7bc410015..1074d01176 100644
--- a/website/source/docs/providers/google/r/compute_instance.html.markdown
+++ b/website/source/docs/providers/google/r/compute_instance.html.markdown
@@ -133,6 +133,10 @@ The `access_config` block supports:
 * `nat_ip` - (Optional) The IP address that will be 1:1 mapped to the instance's
   network ip. If not given, one will be generated.
 
+* `assigned_nat_ip` - (Optional) The IP address that is assigned to the
+  instance.
+  If `nat_ip` is filled, it will appear here. If `nat_ip` is left
+  blank, the ephemeral assigned IP will appear here.
+  (DEPRECATED)
 
 The `network` block supports:
 
 * `source` - (Required) The name of the network to attach this interface to.
diff --git a/website/source/docs/providers/google/r/compute_instance_group_manager.html.markdown b/website/source/docs/providers/google/r/compute_instance_group_manager.html.markdown
index 30527c80ac..8d87a13f24 100644
--- a/website/source/docs/providers/google/r/compute_instance_group_manager.html.markdown
+++ b/website/source/docs/providers/google/r/compute_instance_group_manager.html.markdown
@@ -20,10 +20,17 @@ resource "google_compute_instance_group_manager" "foobar" {
   description = "Terraform test instance group manager"
   name = "terraform-test"
   instance_template = "${google_compute_instance_template.foobar.self_link}"
+  update_strategy = "NONE"
   target_pools = ["${google_compute_target_pool.foobar.self_link}"]
   base_instance_name = "foobar"
   zone = "us-central1-a"
   target_size = 2
+
+  named_port {
+    name = "customHTTP"
+    port = 8888
+  }
+
 }
 ```
@@ -41,7 +48,13 @@ instance name.
 group manager.
 
 * `instance_template` - (Required) The full URL to an instance template from
-which all new instances will be created.
+which all new instances will be created.
+
+* `update_strategy` - (Optional, Default `"RESTART"`) If the `instance_template` resource is
+modified, a value of `"NONE"` will prevent any of the managed instances from
+being restarted by Terraform. A value of `"RESTART"` will restart all of the
+instances at once. In the future, as the GCE API matures, we will support
+`"ROLLING_UPDATE"` as well.
 
 * `name` - (Required) The name of the instance group manager. Must be 1-63
 characters long and comply with [RFC1035](https://www.ietf.org/rfc/rfc1035.txt).
@@ -56,6 +69,12 @@ affect existing instances.
 
 * `zone` - (Required) The zone that instances in this group should be created in.
 
+The `named_port` block supports (include one `named_port` block for each named port required):
+
+* `name` - (Required) The name of the port.
+
+* `port` - (Required) The port number.
+
 ## Attributes Reference
 
 The following attributes are exported:
diff --git a/website/source/docs/providers/google/r/compute_target_http_proxy.html.markdown b/website/source/docs/providers/google/r/compute_target_http_proxy.html.markdown
index 9734f6ff86..c0199fd382 100644
--- a/website/source/docs/providers/google/r/compute_target_http_proxy.html.markdown
+++ b/website/source/docs/providers/google/r/compute_target_http_proxy.html.markdown
@@ -10,8 +10,8 @@ description: |-
 
 Creates a target HTTP proxy resource in GCE. For more information see
 [the official
-documentation](http://cloud.google.com/compute/docs/load-balancing/http/target-proxies) and
-[API](http://cloud.google.com/compute/docs/reference/latest/targetHttpProxies).
+documentation](https://cloud.google.com/compute/docs/load-balancing/http/target-proxies) and
+[API](https://cloud.google.com/compute/docs/reference/latest/targetHttpProxies).
 ## Example Usage
 
diff --git a/website/source/docs/providers/google/r/container_cluster.html.markdown b/website/source/docs/providers/google/r/container_cluster.html.markdown
index 5a66ec9aaf..9fe63cbe24 100644
--- a/website/source/docs/providers/google/r/container_cluster.html.markdown
+++ b/website/source/docs/providers/google/r/container_cluster.html.markdown
@@ -14,14 +14,23 @@ description: |-
 
 ```
 resource "google_container_cluster" "primary" {
-  name = "marcellus-wallace"
-  zone = "us-central1-a"
-  initial_node_count = 3
+  name               = "marcellus-wallace"
+  zone               = "us-central1-a"
+  initial_node_count = 3
 
-  master_auth {
-    username = "mr.yoda"
-    password = "adoy.rm"
-  }
+  master_auth {
+    username = "mr.yoda"
+    password = "adoy.rm"
+  }
+
+  node_config {
+    oauth_scopes = [
+      "https://www.googleapis.com/auth/compute",
+      "https://www.googleapis.com/auth/devstorage.read_only",
+      "https://www.googleapis.com/auth/logging.write",
+      "https://www.googleapis.com/auth/monitoring"
+    ]
+  }
 }
 ```
@@ -41,7 +50,7 @@ resource "google_container_cluster" "primary" {
 * `monitoring_service` - (Optional) The monitoring service that the cluster should write metrics to. Available options include `monitoring.googleapis.com` and `none`. Defaults to `monitoring.googleapis.com`
 * `network` - (Optional) The name of the Google Compute Engine network to which the cluster is connected
-* `node_config` - (Optional)The machine type and image to use for all nodes in this cluster
+* `node_config` - (Optional) The machine type and image to use for all nodes in this cluster
 
 **Master Auth** supports the following arguments:
diff --git a/website/source/docs/providers/google/r/dns_record_set.markdown b/website/source/docs/providers/google/r/dns_record_set.markdown
index 79ad2eb308..a4fd97af47 100644
--- a/website/source/docs/providers/google/r/dns_record_set.markdown
+++ b/website/source/docs/providers/google/r/dns_record_set.markdown
@@ -40,7 +40,7 @@ resource "google_dns_record_set" "frontend" {
   name = "frontend.${google_dns_managed_zone.prod.dns_name}"
   type = "A"
   ttl = 300
-  rrdatas = ["${google_compute_instance.frontend.network_interface.0.access_config.0.nat_ip}"]
+  rrdatas = ["${google_compute_instance.frontend.network_interface.0.access_config.0.assigned_nat_ip}"]
 }
 ```
diff --git a/website/source/docs/providers/google/r/pubsub_subscription.html.markdown b/website/source/docs/providers/google/r/pubsub_subscription.html.markdown
new file mode 100644
index 0000000000..7917205364
--- /dev/null
+++ b/website/source/docs/providers/google/r/pubsub_subscription.html.markdown
@@ -0,0 +1,56 @@
+---
+layout: "google"
+page_title: "Google: google_pubsub_subscription"
+sidebar_current: "docs-google-pubsub-subscription"
+description: |-
+  Creates a subscription in Google's pubsub queueing system
+---
+
+# google\_pubsub\_subscription
+
+Creates a subscription in Google's pubsub queueing system. For more information see
+[the official documentation](https://cloud.google.com/pubsub/docs) and
+[API](https://cloud.google.com/pubsub/reference/rest/v1/projects.subscriptions).
+
+
+## Example Usage
+
+```
+resource "google_pubsub_subscription" "default" {
+    name = "default-subscription"
+    topic = "default-topic"
+    ack_deadline_seconds = 20
+    push_config {
+        push_endpoint = "https://example.com/push"
+        attributes {
+            x-goog-version = "v1"
+        }
+    }
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `name` - (Required) A unique name for the resource, required by pubsub.
+  Changing this forces a new resource to be created.
+ +* `topic` - (Required) A topic to bind this subscription to, required by pubsub. + Changing this forces a new resource to be created. + +* `ack_deadline_seconds` - (Optional) The maximum number of seconds a + subscriber has to acknowledge a received message, otherwise the message is + redelivered. Changing this forces a new resource to be created. + +The optional `push_config` block supports: + +* `push_endpoint` - (Optional) The URL of the endpoint to which messages should + be pushed. Changing this forces a new resource to be created. + +* `attributes` - (Optional) Key-value pairs of API supported attributes used + to control aspects of the message delivery. Currently, only + `x-goog-version` is supported, which controls the format of the data + delivery. For more information, read [the API docs + here](https://cloud.google.com/pubsub/reference/rest/v1/projects.subscriptions#PushConfig.FIELDS.attributes). + Changing this forces a new resource to be created. diff --git a/website/source/docs/providers/google/r/pubsub_topic.html.markdown b/website/source/docs/providers/google/r/pubsub_topic.html.markdown new file mode 100644 index 0000000000..e371ddef19 --- /dev/null +++ b/website/source/docs/providers/google/r/pubsub_topic.html.markdown @@ -0,0 +1,35 @@ +--- +layout: "google" +page_title: "Google: google_pubsub_topic" +sidebar_current: "docs-google-pubsub-topic" +description: |- + Creates a topic in Google's pubsub queueing system +--- + +# google\_pubsub\_topic + +Creates a topic in Google's pubsub queueing system. For more information see +[the official documentation](https://cloud.google.com/pubsub/docs) and +[API](https://cloud.google.com/pubsub/reference/rest/v1/projects.topics). + + +## Example Usage + +``` +resource "google_pubsub_topic" "default" { + name = "default-topic" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `name` - (Required) A unique name for the resource, required by pubsub. + Changing this forces a new resource to be created. + +## Attributes Reference + +The following attributes are exported: + +* `name` - The name of the resource. diff --git a/website/source/docs/providers/google/r/sql_database_instance.html.markdown b/website/source/docs/providers/google/r/sql_database_instance.html.markdown index 7889f1448e..ae3a1d20a0 100644 --- a/website/source/docs/providers/google/r/sql_database_instance.html.markdown +++ b/website/source/docs/providers/google/r/sql_database_instance.html.markdown @@ -28,7 +28,10 @@ resource "google_sql_database_instance" "master" { The following arguments are supported: -* `name` - (Required) The name of the instance. +* `name` - (Optional, Computed) The name of the instance. If the name is left + blank, Terraform will randomly generate one when the instance is first + created. This is done because after a name is used, it cannot be reused + for up to [two months](https://cloud.google.com/sql/docs/delete-instance). * `region` - (Required) The region the instance will sit in. Note, this does not line up with the Google Compute Engine (GCE) regions - your options are @@ -41,12 +44,6 @@ The following arguments are supported: * `database_version` - (Optional, Default: `MYSQL_5_5`) The MySQL version to use. Can be either `MYSQL_5_5` or `MYSQL_5_6`. -* `pricing_plan` - (Optional) Pricing plan for this instance, can be one of - `PER_USE` or `PACKAGE`. - -* `replication_type` - (Optional) Replication type for this instance, can be one of - `ASYNCHRONOUS` or `SYNCHRONOUS`. 
-
 The required `settings` block supports:
 
 * `tier` - (Required) The machine tier to use. See
@@ -62,6 +59,12 @@ The required `settings` block supports:
 
 * `crash_safe_replication` - (Optional) Specific to read instances, indicates
   when crash-safe replication flags are enabled.
+
+* `pricing_plan` - (Optional) Pricing plan for this instance, can be one of
+  `PER_USE` or `PACKAGE`.
+
+* `replication_type` - (Optional) Replication type for this instance, can be one of
+  `ASYNCHRONOUS` or `SYNCHRONOUS`.
+
 The optional `settings.database_flags` sublist supports:
 
 * `name` - (Optional) Name of the flag.
diff --git a/website/source/docs/providers/google/r/sql_user.html.markdown b/website/source/docs/providers/google/r/sql_user.html.markdown
new file mode 100644
index 0000000000..6bb4632911
--- /dev/null
+++ b/website/source/docs/providers/google/r/sql_user.html.markdown
@@ -0,0 +1,47 @@
+---
+layout: "google"
+page_title: "Google: google_sql_user"
+sidebar_current: "docs-google-sql-user"
+description: |-
+  Creates a new SQL user in Google Cloud SQL.
+---
+
+# google\_sql\_user
+
+Creates a new Google SQL user on a Google SQL database instance. For more information, see the [official documentation](https://cloud.google.com/sql/), or the [JSON API](https://cloud.google.com/sql/docs/admin-api/v1beta4/users).
+
+## Example Usage
+
+Example creating a SQL User.
+
+```
+resource "google_sql_database_instance" "master" {
+    name = "master-instance"
+
+    settings {
+        tier = "D0"
+    }
+}
+
+resource "google_sql_user" "users" {
+    name = "me"
+    instance = "${google_sql_database_instance.master.name}"
+    host = "me.com"
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `name` - (Required) The name of the user.
+  Changing this forces a new resource to be created.
+
+* `host` - (Required) The host the user can connect from. Can be an IP address.
+  Changing this forces a new resource to be created.
+
+* `password` - (Required) The user's password. Can be updated.
+
+* `instance` - (Required) The name of the Cloud SQL instance.
+  Changing this forces a new resource to be created.
diff --git a/website/source/docs/providers/google/r/storage_bucket_object.html.markdown b/website/source/docs/providers/google/r/storage_bucket_object.html.markdown
index 1c970530d1..7b481550b8 100644
--- a/website/source/docs/providers/google/r/storage_bucket_object.html.markdown
+++ b/website/source/docs/providers/google/r/storage_bucket_object.html.markdown
@@ -29,8 +29,15 @@ resource "google_storage_bucket_object" "picture" {
 The following arguments are supported:
 
 * `name` - (Required) The name of the object.
+
 * `bucket` - (Required) The name of the containing bucket.
-* `source` - (Required) A path to the data you want to upload.
+
+* `source` - (Optional) A path to the data you want to upload. Must be defined
+if `content` is not.
+
+* `content` - (Optional) Data as `string` to be uploaded. Must be defined if
+`source` is not (see the example below).
+
 * `predefined_acl` - (Optional, Deprecated) The [canned GCS ACL](https://cloud.google.com/storage/docs/access-control#predefined-acl) apply. Please switch
 to `google_storage_object_acl.predefined_acl`.
@@ -39,4 +46,5 @@ to `google_storage_object_acl.predefined_acl`.
 The following attributes are exported:
 
 * `md5hash` - (Computed) Base 64 MD5 hash of the uploaded data.
+
 * `crc32c` - (Computed) Base 64 CRC32 hash of the uploaded data.
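+
+For example, an object can be created directly from an inline string instead
+of a file on disk. This is a minimal sketch; the bucket and object names are
+hypothetical:
+
+```
+resource "google_storage_bucket_object" "note" {
+    name = "note.txt"
+    bucket = "image-store-bucket"
+    content = "hello world"
+}
+```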
diff --git a/website/source/docs/providers/mysql/index.html.markdown b/website/source/docs/providers/mysql/index.html.markdown new file mode 100644 index 0000000000..555c589e20 --- /dev/null +++ b/website/source/docs/providers/mysql/index.html.markdown @@ -0,0 +1,72 @@ +--- +layout: "mysql" +page_title: "Provider: MySQL" +sidebar_current: "docs-mysql-index" +description: |- + A provider for MySQL Server. +--- + +# MySQL Provider + +[MySQL](http://www.mysql.com) is a relational database server. The MySQL +provider exposes resources used to manage the configuration of resources +in a MySQL server. + +Use the navigation to the left to read about the available resources. + +## Example Usage + +The following is a minimal example: + +``` +# Configure the MySQL provider +provider "mysql" { + endpoint = "my-database.example.com:3306" + username = "app-user" + password = "app-password" +} + +# Create a Database +resource "mysql_database" "app" { + name = "my_awesome_app" +} +``` + +This provider can be used in conjunction with other resources that create +MySQL servers. For example, ``aws_db_instance`` is able to create MySQL +servers in Amazon's RDS service. + +``` +# Create a database server +resource "aws_db_instance" "default" { + engine = "mysql" + engine_version = "5.6.17" + instance_class = "db.t1.micro" + name = "initial_db" + username = "rootuser" + password = "rootpasswd" + # etc, etc; see aws_db_instance docs for more +} + +# Configure the MySQL provider based on the outcome of +# creating the aws_db_instance. +provider "mysql" { + endpoint = "${aws_db_instance.default.endpoint}" + username = "${aws_db_instance.default.username}" + password = "${aws_db_instance.default.password}" +} + +# Create a second database, in addition to the "initial_db" created +# by the aws_db_instance resource above. +resource "mysql_database" "app" { + name = "another_db" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `endpoint` - (Required) The address of the MySQL server to use. Most often a "hostname:port" pair, but may also be an absolute path to a Unix socket when the host OS is Unix-compatible. +* `username` - (Required) Username to use to authenticate with the server. +* `password` - (Optional) Password for the given user, if that user has a password. diff --git a/website/source/docs/providers/mysql/r/database.html.markdown b/website/source/docs/providers/mysql/r/database.html.markdown new file mode 100644 index 0000000000..36459ab9e0 --- /dev/null +++ b/website/source/docs/providers/mysql/r/database.html.markdown @@ -0,0 +1,54 @@ +--- +layout: "mysql" +page_title: "MySQL: mysql_database" +sidebar_current: "docs-mysql-resource-database" +description: |- + Creates and manages a database on a MySQL server. +--- + +# mysql\_database + +The ``mysql_database`` resource creates and manages a database on a MySQL +server. + +~> **Caution:** The ``mysql_database`` resource can completely delete your +database just as easily as it can create it. To avoid costly accidents, +consider setting +[``prevent_destroy``](/docs/configuration/resources.html#prevent_destroy) +on your database resources as an extra safety measure. + +## Example Usage + +``` +resource "mysql_database" "app" { + name = "my_awesome_app" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `name` - (Required) The name of the database. 
This must be unique within + a given MySQL server and may or may not be case-sensitive depending on + the operating system on which the MySQL server is running. + +* `default_character_set` - (Optional) The default character set to use when + a table is created without specifying an explicit character set. Defaults + to "utf8". + +* `default_collation` - (Optional) The default collation to use when a table + is created without specifying an explicit collation. Defaults to + ``utf8_general_ci``. Each character set has its own set of collations, so + changing the character set requires also changing the collation. + +Note that the defaults for character set and collation above do not respect +any defaults set on the MySQL server, so that the configuration can be set +appropriately even though Terraform cannot see the server-level defaults. If +you wish to use the server's defaults you must consult the server's +configuration and then set the ``default_character_set`` and +``default_collation`` to match. + +## Attributes Reference + +No further attributes are exported. diff --git a/website/source/docs/providers/openstack/index.html.markdown b/website/source/docs/providers/openstack/index.html.markdown index be918a4655..e248571931 100644 --- a/website/source/docs/providers/openstack/index.html.markdown +++ b/website/source/docs/providers/openstack/index.html.markdown @@ -64,7 +64,7 @@ The following arguments are supported: service catalog. It can be set using the OS_ENDPOINT_TYPE environment variable. If not set, public endpoints is used. -## Testing +## Testing and Development In order to run the Acceptance Tests for development, the following environment variables must also be set: @@ -79,3 +79,15 @@ variables must also be set: * `OS_POOL_NAME` - The name of a Floating IP pool. * `OS_NETWORK_ID` - The UUID of a network in your test environment. + +To make development easier, the `builtin/providers/openstack/devstack/deploy.sh` +script will assist in installing and configuring a standardized +[DevStack](http://docs.openstack.org/developer/devstack/) environment along with +Golang, Terraform, and all development dependencies. It will also set the required +environment variables in the `devstack/openrc` file. + +Do not run the `deploy.sh` script on your workstation or any type of production +server. Instead, run the script within a disposable virtual machine. +[Here's](https://github.com/berendt/terraform-configurations) an example of a +Terraform configuration that will create an OpenStack instance and then install and +configure DevStack inside. diff --git a/website/source/docs/providers/openstack/r/compute_instance_v2.html.markdown b/website/source/docs/providers/openstack/r/compute_instance_v2.html.markdown index 73e6636461..8dced11dcc 100644 --- a/website/source/docs/providers/openstack/r/compute_instance_v2.html.markdown +++ b/website/source/docs/providers/openstack/r/compute_instance_v2.html.markdown @@ -85,9 +85,13 @@ The following arguments are supported: * `volume` - (Optional) Attach an existing volume to the instance. The volume structure is described below. -* `scheduler_hints` - (Optional) Provider the Nova scheduler with hints on how +* `scheduler_hints` - (Optional) Provide the Nova scheduler with hints on how the instance should be launched. The available hints are described below. +* `personality` - (Optional) Customize the personality of an instance by + defining one or more files and their contents. The personality structure + is described below. 
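+
+For example, a file can be injected into the instance at boot time through a
+`personality` block like the following (a minimal sketch; the path and
+contents are hypothetical, and the block's fields are documented below):
+
+```
+personality {
+    file = "/etc/motd"
+    contents = "Managed by Terraform"
+}
+```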
+ The `network` block supports: * `uuid` - (Required unless `port` or `name` is provided) The network UUID to @@ -143,6 +147,12 @@ The `scheduler_hints` block supports: * `build_near_host_ip` - (Optional) An IP Address in CIDR form. The instance will be placed on a compute node that is in the same subnet. +The `personality` block supports: + +* `file` - (Required) The absolute path of the destination file. + +* `contents` - (Required) The contents of the file. Limited to 255 bytes. + ## Attributes Reference The following attributes are exported: diff --git a/website/source/docs/providers/openstack/r/compute_secgroup_v2.html.markdown b/website/source/docs/providers/openstack/r/compute_secgroup_v2.html.markdown index 49e1c3ebfb..2005c9aea0 100644 --- a/website/source/docs/providers/openstack/r/compute_secgroup_v2.html.markdown +++ b/website/source/docs/providers/openstack/r/compute_secgroup_v2.html.markdown @@ -62,17 +62,18 @@ range to open. Changing this creates a new security group rule. * `ip_protocol` - (Required) The protocol type that will be allowed. Changing this creates a new security group rule. -* `cidr` - (Optional) Required if `from_group_id` is empty. The IP range that -will be the source of network traffic to the security group. Use 0.0.0.0./0 -to allow all IP addresses. Changing this creates a new security group rule. +* `cidr` - (Optional) Required if `from_group_id` or `self` is empty. The IP range +that will be the source of network traffic to the security group. Use 0.0.0.0/0 +to allow all IP addresses. Changing this creates a new security group rule. Cannot +be combined with `from_group_id` or `self`. -* `from_group_id` - (Optional) Required if `cidr` is empty. The ID of a group -from which to forward traffic to the parent group. Changing -this creates a new security group rule. +* `from_group_id` - (Optional) Required if `cidr` or `self` is empty. The ID of a +group from which to forward traffic to the parent group. Changing this creates a +new security group rule. Cannot be combined with `cidr` or `self`. * `self` - (Optional) Required if `cidr` and `from_group_id` is empty. If true, -the security group itself will be added as a source to this ingress rule. `cidr` -and `from_group_id` will be ignored if either are set while `self` is true. +the security group itself will be added as a source to this ingress rule. Cannot +be combined with `cidr` or `from_group_id`. ## Attributes Reference @@ -99,3 +100,17 @@ rule { ``` A list of ICMP types and codes can be found [here](https://en.wikipedia.org/wiki/Internet_Control_Message_Protocol#Control_messages). 
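+
+As a further example, a rule that allows members of the security group to
+reach each other on any TCP port could look like this (a minimal sketch using
+the `self` attribute described above):
+
+```
+rule {
+  from_port   = 1
+  to_port     = 65535
+  ip_protocol = "tcp"
+  self        = true
+}
+```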
+ +### Referencing Security Groups + +When referencing a security group in a configuration (for example, a configuration creates a new security group and then needs to apply it to an instance being created in the same configuration), it is currently recommended to reference the security group by name and not by ID, like this: + +``` +resource "openstack_compute_instance_v2" "test-server" { + name = "tf-test" + image_id = "ad091b52-742f-469e-8f3c-fd81cadf0743" + flavor_id = "3" + key_pair = "my_key_pair_name" + security_groups = ["${openstack_compute_secgroup_v2.secgroup_1.name}"] +} +``` diff --git a/website/source/docs/providers/openstack/r/lb_pool_v1.html.markdown b/website/source/docs/providers/openstack/r/lb_pool_v1.html.markdown index 5ddbdf1af8..95a797ede3 100644 --- a/website/source/docs/providers/openstack/r/lb_pool_v1.html.markdown +++ b/website/source/docs/providers/openstack/r/lb_pool_v1.html.markdown @@ -68,7 +68,7 @@ new member. * `port` - (Required) An integer representing the port on which the member is hosted. Changing this creates a new member. -* `admin_state_up` - (Optional) The administrative state of the member. +* `admin_state_up` - (Required) The administrative state of the member. Acceptable values are 'true' and 'false'. Changing this value updates the state of the existing member. diff --git a/website/source/docs/providers/packet/index.html.markdown b/website/source/docs/providers/packet/index.html.markdown index bbe9f5d1ea..5898c3c9a8 100644 --- a/website/source/docs/providers/packet/index.html.markdown +++ b/website/source/docs/providers/packet/index.html.markdown @@ -22,9 +22,9 @@ provider "packet" { } # Create a project -resource "packet_project" "tf_project_1" { +resource "packet_project" "cool_project" { name = "My First Terraform Project" - payment_method = "PAYMENT_METHOD_ID" + payment_method = "PAYMENT_METHOD_ID" # Only required for a non-default payment method } # Create a device and add it to tf_project_1 @@ -34,7 +34,7 @@ resource "packet_device" "web1" { facility = "ewr1" operating_system = "coreos_stable" billing_cycle = "hourly" - project_id = "${packet_project.tf_project_1.id}" + project_id = "${packet_project.cool_project.id}" } ``` @@ -44,4 +44,3 @@ The following arguments are supported: * `auth_token` - (Required) This is your Packet API Auth token. This can also be specified with the `PACKET_AUTH_TOKEN` shell environment variable. - diff --git a/website/source/docs/providers/packet/r/device.html.markdown b/website/source/docs/providers/packet/r/device.html.markdown index 6d57dcbb51..75a1f501d9 100644 --- a/website/source/docs/providers/packet/r/device.html.markdown +++ b/website/source/docs/providers/packet/r/device.html.markdown @@ -14,14 +14,14 @@ modify, and delete devices. 
## Example Usage ``` -# Create a device and add it to tf_project_1 +# Create a device and add it to cool_project resource "packet_device" "web1" { hostname = "tf.coreos2" plan = "baremetal_1" facility = "ewr1" operating_system = "coreos_stable" billing_cycle = "hourly" - project_id = "${packet_project.tf_project_1.id}" + project_id = "${packet_project.cool_project.id}" } ``` @@ -33,7 +33,7 @@ The following arguments are supported: * `project_id` - (Required) The id of the project in which to create the device * `operating_system` - (Required) The operating system slug * `facility` - (Required) The facility in which to create the device -* `plan` - (Required) The config type slug +* `plan` - (Required) The hardware config slug * `billing_cycle` - (Required) monthly or hourly * `user_data` (Optional) - A string of the desired User Data for the device. @@ -43,13 +43,13 @@ The following attributes are exported: * `id` - The ID of the device * `hostname`- The hostname of the device -* `project_id`- The Id of the project the device belonds to -* `facility` - The facility the device is in -* `plan` - The config type of the device +* `project_id`- The ID of the project the device belongs to +* `facility` - The facility the device is in +* `plan` - The hardware config of the device * `network` - The private and public v4 and v6 IPs assigned to the device -* `locked` - Is the device locked +* `locked` - Whether the device is locked * `billing_cycle` - The billing cycle of the device (monthly or hourly) * `operating_system` - The operating system running on the device * `status` - The status of the device * `created` - The timestamp for when the device was created -* `updated` - The timestamp for the last time the device was udpated +* `updated` - The timestamp for the last time the device was updated diff --git a/website/source/docs/providers/packet/r/project.html.markdown b/website/source/docs/providers/packet/r/project.html.markdown index c34b49c209..9b92e1c89a 100644 --- a/website/source/docs/providers/packet/r/project.html.markdown +++ b/website/source/docs/providers/packet/r/project.html.markdown @@ -25,16 +25,16 @@ resource "packet_project" "tf_project_1" { The following arguments are supported: -* `name` - (Required) The name of the Project in Packet.net -* `payment_method` - (Required) The id of the payment method on file to use for services created -on this project. +* `name` - (Required) The name of the Project on Packet.net +* `payment_method` - (Optional) The unique ID of the payment method on file to use for services created +in this project. If not given, the project will use the default payment method for your user. ## Attributes Reference The following attributes are exported: * `id` - The unique ID of the project -* `payment_method` - The id of the payment method on file to use for services created -on this project. +* `payment_method` - The unique ID of the payment method on file to use for services created +in this project. 
* `created` - The timestamp for when the Project was created * `updated` - The timestamp for the last time the Project was updated diff --git a/website/source/docs/providers/packet/r/ssh_key.html.markdown b/website/source/docs/providers/packet/r/ssh_key.html.markdown index cb27aaa774..f3ca1e3c6b 100644 --- a/website/source/docs/providers/packet/r/ssh_key.html.markdown +++ b/website/source/docs/providers/packet/r/ssh_key.html.markdown @@ -9,7 +9,7 @@ description: |- # packet\_ssh_key Provides a Packet SSH key resource to allow you manage SSH -keys on your account. All ssh keys on your account are loaded on +keys on your account. All SSH keys on your account are loaded on all new devices, they do not have to be explicitly declared on device creation. @@ -40,4 +40,4 @@ The following attributes are exported: * `public_key` - The text of the public key * `fingerprint` - The fingerprint of the SSH key * `created` - The timestamp for when the SSH key was created -* `updated` - The timestamp for the last time the SSH key was udpated +* `updated` - The timestamp for the last time the SSH key was updated diff --git a/website/source/docs/providers/postgresql/index.html.markdown b/website/source/docs/providers/postgresql/index.html.markdown new file mode 100644 index 0000000000..36761b626a --- /dev/null +++ b/website/source/docs/providers/postgresql/index.html.markdown @@ -0,0 +1,63 @@ +--- +layout: "postgresql" +page_title: "Provider: PostgreSQL" +sidebar_current: "docs-postgresql-index" +description: |- + A provider for PostgreSQL Server. +--- + +# PostgreSQL Provider + +The PostgreSQL provider gives the ability to deploy and configure resources in a PostgreSQL server. + +Use the navigation to the left to read about the available resources. + +## Usage + +``` +provider "postgresql" { + host = "postgres_server_ip" + port = 5432 + username = "postgres_user" + password = "postgres_password" +} + +``` + +Configuring multiple servers can be done by specifying the alias option. + +``` +provider "postgresql" { + alias = "pg1" + host = "postgres_server_ip1" + username = "postgres_user1" + password = "postgres_password1" +} + +provider "postgresql" { + alias = "pg2" + host = "postgres_server_ip2" + username = "postgres_user2" + password = "postgres_password2" +} + +resource "postgresql_database" "my_db1" { + provider = "postgresql.pg1" + name = "my_db1" +} +resource "postgresql_database" "my_db2" { + provider = "postgresql.pg2" + name = "my_db2" +} + + +``` + +## Argument Reference + +The following arguments are supported: + +* `host` - (Required) The address for the postgresql server connection. +* `port` - (Optional) The port for the postgresql server connection. (Default 5432) +* `username` - (Required) Username for the server connection. +* `password` - (Optional) Password for the server connection. \ No newline at end of file diff --git a/website/source/docs/providers/postgresql/r/postgresql_database.html.markdown b/website/source/docs/providers/postgresql/r/postgresql_database.html.markdown new file mode 100644 index 0000000000..0c23a7d129 --- /dev/null +++ b/website/source/docs/providers/postgresql/r/postgresql_database.html.markdown @@ -0,0 +1,30 @@ +--- +layout: "postgresql" +page_title: "PostgreSQL: postgresql_database" +sidebar_current: "docs-postgresql-resource-postgresql_database" +description: |- + Creates and manages a database on a PostgreSQL server. +--- + +# postgresql\_database + +The ``postgresql_database`` resource creates and manages a database on a PostgreSQL +server. 
+ + +## Usage + +``` +resource "postgresql_database" "my_db" { + name = "my_db" + owner = "my_role" +} + +``` + +## Argument Reference + +* `name` - (Required) The name of the database. Must be unique on the PostgreSQL server instance + where it is configured. + +* `owner` - (Optional) The owner role of the database. If not specified, the default is the user executing the command. To create a database owned by another role, you must be a direct or indirect member of that role, or be a superuser. diff --git a/website/source/docs/providers/postgresql/r/postgresql_role.html.markdown b/website/source/docs/providers/postgresql/r/postgresql_role.html.markdown new file mode 100644 index 0000000000..a5d5c17d87 --- /dev/null +++ b/website/source/docs/providers/postgresql/r/postgresql_role.html.markdown @@ -0,0 +1,37 @@ +--- +layout: "postgresql" +page_title: "PostgreSQL: postgresql_role" +sidebar_current: "docs-postgresql-resource-postgresql_role" +description: |- + Creates and manages a role on a PostgreSQL server. +--- + +# postgresql\_role + +The ``postgresql_role`` resource creates and manages a role on a PostgreSQL +server. + + +## Usage + +``` +resource "postgresql_role" "my_role" { + name = "my_role" + login = true + password = "mypass" + encrypted = true +} + +``` + +## Argument Reference + +* `name` - (Required) The name of the role. Must be unique on the PostgreSQL server instance + where it is configured. + +* `login` - (Optional) Configures whether a role is allowed to log in; that is, whether the role can be given as the initial session authorization name during client connection. Corresponds to the LOGIN/NOLOGIN +clauses in `CREATE ROLE`. Default value is false. + +* `password` - (Optional) Sets the role's password. (A password is only of use for roles having the LOGIN attribute, but you can nonetheless define one for roles without it.) If you do not plan to use password authentication, you can omit this option. If no password is specified, the password will be set to null and password authentication will always fail for that user. + +* `encrypted` - (Optional) Corresponds to the ENCRYPTED and UNENCRYPTED keywords in PostgreSQL. This controls whether the password is stored encrypted in the system catalogs. Default is false. \ No newline at end of file diff --git a/website/source/docs/providers/template/r/cloudinit_config.html.markdown b/website/source/docs/providers/template/r/cloudinit_config.html.markdown new file mode 100644 index 0000000000..d2d03d3788 --- /dev/null +++ b/website/source/docs/providers/template/r/cloudinit_config.html.markdown @@ -0,0 +1,81 @@ +--- +layout: "template" +page_title: "Template: template_cloudinit_config" +sidebar_current: "docs-template-resource-cloudinit-config" +description: |- + Renders a multi-part cloud-init config from source files. +--- + +# template\_cloudinit\_config + +Renders a multi-part cloud-init config from source files.
+ +## Example Usage + +``` +# Render a part using a `template_file` +resource "template_file" "script" { + template = "${file("${path.module}/init.tpl")}" + + vars { + consul_address = "${aws_instance.consul.private_ip}" + } +} + +# Render a multi-part cloudinit config making use of the part +# above, and other source files +resource "template_cloudinit_config" "config" { + gzip = true + base64_encode = true + + # Set up the hello world script to be called by the cloud-config + part { + filename = "init.cfg" + content_type = "text/part-handler" + content = "${template_file.script.rendered}" + } + + part { + content_type = "text/x-shellscript" + content = "baz" + } + + part { + content_type = "text/x-shellscript" + content = "ffbaz" + } +} + +# Start an AWS instance with the cloudinit config as user data +resource "aws_instance" "web" { + ami = "ami-d05e75b8" + instance_type = "t2.micro" + user_data = "${template_cloudinit_config.config.rendered}" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `gzip` - (Optional) Specify whether or not to gzip the rendered output. + +* `base64_encode` - (Optional) Specify whether or not to base64 encode the rendered output. + +* `part` - (Required) May be specified multiple times; each block creates a fragment of the rendered cloud-init config file. The order of the parts in the configuration is maintained in the rendered template. + +The `part` block supports: + +* `filename` - (Optional) Filename to save the part as. + +* `content_type` - (Optional) Content type to send the file as. + +* `content` - (Required) Body for the part. + +* `merge_type` - (Optional) Gives the ability to merge multiple blocks of cloud-config together. + +## Attributes Reference + +The following attributes are exported: + +* `rendered` - The final rendered multi-part cloudinit config. diff --git a/website/source/docs/providers/terraform/index.html.markdown b/website/source/docs/providers/terraform/index.html.markdown new file mode 100644 index 0000000000..e5ccbff59e --- /dev/null +++ b/website/source/docs/providers/terraform/index.html.markdown @@ -0,0 +1,38 @@ +--- +layout: "terraform" +page_title: "Provider: Terraform" +sidebar_current: "docs-terraform-index" +description: |- + The Terraform provider is used to access metadata from shared infrastructure. +--- + +# Terraform Provider + +The terraform provider exposes resources to access state metadata +for Terraform outputs from shared infrastructure. + +The terraform provider is what we call a _logical provider_. This has no +impact on how it behaves, but conceptually it is important to understand. +The terraform provider doesn't manage any _physical_ resources; it isn't +creating servers, writing files, etc. It is used to access the outputs +of other Terraform states to be used as inputs for resources. +Examples will explain this best. + +Use the navigation to the left to read about the available resources. + +## Example Usage + +``` +# Shared infrastructure state stored in Atlas +resource "terraform_remote_state" "vpc" { + backend = "atlas" + config { + path = "hashicorp/vpc-prod" + } +} + +resource "aws_instance" "foo" { + # ...
+ subnet_id = "${terraform_remote_state.vpc.output.subnet_id}" +} +``` diff --git a/website/source/docs/providers/terraform/r/remote_state.html.md b/website/source/docs/providers/terraform/r/remote_state.html.md new file mode 100644 index 0000000000..b02ddfee99 --- /dev/null +++ b/website/source/docs/providers/terraform/r/remote_state.html.md @@ -0,0 +1,42 @@ +--- +layout: "terraform" +page_title: "Terraform: terraform_remote_state" +sidebar_current: "docs-terraform-resource-remote-state" +description: |- + Accesses state metadata from a remote backend. +--- + +# remote\_state + +Retrieves state metadata from a remote backend. + +## Example Usage + +``` +resource "terraform_remote_state" "vpc" { + backend = "atlas" + config { + path = "hashicorp/vpc-prod" + } +} + +resource "aws_instance" "foo" { + # ... + subnet_id = "${terraform_remote_state.vpc.output.subnet_id}" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `backend` - (Required) The remote backend to use. +* `config` - (Optional) The configuration of the remote backend. + +## Attributes Reference + +The following attributes are exported: + +* `backend` - See Argument Reference above. +* `config` - See Argument Reference above. +* `output` - The values of the configured `outputs` for the root module referenced by the remote state. diff --git a/website/source/docs/providers/tls/r/locally_signed_cert.html.md b/website/source/docs/providers/tls/r/locally_signed_cert.html.md new file mode 100644 index 0000000000..c052c5ff97 --- /dev/null +++ b/website/source/docs/providers/tls/r/locally_signed_cert.html.md @@ -0,0 +1,118 @@ +--- +layout: "tls" +page_title: "TLS: tls_locally_signed_cert" +sidebar_current: "docs-tls-resourse-locally-signed-cert" +description: |- + Creates a locally-signed TLS certificate in PEM format. +--- + +# tls\_locally\_signed\_cert + +Generates a TLS certificate using a *Certificate Signing Request* (CSR) and +signs it with a provided certificate authority (CA) private key. + +Locally-signed certificates are generally only trusted by client software when +set up to use the provided CA. They are normally used in development environments +or when deployed internally to an organization. + +## Example Usage + +``` +resource "tls_locally_signed_cert" "example" { + cert_request_pem = "${file(\"cert_request.pem\")}" + + ca_key_algorithm = "ECDSA" + ca_private_key_pem = "${file(\"ca_private_key.pem\")}" + ca_cert_pem = "${file(\"ca_cert.pem\")}" + + validity_period_hours = 12 + + allowed_uses = [ + "key_encipherment", + "digital_signature", + "server_auth", + ] +} +``` + +## Argument Reference + +The following arguments are supported: + +* `cert_request_pem` - (Required) PEM-encoded certificate request data. + +* `ca_key_algorithm` - (Required) The name of the algorithm for the key provided + in `ca_private_key_pem`. + +* `ca_private_key_pem` - (Required) PEM-encoded private key data for the CA. + This can be read from a separate file using the ``file`` interpolation + function. + +* `ca_cert_pem` - (Required) PEM-encoded certificate data for the CA. + +* `validity_period_hours` - (Required) The number of hours after initial issuing that the + certificate will become invalid. + +* `allowed_uses` - (Required) List of keywords, each describing a use that is permitted + for the issued certificate. The valid keywords are listed below. + +* `early_renewal_hours` - (Optional) If set, the resource will consider the certificate to + have expired the given number of hours before its actual expiry time.
This can be useful + to deploy an updated certificate in advance of the expiration of the current certificate. + Note, however, that the old certificate remains valid until its true expiration time, since + this resource does not (and cannot) support certificate revocation. Note also that this + advance update can only be performed if the Terraform configuration is applied during the + early renewal period. + +* `is_ca_certificate` - (Optional) Boolean controlling whether the CA flag will be set in the + generated certificate. Defaults to `false`, meaning that the certificate does not represent + a certificate authority. + +The `allowed_uses` list accepts the following keywords, combining the set of flags defined by +both [Key Usage](https://tools.ietf.org/html/rfc5280#section-4.2.1.3) and +[Extended Key Usage](https://tools.ietf.org/html/rfc5280#section-4.2.1.12) in +[RFC5280](https://tools.ietf.org/html/rfc5280): + +* `digital_signature` +* `content_commitment` +* `key_encipherment` +* `data_encipherment` +* `key_agreement` +* `cert_signing` +* `encipher_only` +* `decipher_only` +* `any_extended` +* `server_auth` +* `client_auth` +* `code_signing` +* `email_protection` +* `ipsec_end_system` +* `ipsec_tunnel` +* `ipsec_user` +* `timestamping` +* `ocsp_signing` +* `microsoft_server_gated_crypto` +* `netscape_server_gated_crypto` + +## Attributes Reference + +The following attributes are exported: + +* `cert_pem` - The certificate data in PEM format. +* `validity_start_time` - The time after which the certificate is valid, as an + [RFC3339](https://tools.ietf.org/html/rfc3339) timestamp. +* `validity_end_time` - The time until which the certificate remains valid, as an + [RFC3339](https://tools.ietf.org/html/rfc3339) timestamp. + +## Automatic Renewal + +This resource considers its instances to have been deleted after either their validity +period ends or the early renewal period is reached. At this time, applying the +Terraform configuration will cause a new certificate to be generated for the instance. + +Therefore, in a development environment with frequent deployments, it may be convenient +to set a relatively short expiration time and use early renewal to automatically provision +a new certificate when the current one is about to expire. + +The creation of a new certificate may of course cause dependent resources to be updated +or replaced, depending on the lifecycle rules applying to those resources. diff --git a/website/source/docs/providers/tls/r/private_key.html.md b/website/source/docs/providers/tls/r/private_key.html.md index 1a4a2cec43..0afd116756 100644 --- a/website/source/docs/providers/tls/r/private_key.html.md +++ b/website/source/docs/providers/tls/r/private_key.html.md @@ -50,6 +50,12 @@ The following attributes are exported: * `algorithm` - The algorithm that was selected for the key. * `private_key_pem` - The private key data in PEM format. +* `public_key_pem` - The public key data in PEM format. +* `public_key_openssh` - The public key data in OpenSSH `authorized_keys` + format, if the selected private key format is compatible. All RSA keys + are supported, and ECDSA keys with curves "P256", "P384" and "P521" + are supported. This attribute is empty if an incompatible ECDSA curve + is selected.
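To illustrate the new public key attributes, here is a minimal sketch of passing the OpenSSH-format public key to another resource; the `aws_key_pair` resource and the names used here are illustrative assumptions, not part of this change:

```
# Generate an RSA private key and register its OpenSSH-format
# public key as an EC2 key pair (hypothetical example).
resource "tls_private_key" "example" {
    algorithm = "RSA"
}

resource "aws_key_pair" "deploy" {
    key_name   = "deploy-key"
    public_key = "${tls_private_key.example.public_key_openssh}"
}
```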
## Generating a New Key diff --git a/website/source/docs/providers/vcd/index.html.markdown b/website/source/docs/providers/vcd/index.html.markdown new file mode 100644 index 0000000000..b671470b4c --- /dev/null +++ b/website/source/docs/providers/vcd/index.html.markdown @@ -0,0 +1,58 @@ +--- +layout: "vcd" +page_title: "Provider: VMware vCloudDirector" +sidebar_current: "docs-vcd-index" +description: |- + The VMware vCloud Director provider is used to interact with the resources supported by VMware vCloud Director. The provider needs to be configured with the proper credentials before it can be used. +--- + +# VMware vCloud Director Provider + +The VMware vCloud Director provider is used to interact with the resources supported by VMware vCloud Director. The provider needs to be configured with the proper credentials before it can be used. + +Use the navigation to the left to read about the available resources. + +~> **NOTE:** The VMware vCloud Director Provider currently represents _initial support_ and therefore may undergo significant changes as the community improves it. + +## Example Usage + +``` +# Configure the VMware vCloud Director Provider +provider "vcd" { + user = "${var.vcd_user}" + password = "${var.vcd_pass}" + org = "${var.vcd_org}" + url = "${var.vcd_url}" + vdc = "${var.vcd_vdc}" + maxRetryTimeout = "${var.vcd_maxRetryTimeout}" +} + +# Create a new network +resource "vcd_network" "net" { + ... +} +``` + +## Argument Reference + +The following arguments are used to configure the VMware vCloud Director Provider: + +* `user` - (Required) This is the username for vCloud Director API operations. Can also + be specified with the `VCD_USER` environment variable. +* `password` - (Required) This is the password for vCloud Director API operations. Can + also be specified with the `VCD_PASSWORD` environment variable. +* `org` - (Required) This is the vCloud Director Org on which to run API + operations. Can also be specified with the `VCD_ORG` environment + variable. +* `url` - (Required) This is the URL for the vCloud Director API endpoint. e.g. + https://server.domain.com/api. Can also be specified with the `VCD_URL` environment variable. +* `vdc` - (Optional) This is the virtual datacenter within vCloud Director to run + API operations against. If not set the plugin will select the first virtual + datacenter available to your Org. Can also be specified with the `VCD_VDC` environment + variable. +* `maxRetryTimeout` - (Optional) This provides you with the ability to specify the maximum + amount of time (in seconds) you are prepared to wait for interactions on resources managed + by vCloud Director to be successful. If a resource action fails, the action will be retried + (as long as it is still within the `maxRetryTimeout` value) to try and ensure success. + Defaults to 60 seconds if not set. + Can also be specified with the `VCD_MAX_RETRY_TIMEOUT` environment variable. diff --git a/website/source/docs/providers/vcd/r/dnat.html.markdown b/website/source/docs/providers/vcd/r/dnat.html.markdown new file mode 100644 index 0000000000..dd6fb92b0a --- /dev/null +++ b/website/source/docs/providers/vcd/r/dnat.html.markdown @@ -0,0 +1,32 @@ +--- +layout: "vcd" +page_title: "vCloudDirector: vcd_dnat" +sidebar_current: "docs-vcd-resource-dnat" +description: |- + Provides a vCloud Director DNAT resource. This can be used to create, modify, and delete destination NATs to map external IPs to a VM. +--- + +# vcd\_dnat + +Provides a vCloud Director DNAT resource. 
This can be used to create, modify, +and delete destination NATs to map an external IP/port to a VM. + +## Example Usage + +``` +resource "vcd_dnat" "web" { + edge_gateway = "Edge Gateway Name" + external_ip = "78.101.10.20" + port = 80 + internal_ip = "10.10.0.5" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `edge_gateway` - (Required) The name of the edge gateway on which to apply the DNAT +* `external_ip` - (Required) One of the external IPs available on your Edge Gateway +* `port` - (Required) The port number to map +* `internal_ip` - (Required) The IP of the VM to map to diff --git a/website/source/docs/providers/vcd/r/firewall_rules.html.markdown b/website/source/docs/providers/vcd/r/firewall_rules.html.markdown new file mode 100644 index 0000000000..172237322a --- /dev/null +++ b/website/source/docs/providers/vcd/r/firewall_rules.html.markdown @@ -0,0 +1,83 @@ +--- +layout: "vcd" +page_title: "vCloudDirector: vcd_firewall_rules" +sidebar_current: "docs-vcd-resource-firewall-rules" +description: |- + Provides a vCloud Director Firewall resource. This can be used to create, modify, and delete firewall settings and rules. +--- + +# vcd\_firewall\_rules + +Provides a vCloud Director Firewall resource. This can be used to create, +modify, and delete firewall settings and rules. + +## Example Usage + +``` +resource "vcd_firewall_rules" "fw" { + edge_gateway = "Edge Gateway Name" + default_action = "drop" + + rule { + description = "deny-ftp-out" + policy = "deny" + protocol = "tcp" + destination_port = "21" + destination_ip = "any" + source_port = "any" + source_ip = "10.10.0.0/24" + } + + rule { + description = "allow-outbound" + policy = "allow" + protocol = "any" + destination_port = "any" + destination_ip = "any" + source_port = "any" + source_ip = "10.10.0.0/24" + } + +} + +resource "vcd_vapp" "web" { + ... +} + +resource "vcd_firewall_rules" "fw-web" { + edge_gateway = "Edge Gateway Name" + default_action = "drop" + + rule { + description = "allow-web" + policy = "allow" + protocol = "tcp" + destination_port = "80" + destination_ip = "${vcd_vapp.web.ip}" + source_port = "any" + source_ip = "any" + } +} + +``` + +## Argument Reference + +The following arguments are supported: + +* `edge_gateway` - (Required) The name of the edge gateway on which to apply the Firewall Rules +* `default_action` - (Required) Either "allow" or "deny". Specifies what to do if none of the rules match +* `rule` - (Optional) Configures a firewall rule; see [Rules](#rules) below for details. + + +## Rules + +Each firewall rule supports the following attributes: + +* `description` - (Required) Description of the firewall rule +* `policy` - (Required) Specifies what to do when this rule is matched. Either "allow" or "deny" +* `protocol` - (Required) The protocol to match. One of "tcp", "udp", "icmp" or "any" +* `destination_port` - (Required) The destination port to match. Either a port number or "any" +* `destination_ip` - (Required) The destination IP to match. Either an IP address, IP range or "any" +* `source_port` - (Required) The source port to match. Either a port number or "any" +* `source_ip` - (Required) The source IP to match.
Either an IP address, IP range or "any" diff --git a/website/source/docs/providers/vcd/r/network.html.markdown b/website/source/docs/providers/vcd/r/network.html.markdown new file mode 100644 index 0000000000..eead8c58ea --- /dev/null +++ b/website/source/docs/providers/vcd/r/network.html.markdown @@ -0,0 +1,57 @@ +--- +layout: "vcd" +page_title: "vCloudDirector: vcd_network" +sidebar_current: "docs-vcd-resource-network" +description: |- + Provides a vCloud Director VDC Network. This can be used to create, modify, and delete internal networks for vApps to connect. +--- + +# vcd\_network + +Provides a vCloud Director VDC Network. This can be used to create, +modify, and delete internal networks for vApps to connect. + +## Example Usage + +``` +resource "vcd_network" "net" { + name = "my-net" + edge_gateway = "Edge Gateway Name" + gateway = "10.10.0.1" + + dhcp_pool { + start_address = "10.10.0.2" + end_address = "10.10.0.100" + } + + static_ip_pool { + start_address = "10.10.0.152" + end_address = "10.10.0.254" + } + +} +``` + +## Argument Reference + +The following arguments are supported: + +* `name` - (Required) A unique name for the network +* `edge_gateway` - (Required) The name of the edge gateway +* `netmask` - (Optional) The netmask for the new network. Defaults to `255.255.255.0` +* `gateway` (Required) The gateway for this network +* `dns1` - (Optional) First DNS server to use. Defaults to `8.8.8.8` +* `dns2` - (Optional) Second DNS server to use. Defaults to `8.8.4.4` +* `dns_suffix` - (Optional) A FQDN for the virtual machines on this network +* `dhcp_pool` - (Optional) A range of IPs to issue to virtual machines that don't + have a static IP; see [IP Pools](#ip-pools) below for details. +* `static_ip_pool` - (Optional) A range of IPs permitted to be used as static IPs for + virtual machines; see [IP Pools](#ip-pools) below for details. + + +## IP Pools + +Network interfaces support the following attributes: + +* `start_address` - (Required) The first address in the IP Range +* `end_address` - (Required) The final address in the IP Range diff --git a/website/source/docs/providers/vcd/r/snat.html.markdown b/website/source/docs/providers/vcd/r/snat.html.markdown new file mode 100644 index 0000000000..dc8b567c7c --- /dev/null +++ b/website/source/docs/providers/vcd/r/snat.html.markdown @@ -0,0 +1,30 @@ +--- +layout: "vcd" +page_title: "vCloudDirector: vcd_snat" +sidebar_current: "docs-vcd-resource-snat" +description: |- + Provides a vCloud Director SNAT resource. This can be used to create, modify, and delete source NATs to allow vApps to send external traffic. +--- + +# vcd\_snat + +Provides a vCloud Director SNAT resource. This can be used to create, modify, +and delete source NATs to allow vApps to send external traffic. 
+ +## Example Usage + +``` +resource "vcd_snat" "outbound" { + edge_gateway = "Edge Gateway Name" + external_ip = "78.101.10.20" + internal_ip = "10.10.0.0/24" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `edge_gateway` - (Required) The name of the edge gateway on which to apply the SNAT +* `external_ip` - (Required) One of the external IPs available on your Edge Gateway +* `internal_ip` - (Required) The IP or IP Range of the VM(s) to map from diff --git a/website/source/docs/providers/vcd/r/vapp.html.markdown b/website/source/docs/providers/vcd/r/vapp.html.markdown new file mode 100644 index 0000000000..0a2a2e234e --- /dev/null +++ b/website/source/docs/providers/vcd/r/vapp.html.markdown @@ -0,0 +1,59 @@ +--- +layout: "vcd" +page_title: "vCloudDirector: vcd_vapp" +sidebar_current: "docs-vcd-resource-vapp" +description: |- + Provides a vCloud Director vApp resource. This can be used to create, modify, and delete vApps. +--- + +# vcd\_vapp + +Provides a vCloud Director vApp resource. This can be used to create, +modify, and delete vApps. + +## Example Usage + +``` +resource "vcd_network" "net" { + ... +} + +resource "vcd_vapp" "web" { + name = "web" + catalog_name = "Boxes" + template_name = "lampstack-1.10.1-ubuntu-10.04" + memory = 2048 + cpus = 1 + + network_name = "${vcd_network.net.name}" + network_href = "${vcd_network.net.href}" + ip = "10.10.104.160" + + metadata { + role = "web" + env = "staging" + version = "v1" + } + +} +``` + +## Argument Reference + +The following arguments are supported: + +* `name` - (Required) A unique name for the vApp +* `catalog_name` - (Required) The catalog name in which to find the given vApp Template +* `template_name` - (Required) The name of the vApp Template to use +* `memory` - (Optional) The amount of RAM (in MB) to allocate to the vApp +* `cpus` - (Optional) The number of virtual CPUs to allocate to the vApp +* `initscript` - (Optional) A script to be run only on initial boot +* `network_name` - (Required) Name of the network this vApp should join +* `network_href` - (Optional) The vCloud Director generated href of the network this vApp + should join. If empty, it will use the network name and query vCloud Director to discover + this +* `ip` - (Optional) The IP to assign to this vApp. If given, the address must be within the `static_ip_pool` + set for the network. If left blank, and the network has a `dhcp_pool` set with at least one available IP, then + this will be set with DHCP +* `metadata` - (Optional) Key/value map of metadata to assign to this vApp +* `power_on` - (Optional) A boolean value stating if this vApp should be powered on. Defaults to `true` diff --git a/website/source/docs/providers/vsphere/index.html.markdown b/website/source/docs/providers/vsphere/index.html.markdown index 48fb9dbefe..a1b479c4e9 100644 --- a/website/source/docs/providers/vsphere/index.html.markdown +++ b/website/source/docs/providers/vsphere/index.html.markdown @@ -27,12 +27,18 @@ provider at this time only supports IPv4 addresses on virtual machines.
provider "vsphere" { user = "${var.vsphere_user}" password = "${var.vsphere_password}" - vcenter_server = "${var.vsphere_vcenter_server}" + vsphere_server = "${var.vsphere_server}" } -# Create a virtual machine +# Create a folder +resource "vsphere_folder" "frontend" { + path = "frontend" +} + +# Create a virtual machine within the folder resource "vsphere_virtual_machine" "web" { name = "terraform_web" + folder = "${vsphere_folder.frontend.path}" vcpu = 2 memory = 4096 @@ -41,8 +47,7 @@ resource "vsphere_virtual_machine" "web" { } disk { - size = 1 - iops = 500 + template = "centos-7" } } ``` @@ -55,9 +60,14 @@ The following arguments are used to configure the VMware vSphere Provider: be specified with the `VSPHERE_USER` environment variable. * `password` - (Required) This is the password for vSphere API operations. Can also be specified with the `VSPHERE_PASSWORD` environment variable. -* `vcenter_server` - (Required) This is the vCenter server name for vSphere API - operations. Can also be specified with the `VSPHERE_VCENTER` environment +* `vsphere_server` - (Required) This is the vCenter server name for vSphere API + operations. Can also be specified with the `VSPHERE_SERVER` environment variable. +* `allow_unverified_ssl` - (Optional) Boolean that can be set to true to + disable SSL certificate verification. This should be used with care as it + could allow an attacker to intercept your auth token. If omitted, default + value is `false`. Can also be specified with the `VSPHERE_ALLOW_UNVERIFIED_SSL` + environment variable. ## Acceptance Tests diff --git a/website/source/docs/providers/vsphere/r/folder.html.markdown b/website/source/docs/providers/vsphere/r/folder.html.markdown new file mode 100644 index 0000000000..9825a10edb --- /dev/null +++ b/website/source/docs/providers/vsphere/r/folder.html.markdown @@ -0,0 +1,28 @@ +--- +layout: "vsphere" +page_title: "VMware vSphere: vsphere_folder" +sidebar_current: "docs-vsphere-resource-folder" +description: |- + Provides a VMware vSphere virtual machine folder resource. This can be used to create and delete virtual machine folders. +--- + +# vsphere\_virtual\_machine + +Provides a VMware vSphere virtual machine folder resource. This can be used to create and delete virtual machine folders. + +## Example Usage + +``` +resource "vsphere_folder" "web" { + path = "terraform_web_folder" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `path` - (Required) The path of the folder to be created (relative to the datacenter root); should not begin or end with a "/" +* `datacenter` - (Optional) The name of a Datacenter in which the folder will be created +* `existing_path` - (Computed) The path of any parent folder segments which existed at the time this folder was created; on a +destroy action, the (pre-) existing path is not removed. diff --git a/website/source/docs/providers/vsphere/r/virtual_machine.html.markdown b/website/source/docs/providers/vsphere/r/virtual_machine.html.markdown index 19421aaa9c..008a0aed99 100644 --- a/website/source/docs/providers/vsphere/r/virtual_machine.html.markdown +++ b/website/source/docs/providers/vsphere/r/virtual_machine.html.markdown @@ -24,8 +24,7 @@ resource "vsphere_virtual_machine" "web" { } disk { - size = 1 - iops = 500 + template = "centos-7" } } ``` @@ -48,22 +47,39 @@ The following arguments are supported: * `network_interface` - (Required) Configures virtual network interfaces; see [Network Interfaces](#network-interfaces) below for details. 
* `disk` - (Required) Configures virtual disks; see [Disks](#disks) below for details * `boot_delay` - (Optional) Time in seconds to wait for machine network to be ready. +* `custom_configuration_parameters` - (Optional) Map of values that are set as virtual machine custom configuration parameters. - -## Network Interfaces - -Network interfaces support the following attributes: +The `network_interface` block supports: * `label` - (Required) Label to assign to this network interface -* `ip_address` - (Optional) Static IP to assign to this network interface. Interface will use DHCP if this is left blank. Currently only IPv4 IP addresses are supported. -* `subnet_mask` - (Optional) Subnet mask to use when statically assigning an IP. +* `ipv4_address` - (Optional) Static IP to assign to this network interface. Interface will use DHCP if this is left blank. Currently only IPv4 IP addresses are supported. +* `ipv4_prefix_length` - (Optional) Prefix length to use when statically assigning an IP. - -## Disks +The following arguments are maintained for backwards compatibility and may be +removed in a future version: -Disks support the following attributes: +* `ip_address` - _Deprecated: please use `ipv4_address` instead._ +* `subnet_mask` - _Deprecated: please use `ipv4_prefix_length` instead._ + + +The `disk` block supports: * `template` - (Required if size not provided) Template for this disk. * `datastore` - (Optional) Datastore for this disk * `size` - (Required if template not provided) Size of this disk (in GB). * `iops` - (Optional) Number of virtual IOPS to allocate for this disk. + +## Attributes Reference + +The following attributes are exported: + +* `id` - The instance ID. +* `name` - See Argument Reference above. +* `vcpu` - See Argument Reference above. +* `memory` - See Argument Reference above. +* `datacenter` - See Argument Reference above. +* `network_interface/label` - See Argument Reference above. +* `network_interface/ipv4_address` - See Argument Reference above. +* `network_interface/ipv4_prefix_length` - See Argument Reference above. +* `network_interface/ipv6_address` - Assigned static IPv6 address. +* `network_interface/ipv6_prefix_length` - Prefix length of the assigned static IPv6 address. diff --git a/website/source/docs/provisioners/chef.html.markdown b/website/source/docs/provisioners/chef.html.markdown index 60f6d577a0..e4b89f23f4 100644 --- a/website/source/docs/provisioners/chef.html.markdown +++ b/website/source/docs/provisioners/chef.html.markdown @@ -52,6 +52,12 @@ The following arguments are supported: * `attributes (map)` - (Optional) A map with initial node attributes for the new node. See example. +* `client_options (array)` - (Optional) A list of optional Chef Client configuration + options. See the Chef Client [documentation](https://docs.chef.io/config_rb_client.html) for all available options. + +* `disable_reporting (boolean)` - (Optional) If true, the Chef Client will not try to send + reporting data (used by Chef Reporting) to the Chef Server (defaults to false) + * `environment (string)` - (Optional) The Chef environment the new node will be joining (defaults `_default`). diff --git a/website/source/docs/provisioners/connection.html.markdown b/website/source/docs/provisioners/connection.html.markdown index 83fa8ebb4a..52f7be7589 100644 --- a/website/source/docs/provisioners/connection.html.markdown +++ b/website/source/docs/provisioners/connection.html.markdown @@ -73,7 +73,9 @@ provisioner "file" { function](/docs/configuration/interpolation.html#file_path_).
This takes preference over the password if provided. -* `agent` - Set to false to disable using ssh-agent to authenticate. +* `agent` - Set to false to disable using ssh-agent to authenticate. On Windows, the + only supported SSH authentication agent is + [Pageant](http://the.earth.li/~sgtatham/putty/0.66/htmldoc/Chapter9.html#pageant). **Additional arguments only supported by the "winrm" connection type:** diff --git a/website/source/docs/state/remote/artifactory.html.md b/website/source/docs/state/remote/artifactory.html.md new file mode 100644 index 0000000000..295e7c0ab3 --- /dev/null +++ b/website/source/docs/state/remote/artifactory.html.md @@ -0,0 +1,54 @@ +--- +layout: "remotestate" +page_title: "Remote State Backend: artifactory" +sidebar_current: "docs-state-remote-artifactory" +description: |- + Terraform can store the state remotely, making it easier to version and work with in a team. +--- + +# artifactory + +Stores the state as an artifact in a given repository in [Artifactory](https://www.jfrog.com/artifactory/). + +Generic HTTP repositories are supported, and state from different +configurations may be kept at different subpaths within the repository. + +-> **Note:** The URL must include the path to the Artifactory installation. +It will likely end in `/artifactory`. + +## Example Usage + +``` +terraform remote config \ + -backend=artifactory \ + -backend-config="username=SheldonCooper" \ + -backend-config="password=AmyFarrahFowler" \ + -backend-config="url=https://custom.artifactoryonline.com/artifactory" \ + -backend-config="repo=foo" \ + -backend-config="subpath=terraform-bar" +``` + +## Example Referencing + +``` +resource "terraform_remote_state" "foo" { + backend = "artifactory" + config { + username = "SheldonCooper" + password = "AmyFarrahFowler" + url = "https://custom.artifactoryonline.com/artifactory" + repo = "foo" + subpath = "terraform-bar" + } +} +``` + +## Configuration variables + +The following configuration options / environment variables are supported: + + * `username` / `ARTIFACTORY_USERNAME` (Required) - The username + * `password` / `ARTIFACTORY_PASSWORD` (Required) - The password + * `url` / `ARTIFACTORY_URL` (Required) - The URL. Note that this is the base URL of the Artifactory installation, not the full repo and subpath. + * `repo` (Required) - The repository name + * `subpath` (Required) - Path within the repository diff --git a/website/source/docs/state/remote/atlas.html.md b/website/source/docs/state/remote/atlas.html.md new file mode 100644 index 0000000000..df458d4da4 --- /dev/null +++ b/website/source/docs/state/remote/atlas.html.md @@ -0,0 +1,43 @@ +--- +layout: "remotestate" +page_title: "Remote State Backend: atlas" +sidebar_current: "docs-state-remote-atlas" +description: |- + Terraform can store the state remotely, making it easier to version and work with in a team. +--- + +# atlas + +Stores the state in [Atlas](https://atlas.hashicorp.com/). + +You can create a new environment in the [Environments section](https://atlas.hashicorp.com/environments) +and generate a new token in the [Tokens page](https://atlas.hashicorp.com/settings/tokens) under Settings.
+ +## Example Usage + +``` +terraform remote config \ + -backend=atlas \ + -backend-config="name=bigbang/example" \ + -backend-config="access_token=X2iTFefU5aWOjg.atlasv1.YaDa" +``` + +## Example Referencing + +``` +resource "terraform_remote_state" "foo" { + backend = "atlas" + config { + name = "bigbang/example" + access_token = "X2iTFefU5aWOjg.atlasv1.YaDa" + } +} +``` + +## Configuration variables + +The following configuration options / environment variables are supported: + + * `name` - (Required) Full name of the environment (`<username>/<name>`) + * `access_token` / `ATLAS_TOKEN` - (Required) Atlas API token + * `address` - (Optional) Address to alternative Atlas location (Atlas Enterprise endpoint) diff --git a/website/source/docs/state/remote/consul.html.md b/website/source/docs/state/remote/consul.html.md new file mode 100644 index 0000000000..1e4422d3ba --- /dev/null +++ b/website/source/docs/state/remote/consul.html.md @@ -0,0 +1,48 @@ +--- +layout: "remotestate" +page_title: "Remote State Backend: consul" +sidebar_current: "docs-state-remote-consul" +description: |- + Terraform can store the state remotely, making it easier to version and work with in a team. +--- + +# consul + +Stores the state in the [Consul](https://www.consul.io/) KV store at a given path. + +-> **Note:** Specifying `access_token` directly means it is included in +cleartext inside the persisted, shared state. +Use of the environment variable `CONSUL_HTTP_TOKEN` is recommended. + +## Example Usage + +``` +terraform remote config \ + -backend=consul \ + -backend-config="path=full/path" +``` + +## Example Referencing + +``` +resource "terraform_remote_state" "foo" { + backend = "consul" + config { + path = "full/path" + } +} +``` + +## Configuration variables + +The following configuration options / environment variables are supported: + + * `path` - (Required) Path in the Consul KV store + * `access_token` / `CONSUL_HTTP_TOKEN` - (Required) Access token + * `address` / `CONSUL_HTTP_ADDR` - (Optional) DNS name and port of your Consul endpoint specified in the + format `dnsname:port`. Defaults to the local agent HTTP listener. + * `scheme` - (Optional) Specifies what protocol to use when talking to the given + `address`, either `http` or `https`. SSL support can also be triggered + by setting the environment variable `CONSUL_HTTP_SSL` to `true`. + * `http_auth` / `CONSUL_HTTP_AUTH` - (Optional) HTTP Basic Authentication credentials to be used when + communicating with Consul, in the format of either `user` or `user:pass`. diff --git a/website/source/docs/state/remote/etcd.html.md b/website/source/docs/state/remote/etcd.html.md new file mode 100644 index 0000000000..d58179a869 --- /dev/null +++ b/website/source/docs/state/remote/etcd.html.md @@ -0,0 +1,41 @@ +--- +layout: "remotestate" +page_title: "Remote State Backend: etcd" +sidebar_current: "docs-state-remote-etcd" +description: |- + Terraform can store the state remotely, making it easier to version and work with in a team. +--- + +# etcd + +Stores the state in [etcd](https://coreos.com/etcd/) at a given path.
+ +## Example Usage + +``` +terraform remote config \ + -backend=etcd \ + -backend-config="path=path/to/terraform.tfstate" \ + -backend-config="endpoints=http://one:4001 http://two:4001" +``` + +## Example Referencing + +``` +resource "terraform_remote_state" "foo" { + backend = "etcd" + config { + path = "path/to/terraform.tfstate" + endpoints = "http://one:4001 http://two:4001" + } +} +``` + +## Configuration variables + +The following configuration options are supported: + + * `path` - (Required) The path where to store the state + * `endpoints` - (Required) A space-separated list of the etcd endpoints + * `username` - (Optional) The username + * `password` - (Optional) The password diff --git a/website/source/docs/state/remote/http.html.md b/website/source/docs/state/remote/http.html.md new file mode 100644 index 0000000000..7e354a4fd5 --- /dev/null +++ b/website/source/docs/state/remote/http.html.md @@ -0,0 +1,40 @@ +--- +layout: "remotestate" +page_title: "Remote State Backend: http" +sidebar_current: "docs-state-remote-http" +description: |- + Terraform can store the state remotely, making it easier to version and work with in a team. +--- + +# http + +Stores the state using a simple [REST](https://en.wikipedia.org/wiki/Representational_state_transfer) client. + +State will be fetched via GET, updated via POST, and purged with DELETE. + +## Example Usage + +``` +terraform remote config \ + -backend=http \ + -backend-config="address=http://my.rest.api.com" +``` + +## Example Referencing + +``` +resource "terraform_remote_state" "foo" { + backend = "http" + config { + address = "http://my.rest.api.com" + } +} +``` + +## Configuration variables + +The following configuration options are supported: + + * `address` - (Required) The address of the REST endpoint + * `skip_cert_verification` - (Optional) Whether to skip TLS verification. + Defaults to `false`. diff --git a/website/source/docs/state/remote.html.md b/website/source/docs/state/remote/index.html.md similarity index 84% rename from website/source/docs/state/remote.html.md rename to website/source/docs/state/remote/index.html.md index 3ab01fa79b..39af76a8ba 100644 --- a/website/source/docs/state/remote.html.md +++ b/website/source/docs/state/remote/index.html.md @@ -1,7 +1,7 @@ --- -layout: "docs" +layout: "remotestate" page_title: "Remote State" -sidebar_current: "docs-state-remote" +sidebar_current: "docs-state-remote_index" description: |- Terraform can store the state remotely, making it easier to version and work with in a team. --- @@ -41,24 +41,7 @@ teams to run their own infrastructure. As a more specific example with AWS: you can expose things such as VPC IDs, subnets, NAT instance IDs, etc. through remote state and have other Terraform states consume that. -An example is shown below: - -``` -resource "terraform_remote_state" "vpc" { - backend = "atlas" - config { - name = "hashicorp/vpc-prod" - } -} - -resource "aws_instance" "foo" { - # ... - subnet_id = "${terraform_remote_state.vpc.output.subnet_id}" -} -``` - -This makes teamwork and componentization of infrastructure frictionless -within your infrastructure. +For example usage see the [terraform_remote_state](/docs/providers/terraform/r/remote_state.html) resource. ## Locking and Teamwork @@ -73,4 +56,3 @@ locking for you. In the future, we'd like to extend the remote state system to allow some minimal locking functionality, but it is a difficult problem without a central system that we currently aren't focused on solving. 
- diff --git a/website/source/docs/state/remote/s3.html.md b/website/source/docs/state/remote/s3.html.md new file mode 100644 index 0000000000..ebeefd6162 --- /dev/null +++ b/website/source/docs/state/remote/s3.html.md @@ -0,0 +1,54 @@ +--- +layout: "remotestate" +page_title: "Remote State Backend: s3" +sidebar_current: "docs-state-remote-s3" +description: |- + Terraform can store the state remotely, making it easier to version and work with in a team. +--- + +# s3 + +Stores the state as a given key in a given bucket on [Amazon S3](https://aws.amazon.com/s3/). + +-> **Note:** Passing credentials directly via config options will +cause them to be included in cleartext inside the persisted state. +Use of environment variables or a config file is recommended. + +## Example Usage + +``` +terraform remote config \ + -backend=s3 \ + -backend-config="bucket=terraform-state-prod" \ + -backend-config="key=network/terraform.tfstate" \ + -backend-config="region=us-east-1" +``` + +## Example Referencing + +``` +resource "terraform_remote_state" "foo" { + backend = "s3" + config { + bucket = "terraform-state-prod" + key = "network/terraform.tfstate" + region = "us-east-1" + } +} +``` + +## Configuration variables + +The following configuration options / environment variables are supported: + + * `bucket` - (Required) The name of the S3 bucket + * `key` - (Required) The path to the state file inside the bucket + * `region` / `AWS_DEFAULT_REGION` - (Optional) The region of the S3 bucket + * `endpoint` / `AWS_S3_ENDPOINT` - (Optional) A custom endpoint for the S3 API + * `encrypt` - (Optional) Whether to enable [server-side encryption](https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html) + of the state file + * `acl` - (Optional) [Canned ACL](https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl) + to be applied to the state file. + * `access_key` / `AWS_ACCESS_KEY_ID` - (Optional) AWS access key + * `secret_key` / `AWS_SECRET_ACCESS_KEY` - (Optional) AWS secret key + * `kms_key_id` - (Optional) Set to the ARN of a KMS key to use that key to encrypt the state. diff --git a/website/source/docs/state/remote/swift.html.md b/website/source/docs/state/remote/swift.html.md new file mode 100644 index 0000000000..a2ee56cd61 --- /dev/null +++ b/website/source/docs/state/remote/swift.html.md @@ -0,0 +1,44 @@ +--- +layout: "remotestate" +page_title: "Remote State Backend: swift" +sidebar_current: "docs-state-remote-swift" +description: |- + Terraform can store the state remotely, making it easier to version and work with in a team. +--- + +# swift + +Stores the state as an artifact in [Swift](http://docs.openstack.org/developer/swift/).
+ +## Example Usage + +``` +terraform remote config \ + -backend=swift \ + -backend-config="path=random/path" +``` + +## Example Referencing + +``` +resource "terraform_remote_state" "foo" { + backend = "swift" + config { + path = "random/path" + } +} +``` + +## Configuration variables + +The following configuration option is supported: + + * `path` - (Required) The path where to store `terraform.tfstate` + +The following environment variables are supported: + + * `OS_AUTH_URL` - (Required) The identity endpoint + * `OS_USERNAME` - (Required) The username + * `OS_PASSWORD` - (Required) The password + * `OS_REGION_NAME` - (Required) The region + * `OS_TENANT_NAME` - (Required) The name of the tenant diff --git a/website/source/downloads.html.erb b/website/source/downloads.html.erb index f5e0945a00..7849b7e376 100644 --- a/website/source/downloads.html.erb +++ b/website/source/downloads.html.erb @@ -25,7 +25,7 @@ description: |- verify the checksums signature file - which has been signed using HashiCorp's GPG key. + which has been signed using HashiCorp's GPG key. You can also download older versions of Terraform from the releases service.

diff --git a/website/source/index.html.erb b/website/source/index.html.erb index 92a6c3f17b..c2bf8467bc 100644 --- a/website/source/index.html.erb +++ b/website/source/index.html.erb @@ -196,8 +196,8 @@

resource "aws_instance" "app" {

count = 5

-

ami = "ami-043a5034"

-

instance_type = "m1.small"

+

ami = "ami-408c7f28"

+

instance_type = "t1.micro"

}

diff --git a/website/source/intro/getting-started/build.html.md b/website/source/intro/getting-started/build.html.md index a40369fade..970689c03a 100644 --- a/website/source/intro/getting-started/build.html.md +++ b/website/source/intro/getting-started/build.html.md @@ -12,7 +12,7 @@ With Terraform installed, let's dive right into it and start creating some infrastructure. We'll build infrastructure on -[AWS](http://aws.amazon.com) for the getting started guide +[AWS](https://aws.amazon.com) for the getting started guide since it is popular and generally understood, but Terraform can [manage many providers](/docs/providers/index.html), including multiple providers in a single configuration. @@ -20,17 +20,17 @@ Some examples of this are in the [use cases section](/intro/use-cases.html). If you don't have an AWS account, -[create one now](http://aws.amazon.com/free/). +[create one now](https://aws.amazon.com/free/). For the getting started guide, we'll only be using resources which qualify under the AWS -[free-tier](http://aws.amazon.com/free/), +[free-tier](https://aws.amazon.com/free/), meaning it will be free. If you already have an AWS account, you may be charged some amount of money, but it shouldn't be more than a few dollars at most. ~> **Warning!** If you're not using an account that qualifies under the AWS -[free-tier](http://aws.amazon.com/free/), you may be charged to run these +[free-tier](https://aws.amazon.com/free/), you may be charged to run these examples. The most you should be charged should only be a few dollars, but we're not responsible for any charges that may incur. diff --git a/website/source/intro/getting-started/change.html.md b/website/source/intro/getting-started/change.html.md index 4850bc808e..3856e5ad93 100644 --- a/website/source/intro/getting-started/change.html.md +++ b/website/source/intro/getting-started/change.html.md @@ -28,7 +28,7 @@ resource in your configuration and change it to the following: ``` resource "aws_instance" "example" { - ami = "ami-aa7ab6c2" + ami = "ami-b8b061d0" instance_type = "t1.micro" } ``` @@ -47,7 +47,7 @@ $ terraform plan ... -/+ aws_instance.example - ami: "ami-408c7f28" => "ami-aa7ab6c2" (forces new resource) + ami: "ami-408c7f28" => "ami-b8b061d0" (forces new resource) availability_zone: "us-east-1c" => "" key_name: "" => "" private_dns: "domU-12-31-39-12-38-AB.compute-1.internal" => "" @@ -79,7 +79,7 @@ the change. $ terraform apply aws_instance.example: Destroying... aws_instance.example: Modifying... - ami: "ami-408c7f28" => "ami-aa7ab6c2" + ami: "ami-408c7f28" => "ami-b8b061d0" Apply complete! Resources: 0 added, 1 changed, 1 destroyed. diff --git a/website/source/intro/getting-started/dependencies.html.md b/website/source/intro/getting-started/dependencies.html.md index fe3397afed..bd9eeccf98 100644 --- a/website/source/intro/getting-started/dependencies.html.md +++ b/website/source/intro/getting-started/dependencies.html.md @@ -39,7 +39,7 @@ This should look familiar from the earlier example of adding an EC2 instance resource, except this time we're building an "aws\_eip" resource type. This resource type allocates and associates an -[elastic IP](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html) +[elastic IP](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html) to an EC2 instance. 
The only parameter for @@ -67,7 +67,7 @@ $ terraform plan ... public_ip: "" => "" + aws_instance.example - ami: "" => "ami-aa7ab6c2" + ami: "" => "ami-b8b061d0" availability_zone: "" => "" instance_type: "" => "t1.micro" key_name: "" => "" @@ -90,7 +90,7 @@ following: ``` aws_instance.example: Creating... - ami: "" => "ami-aa7ab6c2" + ami: "" => "ami-b8b061d0" instance_type: "" => "t1.micro" aws_eip.ip: Creating... instance: "" => "i-0e737b25" @@ -144,7 +144,7 @@ created in parallel to everything else. ``` resource "aws_instance" "another" { - ami = "ami-aa7ab6c2" + ami = "ami-b8b061d0" instance_type = "t1.micro" } ``` diff --git a/website/source/intro/getting-started/install.html.markdown b/website/source/intro/getting-started/install.html.markdown index 2392e35491..fb0c2879f7 100644 --- a/website/source/intro/getting-started/install.html.markdown +++ b/website/source/intro/getting-started/install.html.markdown @@ -23,11 +23,18 @@ Terraform will be installed. The directory will contain a set of binary programs, such as `terraform`, `terraform-provider-aws`, etc. The final step is to make sure the directory you installed Terraform to is on the PATH. See -[this page](http://stackoverflow.com/questions/14637979/how-to-permanently-set-path-on-linux) +[this page](https://stackoverflow.com/questions/14637979/how-to-permanently-set-path-on-linux) for instructions on setting the PATH on Linux and Mac. -[This page](http://stackoverflow.com/questions/1618280/where-can-i-set-path-to-make-exe-on-windows) +[This page](https://stackoverflow.com/questions/1618280/where-can-i-set-path-to-make-exe-on-windows) contains instructions for setting the PATH on Windows. +Example for Linux/Mac - Type the following into your terminal: +>`PATH=/usr/local/terraform/bin:/home/your-user-name/terraform:$PATH` + +Example for Windows - Type the following into the command prompt: +>`set PATH=%PATH%;C:\terraform` + + ## Verifying the Installation After installing Terraform, verify the installation worked by opening a new diff --git a/website/source/intro/getting-started/modules.html.md b/website/source/intro/getting-started/modules.html.md index 79c28aa397..18452f7793 100644 --- a/website/source/intro/getting-started/modules.html.md +++ b/website/source/intro/getting-started/modules.html.md @@ -22,7 +22,7 @@ Writing modules is covered in more detail in the [modules documentation](/docs/modules/index.html). ~> **Warning!** The examples on this page are _**not** eligible_ for the AWS -[free-tier](http://aws.amazon.com/free/). Do not execute the examples on this +[free-tier](https://aws.amazon.com/free/). Do not execute the examples on this page unless you're willing to spend a small amount of money. ## Using Modules @@ -50,7 +50,7 @@ module "consul" { key_name = "AWS SSH KEY NAME" key_path = "PATH TO ABOVE PRIVATE KEY" - region = "AWS REGION" + region = "us-east-1" servers = "3" } ``` @@ -94,14 +94,22 @@ With the modules downloaded, we can now plan and apply it. If you run ``` $ terraform plan ... -+ module.consul - 4 resource(s) ++ module.consul.aws_instance.server.0 +... ++ module.consul.aws_instance.server.1 +... ++ module.consul.aws_instance.server.2 +... ++ module.consul.aws_security_group.consul +... +Plan: 4 to add, 0 to change, 0 to destroy. ``` -As you can see, the module is treated like a black box. In the plan, Terraform -shows the module managed as a whole. It does not show what resources within -the module will be created. If you care, you can see that by specifying -a `-module-depth=-1` flag.
+Conceptually, the module is treated like a black box. In the plan, however +Terraform shows each resource the module manages so you can see each detail +about what the plan will do. If you'd like compressed plan output, you can +specify the `-module-depth=` flag to get Terraform to output summaries by +module. Next, run `terraform apply` to create the module. Note that as we warned above, the resources this module creates are outside of the AWS free tier, so this diff --git a/website/source/intro/getting-started/outputs.html.md b/website/source/intro/getting-started/outputs.html.md index 2d537097e1..d5a6ca1e72 100644 --- a/website/source/intro/getting-started/outputs.html.md +++ b/website/source/intro/getting-started/outputs.html.md @@ -35,7 +35,7 @@ output "ip" { } ``` -This defines an output variables named "ip". The `value` field +This defines an output variable named "ip". The `value` field specifies what the value will be, and almost always contains one or more interpolations, since the output data is typically dynamic. In this case, we're outputting the diff --git a/website/source/intro/getting-started/provision.html.md b/website/source/intro/getting-started/provision.html.md index 4c6a5cfeeb..38153029c9 100644 --- a/website/source/intro/getting-started/provision.html.md +++ b/website/source/intro/getting-started/provision.html.md @@ -25,7 +25,7 @@ To define a provisioner, modify the resource block defining the ``` resource "aws_instance" "example" { - ami = "ami-aa7ab6c2" + ami = "ami-b8b061d0" instance_type = "t1.micro" provisioner "local-exec" { @@ -61,7 +61,7 @@ then run `apply`: ``` $ terraform apply aws_instance.example: Creating... - ami: "" => "ami-aa7ab6c2" + ami: "" => "ami-b8b061d0" instance_type: "" => "t1.micro" aws_eip.ip: Creating... instance: "" => "i-213f350a" diff --git a/website/source/intro/getting-started/variables.html.md b/website/source/intro/getting-started/variables.html.md index 24154ca25d..2fbda60ffa 100644 --- a/website/source/intro/getting-started/variables.html.md +++ b/website/source/intro/getting-started/variables.html.md @@ -123,8 +123,8 @@ support for the "us-west-2" region as well: ``` variable "amis" { default = { - us-east-1 = "ami-aa7ab6c2" - us-west-2 = "ami-23f78e13" + us-east-1 = "ami-b8b061d0" + us-west-2 = "ami-ef5e24df" } } ``` diff --git a/website/source/intro/hashicorp-ecosystem.html.markdown b/website/source/intro/hashicorp-ecosystem.html.markdown index 4e3e8e873d..6dfbed6264 100644 --- a/website/source/intro/hashicorp-ecosystem.html.markdown +++ b/website/source/intro/hashicorp-ecosystem.html.markdown @@ -12,19 +12,19 @@ HashiCorp is the creator of the open source projects Vagrant, Packer, Terraform, If you are using Terraform to create, combine, and modify infrastructure, it’s likely that you are using base images to configure that infrastructure. Packer is our tool for building those base images, such as AMIs, OpenStack images, Docker containers, and more. -Below are summaries of HashiCorp’s open source projects and a graphic showing how Atlas connects them to create a full application delivery workflow. +Below are summaries of HashiCorp’s open source projects and a graphic showing how Atlas connects them to create a full application delivery workflow. # HashiCorp Ecosystem ![Atlas Workflow](docs/atlas-workflow.png) [Atlas](https://atlas.hashicorp.com/?utm_source=terraform&utm_campaign=HashicorpEcosystem) is HashiCorp's only commercial product. 
diff --git a/website/source/intro/hashicorp-ecosystem.html.markdown b/website/source/intro/hashicorp-ecosystem.html.markdown
index 4e3e8e873d..6dfbed6264 100644
--- a/website/source/intro/hashicorp-ecosystem.html.markdown
+++ b/website/source/intro/hashicorp-ecosystem.html.markdown
@@ -12,19 +12,19 @@ HashiCorp is the creator of the open source projects Vagrant, Packer, Terraform,
 If you are using Terraform to create, combine, and modify infrastructure, it’s likely that you are using base images to configure that infrastructure. Packer is our tool for building those base images, such as AMIs, OpenStack images, Docker containers, and more.
 
-Below are summaries of HashiCorp’s open source projects and a graphic showing how Atlas connects them to create a full application delivery workflow. 
+Below are summaries of HashiCorp’s open source projects and a graphic showing how Atlas connects them to create a full application delivery workflow.
 
 # HashiCorp Ecosystem
 
 ![Atlas Workflow](docs/atlas-workflow.png)
 
 [Atlas](https://atlas.hashicorp.com/?utm_source=terraform&utm_campaign=HashicorpEcosystem) is HashiCorp's only commercial product. It unites Packer, Terraform, and Consul to make application delivery a versioned, auditable, repeatable, and collaborative process.
 
-[Packer](https://packer.io/?utm_source=terraform&utm_campaign=HashicorpEcosystem) is a HashiCorp tool for creating machine images and deployable artifacts such as AMIs, OpenStack images, Docker containers, etc.
+[Packer](https://www.packer.io/?utm_source=terraform&utm_campaign=HashicorpEcosystem) is a HashiCorp tool for creating machine images and deployable artifacts such as AMIs, OpenStack images, Docker containers, etc.
 
-[Terraform](https://terraform.io/?utm_source=terraform&utm_campaign=HashicorpEcosystem) is a HashiCorp tool for creating, combining, and modifying infrastructure. In the Atlas workflow Terraform reads from the artifact registry and provisions infrastructure.
+[Terraform](https://www.terraform.io/?utm_source=terraform&utm_campaign=HashicorpEcosystem) is a HashiCorp tool for creating, combining, and modifying infrastructure. In the Atlas workflow Terraform reads from the artifact registry and provisions infrastructure.
 
-[Consul](https://consul.io/?utm_source=terraform&utm_campaign=HashicorpEcosystem) is a HashiCorp tool for service discovery, service registry, and health checks. In the Atlas workflow Consul is configured at the Packer build stage and identifies the service(s) contained in each artifact. Since Consul is configured at the build phase with Packer, when the artifact is deployed with Terraform, it is fully configured with dependencies and service discovery pre-baked. This greatly reduces the risk of an unhealthy node in production due to configuration failure at runtime.
+[Consul](https://www.consul.io/?utm_source=terraform&utm_campaign=HashicorpEcosystem) is a HashiCorp tool for service discovery, service registry, and health checks. In the Atlas workflow Consul is configured at the Packer build stage and identifies the service(s) contained in each artifact. Since Consul is configured at the build phase with Packer, when the artifact is deployed with Terraform, it is fully configured with dependencies and service discovery pre-baked. This greatly reduces the risk of an unhealthy node in production due to configuration failure at runtime.
 
-[Serf](https://serfdom.io/?utm_source=terraform&utm_campaign=HashicorpEcosystem) is a HashiCorp tool for cluster membership and failure detection. Consul uses Serf’s gossip protocol as the foundation for service discovery.
+[Serf](https://www.serfdom.io/?utm_source=terraform&utm_campaign=HashicorpEcosystem) is a HashiCorp tool for cluster membership and failure detection. Consul uses Serf’s gossip protocol as the foundation for service discovery.
 
 [Vagrant](https://www.vagrantup.com/?utm_source=terraform&utm_campaign=HashicorpEcosystem) is a HashiCorp tool for managing development environments that mirror production. Vagrant environments reduce the friction of developing a project and reduce the risk of unexpected behavior appearing after deployment. Vagrant boxes can be built in parallel with production artifacts with Packer to maintain parity between development and production.
diff --git a/website/source/intro/use-cases.html.markdown b/website/source/intro/use-cases.html.markdown
index 794d3cb126..41357e7b00 100644
--- a/website/source/intro/use-cases.html.markdown
+++ b/website/source/intro/use-cases.html.markdown
@@ -89,7 +89,7 @@ implementations have a control layer and infrastructure layer. Terraform can
 be used to codify the configuration for software defined networks. This
 configuration can then be used by Terraform to automatically setup and modify
 settings by interfacing with the control layer. This allows configuration to be
-versioned and changes to be automated. As an example, [AWS VPC](http://aws.amazon.com/vpc/)
+versioned and changes to be automated. As an example, [AWS VPC](https://aws.amazon.com/vpc/)
 is one of the most commonly used SDN implementations, and [can be configured by
 Terraform](/docs/providers/aws/r/vpc.html).
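To make the SDN example in the use-cases hunk above concrete, here is a minimal `aws_vpc` block of the kind the linked resource documentation covers; the CIDR range and tag value are illustrative, not taken from the docs being diffed:

```
# Illustrative only: a minimal VPC definition of the kind referenced above.
resource "aws_vpc" "main" {
  # The address space of the VPC; any RFC 1918 range would do here.
  cidr_block = "10.0.0.0/16"

  tags {
    Name = "terraform-example"
  }
}
```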
diff --git a/website/source/layouts/_header.erb b/website/source/layouts/_header.erb
index 1dcb8234fc..f6a3533533 100644
--- a/website/source/layouts/_header.erb
+++ b/website/source/layouts/_header.erb
@@ -5,7 +5,7 @@