diff --git a/.travis.yml b/.travis.yml index 4a776dc19c..c36571ca10 100644 --- a/.travis.yml +++ b/.travis.yml @@ -9,9 +9,7 @@ go: install: make updatedeps script: - - go test ./... - - make vet - #- go test -race ./... + - make test branches: only: diff --git a/CHANGELOG.md b/CHANGELOG.md index 8c8fb36f77..99f07534fe 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -2,51 +2,83 @@ FEATURES: + * **New provider: `tls`** - A utility provider for generating TLS keys/self-signed certificates for development and testing [GH-2778] + * **New provider: `dyn`** - Manage DNS records on Dyn * **New resource: `aws_cloudformation_stack`** [GH-2636] * **New resource: `aws_cloudtrail`** [GH-3094] * **New resource: `aws_route`** [GH-3548] * **New resource: `aws_codecommit_repository`** [GH-3274] - * **New provider: `tls`** - A utility provider for generating TLS keys/self-signed certificates for development and testing [GH-2778] + * **New resource: `aws_kinesis_firehose_delivery_stream`** [GH-3833] * **New resource: `google_sql_database` and `google_sql_database_instance`** [GH-3617] * **New resource: `google_compute_global_address`** [GH-3701] + * **New resource: `google_compute_https_health_check`** [GH-3883] * **New resource: `google_compute_ssl_certificate`** [GH-3723] * **New resource: `google_compute_url_map`** [GH-3722] * **New resource: `google_compute_target_http_proxy`** [GH-3727] * **New resource: `google_compute_target_https_proxy`** [GH-3728] * **New resource: `google_compute_global_forwarding_rule`** [GH-3702] * **New resource: `openstack_networking_port_v2`** [GH-3731] + * New interpolation function: `coalesce` [GH-3814] IMPROVEMENTS: + * core: Improve message to list only resources which will be destroyed when using `--target` [GH-3859] + * connection/ssh: accept private_key contents instead of paths [GH-3846] * provider/google: preemptible option for instance_template [GH-3667] * provider/google: Accurate Terraform Version [GH-3554] * provider/google: Simplified auth (DefaultClient support) [GH-3553] * provider/google: automatic_restart, preemptible, on_host_maintenance options [GH-3643] + * provider/google: read credentials as contents instead of path [GH-3901] * null_resource: enhance and document [GH-3244, GH-3659] * provider/aws: Add CORS settings to S3 bucket [GH-3387] * provider/aws: Add notification topic ARN for ElastiCache clusters [GH-3674] * provider/aws: Add `kinesis_endpoint` for configuring Kinesis [GH-3255] * provider/aws: Add a computed ARN for S3 Buckets [GH-3685] + * provider/aws: Add S3 support for Lambda Function resource [GH-3794] + * provider/aws: Add `name_prefix` option to launch configurations [GH-3802] + * provider/aws: Provide `source_security_group_id` for ELBs inside a VPC [GH-3780] + * provider/aws: Add snapshot window and retention limits for ElastiCache (Redis) [GH-3707] + * provider/aws: Add username updates for `aws_iam_user` [GH-3227] + * provider/aws: Add AutoMinorVersionUpgrade to RDS Instances [GH-3677] + * provider/aws: Add `access_logs` to ELB resource [GH-3756] + * provider/aws: Add a retry function to rescue an error in creating Autoscaling Lifecycle Hooks [GH-3694] + * provider/aws: `engine_version` is now optional for DB Instance [GH-3744] * provider/aws: Add configuration to enable copying RDS tags to final snapshot [GH-3529] * provider/aws: RDS Cluster additions (`backup_retention_period`, `preferred_backup_window`, `preferred_maintenance_window`) [GH-3757] + * provider/aws: Document and validate ELB ssl_cert and protocol requirements [GH-3887] 
+ * provider/azure: Read publish_settings as contents instead of path [GH-3899] * provider/openstack: Use IPv4 as the default IP version for subnets [GH-3091] * provider/aws: Apply security group after restoring db_instance from snapshot [GH-3513] * provider/aws: Making the AutoScalingGroup name optional [GH-3710] * provider/openstack: Add "delete on termination" boot-from-volume option [GH-3232] * provider/digitalocean: Make user_data force a new droplet [GH-3740] * provider/vsphere: Do not add network interfaces by default [GH-3652] + * provider/openstack: Configure Fixed IPs through ports [GH-3772] + * provider/openstack: Specify a port ID on a Router Interface [GH-3903] BUG FIXES: * `terraform remote config`: update `--help` output [GH-3632] * core: modules on Git branches now update properly [GH-1568] + * core: Fix issue preventing input prompts for unset variables during plan [GH-3843] + * core: Orphan resources can now be targets [GH-3912] * provider/google: Timeout when deleting large instance_group_manager [GH-3591] * provider/aws: Fix issue with order of Termination Policies in AutoScaling Groups. This will introduce plans on upgrade to this version, in order to correct the ordering [GH-2890] * provider/aws: Allow cluster name, not only ARN for `aws_ecs_service` [GH-3668] + * provider/aws: Only set `weight` on an `aws_route53_record` if it has been set in configuration [GH-3900] * provider/aws: ignore association not exist on route table destroy [GH-3615] * provider/aws: Fix policy encoding issue with SNS Topics [GH-3700] + * provider/aws: Correctly export ARN in `aws_iam_saml_provider` [GH-3827] * provider/aws: Tolerate ElastiCache clusters being deleted outside Terraform [GH-3767] + * provider/aws: Downcase Route 53 record names in statefile to match API output [GH-3574] + * provider/aws: Fix issue that could occur if no ECS Cluster was found for a given name [GH-3829] + * provider/aws: Fix issue with SNS topic policy if omitted [GH-3777] + * provider/aws: Support scratch volumes in `aws_ecs_task_definition` [GH-3810] + * provider/aws: Treat `aws_ecs_service` w/ Status==INACTIVE as deleted [GH-3828] + * provider/aws: Expand ~ to homedir in `aws_s3_bucket_object.source` [GH-3910] + * provider/aws: Fix issue with updating the `aws_ecs_task_definition` where `aws_ecs_service` didn't wait for a new computed ARN [GH-3924] + * provider/aws: Prevent crashing when deleting `aws_ecs_service` that is already gone [GH-3914] * provider/azure: various bugfixes [GH-3695] * provider/digitalocean: fix issue preventing SSH fingerprints from working [GH-3633] * provider/digitalocean: Fixing the DigitalOcean Droplet 404 potential on refresh of state [GH-3768] @@ -57,6 +89,10 @@ BUG FIXES: * provider/openstack: Fix boot from volume [GH-3206] * provider/openstack: Fix crashing when image is no longer accessible [GH-2189] * provider/openstack: Better handling of network resource state changes [GH-3712] + * provider/openstack: Fix crashing when no security group is specified [GH-3801] + * provider/packet: Fix issue that could cause errors when provisioning many devices at once [GH-3847] + * provider/openstack: Fix issue preventing security group rules from being removed [GH-3796] + * provider/template: template_file: source contents instead of path [GH-3909] ## 0.6.6 (October 23, 2015) diff --git a/builtin/bins/provider-dyn/main.go b/builtin/bins/provider-dyn/main.go new file mode 100644 index 0000000000..22809f46a2 --- /dev/null +++ b/builtin/bins/provider-dyn/main.go @@ -0,0 +1,12 @@ +package main
+ +import ( + "github.com/hashicorp/terraform/builtin/providers/dyn" + "github.com/hashicorp/terraform/plugin" +) + +func main() { + plugin.Serve(&plugin.ServeOpts{ + ProviderFunc: dyn.Provider, + }) +} diff --git a/builtin/bins/provider-dyn/main_test.go b/builtin/bins/provider-dyn/main_test.go new file mode 100644 index 0000000000..06ab7d0f9a --- /dev/null +++ b/builtin/bins/provider-dyn/main_test.go @@ -0,0 +1 @@ +package main diff --git a/builtin/providers/aws/config.go b/builtin/providers/aws/config.go index 3e835c1063..d8a9ff862d 100644 --- a/builtin/providers/aws/config.go +++ b/builtin/providers/aws/config.go @@ -27,6 +27,7 @@ import ( "github.com/aws/aws-sdk-go/service/elasticache" elasticsearch "github.com/aws/aws-sdk-go/service/elasticsearchservice" "github.com/aws/aws-sdk-go/service/elb" + "github.com/aws/aws-sdk-go/service/firehose" "github.com/aws/aws-sdk-go/service/glacier" "github.com/aws/aws-sdk-go/service/iam" "github.com/aws/aws-sdk-go/service/kinesis" @@ -74,6 +75,7 @@ type AWSClient struct { rdsconn *rds.RDS iamconn *iam.IAM kinesisconn *kinesis.Kinesis + firehoseconn *firehose.Firehose elasticacheconn *elasticache.ElastiCache lambdaconn *lambda.Lambda opsworksconn *opsworks.OpsWorks @@ -168,6 +170,9 @@ func (c *Config) Client() (interface{}, error) { errs = append(errs, authErr) } + log.Println("[INFO] Initializing Kinesis Firehose Connection") + client.firehoseconn = firehose.New(sess) + log.Println("[INFO] Initializing AutoScaling connection") client.autoscalingconn = autoscaling.New(sess) diff --git a/builtin/providers/aws/conversions.go b/builtin/providers/aws/conversions.go index 1b69cee063..9b215db6c0 100644 --- a/builtin/providers/aws/conversions.go +++ b/builtin/providers/aws/conversions.go @@ -1,8 +1,9 @@ package aws import ( - "github.com/awslabs/aws-sdk-go/aws" "github.com/hashicorp/terraform/helper/schema" + + "github.com/aws/aws-sdk-go/aws" ) func makeAwsStringList(in []interface{}) []*string { diff --git a/builtin/providers/aws/provider.go b/builtin/providers/aws/provider.go index b5392429aa..666978eb38 100644 --- a/builtin/providers/aws/provider.go +++ b/builtin/providers/aws/provider.go @@ -5,11 +5,12 @@ import ( "sync" "time" - "github.com/aws/aws-sdk-go/aws/credentials" - "github.com/awslabs/aws-sdk-go/aws/credentials/ec2rolecreds" "github.com/hashicorp/terraform/helper/hashcode" "github.com/hashicorp/terraform/helper/schema" "github.com/hashicorp/terraform/terraform" + + "github.com/aws/aws-sdk-go/aws/credentials" + "github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds" ) // Provider returns a terraform.ResourceProvider. 
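The large `ResourcesMap` hunk that follows is almost entirely gofmt realignment: the long `aws_kinesis_firehose_delivery_stream` key widens the alignment column, so every entry is re-indented, and the only substantive change is that single new map entry. For readers unfamiliar with the pattern, here is a minimal compilable sketch of what each registered constructor returns; `resourceAwsExampleThing` and its lone attribute are hypothetical stand-ins, not part of this diff:

```go
package aws

import "github.com/hashicorp/terraform/helper/schema"

// resourceAwsExampleThing is a hypothetical stand-in for constructors such as
// resourceAwsKinesisFirehoseDeliveryStream: each ResourcesMap entry binds a
// resource type name to a function returning its *schema.Resource.
func resourceAwsExampleThing() *schema.Resource {
	return &schema.Resource{
		Schema: map[string]*schema.Schema{
			"name": &schema.Schema{
				Type:     schema.TypeString,
				Required: true,
			},
		},
	}
}
```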
@@ -163,107 +164,108 @@ func Provider() terraform.ResourceProvider { }, ResourcesMap: map[string]*schema.Resource{ - "aws_ami": resourceAwsAmi(), - "aws_ami_copy": resourceAwsAmiCopy(), - "aws_ami_from_instance": resourceAwsAmiFromInstance(), - "aws_app_cookie_stickiness_policy": resourceAwsAppCookieStickinessPolicy(), - "aws_autoscaling_group": resourceAwsAutoscalingGroup(), - "aws_autoscaling_notification": resourceAwsAutoscalingNotification(), - "aws_autoscaling_policy": resourceAwsAutoscalingPolicy(), - "aws_cloudformation_stack": resourceAwsCloudFormationStack(), - "aws_cloudtrail": resourceAwsCloudTrail(), - "aws_cloudwatch_log_group": resourceAwsCloudWatchLogGroup(), - "aws_autoscaling_lifecycle_hook": resourceAwsAutoscalingLifecycleHook(), - "aws_cloudwatch_metric_alarm": resourceAwsCloudWatchMetricAlarm(), - "aws_codedeploy_app": resourceAwsCodeDeployApp(), - "aws_codedeploy_deployment_group": resourceAwsCodeDeployDeploymentGroup(), - "aws_codecommit_repository": resourceAwsCodeCommitRepository(), - "aws_customer_gateway": resourceAwsCustomerGateway(), - "aws_db_instance": resourceAwsDbInstance(), - "aws_db_parameter_group": resourceAwsDbParameterGroup(), - "aws_db_security_group": resourceAwsDbSecurityGroup(), - "aws_db_subnet_group": resourceAwsDbSubnetGroup(), - "aws_directory_service_directory": resourceAwsDirectoryServiceDirectory(), - "aws_dynamodb_table": resourceAwsDynamoDbTable(), - "aws_ebs_volume": resourceAwsEbsVolume(), - "aws_ecs_cluster": resourceAwsEcsCluster(), - "aws_ecs_service": resourceAwsEcsService(), - "aws_ecs_task_definition": resourceAwsEcsTaskDefinition(), - "aws_efs_file_system": resourceAwsEfsFileSystem(), - "aws_efs_mount_target": resourceAwsEfsMountTarget(), - "aws_eip": resourceAwsEip(), - "aws_elasticache_cluster": resourceAwsElasticacheCluster(), - "aws_elasticache_parameter_group": resourceAwsElasticacheParameterGroup(), - "aws_elasticache_security_group": resourceAwsElasticacheSecurityGroup(), - "aws_elasticache_subnet_group": resourceAwsElasticacheSubnetGroup(), - "aws_elasticsearch_domain": resourceAwsElasticSearchDomain(), - "aws_elb": resourceAwsElb(), - "aws_flow_log": resourceAwsFlowLog(), - "aws_glacier_vault": resourceAwsGlacierVault(), - "aws_iam_access_key": resourceAwsIamAccessKey(), - "aws_iam_group_policy": resourceAwsIamGroupPolicy(), - "aws_iam_group": resourceAwsIamGroup(), - "aws_iam_group_membership": resourceAwsIamGroupMembership(), - "aws_iam_instance_profile": resourceAwsIamInstanceProfile(), - "aws_iam_policy": resourceAwsIamPolicy(), - "aws_iam_policy_attachment": resourceAwsIamPolicyAttachment(), - "aws_iam_role_policy": resourceAwsIamRolePolicy(), - "aws_iam_role": resourceAwsIamRole(), - "aws_iam_saml_provider": resourceAwsIamSamlProvider(), - "aws_iam_server_certificate": resourceAwsIAMServerCertificate(), - "aws_iam_user_policy": resourceAwsIamUserPolicy(), - "aws_iam_user": resourceAwsIamUser(), - "aws_instance": resourceAwsInstance(), - "aws_internet_gateway": resourceAwsInternetGateway(), - "aws_key_pair": resourceAwsKeyPair(), - "aws_kinesis_stream": resourceAwsKinesisStream(), - "aws_lambda_function": resourceAwsLambdaFunction(), - "aws_launch_configuration": resourceAwsLaunchConfiguration(), - "aws_lb_cookie_stickiness_policy": resourceAwsLBCookieStickinessPolicy(), - "aws_main_route_table_association": resourceAwsMainRouteTableAssociation(), - "aws_network_acl": resourceAwsNetworkAcl(), - "aws_network_interface": resourceAwsNetworkInterface(), - "aws_opsworks_stack": resourceAwsOpsworksStack(), - 
"aws_opsworks_java_app_layer": resourceAwsOpsworksJavaAppLayer(), - "aws_opsworks_haproxy_layer": resourceAwsOpsworksHaproxyLayer(), - "aws_opsworks_static_web_layer": resourceAwsOpsworksStaticWebLayer(), - "aws_opsworks_php_app_layer": resourceAwsOpsworksPhpAppLayer(), - "aws_opsworks_rails_app_layer": resourceAwsOpsworksRailsAppLayer(), - "aws_opsworks_nodejs_app_layer": resourceAwsOpsworksNodejsAppLayer(), - "aws_opsworks_memcached_layer": resourceAwsOpsworksMemcachedLayer(), - "aws_opsworks_mysql_layer": resourceAwsOpsworksMysqlLayer(), - "aws_opsworks_ganglia_layer": resourceAwsOpsworksGangliaLayer(), - "aws_opsworks_custom_layer": resourceAwsOpsworksCustomLayer(), - "aws_placement_group": resourceAwsPlacementGroup(), - "aws_proxy_protocol_policy": resourceAwsProxyProtocolPolicy(), - "aws_rds_cluster": resourceAwsRDSCluster(), - "aws_rds_cluster_instance": resourceAwsRDSClusterInstance(), - "aws_route53_delegation_set": resourceAwsRoute53DelegationSet(), - "aws_route53_record": resourceAwsRoute53Record(), - "aws_route53_zone_association": resourceAwsRoute53ZoneAssociation(), - "aws_route53_zone": resourceAwsRoute53Zone(), - "aws_route53_health_check": resourceAwsRoute53HealthCheck(), - "aws_route": resourceAwsRoute(), - "aws_route_table": resourceAwsRouteTable(), - "aws_route_table_association": resourceAwsRouteTableAssociation(), - "aws_s3_bucket": resourceAwsS3Bucket(), - "aws_s3_bucket_object": resourceAwsS3BucketObject(), - "aws_security_group": resourceAwsSecurityGroup(), - "aws_security_group_rule": resourceAwsSecurityGroupRule(), - "aws_spot_instance_request": resourceAwsSpotInstanceRequest(), - "aws_sqs_queue": resourceAwsSqsQueue(), - "aws_sns_topic": resourceAwsSnsTopic(), - "aws_sns_topic_subscription": resourceAwsSnsTopicSubscription(), - "aws_subnet": resourceAwsSubnet(), - "aws_volume_attachment": resourceAwsVolumeAttachment(), - "aws_vpc_dhcp_options_association": resourceAwsVpcDhcpOptionsAssociation(), - "aws_vpc_dhcp_options": resourceAwsVpcDhcpOptions(), - "aws_vpc_peering_connection": resourceAwsVpcPeeringConnection(), - "aws_vpc": resourceAwsVpc(), - "aws_vpc_endpoint": resourceAwsVpcEndpoint(), - "aws_vpn_connection": resourceAwsVpnConnection(), - "aws_vpn_connection_route": resourceAwsVpnConnectionRoute(), - "aws_vpn_gateway": resourceAwsVpnGateway(), + "aws_ami": resourceAwsAmi(), + "aws_ami_copy": resourceAwsAmiCopy(), + "aws_ami_from_instance": resourceAwsAmiFromInstance(), + "aws_app_cookie_stickiness_policy": resourceAwsAppCookieStickinessPolicy(), + "aws_autoscaling_group": resourceAwsAutoscalingGroup(), + "aws_autoscaling_notification": resourceAwsAutoscalingNotification(), + "aws_autoscaling_policy": resourceAwsAutoscalingPolicy(), + "aws_cloudformation_stack": resourceAwsCloudFormationStack(), + "aws_cloudtrail": resourceAwsCloudTrail(), + "aws_cloudwatch_log_group": resourceAwsCloudWatchLogGroup(), + "aws_autoscaling_lifecycle_hook": resourceAwsAutoscalingLifecycleHook(), + "aws_cloudwatch_metric_alarm": resourceAwsCloudWatchMetricAlarm(), + "aws_codedeploy_app": resourceAwsCodeDeployApp(), + "aws_codedeploy_deployment_group": resourceAwsCodeDeployDeploymentGroup(), + "aws_codecommit_repository": resourceAwsCodeCommitRepository(), + "aws_customer_gateway": resourceAwsCustomerGateway(), + "aws_db_instance": resourceAwsDbInstance(), + "aws_db_parameter_group": resourceAwsDbParameterGroup(), + "aws_db_security_group": resourceAwsDbSecurityGroup(), + "aws_db_subnet_group": resourceAwsDbSubnetGroup(), + "aws_directory_service_directory": 
resourceAwsDirectoryServiceDirectory(), + "aws_dynamodb_table": resourceAwsDynamoDbTable(), + "aws_ebs_volume": resourceAwsEbsVolume(), + "aws_ecs_cluster": resourceAwsEcsCluster(), + "aws_ecs_service": resourceAwsEcsService(), + "aws_ecs_task_definition": resourceAwsEcsTaskDefinition(), + "aws_efs_file_system": resourceAwsEfsFileSystem(), + "aws_efs_mount_target": resourceAwsEfsMountTarget(), + "aws_eip": resourceAwsEip(), + "aws_elasticache_cluster": resourceAwsElasticacheCluster(), + "aws_elasticache_parameter_group": resourceAwsElasticacheParameterGroup(), + "aws_elasticache_security_group": resourceAwsElasticacheSecurityGroup(), + "aws_elasticache_subnet_group": resourceAwsElasticacheSubnetGroup(), + "aws_elasticsearch_domain": resourceAwsElasticSearchDomain(), + "aws_elb": resourceAwsElb(), + "aws_flow_log": resourceAwsFlowLog(), + "aws_glacier_vault": resourceAwsGlacierVault(), + "aws_iam_access_key": resourceAwsIamAccessKey(), + "aws_iam_group_policy": resourceAwsIamGroupPolicy(), + "aws_iam_group": resourceAwsIamGroup(), + "aws_iam_group_membership": resourceAwsIamGroupMembership(), + "aws_iam_instance_profile": resourceAwsIamInstanceProfile(), + "aws_iam_policy": resourceAwsIamPolicy(), + "aws_iam_policy_attachment": resourceAwsIamPolicyAttachment(), + "aws_iam_role_policy": resourceAwsIamRolePolicy(), + "aws_iam_role": resourceAwsIamRole(), + "aws_iam_saml_provider": resourceAwsIamSamlProvider(), + "aws_iam_server_certificate": resourceAwsIAMServerCertificate(), + "aws_iam_user_policy": resourceAwsIamUserPolicy(), + "aws_iam_user": resourceAwsIamUser(), + "aws_instance": resourceAwsInstance(), + "aws_internet_gateway": resourceAwsInternetGateway(), + "aws_key_pair": resourceAwsKeyPair(), + "aws_kinesis_firehose_delivery_stream": resourceAwsKinesisFirehoseDeliveryStream(), + "aws_kinesis_stream": resourceAwsKinesisStream(), + "aws_lambda_function": resourceAwsLambdaFunction(), + "aws_launch_configuration": resourceAwsLaunchConfiguration(), + "aws_lb_cookie_stickiness_policy": resourceAwsLBCookieStickinessPolicy(), + "aws_main_route_table_association": resourceAwsMainRouteTableAssociation(), + "aws_network_acl": resourceAwsNetworkAcl(), + "aws_network_interface": resourceAwsNetworkInterface(), + "aws_opsworks_stack": resourceAwsOpsworksStack(), + "aws_opsworks_java_app_layer": resourceAwsOpsworksJavaAppLayer(), + "aws_opsworks_haproxy_layer": resourceAwsOpsworksHaproxyLayer(), + "aws_opsworks_static_web_layer": resourceAwsOpsworksStaticWebLayer(), + "aws_opsworks_php_app_layer": resourceAwsOpsworksPhpAppLayer(), + "aws_opsworks_rails_app_layer": resourceAwsOpsworksRailsAppLayer(), + "aws_opsworks_nodejs_app_layer": resourceAwsOpsworksNodejsAppLayer(), + "aws_opsworks_memcached_layer": resourceAwsOpsworksMemcachedLayer(), + "aws_opsworks_mysql_layer": resourceAwsOpsworksMysqlLayer(), + "aws_opsworks_ganglia_layer": resourceAwsOpsworksGangliaLayer(), + "aws_opsworks_custom_layer": resourceAwsOpsworksCustomLayer(), + "aws_placement_group": resourceAwsPlacementGroup(), + "aws_proxy_protocol_policy": resourceAwsProxyProtocolPolicy(), + "aws_rds_cluster": resourceAwsRDSCluster(), + "aws_rds_cluster_instance": resourceAwsRDSClusterInstance(), + "aws_route53_delegation_set": resourceAwsRoute53DelegationSet(), + "aws_route53_record": resourceAwsRoute53Record(), + "aws_route53_zone_association": resourceAwsRoute53ZoneAssociation(), + "aws_route53_zone": resourceAwsRoute53Zone(), + "aws_route53_health_check": resourceAwsRoute53HealthCheck(), + "aws_route": resourceAwsRoute(), + 
"aws_route_table": resourceAwsRouteTable(), + "aws_route_table_association": resourceAwsRouteTableAssociation(), + "aws_s3_bucket": resourceAwsS3Bucket(), + "aws_s3_bucket_object": resourceAwsS3BucketObject(), + "aws_security_group": resourceAwsSecurityGroup(), + "aws_security_group_rule": resourceAwsSecurityGroupRule(), + "aws_spot_instance_request": resourceAwsSpotInstanceRequest(), + "aws_sqs_queue": resourceAwsSqsQueue(), + "aws_sns_topic": resourceAwsSnsTopic(), + "aws_sns_topic_subscription": resourceAwsSnsTopicSubscription(), + "aws_subnet": resourceAwsSubnet(), + "aws_volume_attachment": resourceAwsVolumeAttachment(), + "aws_vpc_dhcp_options_association": resourceAwsVpcDhcpOptionsAssociation(), + "aws_vpc_dhcp_options": resourceAwsVpcDhcpOptions(), + "aws_vpc_peering_connection": resourceAwsVpcPeeringConnection(), + "aws_vpc": resourceAwsVpc(), + "aws_vpc_endpoint": resourceAwsVpcEndpoint(), + "aws_vpn_connection": resourceAwsVpnConnection(), + "aws_vpn_connection_route": resourceAwsVpnConnectionRoute(), + "aws_vpn_gateway": resourceAwsVpnGateway(), }, ConfigureFunc: providerConfigure, diff --git a/builtin/providers/aws/resource_aws_autoscaling_lifecycle_hook.go b/builtin/providers/aws/resource_aws_autoscaling_lifecycle_hook.go index faacadb7a2..5c3458acf4 100644 --- a/builtin/providers/aws/resource_aws_autoscaling_lifecycle_hook.go +++ b/builtin/providers/aws/resource_aws_autoscaling_lifecycle_hook.go @@ -3,9 +3,13 @@ package aws import ( "fmt" "log" + "strings" + "time" "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/autoscaling" + "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" ) @@ -55,14 +59,26 @@ func resourceAwsAutoscalingLifecycleHook() *schema.Resource { } func resourceAwsAutoscalingLifecycleHookPut(d *schema.ResourceData, meta interface{}) error { - autoscalingconn := meta.(*AWSClient).autoscalingconn - + conn := meta.(*AWSClient).autoscalingconn params := getAwsAutoscalingPutLifecycleHookInput(d) - log.Printf("[DEBUG] AutoScaling PutLifecyleHook: %#v", params) - _, err := autoscalingconn.PutLifecycleHook(¶ms) + log.Printf("[DEBUG] AutoScaling PutLifecyleHook: %s", params) + err := resource.Retry(5*time.Minute, func() error { + _, err := conn.PutLifecycleHook(¶ms) + + if err != nil { + if awsErr, ok := err.(awserr.Error); ok { + if strings.Contains(awsErr.Message(), "Unable to publish test message to notification target") { + return fmt.Errorf("[DEBUG] Retrying AWS AutoScaling Lifecycle Hook: %s", params) + } + } + return resource.RetryError{Err: fmt.Errorf("Error putting lifecycle hook: %s", err)} + } + return nil + }) + if err != nil { - return fmt.Errorf("Error putting lifecycle hook: %s", err) + return err } d.SetId(d.Get("name").(string)) diff --git a/builtin/providers/aws/resource_aws_codedeploy_app_test.go b/builtin/providers/aws/resource_aws_codedeploy_app_test.go index 9c016f1842..9610a01a74 100644 --- a/builtin/providers/aws/resource_aws_codedeploy_app_test.go +++ b/builtin/providers/aws/resource_aws_codedeploy_app_test.go @@ -23,7 +23,7 @@ func TestAccAWSCodeDeployApp_basic(t *testing.T) { ), }, resource.TestStep{ - Config: testAccAWSCodeDeployAppModifier, + Config: testAccAWSCodeDeployAppModified, Check: resource.ComposeTestCheckFunc( testAccCheckAWSCodeDeployAppExists("aws_codedeploy_app.foo"), ), @@ -72,7 +72,7 @@ resource "aws_codedeploy_app" "foo" { name = "foo" }` -var testAccAWSCodeDeployAppModifier = ` +var testAccAWSCodeDeployAppModified = ` 
resource "aws_codedeploy_app" "foo" { name = "bar" }` diff --git a/builtin/providers/aws/resource_aws_codedeploy_deployment_group_test.go b/builtin/providers/aws/resource_aws_codedeploy_deployment_group_test.go index 7608b1f585..3b873fe3ba 100644 --- a/builtin/providers/aws/resource_aws_codedeploy_deployment_group_test.go +++ b/builtin/providers/aws/resource_aws_codedeploy_deployment_group_test.go @@ -23,7 +23,7 @@ func TestAccAWSCodeDeployDeploymentGroup_basic(t *testing.T) { ), }, resource.TestStep{ - Config: testAccAWSCodeDeployDeploymentGroupModifier, + Config: testAccAWSCodeDeployDeploymentGroupModified, Check: resource.ComposeTestCheckFunc( testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo"), ), @@ -133,7 +133,7 @@ resource "aws_codedeploy_deployment_group" "foo" { } }` -var testAccAWSCodeDeployDeploymentGroupModifier = ` +var testAccAWSCodeDeployDeploymentGroupModified = ` resource "aws_codedeploy_app" "foo_app" { name = "foo_app" } diff --git a/builtin/providers/aws/resource_aws_db_instance.go b/builtin/providers/aws/resource_aws_db_instance.go index 37662b201f..d00dce597a 100644 --- a/builtin/providers/aws/resource_aws_db_instance.go +++ b/builtin/providers/aws/resource_aws_db_instance.go @@ -54,7 +54,8 @@ func resourceAwsDbInstance() *schema.Resource { "engine_version": &schema.Schema{ Type: schema.TypeString, - Required: true, + Optional: true, + Computed: true, }, "storage_encrypted": &schema.Schema{ @@ -245,8 +246,8 @@ func resourceAwsDbInstance() *schema.Resource { "auto_minor_version_upgrade": &schema.Schema{ Type: schema.TypeBool, - Computed: false, Optional: true, + Default: true, }, "allow_major_version_upgrade": &schema.Schema{ @@ -293,14 +294,11 @@ func resourceAwsDbInstanceCreate(d *schema.ResourceData, meta interface{}) error } } else if _, ok := d.GetOk("snapshot_identifier"); ok { opts := rds.RestoreDBInstanceFromDBSnapshotInput{ - DBInstanceClass: aws.String(d.Get("instance_class").(string)), - DBInstanceIdentifier: aws.String(d.Get("identifier").(string)), - DBSnapshotIdentifier: aws.String(d.Get("snapshot_identifier").(string)), - Tags: tags, - } - - if attr, ok := d.GetOk("auto_minor_version_upgrade"); ok { - opts.AutoMinorVersionUpgrade = aws.Bool(attr.(bool)) + DBInstanceClass: aws.String(d.Get("instance_class").(string)), + DBInstanceIdentifier: aws.String(d.Get("identifier").(string)), + DBSnapshotIdentifier: aws.String(d.Get("snapshot_identifier").(string)), + AutoMinorVersionUpgrade: aws.Bool(d.Get("auto_minor_version_upgrade").(bool)), + Tags: tags, } if attr, ok := d.GetOk("availability_zone"); ok { @@ -386,17 +384,17 @@ func resourceAwsDbInstanceCreate(d *schema.ResourceData, meta interface{}) error } } else { opts := rds.CreateDBInstanceInput{ - AllocatedStorage: aws.Int64(int64(d.Get("allocated_storage").(int))), - CopyTagsToSnapshot: aws.Bool(d.Get("copy_tags_to_snapshot").(bool)), - DBName: aws.String(d.Get("name").(string)), - DBInstanceClass: aws.String(d.Get("instance_class").(string)), - DBInstanceIdentifier: aws.String(d.Get("identifier").(string)), - MasterUsername: aws.String(d.Get("username").(string)), - MasterUserPassword: aws.String(d.Get("password").(string)), - Engine: aws.String(d.Get("engine").(string)), - EngineVersion: aws.String(d.Get("engine_version").(string)), - StorageEncrypted: aws.Bool(d.Get("storage_encrypted").(bool)), - Tags: tags, + AllocatedStorage: aws.Int64(int64(d.Get("allocated_storage").(int))), + DBName: aws.String(d.Get("name").(string)), + DBInstanceClass: 
aws.String(d.Get("instance_class").(string)), + DBInstanceIdentifier: aws.String(d.Get("identifier").(string)), + MasterUsername: aws.String(d.Get("username").(string)), + MasterUserPassword: aws.String(d.Get("password").(string)), + Engine: aws.String(d.Get("engine").(string)), + EngineVersion: aws.String(d.Get("engine_version").(string)), + StorageEncrypted: aws.Bool(d.Get("storage_encrypted").(bool)), + AutoMinorVersionUpgrade: aws.Bool(d.Get("auto_minor_version_upgrade").(bool)), + Tags: tags, } attr := d.Get("backup_retention_period") @@ -509,6 +507,7 @@ func resourceAwsDbInstanceRead(d *schema.ResourceData, meta interface{}) error { d.Set("engine_version", v.EngineVersion) d.Set("allocated_storage", v.AllocatedStorage) d.Set("copy_tags_to_snapshot", v.CopyTagsToSnapshot) + d.Set("auto_minor_version_upgrade", v.AutoMinorVersionUpgrade) d.Set("storage_type", v.StorageType) d.Set("instance_class", v.DBInstanceClass) d.Set("availability_zone", v.AvailabilityZone) @@ -711,6 +710,11 @@ func resourceAwsDbInstanceUpdate(d *schema.ResourceData, meta interface{}) error req.StorageType = aws.String(d.Get("storage_type").(string)) requestUpdate = true } + if d.HasChange("auto_minor_version_upgrade") { + d.SetPartial("auto_minor_version_upgrade") + req.AutoMinorVersionUpgrade = aws.Bool(d.Get("auto_minor_version_upgrade").(bool)) + requestUpdate = true + } if d.HasChange("vpc_security_group_ids") { if attr := d.Get("vpc_security_group_ids").(*schema.Set); attr.Len() > 0 { diff --git a/builtin/providers/aws/resource_aws_db_instance_test.go b/builtin/providers/aws/resource_aws_db_instance_test.go index e63be73a82..a2c2f69cad 100644 --- a/builtin/providers/aws/resource_aws_db_instance_test.go +++ b/builtin/providers/aws/resource_aws_db_instance_test.go @@ -31,8 +31,6 @@ func TestAccAWSDBInstance_basic(t *testing.T) { "aws_db_instance.bar", "allocated_storage", "10"), resource.TestCheckResourceAttr( "aws_db_instance.bar", "engine", "mysql"), - resource.TestCheckResourceAttr( - "aws_db_instance.bar", "engine_version", "5.6.21"), resource.TestCheckResourceAttr( "aws_db_instance.bar", "license_model", "general-public-license"), resource.TestCheckResourceAttr( @@ -111,7 +109,7 @@ func testAccCheckAWSDBInstanceAttributes(v *rds.DBInstance) resource.TestCheckFu return fmt.Errorf("bad engine: %#v", *v.Engine) } - if *v.EngineVersion != "5.6.21" { + if *v.EngineVersion == "" { return fmt.Errorf("bad engine_version: %#v", *v.EngineVersion) } diff --git a/builtin/providers/aws/resource_aws_ecs_cluster.go b/builtin/providers/aws/resource_aws_ecs_cluster.go index 7f5d0ea1e4..f9e3a4abb9 100644 --- a/builtin/providers/aws/resource_aws_ecs_cluster.go +++ b/builtin/providers/aws/resource_aws_ecs_cluster.go @@ -59,9 +59,16 @@ func resourceAwsEcsClusterRead(d *schema.ResourceData, meta interface{}) error { } log.Printf("[DEBUG] Received ECS clusters: %s", out.Clusters) - d.SetId(*out.Clusters[0].ClusterArn) - d.Set("name", *out.Clusters[0].ClusterName) + for _, c := range out.Clusters { + if *c.ClusterName == clusterName { + d.SetId(*c.ClusterArn) + d.Set("name", c.ClusterName) + return nil + } + } + log.Printf("[ERR] No matching ECS Cluster found for (%s)", d.Id()) + d.SetId("") return nil } diff --git a/builtin/providers/aws/resource_aws_ecs_service.go b/builtin/providers/aws/resource_aws_ecs_service.go index ab8562acb9..805d968407 100644 --- a/builtin/providers/aws/resource_aws_ecs_service.go +++ b/builtin/providers/aws/resource_aws_ecs_service.go @@ -156,10 +156,20 @@ func resourceAwsEcsServiceRead(d 
*schema.ResourceData, meta interface{}) error { } if len(out.Services) < 1 { + log.Printf("[DEBUG] Removing ECS service %s (%s) because it's gone", d.Get("name").(string), d.Id()) + d.SetId("") return nil } service := out.Services[0] + + // Status==INACTIVE means deleted service + if *service.Status == "INACTIVE" { + log.Printf("[DEBUG] Removing ECS service %q because it's INACTIVE", *service.ServiceArn) + d.SetId("") + return nil + } + log.Printf("[DEBUG] Received ECS service %s", service) d.SetId(*service.ServiceArn) @@ -239,6 +249,12 @@ func resourceAwsEcsServiceDelete(d *schema.ResourceData, meta interface{}) error if err != nil { return err } + + if len(resp.Services) == 0 { + log.Printf("[DEBUG] ECS Service %q is already gone", d.Id()) + return nil + } + log.Printf("[DEBUG] ECS service %s is currently %s", d.Id(), *resp.Services[0].Status) if *resp.Services[0].Status == "INACTIVE" { diff --git a/builtin/providers/aws/resource_aws_ecs_service_test.go b/builtin/providers/aws/resource_aws_ecs_service_test.go index 7f88f1536d..a2f71ad2f8 100644 --- a/builtin/providers/aws/resource_aws_ecs_service_test.go +++ b/builtin/providers/aws/resource_aws_ecs_service_test.go @@ -319,7 +319,7 @@ resource "aws_iam_role" "ecs_service" { name = "EcsService" assume_role_policy = < 35 { + es = append(es, fmt.Errorf( + "snapshot retention limit cannot be more than 35 days")) + } + return + }, + }, + "tags": tagsSchema(), // apply_immediately is used to determine when the update modifications @@ -187,6 +205,14 @@ func resourceAwsElasticacheClusterCreate(d *schema.ResourceData, meta interface{ req.CacheParameterGroupName = aws.String(v.(string)) } + if v, ok := d.GetOk("snapshot_retention_limit"); ok { + req.SnapshotRetentionLimit = aws.Int64(int64(v.(int))) + } + + if v, ok := d.GetOk("snapshot_window"); ok { + req.SnapshotWindow = aws.String(v.(string)) + } + if v, ok := d.GetOk("maintenance_window"); ok { req.PreferredMaintenanceWindow = aws.String(v.(string)) } @@ -267,6 +293,8 @@ func resourceAwsElasticacheClusterRead(d *schema.ResourceData, meta interface{}) d.Set("security_group_ids", c.SecurityGroups) d.Set("parameter_group_name", c.CacheParameterGroup) d.Set("maintenance_window", c.PreferredMaintenanceWindow) + d.Set("snapshot_window", c.SnapshotWindow) + d.Set("snapshot_retention_limit", c.SnapshotRetentionLimit) if c.NotificationConfiguration != nil { if *c.NotificationConfiguration.TopicStatus == "active" { d.Set("notification_topic_arn", c.NotificationConfiguration.TopicArn) @@ -350,6 +378,16 @@ func resourceAwsElasticacheClusterUpdate(d *schema.ResourceData, meta interface{ requestUpdate = true } + if d.HasChange("snapshot_window") { + req.SnapshotWindow = aws.String(d.Get("snapshot_window").(string)) + requestUpdate = true + } + + if d.HasChange("snapshot_retention_limit") { + req.SnapshotRetentionLimit = aws.Int64(int64(d.Get("snapshot_retention_limit").(int))) + requestUpdate = true + } + if d.HasChange("num_cache_nodes") { req.NumCacheNodes = aws.Int64(int64(d.Get("num_cache_nodes").(int))) requestUpdate = true diff --git a/builtin/providers/aws/resource_aws_elasticache_cluster_test.go b/builtin/providers/aws/resource_aws_elasticache_cluster_test.go index b930600288..a17c5d9b1e 100644 --- a/builtin/providers/aws/resource_aws_elasticache_cluster_test.go +++ b/builtin/providers/aws/resource_aws_elasticache_cluster_test.go @@ -33,6 +33,45 @@ func TestAccAWSElasticacheCluster_basic(t *testing.T) { }) } +func TestAccAWSElasticacheCluster_snapshotsWithUpdates(t *testing.T) { + var ec 
elasticache.CacheCluster + + ri := genRandInt() + preConfig := fmt.Sprintf(testAccAWSElasticacheClusterConfig_snapshots, ri, ri, ri) + postConfig := fmt.Sprintf(testAccAWSElasticacheClusterConfig_snapshotsUpdated, ri, ri, ri) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSElasticacheClusterDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: preConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSElasticacheSecurityGroupExists("aws_elasticache_security_group.bar"), + testAccCheckAWSElasticacheClusterExists("aws_elasticache_cluster.bar", &ec), + resource.TestCheckResourceAttr( + "aws_elasticache_cluster.bar", "snapshot_window", "05:00-09:00"), + resource.TestCheckResourceAttr( + "aws_elasticache_cluster.bar", "snapshot_retention_limit", "3"), + ), + }, + + resource.TestStep{ + Config: postConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSElasticacheSecurityGroupExists("aws_elasticache_security_group.bar"), + testAccCheckAWSElasticacheClusterExists("aws_elasticache_cluster.bar", &ec), + resource.TestCheckResourceAttr( + "aws_elasticache_cluster.bar", "snapshot_window", "07:00-09:00"), + resource.TestCheckResourceAttr( + "aws_elasticache_cluster.bar", "snapshot_retention_limit", "7"), + ), + }, + }, + }) +} + func TestAccAWSElasticacheCluster_vpc(t *testing.T) { var csg elasticache.CacheSubnetGroup var ec elasticache.CacheCluster @@ -152,6 +191,75 @@ resource "aws_elasticache_cluster" "bar" { } `, genRandInt(), genRandInt(), genRandInt()) +var testAccAWSElasticacheClusterConfig_snapshots = ` +provider "aws" { + region = "us-east-1" +} +resource "aws_security_group" "bar" { + name = "tf-test-security-group-%03d" + description = "tf-test-security-group-descr" + ingress { + from_port = -1 + to_port = -1 + protocol = "icmp" + cidr_blocks = ["0.0.0.0/0"] + } +} + +resource "aws_elasticache_security_group" "bar" { + name = "tf-test-security-group-%03d" + description = "tf-test-security-group-descr" + security_group_names = ["${aws_security_group.bar.name}"] +} + +resource "aws_elasticache_cluster" "bar" { + cluster_id = "tf-test-%03d" + engine = "redis" + node_type = "cache.m1.small" + num_cache_nodes = 1 + port = 6379 + parameter_group_name = "default.redis2.8" + security_group_names = ["${aws_elasticache_security_group.bar.name}"] + snapshot_window = "05:00-09:00" + snapshot_retention_limit = 3 +} +` + +var testAccAWSElasticacheClusterConfig_snapshotsUpdated = ` +provider "aws" { + region = "us-east-1" +} +resource "aws_security_group" "bar" { + name = "tf-test-security-group-%03d" + description = "tf-test-security-group-descr" + ingress { + from_port = -1 + to_port = -1 + protocol = "icmp" + cidr_blocks = ["0.0.0.0/0"] + } +} + +resource "aws_elasticache_security_group" "bar" { + name = "tf-test-security-group-%03d" + description = "tf-test-security-group-descr" + security_group_names = ["${aws_security_group.bar.name}"] +} + +resource "aws_elasticache_cluster" "bar" { + cluster_id = "tf-test-%03d" + engine = "redis" + node_type = "cache.m1.small" + num_cache_nodes = 1 + port = 6379 + parameter_group_name = "default.redis2.8" + security_group_names = ["${aws_elasticache_security_group.bar.name}"] + snapshot_window = "07:00-09:00" + snapshot_retention_limit = 7 + apply_immediately = true +} +` + var testAccAWSElasticacheClusterInVPCConfig = fmt.Sprintf(` resource "aws_vpc" "foo" { cidr_block = "192.168.0.0/16" diff --git 
a/builtin/providers/aws/resource_aws_elb.go b/builtin/providers/aws/resource_aws_elb.go index 9955c7cf0a..5ff3b3b28a 100644 --- a/builtin/providers/aws/resource_aws_elb.go +++ b/builtin/providers/aws/resource_aws_elb.go @@ -9,6 +9,7 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/service/ec2" "github.com/aws/aws-sdk-go/service/elb" "github.com/hashicorp/terraform/helper/hashcode" "github.com/hashicorp/terraform/helper/resource" @@ -74,6 +75,11 @@ func resourceAwsElb() *schema.Resource { Computed: true, }, + "source_security_group_id": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + "subnets": &schema.Schema{ Type: schema.TypeSet, Elem: &schema.Schema{Type: schema.TypeString}, @@ -101,6 +107,29 @@ func resourceAwsElb() *schema.Resource { Default: 300, }, + "access_logs": &schema.Schema{ + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "interval": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + Default: 60, + }, + "bucket": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + "bucket_prefix": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + }, + }, + Set: resourceAwsElbAccessLogsHash, + }, + "listener": &schema.Schema{ Type: schema.TypeSet, Required: true, @@ -300,11 +329,28 @@ func resourceAwsElbRead(d *schema.ResourceData, meta interface{}) error { d.Set("security_groups", lb.SecurityGroups) if lb.SourceSecurityGroup != nil { d.Set("source_security_group", lb.SourceSecurityGroup.GroupName) + + // Manually look up the ELB Security Group ID, since it's not provided + var elbVpc string + if lb.VPCId != nil { + elbVpc = *lb.VPCId + } + sgId, err := sourceSGIdByName(meta, *lb.SourceSecurityGroup.GroupName, elbVpc) + if err != nil { + return fmt.Errorf("[WARN] Error looking up ELB Security Group ID: %s", err) + } else { + d.Set("source_security_group_id", sgId) + } } d.Set("subnets", lb.Subnets) d.Set("idle_timeout", lbAttrs.ConnectionSettings.IdleTimeout) d.Set("connection_draining", lbAttrs.ConnectionDraining.Enabled) d.Set("connection_draining_timeout", lbAttrs.ConnectionDraining.Timeout) + if lbAttrs.AccessLog != nil { + if err := d.Set("access_logs", flattenAccessLog(lbAttrs.AccessLog)); err != nil { + return err + } + } resp, err := elbconn.DescribeTags(&elb.DescribeTagsInput{ LoadBalancerNames: []*string{lb.LoadBalancerName}, @@ -405,7 +451,7 @@ func resourceAwsElbUpdate(d *schema.ResourceData, meta interface{}) error { d.SetPartial("instances") } - if d.HasChange("cross_zone_load_balancing") || d.HasChange("idle_timeout") { + if d.HasChange("cross_zone_load_balancing") || d.HasChange("idle_timeout") || d.HasChange("access_logs") { attrs := elb.ModifyLoadBalancerAttributesInput{ LoadBalancerName: aws.String(d.Get("name").(string)), LoadBalancerAttributes: &elb.LoadBalancerAttributes{ @@ -418,6 +464,30 @@ func resourceAwsElbUpdate(d *schema.ResourceData, meta interface{}) error { }, } + logs := d.Get("access_logs").(*schema.Set).List() + if len(logs) > 1 { + return fmt.Errorf("Only one access logs config per ELB is supported") + } else if len(logs) == 1 { + log := logs[0].(map[string]interface{}) + accessLog := &elb.AccessLog{ + Enabled: aws.Bool(true), + EmitInterval: aws.Int64(int64(log["interval"].(int))), + S3BucketName: aws.String(log["bucket"].(string)), + } + + if log["bucket_prefix"] != "" { + accessLog.S3BucketPrefix = aws.String(log["bucket_prefix"].(string)) + } + + 
attrs.LoadBalancerAttributes.AccessLog = accessLog + } else if len(logs) == 0 { + // disable access logs + attrs.LoadBalancerAttributes.AccessLog = &elb.AccessLog{ + Enabled: aws.Bool(false), + } + } + + log.Printf("[DEBUG] ELB Modify Load Balancer Attributes Request: %#v", attrs) _, err := elbconn.ModifyLoadBalancerAttributes(&attrs) if err != nil { return fmt.Errorf("Failure configuring ELB attributes: %s", err) @@ -550,6 +620,19 @@ func resourceAwsElbHealthCheckHash(v interface{}) int { return hashcode.String(buf.String()) } +func resourceAwsElbAccessLogsHash(v interface{}) int { + var buf bytes.Buffer + m := v.(map[string]interface{}) + buf.WriteString(fmt.Sprintf("%d-", m["interval"].(int))) + buf.WriteString(fmt.Sprintf("%s-", + strings.ToLower(m["bucket"].(string)))) + if v, ok := m["bucket_prefix"]; ok { + buf.WriteString(fmt.Sprintf("%s-", strings.ToLower(v.(string)))) + } + + return hashcode.String(buf.String()) +} + func resourceAwsElbListenerHash(v interface{}) int { var buf bytes.Buffer m := v.(map[string]interface{}) @@ -594,3 +677,52 @@ func validateElbName(v interface{}, k string) (ws []string, errors []error) { return } + +func sourceSGIdByName(meta interface{}, sg, vpcId string) (string, error) { + conn := meta.(*AWSClient).ec2conn + var filters []*ec2.Filter + var sgFilterName, sgFilterVPCID *ec2.Filter + sgFilterName = &ec2.Filter{ + Name: aws.String("group-name"), + Values: []*string{aws.String(sg)}, + } + + if vpcId != "" { + sgFilterVPCID = &ec2.Filter{ + Name: aws.String("vpc-id"), + Values: []*string{aws.String(vpcId)}, + } + } + + filters = append(filters, sgFilterName) + + if sgFilterVPCID != nil { + filters = append(filters, sgFilterVPCID) + } + + req := &ec2.DescribeSecurityGroupsInput{ + Filters: filters, + } + resp, err := conn.DescribeSecurityGroups(req) + if err != nil { + if ec2err, ok := err.(awserr.Error); ok { + if ec2err.Code() == "InvalidSecurityGroupID.NotFound" || + ec2err.Code() == "InvalidGroup.NotFound" { + resp = nil + err = nil + } + } + + if err != nil { + log.Printf("Error on ELB SG look up: %s", err) + return "", err + } + } + + if resp == nil || len(resp.SecurityGroups) == 0 { + return "", fmt.Errorf("No security groups found for name %s and vpc id %s", sg, vpcId) + } + + group := resp.SecurityGroups[0] + return *group.GroupId, nil +} diff --git a/builtin/providers/aws/resource_aws_elb_test.go b/builtin/providers/aws/resource_aws_elb_test.go index dadf4aba3c..6dad03e568 100644 --- a/builtin/providers/aws/resource_aws_elb_test.go +++ b/builtin/providers/aws/resource_aws_elb_test.go @@ -75,6 +75,52 @@ func TestAccAWSELB_fullCharacterRange(t *testing.T) { }) } +func TestAccAWSELB_AccessLogs(t *testing.T) { + var conf elb.LoadBalancerDescription + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSELBDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAWSELBAccessLogs, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSELBExists("aws_elb.foo", &conf), + resource.TestCheckResourceAttr( + "aws_elb.foo", "name", "FoobarTerraform-test123"), + ), + }, + + resource.TestStep{ + Config: testAccAWSELBAccessLogsOn, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSELBExists("aws_elb.foo", &conf), + resource.TestCheckResourceAttr( + "aws_elb.foo", "name", "FoobarTerraform-test123"), + resource.TestCheckResourceAttr( + "aws_elb.foo", "access_logs.#", "1"), + resource.TestCheckResourceAttr( + "aws_elb.foo", 
"access_logs.1713209538.bucket", "terraform-access-logs-bucket"), + resource.TestCheckResourceAttr( + "aws_elb.foo", "access_logs.1713209538.interval", "5"), + ), + }, + + resource.TestStep{ + Config: testAccAWSELBAccessLogs, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSELBExists("aws_elb.foo", &conf), + resource.TestCheckResourceAttr( + "aws_elb.foo", "name", "FoobarTerraform-test123"), + resource.TestCheckResourceAttr( + "aws_elb.foo", "access_logs.#", "0"), + ), + }, + }, + }) +} + func TestAccAWSELB_generatedName(t *testing.T) { var conf elb.LoadBalancerDescription generatedNameRegexp := regexp.MustCompile("^tf-lb-") @@ -611,6 +657,15 @@ func testAccCheckAWSELBExists(n string, res *elb.LoadBalancerDescription) resour *res = *describe.LoadBalancerDescriptions[0] + // Confirm source_security_group_id for ELBs in a VPC + // See https://github.com/hashicorp/terraform/pull/3780 + if res.VPCId != nil { + sgid := rs.Primary.Attributes["source_security_group_id"] + if sgid == "" { + return fmt.Errorf("Expected to find source_security_group_id for ELB, but was empty") + } + } + return nil } } @@ -650,6 +705,64 @@ resource "aws_elb" "foo" { } ` +const testAccAWSELBAccessLogs = ` +resource "aws_elb" "foo" { + name = "FoobarTerraform-test123" + availability_zones = ["us-west-2a", "us-west-2b", "us-west-2c"] + + listener { + instance_port = 8000 + instance_protocol = "http" + lb_port = 80 + lb_protocol = "http" + } +} +` +const testAccAWSELBAccessLogsOn = ` +# an S3 bucket configured for Access logs +# The 797873946194 is the AWS ID for us-west-2, so this test +# must be ran in us-west-2 +resource "aws_s3_bucket" "acceslogs_bucket" { + bucket = "terraform-access-logs-bucket" + acl = "private" + force_destroy = true + policy = < 0 { + destination := s.Destinations[0] + d.Set("destination_id", *destination.DestinationId) + } + + return nil +} + +func resourceAwsKinesisFirehoseDeliveryStreamDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).firehoseconn + + sn := d.Get("name").(string) + _, err := conn.DeleteDeliveryStream(&firehose.DeleteDeliveryStreamInput{ + DeliveryStreamName: aws.String(sn), + }) + + if err != nil { + return err + } + + stateConf := &resource.StateChangeConf{ + Pending: []string{"DELETING"}, + Target: "DESTROYED", + Refresh: firehoseStreamStateRefreshFunc(conn, sn), + Timeout: 5 * time.Minute, + Delay: 10 * time.Second, + MinTimeout: 3 * time.Second, + } + + _, err = stateConf.WaitForState() + if err != nil { + return fmt.Errorf( + "Error waiting for Delivery Stream (%s) to be destroyed: %s", + sn, err) + } + + d.SetId("") + return nil +} + +func firehoseStreamStateRefreshFunc(conn *firehose.Firehose, sn string) resource.StateRefreshFunc { + return func() (interface{}, string, error) { + describeOpts := &firehose.DescribeDeliveryStreamInput{ + DeliveryStreamName: aws.String(sn), + } + resp, err := conn.DescribeDeliveryStream(describeOpts) + if err != nil { + if awsErr, ok := err.(awserr.Error); ok { + if awsErr.Code() == "ResourceNotFoundException" { + return 42, "DESTROYED", nil + } + return nil, awsErr.Code(), err + } + return nil, "failed", err + } + + return resp.DeliveryStreamDescription, *resp.DeliveryStreamDescription.DeliveryStreamStatus, nil + } +} diff --git a/builtin/providers/aws/resource_aws_kinesis_firehose_delivery_stream_test.go b/builtin/providers/aws/resource_aws_kinesis_firehose_delivery_stream_test.go new file mode 100644 index 0000000000..611e196ce5 --- /dev/null +++ 
b/builtin/providers/aws/resource_aws_kinesis_firehose_delivery_stream_test.go @@ -0,0 +1,189 @@ +package aws + +import ( + "fmt" + "log" + "math/rand" + "strings" + "testing" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/firehose" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSKinesisFirehoseDeliveryStream_basic(t *testing.T) { + var stream firehose.DeliveryStreamDescription + + ri := rand.New(rand.NewSource(time.Now().UnixNano())).Int() + config := fmt.Sprintf(testAccKinesisFirehoseDeliveryStreamConfig_basic, ri, ri) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckKinesisFirehoseDeliveryStreamDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: config, + Check: resource.ComposeTestCheckFunc( + testAccCheckKinesisFirehoseDeliveryStreamExists("aws_kinesis_firehose_delivery_stream.test_stream", &stream), + testAccCheckAWSKinesisFirehoseDeliveryStreamAttributes(&stream), + ), + }, + }, + }) +} + +func TestAccAWSKinesisFirehoseDeliveryStream_s3ConfigUpdates(t *testing.T) { + var stream firehose.DeliveryStreamDescription + + ri := rand.New(rand.NewSource(time.Now().UnixNano())).Int() + preconfig := fmt.Sprintf(testAccKinesisFirehoseDeliveryStreamConfig_s3, ri, ri) + postConfig := fmt.Sprintf(testAccKinesisFirehoseDeliveryStreamConfig_s3Updates, ri, ri) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckKinesisFirehoseDeliveryStreamDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: preconfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckKinesisFirehoseDeliveryStreamExists("aws_kinesis_firehose_delivery_stream.test_stream", &stream), + testAccCheckAWSKinesisFirehoseDeliveryStreamAttributes(&stream), + resource.TestCheckResourceAttr( + "aws_kinesis_firehose_delivery_stream.test_stream", "s3_buffer_size", "5"), + resource.TestCheckResourceAttr( + "aws_kinesis_firehose_delivery_stream.test_stream", "s3_buffer_interval", "300"), + resource.TestCheckResourceAttr( + "aws_kinesis_firehose_delivery_stream.test_stream", "s3_data_compression", "UNCOMPRESSED"), + ), + }, + + resource.TestStep{ + Config: postConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckKinesisFirehoseDeliveryStreamExists("aws_kinesis_firehose_delivery_stream.test_stream", &stream), + testAccCheckAWSKinesisFirehoseDeliveryStreamAttributes(&stream), + resource.TestCheckResourceAttr( + "aws_kinesis_firehose_delivery_stream.test_stream", "s3_buffer_size", "10"), + resource.TestCheckResourceAttr( + "aws_kinesis_firehose_delivery_stream.test_stream", "s3_buffer_interval", "400"), + resource.TestCheckResourceAttr( + "aws_kinesis_firehose_delivery_stream.test_stream", "s3_data_compression", "GZIP"), + ), + }, + }, + }) +} + +func testAccCheckKinesisFirehoseDeliveryStreamExists(n string, stream *firehose.DeliveryStreamDescription) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + log.Printf("State: %#v", s.RootModule().Resources) + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No Kinesis Firehose ID is set") + } + + conn := testAccProvider.Meta().(*AWSClient).firehoseconn + describeOpts := &firehose.DescribeDeliveryStreamInput{ + DeliveryStreamName: 
aws.String(rs.Primary.Attributes["name"]), + } + resp, err := conn.DescribeDeliveryStream(describeOpts) + if err != nil { + return err + } + + *stream = *resp.DeliveryStreamDescription + + return nil + } +} + +func testAccCheckAWSKinesisFirehoseDeliveryStreamAttributes(stream *firehose.DeliveryStreamDescription) resource.TestCheckFunc { + return func(s *terraform.State) error { + if !strings.HasPrefix(*stream.DeliveryStreamName, "terraform-kinesis-firehose") { + return fmt.Errorf("Bad Stream name: %s", *stream.DeliveryStreamName) + } + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_kinesis_firehose_delivery_stream" { + continue + } + if *stream.DeliveryStreamARN != rs.Primary.Attributes["arn"] { + return fmt.Errorf("Bad Delivery Stream ARN\n\t expected: %s\n\tgot: %s\n", rs.Primary.Attributes["arn"], *stream.DeliveryStreamARN) + } + } + return nil + } +} + +func testAccCheckKinesisFirehoseDeliveryStreamDestroy(s *terraform.State) error { + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_kinesis_firehose_delivery_stream" { + continue + } + conn := testAccProvider.Meta().(*AWSClient).firehoseconn + describeOpts := &firehose.DescribeDeliveryStreamInput{ + DeliveryStreamName: aws.String(rs.Primary.Attributes["name"]), + } + resp, err := conn.DescribeDeliveryStream(describeOpts) + if err == nil { + if resp.DeliveryStreamDescription != nil && *resp.DeliveryStreamDescription.DeliveryStreamStatus != "DELETING" { + return fmt.Errorf("Error: Delivery Stream still exists") + } + } + + return nil + + } + + return nil +} + +var testAccKinesisFirehoseDeliveryStreamConfig_basic = ` +resource "aws_s3_bucket" "bucket" { + bucket = "tf-test-bucket-%d" + acl = "private" +} + +resource "aws_kinesis_firehose_delivery_stream" "test_stream" { + name = "terraform-kinesis-firehose-basictest-%d" + destination = "s3" + role_arn = "arn:aws:iam::946579370547:role/firehose_delivery_role" + s3_bucket_arn = "${aws_s3_bucket.bucket.arn}" +}` + +var testAccKinesisFirehoseDeliveryStreamConfig_s3 = ` +resource "aws_s3_bucket" "bucket" { + bucket = "tf-test-bucket-%d" + acl = "private" +} + +resource "aws_kinesis_firehose_delivery_stream" "test_stream" { + name = "terraform-kinesis-firehose-s3test-%d" + destination = "s3" + role_arn = "arn:aws:iam::946579370547:role/firehose_delivery_role" + s3_bucket_arn = "${aws_s3_bucket.bucket.arn}" +}` + +var testAccKinesisFirehoseDeliveryStreamConfig_s3Updates = ` +resource "aws_s3_bucket" "bucket" { + bucket = "tf-test-bucket-01-%d" + acl = "private" +} + +resource "aws_kinesis_firehose_delivery_stream" "test_stream" { + name = "terraform-kinesis-firehose-s3test-%d" + destination = "s3" + role_arn = "arn:aws:iam::946579370547:role/firehose_delivery_role" + s3_bucket_arn = "${aws_s3_bucket.bucket.arn}" + s3_buffer_size = 10 + s3_buffer_interval = 400 + s3_data_compression = "GZIP" +}` diff --git a/builtin/providers/aws/resource_aws_lambda_function.go b/builtin/providers/aws/resource_aws_lambda_function.go index 4ce8981744..324016455e 100644 --- a/builtin/providers/aws/resource_aws_lambda_function.go +++ b/builtin/providers/aws/resource_aws_lambda_function.go @@ -13,6 +13,8 @@ import ( "github.com/aws/aws-sdk-go/service/lambda" "github.com/mitchellh/go-homedir" + "errors" + "github.com/hashicorp/terraform/helper/schema" ) @@ -25,13 +27,28 @@ func resourceAwsLambdaFunction() *schema.Resource { Schema: map[string]*schema.Schema{ "filename": &schema.Schema{ - Type: schema.TypeString, - Required: true, + Type: schema.TypeString, + Optional: true, + 
ConflictsWith: []string{"s3_bucket", "s3_key", "s3_object_version"}, + }, + "s3_bucket": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ConflictsWith: []string{"filename"}, + }, + "s3_key": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ConflictsWith: []string{"filename"}, + }, + "s3_object_version": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ConflictsWith: []string{"filename"}, }, "description": &schema.Schema{ Type: schema.TypeString, Optional: true, - ForceNew: true, // TODO make this editable }, "function_name": &schema.Schema{ Type: schema.TypeString, @@ -93,22 +110,36 @@ func resourceAwsLambdaFunctionCreate(d *schema.ResourceData, meta interface{}) e log.Printf("[DEBUG] Creating Lambda Function %s with role %s", functionName, iamRole) - filename, err := homedir.Expand(d.Get("filename").(string)) - if err != nil { - return err + var functionCode *lambda.FunctionCode + if v, ok := d.GetOk("filename"); ok { + filename, err := homedir.Expand(v.(string)) + if err != nil { + return err + } + zipfile, err := ioutil.ReadFile(filename) + if err != nil { + return err + } + d.Set("source_code_hash", sha256.Sum256(zipfile)) + functionCode = &lambda.FunctionCode{ + ZipFile: zipfile, + } + } else { + s3Bucket, bucketOk := d.GetOk("s3_bucket") + s3Key, keyOk := d.GetOk("s3_key") + s3ObjectVersion, versionOk := d.GetOk("s3_object_version") + if !bucketOk || !keyOk || !versionOk { + return errors.New("s3_bucket, s3_key and s3_object_version must all be set while using S3 code source") + } + functionCode = &lambda.FunctionCode{ + S3Bucket: aws.String(s3Bucket.(string)), + S3Key: aws.String(s3Key.(string)), + S3ObjectVersion: aws.String(s3ObjectVersion.(string)), + } } - zipfile, err := ioutil.ReadFile(filename) - if err != nil { - return err - } - d.Set("source_code_hash", sha256.Sum256(zipfile)) - - log.Printf("[DEBUG] ") params := &lambda.CreateFunctionInput{ - Code: &lambda.FunctionCode{ - ZipFile: zipfile, - }, + Code: functionCode, Description: aws.String(d.Get("description").(string)), FunctionName: aws.String(functionName), Handler: aws.String(d.Get("handler").(string)), @@ -118,6 +149,7 @@ func resourceAwsLambdaFunctionCreate(d *schema.ResourceData, meta interface{}) e Timeout: aws.Int64(int64(d.Get("timeout").(int))), } + var err error for i := 0; i < 5; i++ { _, err = conn.CreateFunction(params) if awsErr, ok := err.(awserr.Error); ok { diff --git a/builtin/providers/aws/resource_aws_launch_configuration.go b/builtin/providers/aws/resource_aws_launch_configuration.go index 9b464e5d2a..1cc010634e 100644 --- a/builtin/providers/aws/resource_aws_launch_configuration.go +++ b/builtin/providers/aws/resource_aws_launch_configuration.go @@ -26,10 +26,11 @@ func resourceAwsLaunchConfiguration() *schema.Resource { Schema: map[string]*schema.Schema{ "name": &schema.Schema{ - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + ConflictsWith: []string{"name_prefix"}, ValidateFunc: func(v interface{}, k string) (ws []string, errors []error) { // https://github.com/boto/botocore/blob/9f322b1/botocore/data/autoscaling/2011-01-01/service-2.json#L1932-L1939 value := v.(string) @@ -41,6 +42,22 @@ func resourceAwsLaunchConfiguration() *schema.Resource { }, }, + "name_prefix": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ValidateFunc: func(v interface{}, k string) (ws []string, errors []error) { + // 
https://github.com/boto/botocore/blob/9f322b1/botocore/data/autoscaling/2011-01-01/service-2.json#L1932-L1939 + // uuid is 26 characters, limit the prefix to 229. + value := v.(string) + if len(value) > 229 { + errors = append(errors, fmt.Errorf( + "%q cannot be longer than 229 characters, name is limited to 255", k)) + } + return + }, + }, + "image_id": &schema.Schema{ Type: schema.TypeString, Required: true, @@ -386,6 +403,8 @@ func resourceAwsLaunchConfigurationCreate(d *schema.ResourceData, meta interface var lcName string if v, ok := d.GetOk("name"); ok { lcName = v.(string) + } else if v, ok := d.GetOk("name_prefix"); ok { + lcName = resource.PrefixedUniqueId(v.(string)) } else { lcName = resource.UniqueId() } diff --git a/builtin/providers/aws/resource_aws_launch_configuration_test.go b/builtin/providers/aws/resource_aws_launch_configuration_test.go index f8d4d89783..c6a0086a14 100644 --- a/builtin/providers/aws/resource_aws_launch_configuration_test.go +++ b/builtin/providers/aws/resource_aws_launch_configuration_test.go @@ -30,6 +30,14 @@ func TestAccAWSLaunchConfiguration_basic(t *testing.T) { "aws_launch_configuration.bar", "terraform-"), ), }, + resource.TestStep{ + Config: testAccAWSLaunchConfigurationPrefixNameConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSLaunchConfigurationExists("aws_launch_configuration.baz", &conf), + testAccCheckAWSLaunchConfigurationGeneratedNamePrefix( + "aws_launch_configuration.baz", "baz-"), + ), + }, }, }) } @@ -255,3 +263,13 @@ resource "aws_launch_configuration" "bar" { associate_public_ip_address = false } ` + +const testAccAWSLaunchConfigurationPrefixNameConfig = ` +resource "aws_launch_configuration" "baz" { + name_prefix = "baz-" + image_id = "ami-21f78e11" + instance_type = "t1.micro" + user_data = "foobar-user-data-change" + associate_public_ip_address = false +} +` diff --git a/builtin/providers/aws/resource_aws_opsworks_stack_test.go b/builtin/providers/aws/resource_aws_opsworks_stack_test.go index b740b6a20d..2a3e95b3cb 100644 --- a/builtin/providers/aws/resource_aws_opsworks_stack_test.go +++ b/builtin/providers/aws/resource_aws_opsworks_stack_test.go @@ -4,11 +4,12 @@ import ( "fmt" "testing" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" + "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/iam" "github.com/aws/aws-sdk-go/service/opsworks" - "github.com/hashicorp/terraform/helper/resource" - "github.com/hashicorp/terraform/terraform" ) // These tests assume the existence of predefined Opsworks IAM roles named `aws-opsworks-ec2-role` @@ -49,7 +50,7 @@ resource "aws_opsworks_stack" "tf-acc" { custom_cookbooks_source { type = "git" revision = "master" - url = "https://github.com/awslabs/opsworks-example-cookbooks.git" + url = "https://github.com/aws/opsworks-example-cookbooks.git" } } ` @@ -129,7 +130,7 @@ resource "aws_opsworks_stack" "tf-acc" { custom_cookbooks_source { type = "git" revision = "master" - url = "https://github.com/awslabs/opsworks-example-cookbooks.git" + url = "https://github.com/aws/opsworks-example-cookbooks.git" } } ` @@ -259,7 +260,7 @@ var testAccAwsOpsworksStackCheckResourceAttrsUpdate = resource.ComposeTestCheckF resource.TestCheckResourceAttr( "aws_opsworks_stack.tf-acc", "custom_cookbooks_source.0.url", - "https://github.com/awslabs/opsworks-example-cookbooks.git", + "https://github.com/aws/opsworks-example-cookbooks.git", ), ) diff --git a/builtin/providers/aws/resource_aws_route53_record.go 
b/builtin/providers/aws/resource_aws_route53_record.go index 1966c33de4..a5b9ef4685 100644 --- a/builtin/providers/aws/resource_aws_route53_record.go +++ b/builtin/providers/aws/resource_aws_route53_record.go @@ -28,6 +28,10 @@ func resourceAwsRoute53Record() *schema.Resource { Type: schema.TypeString, Required: true, ForceNew: true, + StateFunc: func(v interface{}) string { + value := v.(string) + return strings.ToLower(value) + }, }, "fqdn": &schema.Schema{ @@ -192,12 +196,13 @@ func resourceAwsRoute53RecordCreate(d *schema.ResourceData, meta interface{}) er // Generate an ID vars := []string{ zone, - d.Get("name").(string), + strings.ToLower(d.Get("name").(string)), d.Get("type").(string), } if v, ok := d.GetOk("set_identifier"); ok { vars = append(vars, v.(string)) } + d.SetId(strings.Join(vars, "_")) // Wait until we are done @@ -242,6 +247,8 @@ func resourceAwsRoute53RecordRead(d *schema.ResourceData, meta interface{}) erro StartRecordType: aws.String(d.Get("type").(string)), } + log.Printf("[DEBUG] List resource records sets for zone: %s, opts: %s", + zone, lopts) resp, err := conn.ListResourceRecordSets(lopts) if err != nil { return err @@ -251,7 +258,7 @@ func resourceAwsRoute53RecordRead(d *schema.ResourceData, meta interface{}) erro found := false for _, record := range resp.ResourceRecordSets { name := cleanRecordName(*record.Name) - if FQDN(name) != FQDN(*lopts.StartRecordName) { + if FQDN(strings.ToLower(name)) != FQDN(strings.ToLower(*lopts.StartRecordName)) { continue } if strings.ToUpper(*record.Type) != strings.ToUpper(*lopts.StartRecordType) { @@ -279,6 +286,7 @@ func resourceAwsRoute53RecordRead(d *schema.ResourceData, meta interface{}) erro } if !found { + log.Printf("[DEBUG] No matching record found for: %s, removing from state file", en) d.SetId("") } @@ -409,7 +417,10 @@ func resourceAwsRoute53RecordBuildSet(d *schema.ResourceData, zoneName string) ( if v, ok := d.GetOk("set_identifier"); ok { rec.SetIdentifier = aws.String(v.(string)) - rec.Weight = aws.Int64(int64(d.Get("weight").(int))) + } + + if v, ok := d.GetOk("weight"); ok { + rec.Weight = aws.Int64(int64(v.(int))) } return rec, nil @@ -440,7 +451,7 @@ func cleanRecordName(name string) string { // If it does not, add the zone name to form a fully qualified name // and keep AWS happy. 
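Taken together, the hunks in this file make Route 53 record-name handling case-insensitive: the new StateFunc lowercases the name before it is written to state, the resource ID is built from the lowercased name, and the read path lowercases both sides of the FQDN comparison. As a hypothetical illustration (the zone reference and address below are invented), a record declared with a mixed-case name is now stored and matched in its lowercased form:

resource "aws_route53_record" "mixed_case" {
  zone_id = "${aws_route53_zone.main.zone_id}"
  name    = "WWW.NotExamplE.com"   # persisted to state as "www.notexample.com"
  type    = "A"
  ttl     = "30"
  records = ["127.0.0.1"]
}
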
func expandRecordName(name, zone string) string { - rn := strings.TrimSuffix(name, ".") + rn := strings.ToLower(strings.TrimSuffix(name, ".")) zone = strings.TrimSuffix(zone, ".") if !strings.HasSuffix(rn, zone) { rn = strings.Join([]string{name, zone}, ".") diff --git a/builtin/providers/aws/resource_aws_route53_record_test.go b/builtin/providers/aws/resource_aws_route53_record_test.go index bbeb859cd8..690e0fe47a 100644 --- a/builtin/providers/aws/resource_aws_route53_record_test.go +++ b/builtin/providers/aws/resource_aws_route53_record_test.go @@ -122,6 +122,23 @@ func TestAccAWSRoute53Record_wildcard(t *testing.T) { }) } +func TestAccAWSRoute53Record_failover(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckRoute53RecordDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccRoute53FailoverCNAMERecord, + Check: resource.ComposeTestCheckFunc( + testAccCheckRoute53RecordExists("aws_route53_record.www-primary"), + testAccCheckRoute53RecordExists("aws_route53_record.www-secondary"), + ), + }, + }, + }) +} + func TestAccAWSRoute53Record_weighted(t *testing.T) { resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -291,7 +308,7 @@ func testAccCheckRoute53RecordExists(n string) resource.TestCheckFunc { // rec := resp.ResourceRecordSets[0] for _, rec := range resp.ResourceRecordSets { recName := cleanRecordName(*rec.Name) - if FQDN(recName) == FQDN(en) && *rec.Type == rType { + if FQDN(strings.ToLower(recName)) == FQDN(strings.ToLower(en)) && *rec.Type == rType { return nil } } @@ -306,7 +323,7 @@ resource "aws_route53_zone" "main" { resource "aws_route53_record" "default" { zone_id = "${aws_route53_zone.main.zone_id}" - name = "www.notexample.com" + name = "www.NOTexamplE.com" type = "A" ttl = "30" records = ["127.0.0.1", "127.0.0.27"] @@ -384,6 +401,46 @@ resource "aws_route53_record" "default" { } ` +const testAccRoute53FailoverCNAMERecord = ` +resource "aws_route53_zone" "main" { + name = "notexample.com" +} + +resource "aws_route53_health_check" "foo" { + fqdn = "dev.notexample.com" + port = 80 + type = "HTTP" + resource_path = "/" + failure_threshold = "2" + request_interval = "30" + + tags = { + Name = "tf-test-health-check" + } +} + +resource "aws_route53_record" "www-primary" { + zone_id = "${aws_route53_zone.main.zone_id}" + name = "www" + type = "CNAME" + ttl = "5" + failover = "PRIMARY" + health_check_id = "${aws_route53_health_check.foo.id}" + set_identifier = "www-primary" + records = ["primary.notexample.com"] +} + +resource "aws_route53_record" "www-secondary" { + zone_id = "${aws_route53_zone.main.zone_id}" + name = "www" + type = "CNAME" + ttl = "5" + failover = "SECONDARY" + set_identifier = "www-secondary" + records = ["secondary.notexample.com"] +} +` + const testAccRoute53WeightedCNAMERecord = ` resource "aws_route53_zone" "main" { name = "notexample.com" diff --git a/builtin/providers/aws/resource_aws_s3_bucket_object.go b/builtin/providers/aws/resource_aws_s3_bucket_object.go index b1c399dd11..ca10cf4cb3 100644 --- a/builtin/providers/aws/resource_aws_s3_bucket_object.go +++ b/builtin/providers/aws/resource_aws_s3_bucket_object.go @@ -8,6 +8,7 @@ import ( "os" "github.com/hashicorp/terraform/helper/schema" + "github.com/mitchellh/go-homedir" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awserr" @@ -95,7 +96,11 @@ func resourceAwsS3BucketObjectPut(d *schema.ResourceData, meta interface{}) erro if v, 
ok := d.GetOk("source"); ok { source := v.(string) - file, err := os.Open(source) + path, err := homedir.Expand(source) + if err != nil { + return fmt.Errorf("Error expanding homedir in source (%s): %s", source, err) + } + file, err := os.Open(path) if err != nil { return fmt.Errorf("Error opening S3 bucket object source (%s): %s", source, err) } diff --git a/builtin/providers/aws/resource_aws_s3_bucket_test.go b/builtin/providers/aws/resource_aws_s3_bucket_test.go index 04e2e60476..db134180bd 100644 --- a/builtin/providers/aws/resource_aws_s3_bucket_test.go +++ b/builtin/providers/aws/resource_aws_s3_bucket_test.go @@ -430,7 +430,7 @@ func testAccCheckAWSS3BucketCors(n string, corsRules []*s3.CORSRule) resource.Te // within AWS var randInt = rand.New(rand.NewSource(time.Now().UnixNano())).Int() var testAccWebsiteEndpoint = fmt.Sprintf("tf-test-bucket-%d.s3-website-us-west-2.amazonaws.com", randInt) -var testAccAWSS3BucketPolicy = fmt.Sprintf(`{ "Version": "2008-10-17", "Statement": [ { "Sid": "", "Effect": "Allow", "Principal": { "AWS": "*" }, "Action": "s3:GetObject", "Resource": "arn:aws:s3:::tf-test-bucket-%d/*" } ] }`, randInt) +var testAccAWSS3BucketPolicy = fmt.Sprintf(`{ "Version": "2012-10-17", "Statement": [ { "Sid": "", "Effect": "Allow", "Principal": { "AWS": "*" }, "Action": "s3:GetObject", "Resource": "arn:aws:s3:::tf-test-bucket-%d/*" } ] }`, randInt) var testAccAWSS3BucketConfig = fmt.Sprintf(` resource "aws_s3_bucket" "bucket" { diff --git a/builtin/providers/aws/resource_aws_sns_topic.go b/builtin/providers/aws/resource_aws_sns_topic.go index 6a1c46590f..6bf0127d0c 100644 --- a/builtin/providers/aws/resource_aws_sns_topic.go +++ b/builtin/providers/aws/resource_aws_sns_topic.go @@ -44,10 +44,13 @@ func resourceAwsSnsTopic() *schema.Resource { "policy": &schema.Schema{ Type: schema.TypeString, Optional: true, - ForceNew: false, Computed: true, StateFunc: func(v interface{}) string { - jsonb := []byte(v.(string)) + s, ok := v.(string) + if !ok || s == "" { + return "" + } + jsonb := []byte(s) buffer := new(bytes.Buffer) if err := json.Compact(buffer, jsonb); err != nil { log.Printf("[WARN] Error compacting JSON for Policy in SNS Topic") diff --git a/builtin/providers/aws/resource_aws_sns_topic_test.go b/builtin/providers/aws/resource_aws_sns_topic_test.go index 8cea6312bc..76510c76ee 100644 --- a/builtin/providers/aws/resource_aws_sns_topic_test.go +++ b/builtin/providers/aws/resource_aws_sns_topic_test.go @@ -128,7 +128,7 @@ resource "aws_sns_topic" "test_topic" { name = "example" policy = <") + if err != nil { + t.Fatalf("Error writing XML File: %s", err) + } + fx.Close() home, err := homedir.Dir() if err != nil { @@ -88,12 +93,11 @@ func TestAzure_validateSettingsFile(t *testing.T) { t.Fatalf("Error creating homedir-based temporary file: %s", err) } defer os.Remove(fh.Name()) - - _, err = io.WriteString(fx, "") + _, err = io.WriteString(fh, "") if err != nil { t.Fatalf("Error writing XML File: %s", err) } - fx.Close() + fh.Close() r := strings.NewReplacer(home, "~") homePath := r.Replace(fh.Name()) @@ -103,8 +107,8 @@ func TestAzure_validateSettingsFile(t *testing.T) { W int // expected count of warnings E int // expected count of errors }{ - {"test", 1, 1}, - {f.Name(), 1, 0}, + {"test", 0, 1}, + {f.Name(), 1, 1}, {fx.Name(), 1, 0}, {homePath, 1, 0}, {"", 0, 0}, @@ -114,10 +118,10 @@ func TestAzure_validateSettingsFile(t *testing.T) { w, e := validateSettingsFile(tc.Input, "") if len(w) != tc.W { - t.Errorf("Error in TestAzureValidateSettingsFile: input: %s , warnings: 
%#v, errors: %#v", tc.Input, w, e) + t.Errorf("Error in TestAzureValidateSettingsFile: input: %s , warnings: %v, errors: %v", tc.Input, w, e) } if len(e) != tc.E { - t.Errorf("Error in TestAzureValidateSettingsFile: input: %s , warnings: %#v, errors: %#v", tc.Input, w, e) + t.Errorf("Error in TestAzureValidateSettingsFile: input: %s , warnings: %v, errors: %v", tc.Input, w, e) } } } @@ -164,33 +168,8 @@ func TestAzure_providerConfigure(t *testing.T) { err = rp.Configure(terraform.NewResourceConfig(rawConfig)) meta := rp.(*schema.Provider).Meta() if (meta == nil) != tc.NilMeta { - t.Fatalf("expected NilMeta: %t, got meta: %#v", tc.NilMeta, meta) - } - } -} - -func TestAzure_isFile(t *testing.T) { - f, err := ioutil.TempFile("", "tf-test-file") - if err != nil { - t.Fatalf("Error creating temporary file with XML in TestAzure_isFile: %s", err) - } - cases := []struct { - Input string // String path to file - B bool // expected true/false - E bool // expect error - }{ - {"test", false, true}, - {f.Name(), true, false}, - } - - for _, tc := range cases { - x, y := isFile(tc.Input) - if tc.B != x { - t.Errorf("Error in TestAzure_isFile: input: %s , returned: %#v, expected: %#v", tc.Input, x, tc.B) - } - - if tc.E != (y != nil) { - t.Errorf("Error in TestAzure_isFile: input: %s , returned: %#v, expected: %#v", tc.Input, y, tc.E) + t.Fatalf("expected NilMeta: %t, got meta: %#v, settings_file: %q", + tc.NilMeta, meta, tc.SettingsFile) } } } diff --git a/builtin/providers/dyn/config.go b/builtin/providers/dyn/config.go new file mode 100644 index 0000000000..091c929d93 --- /dev/null +++ b/builtin/providers/dyn/config.go @@ -0,0 +1,28 @@ +package dyn + +import ( + "fmt" + "log" + + "github.com/nesv/go-dynect/dynect" +) + +type Config struct { + CustomerName string + Username string + Password string +} + +// Client() returns a new client for accessing dyn. +func (c *Config) Client() (*dynect.ConvenientClient, error) { + client := dynect.NewConvenientClient(c.CustomerName) + err := client.Login(c.Username, c.Password) + + if err != nil { + return nil, fmt.Errorf("Error setting up Dyn client: %s", err) + } + + log.Printf("[INFO] Dyn client configured for customer: %s, user: %s", c.CustomerName, c.Username) + + return client, nil +} diff --git a/builtin/providers/dyn/provider.go b/builtin/providers/dyn/provider.go new file mode 100644 index 0000000000..c591745aec --- /dev/null +++ b/builtin/providers/dyn/provider.go @@ -0,0 +1,50 @@ +package dyn + +import ( + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/terraform" +) + +// Provider returns a terraform.ResourceProvider. 
+func Provider() terraform.ResourceProvider { + return &schema.Provider{ + Schema: map[string]*schema.Schema{ + "customer_name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + DefaultFunc: schema.EnvDefaultFunc("DYN_CUSTOMER_NAME", nil), + Description: "A Dyn customer name.", + }, + + "username": &schema.Schema{ + Type: schema.TypeString, + Required: true, + DefaultFunc: schema.EnvDefaultFunc("DYN_USERNAME", nil), + Description: "A Dyn username.", + }, + + "password": &schema.Schema{ + Type: schema.TypeString, + Required: true, + DefaultFunc: schema.EnvDefaultFunc("DYN_PASSWORD", nil), + Description: "The Dyn password.", + }, + }, + + ResourcesMap: map[string]*schema.Resource{ + "dyn_record": resourceDynRecord(), + }, + + ConfigureFunc: providerConfigure, + } +} + +func providerConfigure(d *schema.ResourceData) (interface{}, error) { + config := Config{ + CustomerName: d.Get("customer_name").(string), + Username: d.Get("username").(string), + Password: d.Get("password").(string), + } + + return config.Client() +} diff --git a/builtin/providers/dyn/provider_test.go b/builtin/providers/dyn/provider_test.go new file mode 100644 index 0000000000..da148ff2fe --- /dev/null +++ b/builtin/providers/dyn/provider_test.go @@ -0,0 +1,47 @@ +package dyn + +import ( + "os" + "testing" + + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/terraform" +) + +var testAccProviders map[string]terraform.ResourceProvider +var testAccProvider *schema.Provider + +func init() { + testAccProvider = Provider().(*schema.Provider) + testAccProviders = map[string]terraform.ResourceProvider{ + "dyn": testAccProvider, + } +} + +func TestProvider(t *testing.T) { + if err := Provider().(*schema.Provider).InternalValidate(); err != nil { + t.Fatalf("err: %s", err) + } +} + +func TestProvider_impl(t *testing.T) { + var _ terraform.ResourceProvider = Provider() +} + +func testAccPreCheck(t *testing.T) { + if v := os.Getenv("DYN_CUSTOMER_NAME"); v == "" { + t.Fatal("DYN_CUSTOMER_NAME must be set for acceptance tests") + } + + if v := os.Getenv("DYN_USERNAME"); v == "" { + t.Fatal("DYN_USERNAME must be set for acceptance tests") + } + + if v := os.Getenv("DYN_PASSWORD"); v == "" { + t.Fatal("DYN_PASSWORD must be set for acceptance tests.") + } + + if v := os.Getenv("DYN_ZONE"); v == "" { + t.Fatal("DYN_ZONE must be set for acceptance tests. 
The domain is used to create and destroy records against.") + } +} diff --git a/builtin/providers/dyn/resource_dyn_record.go b/builtin/providers/dyn/resource_dyn_record.go new file mode 100644 index 0000000000..7f7b66fd50 --- /dev/null +++ b/builtin/providers/dyn/resource_dyn_record.go @@ -0,0 +1,198 @@ +package dyn + +import ( + "fmt" + "log" + "sync" + + "github.com/hashicorp/terraform/helper/schema" + "github.com/nesv/go-dynect/dynect" +) + +var mutex = &sync.Mutex{} + +func resourceDynRecord() *schema.Resource { + return &schema.Resource{ + Create: resourceDynRecordCreate, + Read: resourceDynRecordRead, + Update: resourceDynRecordUpdate, + Delete: resourceDynRecordDelete, + + Schema: map[string]*schema.Schema{ + "zone": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "fqdn": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + + "type": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "value": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "ttl": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Default: "0", // 0 means use zone default + }, + }, + } +} + +func resourceDynRecordCreate(d *schema.ResourceData, meta interface{}) error { + mutex.Lock() + + client := meta.(*dynect.ConvenientClient) + + record := &dynect.Record{ + Name: d.Get("name").(string), + Zone: d.Get("zone").(string), + Type: d.Get("type").(string), + TTL: d.Get("ttl").(string), + Value: d.Get("value").(string), + } + log.Printf("[DEBUG] Dyn record create configuration: %#v", record) + + // create the record + err := client.CreateRecord(record) + if err != nil { + mutex.Unlock() + return fmt.Errorf("Failed to create Dyn record: %s", err) + } + + // publish the zone + err = client.PublishZone(record.Zone) + if err != nil { + mutex.Unlock() + return fmt.Errorf("Failed to publish Dyn zone: %s", err) + } + + // get the record ID + err = client.GetRecordID(record) + if err != nil { + mutex.Unlock() + return fmt.Errorf("%s", err) + } + d.SetId(record.ID) + + mutex.Unlock() + return resourceDynRecordRead(d, meta) +} + +func resourceDynRecordRead(d *schema.ResourceData, meta interface{}) error { + mutex.Lock() + defer mutex.Unlock() + + client := meta.(*dynect.ConvenientClient) + + record := &dynect.Record{ + ID: d.Id(), + Name: d.Get("name").(string), + Zone: d.Get("zone").(string), + TTL: d.Get("ttl").(string), + FQDN: d.Get("fqdn").(string), + Type: d.Get("type").(string), + } + + err := client.GetRecord(record) + if err != nil { + return fmt.Errorf("Couldn't find Dyn record: %s", err) + } + + d.Set("zone", record.Zone) + d.Set("fqdn", record.FQDN) + d.Set("name", record.Name) + d.Set("type", record.Type) + d.Set("ttl", record.TTL) + d.Set("value", record.Value) + + return nil +} + +func resourceDynRecordUpdate(d *schema.ResourceData, meta interface{}) error { + mutex.Lock() + + client := meta.(*dynect.ConvenientClient) + + record := &dynect.Record{ + Name: d.Get("name").(string), + Zone: d.Get("zone").(string), + TTL: d.Get("ttl").(string), + Type: d.Get("type").(string), + Value: d.Get("value").(string), + } + log.Printf("[DEBUG] Dyn record update configuration: %#v", record) + + // update the record + err := client.UpdateRecord(record) + if err != nil { + mutex.Unlock() + return fmt.Errorf("Failed to update Dyn record: %s", err) + } + + // publish the zone + err = client.PublishZone(record.Zone) + if err 
!= nil { + mutex.Unlock() + return fmt.Errorf("Failed to publish Dyn zone: %s", err) + } + + // get the record ID + err = client.GetRecordID(record) + if err != nil { + mutex.Unlock() + return fmt.Errorf("%s", err) + } + d.SetId(record.ID) + + mutex.Unlock() + return resourceDynRecordRead(d, meta) +} + +func resourceDynRecordDelete(d *schema.ResourceData, meta interface{}) error { + mutex.Lock() + defer mutex.Unlock() + + client := meta.(*dynect.ConvenientClient) + + record := &dynect.Record{ + ID: d.Id(), + Name: d.Get("name").(string), + Zone: d.Get("zone").(string), + FQDN: d.Get("fqdn").(string), + Type: d.Get("type").(string), + } + + log.Printf("[INFO] Deleting Dyn record: %s, %s", record.FQDN, record.ID) + + // delete the record + err := client.DeleteRecord(record) + if err != nil { + return fmt.Errorf("Failed to delete Dyn record: %s", err) + } + + // publish the zone + err = client.PublishZone(record.Zone) + if err != nil { + return fmt.Errorf("Failed to publish Dyn zone: %s", err) + } + + return nil +} diff --git a/builtin/providers/dyn/resource_dyn_record_test.go b/builtin/providers/dyn/resource_dyn_record_test.go new file mode 100644 index 0000000000..e233672834 --- /dev/null +++ b/builtin/providers/dyn/resource_dyn_record_test.go @@ -0,0 +1,239 @@ +package dyn + +import ( + "fmt" + "os" + "testing" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" + "github.com/nesv/go-dynect/dynect" +) + +func TestAccDynRecord_Basic(t *testing.T) { + var record dynect.Record + zone := os.Getenv("DYN_ZONE") + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckDynRecordDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: fmt.Sprintf(testAccCheckDynRecordConfig_basic, zone), + Check: resource.ComposeTestCheckFunc( + testAccCheckDynRecordExists("dyn_record.foobar", &record), + testAccCheckDynRecordAttributes(&record), + resource.TestCheckResourceAttr( + "dyn_record.foobar", "name", "terraform"), + resource.TestCheckResourceAttr( + "dyn_record.foobar", "zone", zone), + resource.TestCheckResourceAttr( + "dyn_record.foobar", "value", "192.168.0.10"), + ), + }, + }, + }) +} + +func TestAccDynRecord_Updated(t *testing.T) { + var record dynect.Record + zone := os.Getenv("DYN_ZONE") + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckDynRecordDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: fmt.Sprintf(testAccCheckDynRecordConfig_basic, zone), + Check: resource.ComposeTestCheckFunc( + testAccCheckDynRecordExists("dyn_record.foobar", &record), + testAccCheckDynRecordAttributes(&record), + resource.TestCheckResourceAttr( + "dyn_record.foobar", "name", "terraform"), + resource.TestCheckResourceAttr( + "dyn_record.foobar", "zone", zone), + resource.TestCheckResourceAttr( + "dyn_record.foobar", "value", "192.168.0.10"), + ), + }, + resource.TestStep{ + Config: fmt.Sprintf(testAccCheckDynRecordConfig_new_value, zone), + Check: resource.ComposeTestCheckFunc( + testAccCheckDynRecordExists("dyn_record.foobar", &record), + testAccCheckDynRecordAttributesUpdated(&record), + resource.TestCheckResourceAttr( + "dyn_record.foobar", "name", "terraform"), + resource.TestCheckResourceAttr( + "dyn_record.foobar", "zone", zone), + resource.TestCheckResourceAttr( + "dyn_record.foobar", "value", "192.168.0.11"), + ), + }, + }, + }) +} + +func 
TestAccDynRecord_Multiple(t *testing.T) { + var record dynect.Record + zone := os.Getenv("DYN_ZONE") + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckDynRecordDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: fmt.Sprintf(testAccCheckDynRecordConfig_multiple, zone, zone, zone), + Check: resource.ComposeTestCheckFunc( + testAccCheckDynRecordExists("dyn_record.foobar1", &record), + testAccCheckDynRecordAttributes(&record), + resource.TestCheckResourceAttr( + "dyn_record.foobar1", "name", "terraform1"), + resource.TestCheckResourceAttr( + "dyn_record.foobar1", "zone", zone), + resource.TestCheckResourceAttr( + "dyn_record.foobar1", "value", "192.168.0.10"), + resource.TestCheckResourceAttr( + "dyn_record.foobar2", "name", "terraform2"), + resource.TestCheckResourceAttr( + "dyn_record.foobar2", "zone", zone), + resource.TestCheckResourceAttr( + "dyn_record.foobar2", "value", "192.168.1.10"), + resource.TestCheckResourceAttr( + "dyn_record.foobar3", "name", "terraform3"), + resource.TestCheckResourceAttr( + "dyn_record.foobar3", "zone", zone), + resource.TestCheckResourceAttr( + "dyn_record.foobar3", "value", "192.168.2.10"), + ), + }, + }, + }) +} + +func testAccCheckDynRecordDestroy(s *terraform.State) error { + client := testAccProvider.Meta().(*dynect.ConvenientClient) + + for _, rs := range s.RootModule().Resources { + if rs.Type != "dyn_record" { + continue + } + + foundRecord := &dynect.Record{ + Zone: rs.Primary.Attributes["zone"], + ID: rs.Primary.ID, + FQDN: rs.Primary.Attributes["fqdn"], + Type: rs.Primary.Attributes["type"], + } + + err := client.GetRecord(foundRecord) + + if err != nil { + return fmt.Errorf("Record still exists") + } + } + + return nil +} + +func testAccCheckDynRecordAttributes(record *dynect.Record) resource.TestCheckFunc { + return func(s *terraform.State) error { + + if record.Value != "192.168.0.10" { + return fmt.Errorf("Bad value: %s", record.Value) + } + + return nil + } +} + +func testAccCheckDynRecordAttributesUpdated(record *dynect.Record) resource.TestCheckFunc { + return func(s *terraform.State) error { + + if record.Value != "192.168.0.11" { + return fmt.Errorf("Bad value: %s", record.Value) + } + + return nil + } +} + +func testAccCheckDynRecordExists(n string, record *dynect.Record) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No Record ID is set") + } + + client := testAccProvider.Meta().(*dynect.ConvenientClient) + + foundRecord := &dynect.Record{ + Zone: rs.Primary.Attributes["zone"], + ID: rs.Primary.ID, + FQDN: rs.Primary.Attributes["fqdn"], + Type: rs.Primary.Attributes["type"], + } + + err := client.GetRecord(foundRecord) + + if err != nil { + return err + } + + if foundRecord.ID != rs.Primary.ID { + return fmt.Errorf("Record not found") + } + + *record = *foundRecord + + return nil + } +} + +const testAccCheckDynRecordConfig_basic = ` +resource "dyn_record" "foobar" { + zone = "%s" + name = "terraform" + value = "192.168.0.10" + type = "A" + ttl = 3600 +}` + +const testAccCheckDynRecordConfig_new_value = ` +resource "dyn_record" "foobar" { + zone = "%s" + name = "terraform" + value = "192.168.0.11" + type = "A" + ttl = 3600 +}` + +const testAccCheckDynRecordConfig_multiple = ` +resource "dyn_record" "foobar1" { + zone = "%s" + name = "terraform1" + value = 
"192.168.0.10" + type = "A" + ttl = 3600 +} +resource "dyn_record" "foobar2" { + zone = "%s" + name = "terraform2" + value = "192.168.1.10" + type = "A" + ttl = 3600 +} +resource "dyn_record" "foobar3" { + zone = "%s" + name = "terraform3" + value = "192.168.2.10" + type = "A" + ttl = 3600 +}` diff --git a/builtin/providers/google/config.go b/builtin/providers/google/config.go index 3edb68ef0f..218fda06f9 100644 --- a/builtin/providers/google/config.go +++ b/builtin/providers/google/config.go @@ -3,13 +3,12 @@ package google import ( "encoding/json" "fmt" - "io/ioutil" "log" "net/http" - "os" "runtime" "strings" + "github.com/hashicorp/terraform/helper/pathorcontents" "github.com/hashicorp/terraform/terraform" "golang.org/x/oauth2" "golang.org/x/oauth2/google" @@ -24,7 +23,7 @@ import ( // Config is the configuration structure used to instantiate the Google // provider. type Config struct { - AccountFile string + Credentials string Project string Region string @@ -44,46 +43,17 @@ func (c *Config) loadAndValidate() error { "https://www.googleapis.com/auth/devstorage.full_control", } - if c.AccountFile == "" { - c.AccountFile = os.Getenv("GOOGLE_ACCOUNT_FILE") - } - if c.Project == "" { - c.Project = os.Getenv("GOOGLE_PROJECT") - } - if c.Region == "" { - c.Region = os.Getenv("GOOGLE_REGION") - } - var client *http.Client - if c.AccountFile != "" { - contents := c.AccountFile + if c.Credentials != "" { + contents, _, err := pathorcontents.Read(c.Credentials) + if err != nil { + return fmt.Errorf("Error loading credentials: %s", err) + } // Assume account_file is a JSON string if err := parseJSON(&account, contents); err != nil { - // If account_file was not JSON, assume it is a file path instead - if _, err := os.Stat(c.AccountFile); os.IsNotExist(err) { - return fmt.Errorf( - "account_file path does not exist: %s", - c.AccountFile) - } - - b, err := ioutil.ReadFile(c.AccountFile) - if err != nil { - return fmt.Errorf( - "Error reading account_file from path '%s': %s", - c.AccountFile, - err) - } - - contents = string(b) - - if err := parseJSON(&account, contents); err != nil { - return fmt.Errorf( - "Error parsing account file '%s': %s", - contents, - err) - } + return fmt.Errorf("Error parsing credentials '%s': %s", contents, err) } // Get the token for use in our requests diff --git a/builtin/providers/google/config_test.go b/builtin/providers/google/config_test.go index cc1b6213fa..648f93a688 100644 --- a/builtin/providers/google/config_test.go +++ b/builtin/providers/google/config_test.go @@ -5,11 +5,11 @@ import ( "testing" ) -const testFakeAccountFilePath = "./test-fixtures/fake_account.json" +const testFakeCredentialsPath = "./test-fixtures/fake_account.json" func TestConfigLoadAndValidate_accountFilePath(t *testing.T) { config := Config{ - AccountFile: testFakeAccountFilePath, + Credentials: testFakeCredentialsPath, Project: "my-gce-project", Region: "us-central1", } @@ -21,12 +21,12 @@ func TestConfigLoadAndValidate_accountFilePath(t *testing.T) { } func TestConfigLoadAndValidate_accountFileJSON(t *testing.T) { - contents, err := ioutil.ReadFile(testFakeAccountFilePath) + contents, err := ioutil.ReadFile(testFakeCredentialsPath) if err != nil { t.Fatalf("error: %v", err) } config := Config{ - AccountFile: string(contents), + Credentials: string(contents), Project: "my-gce-project", Region: "us-central1", } @@ -39,7 +39,7 @@ func TestConfigLoadAndValidate_accountFileJSON(t *testing.T) { func TestConfigLoadAndValidate_accountFileJSONInvalid(t *testing.T) { config := Config{ - 
AccountFile: "{this is not json}", + Credentials: "{this is not json}", Project: "my-gce-project", Region: "us-central1", } diff --git a/builtin/providers/google/provider.go b/builtin/providers/google/provider.go index b63aa389c4..b2d083bc25 100644 --- a/builtin/providers/google/provider.go +++ b/builtin/providers/google/provider.go @@ -3,8 +3,8 @@ package google import ( "encoding/json" "fmt" - "os" + "github.com/hashicorp/terraform/helper/pathorcontents" "github.com/hashicorp/terraform/helper/schema" "github.com/hashicorp/terraform/terraform" ) @@ -18,6 +18,14 @@ func Provider() terraform.ResourceProvider { Optional: true, DefaultFunc: schema.EnvDefaultFunc("GOOGLE_ACCOUNT_FILE", nil), ValidateFunc: validateAccountFile, + Deprecated: "Use the credentials field instead", + }, + + "credentials": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + DefaultFunc: schema.EnvDefaultFunc("GOOGLE_CREDENTIALS", nil), + ValidateFunc: validateCredentials, }, "project": &schema.Schema{ @@ -43,6 +51,7 @@ func Provider() terraform.ResourceProvider { "google_compute_global_address": resourceComputeGlobalAddress(), "google_compute_global_forwarding_rule": resourceComputeGlobalForwardingRule(), "google_compute_http_health_check": resourceComputeHttpHealthCheck(), + "google_compute_https_health_check": resourceComputeHttpsHealthCheck(), "google_compute_instance": resourceComputeInstance(), "google_compute_instance_group_manager": resourceComputeInstanceGroupManager(), "google_compute_instance_template": resourceComputeInstanceTemplate(), @@ -72,8 +81,12 @@ func Provider() terraform.ResourceProvider { } func providerConfigure(d *schema.ResourceData) (interface{}, error) { + credentials := d.Get("credentials").(string) + if credentials == "" { + credentials = d.Get("account_file").(string) + } config := Config{ - AccountFile: d.Get("account_file").(string), + Credentials: credentials, Project: d.Get("project").(string), Region: d.Get("region").(string), } @@ -96,22 +109,34 @@ func validateAccountFile(v interface{}, k string) (warnings []string, errors []e return } - var account accountFile - if err := json.Unmarshal([]byte(value), &account); err != nil { - warnings = append(warnings, ` -account_file is not valid JSON, so we are assuming it is a file path. This -support will be removed in the future. Please update your configuration to use -${file("filename.json")} instead.`) - } else { - return + contents, wasPath, err := pathorcontents.Read(value) + if err != nil { + errors = append(errors, fmt.Errorf("Error loading Account File: %s", err)) + } + if wasPath { + warnings = append(warnings, `account_file was provided as a path instead of +as file contents. This support will be removed in the future. 
Please update +your configuration to use ${file("filename.json")} instead.`) } - if _, err := os.Stat(value); err != nil { + var account accountFile + if err := json.Unmarshal([]byte(contents), &account); err != nil { errors = append(errors, - fmt.Errorf( - "account_file path could not be read from '%s': %s", - value, - err)) + fmt.Errorf("account_file not valid JSON '%s': %s", contents, err)) + } + + return +} + +func validateCredentials(v interface{}, k string) (warnings []string, errors []error) { + if v == nil || v.(string) == "" { + return + } + creds := v.(string) + var account accountFile + if err := json.Unmarshal([]byte(creds), &account); err != nil { + errors = append(errors, + fmt.Errorf("credentials are not valid JSON '%s': %s", creds, err)) } return diff --git a/builtin/providers/google/provider_test.go b/builtin/providers/google/provider_test.go index 2275e188f6..827a7f5753 100644 --- a/builtin/providers/google/provider_test.go +++ b/builtin/providers/google/provider_test.go @@ -29,8 +29,8 @@ func TestProvider_impl(t *testing.T) { } func testAccPreCheck(t *testing.T) { - if v := os.Getenv("GOOGLE_ACCOUNT_FILE"); v == "" { - t.Fatal("GOOGLE_ACCOUNT_FILE must be set for acceptance tests") + if v := os.Getenv("GOOGLE_CREDENTIALS"); v == "" { + t.Fatal("GOOGLE_CREDENTIALS must be set for acceptance tests") } if v := os.Getenv("GOOGLE_PROJECT"); v == "" { diff --git a/builtin/providers/google/resource_compute_https_health_check.go b/builtin/providers/google/resource_compute_https_health_check.go new file mode 100644 index 0000000000..32a8dfb381 --- /dev/null +++ b/builtin/providers/google/resource_compute_https_health_check.go @@ -0,0 +1,227 @@ +package google + +import ( + "fmt" + "log" + + "github.com/hashicorp/terraform/helper/schema" + "google.golang.org/api/compute/v1" + "google.golang.org/api/googleapi" +) + +func resourceComputeHttpsHealthCheck() *schema.Resource { + return &schema.Resource{ + Create: resourceComputeHttpsHealthCheckCreate, + Read: resourceComputeHttpsHealthCheckRead, + Delete: resourceComputeHttpsHealthCheckDelete, + Update: resourceComputeHttpsHealthCheckUpdate, + + Schema: map[string]*schema.Schema{ + "check_interval_sec": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + Default: 5, + }, + + "description": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + + "healthy_threshold": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + Default: 2, + }, + + "host": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "port": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + Default: 443, + }, + + "request_path": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Default: "/", + }, + + "self_link": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + + "timeout_sec": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + Default: 5, + }, + + "unhealthy_threshold": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + Default: 2, + }, + }, + } +} + +func resourceComputeHttpsHealthCheckCreate(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + // Build the parameter + hchk := &compute.HttpsHealthCheck{ + Name: d.Get("name").(string), + } + // Optional things + if v, ok := d.GetOk("description"); ok { + hchk.Description = v.(string) + } + if v, ok := d.GetOk("host"); ok { + hchk.Host = v.(string) + } + if v, ok := d.GetOk("request_path"); ok { 
+ hchk.RequestPath = v.(string) + } + if v, ok := d.GetOk("check_interval_sec"); ok { + hchk.CheckIntervalSec = int64(v.(int)) + } + if v, ok := d.GetOk("healthy_threshold"); ok { + hchk.HealthyThreshold = int64(v.(int)) + } + if v, ok := d.GetOk("port"); ok { + hchk.Port = int64(v.(int)) + } + if v, ok := d.GetOk("timeout_sec"); ok { + hchk.TimeoutSec = int64(v.(int)) + } + if v, ok := d.GetOk("unhealthy_threshold"); ok { + hchk.UnhealthyThreshold = int64(v.(int)) + } + + log.Printf("[DEBUG] HttpsHealthCheck insert request: %#v", hchk) + op, err := config.clientCompute.HttpsHealthChecks.Insert( + config.Project, hchk).Do() + if err != nil { + return fmt.Errorf("Error creating HttpsHealthCheck: %s", err) + } + + // It probably worked, so store the ID now + d.SetId(hchk.Name) + + err = computeOperationWaitGlobal(config, op, "Creating Https Health Check") + if err != nil { + return err + } + + return resourceComputeHttpsHealthCheckRead(d, meta) +} + +func resourceComputeHttpsHealthCheckUpdate(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + // Build the parameter + hchk := &compute.HttpsHealthCheck{ + Name: d.Get("name").(string), + } + // Optional things + if v, ok := d.GetOk("description"); ok { + hchk.Description = v.(string) + } + if v, ok := d.GetOk("host"); ok { + hchk.Host = v.(string) + } + if v, ok := d.GetOk("request_path"); ok { + hchk.RequestPath = v.(string) + } + if v, ok := d.GetOk("check_interval_sec"); ok { + hchk.CheckIntervalSec = int64(v.(int)) + } + if v, ok := d.GetOk("healthy_threshold"); ok { + hchk.HealthyThreshold = int64(v.(int)) + } + if v, ok := d.GetOk("port"); ok { + hchk.Port = int64(v.(int)) + } + if v, ok := d.GetOk("timeout_sec"); ok { + hchk.TimeoutSec = int64(v.(int)) + } + if v, ok := d.GetOk("unhealthy_threshold"); ok { + hchk.UnhealthyThreshold = int64(v.(int)) + } + + log.Printf("[DEBUG] HttpsHealthCheck patch request: %#v", hchk) + op, err := config.clientCompute.HttpsHealthChecks.Patch( + config.Project, hchk.Name, hchk).Do() + if err != nil { + return fmt.Errorf("Error patching HttpsHealthCheck: %s", err) + } + + // It probably worked, so store the ID now + d.SetId(hchk.Name) + + err = computeOperationWaitGlobal(config, op, "Updating Https Health Check") + if err != nil { + return err + } + + return resourceComputeHttpsHealthCheckRead(d, meta) +} + +func resourceComputeHttpsHealthCheckRead(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + hchk, err := config.clientCompute.HttpsHealthChecks.Get( + config.Project, d.Id()).Do() + if err != nil { + if gerr, ok := err.(*googleapi.Error); ok && gerr.Code == 404 { + // The resource doesn't exist anymore + d.SetId("") + + return nil + } + + return fmt.Errorf("Error reading HttpsHealthCheck: %s", err) + } + + d.Set("host", hchk.Host) + d.Set("request_path", hchk.RequestPath) + d.Set("check_interval_sec", hchk.CheckIntervalSec) + d.Set("healthy_threshold", hchk.HealthyThreshold) + d.Set("port", hchk.Port) + d.Set("timeout_sec", hchk.TimeoutSec) + d.Set("unhealthy_threshold", hchk.UnhealthyThreshold) + d.Set("self_link", hchk.SelfLink) + + return nil +} + +func resourceComputeHttpsHealthCheckDelete(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + // Delete the HttpsHealthCheck + op, err := config.clientCompute.HttpsHealthChecks.Delete( + config.Project, d.Id()).Do() + if err != nil { + return fmt.Errorf("Error deleting HttpsHealthCheck: %s", err) + } + + err = computeOperationWaitGlobal(config, op, 
"Deleting Https Health Check") + if err != nil { + return err + } + + d.SetId("") + return nil +} diff --git a/builtin/providers/google/resource_compute_https_health_check_test.go b/builtin/providers/google/resource_compute_https_health_check_test.go new file mode 100644 index 0000000000..d263bfd881 --- /dev/null +++ b/builtin/providers/google/resource_compute_https_health_check_test.go @@ -0,0 +1,171 @@ +package google + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" + "google.golang.org/api/compute/v1" +) + +func TestAccComputeHttpsHealthCheck_basic(t *testing.T) { + var healthCheck compute.HttpsHealthCheck + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckComputeHttpsHealthCheckDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccComputeHttpsHealthCheck_basic, + Check: resource.ComposeTestCheckFunc( + testAccCheckComputeHttpsHealthCheckExists( + "google_compute_https_health_check.foobar", &healthCheck), + testAccCheckComputeHttpsHealthCheckRequestPath( + "/health_check", &healthCheck), + testAccCheckComputeHttpsHealthCheckThresholds( + 3, 3, &healthCheck), + ), + }, + }, + }) +} + +func TestAccComputeHttpsHealthCheck_update(t *testing.T) { + var healthCheck compute.HttpsHealthCheck + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckComputeHttpsHealthCheckDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccComputeHttpsHealthCheck_update1, + Check: resource.ComposeTestCheckFunc( + testAccCheckComputeHttpsHealthCheckExists( + "google_compute_https_health_check.foobar", &healthCheck), + testAccCheckComputeHttpsHealthCheckRequestPath( + "/not_default", &healthCheck), + testAccCheckComputeHttpsHealthCheckThresholds( + 2, 2, &healthCheck), + ), + }, + resource.TestStep{ + Config: testAccComputeHttpsHealthCheck_update2, + Check: resource.ComposeTestCheckFunc( + testAccCheckComputeHttpsHealthCheckExists( + "google_compute_https_health_check.foobar", &healthCheck), + testAccCheckComputeHttpsHealthCheckRequestPath( + "/", &healthCheck), + testAccCheckComputeHttpsHealthCheckThresholds( + 10, 10, &healthCheck), + ), + }, + }, + }) +} + +func testAccCheckComputeHttpsHealthCheckDestroy(s *terraform.State) error { + config := testAccProvider.Meta().(*Config) + + for _, rs := range s.RootModule().Resources { + if rs.Type != "google_compute_https_health_check" { + continue + } + + _, err := config.clientCompute.HttpsHealthChecks.Get( + config.Project, rs.Primary.ID).Do() + if err == nil { + return fmt.Errorf("HttpsHealthCheck still exists") + } + } + + return nil +} + +func testAccCheckComputeHttpsHealthCheckExists(n string, healthCheck *compute.HttpsHealthCheck) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + config := testAccProvider.Meta().(*Config) + + found, err := config.clientCompute.HttpsHealthChecks.Get( + config.Project, rs.Primary.ID).Do() + if err != nil { + return err + } + + if found.Name != rs.Primary.ID { + return fmt.Errorf("HttpsHealthCheck not found") + } + + *healthCheck = *found + + return nil + } +} + +func testAccCheckComputeHttpsHealthCheckRequestPath(path string, healthCheck 
*compute.HttpsHealthCheck) resource.TestCheckFunc { + return func(s *terraform.State) error { + if healthCheck.RequestPath != path { + return fmt.Errorf("RequestPath doesn't match: expected %s, got %s", path, healthCheck.RequestPath) + } + + return nil + } +} + +func testAccCheckComputeHttpsHealthCheckThresholds(healthy, unhealthy int64, healthCheck *compute.HttpsHealthCheck) resource.TestCheckFunc { + return func(s *terraform.State) error { + if healthCheck.HealthyThreshold != healthy { + return fmt.Errorf("HealthyThreshold doesn't match: expected %d, got %d", healthy, healthCheck.HealthyThreshold) + } + + if healthCheck.UnhealthyThreshold != unhealthy { + return fmt.Errorf("UnhealthyThreshold doesn't match: expected %d, got %d", unhealthy, healthCheck.UnhealthyThreshold) + } + + return nil + } +} + +const testAccComputeHttpsHealthCheck_basic = ` +resource "google_compute_https_health_check" "foobar" { + check_interval_sec = 3 + description = "Resource created for Terraform acceptance testing" + healthy_threshold = 3 + host = "foobar" + name = "terraform-test" + port = "80" + request_path = "/health_check" + timeout_sec = 2 + unhealthy_threshold = 3 +} +` + +const testAccComputeHttpsHealthCheck_update1 = ` +resource "google_compute_https_health_check" "foobar" { + name = "terraform-test" + description = "Resource created for Terraform acceptance testing" + request_path = "/not_default" +} +` + +/* Change description, restore request_path to default, and change +* thresholds from defaults */ +const testAccComputeHttpsHealthCheck_update2 = ` +resource "google_compute_https_health_check" "foobar" { + name = "terraform-test" + description = "Resource updated for Terraform acceptance testing" + healthy_threshold = 10 + unhealthy_threshold = 10 +} +` diff --git a/builtin/providers/google/service_scope.go b/builtin/providers/google/service_scope.go index d4c5181251..5770dbeaa1 100644 --- a/builtin/providers/google/service_scope.go +++ b/builtin/providers/google/service_scope.go @@ -11,6 +11,7 @@ func canonicalizeServiceScope(scope string) string { "datastore": "https://www.googleapis.com/auth/datastore", "logging-write": "https://www.googleapis.com/auth/logging.write", "monitoring": "https://www.googleapis.com/auth/monitoring", + "pubsub": "https://www.googleapis.com/auth/pubsub", "sql": "https://www.googleapis.com/auth/sqlservice", "sql-admin": "https://www.googleapis.com/auth/sqlservice.admin", "storage-full": "https://www.googleapis.com/auth/devstorage.full_control", @@ -22,9 +23,9 @@ func canonicalizeServiceScope(scope string) string { "userinfo-email": "https://www.googleapis.com/auth/userinfo.email", } - if matchedUrl, ok := scopeMap[scope]; ok { - return matchedUrl - } else { - return scope + if matchedURL, ok := scopeMap[scope]; ok { + return matchedURL } + + return scope } diff --git a/builtin/providers/openstack/resource_openstack_compute_instance_v2.go b/builtin/providers/openstack/resource_openstack_compute_instance_v2.go index 4cf68de038..d21e1afedb 100644 --- a/builtin/providers/openstack/resource_openstack_compute_instance_v2.go +++ b/builtin/providers/openstack/resource_openstack_compute_instance_v2.go @@ -95,6 +95,7 @@ func resourceComputeInstanceV2() *schema.Resource { Type: schema.TypeSet, Optional: true, ForceNew: false, + Computed: true, Elem: &schema.Schema{Type: schema.TypeString}, Set: schema.HashString, }, diff --git a/builtin/providers/openstack/resource_openstack_compute_secgroup_v2.go b/builtin/providers/openstack/resource_openstack_compute_secgroup_v2.go index 
3cc0cbf0cc..e3d281b2e1 100644 --- a/builtin/providers/openstack/resource_openstack_compute_secgroup_v2.go +++ b/builtin/providers/openstack/resource_openstack_compute_secgroup_v2.go @@ -38,8 +38,9 @@ func resourceComputeSecGroupV2() *schema.Resource { ForceNew: false, }, "rule": &schema.Schema{ - Type: schema.TypeList, + Type: schema.TypeSet, Optional: true, + Computed: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "id": &schema.Schema{ @@ -79,6 +80,7 @@ func resourceComputeSecGroupV2() *schema.Resource { }, }, }, + Set: secgroupRuleV2Hash, }, }, } @@ -129,13 +131,10 @@ func resourceComputeSecGroupV2Read(d *schema.ResourceData, meta interface{}) err d.Set("name", sg.Name) d.Set("description", sg.Description) - rtm := rulesToMap(sg.Rules) - for _, v := range rtm { - if v["group"] == d.Get("name") { - v["self"] = "1" - } else { - v["self"] = "0" - } + + rtm, err := rulesToMap(computeClient, d, sg.Rules) + if err != nil { + return err } log.Printf("[DEBUG] rulesToMap(sg.Rules): %+v", rtm) d.Set("rule", rtm) @@ -164,14 +163,11 @@ func resourceComputeSecGroupV2Update(d *schema.ResourceData, meta interface{}) e if d.HasChange("rule") { oldSGRaw, newSGRaw := d.GetChange("rule") - oldSGRSlice, newSGRSlice := oldSGRaw.([]interface{}), newSGRaw.([]interface{}) - oldSGRSet := schema.NewSet(secgroupRuleV2Hash, oldSGRSlice) - newSGRSet := schema.NewSet(secgroupRuleV2Hash, newSGRSlice) + oldSGRSet, newSGRSet := oldSGRaw.(*schema.Set), newSGRaw.(*schema.Set) secgrouprulesToAdd := newSGRSet.Difference(oldSGRSet) secgrouprulesToRemove := oldSGRSet.Difference(newSGRSet) log.Printf("[DEBUG] Security group rules to add: %v", secgrouprulesToAdd) - log.Printf("[DEBUG] Security groups rules to remove: %v", secgrouprulesToRemove) for _, rawRule := range secgrouprulesToAdd.List() { @@ -231,67 +227,83 @@ func resourceComputeSecGroupV2Delete(d *schema.ResourceData, meta interface{}) e } func resourceSecGroupRulesV2(d *schema.ResourceData) []secgroups.CreateRuleOpts { - rawRules := d.Get("rule").([]interface{}) + rawRules := d.Get("rule").(*schema.Set).List() createRuleOptsList := make([]secgroups.CreateRuleOpts, len(rawRules)) - for i, raw := range rawRules { - rawMap := raw.(map[string]interface{}) - groupId := rawMap["from_group_id"].(string) - if rawMap["self"].(bool) { - groupId = d.Id() - } - createRuleOptsList[i] = secgroups.CreateRuleOpts{ - ParentGroupID: d.Id(), - FromPort: rawMap["from_port"].(int), - ToPort: rawMap["to_port"].(int), - IPProtocol: rawMap["ip_protocol"].(string), - CIDR: rawMap["cidr"].(string), - FromGroupID: groupId, - } + for i, rawRule := range rawRules { + createRuleOptsList[i] = resourceSecGroupRuleCreateOptsV2(d, rawRule) } return createRuleOptsList } -func resourceSecGroupRuleCreateOptsV2(d *schema.ResourceData, raw interface{}) secgroups.CreateRuleOpts { - rawMap := raw.(map[string]interface{}) - groupId := rawMap["from_group_id"].(string) - if rawMap["self"].(bool) { +func resourceSecGroupRuleCreateOptsV2(d *schema.ResourceData, rawRule interface{}) secgroups.CreateRuleOpts { + rawRuleMap := rawRule.(map[string]interface{}) + groupId := rawRuleMap["from_group_id"].(string) + if rawRuleMap["self"].(bool) { groupId = d.Id() } return secgroups.CreateRuleOpts{ ParentGroupID: d.Id(), - FromPort: rawMap["from_port"].(int), - ToPort: rawMap["to_port"].(int), - IPProtocol: rawMap["ip_protocol"].(string), - CIDR: rawMap["cidr"].(string), + FromPort: rawRuleMap["from_port"].(int), + ToPort: rawRuleMap["to_port"].(int), + IPProtocol: rawRuleMap["ip_protocol"].(string), + 
CIDR: rawRuleMap["cidr"].(string), FromGroupID: groupId, } } -func resourceSecGroupRuleV2(d *schema.ResourceData, raw interface{}) secgroups.Rule { - rawMap := raw.(map[string]interface{}) +func resourceSecGroupRuleV2(d *schema.ResourceData, rawRule interface{}) secgroups.Rule { + rawRuleMap := rawRule.(map[string]interface{}) return secgroups.Rule{ - ID: rawMap["id"].(string), + ID: rawRuleMap["id"].(string), ParentGroupID: d.Id(), - FromPort: rawMap["from_port"].(int), - ToPort: rawMap["to_port"].(int), - IPProtocol: rawMap["ip_protocol"].(string), - IPRange: secgroups.IPRange{CIDR: rawMap["cidr"].(string)}, + FromPort: rawRuleMap["from_port"].(int), + ToPort: rawRuleMap["to_port"].(int), + IPProtocol: rawRuleMap["ip_protocol"].(string), + IPRange: secgroups.IPRange{CIDR: rawRuleMap["cidr"].(string)}, } } -func rulesToMap(sgrs []secgroups.Rule) []map[string]interface{} { +func rulesToMap(computeClient *gophercloud.ServiceClient, d *schema.ResourceData, sgrs []secgroups.Rule) ([]map[string]interface{}, error) { sgrMap := make([]map[string]interface{}, len(sgrs)) for i, sgr := range sgrs { + groupId := "" + self := false + if sgr.Group.Name != "" { + if sgr.Group.Name == d.Get("name").(string) { + self = true + } else { + // Since Nova only returns the secgroup Name (and not the ID) for the group attribute, + // we need to look up all security groups and match the name. + // Nevermind that Nova wants the ID when setting the Group *and* that multiple groups + // with the same name can exist... + allPages, err := secgroups.List(computeClient).AllPages() + if err != nil { + return nil, err + } + securityGroups, err := secgroups.ExtractSecurityGroups(allPages) + if err != nil { + return nil, err + } + + for _, sg := range securityGroups { + if sg.Name == sgr.Group.Name { + groupId = sg.ID + } + } + } + } + sgrMap[i] = map[string]interface{}{ - "id": sgr.ID, - "from_port": sgr.FromPort, - "to_port": sgr.ToPort, - "ip_protocol": sgr.IPProtocol, - "cidr": sgr.IPRange.CIDR, - "group": sgr.Group.Name, + "id": sgr.ID, + "from_port": sgr.FromPort, + "to_port": sgr.ToPort, + "ip_protocol": sgr.IPProtocol, + "cidr": sgr.IPRange.CIDR, + "self": self, + "from_group_id": groupId, } } - return sgrMap + return sgrMap, nil } func secgroupRuleV2Hash(v interface{}) int { @@ -301,6 +313,8 @@ func secgroupRuleV2Hash(v interface{}) int { buf.WriteString(fmt.Sprintf("%d-", m["to_port"].(int))) buf.WriteString(fmt.Sprintf("%s-", m["ip_protocol"].(string))) buf.WriteString(fmt.Sprintf("%s-", m["cidr"].(string))) + buf.WriteString(fmt.Sprintf("%s-", m["from_group_id"].(string))) + buf.WriteString(fmt.Sprintf("%t-", m["self"].(bool))) return hashcode.String(buf.String()) } diff --git a/builtin/providers/openstack/resource_openstack_compute_secgroup_v2_test.go b/builtin/providers/openstack/resource_openstack_compute_secgroup_v2_test.go index e78865b8a5..4cb99fa741 100644 --- a/builtin/providers/openstack/resource_openstack_compute_secgroup_v2_test.go +++ b/builtin/providers/openstack/resource_openstack_compute_secgroup_v2_test.go @@ -19,7 +19,7 @@ func TestAccComputeV2SecGroup_basic(t *testing.T) { CheckDestroy: testAccCheckComputeV2SecGroupDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccComputeV2SecGroup_basic, + Config: testAccComputeV2SecGroup_basic_orig, Check: resource.ComposeTestCheckFunc( testAccCheckComputeV2SecGroupExists(t, "openstack_compute_secgroup_v2.foo", &secgroup), ), @@ -28,6 +28,84 @@ func TestAccComputeV2SecGroup_basic(t *testing.T) { }) } +func 
TestAccComputeV2SecGroup_update(t *testing.T) { + var secgroup secgroups.SecurityGroup + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckComputeV2SecGroupDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccComputeV2SecGroup_basic_orig, + Check: resource.ComposeTestCheckFunc( + testAccCheckComputeV2SecGroupExists(t, "openstack_compute_secgroup_v2.foo", &secgroup), + ), + }, + resource.TestStep{ + Config: testAccComputeV2SecGroup_basic_update, + Check: resource.ComposeTestCheckFunc( + testAccCheckComputeV2SecGroupExists(t, "openstack_compute_secgroup_v2.foo", &secgroup), + testAccCheckComputeV2SecGroupRuleCount(t, &secgroup, 2), + ), + }, + }, + }) +} + +func TestAccComputeV2SecGroup_groupID(t *testing.T) { + var secgroup1, secgroup2, secgroup3 secgroups.SecurityGroup + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckComputeV2SecGroupDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccComputeV2SecGroup_groupID_orig, + Check: resource.ComposeTestCheckFunc( + testAccCheckComputeV2SecGroupExists(t, "openstack_compute_secgroup_v2.test_group_1", &secgroup1), + testAccCheckComputeV2SecGroupExists(t, "openstack_compute_secgroup_v2.test_group_2", &secgroup2), + testAccCheckComputeV2SecGroupExists(t, "openstack_compute_secgroup_v2.test_group_3", &secgroup3), + testAccCheckComputeV2SecGroupGroupIDMatch(t, &secgroup1, &secgroup3), + ), + }, + resource.TestStep{ + Config: testAccComputeV2SecGroup_groupID_update, + Check: resource.ComposeTestCheckFunc( + testAccCheckComputeV2SecGroupExists(t, "openstack_compute_secgroup_v2.test_group_1", &secgroup1), + testAccCheckComputeV2SecGroupExists(t, "openstack_compute_secgroup_v2.test_group_2", &secgroup2), + testAccCheckComputeV2SecGroupExists(t, "openstack_compute_secgroup_v2.test_group_3", &secgroup3), + testAccCheckComputeV2SecGroupGroupIDMatch(t, &secgroup2, &secgroup3), + ), + }, + }, + }) +} + +func TestAccComputeV2SecGroup_self(t *testing.T) { + var secgroup secgroups.SecurityGroup + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckComputeV2SecGroupDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccComputeV2SecGroup_self, + Check: resource.ComposeTestCheckFunc( + testAccCheckComputeV2SecGroupExists(t, "openstack_compute_secgroup_v2.test_group_1", &secgroup), + testAccCheckComputeV2SecGroupGroupIDMatch(t, &secgroup, &secgroup), + resource.TestCheckResourceAttr( + "openstack_compute_secgroup_v2.test_group_1", "rule.1118853483.self", "true"), + resource.TestCheckResourceAttr( + "openstack_compute_secgroup_v2.test_group_1", "rule.1118853483.from_group_id", ""), + ), + }, + }, + }) +} + func testAccCheckComputeV2SecGroupDestroy(s *terraform.State) error { config := testAccProvider.Meta().(*Config) computeClient, err := config.computeV2Client(OS_REGION_NAME) @@ -81,10 +159,148 @@ func testAccCheckComputeV2SecGroupExists(t *testing.T, n string, secgroup *secgr } } -var testAccComputeV2SecGroup_basic = fmt.Sprintf(` - resource "openstack_compute_secgroup_v2" "foo" { - region = "%s" - name = "test_group_1" - description = "first test security group" - }`, - OS_REGION_NAME) +func testAccCheckComputeV2SecGroupRuleCount(t *testing.T, secgroup *secgroups.SecurityGroup, count int) resource.TestCheckFunc { + return func(s 
*terraform.State) error { + if len(secgroup.Rules) != count { + return fmt.Errorf("Security group rule count does not match. Expected %d, got %d", count, len(secgroup.Rules)) + } + + return nil + } +} + +func testAccCheckComputeV2SecGroupGroupIDMatch(t *testing.T, sg1, sg2 *secgroups.SecurityGroup) resource.TestCheckFunc { + return func(s *terraform.State) error { + if len(sg2.Rules) == 1 { + if sg1.Name != sg2.Rules[0].Group.Name || sg1.TenantID != sg2.Rules[0].Group.TenantID { + return fmt.Errorf("%s was not correctly applied to %s", sg1.Name, sg2.Name) + } + } else { + return fmt.Errorf("%s rule count is incorrect", sg2.Name) + } + + return nil + } +} + +var testAccComputeV2SecGroup_basic_orig = fmt.Sprintf(` + resource "openstack_compute_secgroup_v2" "foo" { + name = "test_group_1" + description = "first test security group" + rule { + from_port = 22 + to_port = 22 + ip_protocol = "tcp" + cidr = "0.0.0.0/0" + } + rule { + from_port = 1 + to_port = 65535 + ip_protocol = "udp" + cidr = "0.0.0.0/0" + } + rule { + from_port = -1 + to_port = -1 + ip_protocol = "icmp" + cidr = "0.0.0.0/0" + } + }`) + +var testAccComputeV2SecGroup_basic_update = fmt.Sprintf(` + resource "openstack_compute_secgroup_v2" "foo" { + name = "test_group_1" + description = "first test security group" + rule { + from_port = 2200 + to_port = 2200 + ip_protocol = "tcp" + cidr = "0.0.0.0/0" + } + rule { + from_port = -1 + to_port = -1 + ip_protocol = "icmp" + cidr = "0.0.0.0/0" + } +}`) + +var testAccComputeV2SecGroup_groupID_orig = fmt.Sprintf(` + resource "openstack_compute_secgroup_v2" "test_group_1" { + name = "test_group_1" + description = "first test security group" + rule { + from_port = 22 + to_port = 22 + ip_protocol = "tcp" + cidr = "0.0.0.0/0" + } + } + + resource "openstack_compute_secgroup_v2" "test_group_2" { + name = "test_group_2" + description = "second test security group" + rule { + from_port = -1 + to_port = -1 + ip_protocol = "icmp" + cidr = "0.0.0.0/0" + } + } + + resource "openstack_compute_secgroup_v2" "test_group_3" { + name = "test_group_3" + description = "third test security group" + rule { + from_port = 80 + to_port = 80 + ip_protocol = "tcp" + from_group_id = "${openstack_compute_secgroup_v2.test_group_1.id}" + } + }`) + +var testAccComputeV2SecGroup_groupID_update = fmt.Sprintf(` + resource "openstack_compute_secgroup_v2" "test_group_1" { + name = "test_group_1" + description = "first test security group" + rule { + from_port = 22 + to_port = 22 + ip_protocol = "tcp" + cidr = "0.0.0.0/0" + } + } + + resource "openstack_compute_secgroup_v2" "test_group_2" { + name = "test_group_2" + description = "second test security group" + rule { + from_port = -1 + to_port = -1 + ip_protocol = "icmp" + cidr = "0.0.0.0/0" + } + } + + resource "openstack_compute_secgroup_v2" "test_group_3" { + name = "test_group_3" + description = "third test security group" + rule { + from_port = 80 + to_port = 80 + ip_protocol = "tcp" + from_group_id = "${openstack_compute_secgroup_v2.test_group_2.id}" + } + }`) + +var testAccComputeV2SecGroup_self = fmt.Sprintf(` + resource "openstack_compute_secgroup_v2" "test_group_1" { + name = "test_group_1" + description = "first test security group" + rule { + from_port = 22 + to_port = 22 + ip_protocol = "tcp" + self = true + } + }`) diff --git a/builtin/providers/openstack/resource_openstack_networking_network_v2_test.go b/builtin/providers/openstack/resource_openstack_networking_network_v2_test.go index faa007e32f..f2f026b889 100644 --- 
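
The literal index in rule.1118853483.self above is not arbitrary: for a schema.TypeSet, state keys come from the set's hash function, here secgroupRuleV2Hash. A standalone sketch of how such an index is produced, assuming helper/hashcode is CRC-32 based and that from_port leads the buffer (only the trailing fields are visible in the hash hunk earlier in this diff):

    package main

    import (
        "bytes"
        "fmt"
        "hash/crc32"
    )

    // ruleHash sketches secgroupRuleV2Hash: serialize the identifying rule
    // attributes into a buffer and checksum it. This is an illustration of
    // the mechanism, not the exact helper/hashcode implementation.
    func ruleHash(fromPort, toPort int, proto, cidr, fromGroupID string, self bool) int {
        var buf bytes.Buffer
        buf.WriteString(fmt.Sprintf("%d-", fromPort))
        buf.WriteString(fmt.Sprintf("%d-", toPort))
        buf.WriteString(fmt.Sprintf("%s-", proto))
        buf.WriteString(fmt.Sprintf("%s-", cidr))
        buf.WriteString(fmt.Sprintf("%s-", fromGroupID))
        buf.WriteString(fmt.Sprintf("%t-", self))
        return int(crc32.ChecksumIEEE(buf.Bytes()))
    }

    func main() {
        // Indexes like the one in the test above are produced this way,
        // which is why extending the hash with from_group_id and self
        // renumbers every existing rule key in state.
        fmt.Println(ruleHash(22, 22, "tcp", "", "", true))
    }
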
a/builtin/providers/openstack/resource_openstack_networking_network_v2_test.go +++ b/builtin/providers/openstack/resource_openstack_networking_network_v2_test.go @@ -117,51 +117,53 @@ func TestAccNetworkingV2Network_fullstack(t *testing.T) { var subnet subnets.Subnet var testAccNetworkingV2Network_fullstack = fmt.Sprintf(` - resource "openstack_networking_network_v2" "foo" { - region = "%s" - name = "network_1" - admin_state_up = "true" - } + resource "openstack_networking_network_v2" "foo" { + region = "%s" + name = "network_1" + admin_state_up = "true" + } - resource "openstack_networking_subnet_v2" "foo" { - region = "%s" - name = "subnet_1" - network_id = "${openstack_networking_network_v2.foo.id}" - cidr = "192.168.199.0/24" - ip_version = 4 - } + resource "openstack_networking_subnet_v2" "foo" { + region = "%s" + name = "subnet_1" + network_id = "${openstack_networking_network_v2.foo.id}" + cidr = "192.168.199.0/24" + ip_version = 4 + } - resource "openstack_compute_secgroup_v2" "foo" { - region = "%s" - name = "secgroup_1" - description = "a security group" - rule { - from_port = 22 - to_port = 22 - ip_protocol = "tcp" - cidr = "0.0.0.0/0" - } - } + resource "openstack_compute_secgroup_v2" "foo" { + region = "%s" + name = "secgroup_1" + description = "a security group" + rule { + from_port = 22 + to_port = 22 + ip_protocol = "tcp" + cidr = "0.0.0.0/0" + } + } - resource "openstack_networking_port_v2" "foo" { - region = "%s" - name = "port_1" - network_id = "${openstack_networking_network_v2.foo.id}" - admin_state_up = "true" - security_groups = ["${openstack_compute_secgroup_v2.foo.id}"] + resource "openstack_networking_port_v2" "foo" { + region = "%s" + name = "port_1" + network_id = "${openstack_networking_network_v2.foo.id}" + admin_state_up = "true" + security_group_ids = ["${openstack_compute_secgroup_v2.foo.id}"] + fixed_ip { + "subnet_id" = "${openstack_networking_subnet_v2.foo.id}" + "ip_address" = "192.168.199.23" + } + } - depends_on = ["openstack_networking_subnet_v2.foo"] - } + resource "openstack_compute_instance_v2" "foo" { + region = "%s" + name = "terraform-test" + security_groups = ["${openstack_compute_secgroup_v2.foo.name}"] - resource "openstack_compute_instance_v2" "foo" { - region = "%s" - name = "terraform-test" - security_groups = ["${openstack_compute_secgroup_v2.foo.name}"] - - network { - port = "${openstack_networking_port_v2.foo.id}" - } - }`, region, region, region, region, region) + network { + port = "${openstack_networking_port_v2.foo.id}" + } + }`, region, region, region, region, region) resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, diff --git a/builtin/providers/openstack/resource_openstack_networking_port_v2.go b/builtin/providers/openstack/resource_openstack_networking_port_v2.go index 701e42c05c..0b8d33ad5a 100644 --- a/builtin/providers/openstack/resource_openstack_networking_port_v2.go +++ b/builtin/providers/openstack/resource_openstack_networking_port_v2.go @@ -3,7 +3,6 @@ package openstack import ( "fmt" "log" - "strconv" "time" "github.com/hashicorp/terraform/helper/hashcode" @@ -39,7 +38,7 @@ func resourceNetworkingPortV2() *schema.Resource { ForceNew: true, }, "admin_state_up": &schema.Schema{ - Type: schema.TypeString, + Type: schema.TypeBool, Optional: true, ForceNew: false, Computed: true, @@ -62,7 +61,7 @@ func resourceNetworkingPortV2() *schema.Resource { ForceNew: true, Computed: true, }, - "security_groups": &schema.Schema{ + "security_group_ids": &schema.Schema{ Type: schema.TypeSet, Optional: true, 
ForceNew: false, @@ -78,6 +77,23 @@ func resourceNetworkingPortV2() *schema.Resource { ForceNew: true, Computed: true, }, + "fixed_ip": &schema.Schema{ + Type: schema.TypeList, + Optional: true, + ForceNew: false, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "subnet_id": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + "ip_address": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + }, + }, + }, }, } } @@ -98,6 +114,7 @@ func resourceNetworkingPortV2Create(d *schema.ResourceData, meta interface{}) er DeviceOwner: d.Get("device_owner").(string), SecurityGroups: resourcePortSecurityGroupsV2(d), DeviceID: d.Get("device_id").(string), + FixedIPs: resourcePortFixedIpsV2(d), } log.Printf("[DEBUG] Create Options: %#v", createOpts) @@ -139,13 +156,14 @@ func resourceNetworkingPortV2Read(d *schema.ResourceData, meta interface{}) erro log.Printf("[DEBUG] Retreived Port %s: %+v", d.Id(), p) d.Set("name", p.Name) - d.Set("admin_state_up", strconv.FormatBool(p.AdminStateUp)) + d.Set("admin_state_up", p.AdminStateUp) d.Set("network_id", p.NetworkID) d.Set("mac_address", p.MACAddress) d.Set("tenant_id", p.TenantID) d.Set("device_owner", p.DeviceOwner) - d.Set("security_groups", p.SecurityGroups) + d.Set("security_group_ids", p.SecurityGroups) d.Set("device_id", p.DeviceID) + d.Set("fixed_ip", p.FixedIPs) return nil } @@ -171,7 +189,7 @@ func resourceNetworkingPortV2Update(d *schema.ResourceData, meta interface{}) er updateOpts.DeviceOwner = d.Get("device_owner").(string) } - if d.HasChange("security_groups") { + if d.HasChange("security_group_ids") { updateOpts.SecurityGroups = resourcePortSecurityGroupsV2(d) } @@ -179,6 +197,10 @@ func resourceNetworkingPortV2Update(d *schema.ResourceData, meta interface{}) er updateOpts.DeviceID = d.Get("device_id").(string) } + if d.HasChange("fixed_ip") { + updateOpts.FixedIPs = resourcePortFixedIpsV2(d) + } + log.Printf("[DEBUG] Updating Port %s with options: %+v", d.Id(), updateOpts) _, err = ports.Update(networkingClient, d.Id(), updateOpts).Extract() @@ -215,7 +237,7 @@ func resourceNetworkingPortV2Delete(d *schema.ResourceData, meta interface{}) er } func resourcePortSecurityGroupsV2(d *schema.ResourceData) []string { - rawSecurityGroups := d.Get("security_groups").(*schema.Set) + rawSecurityGroups := d.Get("security_group_ids").(*schema.Set) groups := make([]string, rawSecurityGroups.Len()) for i, raw := range rawSecurityGroups.List() { groups[i] = raw.(string) @@ -223,10 +245,24 @@ func resourcePortSecurityGroupsV2(d *schema.ResourceData) []string { return groups } +func resourcePortFixedIpsV2(d *schema.ResourceData) []ports.IP { + rawIP := d.Get("fixed_ip").([]interface{}) + ip := make([]ports.IP, len(rawIP)) + for i, raw := range rawIP { + rawMap := raw.(map[string]interface{}) + ip[i] = ports.IP{ + SubnetID: rawMap["subnet_id"].(string), + IPAddress: rawMap["ip_address"].(string), + } + } + + return ip +} + func resourcePortAdminStateUpV2(d *schema.ResourceData) *bool { value := false - if raw, ok := d.GetOk("admin_state_up"); ok && raw == "true" { + if raw, ok := d.GetOk("admin_state_up"); ok && raw == true { value = true } diff --git a/builtin/providers/openstack/resource_openstack_networking_port_v2_test.go b/builtin/providers/openstack/resource_openstack_networking_port_v2_test.go index edeb619011..2250ba36d3 100644 --- a/builtin/providers/openstack/resource_openstack_networking_port_v2_test.go +++ b/builtin/providers/openstack/resource_openstack_networking_port_v2_test.go @@ -10,6 +10,7 @@ import 
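
The new fixed_ip block is a schema.TypeList, so resourcePortFixedIpsV2 receives it as a []interface{} of maps in declaration order, presumably a list rather than a set because the ordering of a port's fixed IPs is meaningful to Neutron. A self-contained sketch of the same expansion, with a local struct standing in for ports.IP:

    package main

    import "fmt"

    // fixedIP stands in for gophercloud's ports.IP.
    type fixedIP struct {
        SubnetID  string
        IPAddress string
    }

    // expandFixedIPs mirrors resourcePortFixedIpsV2: each list element is a
    // map[string]interface{} keyed by the nested schema's attribute names,
    // and the input order is preserved in the output slice.
    func expandFixedIPs(raw []interface{}) []fixedIP {
        ips := make([]fixedIP, len(raw))
        for i, r := range raw {
            m := r.(map[string]interface{})
            ips[i] = fixedIP{
                SubnetID:  m["subnet_id"].(string),
                IPAddress: m["ip_address"].(string),
            }
        }
        return ips
    }

    func main() {
        raw := []interface{}{
            map[string]interface{}{"subnet_id": "subnet-1", "ip_address": "192.168.199.23"},
        }
        fmt.Printf("%+v\n", expandFixedIPs(raw))
    }
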
( "github.com/rackspace/gophercloud/openstack/networking/v2/networks" "github.com/rackspace/gophercloud/openstack/networking/v2/ports" + "github.com/rackspace/gophercloud/openstack/networking/v2/subnets" ) func TestAccNetworkingV2Port_basic(t *testing.T) { @@ -17,6 +18,7 @@ func TestAccNetworkingV2Port_basic(t *testing.T) { var network networks.Network var port ports.Port + var subnet subnets.Subnet var testAccNetworkingV2Port_basic = fmt.Sprintf(` resource "openstack_networking_network_v2" "foo" { @@ -25,12 +27,24 @@ func TestAccNetworkingV2Port_basic(t *testing.T) { admin_state_up = "true" } + resource "openstack_networking_subnet_v2" "foo" { + region = "%s" + name = "subnet_1" + network_id = "${openstack_networking_network_v2.foo.id}" + cidr = "192.168.199.0/24" + ip_version = 4 + } + resource "openstack_networking_port_v2" "foo" { region = "%s" name = "port_1" network_id = "${openstack_networking_network_v2.foo.id}" admin_state_up = "true" - }`, region, region) + fixed_ip { + subnet_id = "${openstack_networking_subnet_v2.foo.id}" + ip_address = "192.168.199.23" + } + }`, region, region, region) resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -40,6 +54,7 @@ func TestAccNetworkingV2Port_basic(t *testing.T) { resource.TestStep{ Config: testAccNetworkingV2Port_basic, Check: resource.ComposeTestCheckFunc( + testAccCheckNetworkingV2SubnetExists(t, "openstack_networking_subnet_v2.foo", &subnet), testAccCheckNetworkingV2NetworkExists(t, "openstack_networking_network_v2.foo", &network), testAccCheckNetworkingV2PortExists(t, "openstack_networking_port_v2.foo", &port), ), diff --git a/builtin/providers/openstack/resource_openstack_networking_router_interface_v2.go b/builtin/providers/openstack/resource_openstack_networking_router_interface_v2.go index 2fc0b4bbb6..8241a6f446 100644 --- a/builtin/providers/openstack/resource_openstack_networking_router_interface_v2.go +++ b/builtin/providers/openstack/resource_openstack_networking_router_interface_v2.go @@ -33,7 +33,12 @@ func resourceNetworkingRouterInterfaceV2() *schema.Resource { }, "subnet_id": &schema.Schema{ Type: schema.TypeString, - Required: true, + Optional: true, + ForceNew: true, + }, + "port_id": &schema.Schema{ + Type: schema.TypeString, + Optional: true, ForceNew: true, }, }, @@ -49,6 +54,7 @@ func resourceNetworkingRouterInterfaceV2Create(d *schema.ResourceData, meta inte createOpts := routers.InterfaceOpts{ SubnetID: d.Get("subnet_id").(string), + PortID: d.Get("port_id").(string), } log.Printf("[DEBUG] Create Options: %#v", createOpts) @@ -148,6 +154,7 @@ func waitForRouterInterfaceDelete(networkingClient *gophercloud.ServiceClient, d removeOpts := routers.InterfaceOpts{ SubnetID: d.Get("subnet_id").(string), + PortID: d.Get("port_id").(string), } r, err := ports.Get(networkingClient, routerInterfaceId).Extract() diff --git a/builtin/providers/openstack/resource_openstack_networking_router_interface_v2_test.go b/builtin/providers/openstack/resource_openstack_networking_router_interface_v2_test.go index be3b12c0b5..4094941dce 100644 --- a/builtin/providers/openstack/resource_openstack_networking_router_interface_v2_test.go +++ b/builtin/providers/openstack/resource_openstack_networking_router_interface_v2_test.go @@ -7,18 +7,53 @@ import ( "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" + "github.com/rackspace/gophercloud/openstack/networking/v2/extensions/layer3/routers" + "github.com/rackspace/gophercloud/openstack/networking/v2/networks" 
"github.com/rackspace/gophercloud/openstack/networking/v2/ports" + "github.com/rackspace/gophercloud/openstack/networking/v2/subnets" ) -func TestAccNetworkingV2RouterInterface_basic(t *testing.T) { +func TestAccNetworkingV2RouterInterface_basic_subnet(t *testing.T) { + var network networks.Network + var router routers.Router + var subnet subnets.Subnet + resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckNetworkingV2RouterInterfaceDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccNetworkingV2RouterInterface_basic, + Config: testAccNetworkingV2RouterInterface_basic_subnet, Check: resource.ComposeTestCheckFunc( + testAccCheckNetworkingV2NetworkExists(t, "openstack_networking_network_v2.network_1", &network), + testAccCheckNetworkingV2SubnetExists(t, "openstack_networking_subnet_v2.subnet_1", &subnet), + testAccCheckNetworkingV2RouterExists(t, "openstack_networking_router_v2.router_1", &router), + testAccCheckNetworkingV2RouterInterfaceExists(t, "openstack_networking_router_interface_v2.int_1"), + ), + }, + }, + }) +} + +func TestAccNetworkingV2RouterInterface_basic_port(t *testing.T) { + var network networks.Network + var port ports.Port + var router routers.Router + var subnet subnets.Subnet + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckNetworkingV2RouterInterfaceDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccNetworkingV2RouterInterface_basic_port, + Check: resource.ComposeTestCheckFunc( + testAccCheckNetworkingV2NetworkExists(t, "openstack_networking_network_v2.network_1", &network), + testAccCheckNetworkingV2SubnetExists(t, "openstack_networking_subnet_v2.subnet_1", &subnet), + testAccCheckNetworkingV2RouterExists(t, "openstack_networking_router_v2.router_1", &router), + testAccCheckNetworkingV2PortExists(t, "openstack_networking_port_v2.port_1", &port), testAccCheckNetworkingV2RouterInterfaceExists(t, "openstack_networking_router_interface_v2.int_1"), ), }, @@ -77,24 +112,56 @@ func testAccCheckNetworkingV2RouterInterfaceExists(t *testing.T, n string) resou } } -var testAccNetworkingV2RouterInterface_basic = fmt.Sprintf(` -resource "openstack_networking_router_v2" "router_1" { - name = "router_1" - admin_state_up = "true" -} +var testAccNetworkingV2RouterInterface_basic_subnet = fmt.Sprintf(` + resource "openstack_networking_router_v2" "router_1" { + name = "router_1" + admin_state_up = "true" + } -resource "openstack_networking_router_interface_v2" "int_1" { - subnet_id = "${openstack_networking_subnet_v2.subnet_1.id}" - router_id = "${openstack_networking_router_v2.router_1.id}" -} + resource "openstack_networking_router_interface_v2" "int_1" { + subnet_id = "${openstack_networking_subnet_v2.subnet_1.id}" + router_id = "${openstack_networking_router_v2.router_1.id}" + } -resource "openstack_networking_network_v2" "network_1" { - name = "network_1" - admin_state_up = "true" -} + resource "openstack_networking_network_v2" "network_1" { + name = "network_1" + admin_state_up = "true" + } -resource "openstack_networking_subnet_v2" "subnet_1" { - network_id = "${openstack_networking_network_v2.network_1.id}" - cidr = "192.168.199.0/24" - ip_version = 4 -}`) + resource "openstack_networking_subnet_v2" "subnet_1" { + network_id = "${openstack_networking_network_v2.network_1.id}" + cidr = "192.168.199.0/24" + ip_version = 4 + }`) + +var 
testAccNetworkingV2RouterInterface_basic_port = fmt.Sprintf(` + resource "openstack_networking_router_v2" "router_1" { + name = "router_1" + admin_state_up = "true" + } + + resource "openstack_networking_router_interface_v2" "int_1" { + router_id = "${openstack_networking_router_v2.router_1.id}" + port_id = "${openstack_networking_port_v2.port_1.id}" + } + + resource "openstack_networking_network_v2" "network_1" { + name = "network_1" + admin_state_up = "true" + } + + resource "openstack_networking_subnet_v2" "subnet_1" { + network_id = "${openstack_networking_network_v2.network_1.id}" + cidr = "192.168.199.0/24" + ip_version = 4 + } + + resource "openstack_networking_port_v2" "port_1" { + name = "port_1" + network_id = "${openstack_networking_network_v2.network_1.id}" + admin_state_up = "true" + fixed_ip { + subnet_id = "${openstack_networking_subnet_v2.subnet_1.id}" + ip_address = "192.168.199.1" + } + }`) diff --git a/builtin/providers/packet/resource_packet_device.go b/builtin/providers/packet/resource_packet_device.go index 56fc7afe55..6bee26cf96 100644 --- a/builtin/providers/packet/resource_packet_device.go +++ b/builtin/providers/packet/resource_packet_device.go @@ -158,7 +158,7 @@ func resourcePacketDeviceCreate(d *schema.ResourceData, meta interface{}) error log.Printf("[INFO] Device ID: %s", d.Id()) - _, err = WaitForDeviceAttribute(d, "active", []string{"provisioning"}, "state", meta) + _, err = WaitForDeviceAttribute(d, "active", []string{"queued", "provisioning"}, "state", meta) if err != nil { return fmt.Errorf( "Error waiting for device (%s) to become ready: %s", d.Id(), err) diff --git a/builtin/providers/template/resource.go b/builtin/providers/template/resource.go index 9019dcfc93..8022c064be 100644 --- a/builtin/providers/template/resource.go +++ b/builtin/providers/template/resource.go @@ -4,7 +4,6 @@ import ( "crypto/sha256" "encoding/hex" "fmt" - "io/ioutil" "log" "os" "path/filepath" @@ -12,8 +11,8 @@ import ( "github.com/hashicorp/terraform/config" "github.com/hashicorp/terraform/config/lang" "github.com/hashicorp/terraform/config/lang/ast" + "github.com/hashicorp/terraform/helper/pathorcontents" "github.com/hashicorp/terraform/helper/schema" - "github.com/mitchellh/go-homedir" ) func resource() *schema.Resource { @@ -24,13 +23,23 @@ func resource() *schema.Resource { Read: Read, Schema: map[string]*schema.Schema{ + "template": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Description: "Contents of the template", + ForceNew: true, + ConflictsWith: []string{"filename"}, + }, "filename": &schema.Schema{ Type: schema.TypeString, - Required: true, + Optional: true, Description: "file to read template from", ForceNew: true, // Make a "best effort" attempt to relativize the file path. 
StateFunc: func(v interface{}) string { + if v == nil || v.(string) == "" { + return "" + } pwd, err := os.Getwd() if err != nil { return v.(string) @@ -41,6 +50,8 @@ func resource() *schema.Resource { } return rel }, + Deprecated: "Use the 'template' attribute instead.", + ConflictsWith: []string{"template"}, }, "vars": &schema.Schema{ Type: schema.TypeMap, @@ -96,23 +107,21 @@ func Read(d *schema.ResourceData, meta interface{}) error { type templateRenderError error -var readfile func(string) ([]byte, error) = ioutil.ReadFile // testing hook - func render(d *schema.ResourceData) (string, error) { + template := d.Get("template").(string) filename := d.Get("filename").(string) vars := d.Get("vars").(map[string]interface{}) - path, err := homedir.Expand(filename) + if template == "" && filename != "" { + template = filename + } + + contents, _, err := pathorcontents.Read(template) if err != nil { return "", err } - buf, err := readfile(path) - if err != nil { - return "", err - } - - rendered, err := execute(string(buf), vars) + rendered, err := execute(contents, vars) if err != nil { return "", templateRenderError( fmt.Errorf("failed to render %v: %v", filename, err), diff --git a/builtin/providers/template/resource_test.go b/builtin/providers/template/resource_test.go index 7f461325a2..91882d9d37 100644 --- a/builtin/providers/template/resource_test.go +++ b/builtin/providers/template/resource_test.go @@ -26,15 +26,10 @@ func TestTemplateRendering(t *testing.T) { for _, tt := range cases { r.Test(t, r.TestCase{ - PreCheck: func() { - readfile = func(string) ([]byte, error) { - return []byte(tt.template), nil - } - }, Providers: testProviders, Steps: []r.TestStep{ r.TestStep{ - Config: testTemplateConfig(tt.vars), + Config: testTemplateConfig(tt.template, tt.vars), Check: func(s *terraform.State) error { got := s.RootModule().Outputs["rendered"] if tt.want != got { @@ -62,14 +57,7 @@ func TestTemplateVariableChange(t *testing.T) { var testSteps []r.TestStep for i, step := range steps { testSteps = append(testSteps, r.TestStep{ - PreConfig: func(template string) func() { - return func() { - readfile = func(string) ([]byte, error) { - return []byte(template), nil - } - } - }(step.template), - Config: testTemplateConfig(step.vars), + Config: testTemplateConfig(step.template, step.vars), Check: func(i int, want string) r.TestCheckFunc { return func(s *terraform.State) error { got := s.RootModule().Outputs["rendered"] @@ -88,14 +76,13 @@ func TestTemplateVariableChange(t *testing.T) { }) } -func testTemplateConfig(vars string) string { - return ` -resource "template_file" "t0" { - filename = "mock" - vars = ` + vars + ` -} -output "rendered" { - value = "${template_file.t0.rendered}" -} - ` +func testTemplateConfig(template, vars string) string { + return fmt.Sprintf(` + resource "template_file" "t0" { + template = "%s" + vars = %s + } + output "rendered" { + value = "${template_file.t0.rendered}" + }`, template, vars) } diff --git a/builtin/provisioners/chef/resource_provisioner.go b/builtin/provisioners/chef/resource_provisioner.go index 50b5666ee1..c0e040b5f9 100644 --- a/builtin/provisioners/chef/resource_provisioner.go +++ b/builtin/provisioners/chef/resource_provisioner.go @@ -8,7 +8,7 @@ import ( "io" "log" "os" - "path" + "path/filepath" "regexp" "strings" "text/template" @@ -16,6 +16,7 @@ import ( "github.com/hashicorp/terraform/communicator" "github.com/hashicorp/terraform/communicator/remote" + "github.com/hashicorp/terraform/helper/pathorcontents" 
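
The template resource now funnels both the new template attribute and the deprecated filename through helper/pathorcontents, whose Read call (per the render hunk above) returns the contents, a flag, and an error. A self-contained analogue of the idea, assuming the flag reports whether a file was actually read:

    package main

    import (
        "fmt"
        "io/ioutil"
        "os"

        "github.com/mitchellh/go-homedir"
    )

    // readPathOrContents sketches what the helper is used for here: if the
    // string names an existing file (after ~ expansion), return the file's
    // contents; otherwise treat the string itself as the contents.
    func readPathOrContents(pc string) (string, bool, error) {
        if expanded, err := homedir.Expand(pc); err == nil {
            if _, err := os.Stat(expanded); err == nil {
                b, err := ioutil.ReadFile(expanded)
                return string(b), true, err
            }
        }
        return pc, false, nil
    }

    func main() {
        contents, wasPath, _ := readPathOrContents("inline template ${foo}")
        fmt.Println(wasPath, contents) // false inline template ${foo}
    }

This one helper is what lets inline contents and the legacy path-based attributes coexist through the deprecation window.
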
"github.com/hashicorp/terraform/terraform" "github.com/mitchellh/go-homedir" "github.com/mitchellh/go-linereader" @@ -79,18 +80,22 @@ type Provisioner struct { OSType string `mapstructure:"os_type"` PreventSudo bool `mapstructure:"prevent_sudo"` RunList []string `mapstructure:"run_list"` - SecretKeyPath string `mapstructure:"secret_key_path"` + SecretKey string `mapstructure:"secret_key"` ServerURL string `mapstructure:"server_url"` SkipInstall bool `mapstructure:"skip_install"` SSLVerifyMode string `mapstructure:"ssl_verify_mode"` ValidationClientName string `mapstructure:"validation_client_name"` - ValidationKeyPath string `mapstructure:"validation_key_path"` + ValidationKey string `mapstructure:"validation_key"` Version string `mapstructure:"version"` installChefClient func(terraform.UIOutput, communicator.Communicator) error createConfigFiles func(terraform.UIOutput, communicator.Communicator) error runChefClient func(terraform.UIOutput, communicator.Communicator) error useSudo bool + + // Deprecated Fields + SecretKeyPath string `mapstructure:"secret_key_path"` + ValidationKeyPath string `mapstructure:"validation_key_path"` } // ResourceProvisioner represents a generic chef provisioner @@ -189,8 +194,9 @@ func (r *ResourceProvisioner) Validate(c *terraform.ResourceConfig) (ws []string if p.ValidationClientName == "" { es = append(es, fmt.Errorf("Key not found: validation_client_name")) } - if p.ValidationKeyPath == "" { - es = append(es, fmt.Errorf("Key not found: validation_key_path")) + if p.ValidationKey == "" && p.ValidationKeyPath == "" { + es = append(es, fmt.Errorf( + "One of validation_key or the deprecated validation_key_path must be provided")) } if p.UsePolicyfile && p.PolicyName == "" { es = append(es, fmt.Errorf("Policyfile enabled but key not found: policy_name")) @@ -198,6 +204,14 @@ func (r *ResourceProvisioner) Validate(c *terraform.ResourceConfig) (ws []string if p.UsePolicyfile && p.PolicyGroup == "" { es = append(es, fmt.Errorf("Policyfile enabled but key not found: policy_group")) } + if p.ValidationKeyPath != "" { + ws = append(ws, "validation_key_path is deprecated, please use "+ + "validation_key instead and load the key contents via file()") + } + if p.SecretKeyPath != "" { + ws = append(ws, "secret_key_path is deprecated, please use "+ + "secret_key instead and load the key contents via file()") + } return ws, es } @@ -247,20 +261,12 @@ func (r *ResourceProvisioner) decodeConfig(c *terraform.ResourceConfig) (*Provis p.OhaiHints[i] = hintPath } - if p.ValidationKeyPath != "" { - keyPath, err := homedir.Expand(p.ValidationKeyPath) - if err != nil { - return nil, fmt.Errorf("Error expanding the validation key path: %v", err) - } - p.ValidationKeyPath = keyPath + if p.ValidationKey == "" && p.ValidationKeyPath != "" { + p.ValidationKey = p.ValidationKeyPath } - if p.SecretKeyPath != "" { - keyPath, err := homedir.Expand(p.SecretKeyPath) - if err != nil { - return nil, fmt.Errorf("Error expanding the secret key path: %v", err) - } - p.SecretKeyPath = keyPath + if p.SecretKey == "" && p.SecretKeyPath != "" { + p.SecretKey = p.SecretKeyPath } if attrs, ok := c.Config["attributes"]; ok { @@ -316,7 +322,7 @@ func (p *Provisioner) runChefClientFunc( chefCmd string, confDir string) func(terraform.UIOutput, communicator.Communicator) error { return func(o terraform.UIOutput, comm communicator.Communicator) error { - fb := path.Join(confDir, firstBoot) + fb := filepath.Join(confDir, firstBoot) var cmd string // Policyfiles do not support chef environments, so don't pass 
the `-E` flag. @@ -331,8 +337,8 @@ func (p *Provisioner) runChefClientFunc( return fmt.Errorf("Error creating logfile directory %s: %v", logfileDir, err) } - logFile := path.Join(logfileDir, p.NodeName) - f, err := os.Create(path.Join(logFile)) + logFile := filepath.Join(logfileDir, p.NodeName) + f, err := os.Create(filepath.Join(logFile)) if err != nil { return fmt.Errorf("Error creating logfile %s: %v", logFile, err) } @@ -348,7 +354,7 @@ func (p *Provisioner) runChefClientFunc( // Output implementation of terraform.UIOutput interface func (p *Provisioner) Output(output string) { - logFile := path.Join(logfileDir, p.NodeName) + logFile := filepath.Join(logfileDir, p.NodeName) f, err := os.OpenFile(logFile, os.O_APPEND|os.O_WRONLY, 0666) if err != nil { log.Printf("Error creating logfile %s: %v", logFile, err) @@ -376,28 +382,25 @@ func (p *Provisioner) deployConfigFiles( o terraform.UIOutput, comm communicator.Communicator, confDir string) error { - // Open the validation key file - f, err := os.Open(p.ValidationKeyPath) + contents, _, err := pathorcontents.Read(p.ValidationKey) if err != nil { return err } - defer f.Close() + f := strings.NewReader(contents) // Copy the validation key to the new instance - if err := comm.Upload(path.Join(confDir, validationKey), f); err != nil { + if err := comm.Upload(filepath.Join(confDir, validationKey), f); err != nil { return fmt.Errorf("Uploading %s failed: %v", validationKey, err) } - if p.SecretKeyPath != "" { - // Open the secret key file - s, err := os.Open(p.SecretKeyPath) + if p.SecretKey != "" { + contents, _, err := pathorcontents.Read(p.SecretKey) if err != nil { return err } - defer s.Close() - + s := strings.NewReader(contents) // Copy the secret key to the new instance - if err := comm.Upload(path.Join(confDir, secretKey), s); err != nil { + if err := comm.Upload(filepath.Join(confDir, secretKey), s); err != nil { return fmt.Errorf("Uploading %s failed: %v", secretKey, err) } } @@ -417,7 +420,7 @@ func (p *Provisioner) deployConfigFiles( } // Copy the client config to the new instance - if err := comm.Upload(path.Join(confDir, clienrb), &buf); err != nil { + if err := comm.Upload(filepath.Join(confDir, clienrb), &buf); err != nil { return fmt.Errorf("Uploading %s failed: %v", clienrb, err) } @@ -446,7 +449,7 @@ func (p *Provisioner) deployConfigFiles( } // Copy the first-boot.json to the new instance - if err := comm.Upload(path.Join(confDir, firstBoot), bytes.NewReader(d)); err != nil { + if err := comm.Upload(filepath.Join(confDir, firstBoot), bytes.NewReader(d)); err != nil { return fmt.Errorf("Uploading %s failed: %v", firstBoot, err) } @@ -466,8 +469,8 @@ func (p *Provisioner) deployOhaiHints( defer f.Close() // Copy the hint to the new instance - if err := comm.Upload(path.Join(hintDir, path.Base(hint)), f); err != nil { - return fmt.Errorf("Uploading %s failed: %v", path.Base(hint), err) + if err := comm.Upload(filepath.Join(hintDir, filepath.Base(hint)), f); err != nil { + return fmt.Errorf("Uploading %s failed: %v", filepath.Base(hint), err) } } diff --git a/builtin/provisioners/chef/resource_provisioner_test.go b/builtin/provisioners/chef/resource_provisioner_test.go index 78c44c7ea3..40625196a7 100644 --- a/builtin/provisioners/chef/resource_provisioner_test.go +++ b/builtin/provisioners/chef/resource_provisioner_test.go @@ -22,7 +22,7 @@ func TestResourceProvider_Validate_good(t *testing.T) { "run_list": []interface{}{"cookbook::recipe"}, "server_url": "https://chef.local", "validation_client_name": "validator", - 
"validation_key_path": "validator.pem", + "validation_key": "contentsofsomevalidator.pem", }) r := new(ResourceProvisioner) warn, errs := r.Validate(c) diff --git a/command/apply.go b/command/apply.go index 0687116a8a..62ed3dd9ab 100644 --- a/command/apply.go +++ b/command/apply.go @@ -111,11 +111,27 @@ func (c *ApplyCommand) Run(args []string) int { return 1 } if !destroyForce && c.Destroy { + // Default destroy message + desc := "Terraform will delete all your managed infrastructure.\n" + + "There is no undo. Only 'yes' will be accepted to confirm." + + // If targets are specified, list those to user + if c.Meta.targets != nil { + var descBuffer bytes.Buffer + descBuffer.WriteString("Terraform will delete the following infrastructure:\n") + for _, target := range c.Meta.targets { + descBuffer.WriteString("\t") + descBuffer.WriteString(target) + descBuffer.WriteString("\n") + } + descBuffer.WriteString("There is no undo. Only 'yes' will be accepted to confirm") + desc = descBuffer.String() + } + v, err := c.UIInput().Input(&terraform.InputOpts{ - Id: "destroy", - Query: "Do you really want to destroy?", - Description: "Terraform will delete all your managed infrastructure.\n" + - "There is no undo. Only 'yes' will be accepted to confirm.", + Id: "destroy", + Query: "Do you really want to destroy?", + Description: desc, }) if err != nil { c.Ui.Error(fmt.Sprintf("Error asking for confirmation: %s", err)) diff --git a/command/counthookaction_string.go b/command/counthookaction_string.go index 8b90dc50bc..c0c40d0de6 100644 --- a/command/counthookaction_string.go +++ b/command/counthookaction_string.go @@ -1,4 +1,4 @@ -// generated by stringer -type=countHookAction hook_count_action.go; DO NOT EDIT +// Code generated by "stringer -type=countHookAction hook_count_action.go"; DO NOT EDIT package command diff --git a/command/plan.go b/command/plan.go index cd1aeaec6f..8c5fda5cc9 100644 --- a/command/plan.go +++ b/command/plan.go @@ -68,14 +68,16 @@ func (c *PlanCommand) Run(args []string) int { c.Ui.Error(err.Error()) return 1 } - if !validateContext(ctx, c.Ui) { - return 1 - } + if err := ctx.Input(c.InputMode()); err != nil { c.Ui.Error(fmt.Sprintf("Error configuring: %s", err)) return 1 } + if !validateContext(ctx, c.Ui) { + return 1 + } + if refresh { c.Ui.Output("Refreshing Terraform state prior to plan...\n") state, err := ctx.Refresh() diff --git a/command/plan_test.go b/command/plan_test.go index d49200a5fe..d0d14bc567 100644 --- a/command/plan_test.go +++ b/command/plan_test.go @@ -1,6 +1,7 @@ package command import ( + "bytes" "io/ioutil" "os" "path/filepath" @@ -330,6 +331,30 @@ func TestPlan_vars(t *testing.T) { } } +func TestPlan_varsUnset(t *testing.T) { + // Disable test mode so input would be asked + test = false + defer func() { test = true }() + + defaultInputReader = bytes.NewBufferString("bar\n") + + p := testProvider() + ui := new(cli.MockUi) + c := &PlanCommand{ + Meta: Meta{ + ContextOpts: testCtxConfig(p), + Ui: ui, + }, + } + + args := []string{ + testFixturePath("plan-vars"), + } + if code := c.Run(args); code != 0 { + t.Fatalf("bad: %d\n\n%s", code, ui.ErrorWriter.String()) + } +} + func TestPlan_varFile(t *testing.T) { varFilePath := testTempFile(t) if err := ioutil.WriteFile(varFilePath, []byte(planVarFile), 0644); err != nil { diff --git a/communicator/ssh/communicator.go b/communicator/ssh/communicator.go index 14f3563584..d37d6757bd 100644 --- a/communicator/ssh/communicator.go +++ b/communicator/ssh/communicator.go @@ -93,7 +93,7 @@ func (c *Communicator) 
Connect(o terraform.UIOutput) (err error) { " SSH Agent: %t", c.connInfo.Host, c.connInfo.User, c.connInfo.Password != "", - c.connInfo.KeyFile != "", + c.connInfo.PrivateKey != "", c.connInfo.Agent, )) @@ -107,7 +107,7 @@ func (c *Communicator) Connect(o terraform.UIOutput) (err error) { " SSH Agent: %t", c.connInfo.BastionHost, c.connInfo.BastionUser, c.connInfo.BastionPassword != "", - c.connInfo.BastionKeyFile != "", + c.connInfo.BastionPrivateKey != "", c.connInfo.Agent, )) } diff --git a/communicator/ssh/provisioner.go b/communicator/ssh/provisioner.go index 813db57283..f9f889037e 100644 --- a/communicator/ssh/provisioner.go +++ b/communicator/ssh/provisioner.go @@ -3,14 +3,13 @@ package ssh import ( "encoding/pem" "fmt" - "io/ioutil" "log" "net" "os" "time" + "github.com/hashicorp/terraform/helper/pathorcontents" "github.com/hashicorp/terraform/terraform" - "github.com/mitchellh/go-homedir" "github.com/mitchellh/mapstructure" "golang.org/x/crypto/ssh" "golang.org/x/crypto/ssh/agent" @@ -37,7 +36,7 @@ const ( type connectionInfo struct { User string Password string - KeyFile string `mapstructure:"key_file"` + PrivateKey string `mapstructure:"private_key"` Host string Port int Agent bool @@ -45,11 +44,15 @@ type connectionInfo struct { ScriptPath string `mapstructure:"script_path"` TimeoutVal time.Duration `mapstructure:"-"` - BastionUser string `mapstructure:"bastion_user"` - BastionPassword string `mapstructure:"bastion_password"` - BastionKeyFile string `mapstructure:"bastion_key_file"` - BastionHost string `mapstructure:"bastion_host"` - BastionPort int `mapstructure:"bastion_port"` + BastionUser string `mapstructure:"bastion_user"` + BastionPassword string `mapstructure:"bastion_password"` + BastionPrivateKey string `mapstructure:"bastion_private_key"` + BastionHost string `mapstructure:"bastion_host"` + BastionPort int `mapstructure:"bastion_port"` + + // Deprecated + KeyFile string `mapstructure:"key_file"` + BastionKeyFile string `mapstructure:"bastion_key_file"` } // parseConnectionInfo is used to convert the ConnInfo of the InstanceState into @@ -92,6 +95,15 @@ func parseConnectionInfo(s *terraform.InstanceState) (*connectionInfo, error) { connInfo.TimeoutVal = DefaultTimeout } + // Load deprecated fields; we can handle either path or contents in + // underlying implementation. 
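
The readPrivateKey rewrite further down this hunk keeps a useful ordering: pem.Decode runs before ssh.ParsePrivateKey so that an encrypted key can be reported in plain terms instead of as a generic parse failure. The same shape, reduced to a self-contained sketch with shortened error strings:

    package main

    import (
        "encoding/pem"
        "fmt"

        "golang.org/x/crypto/ssh"
    )

    // parsePrivateKey decodes the PEM block first so an encrypted key gets
    // a friendly error, then hands the bytes to x/crypto/ssh for parsing.
    func parsePrivateKey(key string) (ssh.AuthMethod, error) {
        block, _ := pem.Decode([]byte(key))
        if block == nil {
            return nil, fmt.Errorf("no PEM key found")
        }
        if block.Headers["Proc-Type"] == "4,ENCRYPTED" {
            return nil, fmt.Errorf("password protected keys are not supported")
        }
        signer, err := ssh.ParsePrivateKey([]byte(key))
        if err != nil {
            return nil, fmt.Errorf("failed to parse key: %s", err)
        }
        return ssh.PublicKeys(signer), nil
    }

    func main() {
        _, err := parsePrivateKey("not a key")
        fmt.Println(err) // no PEM key found
    }
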
+ if connInfo.PrivateKey == "" && connInfo.KeyFile != "" { + connInfo.PrivateKey = connInfo.KeyFile + } + if connInfo.BastionPrivateKey == "" && connInfo.BastionKeyFile != "" { + connInfo.BastionPrivateKey = connInfo.BastionKeyFile + } + // Default all bastion config attrs to their non-bastion counterparts if connInfo.BastionHost != "" { if connInfo.BastionUser == "" { @@ -100,8 +112,8 @@ func parseConnectionInfo(s *terraform.InstanceState) (*connectionInfo, error) { if connInfo.BastionPassword == "" { connInfo.BastionPassword = connInfo.Password } - if connInfo.BastionKeyFile == "" { - connInfo.BastionKeyFile = connInfo.KeyFile + if connInfo.BastionPrivateKey == "" { + connInfo.BastionPrivateKey = connInfo.PrivateKey } if connInfo.BastionPort == 0 { connInfo.BastionPort = connInfo.Port @@ -130,10 +142,10 @@ func prepareSSHConfig(connInfo *connectionInfo) (*sshConfig, error) { } sshConf, err := buildSSHClientConfig(sshClientConfigOpts{ - user: connInfo.User, - keyFile: connInfo.KeyFile, - password: connInfo.Password, - sshAgent: sshAgent, + user: connInfo.User, + privateKey: connInfo.PrivateKey, + password: connInfo.Password, + sshAgent: sshAgent, }) if err != nil { return nil, err @@ -142,10 +154,10 @@ func prepareSSHConfig(connInfo *connectionInfo) (*sshConfig, error) { var bastionConf *ssh.ClientConfig if connInfo.BastionHost != "" { bastionConf, err = buildSSHClientConfig(sshClientConfigOpts{ - user: connInfo.BastionUser, - keyFile: connInfo.BastionKeyFile, - password: connInfo.BastionPassword, - sshAgent: sshAgent, + user: connInfo.BastionUser, + privateKey: connInfo.BastionPrivateKey, + password: connInfo.BastionPassword, + sshAgent: sshAgent, }) if err != nil { return nil, err @@ -169,10 +181,10 @@ func prepareSSHConfig(connInfo *connectionInfo) (*sshConfig, error) { } type sshClientConfigOpts struct { - keyFile string - password string - sshAgent *sshAgent - user string + privateKey string + password string + sshAgent *sshAgent + user string } func buildSSHClientConfig(opts sshClientConfigOpts) (*ssh.ClientConfig, error) { @@ -180,8 +192,8 @@ func buildSSHClientConfig(opts sshClientConfigOpts) (*ssh.ClientConfig, error) { User: opts.user, } - if opts.keyFile != "" { - pubKeyAuth, err := readPublicKeyFromPath(opts.keyFile) + if opts.privateKey != "" { + pubKeyAuth, err := readPrivateKey(opts.privateKey) if err != nil { return nil, err } @@ -201,31 +213,27 @@ func buildSSHClientConfig(opts sshClientConfigOpts) (*ssh.ClientConfig, error) { return conf, nil } -func readPublicKeyFromPath(path string) (ssh.AuthMethod, error) { - fullPath, err := homedir.Expand(path) +func readPrivateKey(pk string) (ssh.AuthMethod, error) { + key, _, err := pathorcontents.Read(pk) if err != nil { - return nil, fmt.Errorf("Failed to expand home directory: %s", err) - } - key, err := ioutil.ReadFile(fullPath) - if err != nil { - return nil, fmt.Errorf("Failed to read key file %q: %s", path, err) + return nil, fmt.Errorf("Failed to read private key %q: %s", pk, err) } // We parse the private key on our own first so that we can // show a nicer error if the private key has a password. - block, _ := pem.Decode(key) + block, _ := pem.Decode([]byte(key)) if block == nil { - return nil, fmt.Errorf("Failed to read key %q: no key found", path) + return nil, fmt.Errorf("Failed to read key %q: no key found", pk) } if block.Headers["Proc-Type"] == "4,ENCRYPTED" { return nil, fmt.Errorf( "Failed to read key %q: password protected keys are\n"+ - "not supported. 
Please decrypt the key prior to use.", path) + "not supported. Please decrypt the key prior to use.", pk) } - signer, err := ssh.ParsePrivateKey(key) + signer, err := ssh.ParsePrivateKey([]byte(key)) if err != nil { - return nil, fmt.Errorf("Failed to parse key file %q: %s", path, err) + return nil, fmt.Errorf("Failed to parse key file %q: %s", pk, err) } return ssh.PublicKeys(signer), nil diff --git a/communicator/ssh/provisioner_test.go b/communicator/ssh/provisioner_test.go index fc6b686fbc..aa029dad86 100644 --- a/communicator/ssh/provisioner_test.go +++ b/communicator/ssh/provisioner_test.go @@ -10,13 +10,13 @@ func TestProvisioner_connInfo(t *testing.T) { r := &terraform.InstanceState{ Ephemeral: terraform.EphemeralState{ ConnInfo: map[string]string{ - "type": "ssh", - "user": "root", - "password": "supersecret", - "key_file": "/my/key/file.pem", - "host": "127.0.0.1", - "port": "22", - "timeout": "30s", + "type": "ssh", + "user": "root", + "password": "supersecret", + "private_key": "someprivatekeycontents", + "host": "127.0.0.1", + "port": "22", + "timeout": "30s", "bastion_host": "127.0.1.1", }, @@ -34,7 +34,7 @@ func TestProvisioner_connInfo(t *testing.T) { if conf.Password != "supersecret" { t.Fatalf("bad: %v", conf) } - if conf.KeyFile != "/my/key/file.pem" { + if conf.PrivateKey != "someprivatekeycontents" { t.Fatalf("bad: %v", conf) } if conf.Host != "127.0.0.1" { @@ -61,7 +61,31 @@ func TestProvisioner_connInfo(t *testing.T) { if conf.BastionPassword != "supersecret" { t.Fatalf("bad: %v", conf) } - if conf.BastionKeyFile != "/my/key/file.pem" { + if conf.BastionPrivateKey != "someprivatekeycontents" { + t.Fatalf("bad: %v", conf) + } +} + +func TestProvisioner_connInfoLegacy(t *testing.T) { + r := &terraform.InstanceState{ + Ephemeral: terraform.EphemeralState{ + ConnInfo: map[string]string{ + "type": "ssh", + "key_file": "/my/key/file.pem", + "bastion_host": "127.0.1.1", + }, + }, + } + + conf, err := parseConnectionInfo(r) + if err != nil { + t.Fatalf("err: %v", err) + } + + if conf.PrivateKey != "/my/key/file.pem" { + t.Fatalf("bad: %v", conf) + } + if conf.BastionPrivateKey != "/my/key/file.pem" { t.Fatalf("bad: %v", conf) } } diff --git a/config/interpolate.go b/config/interpolate.go index af0a84da49..1ccf4b0ebf 100644 --- a/config/interpolate.go +++ b/config/interpolate.go @@ -76,6 +76,13 @@ type SelfVariable struct { key string } +// SimpleVariable is an unprefixed variable, which can show up when users have +// strings they are passing down to resources that use interpolation +// internally. The template_file resource is an example of this. +type SimpleVariable struct { + Key string +} + // A UserVariable is a variable that is referencing a user variable // that is inputted from outside the configuration. 
This looks like // "${var.foo}" @@ -97,6 +104,8 @@ func NewInterpolatedVariable(v string) (InterpolatedVariable, error) { return NewUserVariable(v) } else if strings.HasPrefix(v, "module.") { return NewModuleVariable(v) + } else if !strings.ContainsRune(v, '.') { + return NewSimpleVariable(v) } else { return NewResourceVariable(v) } @@ -227,6 +236,18 @@ func (v *SelfVariable) GoString() string { return fmt.Sprintf("*%#v", *v) } +func NewSimpleVariable(key string) (*SimpleVariable, error) { + return &SimpleVariable{key}, nil +} + +func (v *SimpleVariable) FullKey() string { + return v.Key +} + +func (v *SimpleVariable) GoString() string { + return fmt.Sprintf("*%#v", *v) +} + func NewUserVariable(key string) (*UserVariable, error) { name := key[len("var."):] elem := "" diff --git a/config/interpolate_funcs.go b/config/interpolate_funcs.go index e98ade2f0c..5538763c0c 100644 --- a/config/interpolate_funcs.go +++ b/config/interpolate_funcs.go @@ -25,6 +25,7 @@ func init() { "cidrhost": interpolationFuncCidrHost(), "cidrnetmask": interpolationFuncCidrNetmask(), "cidrsubnet": interpolationFuncCidrSubnet(), + "coalesce": interpolationFuncCoalesce(), "compact": interpolationFuncCompact(), "concat": interpolationFuncConcat(), "element": interpolationFuncElement(), @@ -145,6 +146,30 @@ func interpolationFuncCidrSubnet() ast.Function { } } +// interpolationFuncCoalesce implements the "coalesce" function that +// returns the first non null / empty string from the provided input +func interpolationFuncCoalesce() ast.Function { + return ast.Function{ + ArgTypes: []ast.Type{ast.TypeString}, + ReturnType: ast.TypeString, + Variadic: true, + VariadicType: ast.TypeString, + Callback: func(args []interface{}) (interface{}, error) { + if len(args) < 2 { + return nil, fmt.Errorf("must provide at least two arguments") + } + for _, arg := range args { + argument := arg.(string) + + if argument != "" { + return argument, nil + } + } + return "", nil + }, + } +} + // interpolationFuncConcat implements the "concat" function that // concatenates multiple strings. 
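
Stripped of the ast.Function plumbing, the coalesce callback above is a first-non-empty scan with a minimum-arity check, used from HCL as, e.g., ${coalesce("", var.maybe_empty, "fallback")} (the variable name is illustrative):

    package main

    import "fmt"

    // coalesce mirrors the registered callback: return the first non-empty
    // string, or "" when every argument is empty, and keep the same
    // at-least-two-arguments rule the interpolation function enforces.
    func coalesce(args ...string) (string, error) {
        if len(args) < 2 {
            return "", fmt.Errorf("must provide at least two arguments")
        }
        for _, arg := range args {
            if arg != "" {
                return arg, nil
            }
        }
        return "", nil
    }

    func main() {
        v, _ := coalesce("", "second", "third")
        fmt.Println(v) // second
    }
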
This isn't actually necessary anymore // since our language supports string concat natively, but for backwards diff --git a/config/interpolate_funcs_test.go b/config/interpolate_funcs_test.go index bbfdd484ad..3aeb50db17 100644 --- a/config/interpolate_funcs_test.go +++ b/config/interpolate_funcs_test.go @@ -147,6 +147,33 @@ func TestInterpolateFuncCidrSubnet(t *testing.T) { }) } +func TestInterpolateFuncCoalesce(t *testing.T) { + testFunction(t, testFunctionConfig{ + Cases: []testFunctionCase{ + { + `${coalesce("first", "second", "third")}`, + "first", + false, + }, + { + `${coalesce("", "second", "third")}`, + "second", + false, + }, + { + `${coalesce("", "", "")}`, + "", + false, + }, + { + `${coalesce("foo")}`, + nil, + true, + }, + }, + }) +} + func TestInterpolateFuncDeprecatedConcat(t *testing.T) { testFunction(t, testFunctionConfig{ Cases: []testFunctionCase{ diff --git a/config/lang/ast/type_string.go b/config/lang/ast/type_string.go index d9b5a2df4c..5410e011e1 100644 --- a/config/lang/ast/type_string.go +++ b/config/lang/ast/type_string.go @@ -1,4 +1,4 @@ -// generated by stringer -type=Type; DO NOT EDIT +// Code generated by "stringer -type=Type"; DO NOT EDIT package ast diff --git a/config/loader.go b/config/loader.go index 5711ce8ef8..c9a1295fe1 100644 --- a/config/loader.go +++ b/config/loader.go @@ -25,7 +25,7 @@ func LoadJSON(raw json.RawMessage) (*Config, error) { // Start building the result hclConfig := &hclConfigurable{ - Object: obj, + Root: obj, } return hclConfig.Config() diff --git a/config/loader_hcl.go b/config/loader_hcl.go index f451a31d15..c62ca37314 100644 --- a/config/loader_hcl.go +++ b/config/loader_hcl.go @@ -5,15 +5,15 @@ import ( "io/ioutil" "github.com/hashicorp/hcl" - hclobj "github.com/hashicorp/hcl/hcl" + "github.com/hashicorp/hcl/hcl/ast" "github.com/mitchellh/mapstructure" ) // hclConfigurable is an implementation of configurable that knows // how to turn HCL configuration into a *Config object. 
type hclConfigurable struct { - File string - Object *hclobj.Object + File string + Root *ast.File } func (t *hclConfigurable) Config() (*Config, error) { @@ -36,7 +36,13 @@ func (t *hclConfigurable) Config() (*Config, error) { Variable map[string]*hclVariable } - if err := hcl.DecodeObject(&rawConfig, t.Object); err != nil { + // Top-level item should be the object list + list, ok := t.Root.Node.(*ast.ObjectList) + if !ok { + return nil, fmt.Errorf("error parsing: file doesn't contain a root object") + } + + if err := hcl.DecodeObject(&rawConfig, list); err != nil { return nil, err } @@ -73,7 +79,7 @@ func (t *hclConfigurable) Config() (*Config, error) { } // Get Atlas configuration - if atlas := t.Object.Get("atlas", false); atlas != nil { + if atlas := list.Filter("atlas"); len(atlas.Items) > 0 { var err error config.Atlas, err = loadAtlasHcl(atlas) if err != nil { @@ -82,7 +88,7 @@ func (t *hclConfigurable) Config() (*Config, error) { } // Build the modules - if modules := t.Object.Get("module", false); modules != nil { + if modules := list.Filter("module"); len(modules.Items) > 0 { var err error config.Modules, err = loadModulesHcl(modules) if err != nil { @@ -91,7 +97,7 @@ func (t *hclConfigurable) Config() (*Config, error) { } // Build the provider configs - if providers := t.Object.Get("provider", false); providers != nil { + if providers := list.Filter("provider"); len(providers.Items) > 0 { var err error config.ProviderConfigs, err = loadProvidersHcl(providers) if err != nil { @@ -100,7 +106,7 @@ func (t *hclConfigurable) Config() (*Config, error) { } // Build the resources - if resources := t.Object.Get("resource", false); resources != nil { + if resources := list.Filter("resource"); len(resources.Items) > 0 { var err error config.Resources, err = loadResourcesHcl(resources) if err != nil { @@ -109,7 +115,7 @@ func (t *hclConfigurable) Config() (*Config, error) { } // Build the outputs - if outputs := t.Object.Get("output", false); outputs != nil { + if outputs := list.Filter("output"); len(outputs.Items) > 0 { var err error config.Outputs, err = loadOutputsHcl(outputs) if err != nil { @@ -118,8 +124,13 @@ func (t *hclConfigurable) Config() (*Config, error) { } // Check for invalid keys - for _, elem := range t.Object.Elem(true) { - k := elem.Key + for _, item := range list.Items { + if len(item.Keys) == 0 { + // Not sure how this would happen, but let's avoid a panic + continue + } + + k := item.Keys[0].Token.Value().(string) if _, ok := validKeys[k]; ok { continue } @@ -133,8 +144,6 @@ func (t *hclConfigurable) Config() (*Config, error) { // loadFileHcl is a fileLoaderFunc that knows how to read HCL // files and turn them into hclConfigurables. func loadFileHcl(root string) (configurable, []string, error) { - var obj *hclobj.Object = nil - // Read the HCL file and prepare for parsing d, err := ioutil.ReadFile(root) if err != nil { @@ -143,7 +152,7 @@ func loadFileHcl(root string) (configurable, []string, error) { } // Parse it - obj, err = hcl.Parse(string(d)) + hclRoot, err := hcl.Parse(string(d)) if err != nil { return nil, nil, fmt.Errorf( "Error parsing %s: %s", root, err) @@ -151,8 +160,8 @@ func loadFileHcl(root string) (configurable, []string, error) { // Start building the result result := &hclConfigurable{ - File: root, - Object: obj, + File: root, + Root: hclRoot, } // Dive in, find the imports. 
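
This loader rewrite is the heart of the config changes: hcl.Parse now yields an *ast.File whose root node is an *ast.ObjectList, and block lookup becomes list.Filter(key) rather than Object.Get. A minimal parse-and-filter sketch, using only calls that appear in this diff and reading the Filter/Children semantics off the surrounding code (the sample config is made up):

    package main

    import (
        "fmt"
        "log"

        "github.com/hashicorp/hcl"
        "github.com/hashicorp/hcl/hcl/ast"
    )

    func main() {
        root, err := hcl.Parse(`provider "aws" { region = "us-west-2" }`)
        if err != nil {
            log.Fatal(err)
        }

        // A well-formed file has an *ast.ObjectList at its root; the loader
        // above turns a failed assertion into an error instead of a panic.
        list, ok := root.Node.(*ast.ObjectList)
        if !ok {
            log.Fatal("file doesn't contain a root object")
        }

        // Filter selects items by their leading key and strips it, so after
        // Children() the first remaining key is the block's own name, which
        // is exactly how loadProvidersHcl recovers n above.
        for _, item := range list.Filter("provider").Children().Items {
            fmt.Println(item.Keys[0].Token.Value().(string)) // aws
        }
    }
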
This is disabled for now since @@ -200,9 +209,16 @@ func loadFileHcl(root string) (configurable, []string, error) { // Given a handle to a HCL object, this transforms it into the Atlas // configuration. -func loadAtlasHcl(obj *hclobj.Object) (*AtlasConfig, error) { +func loadAtlasHcl(list *ast.ObjectList) (*AtlasConfig, error) { + if len(list.Items) > 1 { + return nil, fmt.Errorf("only one 'atlas' block allowed") + } + + // Get our one item + item := list.Items[0] + var config AtlasConfig - if err := hcl.DecodeObject(&config, obj); err != nil { + if err := hcl.DecodeObject(&config, item.Val); err != nil { return nil, fmt.Errorf( "Error reading atlas config: %s", err) @@ -217,18 +233,10 @@ func loadAtlasHcl(obj *hclobj.Object) (*AtlasConfig, error) { // The resulting modules may not be unique, but each module // represents exactly one module definition in the HCL configuration. // We leave it up to another pass to merge them together. -func loadModulesHcl(os *hclobj.Object) ([]*Module, error) { - var allNames []*hclobj.Object - - // See loadResourcesHcl for why this exists. Don't touch this. - for _, o1 := range os.Elem(false) { - // Iterate the inner to get the list of types - for _, o2 := range o1.Elem(true) { - // Iterate all of this type to get _all_ the types - for _, o3 := range o2.Elem(false) { - allNames = append(allNames, o3) - } - } +func loadModulesHcl(list *ast.ObjectList) ([]*Module, error) { + list = list.Children() + if len(list.Items) == 0 { + return nil, nil } // Where all the results will go @@ -236,11 +244,18 @@ func loadModulesHcl(os *hclobj.Object) ([]*Module, error) { // Now go over all the types and their children in order to get // all of the actual resources. - for _, obj := range allNames { - k := obj.Key + for _, item := range list.Items { + k := item.Keys[0].Token.Value().(string) + + var listVal *ast.ObjectList + if ot, ok := item.Val.(*ast.ObjectType); ok { + listVal = ot.List + } else { + return nil, fmt.Errorf("module '%s': should be an object", k) + } var config map[string]interface{} - if err := hcl.DecodeObject(&config, obj); err != nil { + if err := hcl.DecodeObject(&config, item.Val); err != nil { return nil, fmt.Errorf( "Error reading config for %s: %s", k, @@ -260,8 +275,8 @@ func loadModulesHcl(os *hclobj.Object) ([]*Module, error) { // If we have a count, then figure it out var source string - if o := obj.Get("source", false); o != nil { - err = hcl.DecodeObject(&source, o) + if o := listVal.Filter("source"); len(o.Items) > 0 { + err = hcl.DecodeObject(&source, o.Items[0].Val) if err != nil { return nil, fmt.Errorf( "Error parsing source for %s: %s", @@ -282,27 +297,19 @@ func loadModulesHcl(os *hclobj.Object) ([]*Module, error) { // LoadOutputsHcl recurses into the given HCL object and turns // it into a mapping of outputs. -func loadOutputsHcl(os *hclobj.Object) ([]*Output, error) { - objects := make(map[string]*hclobj.Object) - - // Iterate over all the "output" blocks and get the keys along with - // their raw configuration objects. We'll parse those later. - for _, o1 := range os.Elem(false) { - for _, o2 := range o1.Elem(true) { - objects[o2.Key] = o2 - } - } - - if len(objects) == 0 { +func loadOutputsHcl(list *ast.ObjectList) ([]*Output, error) { + list = list.Children() + if len(list.Items) == 0 { return nil, nil } // Go through each object and turn it into an actual result. 
- result := make([]*Output, 0, len(objects)) - for n, o := range objects { - var config map[string]interface{} + result := make([]*Output, 0, len(list.Items)) + for _, item := range list.Items { + n := item.Keys[0].Token.Value().(string) - if err := hcl.DecodeObject(&config, o); err != nil { + var config map[string]interface{} + if err := hcl.DecodeObject(&config, item.Val); err != nil { return nil, err } @@ -325,27 +332,26 @@ func loadOutputsHcl(os *hclobj.Object) ([]*Output, error) { // LoadProvidersHcl recurses into the given HCL object and turns // it into a mapping of provider configs. -func loadProvidersHcl(os *hclobj.Object) ([]*ProviderConfig, error) { - var objects []*hclobj.Object - - // Iterate over all the "provider" blocks and get the keys along with - // their raw configuration objects. We'll parse those later. - for _, o1 := range os.Elem(false) { - for _, o2 := range o1.Elem(true) { - objects = append(objects, o2) - } - } - - if len(objects) == 0 { +func loadProvidersHcl(list *ast.ObjectList) ([]*ProviderConfig, error) { + list = list.Children() + if len(list.Items) == 0 { return nil, nil } // Go through each object and turn it into an actual result. - result := make([]*ProviderConfig, 0, len(objects)) - for _, o := range objects { - var config map[string]interface{} + result := make([]*ProviderConfig, 0, len(list.Items)) + for _, item := range list.Items { + n := item.Keys[0].Token.Value().(string) - if err := hcl.DecodeObject(&config, o); err != nil { + var listVal *ast.ObjectList + if ot, ok := item.Val.(*ast.ObjectType); ok { + listVal = ot.List + } else { + return nil, fmt.Errorf("module '%s': should be an object", n) + } + + var config map[string]interface{} + if err := hcl.DecodeObject(&config, item.Val); err != nil { return nil, err } @@ -355,24 +361,24 @@ func loadProvidersHcl(os *hclobj.Object) ([]*ProviderConfig, error) { if err != nil { return nil, fmt.Errorf( "Error reading config for provider config %s: %s", - o.Key, + n, err) } // If we have an alias field, then add those in var alias string - if a := o.Get("alias", false); a != nil { - err := hcl.DecodeObject(&alias, a) + if a := listVal.Filter("alias"); len(a.Items) > 0 { + err := hcl.DecodeObject(&alias, a.Items[0].Val) if err != nil { return nil, fmt.Errorf( "Error reading alias for provider[%s]: %s", - o.Key, + n, err) } } result = append(result, &ProviderConfig{ - Name: o.Key, + Name: n, Alias: alias, RawConfig: rawConfig, }) @@ -387,27 +393,10 @@ func loadProvidersHcl(os *hclobj.Object) ([]*ProviderConfig, error) { // The resulting resources may not be unique, but each resource // represents exactly one resource definition in the HCL configuration. // We leave it up to another pass to merge them together. -func loadResourcesHcl(os *hclobj.Object) ([]*Resource, error) { - var allTypes []*hclobj.Object - - // HCL object iteration is really nasty. Below is likely to make - // no sense to anyone approaching this code. Luckily, it is very heavily - // tested. If working on a bug fix or feature, we recommend writing a - // test first then doing whatever you want to the code below. If you - // break it, the tests will catch it. Likewise, if you change this, - // MAKE SURE you write a test for your change, because its fairly impossible - // to reason about this mess. - // - // Functionally, what the code does below is get the libucl.Objects - // for all the TYPES, such as "aws_security_group". 
- for _, o1 := range os.Elem(false) { - // Iterate the inner to get the list of types - for _, o2 := range o1.Elem(true) { - // Iterate all of this type to get _all_ the types - for _, o3 := range o2.Elem(false) { - allTypes = append(allTypes, o3) - } - } +func loadResourcesHcl(list *ast.ObjectList) ([]*Resource, error) { + list = list.Children() + if len(list.Items) == 0 { + return nil, nil } // Where all the results will go @@ -415,191 +404,178 @@ func loadResourcesHcl(os *hclobj.Object) ([]*Resource, error) { // Now go over all the types and their children in order to get // all of the actual resources. - for _, t := range allTypes { - for _, obj := range t.Elem(true) { - k := obj.Key - - var config map[string]interface{} - if err := hcl.DecodeObject(&config, obj); err != nil { - return nil, fmt.Errorf( - "Error reading config for %s[%s]: %s", - t.Key, - k, - err) - } - - // Remove the fields we handle specially - delete(config, "connection") - delete(config, "count") - delete(config, "depends_on") - delete(config, "provisioner") - delete(config, "provider") - delete(config, "lifecycle") - - rawConfig, err := NewRawConfig(config) - if err != nil { - return nil, fmt.Errorf( - "Error reading config for %s[%s]: %s", - t.Key, - k, - err) - } - - // If we have a count, then figure it out - var count string = "1" - if o := obj.Get("count", false); o != nil { - err = hcl.DecodeObject(&count, o) - if err != nil { - return nil, fmt.Errorf( - "Error parsing count for %s[%s]: %s", - t.Key, - k, - err) - } - } - countConfig, err := NewRawConfig(map[string]interface{}{ - "count": count, - }) - if err != nil { - return nil, err - } - countConfig.Key = "count" - - // If we have depends fields, then add those in - var dependsOn []string - if o := obj.Get("depends_on", false); o != nil { - err := hcl.DecodeObject(&dependsOn, o) - if err != nil { - return nil, fmt.Errorf( - "Error reading depends_on for %s[%s]: %s", - t.Key, - k, - err) - } - } - - // If we have connection info, then parse those out - var connInfo map[string]interface{} - if o := obj.Get("connection", false); o != nil { - err := hcl.DecodeObject(&connInfo, o) - if err != nil { - return nil, fmt.Errorf( - "Error reading connection info for %s[%s]: %s", - t.Key, - k, - err) - } - } - - // If we have provisioners, then parse those out - var provisioners []*Provisioner - if os := obj.Get("provisioner", false); os != nil { - var err error - provisioners, err = loadProvisionersHcl(os, connInfo) - if err != nil { - return nil, fmt.Errorf( - "Error reading provisioners for %s[%s]: %s", - t.Key, - k, - err) - } - } - - // If we have a provider, then parse it out - var provider string - if o := obj.Get("provider", false); o != nil { - err := hcl.DecodeObject(&provider, o) - if err != nil { - return nil, fmt.Errorf( - "Error reading provider for %s[%s]: %s", - t.Key, - k, - err) - } - } - - // Check if the resource should be re-created before - // destroying the existing instance - var lifecycle ResourceLifecycle - if o := obj.Get("lifecycle", false); o != nil { - var raw map[string]interface{} - if err = hcl.DecodeObject(&raw, o); err != nil { - return nil, fmt.Errorf( - "Error parsing lifecycle for %s[%s]: %s", - t.Key, - k, - err) - } - - if err := mapstructure.WeakDecode(raw, &lifecycle); err != nil { - return nil, fmt.Errorf( - "Error parsing lifecycle for %s[%s]: %s", - t.Key, - k, - err) - } - } - - result = append(result, &Resource{ - Name: k, - Type: t.Key, - RawCount: countConfig, - RawConfig: rawConfig, - Provisioners: provisioners, - 
Provider: provider, - DependsOn: dependsOn, - Lifecycle: lifecycle, - }) + for _, item := range list.Items { + if len(item.Keys) != 2 { + // TODO: bad error message + return nil, fmt.Errorf("resource needs exactly 2 names") } + + t := item.Keys[0].Token.Value().(string) + k := item.Keys[1].Token.Value().(string) + + var listVal *ast.ObjectList + if ot, ok := item.Val.(*ast.ObjectType); ok { + listVal = ot.List + } else { + return nil, fmt.Errorf("resources %s[%s]: should be an object", t, k) + } + + var config map[string]interface{} + if err := hcl.DecodeObject(&config, item.Val); err != nil { + return nil, fmt.Errorf( + "Error reading config for %s[%s]: %s", + t, + k, + err) + } + + // Remove the fields we handle specially + delete(config, "connection") + delete(config, "count") + delete(config, "depends_on") + delete(config, "provisioner") + delete(config, "provider") + delete(config, "lifecycle") + + rawConfig, err := NewRawConfig(config) + if err != nil { + return nil, fmt.Errorf( + "Error reading config for %s[%s]: %s", + t, + k, + err) + } + + // If we have a count, then figure it out + var count string = "1" + if o := listVal.Filter("count"); len(o.Items) > 0 { + err = hcl.DecodeObject(&count, o.Items[0].Val) + if err != nil { + return nil, fmt.Errorf( + "Error parsing count for %s[%s]: %s", + t, + k, + err) + } + } + countConfig, err := NewRawConfig(map[string]interface{}{ + "count": count, + }) + if err != nil { + return nil, err + } + countConfig.Key = "count" + + // If we have depends fields, then add those in + var dependsOn []string + if o := listVal.Filter("depends_on"); len(o.Items) > 0 { + err := hcl.DecodeObject(&dependsOn, o.Items[0].Val) + if err != nil { + return nil, fmt.Errorf( + "Error reading depends_on for %s[%s]: %s", + t, + k, + err) + } + } + + // If we have connection info, then parse those out + var connInfo map[string]interface{} + if o := listVal.Filter("connection"); len(o.Items) > 0 { + err := hcl.DecodeObject(&connInfo, o.Items[0].Val) + if err != nil { + return nil, fmt.Errorf( + "Error reading connection info for %s[%s]: %s", + t, + k, + err) + } + } + + // If we have provisioners, then parse those out + var provisioners []*Provisioner + if os := listVal.Filter("provisioner"); len(os.Items) > 0 { + var err error + provisioners, err = loadProvisionersHcl(os, connInfo) + if err != nil { + return nil, fmt.Errorf( + "Error reading provisioners for %s[%s]: %s", + t, + k, + err) + } + } + + // If we have a provider, then parse it out + var provider string + if o := listVal.Filter("provider"); len(o.Items) > 0 { + err := hcl.DecodeObject(&provider, o.Items[0].Val) + if err != nil { + return nil, fmt.Errorf( + "Error reading provider for %s[%s]: %s", + t, + k, + err) + } + } + + // Check if the resource should be re-created before + // destroying the existing instance + var lifecycle ResourceLifecycle + if o := listVal.Filter("lifecycle"); len(o.Items) > 0 { + var raw map[string]interface{} + if err = hcl.DecodeObject(&raw, o.Items[0].Val); err != nil { + return nil, fmt.Errorf( + "Error parsing lifecycle for %s[%s]: %s", + t, + k, + err) + } + + if err := mapstructure.WeakDecode(raw, &lifecycle); err != nil { + return nil, fmt.Errorf( + "Error parsing lifecycle for %s[%s]: %s", + t, + k, + err) + } + } + + result = append(result, &Resource{ + Name: k, + Type: t, + RawCount: countConfig, + RawConfig: rawConfig, + Provisioners: provisioners, + Provider: provider, + DependsOn: dependsOn, + Lifecycle: lifecycle, + }) } return result, nil } -func 
loadProvisionersHcl(os *hclobj.Object, connInfo map[string]interface{}) ([]*Provisioner, error) { - pos := make([]*hclobj.Object, 0, int(os.Len())) - - // Accumulate all the actual provisioner configuration objects. We - // have to iterate twice here: - // - // 1. The first iteration is of the list of `provisioner` blocks. - // 2. The second iteration is of the dictionary within the - // provisioner which will have only one element which is the - // type of provisioner to use along with tis config. - // - // In JSON it looks kind of like this: - // - // [ - // { - // "shell": { - // ... - // } - // } - // ] - // - for _, o1 := range os.Elem(false) { - for _, o2 := range o1.Elem(true) { - - switch o1.Type { - case hclobj.ValueTypeList: - for _, o3 := range o2.Elem(true) { - pos = append(pos, o3) - } - case hclobj.ValueTypeObject: - pos = append(pos, o2) - } - } - } - - // Short-circuit if there are no items - if len(pos) == 0 { +func loadProvisionersHcl(list *ast.ObjectList, connInfo map[string]interface{}) ([]*Provisioner, error) { + list = list.Children() + if len(list.Items) == 0 { return nil, nil } - result := make([]*Provisioner, 0, len(pos)) - for _, po := range pos { + // Go through each object and turn it into an actual result. + result := make([]*Provisioner, 0, len(list.Items)) + for _, item := range list.Items { + n := item.Keys[0].Token.Value().(string) + + var listVal *ast.ObjectList + if ot, ok := item.Val.(*ast.ObjectType); ok { + listVal = ot.List + } else { + return nil, fmt.Errorf("provisioner '%s': should be an object", n) + } + var config map[string]interface{} - if err := hcl.DecodeObject(&config, po); err != nil { + if err := hcl.DecodeObject(&config, item.Val); err != nil { return nil, err } @@ -614,8 +590,8 @@ func loadProvisionersHcl(os *hclobj.Object, connInfo map[string]interface{}) ([] // Check if we have a provisioner-level connection // block that overrides the resource-level var subConnInfo map[string]interface{} - if o := po.Get("connection", false); o != nil { - err := hcl.DecodeObject(&subConnInfo, o) + if o := listVal.Filter("connection"); len(o.Items) > 0 { + err := hcl.DecodeObject(&subConnInfo, o.Items[0].Val) if err != nil { return nil, err } @@ -640,7 +616,7 @@ func loadProvisionersHcl(os *hclobj.Object, connInfo map[string]interface{}) ([] } result = append(result, &Provisioner{ - Type: po.Key, + Type: n, RawConfig: rawConfig, ConnInfo: connRaw, }) diff --git a/config/loader_test.go b/config/loader_test.go index eaf4f10aaa..18b26f9c53 100644 --- a/config/loader_test.go +++ b/config/loader_test.go @@ -45,6 +45,31 @@ func TestLoadFile_badType(t *testing.T) { } } +func TestLoadFileHeredoc(t *testing.T) { + c, err := LoadFile(filepath.Join(fixtureDir, "heredoc.tf")) + if err != nil { + t.Fatalf("err: %s", err) + } + + if c == nil { + t.Fatal("config should not be nil") + } + + if c.Dir != "" { + t.Fatalf("bad: %#v", c.Dir) + } + + actual := providerConfigsStr(c.ProviderConfigs) + if actual != strings.TrimSpace(heredocProvidersStr) { + t.Fatalf("bad:\n%s", actual) + } + + actual = resourcesStr(c.Resources) + if actual != strings.TrimSpace(heredocResourcesStr) { + t.Fatalf("bad:\n%s", actual) + } +} + func TestLoadFileBasic(t *testing.T) { c, err := LoadFile(filepath.Join(fixtureDir, "basic.tf")) if err != nil { @@ -532,6 +557,20 @@ func TestLoad_temporary_files(t *testing.T) { } } +const heredocProvidersStr = ` +aws + access_key + secret_key +` + +const heredocResourcesStr = ` +aws_iam_policy[policy] (x1) + description + name + path + policy +` + const 
basicOutputsStr = ` web_ip vars diff --git a/config/test-fixtures/heredoc.tf b/config/test-fixtures/heredoc.tf new file mode 100644 index 0000000000..b765a58f0c --- /dev/null +++ b/config/test-fixtures/heredoc.tf @@ -0,0 +1,24 @@ +provider "aws" { + access_key = "foo" + secret_key = "bar" +} + +resource "aws_iam_policy" "policy" { + name = "test_policy" + path = "/" + description = "My test policy" + policy = < "))) - } - - for i, m := range msgs { - msgs[i] = fmt.Sprintf("* %s", m) - } - - return fmt.Sprintf( - "The dependency graph is not valid:\n\n%s", - strings.Join(msgs, "\n")) -} - -// ConstraintError is used to return detailed violation -// information from CheckConstraints -type ConstraintError struct { - Violations []*Violation -} - -func (c *ConstraintError) Error() string { - return fmt.Sprintf("%d constraint violations", len(c.Violations)) -} - -// Violation is used to pass along information about -// a constraint violation -type Violation struct { - Source *Noun - Target *Noun - Dependency *Dependency - Constraint Constraint - Err error -} - -func (v *Violation) Error() string { - return fmt.Sprintf("Constraint %v between %v and %v violated: %v", - v.Constraint, v.Source, v.Target, v.Err) -} - -// CheckConstraints walks the graph and ensures that all -// user imposed constraints are satisfied. -func (g *Graph) CheckConstraints() error { - // Ensure we have a root - if g.Root == nil { - return fmt.Errorf("Graph must be validated before checking constraint violations") - } - - // Create a constraint error - cErr := &ConstraintError{} - - // Walk from the root - digraph.DepthFirstWalk(g.Root, func(n digraph.Node) bool { - noun := n.(*Noun) - for _, dep := range noun.Deps { - target := dep.Target - for _, constraint := range dep.Constraints { - ok, err := constraint.Satisfied(noun, target) - if ok { - continue - } - violation := &Violation{ - Source: noun, - Target: target, - Dependency: dep, - Constraint: constraint, - Err: err, - } - cErr.Violations = append(cErr.Violations, violation) - } - } - return true - }) - - if cErr.Violations != nil { - return cErr - } - return nil -} - -// Noun returns the noun with the given name, or nil if it cannot be found. -func (g *Graph) Noun(name string) *Noun { - for _, n := range g.Nouns { - if n.Name == name { - return n - } - } - - return nil -} - -// String generates a little ASCII string of the graph, useful in -// debugging output. -func (g *Graph) String() string { - var buf bytes.Buffer - - // Alphabetize the output based on the noun name - keys := make([]string, 0, len(g.Nouns)) - mapping := make(map[string]*Noun) - for _, n := range g.Nouns { - mapping[n.Name] = n - keys = append(keys, n.Name) - } - sort.Strings(keys) - - if g.Root != nil { - buf.WriteString(fmt.Sprintf("root: %s\n", g.Root.Name)) - } else { - buf.WriteString("root: \n") - } - for _, k := range keys { - n := mapping[k] - buf.WriteString(fmt.Sprintf("%s\n", n.Name)) - - // Alphabetize the dependency names - depKeys := make([]string, 0, len(n.Deps)) - depMapping := make(map[string]*Dependency) - for _, d := range n.Deps { - depMapping[d.Target.Name] = d - depKeys = append(depKeys, d.Target.Name) - } - sort.Strings(depKeys) - - for _, k := range depKeys { - dep := depMapping[k] - buf.WriteString(fmt.Sprintf( - " %s -> %s\n", - dep.Source, - dep.Target)) - } - } - - return buf.String() -} - -// Validate is used to ensure that a few properties of the graph are not violated: -// 1) There must be a single "root", or source on which nothing depends. 
-// 2) All nouns in the graph must be reachable from the root -// 3) The graph must be cycle free, meaning there are no cicular dependencies -func (g *Graph) Validate() error { - // Convert to node list - nodes := make([]digraph.Node, len(g.Nouns)) - for i, n := range g.Nouns { - nodes[i] = n - } - - // Create a validate erro - vErr := &ValidateError{} - - // Search for all the sources, if we have only 1, it must be the root - if sources := digraph.Sources(nodes); len(sources) != 1 { - vErr.MissingRoot = true - goto CHECK_CYCLES - } else { - g.Root = sources[0].(*Noun) - } - - // Check reachability - if unreached := digraph.Unreachable(g.Root, nodes); len(unreached) > 0 { - vErr.Unreachable = make([]*Noun, len(unreached)) - for i, u := range unreached { - vErr.Unreachable[i] = u.(*Noun) - } - } - -CHECK_CYCLES: - // Check for cycles - if cycles := digraph.StronglyConnectedComponents(nodes, true); len(cycles) > 0 { - vErr.Cycles = make([][]*Noun, len(cycles)) - for i, cycle := range cycles { - group := make([]*Noun, len(cycle)) - for j, n := range cycle { - group[j] = n.(*Noun) - } - vErr.Cycles[i] = group - } - } - - // Check for loops to yourself - for _, n := range g.Nouns { - for _, d := range n.Deps { - if d.Source == d.Target { - vErr.Cycles = append(vErr.Cycles, []*Noun{n}) - } - } - } - - // Return the detailed error - if vErr.MissingRoot || vErr.Unreachable != nil || vErr.Cycles != nil { - return vErr - } - return nil -} - -// Walk will walk the tree depth-first (dependency first) and call -// the callback. -// -// The callbacks will be called in parallel, so if you need non-parallelism, -// then introduce a lock in your callback. -func (g *Graph) Walk(fn WalkFunc) error { - // Set so we don't callback for a single noun multiple times - var seenMapL sync.RWMutex - seenMap := make(map[*Noun]chan struct{}) - seenMap[g.Root] = make(chan struct{}) - - // Keep track of what nodes errored. - var errMapL sync.RWMutex - errMap := make(map[*Noun]struct{}) - - // Build the list of things to visit - tovisit := make([]*Noun, 1, len(g.Nouns)) - tovisit[0] = g.Root - - // Spawn off all our goroutines to walk the tree - errCh := make(chan error) - for len(tovisit) > 0 { - // Grab the current thing to use - n := len(tovisit) - current := tovisit[n-1] - tovisit = tovisit[:n-1] - - // Go through each dependency and run that first - for _, dep := range current.Deps { - if _, ok := seenMap[dep.Target]; !ok { - seenMapL.Lock() - seenMap[dep.Target] = make(chan struct{}) - seenMapL.Unlock() - tovisit = append(tovisit, dep.Target) - } - } - - // Spawn off a goroutine to execute our callback once - // all our dependencies are satisfied. - go func(current *Noun) { - seenMapL.RLock() - closeCh := seenMap[current] - seenMapL.RUnlock() - - defer close(closeCh) - - // Wait for all our dependencies - for _, dep := range current.Deps { - seenMapL.RLock() - ch := seenMap[dep.Target] - seenMapL.RUnlock() - - // Wait for the dep to be run - <-ch - - // Check if any dependencies errored. If so, - // then return right away, we won't walk it. - errMapL.RLock() - _, errOk := errMap[dep.Target] - errMapL.RUnlock() - if errOk { - return - } - } - - // Call our callback! 
- if err := fn(current); err != nil { - errMapL.Lock() - errMap[current] = struct{}{} - errMapL.Unlock() - - errCh <- err - } - }(current) - } - - // Aggregate channel that is closed when all goroutines finish - doneCh := make(chan struct{}) - go func() { - defer close(doneCh) - - for _, ch := range seenMap { - <-ch - } - }() - - // Wait for finish OR an error - select { - case <-doneCh: - return nil - case err := <-errCh: - // Drain the error channel - go func() { - for _ = range errCh { - // Nothing - } - }() - - // Wait for the goroutines to end - <-doneCh - close(errCh) - - return err - } -} - -// DependsOn returns the set of nouns that have a -// dependency on a given noun. This can be used to find -// the incoming edges to a noun. -func (g *Graph) DependsOn(n *Noun) []*Noun { - var incoming []*Noun -OUTER: - for _, other := range g.Nouns { - if other == n { - continue - } - for _, d := range other.Deps { - if d.Target == n { - incoming = append(incoming, other) - continue OUTER - } - } - } - return incoming -} diff --git a/depgraph/graph_test.go b/depgraph/graph_test.go deleted file mode 100644 index c883b1417c..0000000000 --- a/depgraph/graph_test.go +++ /dev/null @@ -1,467 +0,0 @@ -package depgraph - -import ( - "fmt" - "reflect" - "sort" - "strings" - "sync" - "testing" -) - -// ParseNouns is used to parse a string in the format of: -// a -> b ; edge name -// b -> c -// Into a series of nouns and dependencies -func ParseNouns(s string) map[string]*Noun { - lines := strings.Split(s, "\n") - nodes := make(map[string]*Noun) - for _, line := range lines { - var edgeName string - if idx := strings.Index(line, ";"); idx >= 0 { - edgeName = strings.Trim(line[idx+1:], " \t\r\n") - line = line[:idx] - } - parts := strings.SplitN(line, "->", 2) - if len(parts) != 2 { - continue - } - head_name := strings.Trim(parts[0], " \t\r\n") - tail_name := strings.Trim(parts[1], " \t\r\n") - head := nodes[head_name] - if head == nil { - head = &Noun{Name: head_name} - nodes[head_name] = head - } - tail := nodes[tail_name] - if tail == nil { - tail = &Noun{Name: tail_name} - nodes[tail_name] = tail - } - edge := &Dependency{ - Name: edgeName, - Source: head, - Target: tail, - } - head.Deps = append(head.Deps, edge) - } - return nodes -} - -func NounMapToList(m map[string]*Noun) []*Noun { - list := make([]*Noun, 0, len(m)) - for _, n := range m { - list = append(list, n) - } - return list -} - -func TestGraph_Noun(t *testing.T) { - nodes := ParseNouns(`a -> b -a -> c -b -> d -b -> e -c -> d -c -> e`) - - g := &Graph{ - Name: "Test", - Nouns: NounMapToList(nodes), - } - - n := g.Noun("a") - if n == nil { - t.Fatal("should not be nil") - } - if n.Name != "a" { - t.Fatalf("bad: %#v", n) - } -} - -func TestGraph_String(t *testing.T) { - nodes := ParseNouns(`a -> b -a -> c -b -> d -b -> e -c -> d -c -> e`) - - g := &Graph{ - Name: "Test", - Nouns: NounMapToList(nodes), - Root: nodes["a"], - } - actual := g.String() - - expected := ` -root: a -a - a -> b - a -> c -b - b -> d - b -> e -c - c -> d - c -> e -d -e -` - - actual = strings.TrimSpace(actual) - expected = strings.TrimSpace(expected) - if actual != expected { - t.Fatalf("bad:\n%s\n!=\n%s", actual, expected) - } -} - -func TestGraph_Validate(t *testing.T) { - nodes := ParseNouns(`a -> b -a -> c -b -> d -b -> e -c -> d -c -> e`) - list := NounMapToList(nodes) - - g := &Graph{Name: "Test", Nouns: list} - if err := g.Validate(); err != nil { - t.Fatalf("err: %v", err) - } -} - -func TestGraph_Validate_Cycle(t *testing.T) { - nodes := ParseNouns(`a -> b -a 
-> c -b -> d -d -> b`) - list := NounMapToList(nodes) - - g := &Graph{Name: "Test", Nouns: list} - err := g.Validate() - if err == nil { - t.Fatalf("expected err") - } - - vErr, ok := err.(*ValidateError) - if !ok { - t.Fatalf("expected validate error") - } - - if len(vErr.Cycles) != 1 { - t.Fatalf("expected cycles") - } - - cycle := vErr.Cycles[0] - cycleNodes := make([]string, len(cycle)) - for i, c := range cycle { - cycleNodes[i] = c.Name - } - sort.Strings(cycleNodes) - - if cycleNodes[0] != "b" { - t.Fatalf("bad: %v", cycle) - } - if cycleNodes[1] != "d" { - t.Fatalf("bad: %v", cycle) - } -} - -func TestGraph_Validate_MultiRoot(t *testing.T) { - nodes := ParseNouns(`a -> b -c -> d`) - list := NounMapToList(nodes) - - g := &Graph{Name: "Test", Nouns: list} - err := g.Validate() - if err == nil { - t.Fatalf("expected err") - } - - vErr, ok := err.(*ValidateError) - if !ok { - t.Fatalf("expected validate error") - } - - if !vErr.MissingRoot { - t.Fatalf("expected missing root") - } -} - -func TestGraph_Validate_NoRoot(t *testing.T) { - nodes := ParseNouns(`a -> b -b -> a`) - list := NounMapToList(nodes) - - g := &Graph{Name: "Test", Nouns: list} - err := g.Validate() - if err == nil { - t.Fatalf("expected err") - } - - vErr, ok := err.(*ValidateError) - if !ok { - t.Fatalf("expected validate error") - } - - if !vErr.MissingRoot { - t.Fatalf("expected missing root") - } -} - -func TestGraph_Validate_Unreachable(t *testing.T) { - nodes := ParseNouns(`a -> b -a -> c -b -> d -x -> x`) - list := NounMapToList(nodes) - - g := &Graph{Name: "Test", Nouns: list} - err := g.Validate() - if err == nil { - t.Fatalf("expected err") - } - - vErr, ok := err.(*ValidateError) - if !ok { - t.Fatalf("expected validate error") - } - - if len(vErr.Unreachable) != 1 { - t.Fatalf("expected unreachable") - } - - if vErr.Unreachable[0].Name != "x" { - t.Fatalf("bad: %v", vErr.Unreachable[0]) - } -} - -type VersionMeta int -type VersionConstraint struct { - Min int - Max int -} - -func (v *VersionConstraint) Satisfied(head, tail *Noun) (bool, error) { - vers := int(tail.Meta.(VersionMeta)) - if vers < v.Min { - return false, fmt.Errorf("version %d below minimum %d", - vers, v.Min) - } else if vers > v.Max { - return false, fmt.Errorf("version %d above maximum %d", - vers, v.Max) - } - return true, nil -} - -func (v *VersionConstraint) String() string { - return "version" -} - -func TestGraph_ConstraintViolation(t *testing.T) { - nodes := ParseNouns(`a -> b -a -> c -b -> d -b -> e -c -> d -c -> e`) - list := NounMapToList(nodes) - - // Add a version constraint - vers := &VersionConstraint{1, 3} - - // Introduce some constraints - depB := nodes["a"].Deps[0] - depB.Constraints = []Constraint{vers} - depC := nodes["a"].Deps[1] - depC.Constraints = []Constraint{vers} - - // Add some versions - nodes["b"].Meta = VersionMeta(0) - nodes["c"].Meta = VersionMeta(4) - - g := &Graph{Name: "Test", Nouns: list} - err := g.Validate() - if err != nil { - t.Fatalf("err: %v", err) - } - - err = g.CheckConstraints() - if err == nil { - t.Fatalf("Expected err") - } - - cErr, ok := err.(*ConstraintError) - if !ok { - t.Fatalf("expected constraint error") - } - - if len(cErr.Violations) != 2 { - t.Fatalf("expected 2 violations: %v", cErr) - } - - if cErr.Violations[0].Error() != "Constraint version between a and b violated: version 0 below minimum 1" { - t.Fatalf("err: %v", cErr.Violations[0]) - } - - if cErr.Violations[1].Error() != "Constraint version between a and c violated: version 4 above maximum 3" { - t.Fatalf("err: %v", 
cErr.Violations[1]) - } -} - -func TestGraph_Constraint_NoViolation(t *testing.T) { - nodes := ParseNouns(`a -> b -a -> c -b -> d -b -> e -c -> d -c -> e`) - list := NounMapToList(nodes) - - // Add a version constraint - vers := &VersionConstraint{1, 3} - - // Introduce some constraints - depB := nodes["a"].Deps[0] - depB.Constraints = []Constraint{vers} - depC := nodes["a"].Deps[1] - depC.Constraints = []Constraint{vers} - - // Add some versions - nodes["b"].Meta = VersionMeta(2) - nodes["c"].Meta = VersionMeta(3) - - g := &Graph{Name: "Test", Nouns: list} - err := g.Validate() - if err != nil { - t.Fatalf("err: %v", err) - } - - err = g.CheckConstraints() - if err != nil { - t.Fatalf("err: %v", err) - } -} - -func TestGraphWalk(t *testing.T) { - nodes := ParseNouns(`a -> b -a -> c -b -> d -b -> e -c -> d -c -> e`) - list := NounMapToList(nodes) - g := &Graph{Name: "Test", Nouns: list} - if err := g.Validate(); err != nil { - t.Fatalf("err: %s", err) - } - - var namesLock sync.Mutex - names := make([]string, 0, 0) - err := g.Walk(func(n *Noun) error { - namesLock.Lock() - defer namesLock.Unlock() - names = append(names, n.Name) - return nil - }) - if err != nil { - t.Fatalf("err: %s", err) - } - - expected := [][]string{ - {"e", "d", "c", "b", "a"}, - {"e", "d", "b", "c", "a"}, - {"d", "e", "c", "b", "a"}, - {"d", "e", "b", "c", "a"}, - } - found := false - for _, expect := range expected { - if reflect.DeepEqual(expect, names) { - found = true - break - } - } - if !found { - t.Fatalf("bad: %#v", names) - } -} - -func TestGraphWalk_error(t *testing.T) { - nodes := ParseNouns(`a -> b -b -> c -a -> d -a -> e -e -> f -f -> g -g -> h`) - list := NounMapToList(nodes) - g := &Graph{Name: "Test", Nouns: list} - if err := g.Validate(); err != nil { - t.Fatalf("err: %s", err) - } - - // We repeat this a lot because sometimes timing causes - // a false positive. - for i := 0; i < 100; i++ { - var lock sync.Mutex - var walked []string - err := g.Walk(func(n *Noun) error { - lock.Lock() - defer lock.Unlock() - - walked = append(walked, n.Name) - - if n.Name == "b" { - return fmt.Errorf("foo") - } - - return nil - }) - if err == nil { - t.Fatal("should error") - } - - sort.Strings(walked) - - expected := []string{"b", "c", "d", "e", "f", "g", "h"} - if !reflect.DeepEqual(walked, expected) { - t.Fatalf("bad: %#v", walked) - } - } -} - -func TestGraph_DependsOn(t *testing.T) { - nodes := ParseNouns(`a -> b -a -> c -b -> d -b -> e -c -> d -c -> e`) - - g := &Graph{ - Name: "Test", - Nouns: NounMapToList(nodes), - } - - dNoun := g.Noun("d") - incoming := g.DependsOn(dNoun) - - if len(incoming) != 2 { - t.Fatalf("bad: %#v", incoming) - } - - var hasB, hasC bool - for _, in := range incoming { - switch in.Name { - case "b": - hasB = true - case "c": - hasC = true - default: - t.Fatalf("Bad: %#v", in) - } - } - if !hasB || !hasC { - t.Fatalf("missing incoming edge") - } -} diff --git a/depgraph/noun.go b/depgraph/noun.go deleted file mode 100644 index 8f14adfe10..0000000000 --- a/depgraph/noun.go +++ /dev/null @@ -1,33 +0,0 @@ -package depgraph - -import ( - "fmt" - - "github.com/hashicorp/terraform/digraph" -) - -// Nouns are the key structure of the dependency graph. They can -// be used to represent all objects in the graph. They are linked -// by depedencies. 
-type Noun struct { - Name string // Opaque name - Meta interface{} - Deps []*Dependency -} - -// Edges returns the out-going edges of a Noun -func (n *Noun) Edges() []digraph.Edge { - edges := make([]digraph.Edge, len(n.Deps)) - for idx, dep := range n.Deps { - edges[idx] = dep - } - return edges -} - -func (n *Noun) GoString() string { - return fmt.Sprintf("*%#v", *n) -} - -func (n *Noun) String() string { - return n.Name -} diff --git a/examples/aws-s3-cross-account-access/main.tf b/examples/aws-s3-cross-account-access/main.tf index 5cc9b9638b..07ec30a27b 100644 --- a/examples/aws-s3-cross-account-access/main.tf +++ b/examples/aws-s3-cross-account-access/main.tf @@ -13,7 +13,7 @@ resource "aws_s3_bucket" "prod" { acl = "private" policy = < 1 && rawURL[1] == ':' { - // Assume we're dealing with a drive letter file path where the drive - // letter has been parsed into the URL Scheme, and the rest of the path - // has been parsed into the URL Path without the leading ':' character. - u.Path = fmt.Sprintf("%s:%s", string(rawURL[0]), u.Path) - u.Scheme = "" - } - - if len(u.Host) > 1 && u.Host[1] == ':' && strings.HasPrefix(rawURL, "file://") { - // Assume we're dealing with a drive letter file path where the drive - // letter has been parsed into the URL Host. - u.Path = fmt.Sprintf("%s%s", u.Host, u.Path) - u.Host = "" - } - - // Remove leading slash for absolute file paths. - if len(u.Path) > 2 && u.Path[0] == '/' && u.Path[2] == ':' { - u.Path = u.Path[1:] - } - - return u, err -} diff --git a/scripts/website_push.sh b/scripts/website_push.sh index fa58fd694a..53ed59777c 100755 --- a/scripts/website_push.sh +++ b/scripts/website_push.sh @@ -1,5 +1,8 @@ #!/bin/bash +# Switch to the stable-website branch +git checkout stable-website + # Set the tmpdir if [ -z "$TMPDIR" ]; then TMPDIR="/tmp" diff --git a/terraform/context_apply_test.go b/terraform/context_apply_test.go index 1fd069db08..4060dd3e37 100644 --- a/terraform/context_apply_test.go +++ b/terraform/context_apply_test.go @@ -2851,6 +2851,55 @@ func TestContext2Apply_outputInvalid(t *testing.T) { } } +func TestContext2Apply_outputAdd(t *testing.T) { + m1 := testModule(t, "apply-output-add-before") + p1 := testProvider("aws") + p1.ApplyFn = testApplyFn + p1.DiffFn = testDiffFn + ctx1 := testContext2(t, &ContextOpts{ + Module: m1, + Providers: map[string]ResourceProviderFactory{ + "aws": testProviderFuncFixed(p1), + }, + }) + + if _, err := ctx1.Plan(); err != nil { + t.Fatalf("err: %s", err) + } + + state1, err := ctx1.Apply() + if err != nil { + t.Fatalf("err: %s", err) + } + + m2 := testModule(t, "apply-output-add-after") + p2 := testProvider("aws") + p2.ApplyFn = testApplyFn + p2.DiffFn = testDiffFn + ctx2 := testContext2(t, &ContextOpts{ + Module: m2, + Providers: map[string]ResourceProviderFactory{ + "aws": testProviderFuncFixed(p2), + }, + State: state1, + }) + + if _, err := ctx2.Plan(); err != nil { + t.Fatalf("err: %s", err) + } + + state2, err := ctx2.Apply() + if err != nil { + t.Fatalf("err: %s", err) + } + + actual := strings.TrimSpace(state2.String()) + expected := strings.TrimSpace(testTerraformApplyOutputAddStr) + if actual != expected { + t.Fatalf("bad: \n%s", actual) + } +} + func TestContext2Apply_outputList(t *testing.T) { m := testModule(t, "apply-output-list") p := testProvider("aws") diff --git a/terraform/context_plan_test.go b/terraform/context_plan_test.go index db6f245772..e91fc77472 100644 --- a/terraform/context_plan_test.go +++ b/terraform/context_plan_test.go @@ -1627,6 +1627,53 @@ STATE: } } +func 
TestContext2Plan_targetedOrphan(t *testing.T) { + m := testModule(t, "plan-targeted-orphan") + p := testProvider("aws") + p.DiffFn = testDiffFn + ctx := testContext2(t, &ContextOpts{ + Module: m, + Providers: map[string]ResourceProviderFactory{ + "aws": testProviderFuncFixed(p), + }, + State: &State{ + Modules: []*ModuleState{ + &ModuleState{ + Path: rootModulePath, + Resources: map[string]*ResourceState{ + "aws_instance.orphan": &ResourceState{ + Type: "aws_instance", + Primary: &InstanceState{ + ID: "i-789xyz", + }, + }, + }, + }, + }, + }, + Destroy: true, + Targets: []string{"aws_instance.orphan"}, + }) + + plan, err := ctx.Plan() + if err != nil { + t.Fatalf("err: %s", err) + } + + actual := strings.TrimSpace(plan.String()) + expected := strings.TrimSpace(`DIFF: + +DESTROY: aws_instance.orphan + +STATE: + +aws_instance.orphan: + ID = i-789xyz`) + if actual != expected { + t.Fatalf("expected:\n%s\n\ngot:\n%s", expected, actual) + } +} + func TestContext2Plan_provider(t *testing.T) { m := testModule(t, "plan-provider") p := testProvider("aws") diff --git a/terraform/graph_builder.go b/terraform/graph_builder.go index ca99667016..7963bcbf4f 100644 --- a/terraform/graph_builder.go +++ b/terraform/graph_builder.go @@ -105,9 +105,8 @@ func (b *BuiltinGraphBuilder) Steps(path []string) []GraphTransformer { // Create all our resources from the configuration and state &ConfigTransformer{Module: b.Root}, &OrphanTransformer{ - State: b.State, - Module: b.Root, - Targeting: len(b.Targets) > 0, + State: b.State, + Module: b.Root, }, // Output-related transformations diff --git a/terraform/graph_config_node_resource.go b/terraform/graph_config_node_resource.go index 2bf0e4568a..9fc696c2a3 100644 --- a/terraform/graph_config_node_resource.go +++ b/terraform/graph_config_node_resource.go @@ -163,9 +163,9 @@ func (n *GraphNodeConfigResource) DynamicExpand(ctx EvalContext) (*Graph, error) // expand orphans, which have all the same semantics in a destroy // as a primary. 
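+	// Passing the concrete target addresses through (rather than the old
+	// boolean Targeting flag) lets the orphan transformer keep orphans
+	// that are themselves targeted, instead of skipping orphan handling
+	// for the whole run.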
steps = append(steps, &OrphanTransformer{ - State: state, - View: n.Resource.Id(), - Targeting: len(n.Targets) > 0, + State: state, + View: n.Resource.Id(), + Targets: n.Targets, }) steps = append(steps, &DeposedTransformer{ diff --git a/terraform/graphnodeconfigtype_string.go b/terraform/graphnodeconfigtype_string.go index d8c1724f47..9ea0acbebe 100644 --- a/terraform/graphnodeconfigtype_string.go +++ b/terraform/graphnodeconfigtype_string.go @@ -1,4 +1,4 @@ -// generated by stringer -type=GraphNodeConfigType graph_config_node_type.go; DO NOT EDIT +// Code generated by "stringer -type=GraphNodeConfigType graph_config_node_type.go"; DO NOT EDIT package terraform diff --git a/terraform/instancetype_string.go b/terraform/instancetype_string.go index 3114bc1571..f65414b347 100644 --- a/terraform/instancetype_string.go +++ b/terraform/instancetype_string.go @@ -1,4 +1,4 @@ -// generated by stringer -type=InstanceType instancetype.go; DO NOT EDIT +// Code generated by "stringer -type=InstanceType instancetype.go"; DO NOT EDIT package terraform diff --git a/terraform/interpolate.go b/terraform/interpolate.go index 31c366eabc..0ee61901ce 100644 --- a/terraform/interpolate.go +++ b/terraform/interpolate.go @@ -73,6 +73,8 @@ func (i *Interpolater) Values( err = i.valueResourceVar(scope, n, v, result) case *config.SelfVariable: err = i.valueSelfVar(scope, n, v, result) + case *config.SimpleVariable: + err = i.valueSimpleVar(scope, n, v, result) case *config.UserVariable: err = i.valueUserVar(scope, n, v, result) default: @@ -249,6 +251,19 @@ func (i *Interpolater) valueSelfVar( return i.valueResourceVar(scope, n, rv, result) } +func (i *Interpolater) valueSimpleVar( + scope *InterpolationScope, + n string, + v *config.SimpleVariable, + result map[string]ast.Variable) error { + // SimpleVars are never handled by Terraform's interpolator + result[n] = ast.Variable{ + Value: config.UnknownVariableValue, + Type: ast.TypeString, + } + return nil +} + func (i *Interpolater) valueUserVar( scope *InterpolationScope, n string, diff --git a/terraform/terraform_test.go b/terraform/terraform_test.go index d17726acb4..3b1653f431 100644 --- a/terraform/terraform_test.go +++ b/terraform/terraform_test.go @@ -575,6 +575,22 @@ Outputs: foo_num = 2 ` +const testTerraformApplyOutputAddStr = ` +aws_instance.test.0: + ID = foo + foo = foo0 + type = aws_instance +aws_instance.test.1: + ID = foo + foo = foo1 + type = aws_instance + +Outputs: + +firstOutput = foo0 +secondOutput = foo1 +` + const testTerraformApplyOutputListStr = ` aws_instance.bar.0: ID = foo diff --git a/terraform/test-fixtures/apply-output-add-after/main.tf b/terraform/test-fixtures/apply-output-add-after/main.tf new file mode 100644 index 0000000000..1c10eaafc5 --- /dev/null +++ b/terraform/test-fixtures/apply-output-add-after/main.tf @@ -0,0 +1,6 @@ +provider "aws" {} + +resource "aws_instance" "test" { + foo = "${format("foo%d", count.index)}" + count = 2 +} diff --git a/terraform/test-fixtures/apply-output-add-after/outputs.tf.json b/terraform/test-fixtures/apply-output-add-after/outputs.tf.json new file mode 100644 index 0000000000..32e96b0ee0 --- /dev/null +++ b/terraform/test-fixtures/apply-output-add-after/outputs.tf.json @@ -0,0 +1,10 @@ +{ + "output": { + "firstOutput": { + "value": "${aws_instance.test.0.foo}" + }, + "secondOutput": { + "value": "${aws_instance.test.1.foo}" + } + } +} diff --git a/terraform/test-fixtures/apply-output-add-before/main.tf b/terraform/test-fixtures/apply-output-add-before/main.tf new file mode 100644 index 
0000000000..1c10eaafc5 --- /dev/null +++ b/terraform/test-fixtures/apply-output-add-before/main.tf @@ -0,0 +1,6 @@ +provider "aws" {} + +resource "aws_instance" "test" { + foo = "${format("foo%d", count.index)}" + count = 2 +} diff --git a/terraform/test-fixtures/apply-output-add-before/outputs.tf.json b/terraform/test-fixtures/apply-output-add-before/outputs.tf.json new file mode 100644 index 0000000000..238668ef3d --- /dev/null +++ b/terraform/test-fixtures/apply-output-add-before/outputs.tf.json @@ -0,0 +1,7 @@ +{ + "output": { + "firstOutput": { + "value": "${aws_instance.test.0.foo}" + } + } +} diff --git a/terraform/test-fixtures/plan-targeted-orphan/main.tf b/terraform/test-fixtures/plan-targeted-orphan/main.tf new file mode 100644 index 0000000000..f2020858b1 --- /dev/null +++ b/terraform/test-fixtures/plan-targeted-orphan/main.tf @@ -0,0 +1,6 @@ +# This resource was previously "created" and the fixture represents +# it being destroyed subsequently + +/*resource "aws_instance" "orphan" {*/ + /*foo = "bar"*/ +/*}*/ diff --git a/terraform/transform_orphan.go b/terraform/transform_orphan.go index 45ea050ba3..13e8fbf941 100644 --- a/terraform/transform_orphan.go +++ b/terraform/transform_orphan.go @@ -2,7 +2,7 @@ package terraform import ( "fmt" - "log" + "strings" "github.com/hashicorp/terraform/config" "github.com/hashicorp/terraform/config/module" @@ -29,7 +29,7 @@ type OrphanTransformer struct { // Targets are user-specified resources to target. We need to be aware of // these so we don't improperly identify orphans when they've just been // filtered out of the graph via targeting. - Targeting bool + Targets []ResourceAddress // View, if non-nil will set a view on the module state. View string @@ -41,13 +41,6 @@ func (t *OrphanTransformer) Transform(g *Graph) error { return nil } - if t.Targeting { - log.Printf("Skipping orphan transformer because we have targets.") - // If we are in a run where we are targeting nodes, we won't process - // orphans for this run. - return nil - } - // Build up all our state representatives resourceRep := make(map[string]struct{}) for _, v := range g.Vertices() { @@ -74,8 +67,24 @@ func (t *OrphanTransformer) Transform(g *Graph) error { state = state.View(t.View) } - // Go over each resource orphan and add it to the graph. 
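+	// If this run was given explicit targets, the orphans are filtered
+	// below by string prefix on their state keys: each target address is
+	// rendered as "TYPE.NAME.INDEX" (e.g. "aws_instance.orphan.0") and
+	// only orphan keys beginning with one of those prefixes are kept.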
resourceOrphans := state.Orphans(config) + if len(t.Targets) > 0 { + var targetedOrphans []string + for _, o := range resourceOrphans { + targeted := false + for _, t := range t.Targets { + prefix := fmt.Sprintf("%s.%s.%d", t.Type, t.Name, t.Index) + if strings.HasPrefix(o, prefix) { + targeted = true + } + } + if targeted { + targetedOrphans = append(targetedOrphans, o) + } + } + resourceOrphans = targetedOrphans + } + resourceVertexes = make([]dag.Vertex, len(resourceOrphans)) for i, k := range resourceOrphans { // If this orphan is represented by some other node somehow, @@ -173,6 +182,10 @@ type graphNodeOrphanResource struct { dependentOn []string } +func (n *graphNodeOrphanResource) ResourceAddress() *ResourceAddress { + return n.ResourceAddress() +} + func (n *graphNodeOrphanResource) DependableName() []string { return []string{n.dependableName()} } diff --git a/terraform/walkoperation_string.go b/terraform/walkoperation_string.go index 1ce3661c49..0811fc8744 100644 --- a/terraform/walkoperation_string.go +++ b/terraform/walkoperation_string.go @@ -1,4 +1,4 @@ -// generated by stringer -type=walkOperation graph_walk_operation.go; DO NOT EDIT +// Code generated by "stringer -type=walkOperation graph_walk_operation.go"; DO NOT EDIT package terraform diff --git a/website/Gemfile.lock b/website/Gemfile.lock index 7034f311e8..725b16df37 100644 --- a/website/Gemfile.lock +++ b/website/Gemfile.lock @@ -186,6 +186,3 @@ PLATFORMS DEPENDENCIES middleman-hashicorp! - -BUNDLED WITH - 1.10.6 diff --git a/website/source/assets/images/logo-header-black@2x.png b/website/source/assets/images/logo-header-black@2x.png new file mode 100644 index 0000000000..74521a6646 Binary files /dev/null and b/website/source/assets/images/logo-header-black@2x.png differ diff --git a/website/source/assets/images/header-logo.png b/website/source/assets/images/logo-header.png similarity index 100% rename from website/source/assets/images/header-logo.png rename to website/source/assets/images/logo-header.png diff --git a/website/source/assets/images/header-logo@2x.png b/website/source/assets/images/logo-header@2x.png similarity index 100% rename from website/source/assets/images/header-logo@2x.png rename to website/source/assets/images/logo-header@2x.png diff --git a/website/source/assets/javascripts/app/Init.js b/website/source/assets/javascripts/app/Init.js index 074c95d8d9..06e772bebe 100644 --- a/website/source/assets/javascripts/app/Init.js +++ b/website/source/assets/javascripts/app/Init.js @@ -21,6 +21,12 @@ var Init = { if (this.Pages[id]) { this.Pages[id](); } + //always init sidebar + Init.initializeSidebar(); + }, + + initializeSidebar: function(){ + new Sidebar(); }, generateAnimatedLogo: function(){ diff --git a/website/source/assets/javascripts/app/Sidebar.js b/website/source/assets/javascripts/app/Sidebar.js new file mode 100644 index 0000000000..b36e508c4a --- /dev/null +++ b/website/source/assets/javascripts/app/Sidebar.js @@ -0,0 +1,50 @@ +(function(){ + + Sidebar = Base.extend({ + + $body: null, + $overlay: null, + $sidebar: null, + $sidebarHeader: null, + $sidebarImg: null, + $toggleButton: null, + + constructor: function(){ + this.$body = $('body'); + this.$overlay = $('.sidebar-overlay'); + this.$sidebar = $('#sidebar'); + this.$sidebarHeader = $('#sidebar .sidebar-header'); + this.$toggleButton = $('.navbar-toggle'); + this.sidebarImg = this.$sidebarHeader.css('background-image'); + + this.addEventListeners(); + }, + + addEventListeners: function(){ + var _this = this; + + 
_this.$toggleButton.on('click', function() { + _this.$sidebar.toggleClass('open'); + if ((_this.$sidebar.hasClass('sidebar-fixed-left') || _this.$sidebar.hasClass('sidebar-fixed-right')) && _this.$sidebar.hasClass('open')) { + _this.$overlay.addClass('active'); + _this.$body.css('overflow', 'hidden'); + } else { + _this.$overlay.removeClass('active'); + _this.$body.css('overflow', 'auto'); + } + + return false; + }); + + _this.$overlay.on('click', function() { + $(this).removeClass('active'); + _this.$body.css('overflow', 'auto'); + _this.$sidebar.removeClass('open'); + }); + } + + }); + + window.Sidebar = Sidebar; + +})(); diff --git a/website/source/assets/javascripts/application.js b/website/source/assets/javascripts/application.js index 9180016ef3..15542d8dc1 100644 --- a/website/source/assets/javascripts/application.js +++ b/website/source/assets/javascripts/application.js @@ -21,4 +21,5 @@ //= require app/Engine.Shape //= require app/Engine.Shape.Puller //= require app/Engine.Typewriter +//= require app/Sidebar //= require app/Init diff --git a/website/source/assets/javascripts/docs.js b/website/source/assets/javascripts/docs.js index 3247af6585..5f35a8cd74 100644 --- a/website/source/assets/javascripts/docs.js +++ b/website/source/assets/javascripts/docs.js @@ -2,7 +2,7 @@ var Init = { - start: function(){ + start: function(){ var classname = this.hasClass(document.body, 'page-sub'); if (classname) { @@ -25,7 +25,7 @@ var Init = { resizeImage: function(){ var header = document.getElementById('header'), - footer = document.getElementById('footer-wrap'), + footer = document.getElementById('footer'), main = document.getElementById('main-content'), vp = window.innerHeight, bodyHeight = document.body.clientHeight, @@ -33,10 +33,10 @@ var Init = { fHeight = footer.clientHeight, withMinHeight = hHeight + fHeight + 830; - if(withMinHeight > bodyHeight ){ + if(withMinHeight < vp && bodyHeight < vp){ var newHeight = (vp - (hHeight+fHeight)) + 'px'; main.style.height = newHeight; - } + } } }; diff --git a/website/source/assets/stylesheets/_fonts.scss b/website/source/assets/stylesheets/_fonts.scss index 3f1d4aaed7..c14cb70711 100755 --- a/website/source/assets/stylesheets/_fonts.scss +++ b/website/source/assets/stylesheets/_fonts.scss @@ -2,6 +2,7 @@ // Typography // -------------------------------------------------- + //light .rls-l{ font-family: $font-family-lato; diff --git a/website/source/assets/stylesheets/_footer.scss b/website/source/assets/stylesheets/_footer.scss index c16acff753..386caf8874 100644 --- a/website/source/assets/stylesheets/_footer.scss +++ b/website/source/assets/stylesheets/_footer.scss @@ -1,210 +1,88 @@ - -#footer-wrap{ - background-color: white; - padding: 0 0 50px 0; -} - -body.page-home{ - #footer{ - margin-top: -40px; - } +body.page-sub{ + #footer{ + padding: 40px 0; + margin-top: 0; + } } #footer{ - padding: 140px 0 40px; - color: black; - - a{ - color: black; - } - + background-color: white; + padding: 150px 0 80px; + margin-top: -40px; + &.white{ + background-color: $black; .footer-links{ - margin-bottom: 20px; - - .li-under a:hover::after, - .li-under a:focus::after { - opacity: 1; - -webkit-transform: skewY(15deg) translateY(8px); - -moz-transform: skewY(15deg) translateY(8px); - transform: skewY(15deg) translateY(8px); - } - - .li-under a::after { - background-color: $purple; - } - - li{ - a{ - text-transform: uppercase; - font-size: 12px; - letter-spacing: 3px; - @include transition( color 0.3s ease ); - font-weight: 400; - - &:hover{ - color: $purple; - 
@include transition( color 0.3s ease ); - background-color: transparent; - } - } - } + li > a { + @include project-footer-a-subpage-style(); + } } + } - .buttons.navbar-nav{ - float: none; - display: inline-block; - margin-bottom: 30px; - margin-top: 0px; - - li{ - &.first{ - margin-right: 12px; - } - - &.download{ - a{ - background: image-url('icon-download-purple.png') 8px 6px no-repeat; - @include img-retina("icon-download-purple.png", "icon-download-purple@2x.png", 20px, 20px); - } - } - - &.github{ - a{ - background: image-url('icon-github-purple.png') 8px 6px no-repeat; - @include img-retina("icon-github-purple.png", "icon-github-purple@2x.png", 20px, 20px); - } - } - } - - li > a { - padding-top: 6px; - padding-bottom: 6px; - padding-left: 40px; - } + .footer-links{ + li > a { + @include project-footer-a-style(); } + } - .footer-hashi{ - float: right; - padding-top: 5px; - letter-spacing: 2px; + .hashicorp-project{ + margin-top: 24px; + } - a{ - color: black; - font-weight: $font-weight-lato-xb; - } - - span{ - margin-right: 10px; - } - - .hashi-logo{ - display: inline-block; - vertical-align: middle; - i{ - display: inline-block; - width: 37px; - height: 40px; - background: image-url('footer-hashicorp-logo.png') 0 0 no-repeat; - @include img-retina("footer-hashicorp-logo.png", "footer-hashicorp-logo@2x.png", 37px, 40px); - } - } - } + .pull-right{ + padding-right: 15px; + } } -.page-sub{ - #footer-wrap{ - padding: 0; - } +.edit-page-link{ + position: absolute; + top: -70px; + right: 30px;; - #footer{ - padding: 140px 0 100px; - background-color: $black; - transform: none; - - >.container{ - transform: none; - } - - a{ - color: white; - } - - .footer-hashi{ - color: white; - - .hashi-logo{ - i{ - background: image-url('footer-hashicorp-white-logo.png') 0 0 no-repeat; - @include img-retina("footer-hashicorp-white-logo.png", "footer-hashicorp-white-logo@2x.png", 37px, 40px); - } - } - } - } -} - - -@media (min-width: 1500px) { - body.page-home{ - #footer{ - margin-top: -60px; - padding: 190px 0 40px; - } - } + a{ + text-transform: uppercase; + color: $black; + font-size: 13px; + } } @media (max-width: 992px) { - .page-sub #footer, #footer{ - .footer-hashi { - padding-top: 14px; - span{ - margin-right: 6px; - } - .hashi-logo{ - i{ - margin-top: -6px; - width: 20px; - height: 22px; - background-size: 20px 22px; - } - } - } + .footer-links { + display: block; + text-align: center; + + ul{ + display: inline-block;; + float: none !important; } - -} - -@media (max-width: 768px) { - #footer{ - padding: 100px 0 40px; - text-align: center; - - .footer-links{ - float: none; - display: inline-block; - } - - .footer-hashi { - float: none; - display: inline-block; - - .pull-right{ - float: none !important; - } - } + .footer-hashi{ + display: block; + float: none !important; } + } } -@media (max-width: 320px) { - #footer{ - text-align: center; +@media (max-width: 414px) { + #footer{ + ul{ + display: block; + li{ + display: block; + float: none; + } - .footer-links{ - .li-under{ - float: none !important; - } + &.external-links{ + li{ + svg{ + position: relative; + left: 0; + top: 2px; + margin-top: 0; + margin-right: 4px; + } } + } } + } } - - diff --git a/website/source/assets/stylesheets/_header.scss b/website/source/assets/stylesheets/_header.scss index 408d11d781..68e50f3683 100755 --- a/website/source/assets/stylesheets/_header.scss +++ b/website/source/assets/stylesheets/_header.scss @@ -1,382 +1,87 @@ // // Header +// - Project Specific +// - edits should be made here // 
-------------------------------------------------- body.page-sub{ - - .terra-btn{ - background-color: rgba(130, 47, 247, 1); - } - - #header{ - height: 90px; - background-color: $purple; - - .navbar-collapse{ - background-color: rgba(255, 255, 255, 0.98); - } - - .nav-logo{ - height: 90px; - } - - .nav-white{ - height: 90px; - background-color: white; - } - - .main-links.navbar-nav{ - float: left !important; - li > a { - color: black; - @include transition( color 0.3s ease ); - } - } - - .buttons.nav > li > a, .buttons.nav > li > a { - //background-color: lighten($purple, 1%); - @include transition( background-color 0.3s ease ); - } - - .buttons.nav > li > a:hover, .buttons.nav > li > a:focus { - background-color: black; - @include transition( background-color 0.3s ease ); - } - - .main-links.nav > li > a:hover, .main-links.nav > li > a:focus { - color: $purple; - @include transition( color 0.3s ease ); - } - } + #header{ + background-color: $purple; + } } #header { - position: relative; - color: $white; - text-rendering: optimizeLegibility; - margin-bottom: 0; + .navbar-brand { + .logo{ + font-size: 20px; + text-transform: uppercase; + @include lato-light(); + background: image-url('../images/logo-header.png') 0 0 no-repeat; + @include img-retina("../images/logo-header.png", "../images/logo-header@2x.png", $project-logo-width, $project-logo-height); + background-position: 0 45%; - &.navbar-static-top{ - height:70px; - - -webkit-transform:translate3d(0,0,0); - -moz-transform:translate3d(0,0,0); - -ms-transform:translate3d(0,0,0); - -o-transform:translate3d(0,0,0); - transform:translate3d(0,0,0); - z-index: 1000; + &:hover{ + opacity: .6; + } } - a{ - color: $white; - } - - .navbar-toggle{ - margin-top: 26px; - margin-bottom: 14px; - margin-right: 0; - border: 2px solid $white; - border-radius: 0; - .icon-bar{ - border: 1px solid $white; - border-radius: 0; + .by-hashicorp{ + &:hover{ + svg{ + line{ + opacity: .4; + } } + } } + } + .buttons{ + margin-top: 2px; //baseline everything + + ul.navbar-nav{ + li { + // &:hover{ + // svg path{ + // fill: $purple; + // } + // } + + svg path{ + fill: $white; + } + } + } + } + + .main-links, + .external-links { + li > a { + @include project-a-style(); + } + } +} + +@media (max-width: 414px) { + #header { .navbar-brand { - &.logo{ - margin-top: 15px; - padding: 5px 0 0 68px; - height: 56px; - line-height: 56px; - font-size: 24px; - @include lato-light(); - text-transform: uppercase; - background: image-url('consul-header-logo.png') 0 0 no-repeat; - @include img-retina("header-logo.png", "header-logo@2x.png", 50px, 56px); - -webkit-font-smoothing: default; - } - } - - .navbar-nav{ - -webkit-font-smoothing: antialiased; - li{ - position: relative; - - > a { - font-size: 12px; - text-transform: uppercase; - letter-spacing: 3px; - padding-left: 22px; - @include transition( color 0.3s ease ); - } - - &.first{ - >a{ - padding-left: 15px; - } - } - } - } - - .nav > li > a:hover, .nav > li > a:focus { - background-color: transparent; - color: lighten($purple, 15%); - @include transition( color 0.3s ease ); - } - - .main-links.navbar-nav{ - margin-top: 28px; - - li + li{ - padding-left: 6px; - } - - li + li::before { - content: ""; - position: absolute; - left: 0; - top: 7px; - width: 1px; - height: 12px; - background-color: $purple; - @include skewY(24deg); - padding-right: 0; - } - - li > a { - //border-bottom: 2px solid rgba(255, 255, 255, .2); - line-height: 26px; - margin: 0 8px; - padding: 0 0 0 4px; - } - - } - - .buttons.navbar-nav{ - margin-top: 25px; 
- margin-left: 30px; - - li{ - &.first{ - margin-right: 13px; - } - - &.download{ - a{ - padding-left: 30px; - background: image-url("header-download-icon.png") 12px 8px no-repeat; - @include img-retina("header-download-icon.png", "header-download-icon@2x.png", 12px, 13px); - } - } - - &.github{ - a{ - background: image-url("header-github-icon.png") 12px 7px no-repeat; - @include img-retina("header-github-icon.png", "header-github-icon@2x.png", 12px, 13px); - } - } - } - - li > a { - color: white; - padding-top: 4px; - padding-bottom: 4px; - padding-left: 32px; - padding-right: 12px; - letter-spacing: 0.05em; - } + .logo{ + padding-left: 37px; + font-size: 18px; + @include img-retina("../images/logo-header.png", "../images/logo-header@2x.png", $project-logo-width * .75, $project-logo-height * .75); + //background-position: 0 45%; + } } + } } -@media (min-width: 1200px) { - - #header{ - .main-links.navbar-nav{ - margin-top: 28px; - - li + li{ - padding-left: 6px; - } - - li + li::before { - content: ""; - position: absolute; - left: 0; - top: 9px; - width: 6px; - height: 8px; - background-color: $purple; - @include skewY(24deg); - padding-right: 8px; - } - - li > a { - //border-bottom: 2px solid rgba(255, 255, 255, .2); - line-height: 26px; - margin: 0 12px; - padding: 0 0 0 4px; - } - - } - } -} - -@media (min-width: 992px) { - - .collapse{ - margin-top: 8px; - } - - //homepage has more space at this width to accommodate chevrons - .page-home{ - #header{ - .main-links.navbar-nav{ - li + li{ - padding-left: 6px; - } - - li + li::before { - content: ""; - position: absolute; - left: 0; - top: 9px; - width: 6px; - height: 8px; - background-color: $purple; - @include skewY(24deg); - padding-right: 8px; - } - } - } - } -} - - - -@media (min-width: 768px) and (max-width: 992px) { - - body.page-home{ - .nav-logo{ - width: 30%; - } - .nav-white{ - margin-top: 8px; - width: 70%; - } - .buttons.navbar-nav{ - li{ - > a{ - padding-right: 4px !important; - text-indent: -9999px; - white-space: nowrap; - } - } - } - } -} - - -@media (max-width: 992px) { - - #header { - .navbar-brand { - &.logo{ - span{ - width: 120px; - height: 39px; - margin-top: 12px; - background-size: 120px 39px; - } - } - } - } -} - -@media (max-width: 768px) { - - body.page-sub{ - #header{ - .nav-white{ - background-color: transparent; - } - } - } - - #header{ - .buttons.navbar-nav{ - float: none !important; - margin: 0; - padding-bottom: 0 !important; - - li{ - &.first{ - margin-right: 0; - } - } - } - } - - //#footer, - #header{ - .buttons.navbar-nav, - .main-links.navbar-nav{ - display: block; - padding-bottom: 15px; - li{ - display: block; - float: none; - margin-top: 15px; - } - - .li-under a::after, - li + li::before { - display: none; - } - } - } - - //#footer, - #header{ - .main-links.navbar-nav{ - float: left !important; - li > a { - padding: 0; - padding-left: 0; - line-height: 22px; - } - } - } -} - -@media (max-width: 763px) { - .navbar-static-top { - .nav-white { - background-color:rgba(0,0,0,0.5); - } - } -} @media (max-width: 320px) { - - #header{ - .navbar-brand { - &.logo{ - padding:0 0 0 54px !important; - font-size: 20px !important; - line-height:42px !important; - margin-top: 23px !important ; - @include img-retina("../images/header-logo.png", "../images/header-logo@2x.png", 39px, 44px); - } - } - + #header { + .navbar-brand { + .logo{ + font-size: 0 !important; //hide terraform text + } } - - #feature-auto{ - .terminal-text{ - line-height: 48px !important; - font-size: 20px !important; - } - } - + } } diff 
--git a/website/source/assets/stylesheets/_sidebar.scss b/website/source/assets/stylesheets/_sidebar.scss new file mode 100644 index 0000000000..45a4ee64fa --- /dev/null +++ b/website/source/assets/stylesheets/_sidebar.scss @@ -0,0 +1,23 @@ +// +// Sidebar +// - Project Specific +// - Make sidebar edits here +// -------------------------------------------------- + +.sidebar { + .sidebar-nav { + // Links + //---------------- + li { + a { + color: $black; + + svg{ + path{ + fill: $black; + } + } + } + } + } +} diff --git a/website/source/assets/stylesheets/_utilities.scss b/website/source/assets/stylesheets/_utilities.scss index b01423f500..4f30e502b3 100755 --- a/website/source/assets/stylesheets/_utilities.scss +++ b/website/source/assets/stylesheets/_utilities.scss @@ -2,27 +2,11 @@ // Utility classes // -------------------------------------------------- - -// -// ------------------------- - @mixin anti-alias() { text-rendering: optimizeLegibility; -webkit-font-smoothing: antialiased; } -@mixin consul-gradient-bg() { - background: #694a9c; /* Old browsers */ - background: -moz-linear-gradient(left, #694a9c 0%, #cd2028 100%); /* FF3.6+ */ - background: -webkit-gradient(linear, left top, right top, color-stop(0%,#694a9c), color-stop(100%,#cd2028)); /* Chrome,Safari4+ */ - background: -webkit-linear-gradient(left, #694a9c 0%,#cd2028 100%); /* Chrome10+,Safari5.1+ */ - background: -o-linear-gradient(left, #694a9c 0%,#cd2028 100%); /* Opera 11.10+ */ - background: -ms-linear-gradient(left, #694a9c 0%,#cd2028 100%); /* IE10+ */ - background: linear-gradient(to right, #694a9c 0%,#cd2028 100%); /* W3C */ - filter: progid:DXImageTransform.Microsoft.gradient( startColorstr='#694a9c', endColorstr='#cd2028',GradientType=1 ); /* IE6-9 */ - -} - @mixin lato-light() { font-family: $font-family-lato; font-weight: 300; diff --git a/website/source/assets/stylesheets/application.scss b/website/source/assets/stylesheets/application.scss index 95ae64b73a..3776f90566 100755 --- a/website/source/assets/stylesheets/application.scss +++ b/website/source/assets/stylesheets/application.scss @@ -1,13 +1,12 @@ @import 'bootstrap-sprockets'; @import 'bootstrap'; -@import url("//fonts.googleapis.com/css?family=Lato:300,400,700"); +@import url("//fonts.googleapis.com/css?family=Lato:300,400,700|Open+Sans:300,400,600"); // Core variables and mixins @import '_variables'; -@import '_mixins'; -// Utility classes +// Utility @import '_utilities'; // Core CSS @@ -16,11 +15,18 @@ //Global Site @import '_global'; +// Hashicorp Shared Project Styles +@import 'hashicorp-shared/_project-utility'; +@import 'hashicorp-shared/_hashicorp-utility'; +@import 'hashicorp-shared/_hashicorp-header'; +@import 'hashicorp-shared/_hashicorp-sidebar'; + // Components @import '_header'; @import '_footer'; @import '_jumbotron'; @import '_buttons'; +@import '_sidebar'; // Pages @import '_home'; diff --git a/website/source/assets/stylesheets/hashicorp-shared/_hashicorp-header.scss b/website/source/assets/stylesheets/hashicorp-shared/_hashicorp-header.scss new file mode 100755 index 0000000000..e9bbe501e7 --- /dev/null +++ b/website/source/assets/stylesheets/hashicorp-shared/_hashicorp-header.scss @@ -0,0 +1,343 @@ +// +// Hashicorp header +// - Shared throughout projects +// - Edits should not be made here +// -------------------------------------------------- + +#header{ + position: relative; + margin-bottom: 0; +} + +.navigation { + color: black; + text-rendering: optimizeLegibility; + transition: all 1s ease; + + &.white{ + .navbar-brand { + 
.logo { + color: white; + } + } + + .main-links, + .external-links { + li > a { + &:hover{ + opacity: 1; + } + } + } + } + + &.black{ + .navbar-brand { + .logo { + color: black; + } + } + + .main-links, + .external-links { + li > a { + color: black; + } + } + } + + .navbar-toggle{ + height: $header-height; + margin: 0; + border-radius: 0; + .icon-bar{ + border: 1px solid $black; + border-radius: 0; + } + } + + .external-links { + &.white{ + svg path{ + fill: $white; + } + } + + li { + position: relative; + + svg path{ + @include transition( all 300ms ease-in ); + } + + &:hover{ + svg path{ + @include transition( all 300ms ease-in ); + } + } + + @include project-svg-external-links-style(); + + &.download{ + margin-right: 10px; + } + + > a { + padding-left: 12px !important; + svg{ + position: absolute; + left: -12px; + top: 50%; + margin-top: -7px; + width: 14px; + height: 14px; + } + } + } + } + + .main-links{ + margin-right: $nav-margin-right * 2; + } + + .main-links, + .external-links { + &.white{ + li > a { + color: white; + } + } + li > a { + @include hashi-a-style(); + margin: 0 10px; + padding-top: 1px; + line-height: $header-height; + @include project-a-style(); + } + } + + .nav > li > a:hover, .nav > li > a:focus { + background-color: transparent; + @include transition( all 300ms ease-in ); + } +} + +.navbar-brand { + display: block; + height: $header-height; + padding: 0; + margin: 0 10px 0 0; + + .logo{ + display: inline-block; + height: $header-height; + vertical-align:top; + padding: 0; + line-height: $header-height; + padding-left: $project-logo-width + $project-logo-pad-left; + background-position: 0 center; + @include transition(all 300ms ease-in); + + &:hover{ + @include transition(all 300ms ease-in); + text-decoration: none; + } + } +} + +.navbar-toggle{ + &.white{ + .icon-bar{ + border: 1px solid white; + } + } +} + +.by-hashicorp{ + display: inline-block; + vertical-align:top; + height: $header-height; + margin-left: 3px; + padding-top: 2px; + color: black; + line-height: $header-height; + font-family: $header-font-family; + font-weight: 600; + font-size: 0; + text-decoration: none; + + &.white{ + color: white; + font-weight: 300; + svg{ + path, + polygon{ + fill: white; + } + line{ + stroke: white; + } + } + + &:focus, + &:hover{ + text-decoration: none; + color: white; + } + } + + &:focus, + &:hover{ + text-decoration: none; + } + + .svg-wrap{ + font-size: 13px; + } + + svg{ + &.svg-by{ + width: $by-hashicorp-width; + height: $by-hashicorp-height; + margin-bottom: -4px; + margin-left: 4px; + } + + &.svg-logo{ + width: 16px; + height: 16px; + margin-bottom: -3px; + margin-left: 4px; + } + + path, + polygon{ + fill: black; + @include transition(all 300ms ease-in); + + &:hover{ + @include transition(all 300ms ease-in); + } + } + line{ + stroke: black; + @include transition(all 300ms ease-in); + + &:hover{ + @include transition(all 300ms ease-in); + } + } + } +} + +.hashicorp-project{ + display: inline-block; + height: 30px; + line-height: 30px; + text-decoration: none; + font-size: 14px; + color: $black; + font-weight: 600; + + &.white{ + color: white; + svg{ + path, + polygon{ + fill: white; + } + line{ + stroke: white; + } + } + } + + &:focus{ + text-decoration: none; + } + + &:hover{ + text-decoration: none; + svg{ + &.svg-by{ + line{ + stroke: $purple; + } + } + } + } + + span{ + margin-right: 4px; + font-family: $header-font-family; + font-weight: 500; + } + + span, + svg{ + display: inline-block; + } + + svg{ + &.svg-by{ + width: $by-hashicorp-width; + height: 
$by-hashicorp-height; + margin-bottom: -4px; + margin-left: -3px; + } + + &.svg-logo{ + width: 30px; + height: 30px; + margin-bottom: -10px; + margin-left: -1px; + } + + path, + line{ + fill: $black; + @include transition(all 300ms ease-in); + + &:hover{ + @include transition(all 300ms ease-in); + } + } + } +} + +@media (max-width: 480px) { + .navigation { + .main-links{ + margin-right: 0; + } + } +} + +@media (max-width: 414px) { + #header { + .navbar-toggle{ + padding-top: 10px; + height: $header-mobile-height; + } + + .navbar-brand { + height: $header-mobile-height; + + .logo{ + height: $header-mobile-height; + line-height: $header-mobile-height; + } + .by-hashicorp{ + height: $header-mobile-height; + line-height: $header-mobile-height; + padding-top: 0; + } + } + .main-links, + .external-links { + li > a { + line-height: $header-mobile-height; + } + } + } +} diff --git a/website/source/assets/stylesheets/hashicorp-shared/_hashicorp-sidebar.scss b/website/source/assets/stylesheets/hashicorp-shared/_hashicorp-sidebar.scss new file mode 100644 index 0000000000..99f77f6c52 --- /dev/null +++ b/website/source/assets/stylesheets/hashicorp-shared/_hashicorp-sidebar.scss @@ -0,0 +1,293 @@ +// +// Hashicorp Sidebar +// - Shared throughout projects +// - Edits should not be made here +// -------------------------------------------------- + +// Base variables +// -------------------------------------------------- +$screen-tablet: 768px; + +$gray-darker: #212121; // #212121 - text +$gray-secondary: #757575; // #757575 - secondary text, icons +$gray: #bdbdbd; // #bdbdbd - hint text +$gray-light: #e0e0e0; // #e0e0e0 - divider +$gray-lighter: #f5f5f5; // #f5f5f5 - background +$link-color: $gray-darker; +$link-bg: transparent; +$link-hover-color: $gray-lighter; +$link-hover-bg: $gray-lighter; +$link-active-color: $gray-darker; +$link-active-bg: $gray-light; +$link-disabled-color: $gray-light; +$link-disabled-bg: transparent; + +/* -- Sidebar style ------------------------------- */ + +// Sidebar variables +// -------------------------------------------------- +$zindex-sidebar-fixed: 1035; + +$sidebar-desktop-width: 280px; +$sidebar-width: 240px; + +$sidebar-padding: 16px; +$sidebar-divider: $sidebar-padding/2; + +$sidebar-icon-width: 40px; +$sidebar-icon-height: 20px; + +@mixin sidebar-nav-base { + text-align: center; + + &:last-child{ + border-bottom: none; + } + + li > a { + background-color: $link-bg; + } + li:hover > a { + background-color: $link-hover-bg; + } + li:focus > a, li > a:focus { + background-color: $link-bg; + } + + > .open > a { + &, + &:hover, + &:focus { + background-color: $link-hover-bg; + } + } + + > .active > a { + &, + &:hover, + &:focus { + background-color: $link-active-bg; + } + } + > .disabled > a { + &, + &:hover, + &:focus { + background-color: $link-disabled-bg; + } + } + + // Dropdown menu items + > .dropdown { + // Remove background color from open dropdown + > .dropdown-menu { + background-color: $link-hover-bg; + + > li > a { + &:focus { + background-color: $link-hover-bg; + } + &:hover { + background-color: $link-hover-bg; + } + } + + > .active > a { + &, + &:hover, + &:focus { + color: $link-active-color; + background-color: $link-active-bg; + } + } + } + } +} + +// +// Sidebar +// -------------------------------------------------- + +// Sidebar Elements +// +// Basic style of sidebar elements +.sidebar { + position: relative; + display: block; + min-height: 100%; + overflow-y: auto; + overflow-x: hidden; + border: none; + @include transition(all 0.5s 
cubic-bezier(0.55, 0, 0.1, 1)); + @include clearfix(); + background-color: $white; + + ul{ + padding-left: 0; + list-style-type: none; + } + + .sidebar-divider, .divider { + width: 80%; + height: 1px; + margin: 8px auto; + background-color: lighten($gray, 20%); + } + + // Sidebar heading + //---------------- + .sidebar-header { + position: relative; + margin-bottom: $sidebar-padding; + @include transition(all .2s ease-in-out); + } + + .sidebar-image { + padding-top: 24px; + img { + display: block; + margin: 0 auto; + } + } + + + // Sidebar icons + //---------------- + .sidebar-icon { + display: inline-block; + height: $sidebar-icon-height; + margin-right: $sidebar-divider; + text-align: left; + font-size: $sidebar-icon-height; + vertical-align: middle; + + &:before, &:after { + vertical-align: middle; + } + } + + .sidebar-nav { + margin: 0; + padding: 0; + + @include sidebar-nav-base(); + + // Links + //---------------- + li { + position: relative; + list-style-type: none; + text-align: center; + + a { + position: relative; + cursor: pointer; + user-select: none; + @include hashi-a-style-core(); + + svg{ + top: 2px; + width: 14px; + height: 14px; + margin-bottom: -2px; + margin-right: 4px; + } + } + } + } +} + +// Sidebar toggling +// +// Hide sidebar +.sidebar { + width: 0; + @include translate3d(-$sidebar-desktop-width, 0, 0); + + &.open { + min-width: $sidebar-desktop-width; + width: $sidebar-desktop-width; + @include translate3d(0, 0, 0); + } +} + +// Sidebar positions: fix the left/right sidebars +.sidebar-fixed-left, +.sidebar-fixed-right, +.sidebar-stacked { + position: fixed; + top: 0; + bottom: 0; + z-index: $zindex-sidebar-fixed; +} +.sidebar-stacked { + left: 0; +} +.sidebar-fixed-left { + left: 0; + box-shadow: 2px 0px 25px rgba(0,0,0,0.15); + -webkit-box-shadow: 2px 0px 25px rgba(0,0,0,0.15); +} +.sidebar-fixed-right { + right: 0; + box-shadow: 0px 2px 25px rgba(0,0,0,0.15); + -webkit-box-shadow: 0px 2px 25px rgba(0,0,0,0.15); + + @include translate3d($sidebar-desktop-width, 0, 0); + &.open { + @include translate3d(0, 0, 0); + } + .icon-material-sidebar-arrow:before { + content: "\e614"; // icon-material-arrow-forward + } +} + +// Sidebar size +// +// Change size of sidebar and sidebar elements on small screens +@media (max-width: $screen-tablet) { + .sidebar.open { + min-width: $sidebar-width; + width: $sidebar-width; + } + + .sidebar .sidebar-header { + //height: $sidebar-width * 9/16; // 16:9 header dimension + } + + .sidebar .sidebar-image { + /* img { + width: $sidebar-width/4 - $sidebar-padding; + height: $sidebar-width/4 - $sidebar-padding; + } */ + } +} + +.sidebar-overlay { + visibility: hidden; + position: fixed; + top: 0; + left: 0; + right: 0; + bottom: 0; + opacity: 0; + background: $white; + z-index: $zindex-sidebar-fixed - 1; + + -webkit-transition: visibility 0s linear .4s,opacity .4s cubic-bezier(.4,0,.2,1); + -moz-transition: visibility 0s linear .4s,opacity .4s cubic-bezier(.4,0,.2,1); + transition: visibility 0s linear .4s,opacity .4s cubic-bezier(.4,0,.2,1); + -webkit-transform: translateZ(0); + -moz-transform: translateZ(0); + -ms-transform: translateZ(0); + -o-transform: translateZ(0); + transform: translateZ(0); +} + +.sidebar-overlay.active { + opacity: 0.3; + visibility: visible; + -webkit-transition-delay: 0s; + -moz-transition-delay: 0s; + transition-delay: 0s; +} diff --git a/website/source/assets/stylesheets/hashicorp-shared/_hashicorp-utility.scss b/website/source/assets/stylesheets/hashicorp-shared/_hashicorp-utility.scss new file mode 100755 index
0000000000..de17e9815d --- /dev/null +++ b/website/source/assets/stylesheets/hashicorp-shared/_hashicorp-utility.scss @@ -0,0 +1,87 @@ +// +// Hashicorp Nav (header/footer) Utility Vars and Mixins +// +// Notes: +// - Include this in Application.scss before header and feature-footer +// - Open Sans Google (Semibold - 600) font needs to be included if not already +// -------------------------------------------------- + +// Variables +$font-family-open-sans: 'Open Sans', 'Helvetica Neue', Helvetica, Arial, sans-serif; +$header-font-family: $font-family-open-sans; +$header-font-weight: 600; // semi-bold + +$header-height: 74px; +$header-mobile-height: 60px; +$by-hashicorp-width: 74px; +$by-hashicorp-height: 16px; +$nav-margin-right: 12px; + +// Mixins +@mixin hashi-a-style-core{ + font-family: $header-font-family; + font-weight: $header-font-weight; + font-size: 14px; + //letter-spacing: 0.0625em; +} + +@mixin hashi-a-style{ + margin: 0 15px; + padding: 0; + line-height: 22px; + @include hashi-a-style-core(); + @include transition( all 300ms ease-in ); + + &:hover{ + @include transition( all 300ms ease-in ); + background-color: transparent; + } +} + +//general shared project mixins +@mixin img-retina($image1x, $image, $width, $height) { + background-image: url($image1x); + background-size: $width $height; + background-repeat: no-repeat; + + @media (min--moz-device-pixel-ratio: 1.3), + (-o-min-device-pixel-ratio: 2.6/2), + (-webkit-min-device-pixel-ratio: 1.3), + (min-device-pixel-ratio: 1.3), + (min-resolution: 1.3dppx) { + /* on retina, use image that's scaled by 2 */ + background-image: url($image); + background-size: $width $height; + } +} + +// +// ------------------------- +@mixin anti-alias() { + text-rendering: optimizeLegibility; + -webkit-font-smoothing: antialiased; +} + +@mixin open-light() { + font-family: $font-family-open-sans; + font-weight: 300; +} + +@mixin open() { + font-family: $font-family-open-sans; + font-weight: 400; +} + +@mixin open-sb() { + font-family: $font-family-open-sans; + font-weight: 600; +} + +@mixin open-bold() { + font-family: $font-family-open-sans; + font-weight: 700; +} + +@mixin bez-1-transition{ + @include transition( all 300ms ease-in-out ); +} diff --git a/website/source/assets/stylesheets/hashicorp-shared/_project-utility.scss b/website/source/assets/stylesheets/hashicorp-shared/_project-utility.scss new file mode 100755 index 0000000000..570d6932c2 --- /dev/null +++ b/website/source/assets/stylesheets/hashicorp-shared/_project-utility.scss @@ -0,0 +1,72 @@ +// +// Mixins Specific to project +// - make edits to mixins here +// -------------------------------------------------- + +// Variables +$project-logo-width: 38px; +$project-logo-height: 40px; +$project-logo-pad-left: 8px; + +// Mixins +@mixin project-a-style{ + color: $white; + font-weight: 400; + opacity: .75; + -webkit-font-smoothing: antialiased; + + &:hover{ + color: $white; + opacity: 1; + } +} + +@mixin project-footer-a-style{ + color: $black; + font-weight: 400; + + &:hover{ + color: $purple; + + svg path{ + fill: $purple; + } + } +} + +@mixin project-footer-a-subpage-style{ + color: $white; + font-weight: 300; + + svg path{ + fill: $white; + } + + &:hover{ + color: $purple; + + svg path{ + fill: $purple; + } + } +} + +@mixin project-svg-external-links-style{ + svg path{ + fill: $black; + } + + &:hover{ + svg path{ + fill: $blue; + } + } +} + +@mixin project-by-hashicorp-style{ + &:hover{ + line{ + stroke: $blue; + } + } +} diff --git
a/website/source/docs/commands/remote-config.html.markdown b/website/source/docs/commands/remote-config.html.markdown index 6f9a84b93b..ad31021134 100644 --- a/website/source/docs/commands/remote-config.html.markdown +++ b/website/source/docs/commands/remote-config.html.markdown @@ -16,7 +16,7 @@ disk. When remote state storage is enabled, Terraform will automatically fetch the latest state from the remote server when necessary and if any updates are made, the newest state is persisted back to the remote server. In this mode, users do not need to durably store the state using version -control or shared storaged. +control or shared storage. ## Usage diff --git a/website/source/docs/configuration/interpolation.html.md b/website/source/docs/configuration/interpolation.html.md index 049c718251..21efbd83e6 100644 --- a/website/source/docs/configuration/interpolation.html.md +++ b/website/source/docs/configuration/interpolation.html.md @@ -95,6 +95,9 @@ The supported built-in functions are: CIDR notation (like ``10.0.0.0/8``) and extends its prefix to include an additional subnet number. For example, ``cidrsubnet("10.0.0.0/8", 8, 2)`` returns ``10.2.0.0/16``. + + * `coalesce(string1, string2, ...)` - Returns the first non-empty value from + the given arguments. At least two arguments must be provided. * `compact(list)` - Removes empty string elements from a list. This can be useful in some cases, for example when passing joined lists as module diff --git a/website/source/docs/internals/debugging.html.md b/website/source/docs/internals/debugging.html.md index 7aa5fe25ec..b4d2afbd37 100644 --- a/website/source/docs/internals/debugging.html.md +++ b/website/source/docs/internals/debugging.html.md @@ -12,6 +12,6 @@ Terraform has detailed logs which can be enabled by setting the `TF_LOG` environ You can set `TF_LOG` to one of the log levels `TRACE`, `DEBUG`, `INFO`, `WARN` or `ERROR` to change the verbosity of the logs. `TRACE` is the most verbose and it is the default if `TF_LOG` is set to something other than a log level name. -To persist logged output you can set TF_LOG_PATH in order to force the log to always go to a specific file when logging is enabled. Note that even when TF_LOG_PATH is set, TF_LOG must be set in order for any logging to be enabled. +To persist logged output you can set `TF_LOG_PATH` in order to force the log to always go to a specific file when logging is enabled. Note that even when `TF_LOG_PATH` is set, `TF_LOG` must be set in order for any logging to be enabled. -If you find a bug with Terraform, please include the detailed log by using a service such as gist. \ No newline at end of file +If you find a bug with Terraform, please include the detailed log by using a service such as gist. diff --git a/website/source/docs/providers/aws/index.html.markdown b/website/source/docs/providers/aws/index.html.markdown index 05efd5700f..7199111c2b 100644 --- a/website/source/docs/providers/aws/index.html.markdown +++ b/website/source/docs/providers/aws/index.html.markdown @@ -59,5 +59,4 @@ The following arguments are supported in the `provider` block: * `kinesis_endpoint` - (Optional) Use this to override the default endpoint URL constructed from the `region`. It's typically used to connect to kinesalite. -In addition to the above parameters, the `AWS_SESSION_TOKEN` environmental -variable can be set to set an MFA token. +* `token` - (Optional) Use this to set an MFA token. It can also be sourced from the `AWS_SECURITY_TOKEN` environment variable. 
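To illustrate the new `token` argument, a provider block might look like the following sketch — all three credential values are placeholders, not working credentials:

```
provider "aws" {
    access_key = "anaccesskey"   # placeholder
    secret_key = "asecretkey"    # placeholder
    token      = "asessiontoken" # placeholder MFA/session token
    region     = "us-east-1"
}
```

In practice the token is usually left out of configuration entirely and supplied via the `AWS_SECURITY_TOKEN` environment variable instead.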
diff --git a/website/source/docs/providers/aws/r/db_instance.html.markdown b/website/source/docs/providers/aws/r/db_instance.html.markdown index 499e13ba40..55d13e250f 100644 --- a/website/source/docs/providers/aws/r/db_instance.html.markdown +++ b/website/source/docs/providers/aws/r/db_instance.html.markdown @@ -36,7 +36,7 @@ The following arguments are supported: * `allocated_storage` - (Required) The allocated storage in gigabytes. * `engine` - (Required) The database engine to use. -* `engine_version` - (Required) The engine version to use. +* `engine_version` - (Optional) The engine version to use. * `identifier` - (Required) The name of the RDS instance * `instance_class` - (Required) The instance type of the RDS instance. * `storage_type` - (Optional) One of "standard" (magnetic), "gp2" (general @@ -81,6 +81,7 @@ database, and to use this value as the source database. This correlates to the [Working with PostgreSQL and MySQL Read Replicas](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html) for more information on using Replication. * `snapshot_identifier` - (Optional) Specifies whether or not to create this database from a snapshot. This correlates to the snapshot ID you'd find in the RDS console, e.g: rds:production-2015-06-26-06-05. +* `license_model` - (Optional, but required for some DB engines, e.g., Oracle SE1) License model information for this DB instance. ~> **NOTE:** Removing the `replicate_source_db` attribute from an existing RDS Replicate database managed by Terraform will promote the database to a fully diff --git a/website/source/docs/providers/aws/r/elasticache_cluster.html.markdown b/website/source/docs/providers/aws/r/elasticache_cluster.html.markdown index ef1d69ed4a..e39d6172a7 100644 --- a/website/source/docs/providers/aws/r/elasticache_cluster.html.markdown +++ b/website/source/docs/providers/aws/r/elasticache_cluster.html.markdown @@ -73,12 +73,22 @@ names to associate with this cache cluster Amazon Resource Name (ARN) of a Redis RDB snapshot file stored in Amazon S3. Example: `arn:aws:s3:::my_bucket/snapshot1.rdb` +* `snapshot_window` - (Optional) The daily time range (in UTC) during which ElastiCache will +begin taking a daily snapshot of your cache cluster. Can only be used for the Redis engine. Example: 05:00-09:00 + +* `snapshot_retention_limit` - (Optional) The number of days for which ElastiCache will +retain automatic cache cluster snapshots before deleting them. For example, if you set +SnapshotRetentionLimit to 5, then a snapshot that was taken today will be retained for 5 days +before being deleted. If the value of SnapshotRetentionLimit is set to zero (0), backups are turned off. +Can only be used for the Redis engine. + * `notification_topic_arn` – (Optional) An Amazon Resource Name (ARN) of an SNS topic to send ElastiCache notifications to. Example: `arn:aws:sns:us-east-1:012345678999:my_sns_topic` + * `tags` - (Optional) A mapping of tags to assign to the resource. +~> **NOTE:** Snapshotting functionality is not compatible with t2 instance types.
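As a hedged sketch of the new snapshot arguments in context (the cluster name and values below are illustrative, and a non-t2 node type is chosen deliberately):

```
resource "aws_elasticache_cluster" "redis" {
    cluster_id               = "tf-redis-example"    # illustrative name
    engine                   = "redis"               # snapshots are Redis-only
    node_type                = "cache.m1.small"      # t2 nodes do not support snapshotting
    num_cache_nodes          = 1
    port                     = 6379
    parameter_group_name     = "default.redis2.8"
    snapshot_window          = "05:00-09:00"         # daily snapshot window, in UTC
    snapshot_retention_limit = 5                     # keep automatic snapshots for 5 days
}
```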
## Attributes Reference diff --git a/website/source/docs/providers/aws/r/elb.html.markdown b/website/source/docs/providers/aws/r/elb.html.markdown index 824a5507f8..dde90e54d7 100644 --- a/website/source/docs/providers/aws/r/elb.html.markdown +++ b/website/source/docs/providers/aws/r/elb.html.markdown @@ -18,6 +18,12 @@ resource "aws_elb" "bar" { name = "foobar-terraform-elb" availability_zones = ["us-west-2a", "us-west-2b", "us-west-2c"] + access_logs { + bucket = "foo" + bucket_prefix = "bar" + interval = 60 + } + listener { instance_port = 8000 instance_protocol = "http" @@ -27,7 +33,7 @@ resource "aws_elb" "bar" { listener { instance_port = 8000 - instance_protocol = "http" + instance_protocol = "https" lb_port = 443 lb_protocol = "https" ssl_certificate_id = "arn:aws:iam::123456789012:server-certificate/certName" @@ -58,6 +64,7 @@ resource "aws_elb" "bar" { The following arguments are supported: * `name` - (Optional) The name of the ELB. By default generated by terraform. +* `access_logs` - (Optional) An Access Logs block. Access Logs documented below. * `availability_zones` - (Required for an EC2-classic ELB) The AZ's to serve traffic in. * `security_groups` - (Optional) A list of security group IDs to assign to the ELB. * `subnets` - (Required for a VPC ELB) A list of subnet IDs to attach to the ELB. @@ -74,13 +81,23 @@ The following arguments are supported: Exactly one of `availability_zones` or `subnets` must be specified: this determines if the ELB exists in a VPC or in EC2-classic. +Access Logs support the following: + +* `bucket` - (Required) The S3 bucket name to store the logs in. +* `bucket_prefix` - (Optional) The S3 bucket prefix. Logs are stored in the root if not configured. +* `interval` - (Optional) The publishing interval in minutes. Default: 60 minutes. + Listeners support the following: * `instance_port` - (Required) The port on the instance to route to -* `instance_protocol` - (Required) The protocol to use to the instance. +* `instance_protocol` - (Required) The protocol to use to the instance. Valid + values are `HTTP`, `HTTPS`, `TCP`, or `SSL` * `lb_port` - (Required) The port to listen on for the load balancer -* `lb_protocol` - (Required) The protocol to listen on. -* `ssl_certificate_id` - (Optional) The id of an SSL certificate you have uploaded to AWS IAM. +* `lb_protocol` - (Required) The protocol to listen on. Valid values are `HTTP`, + `HTTPS`, `TCP`, or `SSL` +* `ssl_certificate_id` - (Optional) The id of an SSL certificate you have +uploaded to AWS IAM. **Only valid when `instance_protocol` and + `lb_protocol` are either HTTPS or SSL** Health Check supports the following: @@ -100,5 +117,8 @@ The following attributes are exported: * `instances` - The list of instances in the ELB * `source_security_group` - The name of the security group that you can use as part of your inbound rules for your load balancer's back-end application - instances. + instances. Use this for Classic or Default VPC only. +* `source_security_group_id` - The ID of the security group that you can use as + part of your inbound rules for your load balancer's back-end application + instances. Only available on ELBs launched in a VPC.
* `zone_id` - The canonical hosted zone ID of the ELB (to be used in a Route 53 Alias record) diff --git a/website/source/docs/providers/aws/r/iam_instance_profile.html.markdown b/website/source/docs/providers/aws/r/iam_instance_profile.html.markdown index c09ff6dc84..f9b05f66fa 100644 --- a/website/source/docs/providers/aws/r/iam_instance_profile.html.markdown +++ b/website/source/docs/providers/aws/r/iam_instance_profile.html.markdown @@ -23,7 +23,7 @@ resource "aws_iam_role" "role" { path = "/" assume_role_policy = < **NOTE:** Kinesis Firehose is currently only supported in us-east-1, us-west-2 and eu-west-1. This implementation of Kinesis Firehose only supports the s3 destination type as Terraform doesn't support Redshift yet. + +## Argument Reference + +The following arguments are supported: + +* `name` - (Required) A name to identify the stream. This is unique to the +AWS account and region the Stream is created in. +* `destination` – (Required) This is the destination to where the data is delivered. The only options are `s3` & `redshift` +* `role_arn` - (Required) The ARN of the AWS credentials. +* `s3_bucket_arn` - (Required) The ARN of the S3 bucket +* `s3_prefix` - (Optional) The "YYYY/MM/DD/HH" time format prefix is automatically used for delivered S3 files. You can specify an extra prefix to be added in front of the time format prefix. Note that if the prefix ends with a slash, it appears as a folder in the S3 bucket +* `s3_buffer_size` - (Optional) Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5. + We recommend setting SizeInMBs to a value greater than the amount of data you typically ingest into the delivery stream in 10 seconds. For example, if you typically ingest data at 1 MB/sec, set SizeInMBs to be 10 MB or higher +* `s3_buffer_interval` - (Optional) Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 300 +* `s3_data_compression` - (Optional) The compression format. If no value is specified, the default is NOCOMPRESSION. Other supported values are GZIP, ZIP & Snappy + + +## Attributes Reference + +* `arn` - The Amazon Resource Name (ARN) specifying the Stream + +[1]: http://aws.amazon.com/documentation/firehose/ diff --git a/website/source/docs/providers/aws/r/lambda_function.html.markdown b/website/source/docs/providers/aws/r/lambda_function.html.markdown index 4c931fbada..f9c1ea4a3f 100644 --- a/website/source/docs/providers/aws/r/lambda_function.html.markdown +++ b/website/source/docs/providers/aws/r/lambda_function.html.markdown @@ -44,7 +44,10 @@ resource "aws_lambda_function" "test_lambda" { ## Argument Reference -* `filename` - (Required) A [zip file][2] containing your lambda function source code. +* `filename` - (Optional) A [zip file][2] containing your lambda function source code. If defined, the `s3_*` options cannot be used. +* `s3_bucket` - (Optional) The S3 bucket location containing your lambda function source code. Conflicts with `filename`. +* `s3_key` - (Optional) The S3 key containing your lambda function source code. Conflicts with `filename`. +* `s3_object_version` - (Optional) The object version of your lambda function source code. Conflicts with `filename`. * `function_name` - (Required) A unique name for your Lambda Function. * `handler` - (Required) The function [entrypoint][3] in your code. * `role` - (Required) IAM role attached to the Lambda Function.
This governs both who / what can invoke your Lambda Function, as well as what resources our Lambda Function has access to. See [Lambda Permission Model][4] for more details. diff --git a/website/source/docs/providers/aws/r/launch_configuration.html.markdown b/website/source/docs/providers/aws/r/launch_configuration.html.markdown index 8492640184..413f1b4a1e 100644 --- a/website/source/docs/providers/aws/r/launch_configuration.html.markdown +++ b/website/source/docs/providers/aws/r/launch_configuration.html.markdown @@ -26,11 +26,13 @@ Launch Configurations cannot be updated after creation with the Amazon Web Service API. In order to update a Launch Configuration, Terraform will destroy the existing resource and create a replacement. In order to effectively use a Launch Configuration resource with an [AutoScaling Group resource][1], -it's recommend to omit the Launch Configuration `name` attribute, and -specify `create_before_destroy` in a [lifecycle][2] block, as shown: +it's recommended to specify `create_before_destroy` in a [lifecycle][2] block. +Either omit the Launch Configuration `name` attribute, or specify a partial name +with `name_prefix`. Example: ``` resource "aws_launch_configuration" "as_conf" { + name_prefix = "terraform-lc-example-" image_id = "ami-1234" instance_type = "m1.small" @@ -87,7 +89,9 @@ resource "aws_autoscaling_group" "bar" { The following arguments are supported: * `name` - (Optional) The name of the launch configuration. If you leave - this blank, Terraform will auto-generate it. + this blank, Terraform will auto-generate a unique name. +* `name_prefix` - (Optional) Creates a unique name beginning with the specified + prefix. Conflicts with `name`. * `image_id` - (Required) The EC2 image ID to launch. * `instance_type` - (Required) The size of instance to launch. * `iam_instance_profile` - (Optional) The IAM instance profile to associate diff --git a/website/source/docs/providers/aws/r/rds_cluster.html.markdown b/website/source/docs/providers/aws/r/rds_cluster.html.markdown index fb1f0dac85..c60e6ef294 100644 --- a/website/source/docs/providers/aws/r/rds_cluster.html.markdown +++ b/website/source/docs/providers/aws/r/rds_cluster.html.markdown @@ -63,6 +63,7 @@ Default: A 30-minute window selected at random from an 8-hour block of time per * `apply_immediately` - (Optional) Specifies whether any cluster modifications are applied immediately, or during the next maintenance window. Default is `false`. See [Amazon RDS Documentation for more information.](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.DBInstance.Modifying.html) +* `db_subnet_group_name` - (Optional) A DB subnet group to associate with this DB instance. ## Attributes Reference diff --git a/website/source/docs/providers/aws/r/rds_cluster_instance.html.markdown b/website/source/docs/providers/aws/r/rds_cluster_instance.html.markdown index 782339a343..c124713b38 100644 --- a/website/source/docs/providers/aws/r/rds_cluster_instance.html.markdown +++ b/website/source/docs/providers/aws/r/rds_cluster_instance.html.markdown @@ -27,7 +27,7 @@ For more information on Amazon Aurora, see [Aurora on Amazon RDS][2] in the Amaz resource "aws_rds_cluster_instance" "cluster_instances" { count = 2 identifier = "aurora-cluster-demo" - cluster_identifer = "${aws_rds_cluster.default.id}" + cluster_identifier = "${aws_rds_cluster.default.id}" instance_class = "db.r3.large" } @@ -64,6 +64,10 @@ and memory, see [Scaling Aurora DB Instances][4]. Aurora currently Default `false`. 
See the documentation on [Creating DB Instances][6] for more details on controlling this property. +* `db_subnet_group_name` - (Optional) A DB subnet group to associate with this DB instance. + +~> **NOTE:** `db_subnet_group_name` is a required field when you are trying to create a private instance (`publicly_accessible` = false) + ## Attributes Reference The following attributes are exported: diff --git a/website/source/docs/providers/aws/r/sqs_queue.html.markdown b/website/source/docs/providers/aws/r/sqs_queue.html.markdown index 78e53c224b..62666b188c 100644 --- a/website/source/docs/providers/aws/r/sqs_queue.html.markdown +++ b/website/source/docs/providers/aws/r/sqs_queue.html.markdown @@ -17,6 +17,7 @@ resource "aws_sqs_queue" "terraform_queue" { max_message_size = 2048 message_retention_seconds = 86400 receive_wait_time_seconds = 10 + redrive_policy = "{\"deadLetterTargetArn\":\"${aws_sqs_queue.terraform_queue_deadletter.arn}\",\"maxReceiveCount\":4}" } ``` diff --git a/website/source/docs/providers/azure/index.html.markdown b/website/source/docs/providers/azure/index.html.markdown index 5d7afdd20b..0c8c09f28c 100644 --- a/website/source/docs/providers/azure/index.html.markdown +++ b/website/source/docs/providers/azure/index.html.markdown @@ -33,11 +33,11 @@ resource "azure_instance" "web" { The following arguments are supported: -* `settings_file` - (Optional) Contents of a valid `publishsettings` file, used to - authenticate with the Azure API. You can download the settings file here: - https://manage.windowsazure.com/publishsettings. You must either provide - (or source from the `AZURE_SETTINGS_FILE` environment variable) a settings - file or both a `subscription_id` and `certificate`. +* `publish_settings` - (Optional) Contents of a valid `publishsettings` file, + used to authenticate with the Azure API. You can download the settings file + here: https://manage.windowsazure.com/publishsettings. You must either + provide publish settings or both a `subscription_id` and `certificate`. It + can also be sourced from the `AZURE_PUBLISH_SETTINGS` environment variable. * `subscription_id` - (Optional) The subscription ID to use. If a `settings_file` is not provided `subscription_id` is required. It can also @@ -47,6 +47,16 @@ The following arguments are supported: Azure API. If a `settings_file` is not provided `certificate` is required. It can also be sourced from the `AZURE_CERTIFICATE` environment variable. +These arguments are supported for backwards compatibility, and may be removed +in a future version: + +* `settings_file` - __Deprecated: please use `publish_settings` instead.__ + Path to or contents of a valid `publishsettings` file, used to + authenticate with the Azure API. You can download the settings file here: + https://manage.windowsazure.com/publishsettings. You must either provide + (or source from the `AZURE_SETTINGS_FILE` environment variable) a settings + file or both a `subscription_id` and `certificate`. + ## Testing: The following environment variables must be set for the running of the diff --git a/website/source/docs/providers/do/index.html.markdown b/website/source/docs/providers/do/index.html.markdown index 9e18277a3c..468539c57d 100644 --- a/website/source/docs/providers/do/index.html.markdown +++ b/website/source/docs/providers/do/index.html.markdown @@ -17,6 +17,9 @@ Use the navigation to the left to read about the available resources. ## Example Usage ``` +# Set the variable value in *.tfvars file or using -var="do_token=..." 
CLI option +variable "do_token" {} + # Configure the DigitalOcean Provider provider "digitalocean" { token = "${var.do_token}" diff --git a/website/source/docs/providers/dyn/index.html.markdown b/website/source/docs/providers/dyn/index.html.markdown new file mode 100644 index 0000000000..700bb00870 --- /dev/null +++ b/website/source/docs/providers/dyn/index.html.markdown @@ -0,0 +1,39 @@ +--- +layout: "dyn" +page_title: "Provider: Dyn" +sidebar_current: "docs-dyn-index" +description: |- + The Dyn provider is used to interact with the resources supported by Dyn. The provider needs to be configured with the proper credentials before it can be used. +--- + +# Dyn Provider + +The Dyn provider is used to interact with the +resources supported by Dyn. The provider needs to be configured +with the proper credentials before it can be used. + +Use the navigation to the left to read about the available resources. + +## Example Usage + +``` +# Configure the Dyn provider +provider "dyn" { + customer_name = "${var.dyn_customer_name}" + username = "${var.dyn_username}" + password = "${var.dyn_password}" +} + +# Create a record +resource "dyn_record" "www" { + ... +} +``` + +## Argument Reference + +The following arguments are supported: + +* `customer_name` - (Required) The Dyn customer name. It must be provided, but it can also be sourced from the `DYN_CUSTOMER_NAME` environment variable. +* `username` - (Required) The Dyn username. It must be provided, but it can also be sourced from the `DYN_USERNAME` environment variable. +* `password` - (Required) The Dyn password. It must be provided, but it can also be sourced from the `DYN_PASSWORD` environment variable. diff --git a/website/source/docs/providers/dyn/r/record.html.markdown b/website/source/docs/providers/dyn/r/record.html.markdown new file mode 100644 index 0000000000..6094c27dee --- /dev/null +++ b/website/source/docs/providers/dyn/r/record.html.markdown @@ -0,0 +1,41 @@ +--- +layout: "dyn" +page_title: "Dyn: dyn_record" +sidebar_current: "docs-dyn-resource-record" +description: |- + Provides a Dyn DNS record resource. +--- + +# dyn\_record + +Provides a Dyn DNS record resource. + +## Example Usage + +``` +# Add a record to the domain +resource "dyn_record" "foobar" { + zone = "${var.dyn_zone}" + name = "terraform" + value = "192.168.0.11" + type = "A" + ttl = 3600 +} +``` + +## Argument Reference + +The following arguments are supported: + +* `name` - (Required) The name of the record. +* `type` - (Required) The type of the record. +* `value` - (Required) The value of the record. +* `zone` - (Required) The DNS zone to add the record to. +* `ttl` - (Optional) The TTL of the record. Default uses the zone default. + +## Attributes Reference + +The following attributes are exported: + +* `id` - The record ID. +* `fqdn` - The FQDN of the record, built from the `name` and the `zone`. diff --git a/website/source/docs/providers/google/index.html.markdown b/website/source/docs/providers/google/index.html.markdown index 3bbef84c60..14a208d6a2 100644 --- a/website/source/docs/providers/google/index.html.markdown +++ b/website/source/docs/providers/google/index.html.markdown @@ -19,14 +19,14 @@ Use the navigation to the left to read about the available resources. 
``` # Configure the Google Cloud provider provider "google" { - account_file = "${file("account.json")}" - project = "my-gce-project" - region = "us-central1" + credentials = "${file("account.json")}" + project = "my-gce-project" + region = "us-central1" } # Create a new instance resource "google_compute_instance" "default" { - ... + ... } ``` @@ -34,12 +34,12 @@ resource "google_compute_instance" "default" { The following keys can be used to configure the provider. -* `account_file` - (Required) Contents of the JSON file used to describe your +* `credentials` - (Optional) Contents of the JSON file used to describe your account credentials, downloaded from Google Cloud Console. More details on - retrieving this file are below. The `account file` can be "" if you are running - terraform from a GCE instance with a properly-configured [Compute Engine + retrieving this file are below. Credentials may be blank if you are running + Terraform from a GCE instance with a properly-configured [Compute Engine Service Account](https://cloud.google.com/compute/docs/authentication). This - can also be specified with the `GOOGLE_ACCOUNT_FILE` shell environment + can also be specified with the `GOOGLE_CREDENTIALS` shell environment variable. * `project` - (Required) The ID of the project to apply any resources to. This @@ -48,6 +48,19 @@ The following keys can be used to configure the provider. * `region` - (Required) The region to operate under. This can also be specified with the `GOOGLE_REGION` shell environment variable. +The following keys are supported for backwards compatibility, and may be +removed in a future version: + +* `account_file` - __Deprecated: please use `credentials` instead.__ + Path to or contents of the JSON file used to describe your + account credentials, downloaded from Google Cloud Console. More details on + retrieving this file are below. The `account file` can be "" if you are running + terraform from a GCE instance with a properly-configured [Compute Engine + Service Account](https://cloud.google.com/compute/docs/authentication). This + can also be specified with the `GOOGLE_ACCOUNT_FILE` shell environment + variable. + + ## Authentication JSON File Authenticating with Google Cloud services requires a JSON diff --git a/website/source/docs/providers/google/r/compute_https_health_check.html.markdown b/website/source/docs/providers/google/r/compute_https_health_check.html.markdown new file mode 100644 index 0000000000..f608cac363 --- /dev/null +++ b/website/source/docs/providers/google/r/compute_https_health_check.html.markdown @@ -0,0 +1,57 @@ +--- +layout: "google" +page_title: "Google: google_compute_https_health_check" +sidebar_current: "docs-google-compute-https-health-check" +description: |- + Manages an HTTPS Health Check within GCE. +--- + +# google\_compute\_https\_health\_check + +Manages an HTTPS health check within GCE. This is used to monitor instances +behind load balancers. Timeouts or HTTPS errors cause the instance to be +removed from the pool. For more information, see [the official +documentation](https://cloud.google.com/compute/docs/load-balancing/health-checks) +and +[API](https://cloud.google.com/compute/docs/reference/latest/httpsHealthChecks). 
+ +## Example Usage + +``` +resource "google_compute_https_health_check" "default" { + name = "test" + request_path = "/health_check" + check_interval_sec = 1 + timeout_sec = 1 +} +``` + +## Argument Reference + +The following arguments are supported: + +* `check_interval_sec` - (Optional) How often to poll each instance (default 5). + +* `description` - (Optional) Textual description field. + +* `healthy_threshold` - (Optional) Consecutive successes required (default 2). + +* `host` - (Optional) HTTPS host header field (default instance's public ip). + +* `name` - (Required) A unique name for the resource, required by GCE. + Changing this forces a new resource to be created. + +* `port` - (Optional) TCP port to connect to (default 443). + +* `request_path` - (Optional) URL path to query (default /). + +* `timeout_sec` - (Optional) How long before declaring failure (default 5). + +* `unhealthy_threshold` - (Optional) Consecutive failures required (default 2). + + +## Attributes Reference + +The following attributes are exported: + +* `self_link` - The URL of the created resource. diff --git a/website/source/docs/providers/google/r/compute_ssl_certificate.html.markdown b/website/source/docs/providers/google/r/compute_ssl_certificate.html.markdown index 2b7826102d..81eb3a2673 100644 --- a/website/source/docs/providers/google/r/compute_ssl_certificate.html.markdown +++ b/website/source/docs/providers/google/r/compute_ssl_certificate.html.markdown @@ -35,7 +35,7 @@ The following arguments are supported: Changing this forces a new resource to be created. * `private_key` - (Required) Write only private key in PEM format. Changing this forces a new resource to be created. -* `description` - (Required) A local certificate file in PEM format. The chain +* `certificate` - (Required) A local certificate file in PEM format. The chain may be at most 5 certs long, and must include at least one intermediate cert. Changing this forces a new resource to be created. diff --git a/website/source/docs/providers/openstack/r/networking_floatingip_v2.html.markdown b/website/source/docs/providers/openstack/r/networking_floatingip_v2.html.markdown index 9389eafeb2..fb1c57cbc4 100644 --- a/website/source/docs/providers/openstack/r/networking_floatingip_v2.html.markdown +++ b/website/source/docs/providers/openstack/r/networking_floatingip_v2.html.markdown @@ -35,6 +35,9 @@ The following arguments are supported: * `pool` - (Required) The name of the pool from which to obtain the floating IP. Changing this creates a new floating IP. +* `port_id` - ID of an existing port with at least one IP address to associate with +this floating IP. + ## Attributes Reference The following attributes are exported: @@ -42,3 +45,4 @@ The following attributes are exported: * `region` - See Argument Reference above. * `pool` - See Argument Reference above. * `address` - The actual floating IP address itself. +* `port_id` - ID of associated port. 
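A minimal sketch of associating a floating IP with a pre-existing port via the new `port_id` argument (the pool name and the referenced `port_1` resource are assumptions):

```
resource "openstack_networking_floatingip_v2" "fip_1" {
    region  = ""       # falls back to the OS_REGION_NAME environment variable when empty
    pool    = "public" # assumed pool name
    port_id = "${openstack_networking_port_v2.port_1.id}"
}
```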
diff --git a/website/source/docs/providers/openstack/r/networking_network_v2.html.markdown b/website/source/docs/providers/openstack/r/networking_network_v2.html.markdown index 9a9eab935b..ce4f46db8f 100644 --- a/website/source/docs/providers/openstack/r/networking_network_v2.html.markdown +++ b/website/source/docs/providers/openstack/r/networking_network_v2.html.markdown @@ -42,7 +42,10 @@ resource "openstack_networking_port_v2" "port_1" { admin_state_up = "true" security_groups = ["${openstack_compute_secgroup_v2.secgroup_1.id}"] - depends_on = ["openstack_networking_subnet_v2.subnet_1"] + fixed_ips { + "subnet_id" = "008ba151-0b8c-4a67-98b5-0d2b87666062" + "ip_address" = "172.24.4.2" + } } resource "openstack_compute_instance_v2" "instance_1" { diff --git a/website/source/docs/providers/openstack/r/networking_port_v2.html.markdown b/website/source/docs/providers/openstack/r/networking_port_v2.html.markdown index d10130a56f..3e5998c945 100644 --- a/website/source/docs/providers/openstack/r/networking_port_v2.html.markdown +++ b/website/source/docs/providers/openstack/r/networking_port_v2.html.markdown @@ -53,13 +53,25 @@ The following arguments are supported: * `device_owner` - (Optional) The device owner of the Port. Changing this creates a new port. -* `security_groups` - (Optional) A list of security groups to apply to the port. - The security groups must be specified by ID and not name (as opposed to how - they are configured with the Compute Instance). +* `security_group_ids` - (Optional) A list of security group IDs to apply to the + port. The security groups must be specified by ID and not name (as opposed + to how they are configured with the Compute Instance). * `device_id` - (Optional) The ID of the device attached to the port. Changing this creates a new port. +* `fixed_ip` - (Optional) An array of desired IPs for this port. The structure is + described below. + + +The `fixed_ip` block supports: + +* `subnet_id` - (Required) Subnet in which to allocate IP address for +this port. + +* `ip_address` - (Required) IP address desired in the subnet for this +port. + ## Attributes Reference The following attributes are exported: diff --git a/website/source/docs/providers/openstack/r/networking_router_interface_v2.html.markdown b/website/source/docs/providers/openstack/r/networking_router_interface_v2.html.markdown index b0106dadda..13046d64da 100644 --- a/website/source/docs/providers/openstack/r/networking_router_interface_v2.html.markdown +++ b/website/source/docs/providers/openstack/r/networking_router_interface_v2.html.markdown @@ -49,7 +49,10 @@ The following arguments are supported: * `router_id` - (Required) ID of the router this interface belongs to. Changing this creates a new router interface. -* `subnet_id` - (Required) ID of the subnet this interface connects to. Changing +* `subnet_id` - ID of the subnet this interface connects to. Changing + this creates a new router interface. + +* `port_id` - ID of the port this interface connects to. Changing this creates a new router interface. ## Attributes Reference @@ -59,3 +62,4 @@ The following attributes are exported: * `region` - See Argument Reference above. * `router_id` - See Argument Reference above. * `subnet_id` - See Argument Reference above. +* `port_id` - See Argument Reference above. 
diff --git a/website/source/docs/providers/packet/r/project.html.markdown b/website/source/docs/providers/packet/r/project.html.markdown index b008f864fe..c34b49c209 100644 --- a/website/source/docs/providers/packet/r/project.html.markdown +++ b/website/source/docs/providers/packet/r/project.html.markdown @@ -25,7 +25,7 @@ resource "packet_project" "tf_project_1" { The following arguments are supported: -* `name` - (Required) The name of the SSH key for identification +* `name` - (Required) The name of the Project in Packet.net * `payment_method` - (Required) The id of the payment method on file to use for services created on this project. @@ -33,8 +33,8 @@ on this project. The following attributes are exported: -* `id` - The unique ID of the key +* `id` - The unique ID of the project * `payment_method` - The id of the payment method on file to use for services created on this project. -* `created` - The timestamp for when the SSH key was created -* `updated` - The timestamp for the last time the SSH key was udpated +* `created` - The timestamp for when the Project was created +* `updated` - The timestamp for the last time the Project was updated diff --git a/website/source/docs/providers/template/r/file.html.md b/website/source/docs/providers/template/r/file.html.md index b46e55a80f..7c9e2c59ec 100644 --- a/website/source/docs/providers/template/r/file.html.md +++ b/website/source/docs/providers/template/r/file.html.md @@ -14,7 +14,7 @@ Renders a template from a file. ``` resource "template_file" "init" { - filename = "${path.module}/init.tpl" + template = "${file("${path.module}/init.tpl")}" vars { consul_address = "${aws_instance.consul.private_ip}" @@ -27,17 +27,24 @@ resource "template_file" "init" { The following arguments are supported: -* `filename` - (Required) The filename for the template. Use [path - variables](/docs/configuration/interpolation.html#path-variables) to make - this path relative to different path roots. +* `template` - (Required) The contents of the template. These can be loaded + from a file on disk using the [`file()` interpolation + function](/docs/configuration/interpolation.html#file_path_). * `vars` - (Optional) Variables for interpolation within the template. +The following arguments are maintained for backwards compatibility and may be +removed in a future version: + +* `filename` - __Deprecated, please use `template` instead__. The filename for + the template. Use [path variables](/docs/configuration/interpolation.html#path-variables) to make + this path relative to different path roots. + ## Attributes Reference The following attributes are exported: -* `filename` - See Argument Reference above. +* `template` - See Argument Reference above. * `vars` - See Argument Reference above. * `rendered` - The final rendered template.
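A short usage sketch for the new `template` argument, feeding the rendered result into an output (the template path and variable value are illustrative):

```
resource "template_file" "init" {
    template = "${file("${path.module}/init.tpl")}" # contents loaded from disk via file()

    vars {
        consul_address = "10.0.0.5" # placeholder value
    }
}

output "rendered_init" {
    value = "${template_file.init.rendered}"
}
```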
diff --git a/website/source/docs/provisioners/chef.html.markdown b/website/source/docs/provisioners/chef.html.markdown index a1a7e7ba90..60f6d577a0 100644 --- a/website/source/docs/provisioners/chef.html.markdown +++ b/website/source/docs/provisioners/chef.html.markdown @@ -36,10 +36,10 @@ resource "aws_instance" "web" { environment = "_default" run_list = ["cookbook::recipe"] node_name = "webserver1" - secret_key_path = "../encrypted_data_bag_secret" + secret_key = "${file("../encrypted_data_bag_secret")}" server_url = "https://chef.company.com/organizations/org1" validation_client_name = "chef-validator" - validation_key_path = "../chef-validator.pem" + validation_key = "${file("../chef-validator.pem")}" version = "12.4.1" } } @@ -83,9 +83,10 @@ The following arguments are supported: Chef Client run. The run-list will also be saved to the Chef Server after a successful initial run. -* `secret_key_path (string)` - (Optional) The path to the secret key that is used +* `secret_key (string)` - (Optional) The contents of the secret key that is used by the client to decrypt data bags on the Chef Server. The key will be uploaded to the remote - machine. + machine. These can be loaded from a file on disk using the [`file()` interpolation + function](/docs/configuration/interpolation.html#file_path_). * `server_url (string)` - (Required) The URL to the Chef server. This includes the path to the organization. See the example. @@ -100,9 +101,16 @@ The following arguments are supported: * `validation_client_name (string)` - (Required) The name of the validation client to use for the initial communication with the Chef Server. -* `validation_key_path (string)` - (Required) The path to the validation key that is needed +* `validation_key (string)` - (Required) The contents of the validation key that is needed by the node to register itself with the Chef Server. The key will be uploaded to the remote - machine. + machine. These can be loaded from a file on disk using the [`file()` + interpolation function](/docs/configuration/interpolation.html#file_path_). * `version (string)` - (Optional) The Chef Client version to install on the remote machine. If not set the latest available version will be installed. + +These are supported for backwards compatibility and may be removed in a +future version: + +* `validation_key_path (string)` - __Deprecated: please use `validation_key` instead__. +* `secret_key_path (string)` - __Deprecated: please use `secret_key` instead__. diff --git a/website/source/docs/provisioners/connection.html.markdown b/website/source/docs/provisioners/connection.html.markdown index 6efd73e835..83fa8ebb4a 100644 --- a/website/source/docs/provisioners/connection.html.markdown +++ b/website/source/docs/provisioners/connection.html.markdown @@ -68,8 +68,10 @@ provisioner "file" { **Additional arguments only supported by the "ssh" connection type:** -* `key_file` - The SSH key to use for the connection. This takes preference over the - password if provided. +* `private_key` - The contents of an SSH key to use for the connection. These can + be loaded from a file on disk using the [`file()` interpolation + function](/docs/configuration/interpolation.html#file_path_). This takes + preference over the password if provided. * `agent` - Set to false to disable using ssh-agent to authenticate. @@ -99,5 +101,22 @@ The `ssh` connection additionally supports the following fields to facilitate a * `bastion_password` - The password we should use for the bastion host. 
Defaults to the value of `password`. -* `bastion_key_file` - The SSH key to use for the bastion host. Defaults to the - value of `key_file`. +* `bastion_private_key` - The contents of an SSH key file to use for the bastion + host. These can be loaded from a file on disk using the [`file()` + interpolation function](/docs/configuration/interpolation.html#file_path_). + Defaults to the value of `private_key`. + +## Deprecations + +These are supported for backwards compatibility and may be removed in a +future version: + +* `key_file` - A path to or the contents of an SSH key to use for the + connection. These can be loaded from a file on disk using the [`file()` + interpolation function](/docs/configuration/interpolation.html#file_path_). + This takes preference over the password if provided. + +* `bastion_key_file` - The contents of an SSH key file to use for the bastion + host. These can be loaded from a file on disk using the [`file()` + interpolation function](/docs/configuration/interpolation.html#file_path_). + Defaults to the value of `key_file`. diff --git a/website/source/intro/getting-started/variables.html.md b/website/source/intro/getting-started/variables.html.md index 41e828a724..24154ca25d 100644 --- a/website/source/intro/getting-started/variables.html.md +++ b/website/source/intro/getting-started/variables.html.md @@ -186,7 +186,7 @@ And access them via `lookup()`: ``` output "ami" { - value = "${lookup(var.amis, var.region)} + value = "${lookup(var.amis, var.region)}" } ``` diff --git a/website/source/layouts/_footer.erb b/website/source/layouts/_footer.erb index d42c55cac6..ace6475c58 100644 --- a/website/source/layouts/_footer.erb +++ b/website/source/layouts/_footer.erb @@ -1,28 +1,42 @@