diff --git a/.travis.yml b/.travis.yml
index dcb8c11d43..88c853d681 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -1,7 +1,20 @@
+dist: trusty
 sudo: false
 language: go
 go:
 - 1.8
+
+env:
+  - CONSUL_VERSION=0.7.5 TF_CONSUL_TEST=1 GOMAXPROCS=4
+
+# Fetch consul for the backend and provider tests
+before_install:
+  - curl -sLo consul.zip https://releases.hashicorp.com/consul/${CONSUL_VERSION}/consul_${CONSUL_VERSION}_linux_amd64.zip
+  - unzip consul.zip
+  - mkdir ~/bin
+  - mv consul ~/bin
+  - export PATH="~/bin:$PATH"
+
 install:
 # This script is used by the Travis build to install a cookie for
 # go.googlesource.com so rate limits are higher when using `go get` to fetch
diff --git a/CHANGELOG.md b/CHANGELOG.md
index b520eb32f1..16f330066d 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,18 +1,244 @@
-**TEMPORARY NOTE:** The "master" branch CHANGELOG also includes any changes
-in the branch "0-8-stable". The "master" branch is currently a development
-branch for the next major version of Terraform.
+## 0.9.2 (unreleased)
 
-## 0.9.0-beta3 (unreleased)
+FEATURES:
 
-BACKWARDS INCOMPATIBILITIES / NOTES:
-
- * provider/aws: `aws_codebuild_project` renamed `timeout` to `build_timeout` [GH-12503]
- * provider/azurem: `azurerm_virtual_machine` and `azurerm_virtual_machine_scale_set` now store has of custom_data not all custom_data [GH-12214]
+ * **New Resource:** `aws_api_gateway_usage_plan` [GH-12542]
+ * **New Resource:** `aws_api_gateway_usage_plan_key` [GH-12851]
+ * **New Resource:** `github_repository_webhook` [GH-12924]
+ * **New Interpolation:** `substr` [GH-12870]
 
 IMPROVEMENTS:
 
- * provider/azurerm: store only hash of `azurerm_virtual_machine` and `azurerm_virtual_machine_scale_set` custom_data - reduces size of state [GH-12214]
+ * core: fix `ignore_changes` causing fields to be removed during apply [GH-12897]
+ * core: add `-force-copy` option to `terraform init` to suppress prompts for copying state [GH-12939]
+ * helper/acctest: Add NewSSHKeyPair function [GH-12894]
+ * provider/alicloud: simplify validators [GH-12982]
+ * provider/aws: Added support for EMR AutoScalingRole [GH-12823]
+ * provider/aws: Add `name_prefix` to `aws_autoscaling_group` and `aws_elb` resources [GH-12629]
+ * provider/aws: Updated default configuration manager version in `aws_opsworks_stack` [GH-12979]
+ * provider/aws: Added aws_api_gateway_api_key value attribute [GH-9462]
+ * provider/aws: Allow aws_alb subnets to change [GH-12850]
+ * provider/aws: Support Attachment of ALB Target Groups to Autoscaling Groups [GH-12855]
+ * provider/azurerm: Add support for setting the primary network interface [GH-11290]
+ * provider/cloudstack: Add `zone_id` to `cloudstack_ipaddress` resource [GH-11306]
+ * provider/consul: Add support for basic auth to the provider [GH-12679]
+ * provider/dnsimple: Allow dnsimple_record.priority attribute to be set [GH-12843]
+ * provider/google: Add support for service_account, metadata, and image_type fields in GKE cluster config [GH-12743]
+ * provider/google: Add local ssd count support for container clusters [GH-12281]
+ * provider/ignition: ignition_filesystem, explicit option to create the filesystem [GH-12980]
+ * provider/ns1: Ensure provider checks for credentials [GH-12920]
+ * provider/openstack: Adding Timeouts to Blockstorage Resources [GH-12862]
+ * provider/openstack: Adding Timeouts to FWaaS v1 Resources [GH-12863]
+ * provider/openstack: Adding Timeouts to Image v2 and LBaaS v2 Resources [GH-12865]
+ * provider/openstack: Adding Timeouts to Network Resources [GH-12866]
+ * provider/openstack: Adding Timeouts to LBaaS v1 Resources [GH-12867]
+ * provider/pagerduty: Validate credentials [GH-12854]
+
+BUG FIXES:
+ * core: Remove legacy remote state configuration on state migration. This fixes errors when saving plans. [GH-12888]
+ * provider/arukas: Default timeout for launching container increased to 15mins (was 10mins) [GH-12849]
+ * provider/aws: Fix flattened cloudfront lambda function associations to be a set not a slice [GH-11984]
+ * provider/aws: Consider ACTIVE as pending state during ECS svc deletion [GH-12986]
+ * provider/aws: Deprecate the usage of Api Gateway Key Stages in favor of Usage Plans [GH-12883]
+ * provider/aws: prevent panic in resourceAwsSsmDocumentRead [GH-12891]
+ * provider/aws: Prevent panic when setting AWS CodeBuild Source to state [GH-12915]
+ * provider/aws: Only call replace Iam Instance Profile on existing machines [GH-12922]
+ * provider/aws: Increase AWS AMI Destroy timeout [GH-12943]
+ * provider/aws: Set aws_vpc ipv6 for associated only [GH-12899]
+ * provider/aws: Fix AWS ECS placement strategy spread fields [GH-12998]
+ * provider/aws: Specify that aws_network_acl_rule requires a cidr block [GH-13013]
+ * provider/google: turn compute_instance_group.instances into a set [GH-12790]
+ * provider/mysql: recreate user/grant if user/grant got deleted manually [GH-12791]
+
+## 0.9.1 (March 17, 2017)
+
+BACKWARDS INCOMPATIBILITIES / NOTES:
+
+ * provider/pagerduty: the deprecated `name_regex` field has been removed from vendor data source ([#12396](https://github.com/hashicorp/terraform/issues/12396))
+
+FEATURES:
+
+ * **New Provider:** `kubernetes` ([#12372](https://github.com/hashicorp/terraform/issues/12372))
+ * **New Resource:** `kubernetes_namespace` ([#12372](https://github.com/hashicorp/terraform/issues/12372))
+ * **New Resource:** `kubernetes_config_map` ([#12753](https://github.com/hashicorp/terraform/issues/12753))
+ * **New Data Source:** `dns_a_record_set` ([#12744](https://github.com/hashicorp/terraform/issues/12744))
+ * **New Data Source:** `dns_cname_record_set` ([#12744](https://github.com/hashicorp/terraform/issues/12744))
+ * **New Data Source:** `dns_txt_record_set` ([#12744](https://github.com/hashicorp/terraform/issues/12744))
+
+IMPROVEMENTS:
+
+ * command/init: `-backend-config` accepts `key=value` pairs
+ * provider/aws: Improved error when failing to get S3 tags ([#12759](https://github.com/hashicorp/terraform/issues/12759))
+ * provider/aws: Validate CIDR Blocks in SG and SG rule resources ([#12765](https://github.com/hashicorp/terraform/issues/12765))
+ * provider/aws: Add KMS key tag support ([#12243](https://github.com/hashicorp/terraform/issues/12243))
+ * provider/aws: Allow `name_prefix` to be used with various IAM resources ([#12658](https://github.com/hashicorp/terraform/issues/12658))
+ * provider/openstack: Add timeout support for Compute resources ([#12794](https://github.com/hashicorp/terraform/issues/12794))
+ * provider/scaleway: expose public IPv6 information on scaleway_server ([#12748](https://github.com/hashicorp/terraform/issues/12748))
+
+BUG FIXES:
+
+ * core: Fix panic when an undefined module is referenced ([#12793](https://github.com/hashicorp/terraform/issues/12793))
+ * core: Fix regression from 0.8.x when using a data source in a module ([#12837](https://github.com/hashicorp/terraform/issues/12837))
+ * command/apply: Applies from plans with backends set will reuse the backend rather than local ([#12785](https://github.com/hashicorp/terraform/issues/12785))
+ * command/init: Changing only `-backend-config` detects changes and reconfigures ([#12776](https://github.com/hashicorp/terraform/issues/12776))
+ * command/init: Fix legacy backend init error that could occur when upgrading ([#12818](https://github.com/hashicorp/terraform/issues/12818))
+ * command/push: Detect local state and error properly ([#12773](https://github.com/hashicorp/terraform/issues/12773))
+ * command/refresh: Allow empty and non-existent state ([#12777](https://github.com/hashicorp/terraform/issues/12777))
+ * provider/aws: Get the aws_lambda_function attributes when there are greater than 50 versions of a function ([#11745](https://github.com/hashicorp/terraform/issues/11745))
+ * provider/aws: Correctly check for nil cidr_block in aws_network_acl ([#12735](https://github.com/hashicorp/terraform/issues/12735))
+ * provider/aws: Stop setting weight property on route53_record read ([#12756](https://github.com/hashicorp/terraform/issues/12756))
+ * provider/google: Fix the Google provider asking for account_file input on every run ([#12729](https://github.com/hashicorp/terraform/issues/12729))
+ * provider/profitbricks: Prevent panic on profitbricks volume ([#12819](https://github.com/hashicorp/terraform/issues/12819))
+
+
+## 0.9.0 (March 15, 2017)
+
+**This is the complete 0.8.8 to 0.9 CHANGELOG. Below this section we also have a 0.9.0-beta2 to 0.9.0 final CHANGELOG.**
+
+BACKWARDS INCOMPATIBILITIES / NOTES:
+
+ * provider/aws: `aws_codebuild_project` renamed `timeout` to `build_timeout` ([#12503](https://github.com/hashicorp/terraform/issues/12503))
+ * provider/azurerm: `azurerm_virtual_machine` and `azurerm_virtual_machine_scale_set` now store a hash of custom_data rather than the full custom_data ([#12214](https://github.com/hashicorp/terraform/issues/12214))
+ * provider/azurerm: scale_sets `os_profile_master_password` now marked as sensitive
+ * provider/azurerm: sql_server `administrator_login_password` now marked as sensitive
+ * provider/dnsimple: Provider has been upgraded to APIv2; therefore, you will need to use the APIv2 auth token
+ * provider/google: storage buckets have been updated with the new storage classes. The old classes will continue working as before, but should be migrated as soon as possible, as there's no guarantee they'll continue working forever. ([#12044](https://github.com/hashicorp/terraform/issues/12044))
+ * provider/google: compute_instance, compute_instance_template, and compute_disk all have a subtly changed logic when specifying an image family as the image; in 0.8.x they would pin to the latest image in the family when the resource is created; in 0.9.x they pass the family to the API and use its behaviour. New input formats are also supported. ([#12223](https://github.com/hashicorp/terraform/issues/12223))
+ * provider/google: removed the unused and deprecated region field from google_compute_backend_service ([#12663](https://github.com/hashicorp/terraform/issues/12663))
+ * provider/google: removed the deprecated account_file field for the Google Cloud provider ([#12668](https://github.com/hashicorp/terraform/issues/12668))
+ * provider/google: removed the deprecated fields from google_project ([#12659](https://github.com/hashicorp/terraform/issues/12659))
+
+FEATURES:
+
+ * **Remote Backends:** This is a successor to "remote state" and includes
+   file-based configuration, an improved setup process (just run `terraform init`),
+   no more local caching of remote state, and more. ([#11286](https://github.com/hashicorp/terraform/issues/11286))
+ * **Destroy Provisioners:** Provisioners can now be configured to run
+   on resource destruction. ([#11329](https://github.com/hashicorp/terraform/issues/11329))
+ * **State Locking:** State will be automatically locked when supported by the backend.
+   Backends supporting locking in this release are Local, S3 (via DynamoDB), and Consul. ([#11187](https://github.com/hashicorp/terraform/issues/11187))
+   (A short sketch of the locking pattern follows the IMPROVEMENTS list below.)
+ * **State Environments:** You can now create named "environments" for states. This allows you to manage distinct infrastructure resources from the same configuration.
+ * **New Provider:** `Circonus` ([#12578](https://github.com/hashicorp/terraform/issues/12578))
+ * **New Data Source:** `openstack_networking_network_v2` ([#12304](https://github.com/hashicorp/terraform/issues/12304))
+ * **New Resource:** `aws_iam_account_alias` ([#12648](https://github.com/hashicorp/terraform/issues/12648))
+ * **New Resource:** `datadog_downtime` ([#10994](https://github.com/hashicorp/terraform/issues/10994))
+ * **New Resource:** `ns1_notifylist` ([#12373](https://github.com/hashicorp/terraform/issues/12373))
+ * **New Resource:** `google_container_node_pool` ([#11802](https://github.com/hashicorp/terraform/issues/11802))
+ * **New Resource:** `rancher_certificate` ([#12717](https://github.com/hashicorp/terraform/issues/12717))
+ * **New Resource:** `rancher_host` ([#11545](https://github.com/hashicorp/terraform/issues/11545))
+ * helper/schema: Added Timeouts to allow Provider/Resource developers to expose configurable timeouts for actions ([#12311](https://github.com/hashicorp/terraform/issues/12311))
+
+IMPROVEMENTS:
+
+ * core: Data source values can now be used as part of a `count` calculation. ([#11482](https://github.com/hashicorp/terraform/issues/11482))
+ * core: "terraformrc" can contain env var references with $FOO ([#11929](https://github.com/hashicorp/terraform/issues/11929))
+ * core: report all errors encountered during config validation ([#12383](https://github.com/hashicorp/terraform/issues/12383))
+ * command: CLI args can be specified via env vars. Specify `TF_CLI_ARGS` or `TF_CLI_ARGS_name` (where name is the name of a command) to specify additional CLI args ([#11922](https://github.com/hashicorp/terraform/issues/11922))
+ * command/init: previous behavior is retained, but init now also configures
+   the new remote backends as well as downloads modules. It is the single
+   command to initialize a new or existing Terraform configuration.
+ * command: Display resource state ID in refresh/plan/destroy output ([#12261](https://github.com/hashicorp/terraform/issues/12261))
+ * provider/aws: AWS Lambda DeadLetterConfig support ([#12188](https://github.com/hashicorp/terraform/issues/12188))
+ * provider/aws: Return errors from Elastic Beanstalk ([#12425](https://github.com/hashicorp/terraform/issues/12425))
+ * provider/aws: Set aws_db_cluster to snapshot by default ([#11668](https://github.com/hashicorp/terraform/issues/11668))
+ * provider/aws: Enable final snapshots for aws_rds_cluster by default ([#11694](https://github.com/hashicorp/terraform/issues/11694))
+ * provider/aws: Enable snapshotting by default on aws_redshift_cluster ([#11695](https://github.com/hashicorp/terraform/issues/11695))
+ * provider/aws: Add support for ACM certificates to `api_gateway_domain_name` ([#12592](https://github.com/hashicorp/terraform/issues/12592))
+ * provider/aws: Add support for IPv6 to aws\_security\_group\_rule ([#12645](https://github.com/hashicorp/terraform/issues/12645))
+ * provider/aws: Add IPv6 Support to aws\_route\_table ([#12640](https://github.com/hashicorp/terraform/issues/12640))
+ * provider/aws: Add support for IPv6 to aws\_network\_acl\_rule ([#12644](https://github.com/hashicorp/terraform/issues/12644))
+ * provider/aws: Add support for IPv6 to aws\_default\_route\_table ([#12642](https://github.com/hashicorp/terraform/issues/12642))
+ * provider/aws: Add support for IPv6 to aws\_network\_acl ([#12641](https://github.com/hashicorp/terraform/issues/12641))
+ * provider/aws: Add support for IPv6 in aws\_route ([#12639](https://github.com/hashicorp/terraform/issues/12639))
+ * provider/aws: Add support for IPv6 to aws\_security\_group ([#12655](https://github.com/hashicorp/terraform/issues/12655))
+ * provider/aws: Add replace\_unhealthy\_instances to spot\_fleet\_request ([#12681](https://github.com/hashicorp/terraform/issues/12681))
+ * provider/aws: Remove restriction on running aws\_opsworks\_* on us-east-1 ([#12688](https://github.com/hashicorp/terraform/issues/12688))
+ * provider/aws: Improve error message on S3 Bucket Object deletion ([#12712](https://github.com/hashicorp/terraform/issues/12712))
+ * provider/aws: Add log message about if changes are being applied now or later ([#12691](https://github.com/hashicorp/terraform/issues/12691))
+ * provider/azurerm: Mark the azurerm_scale_set machine password as sensitive ([#11982](https://github.com/hashicorp/terraform/issues/11982))
+ * provider/azurerm: Mark the azurerm_sql_server admin password as sensitive ([#12004](https://github.com/hashicorp/terraform/issues/12004))
+ * provider/azurerm: Add support for managed availability sets. ([#12532](https://github.com/hashicorp/terraform/issues/12532))
+ * provider/azurerm: Add support for extensions on virtual machine scale sets ([#12124](https://github.com/hashicorp/terraform/issues/12124))
+ * provider/dnsimple: Upgrade DNSimple provider to API v2 ([#10760](https://github.com/hashicorp/terraform/issues/10760))
+ * provider/docker: added support for linux capabilities ([#12045](https://github.com/hashicorp/terraform/issues/12045))
+ * provider/fastly: Add Fastly SSL validation fields ([#12578](https://github.com/hashicorp/terraform/issues/12578))
+ * provider/ignition: Migrate all of the ignition resources to data sources ([#11851](https://github.com/hashicorp/terraform/issues/11851))
+ * provider/openstack: Set Availability Zone in Instances ([#12610](https://github.com/hashicorp/terraform/issues/12610))
+ * provider/openstack: Force Deletion of Instances ([#12689](https://github.com/hashicorp/terraform/issues/12689))
+ * provider/rancher: Better comparison of compose files ([#12561](https://github.com/hashicorp/terraform/issues/12561))
+ * provider/azurerm: store only hash of `azurerm_virtual_machine` and `azurerm_virtual_machine_scale_set` custom_data - reduces size of state ([#12214](https://github.com/hashicorp/terraform/issues/12214))
+ * provider/vault: read vault token from `~/.vault-token` as a fallback for the
+   `VAULT_TOKEN` environment variable. ([#11529](https://github.com/hashicorp/terraform/issues/11529))
+ * provisioners: All provisioners now respond very quickly to interrupts for
+   fast cancellation. ([#10934](https://github.com/hashicorp/terraform/issues/10934))
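The state-locking feature above surfaces to backends through the `state.Locker` interface, and the Consul and S3 backends in this diff both follow the same acquire/do/release pattern. A minimal sketch of that pattern, assuming the `state` package as used elsewhere in this diff (the helper name `withLock` is illustrative, not part of the codebase):

```go
package locking

import (
	"fmt"

	"github.com/hashicorp/terraform/state"
)

// withLock runs fn while holding the state lock whenever the backend's
// state manager supports locking; backends without lock support just run fn.
func withLock(s state.State, op string, fn func() error) error {
	locker, ok := s.(state.Locker) // locking is optional per backend
	if !ok {
		return fn()
	}

	info := state.NewLockInfo()
	info.Operation = op

	id, err := locker.Lock(info)
	if err != nil {
		return fmt.Errorf("failed to lock state: %s", err)
	}
	// Best-effort release; the real backends wrap unlock failures together
	// with the original error (see errStateUnlock further down in this diff).
	defer locker.Unlock(id)

	return fn()
}
```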
+
+BUG FIXES:
+
+ * core: targeting will remove untargeted providers ([#12050](https://github.com/hashicorp/terraform/issues/12050))
+ * core: doing a map lookup in a resource config with a computed set no longer crashes ([#12210](https://github.com/hashicorp/terraform/issues/12210))
+ * provider/aws: Fixes issue for aws_lb_ssl_negotiation_policy of already deleted ELB ([#12360](https://github.com/hashicorp/terraform/issues/12360))
+ * provider/aws: Populate the iam_instance_profile uniqueId ([#12449](https://github.com/hashicorp/terraform/issues/12449))
+ * provider/aws: Only send iops when creating io1 devices ([#12392](https://github.com/hashicorp/terraform/issues/12392))
+ * provider/aws: Fix spurious aws_spot_fleet_request diffs ([#12437](https://github.com/hashicorp/terraform/issues/12437))
+ * provider/aws: Changing volumes in ECS task definition should force new revision ([#11403](https://github.com/hashicorp/terraform/issues/11403))
+ * provider/aws: Ignore whitespace in json diff for aws_dms_replication_task options ([#12380](https://github.com/hashicorp/terraform/issues/12380))
+ * provider/aws: Check spot instance is running before trying to attach volumes ([#12459](https://github.com/hashicorp/terraform/issues/12459))
+ * provider/aws: Add the IPV6 cidr block to the vpc datasource ([#12529](https://github.com/hashicorp/terraform/issues/12529))
+ * provider/aws: Error on trying to recreate an existing customer gateway ([#12501](https://github.com/hashicorp/terraform/issues/12501))
+ * provider/aws: Prevent aws_dms_replication_task panic ([#12539](https://github.com/hashicorp/terraform/issues/12539))
+ * provider/aws: output the task definition name when errors occur during refresh ([#12609](https://github.com/hashicorp/terraform/issues/12609))
+ * provider/aws: Refresh iam saml provider from state on 404 ([#12602](https://github.com/hashicorp/terraform/issues/12602))
+ * provider/aws: Add address, port, hosted_zone_id and endpoint for aws_db_instance datasource ([#12623](https://github.com/hashicorp/terraform/issues/12623))
+ * provider/aws: Allow recreation of `aws_opsworks_user_profile` when the `user_arn` is changed ([#12595](https://github.com/hashicorp/terraform/issues/12595))
+ * provider/aws: Guard clause to prevent panic on ELB connectionSettings ([#12685](https://github.com/hashicorp/terraform/issues/12685))
+ * provider/azurerm: bug fix to prevent crashes during azurerm_container_service provisioning ([#12516](https://github.com/hashicorp/terraform/issues/12516))
+ * provider/cobbler: Fix Profile Repos ([#12452](https://github.com/hashicorp/terraform/issues/12452))
+ * provider/datadog: Update to datadog_monitor to use default values ([#12497](https://github.com/hashicorp/terraform/issues/12497))
+ * provider/datadog: Default notify_no_data on datadog_monitor to false ([#11903](https://github.com/hashicorp/terraform/issues/11903))
+ * provider/google: Correct the incorrect instance group manager URL returned from GKE ([#4336](https://github.com/hashicorp/terraform/issues/4336))
+ * provider/google: Fix a plan/apply cycle in IAM policies ([#12387](https://github.com/hashicorp/terraform/issues/12387))
+ * provider/google: Fix a plan/apply cycle in forwarding rules when only a single port is specified ([#12662](https://github.com/hashicorp/terraform/issues/12662))
+ * provider/google: Minor correction: "Deleting disk" message in Delete method ([#12521](https://github.com/hashicorp/terraform/issues/12521))
+ * provider/mysql: Avoid crash on un-interpolated provider cfg ([#12391](https://github.com/hashicorp/terraform/issues/12391))
+ * provider/ns1: Fix incorrect schema (causing crash) for 'ns1_user.notify' ([#12721](https://github.com/hashicorp/terraform/issues/12721))
+ * provider/openstack: Handle cases where volumes are disabled ([#12374](https://github.com/hashicorp/terraform/issues/12374))
+ * provider/openstack: Toggle Creation of Default Security Group Rules ([#12119](https://github.com/hashicorp/terraform/issues/12119))
+ * provider/openstack: Change Port fixed_ip to a Set ([#12613](https://github.com/hashicorp/terraform/issues/12613))
+ * provider/openstack: Add network_id to Network data source ([#12615](https://github.com/hashicorp/terraform/issues/12615))
+ * provider/openstack: Check for ErrDefault500 when creating/deleting pool member ([#12664](https://github.com/hashicorp/terraform/issues/12664))
+ * provider/rancher: Apply the set value for finish_upgrade to prevent recurring plans ([#12545](https://github.com/hashicorp/terraform/issues/12545))
+ * provider/scaleway: work around API concurrency issue ([#12707](https://github.com/hashicorp/terraform/issues/12707))
+ * provider/statuscake: use default status code list when updating test ([#12375](https://github.com/hashicorp/terraform/issues/12375))
+
+## 0.9.0 from 0.9.0-beta2 (March 15, 2017)
+
+**This only includes changes from 0.9.0-beta2 to 0.9.0 final. The section above has the complete 0.8.x to 0.9.0 CHANGELOG.**
+
+FEATURES:
+
+ * **New Provider:** `Circonus` ([#12578](https://github.com/hashicorp/terraform/issues/12578))
+
+BACKWARDS INCOMPATIBILITIES / NOTES:
+
+ * provider/aws: `aws_codebuild_project` renamed `timeout` to `build_timeout` ([#12503](https://github.com/hashicorp/terraform/issues/12503))
+ * provider/azurerm: `azurerm_virtual_machine` and `azurerm_virtual_machine_scale_set` now store a hash of custom_data rather than the full custom_data ([#12214](https://github.com/hashicorp/terraform/issues/12214))
+ * provider/google: compute_instance, compute_instance_template, and compute_disk all have a subtly changed logic when specifying an image family as the image; in 0.8.x they would pin to the latest image in the family when the resource is created; in 0.9.x they pass the family to the API and use its behaviour. New input formats are also supported. ([#12223](https://github.com/hashicorp/terraform/issues/12223))
+ * provider/google: removed the unused and deprecated region field from google_compute_backend_service ([#12663](https://github.com/hashicorp/terraform/issues/12663))
+ * provider/google: removed the deprecated account_file field for the Google Cloud provider ([#12668](https://github.com/hashicorp/terraform/issues/12668))
+ * provider/google: removed the deprecated fields from google_project ([#12659](https://github.com/hashicorp/terraform/issues/12659))
+
+IMPROVEMENTS:
+
+ * provider/azurerm: store only hash of `azurerm_virtual_machine` and `azurerm_virtual_machine_scale_set` custom_data - reduces size of state ([#12214](https://github.com/hashicorp/terraform/issues/12214))
+ * report all errors encountered during config validation ([#12383](https://github.com/hashicorp/terraform/issues/12383))
+
+BUG FIXES:
+
+ * provider/google: Correct the incorrect instance group manager URL returned from GKE ([#4336](https://github.com/hashicorp/terraform/issues/4336))
+ * provider/google: Fix a plan/apply cycle in IAM policies ([#12387](https://github.com/hashicorp/terraform/issues/12387))
+ * provider/google: Fix a plan/apply cycle in forwarding rules when only a single port is specified ([#12662](https://github.com/hashicorp/terraform/issues/12662))
+
 ## 0.9.0-beta2 (March 2, 2017)
 
 BACKWARDS INCOMPATIBILITIES / NOTES:
 
diff --git a/Makefile b/Makefile
index bc08c01b6a..319492ef13 100644
--- a/Makefile
+++ b/Makefile
@@ -38,10 +38,10 @@ plugin-dev: generate
 	mv $(GOPATH)/bin/$(PLUGIN) $(GOPATH)/bin/terraform-$(PLUGIN)
 
 # test runs the unit tests
-test:# fmtcheck errcheck generate
+test: fmtcheck errcheck generate
 	go test -i $(TEST) || exit 1
 	echo $(TEST) | \
-		xargs -t -n4 go test $(TESTARGS) -timeout=30s -parallel=4
+		xargs -t -n4 go test $(TESTARGS) -timeout=60s -parallel=4
 
 # testacc runs acceptance tests
 testacc: fmtcheck generate
diff --git a/README.md b/README.md
index a21edd80ea..a7b9eea326 100644
--- a/README.md
+++ b/README.md
@@ -1,7 +1,7 @@
 Terraform
 =========
 
-- Website: http://www.terraform.io
+- Website: https://www.terraform.io
 - [![Gitter chat](https://badges.gitter.im/hashicorp-terraform/Lobby.png)](https://gitter.im/hashicorp-terraform/Lobby)
 - Mailing list: [Google Groups](http://groups.google.com/group/terraform-tool)
 
@@ -29,7 +29,7 @@ All documentation is available on the [Terraform website](http://www.terraform.i
 
 Developing Terraform
 --------------------
 
-If you wish to work on Terraform itself or any of its built-in providers, you'll first need [Go](http://www.golang.org) installed on your machine (version 1.7+ is
*required*). Alternatively, you can use the Vagrantfile in the root of this repo to stand up a virtual machine with the appropriate dev tooling already set up for you. +If you wish to work on Terraform itself or any of its built-in providers, you'll first need [Go](http://www.golang.org) installed on your machine (version 1.8+ is *required*). Alternatively, you can use the Vagrantfile in the root of this repo to stand up a virtual machine with the appropriate dev tooling already set up for you. For local dev first make sure Go is properly installed, including setting up a [GOPATH](http://golang.org/doc/code.html#GOPATH). You will also need to add `$GOPATH/bin` to your `$PATH`. diff --git a/backend/atlas/backend.go b/backend/atlas/backend.go index f6ce3ea51d..660327ae02 100644 --- a/backend/atlas/backend.go +++ b/backend/atlas/backend.go @@ -110,8 +110,8 @@ func (b *Backend) init() { "address": &schema.Schema{ Type: schema.TypeString, Optional: true, - Default: defaultAtlasServer, Description: schemaDescriptions["address"], + DefaultFunc: schema.EnvDefaultFunc("ATLAS_ADDRESS", defaultAtlasServer), }, }, diff --git a/backend/atlas/backend_test.go b/backend/atlas/backend_test.go index 14c8d1f9eb..313a528d27 100644 --- a/backend/atlas/backend_test.go +++ b/backend/atlas/backend_test.go @@ -1,12 +1,49 @@ package atlas import ( + "os" "testing" "github.com/hashicorp/terraform/backend" + "github.com/hashicorp/terraform/config" + "github.com/hashicorp/terraform/terraform" ) func TestImpl(t *testing.T) { var _ backend.Backend = new(Backend) var _ backend.CLI = new(Backend) } + +func TestConfigure_envAddr(t *testing.T) { + defer os.Setenv("ATLAS_ADDRESS", os.Getenv("ATLAS_ADDRESS")) + os.Setenv("ATLAS_ADDRESS", "http://foo.com") + + b := &Backend{} + err := b.Configure(terraform.NewResourceConfig(config.TestRawConfig(t, map[string]interface{}{ + "name": "foo/bar", + }))) + if err != nil { + t.Fatalf("err: %s", err) + } + + if b.stateClient.Server != "http://foo.com" { + t.Fatalf("bad: %#v", b.stateClient) + } +} + +func TestConfigure_envToken(t *testing.T) { + defer os.Setenv("ATLAS_TOKEN", os.Getenv("ATLAS_TOKEN")) + os.Setenv("ATLAS_TOKEN", "foo") + + b := &Backend{} + err := b.Configure(terraform.NewResourceConfig(config.TestRawConfig(t, map[string]interface{}{ + "name": "foo/bar", + }))) + if err != nil { + t.Fatalf("err: %s", err) + } + + if b.stateClient.AccessToken != "foo" { + t.Fatalf("bad: %#v", b.stateClient) + } +} diff --git a/backend/init/init.go b/backend/init/init.go index 7297904b01..685276dde6 100644 --- a/backend/init/init.go +++ b/backend/init/init.go @@ -12,6 +12,7 @@ import ( backendlocal "github.com/hashicorp/terraform/backend/local" backendconsul "github.com/hashicorp/terraform/backend/remote-state/consul" backendinmem "github.com/hashicorp/terraform/backend/remote-state/inmem" + backendS3 "github.com/hashicorp/terraform/backend/remote-state/s3" ) // backends is the list of available backends. 
This is a global variable @@ -36,6 +37,7 @@ func init() { "local": func() backend.Backend { return &backendlocal.Local{} }, "consul": func() backend.Backend { return backendconsul.New() }, "inmem": func() backend.Backend { return backendinmem.New() }, + "s3": func() backend.Backend { return backendS3.New() }, } // Add the legacy remote backends that haven't yet been convertd to diff --git a/backend/local/backend.go b/backend/local/backend.go index 61df56bde7..063766b1ec 100644 --- a/backend/local/backend.go +++ b/backend/local/backend.go @@ -127,7 +127,7 @@ func (b *Local) States() ([]string, error) { // the listing always start with "default" envs := []string{backend.DefaultStateName} - entries, err := ioutil.ReadDir(DefaultEnvDir) + entries, err := ioutil.ReadDir(b.stateEnvDir()) // no error if there's no envs configured if os.IsNotExist(err) { return envs, nil @@ -166,7 +166,7 @@ func (b *Local) DeleteState(name string) error { } delete(b.states, name) - return os.RemoveAll(filepath.Join(DefaultEnvDir, name)) + return os.RemoveAll(filepath.Join(b.stateEnvDir(), name)) } func (b *Local) State(name string) (state.State, error) { @@ -320,17 +320,12 @@ func (b *Local) StatePaths(name string) (string, string, string) { name = backend.DefaultStateName } - envDir := DefaultEnvDir - if b.StateEnvDir != "" { - envDir = b.StateEnvDir - } - if name == backend.DefaultStateName { if statePath == "" { statePath = DefaultStateFilename } } else { - statePath = filepath.Join(envDir, name, DefaultStateFilename) + statePath = filepath.Join(b.stateEnvDir(), name, DefaultStateFilename) } if stateOutPath == "" { @@ -353,12 +348,7 @@ func (b *Local) createState(name string) error { return nil } - envDir := DefaultEnvDir - if b.StateEnvDir != "" { - envDir = b.StateEnvDir - } - - stateDir := filepath.Join(envDir, name) + stateDir := filepath.Join(b.stateEnvDir(), name) s, err := os.Stat(stateDir) if err == nil && s.IsDir() { // no need to check for os.IsNotExist, since that is covered by os.MkdirAll @@ -374,6 +364,15 @@ func (b *Local) createState(name string) error { return nil } +// stateEnvDir returns the directory where state environments are stored. +func (b *Local) stateEnvDir() string { + if b.StateEnvDir != "" { + return b.StateEnvDir + } + + return DefaultEnvDir +} + // currentStateName returns the name of the current named state as set in the // configuration files. // If there are no configured environments, currentStateName returns "default" diff --git a/backend/local/backend_plan.go b/backend/local/backend_plan.go index afb483dad7..f637358736 100644 --- a/backend/local/backend_plan.go +++ b/backend/local/backend_plan.go @@ -110,6 +110,12 @@ func (b *Local) opPlan( // Write the backend if we have one plan.Backend = op.PlanOutBackend + // This works around a bug (#12871) which is no longer possible to + // trigger but will exist for already corrupted upgrades. 
+ if plan.Backend != nil && plan.State != nil { + plan.State.Remote = nil + } + log.Printf("[INFO] backend/local: writing plan output to: %s", path) f, err := os.Create(path) if err == nil { diff --git a/backend/local/backend_refresh.go b/backend/local/backend_refresh.go index 1de9902d11..c8b23bd323 100644 --- a/backend/local/backend_refresh.go +++ b/backend/local/backend_refresh.go @@ -4,6 +4,7 @@ import ( "context" "fmt" "os" + "strings" "github.com/hashicorp/errwrap" "github.com/hashicorp/go-multierror" @@ -22,24 +23,17 @@ func (b *Local) opRefresh( if b.Backend == nil { if _, err := os.Stat(b.StatePath); err != nil { if os.IsNotExist(err) { - runningOp.Err = fmt.Errorf( - "The Terraform state file for your infrastructure does not\n"+ - "exist. The 'refresh' command only works and only makes sense\n"+ - "when there is existing state that Terraform is managing. Please\n"+ - "double-check the value given below and try again. If you\n"+ - "haven't created infrastructure with Terraform yet, use the\n"+ - "'terraform apply' command.\n\n"+ - "Path: %s", - b.StatePath) - return + err = nil } - runningOp.Err = fmt.Errorf( - "There was an error reading the Terraform state that is needed\n"+ - "for refreshing. The path and error are shown below.\n\n"+ - "Path: %s\n\nError: %s", - b.StatePath, err) - return + if err != nil { + runningOp.Err = fmt.Errorf( + "There was an error reading the Terraform state that is needed\n"+ + "for refreshing. The path and error are shown below.\n\n"+ + "Path: %s\n\nError: %s", + b.StatePath, err) + return + } } } @@ -74,6 +68,12 @@ func (b *Local) opRefresh( // Set our state runningOp.State = opState.State() + if runningOp.State.Empty() || !runningOp.State.HasResources() { + if b.CLI != nil { + b.CLI.Output(b.Colorize().Color( + strings.TrimSpace(refreshNoState) + "\n")) + } + } // Perform operation and write the resulting state to the running op newState, err := tfCtx.Refresh() @@ -93,3 +93,11 @@ func (b *Local) opRefresh( return } } + +const refreshNoState = ` +[reset][bold][yellow]Empty or non-existent state file.[reset][yellow] + +Refresh will do nothing. Refresh does not error or return an erroneous +exit status because many automation scripts use refresh, plan, then apply +and may not have a state file yet for the first run. 
+` diff --git a/backend/local/backend_test.go b/backend/local/backend_test.go index f929e74413..3b5f1f9bdf 100644 --- a/backend/local/backend_test.go +++ b/backend/local/backend_test.go @@ -20,6 +20,12 @@ func TestLocal_impl(t *testing.T) { var _ backend.CLI = new(Local) } +func TestLocal_backend(t *testing.T) { + defer testTmpDir(t)() + b := &Local{} + backend.TestBackend(t, b, b) +} + func checkState(t *testing.T, path, expected string) { // Read the state f, err := os.Open(path) diff --git a/backend/local/testing.go b/backend/local/testing.go index 67048766f4..91ba0f9004 100644 --- a/backend/local/testing.go +++ b/backend/local/testing.go @@ -21,6 +21,7 @@ func TestLocal(t *testing.T) *Local { StatePath: filepath.Join(tempDir, "state.tfstate"), StateOutPath: filepath.Join(tempDir, "state.tfstate"), StateBackupPath: filepath.Join(tempDir, "state.tfstate.bak"), + StateEnvDir: filepath.Join(tempDir, "state.tfstate.d"), ContextOpts: &terraform.ContextOpts{}, } } diff --git a/backend/remote-state/consul/backend.go b/backend/remote-state/consul/backend.go index 7193bf8a96..79aeea4024 100644 --- a/backend/remote-state/consul/backend.go +++ b/backend/remote-state/consul/backend.go @@ -53,6 +53,20 @@ func New() backend.Backend { Description: "HTTP Auth in the format of 'username:password'", Default: "", // To prevent input }, + + "gzip": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Description: "Compress the state data using gzip", + Default: false, + }, + + "lock": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Description: "Lock state access", + Default: true, + }, }, } @@ -64,13 +78,18 @@ func New() backend.Backend { type Backend struct { *schema.Backend + // The fields below are set from configure configData *schema.ResourceData + lock bool } func (b *Backend) configure(ctx context.Context) error { // Grab the resource data b.configData = schema.FromContextBackendConfig(ctx) + // Store the lock information + b.lock = b.configData.Get("lock").(bool) + // Initialize a client to test config _, err := b.clientRaw() return err diff --git a/backend/remote-state/consul/backend_state.go b/backend/remote-state/consul/backend_state.go index 6e6d115f0d..74f30c8427 100644 --- a/backend/remote-state/consul/backend_state.go +++ b/backend/remote-state/consul/backend_state.go @@ -56,7 +56,7 @@ func (b *Backend) States() ([]string, error) { } func (b *Backend) DeleteState(name string) error { - if name == backend.DefaultStateName { + if name == backend.DefaultStateName || name == "" { return fmt.Errorf("can't delete default state") } @@ -85,27 +85,39 @@ func (b *Backend) State(name string) (state.State, error) { // Determine the path of the data path := b.path(name) + // Determine whether to gzip or not + gzip := b.configData.Get("gzip").(bool) + // Build the state client - stateMgr := &remote.State{ + var stateMgr state.State = &remote.State{ Client: &RemoteClient{ Client: client, Path: path, + GZip: gzip, }, } + // If we're not locking, disable it + if !b.lock { + stateMgr = &state.LockDisabled{Inner: stateMgr} + } + + // Get the locker, which we know always exists + stateMgrLocker := stateMgr.(state.Locker) + // Grab a lock, we use this to write an empty state if one doesn't // exist already. We have to write an empty state as a sentinel value // so States() knows it exists. 
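The `gzip` option registered in the Consul backend schema above is honored by the remote client later in this diff: `Put` compresses when the option is set, and `Get` sniffs the gzip magic byte so both compressed and plain JSON payloads can be read back. A self-contained sketch of that round-trip, mirroring the `compressState`/`uncompressState` helpers below:

```go
package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
)

// compress gzip-encodes serialized state, as compressState does below.
func compress(data []byte) ([]byte, error) {
	var buf bytes.Buffer
	gz := gzip.NewWriter(&buf)
	if _, err := gz.Write(data); err != nil {
		return nil, err
	}
	if err := gz.Close(); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

func main() {
	stateJSON := []byte(`{"version":3}`)

	stored, err := compress(stateJSON)
	if err != nil {
		panic(err)
	}

	// A gzip stream always begins with the byte 0x1f, while JSON state
	// begins with '{', so a one-byte peek is enough to pick the decoder.
	payload := stored
	if len(stored) >= 1 && stored[0] == '\x1f' {
		gz, err := gzip.NewReader(bytes.NewReader(stored))
		if err != nil {
			panic(err)
		}
		var out bytes.Buffer
		if _, err := out.ReadFrom(gz); err != nil {
			panic(err)
		}
		if err := gz.Close(); err != nil {
			panic(err)
		}
		payload = out.Bytes()
	}

	fmt.Println(string(payload)) // {"version":3}
}
```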
lockInfo := state.NewLockInfo() lockInfo.Operation = "init" - lockId, err := stateMgr.Lock(lockInfo) + lockId, err := stateMgrLocker.Lock(lockInfo) if err != nil { return nil, fmt.Errorf("failed to lock state in Consul: %s", err) } // Local helper function so we can call it multiple places lockUnlock := func(parent error) error { - if err := stateMgr.Unlock(lockId); err != nil { + if err := stateMgrLocker.Unlock(lockId); err != nil { return fmt.Errorf(strings.TrimSpace(errStateUnlock), lockId, err) } diff --git a/backend/remote-state/consul/backend_test.go b/backend/remote-state/consul/backend_test.go index fb2c0e0f69..b75d252511 100644 --- a/backend/remote-state/consul/backend_test.go +++ b/backend/remote-state/consul/backend_test.go @@ -2,10 +2,12 @@ package consul import ( "fmt" + "io/ioutil" "os" "testing" "time" + "github.com/hashicorp/consul/testutil" "github.com/hashicorp/terraform/backend" ) @@ -13,19 +15,80 @@ func TestBackend_impl(t *testing.T) { var _ backend.Backend = new(Backend) } -func TestBackend(t *testing.T) { - addr := os.Getenv("CONSUL_HTTP_ADDR") - if addr == "" { - t.Log("consul tests require CONSUL_HTTP_ADDR") +func newConsulTestServer(t *testing.T) *testutil.TestServer { + skip := os.Getenv("TF_ACC") == "" && os.Getenv("TF_CONSUL_TEST") == "" + if skip { + t.Log("consul server tests require setting TF_ACC or TF_CONSUL_TEST") t.Skip() } - // Get the backend - b := backend.TestBackendConfig(t, New(), map[string]interface{}{ - "address": addr, - "path": fmt.Sprintf("tf-unit/%s", time.Now().String()), + srv := testutil.NewTestServerConfig(t, func(c *testutil.TestServerConfig) { + c.LogLevel = "warn" + + if !testing.Verbose() { + c.Stdout = ioutil.Discard + c.Stderr = ioutil.Discard + } + }) + + return srv +} + +func TestBackend(t *testing.T) { + srv := newConsulTestServer(t) + defer srv.Stop() + + path := fmt.Sprintf("tf-unit/%s", time.Now().String()) + + // Get the backend. We need two to test locking. + b1 := backend.TestBackendConfig(t, New(), map[string]interface{}{ + "address": srv.HTTPAddr, + "path": path, + }) + + b2 := backend.TestBackendConfig(t, New(), map[string]interface{}{ + "address": srv.HTTPAddr, + "path": path, }) // Test - backend.TestBackend(t, b) + backend.TestBackend(t, b1, b2) +} + +func TestBackend_lockDisabled(t *testing.T) { + srv := newConsulTestServer(t) + defer srv.Stop() + + path := fmt.Sprintf("tf-unit/%s", time.Now().String()) + + // Get the backend. We need two to test locking. 
+ b1 := backend.TestBackendConfig(t, New(), map[string]interface{}{ + "address": srv.HTTPAddr, + "path": path, + "lock": false, + }) + + b2 := backend.TestBackendConfig(t, New(), map[string]interface{}{ + "address": srv.HTTPAddr, + "path": path + "different", // Diff so locking test would fail if it was locking + "lock": false, + }) + + // Test + backend.TestBackend(t, b1, b2) +} + +func TestBackend_gzip(t *testing.T) { + srv := newConsulTestServer(t) + defer srv.Stop() + + // Get the backend + b := backend.TestBackendConfig(t, New(), map[string]interface{}{ + "address": srv.HTTPAddr, + "path": fmt.Sprintf("tf-unit/%s", time.Now().String()), + "gzip": true, + }) + + // Test + backend.TestBackend(t, b, nil) } diff --git a/backend/remote-state/consul/client.go b/backend/remote-state/consul/client.go index 8a26165e85..cd59711631 100644 --- a/backend/remote-state/consul/client.go +++ b/backend/remote-state/consul/client.go @@ -1,6 +1,8 @@ package consul import ( + "bytes" + "compress/gzip" "crypto/md5" "encoding/json" "errors" @@ -22,6 +24,7 @@ const ( type RemoteClient struct { Client *consulapi.Client Path string + GZip bool consulLock *consulapi.Lock lockCh <-chan struct{} @@ -36,18 +39,37 @@ func (c *RemoteClient) Get() (*remote.Payload, error) { return nil, nil } + payload := pair.Value + // If the payload starts with 0x1f, it's gzip, not json + if len(pair.Value) >= 1 && pair.Value[0] == '\x1f' { + if data, err := uncompressState(pair.Value); err == nil { + payload = data + } else { + return nil, err + } + } + md5 := md5.Sum(pair.Value) return &remote.Payload{ - Data: pair.Value, + Data: payload, MD5: md5[:], }, nil } func (c *RemoteClient) Put(data []byte) error { + payload := data + if c.GZip { + if compressedState, err := compressState(data); err == nil { + payload = compressedState + } else { + return err + } + } + kv := c.Client.KV() _, err := kv.Put(&consulapi.KVPair{ Key: c.Path, - Value: data, + Value: payload, }, nil) return err } @@ -177,3 +199,31 @@ func (c *RemoteClient) Unlock(id string) error { return err } + +func compressState(data []byte) ([]byte, error) { + b := new(bytes.Buffer) + gz := gzip.NewWriter(b) + if _, err := gz.Write(data); err != nil { + return nil, err + } + if err := gz.Flush(); err != nil { + return nil, err + } + if err := gz.Close(); err != nil { + return nil, err + } + return b.Bytes(), nil +} + +func uncompressState(data []byte) ([]byte, error) { + b := new(bytes.Buffer) + gz, err := gzip.NewReader(bytes.NewReader(data)) + if err != nil { + return nil, err + } + b.ReadFrom(gz) + if err := gz.Close(); err != nil { + return nil, err + } + return b.Bytes(), nil +} diff --git a/backend/remote-state/consul/client_test.go b/backend/remote-state/consul/client_test.go index d123e39c78..57b7c452ee 100644 --- a/backend/remote-state/consul/client_test.go +++ b/backend/remote-state/consul/client_test.go @@ -2,7 +2,6 @@ package consul import ( "fmt" - "os" "testing" "time" @@ -16,15 +15,12 @@ func TestRemoteClient_impl(t *testing.T) { } func TestRemoteClient(t *testing.T) { - addr := os.Getenv("CONSUL_HTTP_ADDR") - if addr == "" { - t.Log("consul tests require CONSUL_HTTP_ADDR") - t.Skip() - } + srv := newConsulTestServer(t) + defer srv.Stop() // Get the backend b := backend.TestBackendConfig(t, New(), map[string]interface{}{ - "address": addr, + "address": srv.HTTPAddr, "path": fmt.Sprintf("tf-unit/%s", time.Now().String()), }) @@ -38,18 +34,54 @@ func TestRemoteClient(t *testing.T) { remote.TestClient(t, state.(*remote.State).Client) } -func 
TestConsul_stateLock(t *testing.T) { - addr := os.Getenv("CONSUL_HTTP_ADDR") - if addr == "" { - t.Log("consul lock tests require CONSUL_HTTP_ADDR") - t.Skip() +// test the gzip functionality of the client +func TestRemoteClient_gzipUpgrade(t *testing.T) { + srv := newConsulTestServer(t) + defer srv.Stop() + + statePath := fmt.Sprintf("tf-unit/%s", time.Now().String()) + + // Get the backend + b := backend.TestBackendConfig(t, New(), map[string]interface{}{ + "address": srv.HTTPAddr, + "path": statePath, + }) + + // Grab the client + state, err := b.State(backend.DefaultStateName) + if err != nil { + t.Fatalf("err: %s", err) } + // Test + remote.TestClient(t, state.(*remote.State).Client) + + // create a new backend with gzip + b = backend.TestBackendConfig(t, New(), map[string]interface{}{ + "address": srv.HTTPAddr, + "path": statePath, + "gzip": true, + }) + + // Grab the client + state, err = b.State(backend.DefaultStateName) + if err != nil { + t.Fatalf("err: %s", err) + } + + // Test + remote.TestClient(t, state.(*remote.State).Client) +} + +func TestConsul_stateLock(t *testing.T) { + srv := newConsulTestServer(t) + defer srv.Stop() + path := fmt.Sprintf("tf-unit/%s", time.Now().String()) // create 2 instances to get 2 remote.Clients sA, err := backend.TestBackendConfig(t, New(), map[string]interface{}{ - "address": addr, + "address": srv.HTTPAddr, "path": path, }).State(backend.DefaultStateName) if err != nil { @@ -57,7 +89,7 @@ func TestConsul_stateLock(t *testing.T) { } sB, err := backend.TestBackendConfig(t, New(), map[string]interface{}{ - "address": addr, + "address": srv.HTTPAddr, "path": path, }).State(backend.DefaultStateName) if err != nil { diff --git a/backend/remote-state/s3/backend.go b/backend/remote-state/s3/backend.go new file mode 100644 index 0000000000..8265d7f255 --- /dev/null +++ b/backend/remote-state/s3/backend.go @@ -0,0 +1,198 @@ +package s3 + +import ( + "context" + "fmt" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/aws/session" + "github.com/aws/aws-sdk-go/service/dynamodb" + "github.com/aws/aws-sdk-go/service/s3" + cleanhttp "github.com/hashicorp/go-cleanhttp" + multierror "github.com/hashicorp/go-multierror" + "github.com/hashicorp/terraform/backend" + "github.com/hashicorp/terraform/helper/schema" + + terraformAWS "github.com/hashicorp/terraform/builtin/providers/aws" +) + +// New creates a new backend for S3 remote state. 
+func New() backend.Backend { + s := &schema.Backend{ + Schema: map[string]*schema.Schema{ + "bucket": &schema.Schema{ + Type: schema.TypeString, + Required: true, + Description: "The name of the S3 bucket", + }, + + "key": &schema.Schema{ + Type: schema.TypeString, + Required: true, + Description: "The path to the state file inside the bucket", + }, + + "region": &schema.Schema{ + Type: schema.TypeString, + Required: true, + Description: "The region of the S3 bucket.", + DefaultFunc: schema.EnvDefaultFunc("AWS_DEFAULT_REGION", nil), + }, + + "endpoint": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Description: "A custom endpoint for the S3 API", + DefaultFunc: schema.EnvDefaultFunc("AWS_S3_ENDPOINT", ""), + }, + + "encrypt": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Description: "Whether to enable server side encryption of the state file", + Default: false, + }, + + "acl": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Description: "Canned ACL to be applied to the state file", + Default: "", + }, + + "access_key": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Description: "AWS access key", + Default: "", + }, + + "secret_key": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Description: "AWS secret key", + Default: "", + }, + + "kms_key_id": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Description: "The ARN of a KMS Key to use for encrypting the state", + Default: "", + }, + + "lock_table": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Description: "DynamoDB table for state locking", + Default: "", + }, + + "profile": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Description: "AWS profile name", + Default: "", + }, + + "shared_credentials_file": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Description: "Path to a shared credentials file", + Default: "", + }, + + "token": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Description: "MFA token", + Default: "", + }, + + "role_arn": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Description: "The role to be assumed", + Default: "", + }, + }, + } + + result := &Backend{Backend: s} + result.Backend.ConfigureFunc = result.configure + return result +} + +type Backend struct { + *schema.Backend + + // The fields below are set from configure + s3Client *s3.S3 + dynClient *dynamodb.DynamoDB + + bucketName string + keyName string + serverSideEncryption bool + acl string + kmsKeyID string + lockTable string +} + +func (b *Backend) configure(ctx context.Context) error { + if b.s3Client != nil { + return nil + } + + // Grab the resource data + data := schema.FromContextBackendConfig(ctx) + + b.bucketName = data.Get("bucket").(string) + b.keyName = data.Get("key").(string) + b.serverSideEncryption = data.Get("encrypt").(bool) + b.acl = data.Get("acl").(string) + b.kmsKeyID = data.Get("kms_key_id").(string) + b.lockTable = data.Get("lock_table").(string) + + var errs []error + creds, err := terraformAWS.GetCredentials(&terraformAWS.Config{ + AccessKey: data.Get("access_key").(string), + SecretKey: data.Get("secret_key").(string), + Token: data.Get("token").(string), + Profile: data.Get("profile").(string), + CredsFilename: data.Get("shared_credentials_file").(string), + AssumeRoleARN: data.Get("role_arn").(string), + }) + if err != nil { + return err + } + + // Call Get to check for credential provider. 
If nothing found, we'll get an + // error, and we can present it nicely to the user + _, err = creds.Get() + if err != nil { + if awsErr, ok := err.(awserr.Error); ok && awsErr.Code() == "NoCredentialProviders" { + errs = append(errs, fmt.Errorf(`No valid credential sources found for AWS S3 remote. +Please see https://www.terraform.io/docs/state/remote/s3.html for more information on +providing credentials for the AWS S3 remote`)) + } else { + errs = append(errs, fmt.Errorf("Error loading credentials for AWS S3 remote: %s", err)) + } + return &multierror.Error{Errors: errs} + } + + endpoint := data.Get("endpoint").(string) + region := data.Get("region").(string) + + awsConfig := &aws.Config{ + Credentials: creds, + Endpoint: aws.String(endpoint), + Region: aws.String(region), + HTTPClient: cleanhttp.DefaultClient(), + } + sess := session.New(awsConfig) + b.s3Client = s3.New(sess) + b.dynClient = dynamodb.New(sess) + + return nil +} diff --git a/backend/remote-state/s3/backend_state.go b/backend/remote-state/s3/backend_state.go new file mode 100644 index 0000000000..2d745156e9 --- /dev/null +++ b/backend/remote-state/s3/backend_state.go @@ -0,0 +1,159 @@ +package s3 + +import ( + "fmt" + "sort" + "strings" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/s3" + "github.com/hashicorp/terraform/backend" + "github.com/hashicorp/terraform/state" + "github.com/hashicorp/terraform/state/remote" + "github.com/hashicorp/terraform/terraform" +) + +const ( + // This will be used as directory name, the odd looking colon is simply to + // reduce the chance of name conflicts with existing objects. + keyEnvPrefix = "env:" +) + +func (b *Backend) States() ([]string, error) { + params := &s3.ListObjectsInput{ + Bucket: &b.bucketName, + Prefix: aws.String(keyEnvPrefix + "/"), + } + + resp, err := b.s3Client.ListObjects(params) + if err != nil { + return nil, err + } + + var envs []string + for _, obj := range resp.Contents { + env := keyEnv(*obj.Key) + if env != "" { + envs = append(envs, env) + } + } + + sort.Strings(envs) + envs = append([]string{backend.DefaultStateName}, envs...) + return envs, nil +} + +// extract the env name from the S3 key +func keyEnv(key string) string { + parts := strings.Split(key, "/") + if len(parts) < 3 { + // no env here + return "" + } + + if parts[0] != keyEnvPrefix { + // not our key, so ignore + return "" + } + + return parts[1] +} + +func (b *Backend) DeleteState(name string) error { + if name == backend.DefaultStateName || name == "" { + return fmt.Errorf("can't delete default state") + } + + params := &s3.DeleteObjectInput{ + Bucket: &b.bucketName, + Key: aws.String(b.path(name)), + } + + _, err := b.s3Client.DeleteObject(params) + if err != nil { + return err + } + + return nil +} + +func (b *Backend) State(name string) (state.State, error) { + client := &RemoteClient{ + s3Client: b.s3Client, + dynClient: b.dynClient, + bucketName: b.bucketName, + path: b.path(name), + serverSideEncryption: b.serverSideEncryption, + acl: b.acl, + kmsKeyID: b.kmsKeyID, + lockTable: b.lockTable, + } + + stateMgr := &remote.State{Client: client} + + //if this isn't the default state name, we need to create the object so + //it's listed by States. 
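The `env:` prefix defined above gives each named state environment its own pseudo-directory in the bucket, which is what lets `States` list them with a single prefixed `ListObjects` call. A standalone sketch of the key layout in both directions, mirroring `path` and `keyEnv` in this file (the function names here are illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

const keyEnvPrefix = "env:" // same constant as in backend_state.go

// objectKey mirrors Backend.path: the default state keeps the configured
// key; named environments are stored under "env:/<env>/<key>".
func objectKey(env, key string) string {
	if env == "default" { // backend.DefaultStateName
		return key
	}
	return strings.Join([]string{keyEnvPrefix, env, key}, "/")
}

// envOfKey mirrors keyEnv: recover the environment name from an object key.
func envOfKey(key string) string {
	parts := strings.Split(key, "/")
	if len(parts) < 3 || parts[0] != keyEnvPrefix {
		return "" // the default state, or an unrelated object
	}
	return parts[1]
}

func main() {
	k := objectKey("staging", "terraform.tfstate")
	fmt.Println(k)           // env:/staging/terraform.tfstate
	fmt.Println(envOfKey(k)) // staging
}
```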
+ if name != backend.DefaultStateName { + // take a lock on this state while we write it + lockInfo := state.NewLockInfo() + lockInfo.Operation = "init" + lockId, err := client.Lock(lockInfo) + if err != nil { + return nil, fmt.Errorf("failed to lock s3 state: %s", err) + } + + // Local helper function so we can call it multiple places + lockUnlock := func(parent error) error { + if err := stateMgr.Unlock(lockId); err != nil { + return fmt.Errorf(strings.TrimSpace(errStateUnlock), lockId, err) + } + return parent + } + + // Grab the value + if err := stateMgr.RefreshState(); err != nil { + err = lockUnlock(err) + return nil, err + } + + // If we have no state, we have to create an empty state + if v := stateMgr.State(); v == nil { + if err := stateMgr.WriteState(terraform.NewState()); err != nil { + err = lockUnlock(err) + return nil, err + } + if err := stateMgr.PersistState(); err != nil { + err = lockUnlock(err) + return nil, err + } + } + + // Unlock, the state should now be initialized + if err := lockUnlock(nil); err != nil { + return nil, err + } + + } + + return stateMgr, nil +} + +func (b *Backend) client() *RemoteClient { + return &RemoteClient{} +} + +func (b *Backend) path(name string) string { + if name == backend.DefaultStateName { + return b.keyName + } + + return strings.Join([]string{keyEnvPrefix, name, b.keyName}, "/") +} + +const errStateUnlock = ` +Error unlocking S3 state. Lock ID: %s + +Error: %s + +You may have to force-unlock this state in order to use it again. +` diff --git a/backend/remote-state/s3/backend_test.go b/backend/remote-state/s3/backend_test.go new file mode 100644 index 0000000000..f8b664b801 --- /dev/null +++ b/backend/remote-state/s3/backend_test.go @@ -0,0 +1,213 @@ +package s3 + +import ( + "fmt" + "os" + "testing" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/dynamodb" + "github.com/aws/aws-sdk-go/service/s3" + "github.com/hashicorp/terraform/backend" +) + +// verify that we are doing ACC tests or the S3 tests specifically +func testACC(t *testing.T) { + skip := os.Getenv("TF_ACC") == "" && os.Getenv("TF_S3_TEST") == "" + if skip { + t.Log("s3 backend tests require setting TF_ACC or TF_S3_TEST") + t.Skip() + } + if os.Getenv("AWS_DEFAULT_REGION") == "" { + os.Setenv("AWS_DEFAULT_REGION", "us-west-2") + } +} + +func TestBackend_impl(t *testing.T) { + var _ backend.Backend = new(Backend) +} + +func TestBackendConfig(t *testing.T) { + // This test just instantiates the client. Shouldn't make any actual + // requests nor incur any costs. 
+ + config := map[string]interface{}{ + "region": "us-west-1", + "bucket": "tf-test", + "key": "state", + "encrypt": true, + "access_key": "ACCESS_KEY", + "secret_key": "SECRET_KEY", + "lock_table": "dynamoTable", + } + + b := backend.TestBackendConfig(t, New(), config).(*Backend) + + if *b.s3Client.Config.Region != "us-west-1" { + t.Fatalf("Incorrect region was populated") + } + if b.bucketName != "tf-test" { + t.Fatalf("Incorrect bucketName was populated") + } + if b.keyName != "state" { + t.Fatalf("Incorrect keyName was populated") + } + + credentials, err := b.s3Client.Config.Credentials.Get() + if err != nil { + t.Fatalf("Error when requesting credentials") + } + if credentials.AccessKeyID != "ACCESS_KEY" { + t.Fatalf("Incorrect Access Key Id was populated") + } + if credentials.SecretAccessKey != "SECRET_KEY" { + t.Fatalf("Incorrect Secret Access Key was populated") + } +} + +func TestBackend(t *testing.T) { + testACC(t) + + bucketName := fmt.Sprintf("terraform-remote-s3-test-%x", time.Now().Unix()) + keyName := "testState" + + b := backend.TestBackendConfig(t, New(), map[string]interface{}{ + "bucket": bucketName, + "key": keyName, + "encrypt": true, + }).(*Backend) + + createS3Bucket(t, b.s3Client, bucketName) + defer deleteS3Bucket(t, b.s3Client, bucketName) + + backend.TestBackend(t, b, nil) +} + +func TestBackendLocked(t *testing.T) { + testACC(t) + + bucketName := fmt.Sprintf("terraform-remote-s3-test-%x", time.Now().Unix()) + keyName := "testState" + + b1 := backend.TestBackendConfig(t, New(), map[string]interface{}{ + "bucket": bucketName, + "key": keyName, + "encrypt": true, + "lock_table": bucketName, + }).(*Backend) + + b2 := backend.TestBackendConfig(t, New(), map[string]interface{}{ + "bucket": bucketName, + "key": keyName, + "encrypt": true, + "lock_table": bucketName, + }).(*Backend) + + createS3Bucket(t, b1.s3Client, bucketName) + defer deleteS3Bucket(t, b1.s3Client, bucketName) + createDynamoDBTable(t, b1.dynClient, bucketName) + defer deleteDynamoDBTable(t, b1.dynClient, bucketName) + + backend.TestBackend(t, b1, b2) +} + +func createS3Bucket(t *testing.T, s3Client *s3.S3, bucketName string) { + createBucketReq := &s3.CreateBucketInput{ + Bucket: &bucketName, + } + + // Be clear about what we're doing in case the user needs to clean + // this up later. + t.Logf("creating S3 bucket %s in %s", bucketName, *s3Client.Config.Region) + _, err := s3Client.CreateBucket(createBucketReq) + if err != nil { + t.Fatal("failed to create test S3 bucket:", err) + } +} + +func deleteS3Bucket(t *testing.T, s3Client *s3.S3, bucketName string) { + warning := "WARNING: Failed to delete the test S3 bucket. It may have been left in your AWS account and may incur storage charges. (error was %s)" + + // first we have to get rid of the env objects, or we can't delete the bucket + resp, err := s3Client.ListObjects(&s3.ListObjectsInput{Bucket: &bucketName}) + if err != nil { + t.Logf(warning, err) + return + } + for _, obj := range resp.Contents { + if _, err := s3Client.DeleteObject(&s3.DeleteObjectInput{Bucket: &bucketName, Key: obj.Key}); err != nil { + // this will need cleanup no matter what, so just warn and exit + t.Logf(warning, err) + return + } + } + + if _, err := s3Client.DeleteBucket(&s3.DeleteBucketInput{Bucket: &bucketName}); err != nil { + t.Logf(warning, err) + } +} + +// create the dynamoDB table, and wait until we can query it. 
+func createDynamoDBTable(t *testing.T, dynClient *dynamodb.DynamoDB, tableName string) { + createInput := &dynamodb.CreateTableInput{ + AttributeDefinitions: []*dynamodb.AttributeDefinition{ + { + AttributeName: aws.String("LockID"), + AttributeType: aws.String("S"), + }, + }, + KeySchema: []*dynamodb.KeySchemaElement{ + { + AttributeName: aws.String("LockID"), + KeyType: aws.String("HASH"), + }, + }, + ProvisionedThroughput: &dynamodb.ProvisionedThroughput{ + ReadCapacityUnits: aws.Int64(5), + WriteCapacityUnits: aws.Int64(5), + }, + TableName: aws.String(tableName), + } + + _, err := dynClient.CreateTable(createInput) + if err != nil { + t.Fatal(err) + } + + // now wait until it's ACTIVE + start := time.Now() + time.Sleep(time.Second) + + describeInput := &dynamodb.DescribeTableInput{ + TableName: aws.String(tableName), + } + + for { + resp, err := dynClient.DescribeTable(describeInput) + if err != nil { + t.Fatal(err) + } + + if *resp.Table.TableStatus == "ACTIVE" { + return + } + + if time.Since(start) > time.Minute { + t.Fatalf("timed out creating DynamoDB table %s", tableName) + } + + time.Sleep(3 * time.Second) + } + +} + +func deleteDynamoDBTable(t *testing.T, dynClient *dynamodb.DynamoDB, tableName string) { + params := &dynamodb.DeleteTableInput{ + TableName: aws.String(tableName), + } + _, err := dynClient.DeleteTable(params) + if err != nil { + t.Logf("WARNING: Failed to delete the test DynamoDB table %q. It has been left in your AWS account and may incur charges. (error was %s)", tableName, err) + } +} diff --git a/state/remote/s3.go b/backend/remote-state/s3/client.go similarity index 53% rename from state/remote/s3.go rename to backend/remote-state/s3/client.go index d9799e4373..735180ba91 100644 --- a/state/remote/s3.go +++ b/backend/remote-state/s3/client.go @@ -1,4 +1,4 @@ -package remote +package s3 import ( "bytes" @@ -6,127 +6,32 @@ import ( "fmt" "io" "log" - "os" - "strconv" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awserr" - "github.com/aws/aws-sdk-go/aws/session" "github.com/aws/aws-sdk-go/service/dynamodb" "github.com/aws/aws-sdk-go/service/s3" - "github.com/hashicorp/go-cleanhttp" - "github.com/hashicorp/go-multierror" + multierror "github.com/hashicorp/go-multierror" uuid "github.com/hashicorp/go-uuid" - terraformAws "github.com/hashicorp/terraform/builtin/providers/aws" "github.com/hashicorp/terraform/state" + "github.com/hashicorp/terraform/state/remote" ) -func s3Factory(conf map[string]string) (Client, error) { - bucketName, ok := conf["bucket"] - if !ok { - return nil, fmt.Errorf("missing 'bucket' configuration") - } - - keyName, ok := conf["key"] - if !ok { - return nil, fmt.Errorf("missing 'key' configuration") - } - - endpoint, ok := conf["endpoint"] - if !ok { - endpoint = os.Getenv("AWS_S3_ENDPOINT") - } - - regionName, ok := conf["region"] - if !ok { - regionName = os.Getenv("AWS_DEFAULT_REGION") - if regionName == "" { - return nil, fmt.Errorf( - "missing 'region' configuration or AWS_DEFAULT_REGION environment variable") - } - } - - serverSideEncryption := false - if raw, ok := conf["encrypt"]; ok { - v, err := strconv.ParseBool(raw) - if err != nil { - return nil, fmt.Errorf( - "'encrypt' field couldn't be parsed as bool: %s", err) - } - - serverSideEncryption = v - } - - acl := "" - if raw, ok := conf["acl"]; ok { - acl = raw - } - kmsKeyID := conf["kms_key_id"] - - var errs []error - creds, err := terraformAws.GetCredentials(&terraformAws.Config{ - AccessKey: conf["access_key"], - SecretKey: conf["secret_key"], - Token: 
conf["token"], - Profile: conf["profile"], - CredsFilename: conf["shared_credentials_file"], - AssumeRoleARN: conf["role_arn"], - }) - if err != nil { - return nil, err - } - - // Call Get to check for credential provider. If nothing found, we'll get an - // error, and we can present it nicely to the user - _, err = creds.Get() - if err != nil { - if awsErr, ok := err.(awserr.Error); ok && awsErr.Code() == "NoCredentialProviders" { - errs = append(errs, fmt.Errorf(`No valid credential sources found for AWS S3 remote. -Please see https://www.terraform.io/docs/state/remote/s3.html for more information on -providing credentials for the AWS S3 remote`)) - } else { - errs = append(errs, fmt.Errorf("Error loading credentials for AWS S3 remote: %s", err)) - } - return nil, &multierror.Error{Errors: errs} - } - - awsConfig := &aws.Config{ - Credentials: creds, - Endpoint: aws.String(endpoint), - Region: aws.String(regionName), - HTTPClient: cleanhttp.DefaultClient(), - } - sess := session.New(awsConfig) - nativeClient := s3.New(sess) - dynClient := dynamodb.New(sess) - - return &S3Client{ - nativeClient: nativeClient, - bucketName: bucketName, - keyName: keyName, - serverSideEncryption: serverSideEncryption, - acl: acl, - kmsKeyID: kmsKeyID, - dynClient: dynClient, - lockTable: conf["lock_table"], - }, nil -} - -type S3Client struct { - nativeClient *s3.S3 +type RemoteClient struct { + s3Client *s3.S3 + dynClient *dynamodb.DynamoDB bucketName string - keyName string + path string serverSideEncryption bool acl string kmsKeyID string - dynClient *dynamodb.DynamoDB lockTable string } -func (c *S3Client) Get() (*Payload, error) { - output, err := c.nativeClient.GetObject(&s3.GetObjectInput{ +func (c *RemoteClient) Get() (*remote.Payload, error) { + output, err := c.s3Client.GetObject(&s3.GetObjectInput{ Bucket: &c.bucketName, - Key: &c.keyName, + Key: &c.path, }) if err != nil { @@ -148,7 +53,7 @@ func (c *S3Client) Get() (*Payload, error) { return nil, fmt.Errorf("Failed to read remote state: %s", err) } - payload := &Payload{ + payload := &remote.Payload{ Data: buf.Bytes(), } @@ -160,7 +65,7 @@ func (c *S3Client) Get() (*Payload, error) { return payload, nil } -func (c *S3Client) Put(data []byte) error { +func (c *RemoteClient) Put(data []byte) error { contentType := "application/json" contentLength := int64(len(data)) @@ -169,7 +74,7 @@ func (c *S3Client) Put(data []byte) error { ContentLength: &contentLength, Body: bytes.NewReader(data), Bucket: &c.bucketName, - Key: &c.keyName, + Key: &c.path, } if c.serverSideEncryption { @@ -187,28 +92,28 @@ func (c *S3Client) Put(data []byte) error { log.Printf("[DEBUG] Uploading remote state to S3: %#v", i) - if _, err := c.nativeClient.PutObject(i); err == nil { + if _, err := c.s3Client.PutObject(i); err == nil { return nil } else { return fmt.Errorf("Failed to upload state: %v", err) } } -func (c *S3Client) Delete() error { - _, err := c.nativeClient.DeleteObject(&s3.DeleteObjectInput{ +func (c *RemoteClient) Delete() error { + _, err := c.s3Client.DeleteObject(&s3.DeleteObjectInput{ Bucket: &c.bucketName, - Key: &c.keyName, + Key: &c.path, }) return err } -func (c *S3Client) Lock(info *state.LockInfo) (string, error) { +func (c *RemoteClient) Lock(info *state.LockInfo) (string, error) { if c.lockTable == "" { return "", nil } - stateName := fmt.Sprintf("%s/%s", c.bucketName, c.keyName) + stateName := fmt.Sprintf("%s/%s", c.bucketName, c.path) info.Path = stateName if info.ID == "" { @@ -245,10 +150,10 @@ func (c *S3Client) Lock(info *state.LockInfo) 
(string, error) { return info.ID, nil } -func (c *S3Client) getLockInfo() (*state.LockInfo, error) { +func (c *RemoteClient) getLockInfo() (*state.LockInfo, error) { getParams := &dynamodb.GetItemInput{ Key: map[string]*dynamodb.AttributeValue{ - "LockID": {S: aws.String(fmt.Sprintf("%s/%s", c.bucketName, c.keyName))}, + "LockID": {S: aws.String(fmt.Sprintf("%s/%s", c.bucketName, c.path))}, }, ProjectionExpression: aws.String("LockID, Info"), TableName: aws.String(c.lockTable), @@ -273,7 +178,7 @@ func (c *S3Client) getLockInfo() (*state.LockInfo, error) { return lockInfo, nil } -func (c *S3Client) Unlock(id string) error { +func (c *RemoteClient) Unlock(id string) error { if c.lockTable == "" { return nil } @@ -297,7 +202,7 @@ func (c *S3Client) Unlock(id string) error { params := &dynamodb.DeleteItemInput{ Key: map[string]*dynamodb.AttributeValue{ - "LockID": {S: aws.String(fmt.Sprintf("%s/%s", c.bucketName, c.keyName))}, + "LockID": {S: aws.String(fmt.Sprintf("%s/%s", c.bucketName, c.path))}, }, TableName: aws.String(c.lockTable), } diff --git a/backend/remote-state/s3/client_test.go b/backend/remote-state/s3/client_test.go new file mode 100644 index 0000000000..0cef7c9edc --- /dev/null +++ b/backend/remote-state/s3/client_test.go @@ -0,0 +1,76 @@ +package s3 + +import ( + "fmt" + "testing" + "time" + + "github.com/hashicorp/terraform/backend" + "github.com/hashicorp/terraform/state/remote" +) + +func TestRemoteClient_impl(t *testing.T) { + var _ remote.Client = new(RemoteClient) + var _ remote.ClientLocker = new(RemoteClient) +} + +func TestRemoteClient(t *testing.T) { + testACC(t) + + bucketName := fmt.Sprintf("terraform-remote-s3-test-%x", time.Now().Unix()) + keyName := "testState" + + b := backend.TestBackendConfig(t, New(), map[string]interface{}{ + "bucket": bucketName, + "key": keyName, + "encrypt": true, + }).(*Backend) + + state, err := b.State(backend.DefaultStateName) + if err != nil { + t.Fatal(err) + } + + createS3Bucket(t, b.s3Client, bucketName) + defer deleteS3Bucket(t, b.s3Client, bucketName) + + remote.TestClient(t, state.(*remote.State).Client) +} + +func TestRemoteClientLocks(t *testing.T) { + testACC(t) + + bucketName := fmt.Sprintf("terraform-remote-s3-test-%x", time.Now().Unix()) + keyName := "testState" + + b1 := backend.TestBackendConfig(t, New(), map[string]interface{}{ + "bucket": bucketName, + "key": keyName, + "encrypt": true, + "lock_table": bucketName, + }).(*Backend) + + b2 := backend.TestBackendConfig(t, New(), map[string]interface{}{ + "bucket": bucketName, + "key": keyName, + "encrypt": true, + "lock_table": bucketName, + }).(*Backend) + + s1, err := b1.State(backend.DefaultStateName) + if err != nil { + t.Fatal(err) + } + + s2, err := b2.State(backend.DefaultStateName) + if err != nil { + t.Fatal(err) + } + + createS3Bucket(t, b1.s3Client, bucketName) + defer deleteS3Bucket(t, b1.s3Client, bucketName) + createDynamoDBTable(t, b1.dynClient, bucketName) + defer deleteDynamoDBTable(t, b1.dynClient, bucketName) + + remote.TestRemoteLocks(t, s1.(*remote.State).Client, s2.(*remote.State).Client) +} diff --git a/backend/testing.go b/backend/testing.go index 5298131cfe..936f1ddfa2 100644 --- a/backend/testing.go +++ b/backend/testing.go @@ -6,6 +6,7 @@ import ( "testing" "github.com/hashicorp/terraform/config" + "github.com/hashicorp/terraform/state" "github.com/hashicorp/terraform/terraform" ) @@ -40,8 +41,15 @@ func TestBackendConfig(t *testing.T, b Backend, c map[string]interface{}) Backen // assumed to already be configured. 
This will test state functionality.
 // If the backend reports it doesn't support multi-state by returning the
 // error ErrNamedStatesNotSupported, then it will not test that.
-func TestBackend(t *testing.T, b Backend) {
-	testBackendStates(t, b)
+//
+// If you want to test locking, two backends must be given. If b2 is nil,
+// then state locking won't be tested.
+func TestBackend(t *testing.T, b1, b2 Backend) {
+	testBackendStates(t, b1)
+
+	if b2 != nil {
+		testBackendStateLock(t, b1, b2)
+	}
 }
 
 func testBackendStates(t *testing.T, b Backend) {
@@ -57,53 +65,109 @@ func testBackendStates(t *testing.T, b Backend) {
 	}
 
 	// Create a couple states
-	fooState, err := b.State("foo")
+	foo, err := b.State("foo")
 	if err != nil {
 		t.Fatalf("error: %s", err)
 	}
-	if err := fooState.RefreshState(); err != nil {
+	if err := foo.RefreshState(); err != nil {
 		t.Fatalf("bad: %s", err)
 	}
-	if v := fooState.State(); v.HasResources() {
+	if v := foo.State(); v.HasResources() {
 		t.Fatalf("should be empty: %s", v)
 	}
 
-	barState, err := b.State("bar")
+	bar, err := b.State("bar")
 	if err != nil {
 		t.Fatalf("error: %s", err)
 	}
-	if err := barState.RefreshState(); err != nil {
+	if err := bar.RefreshState(); err != nil {
 		t.Fatalf("bad: %s", err)
 	}
-	if v := barState.State(); v.HasResources() {
+	if v := bar.State(); v.HasResources() {
 		t.Fatalf("should be empty: %s", v)
 	}
 
-	// Verify they are distinct states
+	// Verify they are distinct states that can be read back from storage
 	{
-		s := barState.State()
-		s.Lineage = "bar"
-		if err := barState.WriteState(s); err != nil {
+		// start with a fresh state, and record the lineage being
+		// written to "bar"
+		barState := terraform.NewState()
+		barLineage := barState.Lineage
+
+		// the foo lineage should be distinct from bar, and unchanged after
+		// modifying bar
+		fooState := terraform.NewState()
+		fooLineage := fooState.Lineage
+
+		// write a known state to foo
+		if err := foo.WriteState(fooState); err != nil {
+			t.Fatal("error writing foo state:", err)
+		}
+		if err := foo.PersistState(); err != nil {
+			t.Fatal("error persisting foo state:", err)
+		}
+
+		// write a distinct known state to bar
+		if err := bar.WriteState(barState); err != nil {
 			t.Fatalf("bad: %s", err)
 		}
-		if err := barState.PersistState(); err != nil {
+		if err := bar.PersistState(); err != nil {
 			t.Fatalf("bad: %s", err)
 		}
 
-		if err := fooState.RefreshState(); err != nil {
-			t.Fatalf("bad: %s", err)
+		// verify that foo is unchanged with the existing state manager
+		if err := foo.RefreshState(); err != nil {
+			t.Fatal("error refreshing foo:", err)
 		}
-		if v := fooState.State(); v.Lineage == "bar" {
-			t.Fatalf("bad: %#v", v)
+		fooState = foo.State()
+		switch {
+		case fooState == nil:
+			t.Fatal("nil state read from foo")
+		case fooState.Lineage == barLineage:
+			t.Fatalf("bar lineage read from foo: %#v", fooState)
+		case fooState.Lineage != fooLineage:
+			t.Fatal("foo lineage altered")
+		}
+
+		// fetch foo again from the backend
+		foo, err = b.State("foo")
+		if err != nil {
+			t.Fatal("error re-fetching state:", err)
+		}
+		if err := foo.RefreshState(); err != nil {
+			t.Fatal("error refreshing foo:", err)
+		}
+		fooState = foo.State()
+		switch {
+		case fooState == nil:
+			t.Fatal("nil state read from foo")
+		case fooState.Lineage != fooLineage:
+			t.Fatal("incorrect state returned from backend")
+		}
+
+		// fetch the bar again from the backend
+		bar, err = b.State("bar")
+		if err != nil {
+			t.Fatal("error re-fetching state:", err)
+		}
+		if err := bar.RefreshState(); err != nil {
+			t.Fatal("error refreshing bar:", err)
+		}
+		barState = bar.State()
+		switch {
+		case barState == nil:
+			t.Fatal("nil state read from bar")
+		case barState.Lineage != barLineage:
+			t.Fatal("incorrect state returned from backend")
 		}
 	}
 
 	// Verify we can now list them
 	{
+		// we determined earlier that named states are supported
 		states, err := b.States()
-		if err == ErrNamedStatesNotSupported {
-			t.Logf("TestBackend: named states not supported in %T, skipping", b)
-			return
+		if err != nil {
+			t.Fatal(err)
 		}
 
 		sort.Strings(states)
@@ -138,3 +202,77 @@ func testBackendStates(t *testing.T, b Backend) {
 	}
 	}
 }
+
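The testBackendStateLock helper that follows discovers optional locking support at runtime by type-asserting the state manager against state.Locker. A generic sketch of that feature-detection idiom (the interfaces and in-memory type below are simplified stand-ins, not the real ones):

```go
package main

import "fmt"

// StateManager and Locker are simplified stand-ins for the real interfaces;
// locking is an optional capability discovered with a type assertion.
type StateManager interface{ RefreshState() error }

type Locker interface {
	Lock(info string) (string, error)
	Unlock(id string) error
}

type inMemory struct{ locked bool }

func (m *inMemory) RefreshState() error { return nil }

func (m *inMemory) Lock(info string) (string, error) {
	if m.locked {
		return "", fmt.Errorf("state already locked")
	}
	m.locked = true
	return "lock-1", nil
}

func (m *inMemory) Unlock(id string) error {
	m.locked = false
	return nil
}

func main() {
	var mgr StateManager = &inMemory{}
	if locker, ok := mgr.(Locker); ok {
		id, _ := locker.Lock(`{"Operation":"test"}`)
		fmt.Println("locked with ID", id)
		locker.Unlock(id)
	} else {
		fmt.Println("backend does not support locking")
	}
}
```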
+func testBackendStateLock(t *testing.T, b1, b2 Backend) {
+	// Get the default state for each
+	b1StateMgr, err := b1.State(DefaultStateName)
+	if err != nil {
+		t.Fatalf("error: %s", err)
+	}
+	if err := b1StateMgr.RefreshState(); err != nil {
+		t.Fatalf("bad: %s", err)
+	}
+
+	// Fast exit if this doesn't support locking at all
+	if _, ok := b1StateMgr.(state.Locker); !ok {
+		t.Logf("TestBackend: backend %T doesn't support state locking, not testing", b1)
+		return
+	}
+
+	t.Logf("TestBackend: testing state locking for %T", b1)
+
+	b2StateMgr, err := b2.State(DefaultStateName)
+	if err != nil {
+		t.Fatalf("error: %s", err)
+	}
+	if err := b2StateMgr.RefreshState(); err != nil {
+		t.Fatalf("bad: %s", err)
+	}
+
+	// Reassign so it's obvious what's happening
+	lockerA := b1StateMgr.(state.Locker)
+	lockerB := b2StateMgr.(state.Locker)
+
+	infoA := state.NewLockInfo()
+	infoA.Operation = "test"
+	infoA.Who = "clientA"
+
+	infoB := state.NewLockInfo()
+	infoB.Operation = "test"
+	infoB.Who = "clientB"
+
+	lockIDA, err := lockerA.Lock(infoA)
+	if err != nil {
+		t.Fatal("unable to get initial lock:", err)
+	}
+
+	// If the lock ID is blank, assume locking is disabled
+	if lockIDA == "" {
+		t.Logf("TestBackend: %T: empty string returned for lock, assuming disabled", b1)
+		return
+	}
+
+	_, err = lockerB.Lock(infoB)
+	if err == nil {
+		lockerA.Unlock(lockIDA)
+		t.Fatal("client B obtained lock while held by client A")
+	}
+
+	if err := lockerA.Unlock(lockIDA); err != nil {
+		t.Fatal("error unlocking client A:", err)
+	}
+
+	lockIDB, err := lockerB.Lock(infoB)
+	if err != nil {
+		t.Fatal("unable to obtain lock from client B:", err)
+	}
+
+	if lockIDB == lockIDA {
+		t.Fatalf("duplicate lock IDs: %q", lockIDB)
+	}
+
+	if err = lockerB.Unlock(lockIDB); err != nil {
+		t.Fatal("error unlocking client B:", err)
+	}
+
+}
diff --git a/builtin/providers/alicloud/validators.go b/builtin/providers/alicloud/validators.go
index 7eb85ed431..9c7fec01af 100644
--- a/builtin/providers/alicloud/validators.go
+++ b/builtin/providers/alicloud/validators.go
@@ -2,26 +2,17 @@ package alicloud
 
 import (
 	"fmt"
-	"net"
-	"strconv"
+	"regexp"
 	"strings"
 
 	"github.com/denverdino/aliyungo/common"
 	"github.com/denverdino/aliyungo/ecs"
-	"github.com/denverdino/aliyungo/slb"
-	"regexp"
+	"github.com/hashicorp/terraform/helper/validation"
 )
 
 // common
 
 func validateInstancePort(v interface{}, k string) (ws []string, errors []error) {
-	value := v.(int)
-	if value < 1 || value > 65535 {
-		errors = append(errors, fmt.Errorf(
-			"%q must be a valid instance port between 1 and 65535",
-			k))
-		return
-	}
-	return
+	return validation.IntBetween(1, 65535)(v, k)
 }
 
 func validateInstanceProtocol(v interface{}, k string) (ws []string, errors []error) {
@@ -37,12 +28,11 @@ func validateInstanceProtocol(v interface{}, k string) (ws []string, errors []er
 
 // ecs
 
 func validateDiskCategory(v interface{}, k string) (ws []string, errors []error) {
-	category := ecs.DiskCategory(v.(string))
-	if category != 
ecs.DiskCategoryCloud && category != ecs.DiskCategoryCloudEfficiency && category != ecs.DiskCategoryCloudSSD { - errors = append(errors, fmt.Errorf("%s must be one of %s %s %s", k, ecs.DiskCategoryCloud, ecs.DiskCategoryCloudEfficiency, ecs.DiskCategoryCloudSSD)) - } - - return + return validation.StringInSlice([]string{ + string(ecs.DiskCategoryCloud), + string(ecs.DiskCategoryCloudEfficiency), + string(ecs.DiskCategoryCloudSSD), + }, false)(v, k) } func validateInstanceName(v interface{}, k string) (ws []string, errors []error) { @@ -59,12 +49,7 @@ func validateInstanceName(v interface{}, k string) (ws []string, errors []error) } func validateInstanceDescription(v interface{}, k string) (ws []string, errors []error) { - value := v.(string) - if len(value) < 2 || len(value) > 256 { - errors = append(errors, fmt.Errorf("%q cannot be longer than 256 characters", k)) - - } - return + return validation.StringLenBetween(2, 256)(v, k) } func validateDiskName(v interface{}, k string) (ws []string, errors []error) { @@ -86,12 +71,7 @@ func validateDiskName(v interface{}, k string) (ws []string, errors []error) { } func validateDiskDescription(v interface{}, k string) (ws []string, errors []error) { - value := v.(string) - if len(value) < 2 || len(value) > 256 { - errors = append(errors, fmt.Errorf("%q cannot be longer than 256 characters", k)) - - } - return + return validation.StringLenBetween(2, 128)(v, k) } //security group @@ -109,225 +89,114 @@ func validateSecurityGroupName(v interface{}, k string) (ws []string, errors []e } func validateSecurityGroupDescription(v interface{}, k string) (ws []string, errors []error) { - value := v.(string) - if len(value) < 2 || len(value) > 256 { - errors = append(errors, fmt.Errorf("%q cannot be longer than 256 characters", k)) - - } - return + return validation.StringLenBetween(2, 256)(v, k) } func validateSecurityRuleType(v interface{}, k string) (ws []string, errors []error) { - rt := GroupRuleDirection(v.(string)) - if rt != GroupRuleIngress && rt != GroupRuleEgress { - errors = append(errors, fmt.Errorf("%s must be one of %s %s", k, GroupRuleIngress, GroupRuleEgress)) - } - - return + return validation.StringInSlice([]string{ + string(GroupRuleIngress), + string(GroupRuleEgress), + }, false)(v, k) } func validateSecurityRuleIpProtocol(v interface{}, k string) (ws []string, errors []error) { - pt := GroupRuleIpProtocol(v.(string)) - if pt != GroupRuleTcp && pt != GroupRuleUdp && pt != GroupRuleIcmp && pt != GroupRuleGre && pt != GroupRuleAll { - errors = append(errors, fmt.Errorf("%s must be one of %s %s %s %s %s", k, - GroupRuleTcp, GroupRuleUdp, GroupRuleIcmp, GroupRuleGre, GroupRuleAll)) - } - - return + return validation.StringInSlice([]string{ + string(GroupRuleTcp), + string(GroupRuleUdp), + string(GroupRuleIcmp), + string(GroupRuleGre), + string(GroupRuleAll), + }, false)(v, k) } func validateSecurityRuleNicType(v interface{}, k string) (ws []string, errors []error) { - pt := GroupRuleNicType(v.(string)) - if pt != GroupRuleInternet && pt != GroupRuleIntranet { - errors = append(errors, fmt.Errorf("%s must be one of %s %s", k, GroupRuleInternet, GroupRuleIntranet)) - } - - return + return validation.StringInSlice([]string{ + string(GroupRuleInternet), + string(GroupRuleIntranet), + }, false)(v, k) } func validateSecurityRulePolicy(v interface{}, k string) (ws []string, errors []error) { - pt := GroupRulePolicy(v.(string)) - if pt != GroupRulePolicyAccept && pt != GroupRulePolicyDrop { - errors = append(errors, fmt.Errorf("%s must be one of %s 
%s", k, GroupRulePolicyAccept, GroupRulePolicyDrop)) - } - - return + return validation.StringInSlice([]string{ + string(GroupRulePolicyAccept), + string(GroupRulePolicyDrop), + }, false)(v, k) } func validateSecurityPriority(v interface{}, k string) (ws []string, errors []error) { - value := v.(int) - if value < 1 || value > 100 { - errors = append(errors, fmt.Errorf( - "%q must be a valid authorization policy priority between 1 and 100", - k)) - return - } - return + return validation.IntBetween(1, 100)(v, k) } // validateCIDRNetworkAddress ensures that the string value is a valid CIDR that // represents a network address - it adds an error otherwise func validateCIDRNetworkAddress(v interface{}, k string) (ws []string, errors []error) { - value := v.(string) - _, ipnet, err := net.ParseCIDR(value) - if err != nil { - errors = append(errors, fmt.Errorf( - "%q must contain a valid CIDR, got error parsing: %s", k, err)) - return - } - - if ipnet == nil || value != ipnet.String() { - errors = append(errors, fmt.Errorf( - "%q must contain a valid network CIDR, expected %q, got %q", - k, ipnet, value)) - } - - return + return validation.CIDRNetwork(0, 32)(v, k) } func validateRouteEntryNextHopType(v interface{}, k string) (ws []string, errors []error) { - nht := ecs.NextHopType(v.(string)) - if nht != ecs.NextHopIntance && nht != ecs.NextHopTunnel { - errors = append(errors, fmt.Errorf("%s must be one of %s %s", k, - ecs.NextHopIntance, ecs.NextHopTunnel)) - } - - return + return validation.StringInSlice([]string{ + string(ecs.NextHopIntance), + string(ecs.NextHopTunnel), + }, false)(v, k) } func validateSwitchCIDRNetworkAddress(v interface{}, k string) (ws []string, errors []error) { - value := v.(string) - _, ipnet, err := net.ParseCIDR(value) - if err != nil { - errors = append(errors, fmt.Errorf( - "%q must contain a valid CIDR, got error parsing: %s", k, err)) - return - } - - if ipnet == nil || value != ipnet.String() { - errors = append(errors, fmt.Errorf( - "%q must contain a valid network CIDR, expected %q, got %q", - k, ipnet, value)) - return - } - - mark, _ := strconv.Atoi(strings.Split(ipnet.String(), "/")[1]) - if mark < 16 || mark > 29 { - errors = append(errors, fmt.Errorf( - "%q must contain a network CIDR which mark between 16 and 29", - k)) - } - - return + return validation.CIDRNetwork(16, 29)(v, k) } // validateIoOptimized ensures that the string value is a valid IoOptimized that // represents a IoOptimized - it adds an error otherwise func validateIoOptimized(v interface{}, k string) (ws []string, errors []error) { - if value := v.(string); value != "" { - ioOptimized := ecs.IoOptimized(value) - if ioOptimized != ecs.IoOptimizedNone && - ioOptimized != ecs.IoOptimizedOptimized { - errors = append(errors, fmt.Errorf( - "%q must contain a valid IoOptimized, expected %s or %s, got %q", - k, ecs.IoOptimizedNone, ecs.IoOptimizedOptimized, ioOptimized)) - } - } - - return + return validation.StringInSlice([]string{ + "", + string(ecs.IoOptimizedNone), + string(ecs.IoOptimizedOptimized), + }, false)(v, k) } // validateInstanceNetworkType ensures that the string value is a classic or vpc func validateInstanceNetworkType(v interface{}, k string) (ws []string, errors []error) { - if value := v.(string); value != "" { - network := InstanceNetWork(value) - if network != ClassicNet && - network != VpcNet { - errors = append(errors, fmt.Errorf( - "%q must contain a valid InstanceNetworkType, expected %s or %s, go %q", - k, ClassicNet, VpcNet, network)) - } - } - return + return 
validation.StringInSlice([]string{ + "", + string(ClassicNet), + string(VpcNet), + }, false)(v, k) } func validateInstanceChargeType(v interface{}, k string) (ws []string, errors []error) { - if value := v.(string); value != "" { - chargeType := common.InstanceChargeType(value) - if chargeType != common.PrePaid && - chargeType != common.PostPaid { - errors = append(errors, fmt.Errorf( - "%q must contain a valid InstanceChargeType, expected %s or %s, got %q", - k, common.PrePaid, common.PostPaid, chargeType)) - } - } - - return + return validation.StringInSlice([]string{ + "", + string(common.PrePaid), + string(common.PostPaid), + }, false)(v, k) } func validateInternetChargeType(v interface{}, k string) (ws []string, errors []error) { - if value := v.(string); value != "" { - chargeType := common.InternetChargeType(value) - if chargeType != common.PayByBandwidth && - chargeType != common.PayByTraffic { - errors = append(errors, fmt.Errorf( - "%q must contain a valid InstanceChargeType, expected %s or %s, got %q", - k, common.PayByBandwidth, common.PayByTraffic, chargeType)) - } - } - - return + return validation.StringInSlice([]string{ + "", + string(common.PayByBandwidth), + string(common.PayByTraffic), + }, false)(v, k) } func validateInternetMaxBandWidthOut(v interface{}, k string) (ws []string, errors []error) { - value := v.(int) - if value < 1 || value > 100 { - errors = append(errors, fmt.Errorf( - "%q must be a valid internet bandwidth out between 1 and 1000", - k)) - return - } - return + return validation.IntBetween(1, 100)(v, k) } // SLB func validateSlbName(v interface{}, k string) (ws []string, errors []error) { - if value := v.(string); value != "" { - if len(value) < 1 || len(value) > 80 { - errors = append(errors, fmt.Errorf( - "%q must be a valid load balancer name characters between 1 and 80", - k)) - return - } - } - - return + return validation.StringLenBetween(0, 80)(v, k) } func validateSlbInternetChargeType(v interface{}, k string) (ws []string, errors []error) { - if value := v.(string); value != "" { - chargeType := common.InternetChargeType(value) - - if chargeType != "paybybandwidth" && - chargeType != "paybytraffic" { - errors = append(errors, fmt.Errorf( - "%q must contain a valid InstanceChargeType, expected %s or %s, got %q", - k, "paybybandwidth", "paybytraffic", value)) - } - } - - return + return validation.StringInSlice([]string{ + "paybybandwidth", + "paybytraffic", + }, false)(v, k) } func validateSlbBandwidth(v interface{}, k string) (ws []string, errors []error) { - value := v.(int) - if value < 1 || value > 1000 { - errors = append(errors, fmt.Errorf( - "%q must be a valid load balancer bandwidth between 1 and 1000", - k)) - return - } - return + return validation.IntBetween(1, 1000)(v, k) } func validateSlbListenerBandwidth(v interface{}, k string) (ws []string, errors []error) { @@ -342,67 +211,23 @@ func validateSlbListenerBandwidth(v interface{}, k string) (ws []string, errors } func validateSlbListenerScheduler(v interface{}, k string) (ws []string, errors []error) { - if value := v.(string); value != "" { - scheduler := slb.SchedulerType(value) - - if scheduler != "wrr" && scheduler != "wlc" { - errors = append(errors, fmt.Errorf( - "%q must contain a valid SchedulerType, expected %s or %s, got %q", - k, "wrr", "wlc", value)) - } - } - - return + return validation.StringInSlice([]string{"wrr", "wlc"}, false)(v, k) } func validateSlbListenerStickySession(v interface{}, k string) (ws []string, errors []error) { - if value := v.(string); value != 
"" { - flag := slb.FlagType(value) - - if flag != "on" && flag != "off" { - errors = append(errors, fmt.Errorf( - "%q must contain a valid StickySession, expected %s or %s, got %q", - k, "on", "off", value)) - } - } - return + return validation.StringInSlice([]string{"", "on", "off"}, false)(v, k) } func validateSlbListenerStickySessionType(v interface{}, k string) (ws []string, errors []error) { - if value := v.(string); value != "" { - flag := slb.StickySessionType(value) - - if flag != "insert" && flag != "server" { - errors = append(errors, fmt.Errorf( - "%q must contain a valid StickySessionType, expected %s or %s, got %q", - k, "insert", "server", value)) - } - } - return + return validation.StringInSlice([]string{"", "insert", "server"}, false)(v, k) } func validateSlbListenerCookie(v interface{}, k string) (ws []string, errors []error) { - if value := v.(string); value != "" { - flag := slb.StickySessionType(value) - - if flag != "insert" && flag != "server" { - errors = append(errors, fmt.Errorf( - "%q must contain a valid StickySessionType, expected %s or %s, got %q", - k, "insert", "server", value)) - } - } - return + return validation.StringInSlice([]string{"", "insert", "server"}, false)(v, k) } func validateSlbListenerPersistenceTimeout(v interface{}, k string) (ws []string, errors []error) { - value := v.(int) - if value < 0 || value > 86400 { - errors = append(errors, fmt.Errorf( - "%q must be a valid load balancer persistence timeout between 0 and 86400", - k)) - return - } - return + return validation.IntBetween(0, 86400)(v, k) } //data source validate func @@ -419,19 +244,14 @@ func validateNameRegex(v interface{}, k string) (ws []string, errors []error) { } func validateImageOwners(v interface{}, k string) (ws []string, errors []error) { - if value := v.(string); value != "" { - owners := ecs.ImageOwnerAlias(value) - if owners != ecs.ImageOwnerSystem && - owners != ecs.ImageOwnerSelf && - owners != ecs.ImageOwnerOthers && - owners != ecs.ImageOwnerMarketplace && - owners != ecs.ImageOwnerDefault { - errors = append(errors, fmt.Errorf( - "%q must contain a valid Image owner , expected %s, %s, %s, %s or %s, got %q", - k, ecs.ImageOwnerSystem, ecs.ImageOwnerSelf, ecs.ImageOwnerOthers, ecs.ImageOwnerMarketplace, ecs.ImageOwnerDefault, owners)) - } - } - return + return validation.StringInSlice([]string{ + "", + string(ecs.ImageOwnerSystem), + string(ecs.ImageOwnerSelf), + string(ecs.ImageOwnerOthers), + string(ecs.ImageOwnerMarketplace), + string(ecs.ImageOwnerDefault), + }, false)(v, k) } func validateRegion(v interface{}, k string) (ws []string, errors []error) { diff --git a/builtin/providers/arukas/provider.go b/builtin/providers/arukas/provider.go index d977e68a84..81f4a32641 100644 --- a/builtin/providers/arukas/provider.go +++ b/builtin/providers/arukas/provider.go @@ -35,7 +35,7 @@ func Provider() terraform.ResourceProvider { "timeout": &schema.Schema{ Type: schema.TypeInt, Optional: true, - DefaultFunc: schema.EnvDefaultFunc(JSONTimeoutParamName, "600"), + DefaultFunc: schema.EnvDefaultFunc(JSONTimeoutParamName, "900"), }, }, ResourcesMap: map[string]*schema.Resource{ diff --git a/builtin/providers/arukas/resource_arukas_container.go b/builtin/providers/arukas/resource_arukas_container.go index bc2132815f..cb40dfc0f2 100644 --- a/builtin/providers/arukas/resource_arukas_container.go +++ b/builtin/providers/arukas/resource_arukas_container.go @@ -2,10 +2,11 @@ package arukas import ( "fmt" - API "github.com/arukasio/cli" - 
"github.com/hashicorp/terraform/helper/schema" "strings" - "time" + + API "github.com/arukasio/cli" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" ) func resourceArukasContainer() *schema.Resource { @@ -169,11 +170,27 @@ func resourceArukasContainerCreate(d *schema.ResourceData, meta interface{}) err return err } - if err := sleepUntilUp(client, appSet.Container.ID, client.Timeout); err != nil { + d.SetId(appSet.Container.ID) + + stateConf := &resource.StateChangeConf{ + Target: []string{"running"}, + Pending: []string{"stopped", "booting"}, + Timeout: client.Timeout, + Refresh: func() (interface{}, string, error) { + var container API.Container + err := client.Get(&container, fmt.Sprintf("/containers/%s", appSet.Container.ID)) + if err != nil { + return nil, "", err + } + + return container, container.StatusText, nil + }, + } + _, err := stateConf.WaitForState() + if err != nil { return err } - d.SetId(appSet.Container.ID) return resourceArukasContainerRead(d, meta) } @@ -270,24 +287,3 @@ func resourceArukasContainerDelete(d *schema.ResourceData, meta interface{}) err return nil } - -func sleepUntilUp(client *ArukasClient, containerID string, timeout time.Duration) error { - current := 0 * time.Second - interval := 5 * time.Second - for { - var container API.Container - if err := client.Get(&container, fmt.Sprintf("/containers/%s", containerID)); err != nil { - return err - } - - if container.IsRunning { - return nil - } - time.Sleep(interval) - current += interval - - if timeout > 0 && current > timeout { - return fmt.Errorf("Timeout: sleepUntilUp") - } - } -} diff --git a/builtin/providers/arukas/resource_arukas_container_test.go b/builtin/providers/arukas/resource_arukas_container_test.go index 88b69f2d87..3fabc9b0db 100644 --- a/builtin/providers/arukas/resource_arukas_container_test.go +++ b/builtin/providers/arukas/resource_arukas_container_test.go @@ -3,6 +3,7 @@ package arukas import ( "fmt" API "github.com/arukasio/cli" + "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" "testing" @@ -10,17 +11,21 @@ import ( func TestAccArukasContainer_Basic(t *testing.T) { var container API.Container + randString := acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum) + name := fmt.Sprintf("terraform_acc_test_%s", randString) + endpoint := fmt.Sprintf("terraform-acc-test-endpoint-%s", randString) + resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckArukasContainerDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccCheckArukasContainerConfig_basic, + Config: testAccCheckArukasContainerConfig_basic(randString), Check: resource.ComposeTestCheckFunc( testAccCheckArukasContainerExists("arukas_container.foobar", &container), resource.TestCheckResourceAttr( - "arukas_container.foobar", "name", "terraform_for_arukas_test_foobar"), + "arukas_container.foobar", "name", name), resource.TestCheckResourceAttr( "arukas_container.foobar", "image", "nginx:latest"), resource.TestCheckResourceAttr( @@ -28,7 +33,7 @@ func TestAccArukasContainer_Basic(t *testing.T) { resource.TestCheckResourceAttr( "arukas_container.foobar", "memory", "256"), resource.TestCheckResourceAttr( - "arukas_container.foobar", "endpoint", "terraform-for-arukas-test-endpoint"), + "arukas_container.foobar", "endpoint", endpoint), resource.TestCheckResourceAttr( "arukas_container.foobar", 
"ports.#", "1"), resource.TestCheckResourceAttr( @@ -51,17 +56,23 @@ func TestAccArukasContainer_Basic(t *testing.T) { func TestAccArukasContainer_Update(t *testing.T) { var container API.Container + randString := acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum) + name := fmt.Sprintf("terraform_acc_test_%s", randString) + updatedName := fmt.Sprintf("terraform_acc_test_update_%s", randString) + endpoint := fmt.Sprintf("terraform-acc-test-endpoint-%s", randString) + updatedEndpoint := fmt.Sprintf("terraform-acc-test-endpoint-update-%s", randString) + resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckArukasContainerDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccCheckArukasContainerConfig_basic, + Config: testAccCheckArukasContainerConfig_basic(randString), Check: resource.ComposeTestCheckFunc( testAccCheckArukasContainerExists("arukas_container.foobar", &container), resource.TestCheckResourceAttr( - "arukas_container.foobar", "name", "terraform_for_arukas_test_foobar"), + "arukas_container.foobar", "name", name), resource.TestCheckResourceAttr( "arukas_container.foobar", "image", "nginx:latest"), resource.TestCheckResourceAttr( @@ -69,7 +80,7 @@ func TestAccArukasContainer_Update(t *testing.T) { resource.TestCheckResourceAttr( "arukas_container.foobar", "memory", "256"), resource.TestCheckResourceAttr( - "arukas_container.foobar", "endpoint", "terraform-for-arukas-test-endpoint"), + "arukas_container.foobar", "endpoint", endpoint), resource.TestCheckResourceAttr( "arukas_container.foobar", "ports.#", "1"), resource.TestCheckResourceAttr( @@ -87,11 +98,11 @@ func TestAccArukasContainer_Update(t *testing.T) { ), }, resource.TestStep{ - Config: testAccCheckArukasContainerConfig_update, + Config: testAccCheckArukasContainerConfig_update(randString), Check: resource.ComposeTestCheckFunc( testAccCheckArukasContainerExists("arukas_container.foobar", &container), resource.TestCheckResourceAttr( - "arukas_container.foobar", "name", "terraform_for_arukas_test_foobar_upd"), + "arukas_container.foobar", "name", updatedName), resource.TestCheckResourceAttr( "arukas_container.foobar", "image", "nginx:latest"), resource.TestCheckResourceAttr( @@ -99,7 +110,7 @@ func TestAccArukasContainer_Update(t *testing.T) { resource.TestCheckResourceAttr( "arukas_container.foobar", "memory", "512"), resource.TestCheckResourceAttr( - "arukas_container.foobar", "endpoint", "terraform-for-arukas-test-endpoint-upd"), + "arukas_container.foobar", "endpoint", updatedEndpoint), resource.TestCheckResourceAttr( "arukas_container.foobar", "ports.#", "2"), resource.TestCheckResourceAttr( @@ -130,17 +141,20 @@ func TestAccArukasContainer_Update(t *testing.T) { func TestAccArukasContainer_Minimum(t *testing.T) { var container API.Container + randString := acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum) + name := fmt.Sprintf("terraform_acc_test_minimum_%s", randString) + resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckArukasContainerDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccCheckArukasContainerConfig_minimum, + Config: testAccCheckArukasContainerConfig_minimum(randString), Check: resource.ComposeTestCheckFunc( testAccCheckArukasContainerExists("arukas_container.foobar", &container), resource.TestCheckResourceAttr( - "arukas_container.foobar", "name", "terraform_for_arukas_test_foobar"), + 
"arukas_container.foobar", "name", name), resource.TestCheckResourceAttr( "arukas_container.foobar", "image", "nginx:latest"), resource.TestCheckResourceAttr( @@ -163,13 +177,15 @@ func TestAccArukasContainer_Minimum(t *testing.T) { func TestAccArukasContainer_Import(t *testing.T) { resourceName := "arukas_container.foobar" + randString := acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum) + resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckArukasContainerDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccCheckArukasContainerConfig_basic, + Config: testAccCheckArukasContainerConfig_basic(randString), }, resource.TestStep{ ResourceName: resourceName, @@ -227,13 +243,14 @@ func testAccCheckArukasContainerDestroy(s *terraform.State) error { return nil } -const testAccCheckArukasContainerConfig_basic = ` +func testAccCheckArukasContainerConfig_basic(randString string) string { + return fmt.Sprintf(` resource "arukas_container" "foobar" { - name = "terraform_for_arukas_test_foobar" + name = "terraform_acc_test_%s" image = "nginx:latest" instances = 1 memory = 256 - endpoint = "terraform-for-arukas-test-endpoint" + endpoint = "terraform-acc-test-endpoint-%s" ports = { protocol = "tcp" number = "80" @@ -242,15 +259,17 @@ resource "arukas_container" "foobar" { key = "key" value = "value" } -}` +}`, randString, randString) +} -const testAccCheckArukasContainerConfig_update = ` +func testAccCheckArukasContainerConfig_update(randString string) string { + return fmt.Sprintf(` resource "arukas_container" "foobar" { - name = "terraform_for_arukas_test_foobar_upd" + name = "terraform_acc_test_update_%s" image = "nginx:latest" instances = 2 memory = 512 - endpoint = "terraform-for-arukas-test-endpoint-upd" + endpoint = "terraform-acc-test-endpoint-update-%s" ports = { protocol = "tcp" number = "80" @@ -267,13 +286,16 @@ resource "arukas_container" "foobar" { key = "key_upd" value = "value_upd" } -}` +}`, randString, randString) +} -const testAccCheckArukasContainerConfig_minimum = ` +func testAccCheckArukasContainerConfig_minimum(randString string) string { + return fmt.Sprintf(` resource "arukas_container" "foobar" { - name = "terraform_for_arukas_test_foobar" + name = "terraform_acc_test_minimum_%s" image = "nginx:latest" ports = { number = "80" } -}` +}`, randString) +} diff --git a/builtin/providers/aws/auth_helpers.go b/builtin/providers/aws/auth_helpers.go index 3969175d1a..1a73c6e8b5 100644 --- a/builtin/providers/aws/auth_helpers.go +++ b/builtin/providers/aws/auth_helpers.go @@ -134,7 +134,7 @@ func GetCredentials(c *Config) (*awsCredentials.Credentials, error) { if usedEndpoint == "" { usedEndpoint = "default location" } - log.Printf("[WARN] Ignoring AWS metadata API endpoint at %s "+ + log.Printf("[INFO] Ignoring AWS metadata API endpoint at %s "+ "as it doesn't return any instance-id", usedEndpoint) } } diff --git a/builtin/providers/aws/auth_helpers_test.go b/builtin/providers/aws/auth_helpers_test.go index fb7dd68849..25120c43bd 100644 --- a/builtin/providers/aws/auth_helpers_test.go +++ b/builtin/providers/aws/auth_helpers_test.go @@ -1,7 +1,6 @@ package aws import ( - "bytes" "encoding/json" "fmt" "io/ioutil" @@ -10,13 +9,9 @@ import ( "net/http/httptest" "os" "testing" - "time" - "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awserr" - awsCredentials "github.com/aws/aws-sdk-go/aws/credentials" "github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds" - 
"github.com/aws/aws-sdk-go/aws/session" "github.com/aws/aws-sdk-go/service/iam" "github.com/aws/aws-sdk-go/service/sts" ) @@ -28,9 +23,14 @@ func TestAWSGetAccountInfo_shouldBeValid_fromEC2Role(t *testing.T) { awsTs := awsEnv(t) defer awsTs() - iamEndpoints := []*iamEndpoint{} - ts, iamConn, stsConn := getMockedAwsIamStsApi(iamEndpoints) - defer ts() + closeEmpty, emptySess, err := getMockedAwsApiSession("zero", []*awsMockEndpoint{}) + defer closeEmpty() + if err != nil { + t.Fatal(err) + } + + iamConn := iam.New(emptySess) + stsConn := sts.New(emptySess) part, id, err := GetAccountInfo(iamConn, stsConn, ec2rolecreds.ProviderName) if err != nil { @@ -55,14 +55,24 @@ func TestAWSGetAccountInfo_shouldBeValid_EC2RoleHasPriority(t *testing.T) { awsTs := awsEnv(t) defer awsTs() - iamEndpoints := []*iamEndpoint{ + iamEndpoints := []*awsMockEndpoint{ { - Request: &iamRequest{"POST", "/", "Action=GetUser&Version=2010-05-08"}, - Response: &iamResponse{200, iamResponse_GetUser_valid, "text/xml"}, + Request: &awsMockRequest{"POST", "/", "Action=GetUser&Version=2010-05-08"}, + Response: &awsMockResponse{200, iamResponse_GetUser_valid, "text/xml"}, }, } - ts, iamConn, stsConn := getMockedAwsIamStsApi(iamEndpoints) - defer ts() + closeIam, iamSess, err := getMockedAwsApiSession("IAM", iamEndpoints) + defer closeIam() + if err != nil { + t.Fatal(err) + } + iamConn := iam.New(iamSess) + closeSts, stsSess, err := getMockedAwsApiSession("STS", []*awsMockEndpoint{}) + defer closeSts() + if err != nil { + t.Fatal(err) + } + stsConn := sts.New(stsSess) part, id, err := GetAccountInfo(iamConn, stsConn, ec2rolecreds.ProviderName) if err != nil { @@ -81,15 +91,26 @@ func TestAWSGetAccountInfo_shouldBeValid_EC2RoleHasPriority(t *testing.T) { } func TestAWSGetAccountInfo_shouldBeValid_fromIamUser(t *testing.T) { - iamEndpoints := []*iamEndpoint{ + iamEndpoints := []*awsMockEndpoint{ { - Request: &iamRequest{"POST", "/", "Action=GetUser&Version=2010-05-08"}, - Response: &iamResponse{200, iamResponse_GetUser_valid, "text/xml"}, + Request: &awsMockRequest{"POST", "/", "Action=GetUser&Version=2010-05-08"}, + Response: &awsMockResponse{200, iamResponse_GetUser_valid, "text/xml"}, }, } - ts, iamConn, stsConn := getMockedAwsIamStsApi(iamEndpoints) - defer ts() + closeIam, iamSess, err := getMockedAwsApiSession("IAM", iamEndpoints) + defer closeIam() + if err != nil { + t.Fatal(err) + } + closeSts, stsSess, err := getMockedAwsApiSession("STS", []*awsMockEndpoint{}) + defer closeSts() + if err != nil { + t.Fatal(err) + } + + iamConn := iam.New(iamSess) + stsConn := sts.New(stsSess) part, id, err := GetAccountInfo(iamConn, stsConn, "") if err != nil { @@ -108,18 +129,32 @@ func TestAWSGetAccountInfo_shouldBeValid_fromIamUser(t *testing.T) { } func TestAWSGetAccountInfo_shouldBeValid_fromGetCallerIdentity(t *testing.T) { - iamEndpoints := []*iamEndpoint{ + iamEndpoints := []*awsMockEndpoint{ { - Request: &iamRequest{"POST", "/", "Action=GetUser&Version=2010-05-08"}, - Response: &iamResponse{403, iamResponse_GetUser_unauthorized, "text/xml"}, - }, - { - Request: &iamRequest{"POST", "/", "Action=GetCallerIdentity&Version=2011-06-15"}, - Response: &iamResponse{200, stsResponse_GetCallerIdentity_valid, "text/xml"}, + Request: &awsMockRequest{"POST", "/", "Action=GetUser&Version=2010-05-08"}, + Response: &awsMockResponse{403, iamResponse_GetUser_unauthorized, "text/xml"}, }, } - ts, iamConn, stsConn := getMockedAwsIamStsApi(iamEndpoints) - defer ts() + closeIam, iamSess, err := getMockedAwsApiSession("IAM", iamEndpoints) + defer 
closeIam() + if err != nil { + t.Fatal(err) + } + + stsEndpoints := []*awsMockEndpoint{ + { + Request: &awsMockRequest{"POST", "/", "Action=GetCallerIdentity&Version=2011-06-15"}, + Response: &awsMockResponse{200, stsResponse_GetCallerIdentity_valid, "text/xml"}, + }, + } + closeSts, stsSess, err := getMockedAwsApiSession("STS", stsEndpoints) + defer closeSts() + if err != nil { + t.Fatal(err) + } + + iamConn := iam.New(iamSess) + stsConn := sts.New(stsSess) part, id, err := GetAccountInfo(iamConn, stsConn, "") if err != nil { @@ -138,22 +173,36 @@ func TestAWSGetAccountInfo_shouldBeValid_fromGetCallerIdentity(t *testing.T) { } func TestAWSGetAccountInfo_shouldBeValid_fromIamListRoles(t *testing.T) { - iamEndpoints := []*iamEndpoint{ + iamEndpoints := []*awsMockEndpoint{ { - Request: &iamRequest{"POST", "/", "Action=GetUser&Version=2010-05-08"}, - Response: &iamResponse{403, iamResponse_GetUser_unauthorized, "text/xml"}, + Request: &awsMockRequest{"POST", "/", "Action=GetUser&Version=2010-05-08"}, + Response: &awsMockResponse{403, iamResponse_GetUser_unauthorized, "text/xml"}, }, { - Request: &iamRequest{"POST", "/", "Action=GetCallerIdentity&Version=2011-06-15"}, - Response: &iamResponse{403, stsResponse_GetCallerIdentity_unauthorized, "text/xml"}, - }, - { - Request: &iamRequest{"POST", "/", "Action=ListRoles&MaxItems=1&Version=2010-05-08"}, - Response: &iamResponse{200, iamResponse_ListRoles_valid, "text/xml"}, + Request: &awsMockRequest{"POST", "/", "Action=ListRoles&MaxItems=1&Version=2010-05-08"}, + Response: &awsMockResponse{200, iamResponse_ListRoles_valid, "text/xml"}, }, } - ts, iamConn, stsConn := getMockedAwsIamStsApi(iamEndpoints) - defer ts() + closeIam, iamSess, err := getMockedAwsApiSession("IAM", iamEndpoints) + defer closeIam() + if err != nil { + t.Fatal(err) + } + + stsEndpoints := []*awsMockEndpoint{ + { + Request: &awsMockRequest{"POST", "/", "Action=GetCallerIdentity&Version=2011-06-15"}, + Response: &awsMockResponse{403, stsResponse_GetCallerIdentity_unauthorized, "text/xml"}, + }, + } + closeSts, stsSess, err := getMockedAwsApiSession("STS", stsEndpoints) + defer closeSts() + if err != nil { + t.Fatal(err) + } + + iamConn := iam.New(iamSess) + stsConn := sts.New(stsSess) part, id, err := GetAccountInfo(iamConn, stsConn, "") if err != nil { @@ -172,18 +221,30 @@ func TestAWSGetAccountInfo_shouldBeValid_fromIamListRoles(t *testing.T) { } func TestAWSGetAccountInfo_shouldBeValid_federatedRole(t *testing.T) { - iamEndpoints := []*iamEndpoint{ + iamEndpoints := []*awsMockEndpoint{ { - Request: &iamRequest{"POST", "/", "Action=GetUser&Version=2010-05-08"}, - Response: &iamResponse{400, iamResponse_GetUser_federatedFailure, "text/xml"}, + Request: &awsMockRequest{"POST", "/", "Action=GetUser&Version=2010-05-08"}, + Response: &awsMockResponse{400, iamResponse_GetUser_federatedFailure, "text/xml"}, }, { - Request: &iamRequest{"POST", "/", "Action=ListRoles&MaxItems=1&Version=2010-05-08"}, - Response: &iamResponse{200, iamResponse_ListRoles_valid, "text/xml"}, + Request: &awsMockRequest{"POST", "/", "Action=ListRoles&MaxItems=1&Version=2010-05-08"}, + Response: &awsMockResponse{200, iamResponse_ListRoles_valid, "text/xml"}, }, } - ts, iamConn, stsConn := getMockedAwsIamStsApi(iamEndpoints) - defer ts() + closeIam, iamSess, err := getMockedAwsApiSession("IAM", iamEndpoints) + defer closeIam() + if err != nil { + t.Fatal(err) + } + + closeSts, stsSess, err := getMockedAwsApiSession("STS", []*awsMockEndpoint{}) + defer closeSts() + if err != nil { + t.Fatal(err) + } + + iamConn 
:= iam.New(iamSess) + stsConn := sts.New(stsSess) part, id, err := GetAccountInfo(iamConn, stsConn, "") if err != nil { @@ -202,18 +263,30 @@ func TestAWSGetAccountInfo_shouldBeValid_federatedRole(t *testing.T) { } func TestAWSGetAccountInfo_shouldError_unauthorizedFromIam(t *testing.T) { - iamEndpoints := []*iamEndpoint{ + iamEndpoints := []*awsMockEndpoint{ { - Request: &iamRequest{"POST", "/", "Action=GetUser&Version=2010-05-08"}, - Response: &iamResponse{403, iamResponse_GetUser_unauthorized, "text/xml"}, + Request: &awsMockRequest{"POST", "/", "Action=GetUser&Version=2010-05-08"}, + Response: &awsMockResponse{403, iamResponse_GetUser_unauthorized, "text/xml"}, }, { - Request: &iamRequest{"POST", "/", "Action=ListRoles&MaxItems=1&Version=2010-05-08"}, - Response: &iamResponse{403, iamResponse_ListRoles_unauthorized, "text/xml"}, + Request: &awsMockRequest{"POST", "/", "Action=ListRoles&MaxItems=1&Version=2010-05-08"}, + Response: &awsMockResponse{403, iamResponse_ListRoles_unauthorized, "text/xml"}, }, } - ts, iamConn, stsConn := getMockedAwsIamStsApi(iamEndpoints) - defer ts() + closeIam, iamSess, err := getMockedAwsApiSession("IAM", iamEndpoints) + defer closeIam() + if err != nil { + t.Fatal(err) + } + + closeSts, stsSess, err := getMockedAwsApiSession("STS", []*awsMockEndpoint{}) + defer closeSts() + if err != nil { + t.Fatal(err) + } + + iamConn := iam.New(iamSess) + stsConn := sts.New(stsSess) part, id, err := GetAccountInfo(iamConn, stsConn, "") if err == nil { @@ -697,51 +770,6 @@ func invalidAwsEnv(t *testing.T) func() { return ts.Close } -// getMockedAwsIamStsApi establishes a httptest server to simulate behaviour -// of a real AWS' IAM & STS server -func getMockedAwsIamStsApi(endpoints []*iamEndpoint) (func(), *iam.IAM, *sts.STS) { - ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { - buf := new(bytes.Buffer) - buf.ReadFrom(r.Body) - requestBody := buf.String() - - log.Printf("[DEBUG] Received API %q request to %q: %s", - r.Method, r.RequestURI, requestBody) - - for _, e := range endpoints { - if r.Method == e.Request.Method && r.RequestURI == e.Request.Uri && requestBody == e.Request.Body { - log.Printf("[DEBUG] Mock API responding with %d: %s", e.Response.StatusCode, e.Response.Body) - - w.WriteHeader(e.Response.StatusCode) - w.Header().Set("Content-Type", e.Response.ContentType) - w.Header().Set("X-Amzn-Requestid", "1b206dd1-f9a8-11e5-becf-051c60f11c4a") - w.Header().Set("Date", time.Now().Format(time.RFC1123)) - - fmt.Fprintln(w, e.Response.Body) - return - } - } - - w.WriteHeader(400) - return - })) - - sc := awsCredentials.NewStaticCredentials("accessKey", "secretKey", "") - - sess, err := session.NewSession(&aws.Config{ - Credentials: sc, - Region: aws.String("us-east-1"), - Endpoint: aws.String(ts.URL), - CredentialsChainVerboseErrors: aws.Bool(true), - }) - if err != nil { - panic(fmt.Sprintf("Error creating AWS Session: %s", err)) - } - iamConn := iam.New(sess) - stsConn := sts.New(sess) - return ts.Close, iamConn, stsConn -} - func getEnv() *currentEnv { // Grab any existing AWS keys and preserve. 
In some tests we'll unset these, so // we need to have them and restore them after @@ -790,23 +818,6 @@ const metadataApiRoutes = ` } ` -type iamEndpoint struct { - Request *iamRequest - Response *iamResponse -} - -type iamRequest struct { - Method string - Uri string - Body string -} - -type iamResponse struct { - StatusCode int - Body string - ContentType string -} - const iamResponse_GetUser_valid = ` diff --git a/builtin/providers/aws/cloudfront_distribution_configuration_structure.go b/builtin/providers/aws/cloudfront_distribution_configuration_structure.go index ccac0d9c50..489e9883c1 100644 --- a/builtin/providers/aws/cloudfront_distribution_configuration_structure.go +++ b/builtin/providers/aws/cloudfront_distribution_configuration_structure.go @@ -443,10 +443,10 @@ func expandLambdaFunctionAssociation(lf map[string]interface{}) *cloudfront.Lamb return &lfa } -func flattenLambdaFunctionAssociations(lfa *cloudfront.LambdaFunctionAssociations) []interface{} { - s := make([]interface{}, len(lfa.Items)) - for i, v := range lfa.Items { - s[i] = flattenLambdaFunctionAssociation(v) +func flattenLambdaFunctionAssociations(lfa *cloudfront.LambdaFunctionAssociations) *schema.Set { + s := schema.NewSet(lambdaFunctionAssociationHash, []interface{}{}) + for _, v := range lfa.Items { + s.Add(flattenLambdaFunctionAssociation(v)) } return s } diff --git a/builtin/providers/aws/cloudfront_distribution_configuration_structure_test.go b/builtin/providers/aws/cloudfront_distribution_configuration_structure_test.go index 14cdad322d..0092cb8d27 100644 --- a/builtin/providers/aws/cloudfront_distribution_configuration_structure_test.go +++ b/builtin/providers/aws/cloudfront_distribution_configuration_structure_test.go @@ -364,14 +364,8 @@ func TestCloudFrontStructure_flattenCacheBehavior(t *testing.T) { t.Fatalf("Expected out[target_origin_id] to be myS3Origin, got %v", out["target_origin_id"]) } - // the flattened lambda function associations are a slice of maps, - // where as the default cache behavior LFAs are a set. 
Here we double check - // that and conver the slice to a set, and use Set's Equal() method to check - // equality - var outSet *schema.Set - if outSlice, ok := out["lambda_function_association"].([]interface{}); ok { - outSet = schema.NewSet(lambdaFunctionAssociationHash, outSlice) - } else { + var outSet, ok = out["lambda_function_association"].(*schema.Set) + if !ok { t.Fatalf("out['lambda_function_association'] is not a slice as expected: %#v", out["lambda_function_association"]) } @@ -496,7 +490,7 @@ func TestCloudFrontStructure_flattenlambdaFunctionAssociations(t *testing.T) { lfa := expandLambdaFunctionAssociations(in.List()) out := flattenLambdaFunctionAssociations(lfa) - if reflect.DeepEqual(in.List(), out) != true { + if reflect.DeepEqual(in.List(), out.List()) != true { t.Fatalf("Expected out to be %v, got %v", in, out) } } diff --git a/builtin/providers/aws/config.go b/builtin/providers/aws/config.go index e3608f53c6..1cfda12b74 100644 --- a/builtin/providers/aws/config.go +++ b/builtin/providers/aws/config.go @@ -136,6 +136,7 @@ type AWSClient struct { r53conn *route53.Route53 partition string accountid string + supportedplatforms []string region string rdsconn *rds.RDS iamconn *iam.IAM @@ -224,10 +225,7 @@ func (c *Config) Client() (interface{}, error) { return nil, errwrap.Wrapf("Error creating AWS session: {{err}}", err) } - // Removes the SDK Version handler, so we only have the provider User-Agent - // Ex: "User-Agent: APN/1.0 HashiCorp/1.0 Terraform/0.7.9-dev" - sess.Handlers.Build.Remove(request.NamedHandler{Name: "core.SDKVersionUserAgentHandler"}) - sess.Handlers.Build.PushFrontNamed(addTerraformVersionToUserAgent) + sess.Handlers.Build.PushBackNamed(addTerraformVersionToUserAgent) if extraDebug := os.Getenv("TERRAFORM_AWS_AUTHFAILURE_DEBUG"); extraDebug != "" { sess.Handlers.UnmarshalError.PushFrontNamed(debugAuthFailure) @@ -272,6 +270,17 @@ func (c *Config) Client() (interface{}, error) { return nil, authErr } + client.ec2conn = ec2.New(awsEc2Sess) + + supportedPlatforms, err := GetSupportedEC2Platforms(client.ec2conn) + if err != nil { + // We intentionally fail *silently* because there's a chance + // user just doesn't have ec2:DescribeAccountAttributes permissions + log.Printf("[WARN] Unable to get supported EC2 platforms: %s", err) + } else { + client.supportedplatforms = supportedPlatforms + } + client.acmconn = acm.New(sess) client.apigateway = apigateway.New(sess) client.appautoscalingconn = applicationautoscaling.New(sess) @@ -290,7 +299,6 @@ func (c *Config) Client() (interface{}, error) { client.codepipelineconn = codepipeline.New(sess) client.dsconn = directoryservice.New(sess) client.dynamodbconn = dynamodb.New(dynamoSess) - client.ec2conn = ec2.New(awsEc2Sess) client.ecrconn = ecr.New(sess) client.ecsconn = ecs.New(sess) client.efsconn = efs.New(sess) @@ -308,7 +316,7 @@ func (c *Config) Client() (interface{}, error) { client.kmsconn = kms.New(sess) client.lambdaconn = lambda.New(sess) client.lightsailconn = lightsail.New(usEast1Sess) - client.opsworksconn = opsworks.New(usEast1Sess) + client.opsworksconn = opsworks.New(sess) client.r53conn = route53.New(usEast1Sess) client.rdsconn = rds.New(sess) client.redshiftconn = redshift.New(sess) @@ -389,6 +397,34 @@ func (c *Config) ValidateAccountId(accountId string) error { return nil } +func GetSupportedEC2Platforms(conn *ec2.EC2) ([]string, error) { + attrName := "supported-platforms" + + input := ec2.DescribeAccountAttributesInput{ + AttributeNames: []*string{aws.String(attrName)}, + } + attributes, err := 
conn.DescribeAccountAttributes(&input) + if err != nil { + return nil, err + } + + var platforms []string + for _, attr := range attributes.AccountAttributes { + if *attr.AttributeName == attrName { + for _, v := range attr.AttributeValues { + platforms = append(platforms, *v.AttributeValue) + } + break + } + } + + if len(platforms) == 0 { + return nil, fmt.Errorf("No EC2 platforms detected") + } + + return platforms, nil +} + // addTerraformVersionToUserAgent is a named handler that will add Terraform's // version information to requests made by the AWS SDK. var addTerraformVersionToUserAgent = request.NamedHandler{ diff --git a/builtin/providers/aws/config_test.go b/builtin/providers/aws/config_test.go new file mode 100644 index 0000000000..50b175c1ed --- /dev/null +++ b/builtin/providers/aws/config_test.go @@ -0,0 +1,118 @@ +package aws + +import ( + "bytes" + "fmt" + "log" + "net/http" + "net/http/httptest" + "reflect" + "testing" + "time" + + "github.com/aws/aws-sdk-go/aws" + awsCredentials "github.com/aws/aws-sdk-go/aws/credentials" + "github.com/aws/aws-sdk-go/aws/session" + "github.com/aws/aws-sdk-go/service/ec2" +) + +func TestGetSupportedEC2Platforms(t *testing.T) { + ec2Endpoints := []*awsMockEndpoint{ + &awsMockEndpoint{ + Request: &awsMockRequest{"POST", "/", "Action=DescribeAccountAttributes&" + + "AttributeName.1=supported-platforms&Version=2016-11-15"}, + Response: &awsMockResponse{200, test_ec2_describeAccountAttributes_response, "text/xml"}, + }, + } + closeFunc, sess, err := getMockedAwsApiSession("EC2", ec2Endpoints) + if err != nil { + t.Fatal(err) + } + defer closeFunc() + conn := ec2.New(sess) + + platforms, err := GetSupportedEC2Platforms(conn) + if err != nil { + t.Fatalf("Expected no error, received: %s", err) + } + expectedPlatforms := []string{"VPC", "EC2"} + if !reflect.DeepEqual(platforms, expectedPlatforms) { + t.Fatalf("Received platforms: %q\nExpected: %q\n", platforms, expectedPlatforms) + } +} + +// getMockedAwsApiSession establishes a httptest server to simulate behaviour +// of a real AWS API server +func getMockedAwsApiSession(svcName string, endpoints []*awsMockEndpoint) (func(), *session.Session, error) { + ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + buf := new(bytes.Buffer) + buf.ReadFrom(r.Body) + requestBody := buf.String() + + log.Printf("[DEBUG] Received %s API %q request to %q: %s", + svcName, r.Method, r.RequestURI, requestBody) + + for _, e := range endpoints { + if r.Method == e.Request.Method && r.RequestURI == e.Request.Uri && requestBody == e.Request.Body { + log.Printf("[DEBUG] Mocked %s API responding with %d: %s", + svcName, e.Response.StatusCode, e.Response.Body) + + // Headers must be set before WriteHeader; net/http ignores + // changes to the header map once the status line is written. + w.Header().Set("Content-Type", e.Response.ContentType) + w.Header().Set("X-Amzn-Requestid", "1b206dd1-f9a8-11e5-becf-051c60f11c4a") + w.Header().Set("Date", time.Now().Format(time.RFC1123)) + w.WriteHeader(e.Response.StatusCode) + + fmt.Fprintln(w, e.Response.Body) + return + } + } + + w.WriteHeader(400) + return + })) + + sc := awsCredentials.NewStaticCredentials("accessKey", "secretKey", "") + + sess, err := session.NewSession(&aws.Config{ + Credentials: sc, + Region: aws.String("us-east-1"), + Endpoint: aws.String(ts.URL), + CredentialsChainVerboseErrors: aws.Bool(true), + }) + + return ts.Close, sess, err +} + +type awsMockEndpoint struct { + Request *awsMockRequest + Response *awsMockResponse +} + +type awsMockRequest struct { + Method string + Uri string + Body string +} + +type awsMockResponse struct { + StatusCode int
+ Body string + ContentType string +} + +var test_ec2_describeAccountAttributes_response = `<DescribeAccountAttributesResponse xmlns="http://ec2.amazonaws.com/doc/2016-11-15/"> + <requestId>7a62c49f-347e-4fc4-9331-6e8eEXAMPLE</requestId> + <accountAttributeSet> + <item> + <attributeName>supported-platforms</attributeName> + <attributeValueSet> + <item> + <attributeValue>VPC</attributeValue> + </item> + <item> + <attributeValue>EC2</attributeValue> + </item> + </attributeValueSet> + </item> + </accountAttributeSet> +</DescribeAccountAttributesResponse>` diff --git a/builtin/providers/aws/data_source_aws_db_instance.go b/builtin/providers/aws/data_source_aws_db_instance.go index 719c399b3b..8adec41271 100644 --- a/builtin/providers/aws/data_source_aws_db_instance.go +++ b/builtin/providers/aws/data_source_aws_db_instance.go @@ -20,6 +20,11 @@ func dataSourceAwsDbInstance() *schema.Resource { ForceNew: true, }, + "address": { + Type: schema.TypeString, + Computed: true, + }, + "allocated_storage": { Type: schema.TypeInt, Computed: true, @@ -82,6 +87,11 @@ func dataSourceAwsDbInstance() *schema.Resource { Computed: true, }, + "endpoint": { + Type: schema.TypeString, + Computed: true, + }, + "engine": { Type: schema.TypeString, Computed: true, @@ -92,6 +102,11 @@ func dataSourceAwsDbInstance() *schema.Resource { Computed: true, }, + "hosted_zone_id": { + Type: schema.TypeString, + Computed: true, + }, + "iops": { Type: schema.TypeInt, Computed: true, @@ -133,6 +148,11 @@ func dataSourceAwsDbInstance() *schema.Resource { Elem: &schema.Schema{Type: schema.TypeString}, }, + "port": { + Type: schema.TypeInt, + Computed: true, + }, + "preferred_backup_window": { Type: schema.TypeString, Computed: true, @@ -232,6 +252,10 @@ func dataSourceAwsDbInstanceRead(d *schema.ResourceData, meta interface{}) error d.Set("master_username", dbInstance.MasterUsername) d.Set("monitoring_interval", dbInstance.MonitoringInterval) d.Set("monitoring_role_arn", dbInstance.MonitoringRoleArn) + d.Set("address", dbInstance.Endpoint.Address) + d.Set("port", dbInstance.Endpoint.Port) + d.Set("hosted_zone_id", dbInstance.Endpoint.HostedZoneId) + d.Set("endpoint", fmt.Sprintf("%s:%d", *dbInstance.Endpoint.Address, *dbInstance.Endpoint.Port)) var optionGroups []string for _, v := range dbInstance.OptionGroupMemberships { diff --git a/builtin/providers/aws/data_source_aws_db_instance_test.go b/builtin/providers/aws/data_source_aws_db_instance_test.go index 4e37c372da..5d3a200ec2 100644 --- a/builtin/providers/aws/data_source_aws_db_instance_test.go +++ b/builtin/providers/aws/data_source_aws_db_instance_test.go @@ -28,6 +28,25 @@ func TestAccAWSDataDbInstance_basic(t *testing.T) { }) } +func TestAccAWSDataDbInstance_endpoint(t *testing.T) { + rInt := acctest.RandInt() + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccAWSDBInstanceConfigWithDataSource(rInt), + Check: resource.ComposeAggregateTestCheckFunc( + resource.TestCheckResourceAttrSet("data.aws_db_instance.bar", "address"), + resource.TestCheckResourceAttrSet("data.aws_db_instance.bar", "port"), + resource.TestCheckResourceAttrSet("data.aws_db_instance.bar", "hosted_zone_id"), + resource.TestCheckResourceAttrSet("data.aws_db_instance.bar", "endpoint"), + ), + }, + }, + }) +} + func testAccAWSDBInstanceConfigWithDataSource(rInt int) string { return fmt.Sprintf(` resource "aws_db_instance" "bar" { diff --git a/builtin/providers/aws/data_source_aws_ecs_task_definition.go b/builtin/providers/aws/data_source_aws_ecs_task_definition.go index 4abdc021d7..3a5096a3b9 100644 --- a/builtin/providers/aws/data_source_aws_ecs_task_definition.go +++ b/builtin/providers/aws/data_source_aws_ecs_task_definition.go @@ -51,7 +51,7 @@ func dataSourceAwsEcsTaskDefinitionRead(d *schema.ResourceData, meta interface{} }) if err != nil { - return err
+ return fmt.Errorf("Failed getting task definition %q: %s", d.Get("task_definition").(string), err) } taskDefinition := *desc.TaskDefinition diff --git a/builtin/providers/aws/data_source_aws_route53_zone_test.go b/builtin/providers/aws/data_source_aws_route53_zone_test.go index 4da1b5f3fd..42d0eb72f6 100644 --- a/builtin/providers/aws/data_source_aws_route53_zone_test.go +++ b/builtin/providers/aws/data_source_aws_route53_zone_test.go @@ -4,69 +4,52 @@ import ( "fmt" "testing" + "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" ) func TestAccDataSourceAwsRoute53Zone(t *testing.T) { + rInt := acctest.RandInt() + publicResourceName := "aws_route53_zone.test" + publicDomain := fmt.Sprintf("terraformtestacchz-%d.com.", rInt) + privateResourceName := "aws_route53_zone.test_private" + privateDomain := fmt.Sprintf("test.acc-%d.", rInt) + resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccDataSourceAwsRoute53ZoneConfig, + { + Config: testAccDataSourceAwsRoute53ZoneConfig(rInt), Check: resource.ComposeTestCheckFunc( - testAccDataSourceAwsRoute53ZoneCheck("data.aws_route53_zone.by_zone_id"), - testAccDataSourceAwsRoute53ZoneCheck("data.aws_route53_zone.by_name"), - testAccDataSourceAwsRoute53ZoneCheckPrivate("data.aws_route53_zone.by_vpc"), - testAccDataSourceAwsRoute53ZoneCheckPrivate("data.aws_route53_zone.by_tag"), + testAccDataSourceAwsRoute53ZoneCheck( + publicResourceName, "data.aws_route53_zone.by_zone_id", publicDomain), + testAccDataSourceAwsRoute53ZoneCheck( + publicResourceName, "data.aws_route53_zone.by_name", publicDomain), + testAccDataSourceAwsRoute53ZoneCheck( + privateResourceName, "data.aws_route53_zone.by_vpc", privateDomain), + testAccDataSourceAwsRoute53ZoneCheck( + privateResourceName, "data.aws_route53_zone.by_tag", privateDomain), ), }, }, }) } -func testAccDataSourceAwsRoute53ZoneCheck(name string) resource.TestCheckFunc { +// rsName is the name of the aws_route53_zone resource under test +// dsName is the name of the data source being checked +// zName is the expected domain name of the zone +func testAccDataSourceAwsRoute53ZoneCheck(rsName, dsName, zName string) resource.TestCheckFunc { return func(s *terraform.State) error { - rs, ok := s.RootModule().Resources[name] + rs, ok := s.RootModule().Resources[rsName] if !ok { - return fmt.Errorf("root module has no resource called %s", name) + return fmt.Errorf("root module has no resource called %s", rsName) } - hostedZone, ok := s.RootModule().Resources["aws_route53_zone.test"] + hostedZone, ok := s.RootModule().Resources[dsName] if !ok { - return fmt.Errorf("can't find aws_hosted_zone.test in state") - } - attr := rs.Primary.Attributes - if attr["id"] != hostedZone.Primary.Attributes["id"] { - return fmt.Errorf( - "id is %s; want %s", - attr["id"], - hostedZone.Primary.Attributes["id"], - ) - } - - if attr["name"] != "terraformtestacchz.com."
{ - return fmt.Errorf( - "Route53 Zone name is %s; want terraformtestacchz.com.", - attr["name"], - ) - } - - return nil - } -} - -func testAccDataSourceAwsRoute53ZoneCheckPrivate(name string) resource.TestCheckFunc { - return func(s *terraform.State) error { - rs, ok := s.RootModule().Resources[name] - if !ok { - return fmt.Errorf("root module has no resource called %s", name) - } - - hostedZone, ok := s.RootModule().Resources["aws_route53_zone.test_private"] - if !ok { - return fmt.Errorf("can't find aws_hosted_zone.test in state") + return fmt.Errorf("can't find zone %q in state", dsName) } attr := rs.Primary.Attributes @@ -78,56 +61,54 @@ func testAccDataSourceAwsRoute53ZoneCheckPrivate(name string) resource.TestCheck ) } - if attr["name"] != "test.acc." { - return fmt.Errorf( - "Route53 Zone name is %s; want test.acc.", - attr["name"], - ) + if attr["name"] != zName { + return fmt.Errorf("Route53 Zone name is %q; want %q", attr["name"], zName) } return nil } } -const testAccDataSourceAwsRoute53ZoneConfig = ` +func testAccDataSourceAwsRoute53ZoneConfig(rInt int) string { + return fmt.Sprintf(` + provider "aws" { + region = "us-east-2" + } -provider "aws" { - region = "us-east-2" -} + resource "aws_vpc" "test" { + cidr_block = "172.16.0.0/16" + } -resource "aws_vpc" "test" { - cidr_block = "172.16.0.0/16" -} + resource "aws_route53_zone" "test_private" { + name = "test.acc-%d." + vpc_id = "${aws_vpc.test.id}" + tags { + Environment = "dev-%d" + } + } -resource "aws_route53_zone" "test_private" { - name = "test.acc." - vpc_id = "${aws_vpc.test.id}" - tags { - Environment = "dev" - } -} -data "aws_route53_zone" "by_vpc" { - name = "${aws_route53_zone.test_private.name}" - vpc_id = "${aws_vpc.test.id}" -} + data "aws_route53_zone" "by_vpc" { + name = "${aws_route53_zone.test_private.name}" + vpc_id = "${aws_vpc.test.id}" + } -data "aws_route53_zone" "by_tag" { - name = "${aws_route53_zone.test_private.name}" - private_zone = true - tags { - Environment = "dev" - } -} + data "aws_route53_zone" "by_tag" { + name = "${aws_route53_zone.test_private.name}" + private_zone = true + tags { + Environment = "dev-%d" + } + } -resource "aws_route53_zone" "test" { - name = "terraformtestacchz.com." -} -data "aws_route53_zone" "by_zone_id" { - zone_id = "${aws_route53_zone.test.zone_id}" -} + resource "aws_route53_zone" "test" { + name = "terraformtestacchz-%d.com." 
+ } -data "aws_route53_zone" "by_name" { - name = "${data.aws_route53_zone.by_zone_id.name}" -} + data "aws_route53_zone" "by_zone_id" { + zone_id = "${aws_route53_zone.test.zone_id}" + } -` + data "aws_route53_zone" "by_name" { + name = "${data.aws_route53_zone.by_zone_id.name}" + }`, rInt, rInt, rInt, rInt) +} diff --git a/builtin/providers/aws/data_source_aws_route_table.go b/builtin/providers/aws/data_source_aws_route_table.go index 6f6667262e..c332bdd913 100644 --- a/builtin/providers/aws/data_source_aws_route_table.go +++ b/builtin/providers/aws/data_source_aws_route_table.go @@ -41,6 +41,16 @@ func dataSourceAwsRouteTable() *schema.Resource { Computed: true, }, + "ipv6_cidr_block": { + Type: schema.TypeString, + Computed: true, + }, + + "egress_only_gateway_id": { + Type: schema.TypeString, + Computed: true, + }, + "gateway_id": { Type: schema.TypeString, Computed: true, @@ -177,6 +187,12 @@ func dataSourceRoutesRead(ec2Routes []*ec2.Route) []map[string]interface{} { if r.DestinationCidrBlock != nil { m["cidr_block"] = *r.DestinationCidrBlock } + if r.DestinationIpv6CidrBlock != nil { + m["ipv6_cidr_block"] = *r.DestinationIpv6CidrBlock + } + if r.EgressOnlyInternetGatewayId != nil { + m["egress_only_gateway_id"] = *r.EgressOnlyInternetGatewayId + } if r.GatewayId != nil { m["gateway_id"] = *r.GatewayId } diff --git a/builtin/providers/aws/data_source_aws_route_table_test.go b/builtin/providers/aws/data_source_aws_route_table_test.go index f459dd33bf..71957541f1 100644 --- a/builtin/providers/aws/data_source_aws_route_table_test.go +++ b/builtin/providers/aws/data_source_aws_route_table_test.go @@ -14,7 +14,7 @@ func TestAccDataSourceAwsRouteTable_basic(t *testing.T) { PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccDataSourceAwsRouteTableGroupConfig, Check: resource.ComposeTestCheckFunc( testAccDataSourceAwsRouteTableCheck("data.aws_route_table.by_tag"), @@ -33,7 +33,7 @@ func TestAccDataSourceAwsRouteTable_main(t *testing.T) { PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccDataSourceAwsRouteTableMainRoute, Check: resource.ComposeTestCheckFunc( testAccDataSourceAwsRouteTableCheckMain("data.aws_route_table.by_filter"), diff --git a/builtin/providers/aws/data_source_aws_vpc_test.go b/builtin/providers/aws/data_source_aws_vpc_test.go index dbc09fea15..e8344db981 100644 --- a/builtin/providers/aws/data_source_aws_vpc_test.go +++ b/builtin/providers/aws/data_source_aws_vpc_test.go @@ -2,24 +2,30 @@ package aws import ( "fmt" + "math/rand" "testing" + "time" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" ) func TestAccDataSourceAwsVpc_basic(t *testing.T) { + rand.Seed(time.Now().UTC().UnixNano()) + rInt := rand.Intn(16) + cidr := fmt.Sprintf("172.%d.0.0/16", rInt) + tag := fmt.Sprintf("terraform-testacc-vpc-data-source-%d", rInt) resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { - Config: testAccDataSourceAwsVpcConfig, + Config: testAccDataSourceAwsVpcConfig(cidr, tag), Check: resource.ComposeTestCheckFunc( - testAccDataSourceAwsVpcCheck("data.aws_vpc.by_id"), - testAccDataSourceAwsVpcCheck("data.aws_vpc.by_cidr"), - testAccDataSourceAwsVpcCheck("data.aws_vpc.by_tag"), - testAccDataSourceAwsVpcCheck("data.aws_vpc.by_filter"), + testAccDataSourceAwsVpcCheck("data.aws_vpc.by_id", 
cidr, tag), + testAccDataSourceAwsVpcCheck("data.aws_vpc.by_cidr", cidr, tag), + testAccDataSourceAwsVpcCheck("data.aws_vpc.by_tag", cidr, tag), + testAccDataSourceAwsVpcCheck("data.aws_vpc.by_filter", cidr, tag), ), }, }, @@ -27,14 +33,18 @@ func TestAccDataSourceAwsVpc_basic(t *testing.T) { } func TestAccDataSourceAwsVpc_ipv6Associated(t *testing.T) { + rand.Seed(time.Now().UTC().UnixNano()) + rInt := rand.Intn(16) + cidr := fmt.Sprintf("172.%d.0.0/16", rInt) + tag := fmt.Sprintf("terraform-testacc-vpc-data-source-%d", rInt) resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { - Config: testAccDataSourceAwsVpcConfigIpv6, + Config: testAccDataSourceAwsVpcConfigIpv6(cidr, tag), Check: resource.ComposeTestCheckFunc( - testAccDataSourceAwsVpcCheck("data.aws_vpc.by_id"), + testAccDataSourceAwsVpcCheck("data.aws_vpc.by_id", cidr, tag), resource.TestCheckResourceAttrSet( "data.aws_vpc.by_id", "ipv6_association_id"), resource.TestCheckResourceAttrSet( @@ -45,7 +55,7 @@ func TestAccDataSourceAwsVpc_ipv6Associated(t *testing.T) { }) } -func testAccDataSourceAwsVpcCheck(name string) resource.TestCheckFunc { +func testAccDataSourceAwsVpcCheck(name, cidr, tag string) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[name] if !ok { @@ -67,10 +77,10 @@ func testAccDataSourceAwsVpcCheck(name string) resource.TestCheckFunc { ) } - if attr["cidr_block"] != "172.16.0.0/16" { - return fmt.Errorf("bad cidr_block %s", attr["cidr_block"]) + if attr["cidr_block"] != cidr { + return fmt.Errorf("bad cidr_block %s, expected: %s", attr["cidr_block"], cidr) } - if attr["tags.Name"] != "terraform-testacc-vpc-data-source" { + if attr["tags.Name"] != tag { return fmt.Errorf("bad Name tag %s", attr["tags.Name"]) } @@ -78,35 +88,37 @@ func testAccDataSourceAwsVpcCheck(name string) resource.TestCheckFunc { } } -const testAccDataSourceAwsVpcConfigIpv6 = ` +func testAccDataSourceAwsVpcConfigIpv6(cidr, tag string) string { + return fmt.Sprintf(` provider "aws" { region = "us-west-2" } resource "aws_vpc" "test" { - cidr_block = "172.16.0.0/16" + cidr_block = "%s" assign_generated_ipv6_cidr_block = true tags { - Name = "terraform-testacc-vpc-data-source" + Name = "%s" } } data "aws_vpc" "by_id" { id = "${aws_vpc.test.id}" +}`, cidr, tag) } -` -const testAccDataSourceAwsVpcConfig = ` +func testAccDataSourceAwsVpcConfig(cidr, tag string) string { + return fmt.Sprintf(` provider "aws" { region = "us-west-2" } resource "aws_vpc" "test" { - cidr_block = "172.16.0.0/16" + cidr_block = "%s" tags { - Name = "terraform-testacc-vpc-data-source" + Name = "%s" } } @@ -129,5 +141,5 @@ data "aws_vpc" "by_filter" { name = "cidr" values = ["${aws_vpc.test.cidr_block}"] } +}`, cidr, tag) } -` diff --git a/builtin/providers/aws/import_aws_api_gateway_usage_plan_test.go b/builtin/providers/aws/import_aws_api_gateway_usage_plan_test.go new file mode 100644 index 0000000000..76a58e0c5d --- /dev/null +++ b/builtin/providers/aws/import_aws_api_gateway_usage_plan_test.go @@ -0,0 +1,30 @@ +package aws + +import ( + "testing" + + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" +) + +func TestAccAWSAPIGatewayUsagePlan_importBasic(t *testing.T) { + resourceName := "aws_api_gateway_usage_plan.main" + rName := acctest.RandString(10) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: 
testAccCheckAWSAPIGatewayUsagePlanDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSApiGatewayUsagePlanBasicConfig(rName), + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} diff --git a/builtin/providers/aws/import_aws_iam_account_alias_test.go b/builtin/providers/aws/import_aws_iam_account_alias_test.go new file mode 100644 index 0000000000..e2d00b68cf --- /dev/null +++ b/builtin/providers/aws/import_aws_iam_account_alias_test.go @@ -0,0 +1,31 @@ +package aws + +import ( + "testing" + + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" +) + +func TestAccAWSIAMAccountAlias_importBasic(t *testing.T) { + resourceName := "aws_iam_account_alias.test" + + rstring := acctest.RandString(5) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSIAMAccountAliasDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAWSIAMAccountAliasConfig(rstring), + }, + + resource.TestStep{ + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} diff --git a/builtin/providers/aws/import_aws_network_acl_test.go b/builtin/providers/aws/import_aws_network_acl_test.go index 407d3e45eb..6adf8a47d5 100644 --- a/builtin/providers/aws/import_aws_network_acl_test.go +++ b/builtin/providers/aws/import_aws_network_acl_test.go @@ -23,11 +23,11 @@ func TestAccAWSNetworkAcl_importBasic(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckAWSNetworkAclDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSNetworkAclEgressNIngressConfig, }, - resource.TestStep{ + { ResourceName: "aws_network_acl.bar", ImportState: true, ImportStateVerify: true, diff --git a/builtin/providers/aws/import_aws_route_table.go b/builtin/providers/aws/import_aws_route_table.go index a3ff401be4..185d994111 100644 --- a/builtin/providers/aws/import_aws_route_table.go +++ b/builtin/providers/aws/import_aws_route_table.go @@ -51,6 +51,7 @@ func resourceAwsRouteTableImportState( d.SetType("aws_route") d.Set("route_table_id", id) d.Set("destination_cidr_block", route.DestinationCidrBlock) + d.Set("destination_ipv6_cidr_block", route.DestinationIpv6CidrBlock) d.SetId(routeIDHash(d, route)) results = append(results, d) } diff --git a/builtin/providers/aws/import_aws_route_table_test.go b/builtin/providers/aws/import_aws_route_table_test.go index 248bf03ddd..8200bc8394 100644 --- a/builtin/providers/aws/import_aws_route_table_test.go +++ b/builtin/providers/aws/import_aws_route_table_test.go @@ -23,11 +23,11 @@ func TestAccAWSRouteTable_importBasic(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckRouteTableDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccRouteTableConfig, }, - resource.TestStep{ + { ResourceName: "aws_route_table.foo", ImportState: true, ImportStateCheck: checkFn, @@ -51,11 +51,11 @@ func TestAccAWSRouteTable_complex(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckRouteTableDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccRouteTableConfig_complexImport, }, - resource.TestStep{ + { ResourceName: "aws_route_table.mod", ImportState: true, ImportStateCheck: checkFn, diff --git a/builtin/providers/aws/import_aws_security_group.go b/builtin/providers/aws/import_aws_security_group.go index 21b7e64e20..d802c75e23 100644 --- 
a/builtin/providers/aws/import_aws_security_group.go +++ b/builtin/providers/aws/import_aws_security_group.go @@ -66,13 +66,20 @@ func resourceAwsSecurityGroupImportStatePerm(sg *ec2.SecurityGroup, ruleType str p := &ec2.IpPermission{ FromPort: perm.FromPort, IpProtocol: perm.IpProtocol, - IpRanges: perm.IpRanges, PrefixListIds: perm.PrefixListIds, ToPort: perm.ToPort, UserIdGroupPairs: []*ec2.UserIdGroupPair{pair}, } + if perm.Ipv6Ranges != nil { + p.Ipv6Ranges = perm.Ipv6Ranges + } + + if perm.IpRanges != nil { + p.IpRanges = perm.IpRanges + } + r, err := resourceAwsSecurityGroupImportStatePermPair(sg, ruleType, p) if err != nil { return nil, err diff --git a/builtin/providers/aws/import_aws_security_group_test.go b/builtin/providers/aws/import_aws_security_group_test.go index d2bf912057..4b0597670f 100644 --- a/builtin/providers/aws/import_aws_security_group_test.go +++ b/builtin/providers/aws/import_aws_security_group_test.go @@ -23,11 +23,39 @@ func TestAccAWSSecurityGroup_importBasic(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckAWSSecurityGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSSecurityGroupConfig, }, - resource.TestStep{ + { + ResourceName: "aws_security_group.web", + ImportState: true, + ImportStateCheck: checkFn, + }, + }, + }) +} + +func TestAccAWSSecurityGroup_importIpv6(t *testing.T) { + checkFn := func(s []*terraform.InstanceState) error { + // Expect 3: group, 2 rules + if len(s) != 3 { + return fmt.Errorf("expected 3 states: %#v", s) + } + + return nil + } + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSSecurityGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSSecurityGroupConfigIpv6, + }, + + { ResourceName: "aws_security_group.web", ImportState: true, ImportStateCheck: checkFn, @@ -42,11 +70,11 @@ func TestAccAWSSecurityGroup_importSelf(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckAWSSecurityGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSSecurityGroupConfig_importSelf, }, - resource.TestStep{ + { ResourceName: "aws_security_group.allow_all", ImportState: true, ImportStateVerify: true, @@ -61,11 +89,11 @@ func TestAccAWSSecurityGroup_importSourceSecurityGroup(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckAWSSecurityGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSSecurityGroupConfig_importSourceSecurityGroup, }, - resource.TestStep{ + { ResourceName: "aws_security_group.test_group_1", ImportState: true, ImportStateVerify: true, diff --git a/builtin/providers/aws/network_acl_entry.go b/builtin/providers/aws/network_acl_entry.go index a8450ef520..c57f82222c 100644 --- a/builtin/providers/aws/network_acl_entry.go +++ b/builtin/providers/aws/network_acl_entry.go @@ -32,7 +32,14 @@ func expandNetworkAclEntries(configured []interface{}, entryType string) ([]*ec2 Egress: aws.Bool(entryType == "egress"), RuleAction: aws.String(data["action"].(string)), RuleNumber: aws.Int64(int64(data["rule_no"].(int))), - CidrBlock: aws.String(data["cidr_block"].(string)), + } + + if v, ok := data["ipv6_cidr_block"]; ok { + e.Ipv6CidrBlock = aws.String(v.(string)) + } + + if v, ok := data["cidr_block"]; ok { + e.CidrBlock = aws.String(v.(string)) } // Specify additional required fields for ICMP @@ -55,14 +62,24 @@ func flattenNetworkAclEntries(list []*ec2.NetworkAclEntry) 
[]map[string]interfac entries := make([]map[string]interface{}, 0, len(list)) for _, entry := range list { - entries = append(entries, map[string]interface{}{ - "from_port": *entry.PortRange.From, - "to_port": *entry.PortRange.To, - "action": *entry.RuleAction, - "rule_no": *entry.RuleNumber, - "protocol": *entry.Protocol, - "cidr_block": *entry.CidrBlock, - }) + + newEntry := map[string]interface{}{ + "from_port": *entry.PortRange.From, + "to_port": *entry.PortRange.To, + "action": *entry.RuleAction, + "rule_no": *entry.RuleNumber, + "protocol": *entry.Protocol, + } + + if entry.CidrBlock != nil { + newEntry["cidr_block"] = *entry.CidrBlock + } + + if entry.Ipv6CidrBlock != nil { + newEntry["ipv6_cidr_block"] = *entry.Ipv6CidrBlock + } + + entries = append(entries, newEntry) } return entries diff --git a/builtin/providers/aws/provider.go b/builtin/providers/aws/provider.go index d6b156bb0a..744eb21ad9 100644 --- a/builtin/providers/aws/provider.go +++ b/builtin/providers/aws/provider.go @@ -219,6 +219,8 @@ func Provider() terraform.ResourceProvider { "aws_api_gateway_model": resourceAwsApiGatewayModel(), "aws_api_gateway_resource": resourceAwsApiGatewayResource(), "aws_api_gateway_rest_api": resourceAwsApiGatewayRestApi(), + "aws_api_gateway_usage_plan": resourceAwsApiGatewayUsagePlan(), + "aws_api_gateway_usage_plan_key": resourceAwsApiGatewayUsagePlanKey(), "aws_app_cookie_stickiness_policy": resourceAwsAppCookieStickinessPolicy(), "aws_appautoscaling_target": resourceAwsAppautoscalingTarget(), "aws_appautoscaling_policy": resourceAwsAppautoscalingPolicy(), @@ -298,6 +300,7 @@ func Provider() terraform.ResourceProvider { "aws_flow_log": resourceAwsFlowLog(), "aws_glacier_vault": resourceAwsGlacierVault(), "aws_iam_access_key": resourceAwsIamAccessKey(), + "aws_iam_account_alias": resourceAwsIamAccountAlias(), "aws_iam_account_password_policy": resourceAwsIamAccountPasswordPolicy(), "aws_iam_group_policy": resourceAwsIamGroupPolicy(), "aws_iam_group": resourceAwsIamGroup(), diff --git a/builtin/providers/aws/resource_aws_alb.go b/builtin/providers/aws/resource_aws_alb.go index 0efe4566a5..1a577e4b0a 100644 --- a/builtin/providers/aws/resource_aws_alb.go +++ b/builtin/providers/aws/resource_aws_alb.go @@ -69,7 +69,6 @@ func resourceAwsAlb() *schema.Resource { "subnets": { Type: schema.TypeSet, Elem: &schema.Schema{Type: schema.TypeString}, - ForceNew: true, Required: true, Set: schema.HashString, }, @@ -312,6 +311,20 @@ func resourceAwsAlbUpdate(d *schema.ResourceData, meta interface{}) error { } + if d.HasChange("subnets") { + subnets := expandStringList(d.Get("subnets").(*schema.Set).List()) + + params := &elbv2.SetSubnetsInput{ + LoadBalancerArn: aws.String(d.Id()), + Subnets: subnets, + } + + _, err := elbconn.SetSubnets(params) + if err != nil { + return fmt.Errorf("Failure Setting ALB Subnets: %s", err) + } + } + return resourceAwsAlbRead(d, meta) } diff --git a/builtin/providers/aws/resource_aws_alb_test.go b/builtin/providers/aws/resource_aws_alb_test.go index 3e50be3a08..de127dfc40 100644 --- a/builtin/providers/aws/resource_aws_alb_test.go +++ b/builtin/providers/aws/resource_aws_alb_test.go @@ -179,6 +179,35 @@ func TestAccAWSALB_updatedSecurityGroups(t *testing.T) { }) } +func TestAccAWSALB_updatedSubnets(t *testing.T) { + var pre, post elbv2.LoadBalancer + albName := fmt.Sprintf("testaccawsalb-basic-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + IDRefreshName: 
"aws_alb.alb_test", + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSALBDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSALBConfig_basic(albName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSALBExists("aws_alb.alb_test", &pre), + resource.TestCheckResourceAttr("aws_alb.alb_test", "subnets.#", "2"), + ), + }, + { + Config: testAccAWSALBConfig_updateSubnets(albName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSALBExists("aws_alb.alb_test", &post), + resource.TestCheckResourceAttr("aws_alb.alb_test", "subnets.#", "3"), + testAccCheckAWSAlbARNs(&pre, &post), + ), + }, + }, + }) +} + // TestAccAWSALB_noSecurityGroup regression tests the issue in #8264, // where if an ALB is created without a security group, a default one // is assigned. @@ -426,6 +455,73 @@ resource "aws_security_group" "alb_test" { }`, albName) } +func testAccAWSALBConfig_updateSubnets(albName string) string { + return fmt.Sprintf(`resource "aws_alb" "alb_test" { + name = "%s" + internal = true + security_groups = ["${aws_security_group.alb_test.id}"] + subnets = ["${aws_subnet.alb_test.*.id}"] + + idle_timeout = 30 + enable_deletion_protection = false + + tags { + TestName = "TestAccAWSALB_basic" + } +} + +variable "subnets" { + default = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"] + type = "list" +} + +data "aws_availability_zones" "available" {} + +resource "aws_vpc" "alb_test" { + cidr_block = "10.0.0.0/16" + + tags { + TestName = "TestAccAWSALB_basic" + } +} + +resource "aws_subnet" "alb_test" { + count = 3 + vpc_id = "${aws_vpc.alb_test.id}" + cidr_block = "${element(var.subnets, count.index)}" + map_public_ip_on_launch = true + availability_zone = "${element(data.aws_availability_zones.available.names, count.index)}" + + tags { + TestName = "TestAccAWSALB_basic" + } +} + +resource "aws_security_group" "alb_test" { + name = "allow_all_alb_test" + description = "Used for ALB Testing" + vpc_id = "${aws_vpc.alb_test.id}" + + ingress { + from_port = 0 + to_port = 0 + protocol = "-1" + cidr_blocks = ["0.0.0.0/0"] + } + + egress { + from_port = 0 + to_port = 0 + protocol = "-1" + cidr_blocks = ["0.0.0.0/0"] + } + + tags { + TestName = "TestAccAWSALB_basic" + } +}`, albName) +} + func testAccAWSALBConfig_generatedName() string { return fmt.Sprintf(` resource "aws_alb" "alb_test" { diff --git a/builtin/providers/aws/resource_aws_ami.go b/builtin/providers/aws/resource_aws_ami.go index 2a5c2b3a40..6e4ee15220 100644 --- a/builtin/providers/aws/resource_aws_ami.go +++ b/builtin/providers/aws/resource_aws_ami.go @@ -13,9 +13,17 @@ import ( "github.com/aws/aws-sdk-go/service/ec2" "github.com/hashicorp/terraform/helper/hashcode" + "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" ) +const ( + AWSAMIRetryTimeout = 10 * time.Minute + AWSAMIDeleteRetryTimeout = 20 * time.Minute + AWSAMIRetryDelay = 5 * time.Second + AWSAMIRetryMinTimeout = 3 * time.Second +) + func resourceAwsAmi() *schema.Resource { // Our schema is shared also with aws_ami_copy and aws_ami_from_instance resourceSchema := resourceAwsAmiCommonSchema(false) @@ -281,7 +289,56 @@ func resourceAwsAmiDelete(d *schema.ResourceData, meta interface{}) error { } } + // Verify that the image is actually removed, if not we need to wait for it to be removed + if err := resourceAwsAmiWaitForDestroy(d.Id(), client); err != nil { + return err + } + + // No error, ami was deleted successfully d.SetId("") + return nil +} + +func AMIStateRefreshFunc(client *ec2.EC2, id 
string) resource.StateRefreshFunc { + return func() (interface{}, string, error) { + emptyResp := &ec2.DescribeImagesOutput{} + + resp, err := client.DescribeImages(&ec2.DescribeImagesInput{ImageIds: []*string{aws.String(id)}}) + if err != nil { + if ec2err, ok := err.(awserr.Error); ok && ec2err.Code() == "InvalidAMIID.NotFound" { + return emptyResp, "destroyed", nil + } else if resp != nil && len(resp.Images) == 0 { + return emptyResp, "destroyed", nil + } else { + return emptyResp, "", fmt.Errorf("Error on refresh: %+v", err) + } + } + + if resp == nil || resp.Images == nil || len(resp.Images) == 0 { + return emptyResp, "destroyed", nil + } + + // AMI is valid, so return its state + return resp.Images[0], *resp.Images[0].State, nil + } +} + +func resourceAwsAmiWaitForDestroy(id string, client *ec2.EC2) error { + log.Printf("Waiting for AMI %s to be deleted...", id) + + stateConf := &resource.StateChangeConf{ + Pending: []string{"available", "pending", "failed"}, + Target: []string{"destroyed"}, + Refresh: AMIStateRefreshFunc(client, id), + Timeout: AWSAMIDeleteRetryTimeout, + Delay: AWSAMIRetryDelay, + MinTimeout: AWSAMIRetryMinTimeout, + } + + _, err := stateConf.WaitForState() + if err != nil { + return fmt.Errorf("Error waiting for AMI (%s) to be deleted: %v", id, err) + } return nil } @@ -289,51 +346,20 @@ func resourceAwsAmiDelete(d *schema.ResourceData, meta interface{}) error { func resourceAwsAmiWaitForAvailable(id string, client *ec2.EC2) (*ec2.Image, error) { log.Printf("Waiting for AMI %s to become available...", id) - req := &ec2.DescribeImagesInput{ - ImageIds: []*string{aws.String(id)}, + stateConf := &resource.StateChangeConf{ + Pending: []string{"pending"}, + Target: []string{"available"}, + Refresh: AMIStateRefreshFunc(client, id), + Timeout: AWSAMIRetryTimeout, + Delay: AWSAMIRetryDelay, + MinTimeout: AWSAMIRetryMinTimeout, } - pollsWhereNotFound := 0 - for { - res, err := client.DescribeImages(req) - if err != nil { - // When using RegisterImage (for aws_ami) the AMI sometimes isn't available at all - // right after the API responds, so we need to tolerate a couple Not Found errors - // before an available AMI shows up. - if ec2err, ok := err.(awserr.Error); ok && ec2err.Code() == "InvalidAMIID.NotFound" { - pollsWhereNotFound++ - // We arbitrarily stop polling after getting a "not found" error five times, - // assuming that the AMI has been deleted by something other than Terraform. - if pollsWhereNotFound > 5 { - return nil, fmt.Errorf("gave up waiting for AMI to be created: %s", err) - } - time.Sleep(4 * time.Second) - continue - } - return nil, fmt.Errorf("error reading AMI: %s", err) - } - if len(res.Images) != 1 { - return nil, fmt.Errorf("new AMI vanished while pending") - } - - state := *res.Images[0].State - - if state == "pending" { - // Give it a few seconds before we poll again. - time.Sleep(4 * time.Second) - continue - } - - if state == "available" { - // We're done! - return res.Images[0], nil - } - - // If we're not pending or available then we're in one of the invalid/error - // states, so stop polling and bail out.
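// For reference, the semantics of the resource.StateChangeConf fields used in the
// hunks above, per helper/resource as I read it: Delay is how long to wait before
// the first Refresh call, MinTimeout is the smallest pause between subsequent
// Refresh calls, and Timeout bounds the whole wait. A minimal sketch for a
// hypothetical resource (the "widget" states and refresh func are invented for
// illustration, not part of this diff):
//
//	stateConf := &resource.StateChangeConf{
//		Pending:    []string{"creating"},              // keep polling while here
//		Target:     []string{"ready"},                 // succeed once here
//		Refresh:    widgetStateRefreshFunc(conn, id),  // hypothetical refresh func
//		Timeout:    10 * time.Minute,                  // overall bound on the wait
//		Delay:      5 * time.Second,                   // wait before the first poll
//		MinTimeout: 3 * time.Second,                   // shortest gap between polls
//	}
//	if _, err := stateConf.WaitForState(); err != nil {
//		return fmt.Errorf("Error waiting for widget (%s) to be ready: %v", id, err)
//	}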
- stateReason := *res.Images[0].StateReason - return nil, fmt.Errorf("new AMI became %s while pending: %s", state, stateReason) + info, err := stateConf.WaitForState() + if err != nil { + return nil, fmt.Errorf("Error waiting for AMI (%s) to be ready: %v", id, err) } + return info.(*ec2.Image), nil } func resourceAwsAmiCommonSchema(computed bool) map[string]*schema.Schema { diff --git a/builtin/providers/aws/resource_aws_ami_from_instance_test.go b/builtin/providers/aws/resource_aws_ami_from_instance_test.go index e7ead234f6..e130a6cbc5 100644 --- a/builtin/providers/aws/resource_aws_ami_from_instance_test.go +++ b/builtin/providers/aws/resource_aws_ami_from_instance_test.go @@ -9,6 +9,7 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/ec2" + "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" ) @@ -16,13 +17,14 @@ import ( func TestAccAWSAMIFromInstance(t *testing.T) { var amiId string snapshots := []string{} + rInt := acctest.RandInt() resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccAWSAMIFromInstanceConfig, + { + Config: testAccAWSAMIFromInstanceConfig(rInt), Check: func(state *terraform.State) error { rs, ok := state.RootModule().Resources["aws_ami_from_instance.test"] if !ok { @@ -51,13 +53,13 @@ func TestAccAWSAMIFromInstance(t *testing.T) { image := describe.Images[0] if expected := "available"; *image.State != expected { - return fmt.Errorf("invalid image state; expected %v, got %v", expected, image.State) + return fmt.Errorf("invalid image state; expected %v, got %v", expected, *image.State) } if expected := "machine"; *image.ImageType != expected { - return fmt.Errorf("wrong image type; expected %v, got %v", expected, image.ImageType) + return fmt.Errorf("wrong image type; expected %v, got %v", expected, *image.ImageType) } - if expected := "terraform-acc-ami-from-instance"; *image.Name != expected { - return fmt.Errorf("wrong name; expected %v, got %v", expected, image.Name) + if expected := fmt.Sprintf("terraform-acc-ami-from-instance-%d", rInt); *image.Name != expected { + return fmt.Errorf("wrong name; expected %v, got %v", expected, *image.Name) } for _, bdm := range image.BlockDeviceMappings { @@ -137,24 +139,25 @@ func TestAccAWSAMIFromInstance(t *testing.T) { }) } -var testAccAWSAMIFromInstanceConfig = ` -provider "aws" { - region = "us-east-1" -} +func testAccAWSAMIFromInstanceConfig(rInt int) string { + return fmt.Sprintf(` + provider "aws" { + region = "us-east-1" + } -resource "aws_instance" "test" { - // This AMI has one block device mapping, so we expect to have - // one snapshot in our created AMI. - ami = "ami-408c7f28" - instance_type = "t1.micro" - tags { - Name = "testAccAWSAMIFromInstanceConfig_TestAMI" - } -} + resource "aws_instance" "test" { + // This AMI has one block device mapping, so we expect to have + // one snapshot in our created AMI. 
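+    // (Assumption, for context: the provider block above pins us-east-1, so the
+    // hard-coded AMI ID below must resolve in that region, and the one-snapshot
+    // expectation holds only for this exact image.)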
+ ami = "ami-408c7f28" + instance_type = "t1.micro" + tags { + Name = "testAccAWSAMIFromInstanceConfig_TestAMI" + } + } -resource "aws_ami_from_instance" "test" { - name = "terraform-acc-ami-from-instance" - description = "Testing Terraform aws_ami_from_instance resource" - source_instance_id = "${aws_instance.test.id}" + resource "aws_ami_from_instance" "test" { + name = "terraform-acc-ami-from-instance-%d" + description = "Testing Terraform aws_ami_from_instance resource" + source_instance_id = "${aws_instance.test.id}" + }`, rInt) } -` diff --git a/builtin/providers/aws/resource_aws_api_gateway_api_key.go b/builtin/providers/aws/resource_aws_api_gateway_api_key.go index fe606a5e0b..66a7154de8 100644 --- a/builtin/providers/aws/resource_aws_api_gateway_api_key.go +++ b/builtin/providers/aws/resource_aws_api_gateway_api_key.go @@ -42,8 +42,9 @@ func resourceAwsApiGatewayApiKey() *schema.Resource { }, "stage_key": { - Type: schema.TypeSet, - Optional: true, + Type: schema.TypeSet, + Optional: true, + Deprecated: "Since the API Gateway usage plans feature was launched on August 11, 2016, usage plans are now required to associate an API key with an API stage", Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "rest_api_id": { @@ -68,6 +69,15 @@ func resourceAwsApiGatewayApiKey() *schema.Resource { Type: schema.TypeString, Computed: true, }, + + "value": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + Sensitive: true, + ValidateFunc: validateApiGatewayApiKeyValue, + }, }, } } @@ -80,6 +90,7 @@ func resourceAwsApiGatewayApiKeyCreate(d *schema.ResourceData, meta interface{}) Name: aws.String(d.Get("name").(string)), Description: aws.String(d.Get("description").(string)), Enabled: aws.Bool(d.Get("enabled").(bool)), + Value: aws.String(d.Get("value").(string)), StageKeys: expandApiGatewayStageKeys(d), }) if err != nil { @@ -96,7 +107,8 @@ func resourceAwsApiGatewayApiKeyRead(d *schema.ResourceData, meta interface{}) e log.Printf("[DEBUG] Reading API Gateway API Key: %s", d.Id()) apiKey, err := conn.GetApiKey(&apigateway.GetApiKeyInput{ - ApiKey: aws.String(d.Id()), + ApiKey: aws.String(d.Id()), + IncludeValue: aws.Bool(true), }) if err != nil { if awsErr, ok := err.(awserr.Error); ok && awsErr.Code() == "NotFoundException" { @@ -111,6 +123,7 @@ func resourceAwsApiGatewayApiKeyRead(d *schema.ResourceData, meta interface{}) e d.Set("description", apiKey.Description) d.Set("enabled", apiKey.Enabled) d.Set("stage_key", flattenApiGatewayStageKeys(apiKey.StageKeys)) + d.Set("value", apiKey.Value) if err := d.Set("created_date", apiKey.CreatedDate.Format(time.RFC3339)); err != nil { log.Printf("[DEBUG] Error setting created_date: %s", err) diff --git a/builtin/providers/aws/resource_aws_api_gateway_api_key_test.go b/builtin/providers/aws/resource_aws_api_gateway_api_key_test.go index cafb890ea4..a7d519ae68 100644 --- a/builtin/providers/aws/resource_aws_api_gateway_api_key_test.go +++ b/builtin/providers/aws/resource_aws_api_gateway_api_key_test.go @@ -33,6 +33,8 @@ func TestAccAWSAPIGatewayApiKey_basic(t *testing.T) { "aws_api_gateway_api_key.test", "created_date"), resource.TestCheckResourceAttrSet( "aws_api_gateway_api_key.test", "last_updated_date"), + resource.TestCheckResourceAttr( + "aws_api_gateway_api_key.custom", "value", "MyCustomToken#@&\"'(§!ç)-_*$€¨^£%ù+=/:.;?,|"), ), }, }, @@ -176,4 +178,15 @@ resource "aws_api_gateway_api_key" "test" { stage_name = "${aws_api_gateway_deployment.test.stage_name}" } } + +resource "aws_api_gateway_api_key" 
"custom" { + name = "bar" + enabled = true + value = "MyCustomToken#@&\"'(§!ç)-_*$€¨^£%ù+=/:.;?,|" + + stage_key { + rest_api_id = "${aws_api_gateway_rest_api.test.id}" + stage_name = "${aws_api_gateway_deployment.test.stage_name}" + } +} ` diff --git a/builtin/providers/aws/resource_aws_api_gateway_domain_name.go b/builtin/providers/aws/resource_aws_api_gateway_domain_name.go index 69f50fa8b0..103f7bed4e 100644 --- a/builtin/providers/aws/resource_aws_api_gateway_domain_name.go +++ b/builtin/providers/aws/resource_aws_api_gateway_domain_name.go @@ -21,27 +21,34 @@ func resourceAwsApiGatewayDomainName() *schema.Resource { Schema: map[string]*schema.Schema{ + //According to AWS Documentation, ACM will be the only way to add certificates + //to ApiGateway DomainNames. When this happens, we will be deprecating all certificate methods + //except certificate_arn. We are not quite sure when this will happen. "certificate_body": { - Type: schema.TypeString, - ForceNew: true, - Required: true, + Type: schema.TypeString, + ForceNew: true, + Optional: true, + ConflictsWith: []string{"certificate_arn"}, }, "certificate_chain": { - Type: schema.TypeString, - ForceNew: true, - Required: true, + Type: schema.TypeString, + ForceNew: true, + Optional: true, + ConflictsWith: []string{"certificate_arn"}, }, "certificate_name": { - Type: schema.TypeString, - Required: true, + Type: schema.TypeString, + Optional: true, + ConflictsWith: []string{"certificate_arn"}, }, "certificate_private_key": { - Type: schema.TypeString, - ForceNew: true, - Required: true, + Type: schema.TypeString, + ForceNew: true, + Optional: true, + ConflictsWith: []string{"certificate_arn"}, }, "domain_name": { @@ -50,6 +57,12 @@ func resourceAwsApiGatewayDomainName() *schema.Resource { ForceNew: true, }, + "certificate_arn": { + Type: schema.TypeString, + Optional: true, + ConflictsWith: []string{"certificate_body", "certificate_chain", "certificate_name", "certificate_private_key"}, + }, + "cloudfront_domain_name": { Type: schema.TypeString, Computed: true, @@ -72,13 +85,31 @@ func resourceAwsApiGatewayDomainNameCreate(d *schema.ResourceData, meta interfac conn := meta.(*AWSClient).apigateway log.Printf("[DEBUG] Creating API Gateway Domain Name") - domainName, err := conn.CreateDomainName(&apigateway.CreateDomainNameInput{ - CertificateBody: aws.String(d.Get("certificate_body").(string)), - CertificateChain: aws.String(d.Get("certificate_chain").(string)), - CertificateName: aws.String(d.Get("certificate_name").(string)), - CertificatePrivateKey: aws.String(d.Get("certificate_private_key").(string)), - DomainName: aws.String(d.Get("domain_name").(string)), - }) + params := &apigateway.CreateDomainNameInput{ + DomainName: aws.String(d.Get("domain_name").(string)), + } + + if v, ok := d.GetOk("certificate_arn"); ok { + params.CertificateArn = aws.String(v.(string)) + } + + if v, ok := d.GetOk("certificate_name"); ok { + params.CertificateName = aws.String(v.(string)) + } + + if v, ok := d.GetOk("certificate_body"); ok { + params.CertificateBody = aws.String(v.(string)) + } + + if v, ok := d.GetOk("certificate_chain"); ok { + params.CertificateChain = aws.String(v.(string)) + } + + if v, ok := d.GetOk("certificate_private_key"); ok { + params.CertificatePrivateKey = aws.String(v.(string)) + } + + domainName, err := conn.CreateDomainName(params) if err != nil { return fmt.Errorf("Error creating API Gateway Domain Name: %s", err) } @@ -113,6 +144,7 @@ func resourceAwsApiGatewayDomainNameRead(d *schema.ResourceData, meta interface{ } 
d.Set("cloudfront_domain_name", domainName.DistributionDomainName) d.Set("domain_name", domainName.DomainName) + d.Set("certificate_arn", domainName.CertificateArn) return nil } @@ -128,6 +160,14 @@ func resourceAwsApiGatewayDomainNameUpdateOperations(d *schema.ResourceData) []* }) } + if d.HasChange("certificate_arn") { + operations = append(operations, &apigateway.PatchOperation{ + Op: aws.String("replace"), + Path: aws.String("/certificateArn"), + Value: aws.String(d.Get("certificate_arn").(string)), + }) + } + return operations } @@ -139,6 +179,7 @@ func resourceAwsApiGatewayDomainNameUpdate(d *schema.ResourceData, meta interfac DomainName: aws.String(d.Id()), PatchOperations: resourceAwsApiGatewayDomainNameUpdateOperations(d), }) + if err != nil { return err } diff --git a/builtin/providers/aws/resource_aws_api_gateway_method_test.go b/builtin/providers/aws/resource_aws_api_gateway_method_test.go index 5b1e993f37..34f3e01345 100644 --- a/builtin/providers/aws/resource_aws_api_gateway_method_test.go +++ b/builtin/providers/aws/resource_aws_api_gateway_method_test.go @@ -15,6 +15,7 @@ import ( func TestAccAWSAPIGatewayMethod_basic(t *testing.T) { var conf apigateway.Method + rInt := acctest.RandInt() resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -22,7 +23,7 @@ func TestAccAWSAPIGatewayMethod_basic(t *testing.T) { CheckDestroy: testAccCheckAWSAPIGatewayMethodDestroy, Steps: []resource.TestStep{ { - Config: testAccAWSAPIGatewayMethodConfig, + Config: testAccAWSAPIGatewayMethodConfig(rInt), Check: resource.ComposeTestCheckFunc( testAccCheckAWSAPIGatewayMethodExists("aws_api_gateway_method.test", &conf), testAccCheckAWSAPIGatewayMethodAttributes(&conf), @@ -36,7 +37,7 @@ func TestAccAWSAPIGatewayMethod_basic(t *testing.T) { }, { - Config: testAccAWSAPIGatewayMethodConfigUpdate, + Config: testAccAWSAPIGatewayMethodConfigUpdate(rInt), Check: resource.ComposeTestCheckFunc( testAccCheckAWSAPIGatewayMethodExists("aws_api_gateway_method.test", &conf), testAccCheckAWSAPIGatewayMethodAttributesUpdate(&conf), @@ -72,7 +73,7 @@ func TestAccAWSAPIGatewayMethod_customauthorizer(t *testing.T) { }, { - Config: testAccAWSAPIGatewayMethodConfigUpdate, + Config: testAccAWSAPIGatewayMethodConfigUpdate(rInt), Check: resource.ComposeTestCheckFunc( testAccCheckAWSAPIGatewayMethodExists("aws_api_gateway_method.test", &conf), testAccCheckAWSAPIGatewayMethodAttributesUpdate(&conf), @@ -199,7 +200,7 @@ func testAccCheckAWSAPIGatewayMethodDestroy(s *terraform.State) error { func testAccAWSAPIGatewayMethodConfigWithCustomAuthorizer(rInt int) string { return fmt.Sprintf(` resource "aws_api_gateway_rest_api" "test" { - name = "tf-acc-test-custom-auth" + name = "tf-acc-test-custom-auth-%d" } resource "aws_iam_role" "invocation_role" { @@ -261,7 +262,7 @@ EOF resource "aws_lambda_function" "authorizer" { filename = "test-fixtures/lambdatest.zip" source_code_hash = "${base64sha256(file("test-fixtures/lambdatest.zip"))}" - function_name = "tf_acc_api_gateway_authorizer" + function_name = "tf_acc_api_gateway_authorizer_%d" role = "${aws_iam_role.iam_for_lambda.arn}" handler = "exports.example" runtime = "nodejs4.3" @@ -295,12 +296,13 @@ resource "aws_api_gateway_method" "test" { "method.request.header.Content-Type" = false "method.request.querystring.page" = true } -}`, rInt, rInt, rInt) +}`, rInt, rInt, rInt, rInt, rInt) } -const testAccAWSAPIGatewayMethodConfig = ` +func testAccAWSAPIGatewayMethodConfig(rInt int) string { + return fmt.Sprintf(` resource "aws_api_gateway_rest_api" "test" { 
- name = "test" + name = "tf-acc-test-apig-method-%d" } resource "aws_api_gateway_resource" "test" { @@ -324,11 +326,13 @@ resource "aws_api_gateway_method" "test" { "method.request.querystring.page" = true } } -` +`, rInt) +} -const testAccAWSAPIGatewayMethodConfigUpdate = ` +func testAccAWSAPIGatewayMethodConfigUpdate(rInt int) string { + return fmt.Sprintf(` resource "aws_api_gateway_rest_api" "test" { - name = "test" + name = "tf-acc-test-apig-method-%d" } resource "aws_api_gateway_resource" "test" { @@ -351,4 +355,5 @@ resource "aws_api_gateway_method" "test" { "method.request.querystring.page" = false } } -` +`, rInt) +} diff --git a/builtin/providers/aws/resource_aws_api_gateway_usage_plan.go b/builtin/providers/aws/resource_aws_api_gateway_usage_plan.go new file mode 100644 index 0000000000..0d4930d08c --- /dev/null +++ b/builtin/providers/aws/resource_aws_api_gateway_usage_plan.go @@ -0,0 +1,499 @@ +package aws + +import ( + "fmt" + "log" + "strconv" + "time" + + "errors" + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/service/apigateway" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsApiGatewayUsagePlan() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsApiGatewayUsagePlanCreate, + Read: resourceAwsApiGatewayUsagePlanRead, + Update: resourceAwsApiGatewayUsagePlanUpdate, + Delete: resourceAwsApiGatewayUsagePlanDelete, + Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, + + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Required: true, // Required since not addable nor removable afterwards + }, + + "description": { + Type: schema.TypeString, + Optional: true, + }, + + "api_stages": { + Type: schema.TypeList, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "api_id": { + Type: schema.TypeString, + Required: true, + }, + + "stage": { + Type: schema.TypeString, + Required: true, + }, + }, + }, + }, + + "quota_settings": { + Type: schema.TypeSet, + MaxItems: 1, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "limit": { + Type: schema.TypeInt, + Required: true, // Required as not removable singularly + }, + + "offset": { + Type: schema.TypeInt, + Default: 0, + Optional: true, + }, + + "period": { + Type: schema.TypeString, + Required: true, // Required as not removable + ValidateFunc: validateApiGatewayUsagePlanQuotaSettingsPeriod, + }, + }, + }, + }, + + "throttle_settings": { + Type: schema.TypeSet, + MaxItems: 1, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "burst_limit": { + Type: schema.TypeInt, + Default: 0, + Optional: true, + }, + + "rate_limit": { + Type: schema.TypeInt, + Default: 0, + Optional: true, + }, + }, + }, + }, + + "product_code": { + Type: schema.TypeString, + Optional: true, + }, + }, + } +} + +func resourceAwsApiGatewayUsagePlanCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).apigateway + log.Print("[DEBUG] Creating API Gateway Usage Plan") + + params := &apigateway.CreateUsagePlanInput{ + Name: aws.String(d.Get("name").(string)), + } + + if v, ok := d.GetOk("description"); ok { + params.Description = aws.String(v.(string)) + } + + if s, ok := d.GetOk("api_stages"); ok { + stages := s.([]interface{}) + as := make([]*apigateway.ApiStage, 0) + + for _, v := range stages { + sv := v.(map[string]interface{}) + stage := 
&apigateway.ApiStage{} + + if v, ok := sv["api_id"].(string); ok && v != "" { + stage.ApiId = aws.String(v) + } + + if v, ok := sv["stage"].(string); ok && v != "" { + stage.Stage = aws.String(v) + } + + as = append(as, stage) + } + + if len(as) > 0 { + params.ApiStages = as + } + } + + if v, ok := d.GetOk("quota_settings"); ok { + settings := v.(*schema.Set).List() + q, ok := settings[0].(map[string]interface{}) + + if !ok { + return errors.New("At least one field is expected inside quota_settings") + } + + if errors := validateApiGatewayUsagePlanQuotaSettings(q); len(errors) > 0 { + return fmt.Errorf("Error validating the quota settings: %v", errors) + } + + qs := &apigateway.QuotaSettings{} + + if sv, ok := q["limit"].(int); ok { + qs.Limit = aws.Int64(int64(sv)) + } + + if sv, ok := q["offset"].(int); ok { + qs.Offset = aws.Int64(int64(sv)) + } + + if sv, ok := q["period"].(string); ok && sv != "" { + qs.Period = aws.String(sv) + } + + params.Quota = qs + } + + if v, ok := d.GetOk("throttle_settings"); ok { + settings := v.(*schema.Set).List() + q, ok := settings[0].(map[string]interface{}) + + if !ok { + return errors.New("At least one field is expected inside throttle_settings") + } + + ts := &apigateway.ThrottleSettings{} + + if sv, ok := q["burst_limit"].(int); ok { + ts.BurstLimit = aws.Int64(int64(sv)) + } + + // The schema declares rate_limit as TypeInt (see above), so the value + // arrives as an int and is converted for the float64 API field. + if sv, ok := q["rate_limit"].(int); ok { + ts.RateLimit = aws.Float64(float64(sv)) + } + + params.Throttle = ts + } + + up, err := conn.CreateUsagePlan(params) + if err != nil { + return fmt.Errorf("Error creating API Gateway Usage Plan: %s", err) + } + + d.SetId(*up.Id) + + // Handle case of adding the product code since not addable when + // creating the Usage Plan initially. + if v, ok := d.GetOk("product_code"); ok { + updateParameters := &apigateway.UpdateUsagePlanInput{ + UsagePlanId: aws.String(d.Id()), + PatchOperations: []*apigateway.PatchOperation{ + { + Op: aws.String("add"), + Path: aws.String("/productCode"), + Value: aws.String(v.(string)), + }, + }, + } + + up, err = conn.UpdateUsagePlan(updateParameters) + if err != nil { + return fmt.Errorf("Error creating the API Gateway Usage Plan product code: %s", err) + } + } + + return resourceAwsApiGatewayUsagePlanRead(d, meta) +} + +func resourceAwsApiGatewayUsagePlanRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).apigateway + log.Printf("[DEBUG] Reading API Gateway Usage Plan: %s", d.Id()) + + up, err := conn.GetUsagePlan(&apigateway.GetUsagePlanInput{ + UsagePlanId: aws.String(d.Id()), + }) + if err != nil { + if awsErr, ok := err.(awserr.Error); ok && awsErr.Code() == "NotFoundException" { + d.SetId("") + return nil + } + return err + } + + d.Set("name", up.Name) + d.Set("description", up.Description) + d.Set("product_code", up.ProductCode) + + if up.ApiStages != nil { + if err := d.Set("api_stages", flattenApiGatewayUsageApiStages(up.ApiStages)); err != nil { + return fmt.Errorf("Error setting api_stages: %#v", err) + } + } + + if up.Throttle != nil { + if err := d.Set("throttle_settings", flattenApiGatewayUsagePlanThrottling(up.Throttle)); err != nil { + return fmt.Errorf("Error setting throttle_settings: %#v", err) + } + } + + if up.Quota != nil { + if err := d.Set("quota_settings", flattenApiGatewayUsagePlanQuota(up.Quota)); err != nil { + return fmt.Errorf("Error setting quota_settings: %#v", err) + } + } + + return nil +} + +func resourceAwsApiGatewayUsagePlanUpdate(d *schema.ResourceData, meta interface{}) error { + conn :=
+  conn := meta.(*AWSClient).apigateway
+  log.Print("[DEBUG] Updating API Gateway Usage Plan")
+
+  operations := make([]*apigateway.PatchOperation, 0)
+
+  if d.HasChange("name") {
+    operations = append(operations, &apigateway.PatchOperation{
+      Op:    aws.String("replace"),
+      Path:  aws.String("/name"),
+      Value: aws.String(d.Get("name").(string)),
+    })
+  }
+
+  if d.HasChange("description") {
+    operations = append(operations, &apigateway.PatchOperation{
+      Op:    aws.String("replace"),
+      Path:  aws.String("/description"),
+      Value: aws.String(d.Get("description").(string)),
+    })
+  }
+
+  if d.HasChange("product_code") {
+    v, ok := d.GetOk("product_code")
+
+    if ok {
+      operations = append(operations, &apigateway.PatchOperation{
+        Op:    aws.String("replace"),
+        Path:  aws.String("/productCode"),
+        Value: aws.String(v.(string)),
+      })
+    } else {
+      operations = append(operations, &apigateway.PatchOperation{
+        Op:   aws.String("remove"),
+        Path: aws.String("/productCode"),
+      })
+    }
+  }
+
+  if d.HasChange("api_stages") {
+    o, n := d.GetChange("api_stages")
+    old := o.([]interface{})
+    new := n.([]interface{})
+
+    // Remove all previously associated stages, then add the new ones back;
+    // the API offers no replace operation for stages, so remove-and-add is
+    // the simplest approach.
+    for _, v := range old {
+      m := v.(map[string]interface{})
+      operations = append(operations, &apigateway.PatchOperation{
+        Op:    aws.String("remove"),
+        Path:  aws.String("/apiStages"),
+        Value: aws.String(fmt.Sprintf("%s:%s", m["api_id"].(string), m["stage"].(string))),
+      })
+    }
+
+    // Handle additions
+    if len(new) > 0 {
+      for _, v := range new {
+        m := v.(map[string]interface{})
+        operations = append(operations, &apigateway.PatchOperation{
+          Op:    aws.String("add"),
+          Path:  aws.String("/apiStages"),
+          Value: aws.String(fmt.Sprintf("%s:%s", m["api_id"].(string), m["stage"].(string))),
+        })
+      }
+    }
+  }
+
+  if d.HasChange("throttle_settings") {
+    o, n := d.GetChange("throttle_settings")
+
+    os := o.(*schema.Set)
+    ns := n.(*schema.Set)
+    diff := ns.Difference(os).List()
+
+    // Handle Removal
+    if len(diff) == 0 {
+      operations = append(operations, &apigateway.PatchOperation{
+        Op:   aws.String("remove"),
+        Path: aws.String("/throttle"),
+      })
+    }
+
+    if len(diff) > 0 {
+      d := diff[0].(map[string]interface{})
+
+      // Handle Replaces
+      if o != nil && n != nil {
+        operations = append(operations, &apigateway.PatchOperation{
+          Op:    aws.String("replace"),
+          Path:  aws.String("/throttle/rateLimit"),
+          Value: aws.String(strconv.Itoa(d["rate_limit"].(int))),
+        })
+        operations = append(operations, &apigateway.PatchOperation{
+          Op:    aws.String("replace"),
+          Path:  aws.String("/throttle/burstLimit"),
+          Value: aws.String(strconv.Itoa(d["burst_limit"].(int))),
+        })
+      }
+
+      // Handle Additions
+      if o == nil && n != nil {
+        operations = append(operations, &apigateway.PatchOperation{
+          Op:    aws.String("add"),
+          Path:  aws.String("/throttle/rateLimit"),
+          Value: aws.String(strconv.Itoa(d["rate_limit"].(int))),
+        })
+        operations = append(operations, &apigateway.PatchOperation{
+          Op:    aws.String("add"),
+          Path:  aws.String("/throttle/burstLimit"),
+          Value: aws.String(strconv.Itoa(d["burst_limit"].(int))),
+        })
+      }
+    }
+  }
+
+  if d.HasChange("quota_settings") {
+    o, n := d.GetChange("quota_settings")
+
+    os := o.(*schema.Set)
+    ns := n.(*schema.Set)
+    diff := ns.Difference(os).List()
+
+    // Handle Removal
+    if len(diff) == 0 {
+      operations = append(operations, &apigateway.PatchOperation{
+        Op:   aws.String("remove"),
+        Path: aws.String("/quota"),
+      })
+    }
+
+    if len(diff) > 0 {
+      d := diff[0].(map[string]interface{})
+
+      if errors := 
validateApiGatewayUsagePlanQuotaSettings(d); len(errors) > 0 { + return fmt.Errorf("Error validating the quota settings: %v", errors) + } + + // Handle Replaces + if o != nil && n != nil { + operations = append(operations, &apigateway.PatchOperation{ + Op: aws.String("replace"), + Path: aws.String("/quota/limit"), + Value: aws.String(strconv.Itoa(d["limit"].(int))), + }) + operations = append(operations, &apigateway.PatchOperation{ + Op: aws.String("replace"), + Path: aws.String("/quota/offset"), + Value: aws.String(strconv.Itoa(d["offset"].(int))), + }) + operations = append(operations, &apigateway.PatchOperation{ + Op: aws.String("replace"), + Path: aws.String("/quota/period"), + Value: aws.String(d["period"].(string)), + }) + } + + // Handle Additions + if o == nil && n != nil { + operations = append(operations, &apigateway.PatchOperation{ + Op: aws.String("add"), + Path: aws.String("/quota/limit"), + Value: aws.String(strconv.Itoa(d["limit"].(int))), + }) + operations = append(operations, &apigateway.PatchOperation{ + Op: aws.String("add"), + Path: aws.String("/quota/offset"), + Value: aws.String(strconv.Itoa(d["offset"].(int))), + }) + operations = append(operations, &apigateway.PatchOperation{ + Op: aws.String("add"), + Path: aws.String("/quota/period"), + Value: aws.String(d["period"].(string)), + }) + } + } + } + + params := &apigateway.UpdateUsagePlanInput{ + UsagePlanId: aws.String(d.Id()), + PatchOperations: operations, + } + + _, err := conn.UpdateUsagePlan(params) + if err != nil { + return fmt.Errorf("Error updating API Gateway Usage Plan: %s", err) + } + + return resourceAwsApiGatewayUsagePlanRead(d, meta) +} + +func resourceAwsApiGatewayUsagePlanDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).apigateway + + // Removing existing api stages associated + if apistages, ok := d.GetOk("api_stages"); ok { + log.Printf("[DEBUG] Deleting API Stages associated with Usage Plan: %s", d.Id()) + stages := apistages.([]interface{}) + operations := []*apigateway.PatchOperation{} + + for _, v := range stages { + sv := v.(map[string]interface{}) + + operations = append(operations, &apigateway.PatchOperation{ + Op: aws.String("remove"), + Path: aws.String("/apiStages"), + Value: aws.String(fmt.Sprintf("%s:%s", sv["api_id"].(string), sv["stage"].(string))), + }) + } + + _, err := conn.UpdateUsagePlan(&apigateway.UpdateUsagePlanInput{ + UsagePlanId: aws.String(d.Id()), + PatchOperations: operations, + }) + if err != nil { + return fmt.Errorf("Error removing API Stages associated with Usage Plan: %s", err) + } + } + + log.Printf("[DEBUG] Deleting API Gateway Usage Plan: %s", d.Id()) + + return resource.Retry(5*time.Minute, func() *resource.RetryError { + _, err := conn.DeleteUsagePlan(&apigateway.DeleteUsagePlanInput{ + UsagePlanId: aws.String(d.Id()), + }) + + if err == nil { + return nil + } + + return resource.NonRetryableError(err) + }) +} diff --git a/builtin/providers/aws/resource_aws_api_gateway_usage_plan_key.go b/builtin/providers/aws/resource_aws_api_gateway_usage_plan_key.go new file mode 100644 index 0000000000..75e7bbefde --- /dev/null +++ b/builtin/providers/aws/resource_aws_api_gateway_usage_plan_key.go @@ -0,0 +1,112 @@ +package aws + +import ( + "fmt" + "log" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/service/apigateway" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsApiGatewayUsagePlanKey() 
*schema.Resource { + return &schema.Resource{ + Create: resourceAwsApiGatewayUsagePlanKeyCreate, + Read: resourceAwsApiGatewayUsagePlanKeyRead, + Delete: resourceAwsApiGatewayUsagePlanKeyDelete, + + Schema: map[string]*schema.Schema{ + "key_id": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "key_type": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "usage_plan_id": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "name": { + Type: schema.TypeString, + Computed: true, + }, + + "value": { + Type: schema.TypeString, + Computed: true, + }, + }, + } +} + +func resourceAwsApiGatewayUsagePlanKeyCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).apigateway + log.Print("[DEBUG] Creating API Gateway Usage Plan Key") + + params := &apigateway.CreateUsagePlanKeyInput{ + KeyId: aws.String(d.Get("key_id").(string)), + KeyType: aws.String(d.Get("key_type").(string)), + UsagePlanId: aws.String(d.Get("usage_plan_id").(string)), + } + + up, err := conn.CreateUsagePlanKey(params) + if err != nil { + return fmt.Errorf("Error creating API Gateway Usage Plan Key: %s", err) + } + + d.SetId(*up.Id) + + return resourceAwsApiGatewayUsagePlanKeyRead(d, meta) +} + +func resourceAwsApiGatewayUsagePlanKeyRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).apigateway + log.Printf("[DEBUG] Reading API Gateway Usage Plan Key: %s", d.Id()) + + up, err := conn.GetUsagePlanKey(&apigateway.GetUsagePlanKeyInput{ + UsagePlanId: aws.String(d.Get("usage_plan_id").(string)), + KeyId: aws.String(d.Get("key_id").(string)), + }) + if err != nil { + if awsErr, ok := err.(awserr.Error); ok && awsErr.Code() == "NotFoundException" { + d.SetId("") + return nil + } + return err + } + + d.Set("name", up.Name) + d.Set("value", up.Value) + + return nil +} + +func resourceAwsApiGatewayUsagePlanKeyDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).apigateway + + log.Printf("[DEBUG] Deleting API Gateway Usage Plan Key: %s", d.Id()) + + return resource.Retry(5*time.Minute, func() *resource.RetryError { + _, err := conn.DeleteUsagePlanKey(&apigateway.DeleteUsagePlanKeyInput{ + UsagePlanId: aws.String(d.Get("usage_plan_id").(string)), + KeyId: aws.String(d.Get("key_id").(string)), + }) + + if err == nil { + return nil + } + + return resource.NonRetryableError(err) + }) +} diff --git a/builtin/providers/aws/resource_aws_api_gateway_usage_plan_key_test.go b/builtin/providers/aws/resource_aws_api_gateway_usage_plan_key_test.go new file mode 100644 index 0000000000..608a88fd2a --- /dev/null +++ b/builtin/providers/aws/resource_aws_api_gateway_usage_plan_key_test.go @@ -0,0 +1,232 @@ +package aws + +import ( + "fmt" + "log" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/service/apigateway" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSAPIGatewayUsagePlanKey_basic(t *testing.T) { + var conf apigateway.UsagePlanKey + name := acctest.RandString(10) + updatedName := acctest.RandString(10) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAPIGatewayUsagePlanKeyDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSApiGatewayUsagePlanKeyBasicConfig(name), + Check: resource.ComposeTestCheckFunc( 
+ testAccCheckAWSAPIGatewayUsagePlanKeyExists("aws_api_gateway_usage_plan_key.main", &conf), + resource.TestCheckResourceAttr("aws_api_gateway_usage_plan_key.main", "key_type", "API_KEY"), + resource.TestCheckResourceAttrSet("aws_api_gateway_usage_plan_key.main", "key_id"), + resource.TestCheckResourceAttrSet("aws_api_gateway_usage_plan_key.main", "key_type"), + resource.TestCheckResourceAttrSet("aws_api_gateway_usage_plan_key.main", "usage_plan_id"), + resource.TestCheckResourceAttrSet("aws_api_gateway_usage_plan_key.main", "name"), + resource.TestCheckResourceAttr("aws_api_gateway_usage_plan_key.main", "value", ""), + ), + }, + { + Config: testAccAWSApiGatewayUsagePlanKeyBasicUpdatedConfig(updatedName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAPIGatewayUsagePlanKeyExists("aws_api_gateway_usage_plan_key.main", &conf), + resource.TestCheckResourceAttr("aws_api_gateway_usage_plan_key.main", "key_type", "API_KEY"), + resource.TestCheckResourceAttrSet("aws_api_gateway_usage_plan_key.main", "key_id"), + resource.TestCheckResourceAttrSet("aws_api_gateway_usage_plan_key.main", "key_type"), + resource.TestCheckResourceAttrSet("aws_api_gateway_usage_plan_key.main", "usage_plan_id"), + resource.TestCheckResourceAttrSet("aws_api_gateway_usage_plan_key.main", "name"), + resource.TestCheckResourceAttr("aws_api_gateway_usage_plan_key.main", "value", ""), + ), + }, + { + Config: testAccAWSApiGatewayUsagePlanKeyBasicConfig(name), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAPIGatewayUsagePlanKeyExists("aws_api_gateway_usage_plan_key.main", &conf), + resource.TestCheckResourceAttr("aws_api_gateway_usage_plan_key.main", "key_type", "API_KEY"), + resource.TestCheckResourceAttrSet("aws_api_gateway_usage_plan_key.main", "key_id"), + resource.TestCheckResourceAttrSet("aws_api_gateway_usage_plan_key.main", "key_type"), + resource.TestCheckResourceAttrSet("aws_api_gateway_usage_plan_key.main", "usage_plan_id"), + resource.TestCheckResourceAttrSet("aws_api_gateway_usage_plan_key.main", "name"), + resource.TestCheckResourceAttr("aws_api_gateway_usage_plan_key.main", "value", ""), + ), + }, + }, + }) +} + +func testAccCheckAWSAPIGatewayUsagePlanKeyExists(n string, res *apigateway.UsagePlanKey) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No API Gateway Usage Plan Key ID is set") + } + + conn := testAccProvider.Meta().(*AWSClient).apigateway + + req := &apigateway.GetUsagePlanKeyInput{ + UsagePlanId: aws.String(rs.Primary.Attributes["usage_plan_id"]), + KeyId: aws.String(rs.Primary.Attributes["key_id"]), + } + up, err := conn.GetUsagePlanKey(req) + if err != nil { + return err + } + + log.Printf("[DEBUG] Reading API Gateway Usage Plan Key: %#v", up) + + if *up.Id != rs.Primary.ID { + return fmt.Errorf("API Gateway Usage Plan Key not found") + } + + *res = *up + + return nil + } +} + +func testAccCheckAWSAPIGatewayUsagePlanKeyDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).apigateway + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_api_gateway_usage_plan_key" { + continue + } + + req := &apigateway.GetUsagePlanKeyInput{ + UsagePlanId: aws.String(rs.Primary.ID), + KeyId: aws.String(rs.Primary.Attributes["key_id"]), + } + describe, err := conn.GetUsagePlanKey(req) + + if err == nil { + if describe.Id != nil && *describe.Id == rs.Primary.ID { + return fmt.Errorf("API 
Gateway Usage Plan Key still exists") + } + } + + aws2err, ok := err.(awserr.Error) + if !ok { + return err + } + if aws2err.Code() != "NotFoundException" { + return err + } + + return nil + } + + return nil +} + +const testAccAWSAPIGatewayUsagePlanKeyConfig = ` +resource "aws_api_gateway_rest_api" "test" { + name = "test" +} + +resource "aws_api_gateway_resource" "test" { + rest_api_id = "${aws_api_gateway_rest_api.test.id}" + parent_id = "${aws_api_gateway_rest_api.test.root_resource_id}" + path_part = "test" +} + +resource "aws_api_gateway_method" "test" { + rest_api_id = "${aws_api_gateway_rest_api.test.id}" + resource_id = "${aws_api_gateway_resource.test.id}" + http_method = "GET" + authorization = "NONE" +} + +resource "aws_api_gateway_method_response" "error" { + rest_api_id = "${aws_api_gateway_rest_api.test.id}" + resource_id = "${aws_api_gateway_resource.test.id}" + http_method = "${aws_api_gateway_method.test.http_method}" + status_code = "400" +} + +resource "aws_api_gateway_integration" "test" { + rest_api_id = "${aws_api_gateway_rest_api.test.id}" + resource_id = "${aws_api_gateway_resource.test.id}" + http_method = "${aws_api_gateway_method.test.http_method}" + + type = "HTTP" + uri = "https://www.google.de" + integration_http_method = "GET" +} + +resource "aws_api_gateway_integration_response" "test" { + rest_api_id = "${aws_api_gateway_rest_api.test.id}" + resource_id = "${aws_api_gateway_resource.test.id}" + http_method = "${aws_api_gateway_integration.test.http_method}" + status_code = "${aws_api_gateway_method_response.error.status_code}" +} + +resource "aws_api_gateway_deployment" "test" { + depends_on = ["aws_api_gateway_integration.test"] + + rest_api_id = "${aws_api_gateway_rest_api.test.id}" + stage_name = "test" + description = "This is a test" + + variables = { + "a" = "2" + } +} + +resource "aws_api_gateway_deployment" "foo" { + depends_on = ["aws_api_gateway_integration.test"] + + rest_api_id = "${aws_api_gateway_rest_api.test.id}" + stage_name = "foo" + description = "This is a prod stage" +} + +resource "aws_api_gateway_usage_plan" "main" { + name = "%s" +} + +resource "aws_api_gateway_usage_plan" "secondary" { + name = "secondary-%s" +} + +resource "aws_api_gateway_api_key" "mykey" { + name = "demo-%s" + + stage_key { + rest_api_id = "${aws_api_gateway_rest_api.test.id}" + stage_name = "${aws_api_gateway_deployment.foo.stage_name}" + } +} +` + +func testAccAWSApiGatewayUsagePlanKeyBasicConfig(rName string) string { + return fmt.Sprintf(testAccAWSAPIGatewayUsagePlanKeyConfig+` +resource "aws_api_gateway_usage_plan_key" "main" { + key_id = "${aws_api_gateway_api_key.mykey.id}" + key_type = "API_KEY" + usage_plan_id = "${aws_api_gateway_usage_plan.main.id}" +} +`, rName, rName, rName) +} + +func testAccAWSApiGatewayUsagePlanKeyBasicUpdatedConfig(rName string) string { + return fmt.Sprintf(testAccAWSAPIGatewayUsagePlanKeyConfig+` +resource "aws_api_gateway_usage_plan_key" "main" { + key_id = "${aws_api_gateway_api_key.mykey.id}" + key_type = "API_KEY" + usage_plan_id = "${aws_api_gateway_usage_plan.secondary.id}" +} +`, rName, rName, rName) +} diff --git a/builtin/providers/aws/resource_aws_api_gateway_usage_plan_test.go b/builtin/providers/aws/resource_aws_api_gateway_usage_plan_test.go new file mode 100644 index 0000000000..13d7afc2db --- /dev/null +++ b/builtin/providers/aws/resource_aws_api_gateway_usage_plan_test.go @@ -0,0 +1,557 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + 
"github.com/aws/aws-sdk-go/service/apigateway" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSAPIGatewayUsagePlan_basic(t *testing.T) { + var conf apigateway.UsagePlan + name := acctest.RandString(10) + updatedName := acctest.RandString(10) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAPIGatewayUsagePlanDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSApiGatewayUsagePlanBasicConfig(name), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAPIGatewayUsagePlanExists("aws_api_gateway_usage_plan.main", &conf), + resource.TestCheckResourceAttr("aws_api_gateway_usage_plan.main", "name", name), + resource.TestCheckResourceAttr("aws_api_gateway_usage_plan.main", "description", ""), + ), + }, + { + Config: testAccAWSApiGatewayUsagePlanBasicUpdatedConfig(updatedName), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("aws_api_gateway_usage_plan.main", "name", updatedName), + resource.TestCheckResourceAttr("aws_api_gateway_usage_plan.main", "description", ""), + ), + }, + { + Config: testAccAWSApiGatewayUsagePlanBasicConfig(name), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAPIGatewayUsagePlanExists("aws_api_gateway_usage_plan.main", &conf), + resource.TestCheckResourceAttr("aws_api_gateway_usage_plan.main", "name", name), + resource.TestCheckResourceAttr("aws_api_gateway_usage_plan.main", "description", ""), + ), + }, + }, + }) +} + +func TestAccAWSAPIGatewayUsagePlan_description(t *testing.T) { + var conf apigateway.UsagePlan + name := acctest.RandString(10) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAPIGatewayUsagePlanDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSApiGatewayUsagePlanBasicConfig(name), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAPIGatewayUsagePlanExists("aws_api_gateway_usage_plan.main", &conf), + resource.TestCheckResourceAttr("aws_api_gateway_usage_plan.main", "description", ""), + ), + }, + { + Config: testAccAWSApiGatewayUsagePlanDescriptionConfig(name), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAPIGatewayUsagePlanExists("aws_api_gateway_usage_plan.main", &conf), + resource.TestCheckResourceAttr("aws_api_gateway_usage_plan.main", "description", "This is a description"), + ), + }, + { + Config: testAccAWSApiGatewayUsagePlanDescriptionUpdatedConfig(name), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("aws_api_gateway_usage_plan.main", "description", "This is a new description"), + ), + }, + { + Config: testAccAWSApiGatewayUsagePlanDescriptionConfig(name), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAPIGatewayUsagePlanExists("aws_api_gateway_usage_plan.main", &conf), + resource.TestCheckResourceAttr("aws_api_gateway_usage_plan.main", "description", "This is a description"), + ), + }, + { + Config: testAccAWSApiGatewayUsagePlanBasicConfig(name), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAPIGatewayUsagePlanExists("aws_api_gateway_usage_plan.main", &conf), + resource.TestCheckResourceAttr("aws_api_gateway_usage_plan.main", "description", ""), + ), + }, + }, + }) +} + +func TestAccAWSAPIGatewayUsagePlan_productCode(t *testing.T) { + var conf apigateway.UsagePlan + name := acctest.RandString(10) + + resource.Test(t, 
resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAPIGatewayUsagePlanDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSApiGatewayUsagePlanBasicConfig(name), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAPIGatewayUsagePlanExists("aws_api_gateway_usage_plan.main", &conf), + resource.TestCheckResourceAttr("aws_api_gateway_usage_plan.main", "product_code", ""), + ), + }, + { + Config: testAccAWSApiGatewayUsagePlanProductCodeConfig(name), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAPIGatewayUsagePlanExists("aws_api_gateway_usage_plan.main", &conf), + resource.TestCheckResourceAttr("aws_api_gateway_usage_plan.main", "product_code", "MYCODE"), + ), + }, + { + Config: testAccAWSApiGatewayUsagePlanProductCodeUpdatedConfig(name), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("aws_api_gateway_usage_plan.main", "product_code", "MYCODE2"), + ), + }, + { + Config: testAccAWSApiGatewayUsagePlanProductCodeConfig(name), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAPIGatewayUsagePlanExists("aws_api_gateway_usage_plan.main", &conf), + resource.TestCheckResourceAttr("aws_api_gateway_usage_plan.main", "product_code", "MYCODE"), + ), + }, + { + Config: testAccAWSApiGatewayUsagePlanBasicConfig(name), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAPIGatewayUsagePlanExists("aws_api_gateway_usage_plan.main", &conf), + resource.TestCheckResourceAttr("aws_api_gateway_usage_plan.main", "product_code", ""), + ), + }, + }, + }) +} + +func TestAccAWSAPIGatewayUsagePlan_throttling(t *testing.T) { + var conf apigateway.UsagePlan + name := acctest.RandString(10) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAPIGatewayUsagePlanDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSApiGatewayUsagePlanBasicConfig(name), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAPIGatewayUsagePlanExists("aws_api_gateway_usage_plan.main", &conf), + resource.TestCheckResourceAttr("aws_api_gateway_usage_plan.main", "name", name), + resource.TestCheckNoResourceAttr("aws_api_gateway_usage_plan.main", "throttle_settings"), + ), + }, + { + Config: testAccAWSApiGatewayUsagePlanThrottlingConfig(name), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAPIGatewayUsagePlanExists("aws_api_gateway_usage_plan.main", &conf), + resource.TestCheckResourceAttr("aws_api_gateway_usage_plan.main", "name", name), + resource.TestCheckResourceAttr("aws_api_gateway_usage_plan.main", "throttle_settings.4173790118.burst_limit", "2"), + resource.TestCheckResourceAttr("aws_api_gateway_usage_plan.main", "throttle_settings.4173790118.rate_limit", "5"), + ), + }, + { + Config: testAccAWSApiGatewayUsagePlanThrottlingModifiedConfig(name), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAPIGatewayUsagePlanExists("aws_api_gateway_usage_plan.main", &conf), + resource.TestCheckResourceAttr("aws_api_gateway_usage_plan.main", "name", name), + resource.TestCheckResourceAttr("aws_api_gateway_usage_plan.main", "throttle_settings.1779463053.burst_limit", "3"), + resource.TestCheckResourceAttr("aws_api_gateway_usage_plan.main", "throttle_settings.1779463053.rate_limit", "6"), + ), + }, + { + Config: testAccAWSApiGatewayUsagePlanBasicConfig(name), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAPIGatewayUsagePlanExists("aws_api_gateway_usage_plan.main", &conf), + 
resource.TestCheckResourceAttr("aws_api_gateway_usage_plan.main", "name", name), + resource.TestCheckNoResourceAttr("aws_api_gateway_usage_plan.main", "throttle_settings"), + ), + }, + }, + }) +} + +func TestAccAWSAPIGatewayUsagePlan_quota(t *testing.T) { + var conf apigateway.UsagePlan + name := acctest.RandString(10) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAPIGatewayUsagePlanDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSApiGatewayUsagePlanBasicConfig(name), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAPIGatewayUsagePlanExists("aws_api_gateway_usage_plan.main", &conf), + resource.TestCheckResourceAttr("aws_api_gateway_usage_plan.main", "name", name), + resource.TestCheckNoResourceAttr("aws_api_gateway_usage_plan.main", "quota_settings"), + ), + }, + { + Config: testAccAWSApiGatewayUsagePlanQuotaConfig(name), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAPIGatewayUsagePlanExists("aws_api_gateway_usage_plan.main", &conf), + resource.TestCheckResourceAttr("aws_api_gateway_usage_plan.main", "name", name), + resource.TestCheckResourceAttr("aws_api_gateway_usage_plan.main", "quota_settings.1956747625.limit", "100"), + resource.TestCheckResourceAttr("aws_api_gateway_usage_plan.main", "quota_settings.1956747625.offset", "6"), + resource.TestCheckResourceAttr("aws_api_gateway_usage_plan.main", "quota_settings.1956747625.period", "WEEK"), + ), + }, + { + Config: testAccAWSApiGatewayUsagePlanQuotaModifiedConfig(name), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAPIGatewayUsagePlanExists("aws_api_gateway_usage_plan.main", &conf), + resource.TestCheckResourceAttr("aws_api_gateway_usage_plan.main", "name", name), + resource.TestCheckResourceAttr("aws_api_gateway_usage_plan.main", "quota_settings.3909168194.limit", "200"), + resource.TestCheckResourceAttr("aws_api_gateway_usage_plan.main", "quota_settings.3909168194.offset", "20"), + resource.TestCheckResourceAttr("aws_api_gateway_usage_plan.main", "quota_settings.3909168194.period", "MONTH"), + ), + }, + { + Config: testAccAWSApiGatewayUsagePlanBasicConfig(name), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAPIGatewayUsagePlanExists("aws_api_gateway_usage_plan.main", &conf), + resource.TestCheckResourceAttr("aws_api_gateway_usage_plan.main", "name", name), + resource.TestCheckNoResourceAttr("aws_api_gateway_usage_plan.main", "quota_settings"), + ), + }, + }, + }) +} + +func TestAccAWSAPIGatewayUsagePlan_apiStages(t *testing.T) { + var conf apigateway.UsagePlan + name := acctest.RandString(10) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAPIGatewayUsagePlanDestroy, + Steps: []resource.TestStep{ + // Create UsagePlan WITH Stages as the API calls are different + // when creating or updating. 
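+      // As a rough sketch (hypothetical values, for illustration only), the
+      // create path sends the stages inline:
+      //
+      //   conn.CreateUsagePlan(&apigateway.CreateUsagePlanInput{
+      //     Name:      aws.String("example"),
+      //     ApiStages: []*apigateway.ApiStage{{ApiId: aws.String("abc123"), Stage: aws.String("test")}},
+      //   })
+      //
+      // while later changes go through UpdateUsagePlan as add/remove
+      // PatchOperations on "/apiStages", valued as "apiId:stage".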
+      {
+        Config: testAccAWSApiGatewayUsagePlanApiStagesConfig(name),
+        Check: resource.ComposeTestCheckFunc(
+          testAccCheckAWSAPIGatewayUsagePlanExists("aws_api_gateway_usage_plan.main", &conf),
+          resource.TestCheckResourceAttr("aws_api_gateway_usage_plan.main", "name", name),
+          resource.TestCheckResourceAttr("aws_api_gateway_usage_plan.main", "api_stages.0.stage", "test"),
+        ),
+      },
+      // Handle api stages removal
+      {
+        Config: testAccAWSApiGatewayUsagePlanBasicConfig(name),
+        Check: resource.ComposeTestCheckFunc(
+          testAccCheckAWSAPIGatewayUsagePlanExists("aws_api_gateway_usage_plan.main", &conf),
+          resource.TestCheckResourceAttr("aws_api_gateway_usage_plan.main", "name", name),
+          resource.TestCheckNoResourceAttr("aws_api_gateway_usage_plan.main", "api_stages"),
+        ),
+      },
+      // Handle api stages additions
+      {
+        Config: testAccAWSApiGatewayUsagePlanApiStagesConfig(name),
+        Check: resource.ComposeTestCheckFunc(
+          testAccCheckAWSAPIGatewayUsagePlanExists("aws_api_gateway_usage_plan.main", &conf),
+          resource.TestCheckResourceAttr("aws_api_gateway_usage_plan.main", "name", name),
+          resource.TestCheckResourceAttr("aws_api_gateway_usage_plan.main", "api_stages.0.stage", "test"),
+        ),
+      },
+      // Handle api stages updates
+      {
+        Config: testAccAWSApiGatewayUsagePlanApiStagesModifiedConfig(name),
+        Check: resource.ComposeTestCheckFunc(
+          testAccCheckAWSAPIGatewayUsagePlanExists("aws_api_gateway_usage_plan.main", &conf),
+          resource.TestCheckResourceAttr("aws_api_gateway_usage_plan.main", "name", name),
+          resource.TestCheckResourceAttr("aws_api_gateway_usage_plan.main", "api_stages.0.stage", "foo"),
+        ),
+      },
+      {
+        Config: testAccAWSApiGatewayUsagePlanBasicConfig(name),
+        Check: resource.ComposeTestCheckFunc(
+          testAccCheckAWSAPIGatewayUsagePlanExists("aws_api_gateway_usage_plan.main", &conf),
+          resource.TestCheckResourceAttr("aws_api_gateway_usage_plan.main", "name", name),
+          resource.TestCheckNoResourceAttr("aws_api_gateway_usage_plan.main", "api_stages"),
+        ),
+      },
+    },
+  })
+}
+
+func testAccCheckAWSAPIGatewayUsagePlanExists(n string, res *apigateway.UsagePlan) resource.TestCheckFunc {
+  return func(s *terraform.State) error {
+    rs, ok := s.RootModule().Resources[n]
+    if !ok {
+      return fmt.Errorf("Not found: %s", n)
+    }
+
+    if rs.Primary.ID == "" {
+      return fmt.Errorf("No API Gateway Usage Plan ID is set")
+    }
+
+    conn := testAccProvider.Meta().(*AWSClient).apigateway
+
+    req := &apigateway.GetUsagePlanInput{
+      UsagePlanId: aws.String(rs.Primary.ID),
+    }
+    up, err := conn.GetUsagePlan(req)
+    if err != nil {
+      return err
+    }
+
+    if *up.Id != rs.Primary.ID {
+      return fmt.Errorf("API Gateway Usage Plan not found")
+    }
+
+    *res = *up
+
+    return nil
+  }
+}
+
+func testAccCheckAWSAPIGatewayUsagePlanDestroy(s *terraform.State) error {
+  conn := testAccProvider.Meta().(*AWSClient).apigateway
+
+  for _, rs := range s.RootModule().Resources {
+    if rs.Type != "aws_api_gateway_usage_plan" {
+      continue
+    }
+
+    // Look the usage plan up by its own ID, not by the REST API's ID.
+    req := &apigateway.GetUsagePlanInput{
+      UsagePlanId: aws.String(rs.Primary.ID),
+    }
+    describe, err := conn.GetUsagePlan(req)
+
+    if err == nil {
+      if describe.Id != nil && *describe.Id == rs.Primary.ID {
+        return fmt.Errorf("API Gateway Usage Plan still exists")
+      }
+    }
+
+    aws2err, ok := err.(awserr.Error)
+    if !ok {
+      return err
+    }
+    if aws2err.Code() != "NotFoundException" {
+      return err
+    }
+
+    return nil
+  }
+
+  return nil
+}
+
+const testAccAWSAPIGatewayUsagePlanConfig = `
+resource "aws_api_gateway_rest_api" "test" {
+  name = "test"
+}
+
+resource 
"aws_api_gateway_resource" "test" { + rest_api_id = "${aws_api_gateway_rest_api.test.id}" + parent_id = "${aws_api_gateway_rest_api.test.root_resource_id}" + path_part = "test" +} + +resource "aws_api_gateway_method" "test" { + rest_api_id = "${aws_api_gateway_rest_api.test.id}" + resource_id = "${aws_api_gateway_resource.test.id}" + http_method = "GET" + authorization = "NONE" +} + +resource "aws_api_gateway_method_response" "error" { + rest_api_id = "${aws_api_gateway_rest_api.test.id}" + resource_id = "${aws_api_gateway_resource.test.id}" + http_method = "${aws_api_gateway_method.test.http_method}" + status_code = "400" +} + +resource "aws_api_gateway_integration" "test" { + rest_api_id = "${aws_api_gateway_rest_api.test.id}" + resource_id = "${aws_api_gateway_resource.test.id}" + http_method = "${aws_api_gateway_method.test.http_method}" + + type = "HTTP" + uri = "https://www.google.de" + integration_http_method = "GET" +} + +resource "aws_api_gateway_integration_response" "test" { + rest_api_id = "${aws_api_gateway_rest_api.test.id}" + resource_id = "${aws_api_gateway_resource.test.id}" + http_method = "${aws_api_gateway_integration.test.http_method}" + status_code = "${aws_api_gateway_method_response.error.status_code}" +} + +resource "aws_api_gateway_deployment" "test" { + depends_on = ["aws_api_gateway_integration.test"] + + rest_api_id = "${aws_api_gateway_rest_api.test.id}" + stage_name = "test" + description = "This is a test" + + variables = { + "a" = "2" + } +} + +resource "aws_api_gateway_deployment" "foo" { + depends_on = ["aws_api_gateway_integration.test"] + + rest_api_id = "${aws_api_gateway_rest_api.test.id}" + stage_name = "foo" + description = "This is a prod stage" +} +` + +func testAccAWSApiGatewayUsagePlanBasicConfig(rName string) string { + return fmt.Sprintf(testAccAWSAPIGatewayUsagePlanConfig+` +resource "aws_api_gateway_usage_plan" "main" { + name = "%s" +} +`, rName) +} + +func testAccAWSApiGatewayUsagePlanDescriptionConfig(rName string) string { + return fmt.Sprintf(testAccAWSAPIGatewayUsagePlanConfig+` +resource "aws_api_gateway_usage_plan" "main" { + name = "%s" + description = "This is a description" +} +`, rName) +} + +func testAccAWSApiGatewayUsagePlanDescriptionUpdatedConfig(rName string) string { + return fmt.Sprintf(testAccAWSAPIGatewayUsagePlanConfig+` +resource "aws_api_gateway_usage_plan" "main" { + name = "%s" + description = "This is a new description" +} +`, rName) +} + +func testAccAWSApiGatewayUsagePlanProductCodeConfig(rName string) string { + return fmt.Sprintf(testAccAWSAPIGatewayUsagePlanConfig+` +resource "aws_api_gateway_usage_plan" "main" { + name = "%s" + product_code = "MYCODE" +} +`, rName) +} + +func testAccAWSApiGatewayUsagePlanProductCodeUpdatedConfig(rName string) string { + return fmt.Sprintf(testAccAWSAPIGatewayUsagePlanConfig+` +resource "aws_api_gateway_usage_plan" "main" { + name = "%s" + product_code = "MYCODE2" +} +`, rName) +} + +func testAccAWSApiGatewayUsagePlanBasicUpdatedConfig(rName string) string { + return fmt.Sprintf(testAccAWSAPIGatewayUsagePlanConfig+` +resource "aws_api_gateway_usage_plan" "main" { + name = "%s" +} +`, rName) +} + +func testAccAWSApiGatewayUsagePlanThrottlingConfig(rName string) string { + return fmt.Sprintf(testAccAWSAPIGatewayUsagePlanConfig+` +resource "aws_api_gateway_usage_plan" "main" { + name = "%s" + + throttle_settings { + burst_limit = 2 + rate_limit = 5 + } +} +`, rName) +} + +func testAccAWSApiGatewayUsagePlanThrottlingModifiedConfig(rName string) string { + return 
fmt.Sprintf(testAccAWSAPIGatewayUsagePlanConfig+` +resource "aws_api_gateway_usage_plan" "main" { + name = "%s" + + throttle_settings { + burst_limit = 3 + rate_limit = 6 + } +} +`, rName) +} + +func testAccAWSApiGatewayUsagePlanQuotaConfig(rName string) string { + return fmt.Sprintf(testAccAWSAPIGatewayUsagePlanConfig+` +resource "aws_api_gateway_usage_plan" "main" { + name = "%s" + + quota_settings { + limit = 100 + offset = 6 + period = "WEEK" + } +} +`, rName) +} + +func testAccAWSApiGatewayUsagePlanQuotaModifiedConfig(rName string) string { + return fmt.Sprintf(testAccAWSAPIGatewayUsagePlanConfig+` +resource "aws_api_gateway_usage_plan" "main" { + name = "%s" + + quota_settings { + limit = 200 + offset = 20 + period = "MONTH" + } +} +`, rName) +} + +func testAccAWSApiGatewayUsagePlanApiStagesConfig(rName string) string { + return fmt.Sprintf(testAccAWSAPIGatewayUsagePlanConfig+` +resource "aws_api_gateway_usage_plan" "main" { + name = "%s" + + api_stages { + api_id = "${aws_api_gateway_rest_api.test.id}" + stage = "${aws_api_gateway_deployment.test.stage_name}" + } +} +`, rName) +} + +func testAccAWSApiGatewayUsagePlanApiStagesModifiedConfig(rName string) string { + return fmt.Sprintf(testAccAWSAPIGatewayUsagePlanConfig+` +resource "aws_api_gateway_usage_plan" "main" { + name = "%s" + + api_stages { + api_id = "${aws_api_gateway_rest_api.test.id}" + stage = "${aws_api_gateway_deployment.foo.stage_name}" + } +} +`, rName) +} diff --git a/builtin/providers/aws/resource_aws_autoscaling_attachment.go b/builtin/providers/aws/resource_aws_autoscaling_attachment.go index d3921d9bac..c04b9d7829 100644 --- a/builtin/providers/aws/resource_aws_autoscaling_attachment.go +++ b/builtin/providers/aws/resource_aws_autoscaling_attachment.go @@ -18,16 +18,22 @@ func resourceAwsAutoscalingAttachment() *schema.Resource { Delete: resourceAwsAutoscalingAttachmentDelete, Schema: map[string]*schema.Schema{ - "autoscaling_group_name": &schema.Schema{ + "autoscaling_group_name": { Type: schema.TypeString, ForceNew: true, Required: true, }, - "elb": &schema.Schema{ + "elb": { Type: schema.TypeString, ForceNew: true, - Required: true, + Optional: true, + }, + + "alb_target_group_arn": { + Type: schema.TypeString, + ForceNew: true, + Optional: true, }, }, } @@ -36,17 +42,31 @@ func resourceAwsAutoscalingAttachment() *schema.Resource { func resourceAwsAutoscalingAttachmentCreate(d *schema.ResourceData, meta interface{}) error { asgconn := meta.(*AWSClient).autoscalingconn asgName := d.Get("autoscaling_group_name").(string) - elbName := d.Get("elb").(string) - attachElbInput := &autoscaling.AttachLoadBalancersInput{ - AutoScalingGroupName: aws.String(asgName), - LoadBalancerNames: []*string{aws.String(elbName)}, + if v, ok := d.GetOk("elb"); ok { + attachOpts := &autoscaling.AttachLoadBalancersInput{ + AutoScalingGroupName: aws.String(asgName), + LoadBalancerNames: []*string{aws.String(v.(string))}, + } + + log.Printf("[INFO] registering asg %s with ELBs %s", asgName, v.(string)) + + if _, err := asgconn.AttachLoadBalancers(attachOpts); err != nil { + return errwrap.Wrapf(fmt.Sprintf("Failure attaching AutoScaling Group %s with Elastic Load Balancer: %s: {{err}}", asgName, v.(string)), err) + } } - log.Printf("[INFO] registering asg %s with ELBs %s", asgName, elbName) + if v, ok := d.GetOk("alb_target_group_arn"); ok { + attachOpts := &autoscaling.AttachLoadBalancerTargetGroupsInput{ + AutoScalingGroupName: aws.String(asgName), + TargetGroupARNs: []*string{aws.String(v.(string))}, + } - if _, err := 
asgconn.AttachLoadBalancers(attachElbInput); err != nil {
-    return errwrap.Wrapf(fmt.Sprintf("Failure attaching AutoScaling Group %s with Elastic Load Balancer: %s: {{err}}", asgName, elbName), err)
+    log.Printf("[INFO] registering asg %s with ALB Target Group %s", asgName, v.(string))
+
+    if _, err := asgconn.AttachLoadBalancerTargetGroups(attachOpts); err != nil {
+      return errwrap.Wrapf(fmt.Sprintf("Failure attaching AutoScaling Group %s with ALB Target Group: %s: {{err}}", asgName, v.(string)), err)
+    }
   }
 
   d.SetId(resource.PrefixedUniqueId(fmt.Sprintf("%s-", asgName)))
@@ -57,7 +77,6 @@ func resourceAwsAutoscalingAttachmentCreate(d *schema.ResourceData, meta interfa
 func resourceAwsAutoscalingAttachmentRead(d *schema.ResourceData, meta interface{}) error {
   asgconn := meta.(*AWSClient).autoscalingconn
   asgName := d.Get("autoscaling_group_name").(string)
-  elbName := d.Get("elb").(string)
 
   // Retrieve the ASG properites to get list of associated ELBs
   asg, err := getAwsAutoscalingGroup(asgName, asgconn)
@@ -71,18 +90,36 @@ func resourceAwsAutoscalingAttachmentRead(d *schema.ResourceData, meta interface
     return nil
   }
 
-  found := false
-  for _, i := range asg.LoadBalancerNames {
-    if elbName == *i {
-      d.Set("elb", elbName)
-      found = true
-      break
+  if v, ok := d.GetOk("elb"); ok {
+    found := false
+    for _, i := range asg.LoadBalancerNames {
+      if v.(string) == *i {
+        d.Set("elb", v.(string))
+        found = true
+        break
+      }
+    }
+
+    if !found {
+      log.Printf("[WARN] Association for %s was not found in ASG association", v.(string))
+      d.SetId("")
     }
   }
 
-  if !found {
-    log.Printf("[WARN] Association for %s was not found in ASG assocation", elbName)
-    d.SetId("")
+  if v, ok := d.GetOk("alb_target_group_arn"); ok {
+    found := false
+    for _, i := range asg.TargetGroupARNs {
+      if v.(string) == *i {
+        d.Set("alb_target_group_arn", v.(string))
+        found = true
+        break
+      }
+    }
+
+    if !found {
+      log.Printf("[WARN] Association for %s was not found in ASG association", v.(string))
+      d.SetId("")
+    }
   }
 
   return nil
@@ -91,17 +128,29 @@ func resourceAwsAutoscalingAttachmentRead(d *schema.ResourceData, meta interface
 func resourceAwsAutoscalingAttachmentDelete(d *schema.ResourceData, meta interface{}) error {
   asgconn := meta.(*AWSClient).autoscalingconn
   asgName := d.Get("autoscaling_group_name").(string)
-  elbName := d.Get("elb").(string)
 
-  log.Printf("[INFO] Deleting ELB %s association from: %s", elbName, asgName)
+  if v, ok := d.GetOk("elb"); ok {
+    detachOpts := &autoscaling.DetachLoadBalancersInput{
+      AutoScalingGroupName: aws.String(asgName),
+      LoadBalancerNames:    []*string{aws.String(v.(string))},
+    }
 
-  detachOpts := &autoscaling.DetachLoadBalancersInput{
-    AutoScalingGroupName: aws.String(asgName),
-    LoadBalancerNames:    []*string{aws.String(elbName)},
+    log.Printf("[INFO] Deleting ELB %s association from: %s", v.(string), asgName)
+    if _, err := asgconn.DetachLoadBalancers(detachOpts); err != nil {
+      return errwrap.Wrapf(fmt.Sprintf("Failure detaching AutoScaling Group %s with Elastic Load Balancer: %s: {{err}}", asgName, v.(string)), err)
+    }
   }
 
-  if _, err := asgconn.DetachLoadBalancers(detachOpts); err != nil {
-    return errwrap.Wrapf(fmt.Sprintf("Failure detaching AutoScaling Group %s with Elastic Load Balancer: %s: {{err}}", asgName, elbName), err)
+  if v, ok := d.GetOk("alb_target_group_arn"); ok {
+    detachOpts := &autoscaling.DetachLoadBalancerTargetGroupsInput{
+      AutoScalingGroupName: aws.String(asgName),
+      TargetGroupARNs:      []*string{aws.String(v.(string))},
+    }
+
+    log.Printf("[INFO] Deleting ALB Target Group %s 
association from: %s", v.(string), asgName) + if _, err := asgconn.DetachLoadBalancerTargetGroups(detachOpts); err != nil { + return errwrap.Wrapf(fmt.Sprintf("Failure detaching AutoScaling Group %s with ALB Target Group: %s: {{err}}", asgName, v.(string)), err) + } } return nil diff --git a/builtin/providers/aws/resource_aws_autoscaling_attachment_test.go b/builtin/providers/aws/resource_aws_autoscaling_attachment_test.go index cf1a239e3b..229cd41674 100644 --- a/builtin/providers/aws/resource_aws_autoscaling_attachment_test.go +++ b/builtin/providers/aws/resource_aws_autoscaling_attachment_test.go @@ -11,7 +11,7 @@ import ( "github.com/hashicorp/terraform/terraform" ) -func TestAccAwsAutoscalingAttachment_basic(t *testing.T) { +func TestAccAwsAutoscalingAttachment_elb(t *testing.T) { rInt := acctest.RandInt() @@ -19,45 +19,109 @@ func TestAccAwsAutoscalingAttachment_basic(t *testing.T) { PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccAWSAutoscalingAttachment_basic(rInt), + { + Config: testAccAWSAutoscalingAttachment_elb(rInt), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSAutocalingAttachmentExists("aws_autoscaling_group.asg", 0), + testAccCheckAWSAutocalingElbAttachmentExists("aws_autoscaling_group.asg", 0), ), }, - // Add in one association - resource.TestStep{ - Config: testAccAWSAutoscalingAttachment_associated(rInt), + { + Config: testAccAWSAutoscalingAttachment_elb_associated(rInt), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSAutocalingAttachmentExists("aws_autoscaling_group.asg", 1), + testAccCheckAWSAutocalingElbAttachmentExists("aws_autoscaling_group.asg", 1), ), }, - // Test adding a 2nd - resource.TestStep{ - Config: testAccAWSAutoscalingAttachment_double_associated(rInt), + { + Config: testAccAWSAutoscalingAttachment_elb_double_associated(rInt), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSAutocalingAttachmentExists("aws_autoscaling_group.asg", 2), + testAccCheckAWSAutocalingElbAttachmentExists("aws_autoscaling_group.asg", 2), ), }, - // Now remove that newest one - resource.TestStep{ - Config: testAccAWSAutoscalingAttachment_associated(rInt), + { + Config: testAccAWSAutoscalingAttachment_elb_associated(rInt), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSAutocalingAttachmentExists("aws_autoscaling_group.asg", 1), + testAccCheckAWSAutocalingElbAttachmentExists("aws_autoscaling_group.asg", 1), ), }, - // Now remove them both - resource.TestStep{ - Config: testAccAWSAutoscalingAttachment_basic(rInt), + { + Config: testAccAWSAutoscalingAttachment_elb(rInt), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSAutocalingAttachmentExists("aws_autoscaling_group.asg", 0), + testAccCheckAWSAutocalingElbAttachmentExists("aws_autoscaling_group.asg", 0), ), }, }, }) } -func testAccCheckAWSAutocalingAttachmentExists(asgname string, loadBalancerCount int) resource.TestCheckFunc { +func TestAccAwsAutoscalingAttachment_albTargetGroup(t *testing.T) { + + rInt := acctest.RandInt() + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccAWSAutoscalingAttachment_alb(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAutocalingAlbAttachmentExists("aws_autoscaling_group.asg", 0), + ), + }, + { + Config: testAccAWSAutoscalingAttachment_alb_associated(rInt), + Check: resource.ComposeTestCheckFunc( + 
testAccCheckAWSAutocalingAlbAttachmentExists("aws_autoscaling_group.asg", 1),
+        ),
+      },
+      {
+        Config: testAccAWSAutoscalingAttachment_alb_double_associated(rInt),
+        Check: resource.ComposeTestCheckFunc(
+          testAccCheckAWSAutocalingAlbAttachmentExists("aws_autoscaling_group.asg", 2),
+        ),
+      },
+      {
+        Config: testAccAWSAutoscalingAttachment_alb_associated(rInt),
+        Check: resource.ComposeTestCheckFunc(
+          testAccCheckAWSAutocalingAlbAttachmentExists("aws_autoscaling_group.asg", 1),
+        ),
+      },
+      {
+        Config: testAccAWSAutoscalingAttachment_alb(rInt),
+        Check: resource.ComposeTestCheckFunc(
+          testAccCheckAWSAutocalingAlbAttachmentExists("aws_autoscaling_group.asg", 0),
+        ),
+      },
+    },
+  })
+}
+
+func testAccCheckAWSAutocalingElbAttachmentExists(asgname string, loadBalancerCount int) resource.TestCheckFunc {
+  return func(s *terraform.State) error {
+    rs, ok := s.RootModule().Resources[asgname]
+    if !ok {
+      return fmt.Errorf("Not found: %s", asgname)
+    }
+
+    conn := testAccProvider.Meta().(*AWSClient).autoscalingconn
+    asg := rs.Primary.ID
+
+    actual, err := conn.DescribeAutoScalingGroups(&autoscaling.DescribeAutoScalingGroupsInput{
+      AutoScalingGroupNames: []*string{aws.String(asg)},
+    })
+
+    if err != nil {
+      return fmt.Errorf("Received an error when attempting to load %s: %s", asg, err)
+    }
+
+    if loadBalancerCount != len(actual.AutoScalingGroups[0].LoadBalancerNames) {
+      return fmt.Errorf("Error: ASG has the wrong number of load balancers associated. Expected [%d] but got [%d]", loadBalancerCount, len(actual.AutoScalingGroups[0].LoadBalancerNames))
+    }
+
+    return nil
+  }
+}
+
+func testAccCheckAWSAutocalingAlbAttachmentExists(asgname string, targetGroupCount int) resource.TestCheckFunc {
   return func(s *terraform.State) error {
     rs, ok := s.RootModule().Resources[asgname]
     if !ok {
@@ -75,15 +139,108 @@ func testAccCheckAWSAutocalingAttachmentExists(asgname string, loadBalancerCount
       return fmt.Errorf("Recieved an error when attempting to load %s: %s", asg, err)
     }
 
-    if loadBalancerCount != len(actual.AutoScalingGroups[0].LoadBalancerNames) {
-      return fmt.Errorf("Error: ASG has the wrong number of load balacners associated. Expected [%d] but got [%d]", loadBalancerCount, len(actual.AutoScalingGroups[0].LoadBalancerNames))
+    if targetGroupCount != len(actual.AutoScalingGroups[0].TargetGroupARNs) {
+      return fmt.Errorf("Error: ASG has the wrong number of Target Groups associated. 
Expected [%d] but got [%d]", targetGroupCount, len(actual.AutoScalingGroups[0].TargetGroupARNs)) } return nil } } -func testAccAWSAutoscalingAttachment_basic(rInt int) string { +func testAccAWSAutoscalingAttachment_alb(rInt int) string { + return fmt.Sprintf(` +resource "aws_alb_target_group" "test" { + name = "test-alb-%d" + port = 443 + protocol = "HTTPS" + vpc_id = "${aws_vpc.test.id}" + + deregistration_delay = 200 + + stickiness { + type = "lb_cookie" + cookie_duration = 10000 + } + + health_check { + path = "/health" + interval = 60 + port = 8081 + protocol = "HTTP" + timeout = 3 + healthy_threshold = 3 + unhealthy_threshold = 3 + matcher = "200-299" + } + + tags { + TestName = "TestAccAWSALBTargetGroup_basic" + } +} + +resource "aws_alb_target_group" "another_test" { + name = "atest-alb-%d" + port = 443 + protocol = "HTTPS" + vpc_id = "${aws_vpc.test.id}" + + deregistration_delay = 200 + + stickiness { + type = "lb_cookie" + cookie_duration = 10000 + } + + health_check { + path = "/health" + interval = 60 + port = 8081 + protocol = "HTTP" + timeout = 3 + healthy_threshold = 3 + unhealthy_threshold = 3 + matcher = "200-299" + } + + tags { + TestName = "TestAccAWSALBTargetGroup_basic" + } +} + +resource "aws_autoscaling_group" "asg" { + availability_zones = ["us-west-2a", "us-west-2b", "us-west-2c"] + name = "asg-lb-assoc-terraform-test_%d" + max_size = 1 + min_size = 0 + desired_capacity = 0 + health_check_grace_period = 300 + force_delete = true + launch_configuration = "${aws_launch_configuration.as_conf.name}" + + tag { + key = "Name" + value = "terraform-asg-lg-assoc-test" + propagate_at_launch = true + } +} + +resource "aws_launch_configuration" "as_conf" { + name = "test_config_%d" + image_id = "ami-f34032c3" + instance_type = "t1.micro" +} + +resource "aws_vpc" "test" { + cidr_block = "10.0.0.0/16" + + tags { + TestName = "TestAccAWSALBTargetGroup_basic" + } +} +`, rInt, rInt, rInt, rInt) +} + +func testAccAWSAutoscalingAttachment_elb(rInt int) string { return fmt.Sprintf(` resource "aws_elb" "foo" { availability_zones = ["us-west-2a", "us-west-2b", "us-west-2c"] @@ -131,18 +288,34 @@ resource "aws_autoscaling_group" "asg" { }`, rInt, rInt) } -func testAccAWSAutoscalingAttachment_associated(rInt int) string { - return testAccAWSAutoscalingAttachment_basic(rInt) + ` +func testAccAWSAutoscalingAttachment_elb_associated(rInt int) string { + return testAccAWSAutoscalingAttachment_elb(rInt) + ` resource "aws_autoscaling_attachment" "asg_attachment_foo" { autoscaling_group_name = "${aws_autoscaling_group.asg.id}" elb = "${aws_elb.foo.id}" }` } -func testAccAWSAutoscalingAttachment_double_associated(rInt int) string { - return testAccAWSAutoscalingAttachment_associated(rInt) + ` +func testAccAWSAutoscalingAttachment_alb_associated(rInt int) string { + return testAccAWSAutoscalingAttachment_alb(rInt) + ` +resource "aws_autoscaling_attachment" "asg_attachment_foo" { + autoscaling_group_name = "${aws_autoscaling_group.asg.id}" + alb_target_group_arn = "${aws_alb_target_group.test.arn}" +}` +} + +func testAccAWSAutoscalingAttachment_elb_double_associated(rInt int) string { + return testAccAWSAutoscalingAttachment_elb_associated(rInt) + ` resource "aws_autoscaling_attachment" "asg_attachment_bar" { autoscaling_group_name = "${aws_autoscaling_group.asg.id}" elb = "${aws_elb.bar.id}" }` } + +func testAccAWSAutoscalingAttachment_alb_double_associated(rInt int) string { + return testAccAWSAutoscalingAttachment_alb_associated(rInt) + ` +resource "aws_autoscaling_attachment" 
"asg_attachment_bar" { + autoscaling_group_name = "${aws_autoscaling_group.asg.id}" + alb_target_group_arn = "${aws_alb_target_group.another_test.arn}" +}` +} diff --git a/builtin/providers/aws/resource_aws_autoscaling_group.go b/builtin/providers/aws/resource_aws_autoscaling_group.go index 6a35164b06..a5e10e0340 100644 --- a/builtin/providers/aws/resource_aws_autoscaling_group.go +++ b/builtin/providers/aws/resource_aws_autoscaling_group.go @@ -29,10 +29,11 @@ func resourceAwsAutoscalingGroup() *schema.Resource { Schema: map[string]*schema.Schema{ "name": &schema.Schema{ - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + ConflictsWith: []string{"name_prefix"}, ValidateFunc: func(v interface{}, k string) (ws []string, errors []error) { // https://github.com/boto/botocore/blob/9f322b1/botocore/data/autoscaling/2011-01-01/service-2.json#L1862-L1873 value := v.(string) @@ -43,58 +44,71 @@ func resourceAwsAutoscalingGroup() *schema.Resource { return }, }, + "name_prefix": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ValidateFunc: func(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + if len(value) > 229 { + errors = append(errors, fmt.Errorf( + "%q cannot be longer than 229 characters, name is limited to 255", k)) + } + return + }, + }, - "launch_configuration": &schema.Schema{ + "launch_configuration": { Type: schema.TypeString, Required: true, }, - "desired_capacity": &schema.Schema{ + "desired_capacity": { Type: schema.TypeInt, Optional: true, Computed: true, }, - "min_elb_capacity": &schema.Schema{ + "min_elb_capacity": { Type: schema.TypeInt, Optional: true, }, - "min_size": &schema.Schema{ + "min_size": { Type: schema.TypeInt, Required: true, }, - "max_size": &schema.Schema{ + "max_size": { Type: schema.TypeInt, Required: true, }, - "default_cooldown": &schema.Schema{ + "default_cooldown": { Type: schema.TypeInt, Optional: true, Computed: true, }, - "force_delete": &schema.Schema{ + "force_delete": { Type: schema.TypeBool, Optional: true, Default: false, }, - "health_check_grace_period": &schema.Schema{ + "health_check_grace_period": { Type: schema.TypeInt, Optional: true, Default: 300, }, - "health_check_type": &schema.Schema{ + "health_check_type": { Type: schema.TypeString, Optional: true, Computed: true, }, - "availability_zones": &schema.Schema{ + "availability_zones": { Type: schema.TypeSet, Optional: true, Computed: true, @@ -102,12 +116,12 @@ func resourceAwsAutoscalingGroup() *schema.Resource { Set: schema.HashString, }, - "placement_group": &schema.Schema{ + "placement_group": { Type: schema.TypeString, Optional: true, }, - "load_balancers": &schema.Schema{ + "load_balancers": { Type: schema.TypeSet, Optional: true, Computed: true, @@ -115,7 +129,7 @@ func resourceAwsAutoscalingGroup() *schema.Resource { Set: schema.HashString, }, - "vpc_zone_identifier": &schema.Schema{ + "vpc_zone_identifier": { Type: schema.TypeSet, Optional: true, Computed: true, @@ -123,13 +137,13 @@ func resourceAwsAutoscalingGroup() *schema.Resource { Set: schema.HashString, }, - "termination_policies": &schema.Schema{ + "termination_policies": { Type: schema.TypeList, Optional: true, Elem: &schema.Schema{Type: schema.TypeString}, }, - "wait_for_capacity_timeout": &schema.Schema{ + "wait_for_capacity_timeout": { Type: schema.TypeString, Optional: true, Default: "10m", @@ -148,12 +162,12 @@ func resourceAwsAutoscalingGroup() 
*schema.Resource { }, }, - "wait_for_elb_capacity": &schema.Schema{ + "wait_for_elb_capacity": { Type: schema.TypeInt, Optional: true, }, - "enabled_metrics": &schema.Schema{ + "enabled_metrics": { Type: schema.TypeSet, Optional: true, Elem: &schema.Schema{Type: schema.TypeString}, @@ -167,31 +181,32 @@ func resourceAwsAutoscalingGroup() *schema.Resource { Set: schema.HashString, }, - "metrics_granularity": &schema.Schema{ + "metrics_granularity": { Type: schema.TypeString, Optional: true, Default: "1Minute", }, - "protect_from_scale_in": &schema.Schema{ + "protect_from_scale_in": { Type: schema.TypeBool, Optional: true, Default: false, }, - "target_group_arns": &schema.Schema{ + "target_group_arns": { Type: schema.TypeSet, Optional: true, + Computed: true, Elem: &schema.Schema{Type: schema.TypeString}, Set: schema.HashString, }, - "arn": &schema.Schema{ + "arn": { Type: schema.TypeString, Computed: true, }, - "initial_lifecycle_hook": &schema.Schema{ + "initial_lifecycle_hook": { Type: schema.TypeSet, Optional: true, Elem: &schema.Resource{ @@ -282,7 +297,11 @@ func resourceAwsAutoscalingGroupCreate(d *schema.ResourceData, meta interface{}) if v, ok := d.GetOk("name"); ok { asgName = v.(string) } else { - asgName = resource.PrefixedUniqueId("tf-asg-") + if v, ok := d.GetOk("name_prefix"); ok { + asgName = resource.PrefixedUniqueId(v.(string)) + } else { + asgName = resource.PrefixedUniqueId("tf-asg-") + } d.Set("name", asgName) } @@ -427,6 +446,8 @@ func resourceAwsAutoscalingGroupRead(d *schema.ResourceData, meta interface{}) e d.Set("health_check_type", g.HealthCheckType) d.Set("launch_configuration", g.LaunchConfigurationName) d.Set("load_balancers", flattenStringList(g.LoadBalancerNames)) + d.Set("target_group_arns", flattenStringList(g.TargetGroupARNs)) + if err := d.Set("suspended_processes", flattenAsgSuspendedProcesses(g.SuspendedProcesses)); err != nil { log.Printf("[WARN] Error setting suspended_processes for %q: %s", d.Id(), err) } diff --git a/builtin/providers/aws/resource_aws_autoscaling_group_test.go b/builtin/providers/aws/resource_aws_autoscaling_group_test.go index 6400dcb731..1c310433f2 100644 --- a/builtin/providers/aws/resource_aws_autoscaling_group_test.go +++ b/builtin/providers/aws/resource_aws_autoscaling_group_test.go @@ -84,6 +84,27 @@ func TestAccAWSAutoScalingGroup_basic(t *testing.T) { }) } +func TestAccAWSAutoScalingGroup_namePrefix(t *testing.T) { + nameRegexp := regexp.MustCompile("^test-") + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAutoScalingGroupDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAWSAutoScalingGroupConfig_namePrefix, + Check: resource.ComposeTestCheckFunc( + resource.TestMatchResourceAttr( + "aws_autoscaling_group.test", "name", nameRegexp), + resource.TestCheckResourceAttrSet( + "aws_autoscaling_group.test", "arn"), + ), + }, + }, + }) +} + func TestAccAWSAutoScalingGroup_autoGeneratedName(t *testing.T) { asgNameRegexp := regexp.MustCompile("^tf-asg-") @@ -472,13 +493,15 @@ func TestAccAWSAutoScalingGroup_ALB_TargetGroups_ELBCapacity(t *testing.T) { var group autoscaling.Group var tg elbv2.TargetGroup + rInt := acctest.RandInt() + resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAutoScalingGroupDestroy, Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccAWSAutoScalingGroupConfig_ALB_TargetGroup_ELBCapacity, + 
Config: testAccAWSAutoScalingGroupConfig_ALB_TargetGroup_ELBCapacity(rInt), Check: resource.ComposeAggregateTestCheckFunc( testAccCheckAWSAutoScalingGroupExists("aws_autoscaling_group.bar", &group), testAccCheckAWSALBTargetGroupExists("aws_alb_target_group.test", &tg), @@ -746,6 +769,22 @@ resource "aws_autoscaling_group" "bar" { } ` +const testAccAWSAutoScalingGroupConfig_namePrefix = ` +resource "aws_launch_configuration" "test" { + image_id = "ami-21f78e11" + instance_type = "t1.micro" +} + +resource "aws_autoscaling_group" "test" { + availability_zones = ["us-west-2a"] + desired_capacity = 0 + max_size = 0 + min_size = 0 + name_prefix = "test-" + launch_configuration = "${aws_launch_configuration.test.name}" +} +` + const testAccAWSAutoScalingGroupConfig_terminationPoliciesEmpty = ` resource "aws_launch_configuration" "foobar" { image_id = "ami-21f78e11" @@ -1386,7 +1425,8 @@ resource "aws_autoscaling_group" "bar" { `, name) } -const testAccAWSAutoScalingGroupConfig_ALB_TargetGroup_ELBCapacity = ` +func testAccAWSAutoScalingGroupConfig_ALB_TargetGroup_ELBCapacity(rInt int) string { + return fmt.Sprintf(` provider "aws" { region = "us-west-2" } @@ -1420,7 +1460,7 @@ resource "aws_alb_listener" "test_listener" { } resource "aws_alb_target_group" "test" { - name = "tf-example-alb-tg" + name = "tf-alb-test-%d" port = 80 protocol = "HTTP" vpc_id = "${aws_vpc.default.id}" @@ -1431,6 +1471,10 @@ resource "aws_alb_target_group" "test" { timeout = "2" interval = "5" } + + tags { + Name = "testAccAWSAutoScalingGroupConfig_ALB_TargetGroup_ELBCapacity" + } } resource "aws_subnet" "main" { @@ -1522,8 +1566,8 @@ resource "aws_autoscaling_group" "bar" { force_delete = true termination_policies = ["OldestInstance"] launch_configuration = "${aws_launch_configuration.foobar.name}" +}`, rInt) } -` func testAccAWSAutoScalingGroupConfigWithSuspendedProcesses(name string) string { return fmt.Sprintf(` diff --git a/builtin/providers/aws/resource_aws_codebuild_project.go b/builtin/providers/aws/resource_aws_codebuild_project.go index 3a198366fc..bbd3523a30 100644 --- a/builtin/providers/aws/resource_aws_codebuild_project.go +++ b/builtin/providers/aws/resource_aws_codebuild_project.go @@ -592,11 +592,11 @@ func resourceAwsCodeBuildProjectSourceAuthHash(v interface{}) int { var buf bytes.Buffer m := v.(map[string]interface{}) - authType := m["type"].(string) - authResource := m["resource"].(string) + buf.WriteString(fmt.Sprintf("%s-", m["type"].(string))) - buf.WriteString(fmt.Sprintf("%s-", authType)) - buf.WriteString(fmt.Sprintf("%s-", authResource)) + if m["resource"] != nil { + buf.WriteString(fmt.Sprintf("%s-", m["resource"].(string))) + } return hashcode.String(buf.String()) } diff --git a/builtin/providers/aws/resource_aws_db_instance.go b/builtin/providers/aws/resource_aws_db_instance.go index 0d1d603624..e944489ab9 100644 --- a/builtin/providers/aws/resource_aws_db_instance.go +++ b/builtin/providers/aws/resource_aws_db_instance.go @@ -839,6 +839,10 @@ func resourceAwsDbInstanceUpdate(d *schema.ResourceData, meta interface{}) error } d.SetPartial("apply_immediately") + if !d.Get("apply_immediately").(bool) { + log.Println("[INFO] Only settings updating, instance changes will be applied in next maintenance window") + } + requestUpdate := false if d.HasChange("allocated_storage") || d.HasChange("iops") { d.SetPartial("allocated_storage") diff --git a/builtin/providers/aws/resource_aws_db_instance_test.go b/builtin/providers/aws/resource_aws_db_instance_test.go index 779e309205..56f8905327 100644 --- 
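// Editor's sketch of the nil-safety fix in
// resourceAwsCodeBuildProjectSourceAuthHash above: the optional "resource"
// key may be absent from the map, so it is now written to the hash buffer
// only when present instead of being type-asserted unconditionally.
// hashString is a stand-in for helper/hashcode.String.
package main

import (
	"bytes"
	"fmt"
	"hash/crc32"
)

func hashString(s string) int { return int(crc32.ChecksumIEEE([]byte(s))) }

func sourceAuthHash(m map[string]interface{}) int {
	var buf bytes.Buffer
	buf.WriteString(fmt.Sprintf("%s-", m["type"].(string)))
	if m["resource"] != nil { // previously an unconditional .(string), a panic risk
		buf.WriteString(fmt.Sprintf("%s-", m["resource"].(string)))
	}
	return hashString(buf.String())
}

func main() {
	fmt.Println(sourceAuthHash(map[string]interface{}{"type": "OAUTH"}))
	fmt.Println(sourceAuthHash(map[string]interface{}{"type": "OAUTH", "resource": "tok"}))
}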
a/builtin/providers/aws/resource_aws_db_instance_test.go +++ b/builtin/providers/aws/resource_aws_db_instance_test.go @@ -622,6 +622,10 @@ resource "aws_db_instance" "bar" { backup_retention_period = 0 parameter_group_name = "default.mysql5.6" + + timeouts { + create = "30m" + } }` var testAccAWSDBInstanceConfigKmsKeyId = ` diff --git a/builtin/providers/aws/resource_aws_default_route_table.go b/builtin/providers/aws/resource_aws_default_route_table.go index 78296cb1a8..987dd4a7df 100644 --- a/builtin/providers/aws/resource_aws_default_route_table.go +++ b/builtin/providers/aws/resource_aws_default_route_table.go @@ -17,56 +17,66 @@ func resourceAwsDefaultRouteTable() *schema.Resource { Delete: resourceAwsDefaultRouteTableDelete, Schema: map[string]*schema.Schema{ - "default_route_table_id": &schema.Schema{ + "default_route_table_id": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "vpc_id": &schema.Schema{ + "vpc_id": { Type: schema.TypeString, Computed: true, }, - "propagating_vgws": &schema.Schema{ + "propagating_vgws": { Type: schema.TypeSet, Optional: true, Elem: &schema.Schema{Type: schema.TypeString}, Set: schema.HashString, }, - "route": &schema.Schema{ + "route": { Type: schema.TypeSet, Computed: true, Optional: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "cidr_block": &schema.Schema{ - Type: schema.TypeString, - Required: true, - }, - - "gateway_id": &schema.Schema{ + "cidr_block": { Type: schema.TypeString, Optional: true, }, - "instance_id": &schema.Schema{ + "ipv6_cidr_block": { Type: schema.TypeString, Optional: true, }, - "nat_gateway_id": &schema.Schema{ + "egress_only_gateway_id": { Type: schema.TypeString, Optional: true, }, - "vpc_peering_connection_id": &schema.Schema{ + "gateway_id": { Type: schema.TypeString, Optional: true, }, - "network_interface_id": &schema.Schema{ + "instance_id": { + Type: schema.TypeString, + Optional: true, + }, + + "nat_gateway_id": { + Type: schema.TypeString, + Optional: true, + }, + + "vpc_peering_connection_id": { + Type: schema.TypeString, + Optional: true, + }, + + "network_interface_id": { Type: schema.TypeString, Optional: true, }, @@ -193,16 +203,33 @@ func revokeAllRouteTableRules(defaultRouteTableId string, meta interface{}) erro // See aws_vpc_endpoint continue } - log.Printf( - "[INFO] Deleting route from %s: %s", - defaultRouteTableId, *r.DestinationCidrBlock) - _, err := conn.DeleteRoute(&ec2.DeleteRouteInput{ - RouteTableId: aws.String(defaultRouteTableId), - DestinationCidrBlock: r.DestinationCidrBlock, - }) - if err != nil { - return err + + if r.DestinationCidrBlock != nil { + log.Printf( + "[INFO] Deleting route from %s: %s", + defaultRouteTableId, *r.DestinationCidrBlock) + _, err := conn.DeleteRoute(&ec2.DeleteRouteInput{ + RouteTableId: aws.String(defaultRouteTableId), + DestinationCidrBlock: r.DestinationCidrBlock, + }) + if err != nil { + return err + } } + + if r.DestinationIpv6CidrBlock != nil { + log.Printf( + "[INFO] Deleting route from %s: %s", + defaultRouteTableId, *r.DestinationIpv6CidrBlock) + _, err := conn.DeleteRoute(&ec2.DeleteRouteInput{ + RouteTableId: aws.String(defaultRouteTableId), + DestinationIpv6CidrBlock: r.DestinationIpv6CidrBlock, + }) + if err != nil { + return err + } + } + } return nil diff --git a/builtin/providers/aws/resource_aws_default_route_table_test.go b/builtin/providers/aws/resource_aws_default_route_table_test.go index c3feabf9f9..dd67db0ff6 100644 --- a/builtin/providers/aws/resource_aws_default_route_table_test.go +++ 
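// Editor's sketch of the revokeAllRouteTableRules change above: a route's
// destination is either an IPv4 or an IPv6 CIDR, so deletion must branch on
// whichever pointer is non-nil instead of dereferencing DestinationCidrBlock
// blindly. The route type and deleteRoute are simplified stand-ins for the
// ec2 SDK structures.
package main

import "fmt"

type route struct {
	DestinationCidrBlock     *string
	DestinationIpv6CidrBlock *string
}

func deleteRoute(tableID string, r route) error {
	switch {
	case r.DestinationCidrBlock != nil:
		fmt.Printf("[INFO] Deleting IPv4 route from %s: %s\n", tableID, *r.DestinationCidrBlock)
	case r.DestinationIpv6CidrBlock != nil:
		fmt.Printf("[INFO] Deleting IPv6 route from %s: %s\n", tableID, *r.DestinationIpv6CidrBlock)
	default:
		return fmt.Errorf("route in %s has no destination CIDR", tableID)
	}
	return nil
}

func main() {
	v6 := "::/0"
	_ = deleteRoute("rtb-123456", route{DestinationIpv6CidrBlock: &v6})
}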
b/builtin/providers/aws/resource_aws_default_route_table_test.go @@ -20,7 +20,7 @@ func TestAccAWSDefaultRouteTable_basic(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckDefaultRouteTableDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccDefaultRouteTableConfig, Check: resource.ComposeTestCheckFunc( testAccCheckRouteTableExists( @@ -40,7 +40,7 @@ func TestAccAWSDefaultRouteTable_swap(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckDefaultRouteTableDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccDefaultRouteTable_change, Check: resource.ComposeTestCheckFunc( testAccCheckRouteTableExists( @@ -53,7 +53,7 @@ func TestAccAWSDefaultRouteTable_swap(t *testing.T) { // behavior that may happen, in which case a follow up plan will show (in // this case) a diff as the table now needs to be updated to match the // config - resource.TestStep{ + { Config: testAccDefaultRouteTable_change_mod, Check: resource.ComposeTestCheckFunc( testAccCheckRouteTableExists( @@ -74,7 +74,7 @@ func TestAccAWSDefaultRouteTable_vpc_endpoint(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckDefaultRouteTableDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccDefaultRouteTable_vpc_endpoint, Check: resource.ComposeTestCheckFunc( testAccCheckRouteTableExists( diff --git a/builtin/providers/aws/resource_aws_ecs_service.go b/builtin/providers/aws/resource_aws_ecs_service.go index b539852dcb..f763b08a90 100644 --- a/builtin/providers/aws/resource_aws_ecs_service.go +++ b/builtin/providers/aws/resource_aws_ecs_service.go @@ -357,7 +357,13 @@ func flattenPlacementStrategy(pss []*ecs.PlacementStrategy) []map[string]interfa for _, ps := range pss { c := make(map[string]interface{}) c["type"] = *ps.Type - c["field"] = strings.ToLower(*ps.Field) + c["field"] = *ps.Field + + // for some fields the API requires lowercase for creation but will return uppercase on query + if *ps.Field == "MEMORY" || *ps.Field == "CPU" { + c["field"] = strings.ToLower(*ps.Field) + } + results = append(results, c) } return results @@ -467,7 +473,7 @@ func resourceAwsEcsServiceDelete(d *schema.ResourceData, meta interface{}) error // Wait until it's deleted wait := resource.StateChangeConf{ - Pending: []string{"DRAINING"}, + Pending: []string{"ACTIVE", "DRAINING"}, Target: []string{"INACTIVE"}, Timeout: 10 * time.Minute, MinTimeout: 1 * time.Second, diff --git a/builtin/providers/aws/resource_aws_elb.go b/builtin/providers/aws/resource_aws_elb.go index 91d523d3f3..2379ea49a1 100644 --- a/builtin/providers/aws/resource_aws_elb.go +++ b/builtin/providers/aws/resource_aws_elb.go @@ -30,11 +30,18 @@ func resourceAwsElb() *schema.Resource { Schema: map[string]*schema.Schema{ "name": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + ConflictsWith: []string{"name_prefix"}, + ValidateFunc: validateElbName, + }, + "name_prefix": &schema.Schema{ Type: schema.TypeString, Optional: true, - Computed: true, ForceNew: true, - ValidateFunc: validateElbName, + ValidateFunc: validateElbNamePrefix, }, "internal": &schema.Schema{ @@ -247,7 +254,11 @@ func resourceAwsElbCreate(d *schema.ResourceData, meta interface{}) error { if v, ok := d.GetOk("name"); ok { elbName = v.(string) } else { - elbName = resource.PrefixedUniqueId("tf-lb-") + if v, ok := d.GetOk("name_prefix"); ok { + elbName = resource.PrefixedUniqueId(v.(string)) + } else { + elbName = resource.PrefixedUniqueId("tf-lb-") + 
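// Editor's sketch of the ECS placement-strategy fix above: the API accepts
// lowercase "memory"/"cpu" on create but echoes them back uppercase, which
// produced a permanent diff. Normalizing just those two values on read keeps
// state stable while leaving other field values untouched.
package main

import (
	"fmt"
	"strings"
)

func normalizeField(apiField string) string {
	// Only MEMORY and CPU are case-folded; other fields come back as sent.
	if apiField == "MEMORY" || apiField == "CPU" {
		return strings.ToLower(apiField)
	}
	return apiField
}

func main() {
	fmt.Println(normalizeField("MEMORY"))     // memory
	fmt.Println(normalizeField("instanceId")) // instanceId
}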
} d.Set("name", elbName) } @@ -388,7 +399,9 @@ func resourceAwsElbRead(d *schema.ResourceData, meta interface{}) error { } } d.Set("subnets", flattenStringList(lb.Subnets)) - d.Set("idle_timeout", lbAttrs.ConnectionSettings.IdleTimeout) + if lbAttrs.ConnectionSettings != nil { + d.Set("idle_timeout", lbAttrs.ConnectionSettings.IdleTimeout) + } d.Set("connection_draining", lbAttrs.ConnectionDraining.Enabled) d.Set("connection_draining_timeout", lbAttrs.ConnectionDraining.Timeout) d.Set("cross_zone_load_balancing", lbAttrs.CrossZoneLoadBalancing.Enabled) diff --git a/builtin/providers/aws/resource_aws_elb_test.go b/builtin/providers/aws/resource_aws_elb_test.go index 763bd6a2ca..bc6856f355 100644 --- a/builtin/providers/aws/resource_aws_elb_test.go +++ b/builtin/providers/aws/resource_aws_elb_test.go @@ -26,7 +26,7 @@ func TestAccAWSELB_basic(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckAWSELBDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSELBConfig, Check: resource.ComposeTestCheckFunc( testAccCheckAWSELBExists("aws_elb.bar", &conf), @@ -70,7 +70,7 @@ func TestAccAWSELB_fullCharacterRange(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckAWSELBDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: fmt.Sprintf(testAccAWSELBFullRangeOfCharacters, lbName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSELBExists("aws_elb.foo", &conf), @@ -93,14 +93,14 @@ func TestAccAWSELB_AccessLogs_enabled(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckAWSELBDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSELBAccessLogs, Check: resource.ComposeTestCheckFunc( testAccCheckAWSELBExists("aws_elb.foo", &conf), ), }, - resource.TestStep{ + { Config: testAccAWSELBAccessLogsOn(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSELBExists("aws_elb.foo", &conf), @@ -115,7 +115,7 @@ func TestAccAWSELB_AccessLogs_enabled(t *testing.T) { ), }, - resource.TestStep{ + { Config: testAccAWSELBAccessLogs, Check: resource.ComposeTestCheckFunc( testAccCheckAWSELBExists("aws_elb.foo", &conf), @@ -138,14 +138,14 @@ func TestAccAWSELB_AccessLogs_disabled(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckAWSELBDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSELBAccessLogs, Check: resource.ComposeTestCheckFunc( testAccCheckAWSELBExists("aws_elb.foo", &conf), ), }, - resource.TestStep{ + { Config: testAccAWSELBAccessLogsDisabled(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSELBExists("aws_elb.foo", &conf), @@ -160,7 +160,7 @@ func TestAccAWSELB_AccessLogs_disabled(t *testing.T) { ), }, - resource.TestStep{ + { Config: testAccAWSELBAccessLogs, Check: resource.ComposeTestCheckFunc( testAccCheckAWSELBExists("aws_elb.foo", &conf), @@ -172,6 +172,28 @@ func TestAccAWSELB_AccessLogs_disabled(t *testing.T) { }) } +func TestAccAWSELB_namePrefix(t *testing.T) { + var conf elb.LoadBalancerDescription + nameRegex := regexp.MustCompile("^test-") + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + IDRefreshName: "aws_elb.test", + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSELBDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAWSELB_namePrefix, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSELBExists("aws_elb.test", &conf), + resource.TestMatchResourceAttr( + "aws_elb.test", "name", nameRegex), + ), + }, + }, + }) +} + func 
TestAccAWSELB_generatedName(t *testing.T) { var conf elb.LoadBalancerDescription generatedNameRegexp := regexp.MustCompile("^tf-lb-") @@ -182,7 +204,7 @@ func TestAccAWSELB_generatedName(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckAWSELBDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSELBGeneratedName, Check: resource.ComposeTestCheckFunc( testAccCheckAWSELBExists("aws_elb.foo", &conf), @@ -203,7 +225,7 @@ func TestAccAWSELB_availabilityZones(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckAWSELBDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSELBConfig, Check: resource.ComposeTestCheckFunc( testAccCheckAWSELBExists("aws_elb.bar", &conf), @@ -218,7 +240,7 @@ func TestAccAWSELB_availabilityZones(t *testing.T) { ), }, - resource.TestStep{ + { Config: testAccAWSELBConfig_AvailabilityZonesUpdate, Check: resource.ComposeTestCheckFunc( testAccCheckAWSELBExists("aws_elb.bar", &conf), @@ -244,7 +266,7 @@ func TestAccAWSELB_tags(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckAWSELBDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSELBConfig, Check: resource.ComposeTestCheckFunc( testAccCheckAWSELBExists("aws_elb.bar", &conf), @@ -254,7 +276,7 @@ func TestAccAWSELB_tags(t *testing.T) { ), }, - resource.TestStep{ + { Config: testAccAWSELBConfig_TagUpdate, Check: resource.ComposeTestCheckFunc( testAccCheckAWSELBExists("aws_elb.bar", &conf), @@ -285,7 +307,7 @@ func TestAccAWSELB_iam_server_cert(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckAWSELBDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccELBIAMServerCertConfig( fmt.Sprintf("tf-acctest-%s", acctest.RandString(10))), Check: resource.ComposeTestCheckFunc( @@ -306,7 +328,7 @@ func TestAccAWSELB_swap_subnets(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckAWSELBDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSELBConfig_subnets, Check: resource.ComposeTestCheckFunc( testAccCheckAWSELBExists("aws_elb.ourapp", &conf), @@ -315,7 +337,7 @@ func TestAccAWSELB_swap_subnets(t *testing.T) { ), }, - resource.TestStep{ + { Config: testAccAWSELBConfig_subnet_swap, Check: resource.ComposeTestCheckFunc( testAccCheckAWSELBExists("aws_elb.ourapp", &conf), @@ -363,7 +385,7 @@ func TestAccAWSELB_InstanceAttaching(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckAWSELBDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSELBConfig, Check: resource.ComposeTestCheckFunc( testAccCheckAWSELBExists("aws_elb.bar", &conf), @@ -371,7 +393,7 @@ func TestAccAWSELB_InstanceAttaching(t *testing.T) { ), }, - resource.TestStep{ + { Config: testAccAWSELBConfigNewInstance, Check: resource.ComposeTestCheckFunc( testAccCheckAWSELBExists("aws_elb.bar", &conf), @@ -391,7 +413,7 @@ func TestAccAWSELBUpdate_Listener(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckAWSELBDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSELBConfig, Check: resource.ComposeTestCheckFunc( testAccCheckAWSELBExists("aws_elb.bar", &conf), @@ -401,7 +423,7 @@ func TestAccAWSELBUpdate_Listener(t *testing.T) { ), }, - resource.TestStep{ + { Config: testAccAWSELBConfigListener_update, Check: resource.ComposeTestCheckFunc( testAccCheckAWSELBExists("aws_elb.bar", &conf), @@ -422,7 +444,7 @@ func TestAccAWSELB_HealthCheck(t *testing.T) { Providers: 
testAccProviders, CheckDestroy: testAccCheckAWSELBDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSELBConfigHealthCheck, Check: resource.ComposeTestCheckFunc( testAccCheckAWSELBExists("aws_elb.bar", &conf), @@ -450,14 +472,14 @@ func TestAccAWSELBUpdate_HealthCheck(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckAWSELBDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSELBConfigHealthCheck, Check: resource.ComposeTestCheckFunc( resource.TestCheckResourceAttr( "aws_elb.bar", "health_check.0.healthy_threshold", "5"), ), }, - resource.TestStep{ + { Config: testAccAWSELBConfigHealthCheck_update, Check: resource.ComposeTestCheckFunc( resource.TestCheckResourceAttr( @@ -477,7 +499,7 @@ func TestAccAWSELB_Timeout(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckAWSELBDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSELBConfigIdleTimeout, Check: resource.ComposeTestCheckFunc( testAccCheckAWSELBExists("aws_elb.bar", &conf), @@ -497,7 +519,7 @@ func TestAccAWSELBUpdate_Timeout(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckAWSELBDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSELBConfigIdleTimeout, Check: resource.ComposeTestCheckFunc( resource.TestCheckResourceAttr( @@ -505,7 +527,7 @@ func TestAccAWSELBUpdate_Timeout(t *testing.T) { ), ), }, - resource.TestStep{ + { Config: testAccAWSELBConfigIdleTimeout_update, Check: resource.ComposeTestCheckFunc( resource.TestCheckResourceAttr( @@ -524,7 +546,7 @@ func TestAccAWSELB_ConnectionDraining(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckAWSELBDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSELBConfigConnectionDraining, Check: resource.ComposeTestCheckFunc( resource.TestCheckResourceAttr( @@ -546,7 +568,7 @@ func TestAccAWSELBUpdate_ConnectionDraining(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckAWSELBDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSELBConfigConnectionDraining, Check: resource.ComposeTestCheckFunc( resource.TestCheckResourceAttr( @@ -557,7 +579,7 @@ func TestAccAWSELBUpdate_ConnectionDraining(t *testing.T) { ), ), }, - resource.TestStep{ + { Config: testAccAWSELBConfigConnectionDraining_update_timeout, Check: resource.ComposeTestCheckFunc( resource.TestCheckResourceAttr( @@ -568,7 +590,7 @@ func TestAccAWSELBUpdate_ConnectionDraining(t *testing.T) { ), ), }, - resource.TestStep{ + { Config: testAccAWSELBConfigConnectionDraining_update_disable, Check: resource.ComposeTestCheckFunc( resource.TestCheckResourceAttr( @@ -587,7 +609,7 @@ func TestAccAWSELB_SecurityGroups(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckAWSELBDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSELBConfig, Check: resource.ComposeTestCheckFunc( // ELBs get a default security group @@ -596,7 +618,7 @@ func TestAccAWSELB_SecurityGroups(t *testing.T) { ), ), }, - resource.TestStep{ + { Config: testAccAWSELBConfigSecurityGroups, Check: resource.ComposeTestCheckFunc( // Count should still be one as we swap in a custom security group @@ -1138,6 +1160,20 @@ resource "aws_elb" "foo" { `, r, r) } +const testAccAWSELB_namePrefix = ` +resource "aws_elb" "test" { + name_prefix = "test-" + availability_zones = ["us-west-2a", "us-west-2b", "us-west-2c"] + + listener { + instance_port = 8000 + instance_protocol = "http" + lb_port = 
80 + lb_protocol = "http" + } +} +` + const testAccAWSELBGeneratedName = ` resource "aws_elb" "foo" { availability_zones = ["us-west-2a", "us-west-2b", "us-west-2c"] diff --git a/builtin/providers/aws/resource_aws_emr_cluster.go b/builtin/providers/aws/resource_aws_emr_cluster.go index 3017e10877..9217d0ed73 100644 --- a/builtin/providers/aws/resource_aws_emr_cluster.go +++ b/builtin/providers/aws/resource_aws_emr_cluster.go @@ -157,6 +157,11 @@ func resourceAwsEMRCluster() *schema.Resource { ForceNew: true, Required: true, }, + "autoscaling_role": &schema.Schema{ + Type: schema.TypeString, + ForceNew: true, + Optional: true, + }, "visible_to_all_users": { Type: schema.TypeBool, Optional: true, @@ -259,6 +264,9 @@ func resourceAwsEMRClusterCreate(d *schema.ResourceData, meta interface{}) error if v, ok := d.GetOk("log_uri"); ok { params.LogUri = aws.String(v.(string)) } + if v, ok := d.GetOk("autoscaling_role"); ok { + params.AutoScalingRole = aws.String(v.(string)) + } if instanceProfile != "" { params.JobFlowRole = aws.String(instanceProfile) @@ -353,6 +361,7 @@ func resourceAwsEMRClusterRead(d *schema.ResourceData, meta interface{}) error { d.Set("name", cluster.Name) d.Set("service_role", cluster.ServiceRole) + d.Set("autoscaling_role", cluster.AutoScalingRole) d.Set("release_label", cluster.ReleaseLabel) d.Set("log_uri", cluster.LogUri) d.Set("master_public_dns", cluster.MasterPublicDnsName) diff --git a/builtin/providers/aws/resource_aws_emr_cluster_test.go b/builtin/providers/aws/resource_aws_emr_cluster_test.go index ad8b16a1fb..a0bac7fcf6 100644 --- a/builtin/providers/aws/resource_aws_emr_cluster_test.go +++ b/builtin/providers/aws/resource_aws_emr_cluster_test.go @@ -237,6 +237,7 @@ resource "aws_emr_cluster" "tf-test-cluster" { depends_on = ["aws_main_route_table_association.a"] service_role = "${aws_iam_role.iam_emr_default_role.arn}" + autoscaling_role = "${aws_iam_role.emr-autoscaling-role.arn}" } resource "aws_security_group" "allow_all" { @@ -474,6 +475,29 @@ resource "aws_iam_policy" "iam_emr_profile_policy" { } EOT } + +# IAM Role for autoscaling +resource "aws_iam_role" "emr-autoscaling-role" { + name = "EMR_AutoScaling_DefaultRole" + assume_role_policy = "${data.aws_iam_policy_document.emr-autoscaling-role-policy.json}" +} + +data "aws_iam_policy_document" "emr-autoscaling-role-policy" { + statement { + effect = "Allow" + actions = ["sts:AssumeRole"] + + principals = { + type = "Service" + identifiers = ["elasticmapreduce.amazonaws.com","application-autoscaling.amazonaws.com"] + } + } +} + +resource "aws_iam_role_policy_attachment" "emr-autoscaling-role" { + role = "${aws_iam_role.emr-autoscaling-role.name}" + policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonElasticMapReduceforAutoScalingRole" +} `, r, r, r, r, r, r) } @@ -520,6 +544,7 @@ resource "aws_emr_cluster" "tf-test-cluster" { depends_on = ["aws_main_route_table_association.a"] service_role = "${aws_iam_role.iam_emr_default_role.arn}" + autoscaling_role = "${aws_iam_role.emr-autoscaling-role.arn}" } resource "aws_security_group" "allow_all" { @@ -757,6 +782,29 @@ resource "aws_iam_policy" "iam_emr_profile_policy" { } EOT } + +# IAM Role for autoscaling +resource "aws_iam_role" "emr-autoscaling-role" { + name = "EMR_AutoScaling_DefaultRole" + assume_role_policy = "${data.aws_iam_policy_document.emr-autoscaling-role-policy.json}" +} + +data "aws_iam_policy_document" "emr-autoscaling-role-policy" { + statement { + effect = "Allow" + actions = ["sts:AssumeRole"] + + principals = { + type = "Service" + 
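// Editor's sketch of the autoscaling_role wiring pattern used above in
// resourceAwsEMRClusterCreate: optional schema fields are copied into the API
// request only when set, via GetOk. resourceData and runJobFlowInput are
// hypothetical stand-ins for schema.ResourceData and the emr SDK input type.
package main

import "fmt"

type resourceData map[string]string

func (d resourceData) GetOk(k string) (string, bool) {
	v, ok := d[k]
	return v, ok && v != ""
}

type runJobFlowInput struct{ AutoScalingRole *string }

func buildInput(d resourceData) *runJobFlowInput {
	params := &runJobFlowInput{}
	if v, ok := d.GetOk("autoscaling_role"); ok {
		params.AutoScalingRole = &v // field omitted entirely when unset
	}
	return params
}

func main() {
	in := buildInput(resourceData{"autoscaling_role": "arn:aws:iam::123456789012:role/emr-autoscaling"})
	fmt.Println(*in.AutoScalingRole)
}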
identifiers = ["elasticmapreduce.amazonaws.com","application-autoscaling.amazonaws.com"] + } + } +} + +resource "aws_iam_role_policy_attachment" "emr-autoscaling-role" { + role = "${aws_iam_role.emr-autoscaling-role.name}" + policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonElasticMapReduceforAutoScalingRole" +} `, r, r, r, r, r, r) } @@ -803,6 +851,7 @@ resource "aws_emr_cluster" "tf-test-cluster" { depends_on = ["aws_main_route_table_association.a"] service_role = "${aws_iam_role.iam_emr_default_role.arn}" + autoscaling_role = "${aws_iam_role.emr-autoscaling-role.arn}" } resource "aws_security_group" "allow_all" { @@ -1040,6 +1089,29 @@ resource "aws_iam_policy" "iam_emr_profile_policy" { } EOT } + +# IAM Role for autoscaling +resource "aws_iam_role" "emr-autoscaling-role" { + name = "EMR_AutoScaling_DefaultRole" + assume_role_policy = "${data.aws_iam_policy_document.emr-autoscaling-role-policy.json}" +} + +data "aws_iam_policy_document" "emr-autoscaling-role-policy" { + statement { + effect = "Allow" + actions = ["sts:AssumeRole"] + + principals = { + type = "Service" + identifiers = ["elasticmapreduce.amazonaws.com","application-autoscaling.amazonaws.com"] + } + } +} + +resource "aws_iam_role_policy_attachment" "emr-autoscaling-role" { + role = "${aws_iam_role.emr-autoscaling-role.name}" + policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonElasticMapReduceforAutoScalingRole" +} `, r, r, r, r, r, r) } @@ -1085,6 +1157,7 @@ resource "aws_emr_cluster" "tf-test-cluster" { depends_on = ["aws_main_route_table_association.a"] service_role = "${aws_iam_role.iam_emr_default_role.arn}" + autoscaling_role = "${aws_iam_role.emr-autoscaling-role.arn}" } resource "aws_security_group" "allow_all" { @@ -1322,5 +1395,28 @@ resource "aws_iam_policy" "iam_emr_profile_policy" { } EOT } + +# IAM Role for autoscaling +resource "aws_iam_role" "emr-autoscaling-role" { + name = "EMR_AutoScaling_DefaultRole" + assume_role_policy = "${data.aws_iam_policy_document.emr-autoscaling-role-policy.json}" +} + +data "aws_iam_policy_document" "emr-autoscaling-role-policy" { + statement { + effect = "Allow" + actions = ["sts:AssumeRole"] + + principals = { + type = "Service" + identifiers = ["elasticmapreduce.amazonaws.com","application-autoscaling.amazonaws.com"] + } + } +} + +resource "aws_iam_role_policy_attachment" "emr-autoscaling-role" { + role = "${aws_iam_role.emr-autoscaling-role.name}" + policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonElasticMapReduceforAutoScalingRole" +} `, r, r, r, r, r, r) } diff --git a/builtin/providers/aws/resource_aws_iam_account_alias.go b/builtin/providers/aws/resource_aws_iam_account_alias.go new file mode 100644 index 0000000000..3b1b86f1ef --- /dev/null +++ b/builtin/providers/aws/resource_aws_iam_account_alias.go @@ -0,0 +1,94 @@ +package aws + +import ( + "fmt" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/iam" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsIamAccountAlias() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsIamAccountAliasCreate, + Read: resourceAwsIamAccountAliasRead, + Delete: resourceAwsIamAccountAliasDelete, + + Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, + + Schema: map[string]*schema.Schema{ + "account_alias": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validateAccountAlias, + }, + }, + } +} + +func resourceAwsIamAccountAliasCreate(d *schema.ResourceData, meta interface{}) error { + conn := 
meta.(*AWSClient).iamconn + + account_alias := d.Get("account_alias").(string) + + params := &iam.CreateAccountAliasInput{ + AccountAlias: aws.String(account_alias), + } + + _, err := conn.CreateAccountAlias(params) + + if err != nil { + return fmt.Errorf("Error creating account alias with name %s", account_alias) + } + + d.SetId(account_alias) + + return nil +} + +func resourceAwsIamAccountAliasRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).iamconn + + params := &iam.ListAccountAliasesInput{} + + resp, err := conn.ListAccountAliases(params) + + if err != nil { + return err + } + + if resp == nil || len(resp.AccountAliases) == 0 { + d.SetId("") + return nil + } + + account_alias := aws.StringValue(resp.AccountAliases[0]) + + d.SetId(account_alias) + d.Set("account_alias", account_alias) + + return nil +} + +func resourceAwsIamAccountAliasDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).iamconn + + account_alias := d.Get("account_alias").(string) + + params := &iam.DeleteAccountAliasInput{ + AccountAlias: aws.String(account_alias), + } + + _, err := conn.DeleteAccountAlias(params) + + if err != nil { + return fmt.Errorf("Error deleting account alias with name %s", account_alias) + } + + d.SetId("") + + return nil +} diff --git a/builtin/providers/aws/resource_aws_iam_account_alias_test.go b/builtin/providers/aws/resource_aws_iam_account_alias_test.go new file mode 100644 index 0000000000..7106566a29 --- /dev/null +++ b/builtin/providers/aws/resource_aws_iam_account_alias_test.go @@ -0,0 +1,91 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/iam" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSIAMAccountAlias_basic(t *testing.T) { + var account_alias string + + rstring := acctest.RandString(5) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSIAMAccountAliasDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSIAMAccountAliasConfig(rstring), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSIAMAccountAliasExists("aws_iam_account_alias.test", &account_alias), + ), + }, + }, + }) +} + +func testAccCheckAWSIAMAccountAliasDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).iamconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_iam_account_alias" { + continue + } + + params := &iam.ListAccountAliasesInput{} + + resp, err := conn.ListAccountAliases(params) + + if err != nil || resp == nil { + return nil + } + + if len(resp.AccountAliases) > 0 { + return fmt.Errorf("Bad: Account alias still exists: %q", rs.Primary.ID) + } + } + + return nil + +} + +func testAccCheckAWSIAMAccountAliasExists(n string, a *string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + conn := testAccProvider.Meta().(*AWSClient).iamconn + params := &iam.ListAccountAliasesInput{} + + resp, err := conn.ListAccountAliases(params) + + if err != nil || resp == nil { + return nil + } + + if len(resp.AccountAliases) == 0 { + return fmt.Errorf("Bad: Account alias %q does not exist", rs.Primary.ID) + } + + *a = aws.StringValue(resp.AccountAliases[0]) + + return nil + } +} + +func 
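// Editor's sketch of the read path in the new aws_iam_account_alias resource
// above: an account has at most one alias, so Read lists aliases, clears the
// resource ID when none exist (signalling out-of-band deletion to Terraform),
// and otherwise adopts the first entry as both ID and attribute. state is a
// hypothetical stand-in for schema.ResourceData.
package main

import "fmt"

type state struct{ id, alias string }

func readAccountAlias(s *state, aliases []string) {
	if len(aliases) == 0 {
		s.id = "" // an empty ID tells Terraform the resource is gone
		return
	}
	s.id = aliases[0]
	s.alias = aliases[0]
}

func main() {
	var s state
	readAccountAlias(&s, []string{"my-company"})
	fmt.Println(s.id) // my-company
	readAccountAlias(&s, nil)
	fmt.Println(s.id == "") // true
}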
testAccAWSIAMAccountAliasConfig(rstring string) string { + return fmt.Sprintf(` +resource "aws_iam_account_alias" "test" { + account_alias = "terraform-%s-alias" +} +`, rstring) +} diff --git a/builtin/providers/aws/resource_aws_iam_group_policy.go b/builtin/providers/aws/resource_aws_iam_group_policy.go index 2c16fe1c60..1bdf725451 100644 --- a/builtin/providers/aws/resource_aws_iam_group_policy.go +++ b/builtin/providers/aws/resource_aws_iam_group_policy.go @@ -9,6 +9,7 @@ import ( "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/iam" + "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" ) @@ -27,8 +28,15 @@ func resourceAwsIamGroupPolicy() *schema.Resource { Required: true, }, "name": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + ConflictsWith: []string{"name_prefix"}, + }, + "name_prefix": &schema.Schema{ Type: schema.TypeString, - Required: true, + Optional: true, ForceNew: true, }, "group": &schema.Schema{ @@ -45,10 +53,19 @@ func resourceAwsIamGroupPolicyPut(d *schema.ResourceData, meta interface{}) erro request := &iam.PutGroupPolicyInput{ GroupName: aws.String(d.Get("group").(string)), - PolicyName: aws.String(d.Get("name").(string)), PolicyDocument: aws.String(d.Get("policy").(string)), } + var policyName string + if v, ok := d.GetOk("name"); ok { + policyName = v.(string) + } else if v, ok := d.GetOk("name_prefix"); ok { + policyName = resource.PrefixedUniqueId(v.(string)) + } else { + policyName = resource.UniqueId() + } + request.PolicyName = aws.String(policyName) + if _, err := iamconn.PutGroupPolicy(request); err != nil { return fmt.Errorf("Error putting IAM group policy %s: %s", *request.PolicyName, err) } diff --git a/builtin/providers/aws/resource_aws_iam_group_policy_test.go b/builtin/providers/aws/resource_aws_iam_group_policy_test.go index 8ca167b8ab..6e33cd4844 100644 --- a/builtin/providers/aws/resource_aws_iam_group_policy_test.go +++ b/builtin/providers/aws/resource_aws_iam_group_policy_test.go @@ -7,18 +7,20 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/iam" + "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" ) func TestAccAWSIAMGroupPolicy_basic(t *testing.T) { + rInt := acctest.RandInt() resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckIAMGroupPolicyDestroy, Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccIAMGroupPolicyConfig, + { + Config: testAccIAMGroupPolicyConfig(rInt), Check: resource.ComposeTestCheckFunc( testAccCheckIAMGroupPolicy( "aws_iam_group.group", @@ -26,8 +28,8 @@ func TestAccAWSIAMGroupPolicy_basic(t *testing.T) { ), ), }, - resource.TestStep{ - Config: testAccIAMGroupPolicyConfigUpdate, + { + Config: testAccIAMGroupPolicyConfigUpdate(rInt), Check: resource.ComposeTestCheckFunc( testAccCheckIAMGroupPolicy( "aws_iam_group.group", @@ -39,6 +41,48 @@ func TestAccAWSIAMGroupPolicy_basic(t *testing.T) { }) } +func TestAccAWSIAMGroupPolicy_namePrefix(t *testing.T) { + rInt := acctest.RandInt() + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + IDRefreshName: "aws_iam_group_policy.test", + Providers: testAccProviders, + CheckDestroy: testAccCheckIAMGroupPolicyDestroy, + Steps: []resource.TestStep{ + { + Config: 
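// Editor's sketch of the PutGroupPolicy naming fallback above: unlike the
// fixed "tf-asg-" default earlier in this patch, the terminal default here is
// a bare generated ID. uniqueID and prefixedUniqueID are stand-ins for
// resource.UniqueId and resource.PrefixedUniqueId, returning fixed strings
// for illustration.
package main

import "fmt"

func uniqueID() string { return "terraform-20170401abc123" }

func prefixedUniqueID(p string) string { return p + "20170401abc123" }

func policyName(name, namePrefix string) string {
	if name != "" {
		return name
	}
	if namePrefix != "" {
		return prefixedUniqueID(namePrefix)
	}
	return uniqueID()
}

func main() {
	fmt.Println(policyName("", "grp-")) // grp-20170401abc123
	fmt.Println(policyName("", ""))     // terraform-20170401abc123
}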
testAccIAMGroupPolicyConfig_namePrefix(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckIAMGroupPolicy( + "aws_iam_group.test", + "aws_iam_group_policy.test", + ), + ), + }, + }, + }) +} + +func TestAccAWSIAMGroupPolicy_generatedName(t *testing.T) { + rInt := acctest.RandInt() + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + IDRefreshName: "aws_iam_group_policy.test", + Providers: testAccProviders, + CheckDestroy: testAccCheckIAMGroupPolicyDestroy, + Steps: []resource.TestStep{ + { + Config: testAccIAMGroupPolicyConfig_generatedName(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckIAMGroupPolicy( + "aws_iam_group.test", + "aws_iam_group_policy.test", + ), + ), + }, + }, + }) +} + func testAccCheckIAMGroupPolicyDestroy(s *terraform.State) error { conn := testAccProvider.Meta().(*AWSClient).iamconn @@ -102,43 +146,90 @@ func testAccCheckIAMGroupPolicy( } } -const testAccIAMGroupPolicyConfig = ` -resource "aws_iam_group" "group" { - name = "test_group" - path = "/" -} +func testAccIAMGroupPolicyConfig(rInt int) string { + return fmt.Sprintf(` + resource "aws_iam_group" "group" { + name = "test_group_%d" + path = "/" + } -resource "aws_iam_group_policy" "foo" { - name = "foo_policy" - group = "${aws_iam_group.group.name}" - policy = < 128 { - errors = append(errors, fmt.Errorf( - "%q cannot be longer than 128 characters", k)) - } - if !regexp.MustCompile("^[\\w+=,.@-]+$").MatchString(value) { - errors = append(errors, fmt.Errorf( - "%q must match [\\w+=,.@-]", k)) - } - return - }, + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + ConflictsWith: []string{"name_prefix"}, + ValidateFunc: validateIamRolePolicyName, + }, + "name_prefix": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ValidateFunc: validateIamRolePolicyNamePrefix, }, "role": &schema.Schema{ Type: schema.TypeString, @@ -62,10 +58,19 @@ func resourceAwsIamRolePolicyPut(d *schema.ResourceData, meta interface{}) error request := &iam.PutRolePolicyInput{ RoleName: aws.String(d.Get("role").(string)), - PolicyName: aws.String(d.Get("name").(string)), PolicyDocument: aws.String(d.Get("policy").(string)), } + var policyName string + if v, ok := d.GetOk("name"); ok { + policyName = v.(string) + } else if v, ok := d.GetOk("name_prefix"); ok { + policyName = resource.PrefixedUniqueId(v.(string)) + } else { + policyName = resource.UniqueId() + } + request.PolicyName = aws.String(policyName) + if _, err := iamconn.PutRolePolicy(request); err != nil { return fmt.Errorf("Error putting IAM role policy %s: %s", *request.PolicyName, err) } diff --git a/builtin/providers/aws/resource_aws_iam_role_policy_attachment_test.go b/builtin/providers/aws/resource_aws_iam_role_policy_attachment_test.go index d1b4ef6e18..7a723bc077 100644 --- a/builtin/providers/aws/resource_aws_iam_role_policy_attachment_test.go +++ b/builtin/providers/aws/resource_aws_iam_role_policy_attachment_test.go @@ -7,30 +7,35 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/iam" + "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" ) func TestAccAWSRolePolicyAttachment_basic(t *testing.T) { var out iam.ListAttachedRolePoliciesOutput + rInt := acctest.RandInt() + testPolicy := fmt.Sprintf("tf-acctest-%d", rInt) + testPolicy2 := fmt.Sprintf("tf-acctest2-%d", rInt) + testPolicy3 := fmt.Sprintf("tf-acctest3-%d", rInt) resource.Test(t, 
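// Editor's sketch of what validateIamRolePolicyName plausibly checks, based
// on the inline validator the hunk above replaces: at most 128 characters,
// drawn from the IAM-permitted class [\w+=,.@-]. The companion prefix
// validator would presumably use a smaller cap to leave room for the
// generated suffix.
package main

import (
	"fmt"
	"regexp"
)

var policyNameRe = regexp.MustCompile(`^[\w+=,.@-]+$`)

func validateIamRolePolicyName(v interface{}, k string) (ws []string, errors []error) {
	value := v.(string)
	if len(value) > 128 {
		errors = append(errors, fmt.Errorf(
			"%q cannot be longer than 128 characters", k))
	}
	if !policyNameRe.MatchString(value) {
		errors = append(errors, fmt.Errorf("%q must match [\\w+=,.@-]", k))
	}
	return
}

func main() {
	_, errs := validateIamRolePolicyName("my_policy-1", "name")
	fmt.Println(len(errs)) // 0
}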
resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSRolePolicyAttachmentDestroy, Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccAWSRolePolicyAttachConfig, + { + Config: testAccAWSRolePolicyAttachConfig(rInt), Check: resource.ComposeTestCheckFunc( testAccCheckAWSRolePolicyAttachmentExists("aws_iam_role_policy_attachment.test-attach", 1, &out), - testAccCheckAWSRolePolicyAttachmentAttributes([]string{"test-policy"}, &out), + testAccCheckAWSRolePolicyAttachmentAttributes([]string{testPolicy}, &out), ), }, - resource.TestStep{ - Config: testAccAWSRolePolicyAttachConfigUpdate, + { + Config: testAccAWSRolePolicyAttachConfigUpdate(rInt), Check: resource.ComposeTestCheckFunc( testAccCheckAWSRolePolicyAttachmentExists("aws_iam_role_policy_attachment.test-attach", 2, &out), - testAccCheckAWSRolePolicyAttachmentAttributes([]string{"test-policy2", "test-policy3"}, &out), + testAccCheckAWSRolePolicyAttachmentAttributes([]string{testPolicy2, testPolicy3}, &out), ), }, }, @@ -88,135 +93,137 @@ func testAccCheckAWSRolePolicyAttachmentAttributes(policies []string, out *iam.L } } -const testAccAWSRolePolicyAttachConfig = ` -resource "aws_iam_role" "role" { - name = "test-role" - assume_role_policy = < 3 { d.Set("set_identifier", parts[3]) } - - d.Set("weight", -1) } record, err := findRecord(d, meta) @@ -631,7 +629,7 @@ func resourceAwsRoute53RecordDelete(d *schema.ResourceData, meta interface{}) er changeBatch := &route53.ChangeBatch{ Comment: aws.String("Deleted by Terraform"), Changes: []*route53.Change{ - &route53.Change{ + { Action: aws.String("DELETE"), ResourceRecordSet: rec, }, diff --git a/builtin/providers/aws/resource_aws_route_table.go b/builtin/providers/aws/resource_aws_route_table.go index e8e0cb8038..c92dbde163 100644 --- a/builtin/providers/aws/resource_aws_route_table.go +++ b/builtin/providers/aws/resource_aws_route_table.go @@ -25,7 +25,7 @@ func resourceAwsRouteTable() *schema.Resource { }, Schema: map[string]*schema.Schema{ - "vpc_id": &schema.Schema{ + "vpc_id": { Type: schema.TypeString, Required: true, ForceNew: true, @@ -33,45 +33,55 @@ func resourceAwsRouteTable() *schema.Resource { "tags": tagsSchema(), - "propagating_vgws": &schema.Schema{ + "propagating_vgws": { Type: schema.TypeSet, Optional: true, Elem: &schema.Schema{Type: schema.TypeString}, Set: schema.HashString, }, - "route": &schema.Schema{ + "route": { Type: schema.TypeSet, Computed: true, Optional: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "cidr_block": &schema.Schema{ - Type: schema.TypeString, - Required: true, - }, - - "gateway_id": &schema.Schema{ + "cidr_block": { Type: schema.TypeString, Optional: true, }, - "instance_id": &schema.Schema{ + "ipv6_cidr_block": { Type: schema.TypeString, Optional: true, }, - "nat_gateway_id": &schema.Schema{ + "egress_only_gateway_id": { Type: schema.TypeString, Optional: true, }, - "vpc_peering_connection_id": &schema.Schema{ + "gateway_id": { Type: schema.TypeString, Optional: true, }, - "network_interface_id": &schema.Schema{ + "instance_id": { + Type: schema.TypeString, + Optional: true, + }, + + "nat_gateway_id": { + Type: schema.TypeString, + Optional: true, + }, + + "vpc_peering_connection_id": { + Type: schema.TypeString, + Optional: true, + }, + + "network_interface_id": { Type: schema.TypeString, Optional: true, }, @@ -166,6 +176,12 @@ func resourceAwsRouteTableRead(d *schema.ResourceData, meta interface{}) error { if r.DestinationCidrBlock != nil { 
m["cidr_block"] = *r.DestinationCidrBlock } + if r.DestinationIpv6CidrBlock != nil { + m["ipv6_cidr_block"] = *r.DestinationIpv6CidrBlock + } + if r.EgressOnlyInternetGatewayId != nil { + m["egress_only_gateway_id"] = *r.EgressOnlyInternetGatewayId + } if r.GatewayId != nil { m["gateway_id"] = *r.GatewayId } @@ -266,14 +282,27 @@ func resourceAwsRouteTableUpdate(d *schema.ResourceData, meta interface{}) error for _, route := range ors.List() { m := route.(map[string]interface{}) - // Delete the route as it no longer exists in the config - log.Printf( - "[INFO] Deleting route from %s: %s", - d.Id(), m["cidr_block"].(string)) - _, err := conn.DeleteRoute(&ec2.DeleteRouteInput{ - RouteTableId: aws.String(d.Id()), - DestinationCidrBlock: aws.String(m["cidr_block"].(string)), - }) + deleteOpts := &ec2.DeleteRouteInput{ + RouteTableId: aws.String(d.Id()), + } + + if s := m["ipv6_cidr_block"].(string); s != "" { + deleteOpts.DestinationIpv6CidrBlock = aws.String(s) + + log.Printf( + "[INFO] Deleting route from %s: %s", + d.Id(), m["ipv6_cidr_block"].(string)) + } + + if s := m["cidr_block"].(string); s != "" { + deleteOpts.DestinationCidrBlock = aws.String(s) + + log.Printf( + "[INFO] Deleting route from %s: %s", + d.Id(), m["cidr_block"].(string)) + } + + _, err := conn.DeleteRoute(deleteOpts) if err != nil { return err } @@ -288,16 +317,39 @@ func resourceAwsRouteTableUpdate(d *schema.ResourceData, meta interface{}) error m := route.(map[string]interface{}) opts := ec2.CreateRouteInput{ - RouteTableId: aws.String(d.Id()), - DestinationCidrBlock: aws.String(m["cidr_block"].(string)), - GatewayId: aws.String(m["gateway_id"].(string)), - InstanceId: aws.String(m["instance_id"].(string)), - VpcPeeringConnectionId: aws.String(m["vpc_peering_connection_id"].(string)), - NetworkInterfaceId: aws.String(m["network_interface_id"].(string)), + RouteTableId: aws.String(d.Id()), } - if m["nat_gateway_id"].(string) != "" { - opts.NatGatewayId = aws.String(m["nat_gateway_id"].(string)) + if s := m["vpc_peering_connection_id"].(string); s != "" { + opts.VpcPeeringConnectionId = aws.String(s) + } + + if s := m["network_interface_id"].(string); s != "" { + opts.NetworkInterfaceId = aws.String(s) + } + + if s := m["instance_id"].(string); s != "" { + opts.InstanceId = aws.String(s) + } + + if s := m["ipv6_cidr_block"].(string); s != "" { + opts.DestinationIpv6CidrBlock = aws.String(s) + } + + if s := m["cidr_block"].(string); s != "" { + opts.DestinationCidrBlock = aws.String(s) + } + + if s := m["gateway_id"].(string); s != "" { + opts.GatewayId = aws.String(s) + } + + if s := m["egress_only_gateway_id"].(string); s != "" { + opts.EgressOnlyInternetGatewayId = aws.String(s) + } + + if s := m["nat_gateway_id"].(string); s != "" { + opts.NatGatewayId = aws.String(s) } log.Printf("[INFO] Creating route for %s: %#v", d.Id(), opts) @@ -402,6 +454,10 @@ func resourceAwsRouteTableHash(v interface{}) int { var buf bytes.Buffer m := v.(map[string]interface{}) + if v, ok := m["ipv6_cidr_block"]; ok { + buf.WriteString(fmt.Sprintf("%s-", v.(string))) + } + if v, ok := m["cidr_block"]; ok { buf.WriteString(fmt.Sprintf("%s-", v.(string))) } @@ -410,6 +466,10 @@ func resourceAwsRouteTableHash(v interface{}) int { buf.WriteString(fmt.Sprintf("%s-", v.(string))) } + if v, ok := m["egress_only_gateway_id"]; ok { + buf.WriteString(fmt.Sprintf("%s-", v.(string))) + } + natGatewaySet := false if v, ok := m["nat_gateway_id"]; ok { natGatewaySet = v.(string) != "" diff --git a/builtin/providers/aws/resource_aws_route_table_test.go 
b/builtin/providers/aws/resource_aws_route_table_test.go index 910f8c0135..68fd9237be 100644 --- a/builtin/providers/aws/resource_aws_route_table_test.go +++ b/builtin/providers/aws/resource_aws_route_table_test.go @@ -63,7 +63,7 @@ func TestAccAWSRouteTable_basic(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckRouteTableDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccRouteTableConfig, Check: resource.ComposeTestCheckFunc( testAccCheckRouteTableExists( @@ -72,7 +72,7 @@ func TestAccAWSRouteTable_basic(t *testing.T) { ), }, - resource.TestStep{ + { Config: testAccRouteTableConfigChange, Check: resource.ComposeTestCheckFunc( testAccCheckRouteTableExists( @@ -113,7 +113,7 @@ func TestAccAWSRouteTable_instance(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckRouteTableDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccRouteTableConfigInstance, Check: resource.ComposeTestCheckFunc( testAccCheckRouteTableExists( @@ -125,6 +125,35 @@ func TestAccAWSRouteTable_instance(t *testing.T) { }) } +func TestAccAWSRouteTable_ipv6(t *testing.T) { + var v ec2.RouteTable + + testCheck := func(*terraform.State) error { + // Expect 3: 2 IPv6 (local + all outbound) + 1 IPv4 + if len(v.Routes) != 3 { + return fmt.Errorf("bad routes: %#v", v.Routes) + } + + return nil + } + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + IDRefreshName: "aws_route_table.foo", + Providers: testAccProviders, + CheckDestroy: testAccCheckRouteTableDestroy, + Steps: []resource.TestStep{ + { + Config: testAccRouteTableConfigIpv6, + Check: resource.ComposeTestCheckFunc( + testAccCheckRouteTableExists("aws_route_table.foo", &v), + testCheck, + ), + }, + }, + }) +} + func TestAccAWSRouteTable_tags(t *testing.T) { var route_table ec2.RouteTable @@ -134,7 +163,7 @@ func TestAccAWSRouteTable_tags(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckRouteTableDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccRouteTableConfigTags, Check: resource.ComposeTestCheckFunc( testAccCheckRouteTableExists("aws_route_table.foo", &route_table), @@ -142,7 +171,7 @@ func TestAccAWSRouteTable_tags(t *testing.T) { ), }, - resource.TestStep{ + { Config: testAccRouteTableConfigTagsUpdate, Check: resource.ComposeTestCheckFunc( testAccCheckRouteTableExists("aws_route_table.foo", &route_table), @@ -244,7 +273,7 @@ func TestAccAWSRouteTable_vpcPeering(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckRouteTableDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccRouteTableVpcPeeringConfig, Check: resource.ComposeTestCheckFunc( testAccCheckRouteTableExists( @@ -285,7 +314,7 @@ func TestAccAWSRouteTable_vgwRoutePropagation(t *testing.T) { testAccCheckRouteTableDestroy, ), Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccRouteTableVgwRoutePropagationConfig, Check: resource.ComposeTestCheckFunc( testAccCheckRouteTableExists( @@ -342,6 +371,26 @@ resource "aws_route_table" "foo" { } ` +const testAccRouteTableConfigIpv6 = ` +resource "aws_vpc" "foo" { + cidr_block = "10.1.0.0/16" + assign_generated_ipv6_cidr_block = true +} + +resource "aws_egress_only_internet_gateway" "foo" { + vpc_id = "${aws_vpc.foo.id}" +} + +resource "aws_route_table" "foo" { + vpc_id = "${aws_vpc.foo.id}" + + route { + ipv6_cidr_block = "::/0" + egress_only_gateway_id = "${aws_egress_only_internet_gateway.foo.id}" + } +} +` + const 
testAccRouteTableConfigInstance = ` resource "aws_vpc" "foo" { cidr_block = "10.1.0.0/16" diff --git a/builtin/providers/aws/resource_aws_route_test.go b/builtin/providers/aws/resource_aws_route_test.go index d7a2c0b995..a8bc00373b 100644 --- a/builtin/providers/aws/resource_aws_route_test.go +++ b/builtin/providers/aws/resource_aws_route_test.go @@ -38,7 +38,7 @@ func TestAccAWSRoute_basic(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckAWSRouteDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSRouteBasicConfig, Check: resource.ComposeTestCheckFunc( testAccCheckAWSRouteExists("aws_route.bar", &route), @@ -49,6 +49,43 @@ func TestAccAWSRoute_basic(t *testing.T) { }) } +func TestAccAWSRoute_ipv6Support(t *testing.T) { + var route ec2.Route + + //aws creates a default route + testCheck := func(s *terraform.State) error { + + name := "aws_egress_only_internet_gateway.foo" + gwres, ok := s.RootModule().Resources[name] + if !ok { + return fmt.Errorf("Not found: %s\n", name) + } + + if *route.EgressOnlyInternetGatewayId != gwres.Primary.ID { + return fmt.Errorf("Egress Only Internet Gateway Id (Expected=%s, Actual=%s)\n", gwres.Primary.ID, *route.EgressOnlyInternetGatewayId) + } + + return nil + } + + resource.Test(t, resource.TestCase{ + PreCheck: func() { + testAccPreCheck(t) + }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSRouteDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSRouteConfigIpv6, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSRouteExists("aws_route.bar", &route), + testCheck, + ), + }, + }, + }) +} + func TestAccAWSRoute_changeCidr(t *testing.T) { var route ec2.Route var routeTable ec2.RouteTable @@ -101,14 +138,14 @@ func TestAccAWSRoute_changeCidr(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckAWSRouteDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSRouteBasicConfig, Check: resource.ComposeTestCheckFunc( testAccCheckAWSRouteExists("aws_route.bar", &route), testCheck, ), }, - resource.TestStep{ + { Config: testAccAWSRouteBasicConfigChangeCidr, Check: resource.ComposeTestCheckFunc( testAccCheckAWSRouteExists("aws_route.bar", &route), @@ -139,14 +176,14 @@ func TestAccAWSRoute_noopdiff(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckAWSRouteDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSRouteNoopChange, Check: resource.ComposeTestCheckFunc( testAccCheckAWSRouteExists("aws_route.test", &route), testCheck, ), }, - resource.TestStep{ + { Config: testAccAWSRouteNoopChange, Check: resource.ComposeTestCheckFunc( testAccCheckAWSRouteExists("aws_route.test", &route), @@ -166,7 +203,7 @@ func TestAccAWSRoute_doesNotCrashWithVPCEndpoint(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckAWSRouteDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSRouteWithVPCEndpoint, Check: resource.ComposeTestCheckFunc( testAccCheckAWSRouteExists("aws_route.bar", &route), @@ -192,6 +229,7 @@ func testAccCheckAWSRouteExists(n string, res *ec2.Route) resource.TestCheckFunc conn, rs.Primary.Attributes["route_table_id"], rs.Primary.Attributes["destination_cidr_block"], + rs.Primary.Attributes["destination_ipv6_cidr_block"], ) if err != nil { @@ -219,6 +257,7 @@ func testAccCheckAWSRouteDestroy(s *terraform.State) error { conn, rs.Primary.Attributes["route_table_id"], rs.Primary.Attributes["destination_cidr_block"], + 
rs.Primary.Attributes["destination_ipv6_cidr_block"], ) if route == nil && err == nil { @@ -249,6 +288,29 @@ resource "aws_route" "bar" { } `) +var testAccAWSRouteConfigIpv6 = fmt.Sprintf(` +resource "aws_vpc" "foo" { + cidr_block = "10.1.0.0/16" + assign_generated_ipv6_cidr_block = true +} + +resource "aws_egress_only_internet_gateway" "foo" { + vpc_id = "${aws_vpc.foo.id}" +} + +resource "aws_route_table" "foo" { + vpc_id = "${aws_vpc.foo.id}" +} + +resource "aws_route" "bar" { + route_table_id = "${aws_route_table.foo.id}" + destination_ipv6_cidr_block = "::/0" + egress_only_gateway_id = "${aws_egress_only_internet_gateway.foo.id}" +} + + +`) + var testAccAWSRouteBasicConfigChangeCidr = fmt.Sprint(` resource "aws_vpc" "foo" { cidr_block = "10.1.0.0/16" diff --git a/builtin/providers/aws/resource_aws_s3_bucket_object.go b/builtin/providers/aws/resource_aws_s3_bucket_object.go index e945a09802..968344f69e 100644 --- a/builtin/providers/aws/resource_aws_s3_bucket_object.go +++ b/builtin/providers/aws/resource_aws_s3_bucket_object.go @@ -275,7 +275,7 @@ func resourceAwsS3BucketObjectRead(d *schema.ResourceData, meta interface{}) err Key: aws.String(key), }) if err != nil { - return err + return fmt.Errorf("Failed to get object tags (bucket: %s, key: %s): %s", bucket, key, err) } d.Set("tags", tagsToMapS3(tagResp.TagSet)) @@ -319,7 +319,7 @@ func resourceAwsS3BucketObjectDelete(d *schema.ResourceData, meta interface{}) e } _, err := s3conn.DeleteObject(&input) if err != nil { - return fmt.Errorf("Error deleting S3 bucket object: %s", err) + return fmt.Errorf("Error deleting S3 bucket object: %s Bucket: %q Object: %q", err, bucket, key) } } diff --git a/builtin/providers/aws/resource_aws_security_group.go b/builtin/providers/aws/resource_aws_security_group.go index 4c34fea967..e702c1aa0a 100644 --- a/builtin/providers/aws/resource_aws_security_group.go +++ b/builtin/providers/aws/resource_aws_security_group.go @@ -28,7 +28,7 @@ func resourceAwsSecurityGroup() *schema.Resource { }, Schema: map[string]*schema.Schema{ - "name": &schema.Schema{ + "name": { Type: schema.TypeString, Optional: true, Computed: true, @@ -44,7 +44,7 @@ func resourceAwsSecurityGroup() *schema.Resource { }, }, - "name_prefix": &schema.Schema{ + "name_prefix": { Type: schema.TypeString, Optional: true, ForceNew: true, @@ -58,7 +58,7 @@ func resourceAwsSecurityGroup() *schema.Resource { }, }, - "description": &schema.Schema{ + "description": { Type: schema.TypeString, Optional: true, ForceNew: true, @@ -73,49 +73,61 @@ func resourceAwsSecurityGroup() *schema.Resource { }, }, - "vpc_id": &schema.Schema{ + "vpc_id": { Type: schema.TypeString, Optional: true, ForceNew: true, Computed: true, }, - "ingress": &schema.Schema{ + "ingress": { Type: schema.TypeSet, Optional: true, Computed: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "from_port": &schema.Schema{ + "from_port": { Type: schema.TypeInt, Required: true, }, - "to_port": &schema.Schema{ + "to_port": { Type: schema.TypeInt, Required: true, }, - "protocol": &schema.Schema{ + "protocol": { Type: schema.TypeString, Required: true, StateFunc: protocolStateFunc, }, - "cidr_blocks": &schema.Schema{ + "cidr_blocks": { Type: schema.TypeList, Optional: true, - Elem: &schema.Schema{Type: schema.TypeString}, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validateCIDRNetworkAddress, + }, }, - "security_groups": &schema.Schema{ + "ipv6_cidr_blocks": { + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{ + Type: schema.TypeString, 
+ ValidateFunc: validateCIDRNetworkAddress, + }, + }, + + "security_groups": { Type: schema.TypeSet, Optional: true, Elem: &schema.Schema{Type: schema.TypeString}, Set: schema.HashString, }, - "self": &schema.Schema{ + "self": { Type: schema.TypeBool, Optional: true, Default: false, @@ -125,48 +137,60 @@ func resourceAwsSecurityGroup() *schema.Resource { Set: resourceAwsSecurityGroupRuleHash, }, - "egress": &schema.Schema{ + "egress": { Type: schema.TypeSet, Optional: true, Computed: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "from_port": &schema.Schema{ + "from_port": { Type: schema.TypeInt, Required: true, }, - "to_port": &schema.Schema{ + "to_port": { Type: schema.TypeInt, Required: true, }, - "protocol": &schema.Schema{ + "protocol": { Type: schema.TypeString, Required: true, StateFunc: protocolStateFunc, }, - "cidr_blocks": &schema.Schema{ + "cidr_blocks": { + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validateCIDRNetworkAddress, + }, + }, + + "ipv6_cidr_blocks": { + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validateCIDRNetworkAddress, + }, + }, + + "prefix_list_ids": { Type: schema.TypeList, Optional: true, Elem: &schema.Schema{Type: schema.TypeString}, }, - "prefix_list_ids": &schema.Schema{ - Type: schema.TypeList, - Optional: true, - Elem: &schema.Schema{Type: schema.TypeString}, - }, - - "security_groups": &schema.Schema{ + "security_groups": { Type: schema.TypeSet, Optional: true, Elem: &schema.Schema{Type: schema.TypeString}, Set: schema.HashString, }, - "self": &schema.Schema{ + "self": { Type: schema.TypeBool, Optional: true, Default: false, @@ -176,7 +200,7 @@ func resourceAwsSecurityGroup() *schema.Resource { Set: resourceAwsSecurityGroupRuleHash, }, - "owner_id": &schema.Schema{ + "owner_id": { Type: schema.TypeString, Computed: true, }, @@ -252,11 +276,11 @@ func resourceAwsSecurityGroupCreate(d *schema.ResourceData, meta interface{}) er req := &ec2.RevokeSecurityGroupEgressInput{ GroupId: createResp.GroupId, IpPermissions: []*ec2.IpPermission{ - &ec2.IpPermission{ + { FromPort: aws.Int64(int64(0)), ToPort: aws.Int64(int64(0)), IpRanges: []*ec2.IpRange{ - &ec2.IpRange{ + { CidrIp: aws.String("0.0.0.0/0"), }, }, @@ -412,6 +436,18 @@ func resourceAwsSecurityGroupRuleHash(v interface{}) int { buf.WriteString(fmt.Sprintf("%s-", v)) } } + if v, ok := m["ipv6_cidr_blocks"]; ok { + vs := v.([]interface{}) + s := make([]string, len(vs)) + for i, raw := range vs { + s[i] = raw.(string) + } + sort.Strings(s) + + for _, v := range s { + buf.WriteString(fmt.Sprintf("%s-", v)) + } + } if v, ok := m["prefix_list_ids"]; ok { vs := v.([]interface{}) s := make([]string, len(vs)) @@ -476,6 +512,20 @@ func resourceAwsSecurityGroupIPPermGather(groupId string, permissions []*ec2.IpP m["cidr_blocks"] = list } + if len(perm.Ipv6Ranges) > 0 { + raw, ok := m["ipv6_cidr_blocks"] + if !ok { + raw = make([]string, 0, len(perm.Ipv6Ranges)) + } + list := raw.([]string) + + for _, ip := range perm.Ipv6Ranges { + list = append(list, *ip.CidrIpv6) + } + + m["ipv6_cidr_blocks"] = list + } + if len(perm.PrefixListIds) > 0 { raw, ok := m["prefix_list_ids"] if !ok { @@ -699,8 +749,9 @@ func matchRules(rType string, local []interface{}, remote []map[string]interface // local rule we're examining rHash := idHash(rType, r["protocol"].(string), r["to_port"].(int64), r["from_port"].(int64), remoteSelfVal) if rHash == localHash { - var numExpectedCidrs, 
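// Editor's sketch of the hashing change in resourceAwsSecurityGroupRuleHash
// above: list order must not affect a set element's hash, so the new
// ipv6_cidr_blocks slice is sorted before being written into the buffer,
// exactly like cidr_blocks. hashString is a stand-in for hashcode.String.
package main

import (
	"bytes"
	"fmt"
	"hash/crc32"
	"sort"
)

func hashString(s string) int { return int(crc32.ChecksumIEEE([]byte(s))) }

func hashCidrs(cidrs []string) int {
	var buf bytes.Buffer
	sorted := append([]string(nil), cidrs...) // avoid mutating the caller's slice
	sort.Strings(sorted)
	for _, c := range sorted {
		buf.WriteString(fmt.Sprintf("%s-", c))
	}
	return hashString(buf.String())
}

func main() {
	a := hashCidrs([]string{"2001:db8::/32", "::/0"})
	b := hashCidrs([]string{"::/0", "2001:db8::/32"})
	fmt.Println(a == b) // true: order-insensitive
}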
numExpectedPrefixLists, numExpectedSGs, numRemoteCidrs, numRemotePrefixLists, numRemoteSGs int + var numExpectedCidrs, numExpectedIpv6Cidrs, numExpectedPrefixLists, numExpectedSGs, numRemoteCidrs, numRemoteIpv6Cidrs, numRemotePrefixLists, numRemoteSGs int var matchingCidrs []string + var matchingIpv6Cidrs []string var matchingSGs []string var matchingPrefixLists []string @@ -710,6 +761,10 @@ func matchRules(rType string, local []interface{}, remote []map[string]interface if ok { numExpectedCidrs = len(l["cidr_blocks"].([]interface{})) } + liRaw, ok := l["ipv6_cidr_blocks"] + if ok { + numExpectedIpv6Cidrs = len(l["ipv6_cidr_blocks"].([]interface{})) + } lpRaw, ok := l["prefix_list_ids"] if ok { numExpectedPrefixLists = len(l["prefix_list_ids"].([]interface{})) @@ -723,6 +778,10 @@ func matchRules(rType string, local []interface{}, remote []map[string]interface if ok { numRemoteCidrs = len(r["cidr_blocks"].([]string)) } + riRaw, ok := r["ipv6_cidr_blocks"] + if ok { + numRemoteIpv6Cidrs = len(r["ipv6_cidr_blocks"].([]string)) + } rpRaw, ok := r["prefix_list_ids"] if ok { numRemotePrefixLists = len(r["prefix_list_ids"].([]string)) @@ -738,6 +797,10 @@ func matchRules(rType string, local []interface{}, remote []map[string]interface log.Printf("[DEBUG] Local rule has more CIDR blocks, continuing (%d/%d)", numExpectedCidrs, numRemoteCidrs) continue } + if numExpectedIpv6Cidrs > numRemoteIpv6Cidrs { + log.Printf("[DEBUG] Local rule has more IPV6 CIDR blocks, continuing (%d/%d)", numExpectedIpv6Cidrs, numRemoteIpv6Cidrs) + continue + } if numExpectedPrefixLists > numRemotePrefixLists { log.Printf("[DEBUG] Local rule has more prefix lists, continuing (%d/%d)", numExpectedPrefixLists, numRemotePrefixLists) continue @@ -775,6 +838,29 @@ func matchRules(rType string, local []interface{}, remote []map[string]interface } } + //IPV6 CIDRs + var localIpv6Cidrs []interface{} + if liRaw != nil { + localIpv6Cidrs = liRaw.([]interface{}) + } + localIpv6CidrSet := schema.NewSet(schema.HashString, localIpv6Cidrs) + + var remoteIpv6Cidrs []string + if riRaw != nil { + remoteIpv6Cidrs = riRaw.([]string) + } + var listIpv6 []interface{} + for _, s := range remoteIpv6Cidrs { + listIpv6 = append(listIpv6, s) + } + remoteIpv6CidrSet := schema.NewSet(schema.HashString, listIpv6) + + for _, s := range localIpv6CidrSet.List() { + if remoteIpv6CidrSet.Contains(s) { + matchingIpv6Cidrs = append(matchingIpv6Cidrs, s.(string)) + } + } + // match prefix lists by converting both to sets, and using Set methods var localPrefixLists []interface{} if lpRaw != nil { @@ -830,73 +916,93 @@ func matchRules(rType string, local []interface{}, remote []map[string]interface // match, and then remove those elements from the remote rule, so that // this remote rule can still be considered by other local rules if numExpectedCidrs == len(matchingCidrs) { - if numExpectedPrefixLists == len(matchingPrefixLists) { - if numExpectedSGs == len(matchingSGs) { - // confirm that self references match - var lSelf bool - var rSelf bool - if _, ok := l["self"]; ok { - lSelf = l["self"].(bool) - } - if _, ok := r["self"]; ok { - rSelf = r["self"].(bool) - } - if rSelf == lSelf { - delete(r, "self") - // pop local cidrs from remote - diffCidr := remoteCidrSet.Difference(localCidrSet) - var newCidr []string - for _, cRaw := range diffCidr.List() { - newCidr = append(newCidr, cRaw.(string)) + if numExpectedIpv6Cidrs == len(matchingIpv6Cidrs) { + if numExpectedPrefixLists == len(matchingPrefixLists) { + if numExpectedSGs == len(matchingSGs) { + // confirm 
that self references match + var lSelf bool + var rSelf bool + if _, ok := l["self"]; ok { + lSelf = l["self"].(bool) } - - // reassigning - if len(newCidr) > 0 { - r["cidr_blocks"] = newCidr - } else { - delete(r, "cidr_blocks") + if _, ok := r["self"]; ok { + rSelf = r["self"].(bool) } + if rSelf == lSelf { + delete(r, "self") + // pop local cidrs from remote + diffCidr := remoteCidrSet.Difference(localCidrSet) + var newCidr []string + for _, cRaw := range diffCidr.List() { + newCidr = append(newCidr, cRaw.(string)) + } - // pop local prefix lists from remote - diffPrefixLists := remotePrefixListsSet.Difference(localPrefixListsSet) - var newPrefixLists []string - for _, pRaw := range diffPrefixLists.List() { - newPrefixLists = append(newPrefixLists, pRaw.(string)) + // reassigning + if len(newCidr) > 0 { + r["cidr_blocks"] = newCidr + } else { + delete(r, "cidr_blocks") + } + + //// IPV6 + //// Comparison + diffIpv6Cidr := remoteIpv6CidrSet.Difference(localIpv6CidrSet) + var newIpv6Cidr []string + for _, cRaw := range diffIpv6Cidr.List() { + newIpv6Cidr = append(newIpv6Cidr, cRaw.(string)) + } + + // reassigning + if len(newIpv6Cidr) > 0 { + r["ipv6_cidr_blocks"] = newIpv6Cidr + } else { + delete(r, "ipv6_cidr_blocks") + } + + // pop local prefix lists from remote + diffPrefixLists := remotePrefixListsSet.Difference(localPrefixListsSet) + var newPrefixLists []string + for _, pRaw := range diffPrefixLists.List() { + newPrefixLists = append(newPrefixLists, pRaw.(string)) + } + + // reassigning + if len(newPrefixLists) > 0 { + r["prefix_list_ids"] = newPrefixLists + } else { + delete(r, "prefix_list_ids") + } + + // pop local sgs from remote + diffSGs := remoteSGSet.Difference(localSGSet) + if len(diffSGs.List()) > 0 { + r["security_groups"] = diffSGs + } else { + delete(r, "security_groups") + } + + saves = append(saves, l) } - - // reassigning - if len(newPrefixLists) > 0 { - r["prefix_list_ids"] = newPrefixLists - } else { - delete(r, "prefix_list_ids") - } - - // pop local sgs from remote - diffSGs := remoteSGSet.Difference(localSGSet) - if len(diffSGs.List()) > 0 { - r["security_groups"] = diffSGs - } else { - delete(r, "security_groups") - } - - saves = append(saves, l) } } + } } } } } - // Here we catch any remote rules that have not been stripped of all self, // cidrs, and security groups. We'll add remote rules here that have not been // matched locally, and let the graph sort things out. 
This will happen when // rules are added externally to Terraform for _, r := range remote { - var lenCidr, lenPrefixLists, lenSGs int + var lenCidr, lenIpv6Cidr, lenPrefixLists, lenSGs int if rCidrs, ok := r["cidr_blocks"]; ok { lenCidr = len(rCidrs.([]string)) } + if rIpv6Cidrs, ok := r["ipv6_cidr_blocks"]; ok { + lenIpv6Cidr = len(rIpv6Cidrs.([]string)) + } if rPrefixLists, ok := r["prefix_list_ids"]; ok { lenPrefixLists = len(rPrefixLists.([]string)) } @@ -910,7 +1016,7 @@ func matchRules(rType string, local []interface{}, remote []map[string]interface } } - if lenSGs+lenCidr+lenPrefixLists > 0 { + if lenSGs+lenCidr+lenIpv6Cidr+lenPrefixLists > 0 { log.Printf("[DEBUG] Found a remote Rule that wasn't empty: (%#v)", r) saves = append(saves, r) } @@ -1003,15 +1109,15 @@ func deleteLingeringLambdaENIs(conn *ec2.EC2, d *schema.ResourceData) error { // Here we carefully find the offenders params := &ec2.DescribeNetworkInterfacesInput{ Filters: []*ec2.Filter{ - &ec2.Filter{ + { Name: aws.String("group-id"), Values: []*string{aws.String(d.Id())}, }, - &ec2.Filter{ + { Name: aws.String("description"), Values: []*string{aws.String("AWS Lambda VPC ENI: *")}, }, - &ec2.Filter{ + { Name: aws.String("requester-id"), Values: []*string{aws.String("*:awslambda_*")}, }, diff --git a/builtin/providers/aws/resource_aws_security_group_rule.go b/builtin/providers/aws/resource_aws_security_group_rule.go index f110f98ea8..1372bc83d1 100644 --- a/builtin/providers/aws/resource_aws_security_group_rule.go +++ b/builtin/providers/aws/resource_aws_security_group_rule.go @@ -58,7 +58,20 @@ func resourceAwsSecurityGroupRule() *schema.Resource { Type: schema.TypeList, Optional: true, ForceNew: true, - Elem: &schema.Schema{Type: schema.TypeString}, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validateCIDRNetworkAddress, + }, + }, + + "ipv6_cidr_blocks": { + Type: schema.TypeList, + Optional: true, + ForceNew: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validateCIDRNetworkAddress, + }, }, "prefix_list_ids": { @@ -400,6 +413,19 @@ func findRuleMatch(p *ec2.IpPermission, rules []*ec2.IpPermission, isVPC bool) * continue } + remaining = len(p.Ipv6Ranges) + for _, ipv6 := range p.Ipv6Ranges { + for _, ipv6ip := range r.Ipv6Ranges { + if *ipv6.CidrIpv6 == *ipv6ip.CidrIpv6 { + remaining-- + } + } + } + + if remaining > 0 { + continue + } + remaining = len(p.PrefixListIds) for _, pl := range p.PrefixListIds { for _, rpl := range r.PrefixListIds { @@ -463,6 +489,18 @@ func ipPermissionIDHash(sg_id, ruleType string, ip *ec2.IpPermission) string { } } + if len(ip.Ipv6Ranges) > 0 { + s := make([]string, len(ip.Ipv6Ranges)) + for i, r := range ip.Ipv6Ranges { + s[i] = *r.CidrIpv6 + } + sort.Strings(s) + + for _, v := range s { + buf.WriteString(fmt.Sprintf("%s-", v)) + } + } + if len(ip.PrefixListIds) > 0 { s := make([]string, len(ip.PrefixListIds)) for i, pl := range ip.PrefixListIds { @@ -555,6 +593,18 @@ func expandIPPerm(d *schema.ResourceData, sg *ec2.SecurityGroup) (*ec2.IpPermiss } } + if raw, ok := d.GetOk("ipv6_cidr_blocks"); ok { + list := raw.([]interface{}) + perm.Ipv6Ranges = make([]*ec2.Ipv6Range, len(list)) + for i, v := range list { + cidrIP, ok := v.(string) + if !ok { + return nil, fmt.Errorf("empty element found in ipv6_cidr_blocks - consider using the compact function") + } + perm.Ipv6Ranges[i] = &ec2.Ipv6Range{CidrIpv6: aws.String(cidrIP)} + } + } + if raw, ok := d.GetOk("prefix_list_ids"); ok { list := raw.([]interface{}) perm.PrefixListIds = 
make([]*ec2.PrefixListId, len(list)) @@ -584,6 +634,12 @@ func setFromIPPerm(d *schema.ResourceData, sg *ec2.SecurityGroup, rule *ec2.IpPe d.Set("cidr_blocks", cb) + var ipv6 []string + for _, ip := range rule.Ipv6Ranges { + ipv6 = append(ipv6, *ip.CidrIpv6) + } + d.Set("ipv6_cidr_blocks", ipv6) + var pl []string for _, p := range rule.PrefixListIds { pl = append(pl, *p.PrefixListId) @@ -603,15 +659,16 @@ func setFromIPPerm(d *schema.ResourceData, sg *ec2.SecurityGroup, rule *ec2.IpPe return nil } -// Validates that either 'cidr_blocks', 'self', or 'source_security_group_id' is set +// Validates that either 'cidr_blocks', 'ipv6_cidr_blocks', 'self', or 'source_security_group_id' is set func validateAwsSecurityGroupRule(d *schema.ResourceData) error { _, blocksOk := d.GetOk("cidr_blocks") + _, ipv6Ok := d.GetOk("ipv6_cidr_blocks") _, sourceOk := d.GetOk("source_security_group_id") _, selfOk := d.GetOk("self") _, prefixOk := d.GetOk("prefix_list_ids") - if !blocksOk && !sourceOk && !selfOk && !prefixOk { + if !blocksOk && !sourceOk && !selfOk && !prefixOk && !ipv6Ok { return fmt.Errorf( - "One of ['cidr_blocks', 'self', 'source_security_group_id', 'prefix_list_ids'] must be set to create an AWS Security Group Rule") + "One of ['cidr_blocks', 'ipv6_cidr_blocks', 'self', 'source_security_group_id', 'prefix_list_ids'] must be set to create an AWS Security Group Rule") } return nil } diff --git a/builtin/providers/aws/resource_aws_security_group_rule_test.go b/builtin/providers/aws/resource_aws_security_group_rule_test.go index 424e2a40fb..2992763040 100644 --- a/builtin/providers/aws/resource_aws_security_group_rule_test.go +++ b/builtin/providers/aws/resource_aws_security_group_rule_test.go @@ -52,15 +52,15 @@ func TestIpPermissionIDHash(t *testing.T) { FromPort: aws.Int64(int64(80)), ToPort: aws.Int64(int64(8000)), UserIdGroupPairs: []*ec2.UserIdGroupPair{ - &ec2.UserIdGroupPair{ + { UserId: aws.String("987654321"), GroupId: aws.String("sg-12345678"), }, - &ec2.UserIdGroupPair{ + { UserId: aws.String("123456789"), GroupId: aws.String("sg-987654321"), }, - &ec2.UserIdGroupPair{ + { UserId: aws.String("123456789"), GroupId: aws.String("sg-12345678"), }, @@ -72,15 +72,15 @@ func TestIpPermissionIDHash(t *testing.T) { FromPort: aws.Int64(int64(80)), ToPort: aws.Int64(int64(8000)), UserIdGroupPairs: []*ec2.UserIdGroupPair{ - &ec2.UserIdGroupPair{ + { UserId: aws.String("987654321"), GroupName: aws.String("my-security-group"), }, - &ec2.UserIdGroupPair{ + { UserId: aws.String("123456789"), GroupName: aws.String("my-security-group"), }, - &ec2.UserIdGroupPair{ + { UserId: aws.String("123456789"), GroupName: aws.String("my-other-security-group"), }, @@ -183,6 +183,46 @@ func TestAccAWSSecurityGroupRule_Ingress_Protocol(t *testing.T) { }) } +func TestAccAWSSecurityGroupRule_Ingress_Ipv6(t *testing.T) { + var group ec2.SecurityGroup + + testRuleCount := func(*terraform.State) error { + if len(group.IpPermissions) != 1 { + return fmt.Errorf("Wrong Security Group rule count, expected %d, got %d", + 1, len(group.IpPermissions)) + } + + rule := group.IpPermissions[0] + if *rule.FromPort != int64(80) { + return fmt.Errorf("Wrong Security Group port setting, expected %d, got %d", + 80, int(*rule.FromPort)) + } + + ipv6Address := rule.Ipv6Ranges[0] + if *ipv6Address.CidrIpv6 != "::/0" { + return fmt.Errorf("Wrong Security Group IPv6 address, expected %s, got %s", + "::/0", *ipv6Address.CidrIpv6) + } + + return nil + } + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + 
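// CheckDestroy below guards against leaking the security group in the
// test account, and the custom testRuleCount check defined above asserts
// both the from_port and the "::/0" Ipv6Range that the EC2 API reports
// back for the new rule.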
Providers: testAccProviders, + CheckDestroy: testAccCheckAWSSecurityGroupRuleDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSSecurityGroupRuleIngress_ipv6Config, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSSecurityGroupRuleExists("aws_security_group.web", &group), + testRuleCount, + ), + }, + }, + }) +} + func TestAccAWSSecurityGroupRule_Ingress_Classic(t *testing.T) { var group ec2.SecurityGroup rInt := acctest.RandInt() @@ -314,6 +354,25 @@ func TestAccAWSSecurityGroupRule_ExpectInvalidTypeError(t *testing.T) { }) } +func TestAccAWSSecurityGroupRule_ExpectInvalidCIDR(t *testing.T) { + rInt := acctest.RandInt() + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSSecurityGroupRuleDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSSecurityGroupRuleInvalidIPv4CIDR(rInt), + ExpectError: regexp.MustCompile("invalid CIDR address: 1.2.3.4/33"), + }, + { + Config: testAccAWSSecurityGroupRuleInvalidIPv6CIDR(rInt), + ExpectError: regexp.MustCompile("invalid CIDR address: ::/244"), + }, + }, + }) +} + // testing partial match implementation func TestAccAWSSecurityGroupRule_PartialMatching_basic(t *testing.T) { var group ec2.SecurityGroup @@ -376,7 +435,7 @@ func TestAccAWSSecurityGroupRule_PartialMatching_Source(t *testing.T) { ToPort: aws.Int64(80), IpProtocol: aws.String("tcp"), UserIdGroupPairs: []*ec2.UserIdGroupPair{ - &ec2.UserIdGroupPair{GroupId: nat.GroupId}, + {GroupId: nat.GroupId}, }, } @@ -696,6 +755,34 @@ func testAccAWSSecurityGroupRuleIngressConfig(rInt int) string { }`, rInt) } +const testAccAWSSecurityGroupRuleIngress_ipv6Config = ` +resource "aws_vpc" "tftest" { + cidr_block = "10.0.0.0/16" + + tags { + Name = "tf-testing" + } +} + +resource "aws_security_group" "web" { + vpc_id = "${aws_vpc.tftest.id}" + + tags { + Name = "tf-acc-test" + } +} + +resource "aws_security_group_rule" "ingress_1" { + type = "ingress" + protocol = "6" + from_port = 80 + to_port = 8000 + ipv6_cidr_blocks = ["::/0"] + + security_group_id = "${aws_security_group.web.id}" +} +` + const testAccAWSSecurityGroupRuleIngress_protocolConfig = ` resource "aws_vpc" "tftest" { cidr_block = "10.0.0.0/16" @@ -1098,3 +1185,35 @@ func testAccAWSSecurityGroupRuleExpectInvalidType(rInt int) string { source_security_group_id = "${aws_security_group.web.id}" }`, rInt) } + +func testAccAWSSecurityGroupRuleInvalidIPv4CIDR(rInt int) string { + return fmt.Sprintf(` +resource "aws_security_group" "foo" { + name = "testing-failure-%d" +} + +resource "aws_security_group_rule" "ing" { + type = "ingress" + from_port = 0 + to_port = 0 + protocol = "-1" + cidr_blocks = ["1.2.3.4/33"] + security_group_id = "${aws_security_group.foo.id}" +}`, rInt) +} + +func testAccAWSSecurityGroupRuleInvalidIPv6CIDR(rInt int) string { + return fmt.Sprintf(` +resource "aws_security_group" "foo" { + name = "testing-failure-%d" +} + +resource "aws_security_group_rule" "ing" { + type = "egress" + from_port = 0 + to_port = 0 + protocol = "-1" + ipv6_cidr_blocks = ["::/244"] + security_group_id = "${aws_security_group.foo.id}" +}`, rInt) +} diff --git a/builtin/providers/aws/resource_aws_security_group_test.go b/builtin/providers/aws/resource_aws_security_group_test.go index 4c40537709..f1fe67ca9a 100644 --- a/builtin/providers/aws/resource_aws_security_group_test.go +++ b/builtin/providers/aws/resource_aws_security_group_test.go @@ -135,54 +135,54 @@ func TestProtocolForValue(t *testing.T) { func 
TestResourceAwsSecurityGroupIPPermGather(t *testing.T) { raw := []*ec2.IpPermission{ - &ec2.IpPermission{ + { IpProtocol: aws.String("tcp"), FromPort: aws.Int64(int64(1)), ToPort: aws.Int64(int64(-1)), - IpRanges: []*ec2.IpRange{&ec2.IpRange{CidrIp: aws.String("0.0.0.0/0")}}, + IpRanges: []*ec2.IpRange{{CidrIp: aws.String("0.0.0.0/0")}}, UserIdGroupPairs: []*ec2.UserIdGroupPair{ - &ec2.UserIdGroupPair{ + { GroupId: aws.String("sg-11111"), }, }, }, - &ec2.IpPermission{ + { IpProtocol: aws.String("tcp"), FromPort: aws.Int64(int64(80)), ToPort: aws.Int64(int64(80)), UserIdGroupPairs: []*ec2.UserIdGroupPair{ // VPC - &ec2.UserIdGroupPair{ + { GroupId: aws.String("sg-22222"), }, }, }, - &ec2.IpPermission{ + { IpProtocol: aws.String("tcp"), FromPort: aws.Int64(int64(443)), ToPort: aws.Int64(int64(443)), UserIdGroupPairs: []*ec2.UserIdGroupPair{ // Classic - &ec2.UserIdGroupPair{ + { UserId: aws.String("12345"), GroupId: aws.String("sg-33333"), GroupName: aws.String("ec2_classic"), }, - &ec2.UserIdGroupPair{ + { UserId: aws.String("amazon-elb"), GroupId: aws.String("sg-d2c979d3"), GroupName: aws.String("amazon-elb-sg"), }, }, }, - &ec2.IpPermission{ + { IpProtocol: aws.String("-1"), FromPort: aws.Int64(int64(0)), ToPort: aws.Int64(int64(0)), - PrefixListIds: []*ec2.PrefixListId{&ec2.PrefixListId{PrefixListId: aws.String("pl-12345678")}}, + PrefixListIds: []*ec2.PrefixListId{{PrefixListId: aws.String("pl-12345678")}}, UserIdGroupPairs: []*ec2.UserIdGroupPair{ // VPC - &ec2.UserIdGroupPair{ + { GroupId: aws.String("sg-22222"), }, }, @@ -190,14 +190,14 @@ func TestResourceAwsSecurityGroupIPPermGather(t *testing.T) { } local := []map[string]interface{}{ - map[string]interface{}{ + { "protocol": "tcp", "from_port": int64(1), "to_port": int64(-1), "cidr_blocks": []string{"0.0.0.0/0"}, "self": true, }, - map[string]interface{}{ + { "protocol": "tcp", "from_port": int64(80), "to_port": int64(80), @@ -205,7 +205,7 @@ func TestResourceAwsSecurityGroupIPPermGather(t *testing.T) { "sg-22222", }), }, - map[string]interface{}{ + { "protocol": "tcp", "from_port": int64(443), "to_port": int64(443), @@ -214,7 +214,7 @@ func TestResourceAwsSecurityGroupIPPermGather(t *testing.T) { "amazon-elb/amazon-elb-sg", }), }, - map[string]interface{}{ + { "protocol": "-1", "from_port": int64(0), "to_port": int64(0), @@ -263,7 +263,7 @@ func TestAccAWSSecurityGroup_basic(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckAWSSecurityGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSSecurityGroupConfig, Check: resource.ComposeTestCheckFunc( testAccCheckAWSSecurityGroupExists("aws_security_group.web", &group), @@ -288,6 +288,39 @@ func TestAccAWSSecurityGroup_basic(t *testing.T) { }) } +func TestAccAWSSecurityGroup_ipv6(t *testing.T) { + var group ec2.SecurityGroup + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + IDRefreshName: "aws_security_group.web", + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSSecurityGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSSecurityGroupConfigIpv6, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSSecurityGroupExists("aws_security_group.web", &group), + resource.TestCheckResourceAttr( + "aws_security_group.web", "name", "terraform_acceptance_test_example"), + resource.TestCheckResourceAttr( + "aws_security_group.web", "description", "Used in the terraform acceptance tests"), + resource.TestCheckResourceAttr( + "aws_security_group.web", "ingress.2293451516.protocol", 
"tcp"), + resource.TestCheckResourceAttr( + "aws_security_group.web", "ingress.2293451516.from_port", "80"), + resource.TestCheckResourceAttr( + "aws_security_group.web", "ingress.2293451516.to_port", "8000"), + resource.TestCheckResourceAttr( + "aws_security_group.web", "ingress.2293451516.ipv6_cidr_blocks.#", "1"), + resource.TestCheckResourceAttr( + "aws_security_group.web", "ingress.2293451516.ipv6_cidr_blocks.0", "::/0"), + ), + }, + }, + }) +} + func TestAccAWSSecurityGroup_tagsCreatedFirst(t *testing.T) { var group ec2.SecurityGroup @@ -296,7 +329,7 @@ func TestAccAWSSecurityGroup_tagsCreatedFirst(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckAWSSecurityGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSSecurityGroupConfigForTagsOrdering, ExpectError: regexp.MustCompile("InvalidParameterValue"), Check: resource.ComposeTestCheckFunc( @@ -318,7 +351,7 @@ func TestAccAWSSecurityGroup_namePrefix(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckAWSSecurityGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSSecurityGroupPrefixNameConfig, Check: resource.ComposeTestCheckFunc( testAccCheckAWSSecurityGroupExists("aws_security_group.baz", &group), @@ -353,7 +386,7 @@ func TestAccAWSSecurityGroup_self(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckAWSSecurityGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSSecurityGroupConfigSelf, Check: resource.ComposeTestCheckFunc( testAccCheckAWSSecurityGroupExists("aws_security_group.web", &group), @@ -393,7 +426,7 @@ func TestAccAWSSecurityGroup_vpc(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckAWSSecurityGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSSecurityGroupConfigVpc, Check: resource.ComposeTestCheckFunc( testAccCheckAWSSecurityGroupExists("aws_security_group.web", &group), @@ -446,7 +479,7 @@ func TestAccAWSSecurityGroup_vpcNegOneIngress(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckAWSSecurityGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSSecurityGroupConfigVpcNegOneIngress, Check: resource.ComposeTestCheckFunc( testAccCheckAWSSecurityGroupExists("aws_security_group.web", &group), @@ -488,7 +521,7 @@ func TestAccAWSSecurityGroup_vpcProtoNumIngress(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckAWSSecurityGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSSecurityGroupConfigVpcProtoNumIngress, Check: resource.ComposeTestCheckFunc( testAccCheckAWSSecurityGroupExists("aws_security_group.web", &group), @@ -521,7 +554,7 @@ func TestAccAWSSecurityGroup_MultiIngress(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckAWSSecurityGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSSecurityGroupConfigMultiIngress, Check: resource.ComposeTestCheckFunc( testAccCheckAWSSecurityGroupExists("aws_security_group.web", &group), @@ -540,13 +573,13 @@ func TestAccAWSSecurityGroup_Change(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckAWSSecurityGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSSecurityGroupConfig, Check: resource.ComposeTestCheckFunc( testAccCheckAWSSecurityGroupExists("aws_security_group.web", &group), ), }, - resource.TestStep{ + { Config: testAccAWSSecurityGroupConfigChange, Check: 
resource.ComposeTestCheckFunc( testAccCheckAWSSecurityGroupExists("aws_security_group.web", &group), @@ -566,7 +599,7 @@ func TestAccAWSSecurityGroup_generatedName(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckAWSSecurityGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSSecurityGroupConfig_generatedName, Check: resource.ComposeTestCheckFunc( testAccCheckAWSSecurityGroupExists("aws_security_group.web", &group), @@ -596,7 +629,7 @@ func TestAccAWSSecurityGroup_DefaultEgress_VPC(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckAWSSecurityGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSSecurityGroupConfigDefaultEgress, Check: resource.ComposeTestCheckFunc( testAccCheckAWSSecurityGroupExistsWithoutDefault("aws_security_group.worker"), @@ -616,7 +649,7 @@ func TestAccAWSSecurityGroup_DefaultEgress_Classic(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckAWSSecurityGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSSecurityGroupConfigClassic, Check: resource.ComposeTestCheckFunc( testAccCheckAWSSecurityGroupExists("aws_security_group.web", &group), @@ -634,7 +667,7 @@ func TestAccAWSSecurityGroup_drift(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckAWSSecurityGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSSecurityGroupConfig_drift(), Check: resource.ComposeTestCheckFunc( testAccCheckAWSSecurityGroupExists("aws_security_group.web", &group), @@ -664,7 +697,7 @@ func TestAccAWSSecurityGroup_drift_complex(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckAWSSecurityGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSSecurityGroupConfig_drift_complex(), Check: resource.ComposeTestCheckFunc( testAccCheckAWSSecurityGroupExists("aws_security_group.web", &group), @@ -686,6 +719,32 @@ func TestAccAWSSecurityGroup_drift_complex(t *testing.T) { }) } +func TestAccAWSSecurityGroup_invalidCIDRBlock(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSSecurityGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSSecurityGroupInvalidIngressCidr, + ExpectError: regexp.MustCompile("invalid CIDR address: 1.2.3.4/33"), + }, + { + Config: testAccAWSSecurityGroupInvalidEgressCidr, + ExpectError: regexp.MustCompile("invalid CIDR address: 1.2.3.4/33"), + }, + { + Config: testAccAWSSecurityGroupInvalidIPv6IngressCidr, + ExpectError: regexp.MustCompile("invalid CIDR address: ::/244"), + }, + { + Config: testAccAWSSecurityGroupInvalidIPv6EgressCidr, + ExpectError: regexp.MustCompile("invalid CIDR address: ::/244"), + }, + }, + }) +} + func testAccCheckAWSSecurityGroupDestroy(s *terraform.State) error { conn := testAccProvider.Meta().(*AWSClient).ec2conn @@ -773,7 +832,7 @@ func testAccCheckAWSSecurityGroupAttributes(group *ec2.SecurityGroup) resource.T FromPort: aws.Int64(80), ToPort: aws.Int64(8000), IpProtocol: aws.String("tcp"), - IpRanges: []*ec2.IpRange{&ec2.IpRange{CidrIp: aws.String("10.0.0.0/8")}}, + IpRanges: []*ec2.IpRange{{CidrIp: aws.String("10.0.0.0/8")}}, } if *group.GroupName != "terraform_acceptance_test_example" { @@ -804,7 +863,7 @@ func testAccCheckAWSSecurityGroupAttributesNegOneProtocol(group *ec2.SecurityGro return func(s *terraform.State) error { p := &ec2.IpPermission{ IpProtocol: 
aws.String("-1"), - IpRanges: []*ec2.IpRange{&ec2.IpRange{CidrIp: aws.String("10.0.0.0/8")}}, + IpRanges: []*ec2.IpRange{{CidrIp: aws.String("10.0.0.0/8")}}, } if *group.GroupName != "terraform_acceptance_test_example" { @@ -839,7 +898,7 @@ func TestAccAWSSecurityGroup_tags(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckAWSSecurityGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSSecurityGroupConfigTags, Check: resource.ComposeTestCheckFunc( testAccCheckAWSSecurityGroupExists("aws_security_group.foo", &group), @@ -847,7 +906,7 @@ func TestAccAWSSecurityGroup_tags(t *testing.T) { ), }, - resource.TestStep{ + { Config: testAccAWSSecurityGroupConfigTagsUpdate, Check: resource.ComposeTestCheckFunc( testAccCheckAWSSecurityGroupExists("aws_security_group.foo", &group), @@ -868,7 +927,7 @@ func TestAccAWSSecurityGroup_CIDRandGroups(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckAWSSecurityGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSSecurityGroupCombindCIDRandGroups, Check: resource.ComposeTestCheckFunc( testAccCheckAWSSecurityGroupExists("aws_security_group.mixed", &group), @@ -887,7 +946,7 @@ func TestAccAWSSecurityGroup_ingressWithCidrAndSGs(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckAWSSecurityGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSSecurityGroupConfig_ingressWithCidrAndSGs, Check: resource.ComposeTestCheckFunc( testAccCheckAWSSecurityGroupExists("aws_security_group.web", &group), @@ -913,7 +972,7 @@ func TestAccAWSSecurityGroup_ingressWithCidrAndSGs_classic(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckAWSSecurityGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSSecurityGroupConfig_ingressWithCidrAndSGs_classic, Check: resource.ComposeTestCheckFunc( testAccCheckAWSSecurityGroupExists("aws_security_group.web", &group), @@ -938,7 +997,7 @@ func TestAccAWSSecurityGroup_egressWithPrefixList(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckAWSSecurityGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSSecurityGroupConfigPrefixListEgress, Check: resource.ComposeTestCheckFunc( testAccCheckAWSSecurityGroupExists("aws_security_group.egress", &group), @@ -1016,21 +1075,21 @@ func testAccCheckAWSSecurityGroupPrefixListAttributes(group *ec2.SecurityGroup) func testAccCheckAWSSecurityGroupAttributesChanged(group *ec2.SecurityGroup) resource.TestCheckFunc { return func(s *terraform.State) error { p := []*ec2.IpPermission{ - &ec2.IpPermission{ + { FromPort: aws.Int64(80), ToPort: aws.Int64(9000), IpProtocol: aws.String("tcp"), - IpRanges: []*ec2.IpRange{&ec2.IpRange{CidrIp: aws.String("10.0.0.0/8")}}, + IpRanges: []*ec2.IpRange{{CidrIp: aws.String("10.0.0.0/8")}}, }, - &ec2.IpPermission{ + { FromPort: aws.Int64(80), ToPort: aws.Int64(8000), IpProtocol: aws.String("tcp"), IpRanges: []*ec2.IpRange{ - &ec2.IpRange{ + { CidrIp: aws.String("0.0.0.0/0"), }, - &ec2.IpRange{ + { CidrIp: aws.String("10.0.0.0/8"), }, }, @@ -1109,7 +1168,7 @@ func TestAccAWSSecurityGroup_failWithDiffMismatch(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckAWSSecurityGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSSecurityGroupConfig_failWithDiffMismatch, Check: resource.ComposeTestCheckFunc( testAccCheckAWSSecurityGroupExists("aws_security_group.nat", &group), @@ 
-1148,6 +1207,36 @@ resource "aws_security_group" "web" { } }` +const testAccAWSSecurityGroupConfigIpv6 = ` +resource "aws_vpc" "foo" { + cidr_block = "10.1.0.0/16" +} + +resource "aws_security_group" "web" { + name = "terraform_acceptance_test_example" + description = "Used in the terraform acceptance tests" + vpc_id = "${aws_vpc.foo.id}" + + ingress { + protocol = "6" + from_port = 80 + to_port = 8000 + ipv6_cidr_blocks = ["::/0"] + } + + egress { + protocol = "tcp" + from_port = 80 + to_port = 8000 + ipv6_cidr_blocks = ["::/0"] + } + + tags { + Name = "tf-acc-test" + } +} +` + const testAccAWSSecurityGroupConfig = ` resource "aws_vpc" "foo" { cidr_block = "10.1.0.0/16" @@ -1586,6 +1675,54 @@ resource "aws_security_group" "web" { }`, acctest.RandInt(), acctest.RandInt()) } +const testAccAWSSecurityGroupInvalidIngressCidr = ` +resource "aws_security_group" "foo" { + name = "testing-foo" + description = "foo-testing" + ingress { + from_port = 0 + to_port = 0 + protocol = "-1" + cidr_blocks = ["1.2.3.4/33"] + } +}` + +const testAccAWSSecurityGroupInvalidEgressCidr = ` +resource "aws_security_group" "foo" { + name = "testing-foo" + description = "foo-testing" + egress { + from_port = 0 + to_port = 0 + protocol = "-1" + cidr_blocks = ["1.2.3.4/33"] + } +}` + +const testAccAWSSecurityGroupInvalidIPv6IngressCidr = ` +resource "aws_security_group" "foo" { + name = "testing-foo" + description = "foo-testing" + ingress { + from_port = 0 + to_port = 0 + protocol = "-1" + ipv6_cidr_blocks = ["::/244"] + } +}` + +const testAccAWSSecurityGroupInvalidIPv6EgressCidr = ` +resource "aws_security_group" "foo" { + name = "testing-foo" + description = "foo-testing" + egress { + from_port = 0 + to_port = 0 + protocol = "-1" + ipv6_cidr_blocks = ["::/244"] + } +}` + const testAccAWSSecurityGroupCombindCIDRandGroups = ` resource "aws_vpc" "foo" { cidr_block = "10.1.0.0/16" diff --git a/builtin/providers/aws/resource_aws_spot_fleet_request.go b/builtin/providers/aws/resource_aws_spot_fleet_request.go index c7cccd5120..db2424e20b 100644 --- a/builtin/providers/aws/resource_aws_spot_fleet_request.go +++ b/builtin/providers/aws/resource_aws_spot_fleet_request.go @@ -26,73 +26,79 @@ func resourceAwsSpotFleetRequest() *schema.Resource { MigrateState: resourceAwsSpotFleetRequestMigrateState, Schema: map[string]*schema.Schema{ - "iam_fleet_role": &schema.Schema{ + "iam_fleet_role": { Type: schema.TypeString, Required: true, ForceNew: true, }, + "replace_unhealthy_instances": { + Type: schema.TypeBool, + Optional: true, + ForceNew: true, + Default: false, + }, // http://docs.aws.amazon.com/sdk-for-go/api/service/ec2.html#type-SpotFleetLaunchSpecification // http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_SpotFleetLaunchSpecification.html - "launch_specification": &schema.Schema{ + "launch_specification": { Type: schema.TypeSet, Required: true, ForceNew: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "vpc_security_group_ids": &schema.Schema{ + "vpc_security_group_ids": { Type: schema.TypeSet, Optional: true, Computed: true, Elem: &schema.Schema{Type: schema.TypeString}, Set: schema.HashString, }, - "associate_public_ip_address": &schema.Schema{ + "associate_public_ip_address": { Type: schema.TypeBool, Optional: true, Default: false, }, - "ebs_block_device": &schema.Schema{ + "ebs_block_device": { Type: schema.TypeSet, Optional: true, Computed: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "delete_on_termination": &schema.Schema{ + "delete_on_termination": { Type: 
schema.TypeBool, Optional: true, Default: true, ForceNew: true, }, - "device_name": &schema.Schema{ + "device_name": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "encrypted": &schema.Schema{ + "encrypted": { Type: schema.TypeBool, Optional: true, Computed: true, ForceNew: true, }, - "iops": &schema.Schema{ + "iops": { Type: schema.TypeInt, Optional: true, Computed: true, ForceNew: true, }, - "snapshot_id": &schema.Schema{ + "snapshot_id": { Type: schema.TypeString, Optional: true, Computed: true, ForceNew: true, }, - "volume_size": &schema.Schema{ + "volume_size": { Type: schema.TypeInt, Optional: true, Computed: true, ForceNew: true, }, - "volume_type": &schema.Schema{ + "volume_type": { Type: schema.TypeString, Optional: true, Computed: true, @@ -102,18 +108,18 @@ func resourceAwsSpotFleetRequest() *schema.Resource { }, Set: hashEbsBlockDevice, }, - "ephemeral_block_device": &schema.Schema{ + "ephemeral_block_device": { Type: schema.TypeSet, Optional: true, Computed: true, ForceNew: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "device_name": &schema.Schema{ + "device_name": { Type: schema.TypeString, Required: true, }, - "virtual_name": &schema.Schema{ + "virtual_name": { Type: schema.TypeString, Required: true, }, @@ -121,7 +127,7 @@ func resourceAwsSpotFleetRequest() *schema.Resource { }, Set: hashEphemeralBlockDevice, }, - "root_block_device": &schema.Schema{ + "root_block_device": { // TODO: This is a set because we don't support singleton // sub-resources today. We'll enforce that the set only ever has // length zero or one below. When TF gains support for @@ -134,25 +140,25 @@ func resourceAwsSpotFleetRequest() *schema.Resource { // Termination flag on the block device mapping entry for the root // device volume." 
- bit.ly/ec2bdmap Schema: map[string]*schema.Schema{ - "delete_on_termination": &schema.Schema{ + "delete_on_termination": { Type: schema.TypeBool, Optional: true, Default: true, ForceNew: true, }, - "iops": &schema.Schema{ + "iops": { Type: schema.TypeInt, Optional: true, Computed: true, ForceNew: true, }, - "volume_size": &schema.Schema{ + "volume_size": { Type: schema.TypeInt, Optional: true, Computed: true, ForceNew: true, }, - "volume_type": &schema.Schema{ + "volume_type": { Type: schema.TypeString, Optional: true, Computed: true, @@ -162,50 +168,50 @@ func resourceAwsSpotFleetRequest() *schema.Resource { }, Set: hashRootBlockDevice, }, - "ebs_optimized": &schema.Schema{ + "ebs_optimized": { Type: schema.TypeBool, Optional: true, Default: false, }, - "iam_instance_profile": &schema.Schema{ + "iam_instance_profile": { Type: schema.TypeString, ForceNew: true, Optional: true, }, - "ami": &schema.Schema{ + "ami": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "instance_type": &schema.Schema{ + "instance_type": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "key_name": &schema.Schema{ + "key_name": { Type: schema.TypeString, Optional: true, ForceNew: true, Computed: true, ValidateFunc: validateSpotFleetRequestKeyName, }, - "monitoring": &schema.Schema{ + "monitoring": { Type: schema.TypeBool, Optional: true, Default: false, }, - "placement_group": &schema.Schema{ + "placement_group": { Type: schema.TypeString, Optional: true, Computed: true, ForceNew: true, }, - "spot_price": &schema.Schema{ + "spot_price": { Type: schema.TypeString, Optional: true, ForceNew: true, }, - "user_data": &schema.Schema{ + "user_data": { Type: schema.TypeString, Optional: true, ForceNew: true, @@ -218,18 +224,18 @@ func resourceAwsSpotFleetRequest() *schema.Resource { } }, }, - "weighted_capacity": &schema.Schema{ + "weighted_capacity": { Type: schema.TypeString, Optional: true, ForceNew: true, }, - "subnet_id": &schema.Schema{ + "subnet_id": { Type: schema.TypeString, Optional: true, Computed: true, ForceNew: true, }, - "availability_zone": &schema.Schema{ + "availability_zone": { Type: schema.TypeString, Optional: true, Computed: true, @@ -240,48 +246,48 @@ func resourceAwsSpotFleetRequest() *schema.Resource { Set: hashLaunchSpecification, }, // Everything on a spot fleet is ForceNew except target_capacity - "target_capacity": &schema.Schema{ + "target_capacity": { Type: schema.TypeInt, Required: true, ForceNew: false, }, - "allocation_strategy": &schema.Schema{ + "allocation_strategy": { Type: schema.TypeString, Optional: true, Default: "lowestPrice", ForceNew: true, }, - "excess_capacity_termination_policy": &schema.Schema{ + "excess_capacity_termination_policy": { Type: schema.TypeString, Optional: true, Default: "Default", ForceNew: false, }, - "spot_price": &schema.Schema{ + "spot_price": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "terminate_instances_with_expiration": &schema.Schema{ + "terminate_instances_with_expiration": { Type: schema.TypeBool, Optional: true, ForceNew: true, }, - "valid_from": &schema.Schema{ + "valid_from": { Type: schema.TypeString, Optional: true, ForceNew: true, }, - "valid_until": &schema.Schema{ + "valid_until": { Type: schema.TypeString, Optional: true, ForceNew: true, }, - "spot_request_state": &schema.Schema{ + "spot_request_state": { Type: schema.TypeString, Computed: true, }, - "client_token": &schema.Schema{ + "client_token": { Type: schema.TypeString, Computed: true, }, @@ -525,6 +531,7 @@ func 
resourceAwsSpotFleetRequestCreate(d *schema.ResourceData, meta interface{}) TargetCapacity: aws.Int64(int64(d.Get("target_capacity").(int))), ClientToken: aws.String(resource.UniqueId()), TerminateInstancesWithExpiration: aws.Bool(d.Get("terminate_instances_with_expiration").(bool)), + ReplaceUnhealthyInstances: aws.Bool(d.Get("replace_unhealthy_instances").(bool)), } if v, ok := d.GetOk("excess_capacity_termination_policy"); ok { @@ -716,6 +723,7 @@ func resourceAwsSpotFleetRequestRead(d *schema.ResourceData, meta interface{}) e aws.TimeValue(config.ValidUntil).Format(awsAutoscalingScheduleTimeLayout)) } + d.Set("replace_unhealthy_instances", config.ReplaceUnhealthyInstances) d.Set("launch_specification", launchSpecsToSet(config.LaunchSpecifications, conn)) return nil diff --git a/builtin/providers/aws/resource_aws_spot_instance_request.go b/builtin/providers/aws/resource_aws_spot_instance_request.go index 6b37d52ccc..c0e87e5546 100644 --- a/builtin/providers/aws/resource_aws_spot_instance_request.go +++ b/builtin/providers/aws/resource_aws_spot_instance_request.go @@ -266,6 +266,29 @@ func readInstance(d *schema.ResourceData, meta interface{}) error { if err := readBlockDevices(d, instance, conn); err != nil { return err } + + var ipv6Addresses []string + if len(instance.NetworkInterfaces) > 0 { + for _, ni := range instance.NetworkInterfaces { + if *ni.Attachment.DeviceIndex == 0 { + d.Set("subnet_id", ni.SubnetId) + d.Set("network_interface_id", ni.NetworkInterfaceId) + d.Set("associate_public_ip_address", ni.Association != nil) + d.Set("ipv6_address_count", len(ni.Ipv6Addresses)) + + for _, address := range ni.Ipv6Addresses { + ipv6Addresses = append(ipv6Addresses, *address.Ipv6Address) + } + } + } + } else { + d.Set("subnet_id", instance.SubnetId) + d.Set("network_interface_id", "") + } + + if err := d.Set("ipv6_addresses", ipv6Addresses); err != nil { + log.Printf("[WARN] Error setting ipv6_addresses for AWS Spot Instance (%s): %s", d.Id(), err) + } } return nil diff --git a/builtin/providers/aws/resource_aws_ssm_document.go b/builtin/providers/aws/resource_aws_ssm_document.go index 5ed4516148..499c18f33a 100644 --- a/builtin/providers/aws/resource_aws_ssm_document.go +++ b/builtin/providers/aws/resource_aws_ssm_document.go @@ -205,9 +205,15 @@ func resourceAwsSsmDocumentRead(d *schema.ResourceData, meta interface{}) error if dp.DefaultValue != nil { param["default_value"] = *dp.DefaultValue } - param["description"] = *dp.Description - param["name"] = *dp.Name - param["type"] = *dp.Type + if dp.Description != nil { + param["description"] = *dp.Description + } + if dp.Name != nil { + param["name"] = *dp.Name + } + if dp.Type != nil { + param["type"] = *dp.Type + } params = append(params, param) } diff --git a/builtin/providers/aws/resource_aws_vpc.go b/builtin/providers/aws/resource_aws_vpc.go index e2e9f83a7c..e3bcee3a62 100644 --- a/builtin/providers/aws/resource_aws_vpc.go +++ b/builtin/providers/aws/resource_aws_vpc.go @@ -19,7 +19,7 @@ func resourceAwsVpc() *schema.Resource { Update: resourceAwsVpcUpdate, Delete: resourceAwsVpcDelete, Importer: &schema.ResourceImporter{ - State: schema.ImportStatePassthrough, + State: resourceAwsVpcInstanceImport, }, Schema: map[string]*schema.Schema{ @@ -174,12 +174,16 @@ func resourceAwsVpcRead(d *schema.ResourceData, meta interface{}) error { // Tags d.Set("tags", tagsToMap(vpc.Tags)) - if vpc.Ipv6CidrBlockAssociationSet != nil { - d.Set("assign_generated_ipv6_cidr_block", true) - d.Set("ipv6_association_id", 
vpc.Ipv6CidrBlockAssociationSet[0].AssociationId) - d.Set("ipv6_cidr_block", vpc.Ipv6CidrBlockAssociationSet[0].Ipv6CidrBlock) - } else { - d.Set("assign_generated_ipv6_cidr_block", false) + for _, a := range vpc.Ipv6CidrBlockAssociationSet { + if *a.Ipv6CidrBlockState.State == "associated" { + d.Set("assign_generated_ipv6_cidr_block", true) + d.Set("ipv6_association_id", a.AssociationId) + d.Set("ipv6_cidr_block", a.Ipv6CidrBlock) + } else { + d.Set("assign_generated_ipv6_cidr_block", false) + d.Set("ipv6_association_id", "") // we blank these out to remove old entries + d.Set("ipv6_cidr_block", "") + } } // Attributes @@ -481,3 +485,9 @@ func resourceAwsVpcSetDefaultRouteTable(conn *ec2.EC2, d *schema.ResourceData) e return nil } + +func resourceAwsVpcInstanceImport( + d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { + d.Set("assign_generated_ipv6_cidr_block", false) + return []*schema.ResourceData{d}, nil +} diff --git a/builtin/providers/aws/structure.go b/builtin/providers/aws/structure.go index 5fc791f8e0..302571c6c4 100644 --- a/builtin/providers/aws/structure.go +++ b/builtin/providers/aws/structure.go @@ -216,6 +216,12 @@ func expandIPPerms( perm.IpRanges = append(perm.IpRanges, &ec2.IpRange{CidrIp: aws.String(v.(string))}) } } + if raw, ok := m["ipv6_cidr_blocks"]; ok { + list := raw.([]interface{}) + for _, v := range list { + perm.Ipv6Ranges = append(perm.Ipv6Ranges, &ec2.Ipv6Range{CidrIpv6: aws.String(v.(string))}) + } + } if raw, ok := m["prefix_list_ids"]; ok { list := raw.([]interface{}) @@ -1843,3 +1849,63 @@ func flattenInspectorTags(cfTags []*cloudformation.Tag) map[string]string { } return tags } + +func flattenApiGatewayUsageApiStages(s []*apigateway.ApiStage) []map[string]interface{} { + stages := make([]map[string]interface{}, 0) + + for _, bd := range s { + if bd.ApiId != nil && bd.Stage != nil { + stage := make(map[string]interface{}) + stage["api_id"] = *bd.ApiId + stage["stage"] = *bd.Stage + + stages = append(stages, stage) + } + } + + if len(stages) > 0 { + return stages + } + + return nil +} + +func flattenApiGatewayUsagePlanThrottling(s *apigateway.ThrottleSettings) []map[string]interface{} { + settings := make(map[string]interface{}, 0) + + if s == nil { + return nil + } + + if s.BurstLimit != nil { + settings["burst_limit"] = *s.BurstLimit + } + + if s.RateLimit != nil { + settings["rate_limit"] = *s.RateLimit + } + + return []map[string]interface{}{settings} +} + +func flattenApiGatewayUsagePlanQuota(s *apigateway.QuotaSettings) []map[string]interface{} { + settings := make(map[string]interface{}, 0) + + if s == nil { + return nil + } + + if s.Limit != nil { + settings["limit"] = *s.Limit + } + + if s.Offset != nil { + settings["offset"] = *s.Offset + } + + if s.Period != nil { + settings["period"] = *s.Period + } + + return []map[string]interface{}{settings} +} diff --git a/builtin/providers/aws/tagsKMS.go b/builtin/providers/aws/tagsKMS.go new file mode 100644 index 0000000000..d4d2eca1c5 --- /dev/null +++ b/builtin/providers/aws/tagsKMS.go @@ -0,0 +1,115 @@ +package aws + +import ( + "log" + "regexp" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/kms" + "github.com/hashicorp/terraform/helper/schema" +) + +// setTags is a helper to set the tags for a resource. 
It expects the +// tags field to be named "tags" +func setTagsKMS(conn *kms.KMS, d *schema.ResourceData, keyId string) error { + if d.HasChange("tags") { + oraw, nraw := d.GetChange("tags") + o := oraw.(map[string]interface{}) + n := nraw.(map[string]interface{}) + create, remove := diffTagsKMS(tagsFromMapKMS(o), tagsFromMapKMS(n)) + + // Set tags + if len(remove) > 0 { + log.Printf("[DEBUG] Removing tags: %#v", remove) + k := make([]*string, len(remove), len(remove)) + for i, t := range remove { + k[i] = t.TagKey + } + + _, err := conn.UntagResource(&kms.UntagResourceInput{ + KeyId: aws.String(keyId), + TagKeys: k, + }) + if err != nil { + return err + } + } + if len(create) > 0 { + log.Printf("[DEBUG] Creating tags: %#v", create) + _, err := conn.TagResource(&kms.TagResourceInput{ + KeyId: aws.String(keyId), + Tags: create, + }) + if err != nil { + return err + } + } + } + + return nil +} + +// diffTags takes our tags locally and the ones remotely and returns +// the set of tags that must be created, and the set of tags that must +// be destroyed. +func diffTagsKMS(oldTags, newTags []*kms.Tag) ([]*kms.Tag, []*kms.Tag) { + // First, we're creating everything we have + create := make(map[string]interface{}) + for _, t := range newTags { + create[aws.StringValue(t.TagKey)] = aws.StringValue(t.TagValue) + } + + // Build the list of what to remove + var remove []*kms.Tag + for _, t := range oldTags { + old, ok := create[aws.StringValue(t.TagKey)] + if !ok || old != aws.StringValue(t.TagValue) { + // Delete it! + remove = append(remove, t) + } + } + + return tagsFromMapKMS(create), remove +} + +// tagsFromMap returns the tags for the given map of data. +func tagsFromMapKMS(m map[string]interface{}) []*kms.Tag { + result := make([]*kms.Tag, 0, len(m)) + for k, v := range m { + t := &kms.Tag{ + TagKey: aws.String(k), + TagValue: aws.String(v.(string)), + } + if !tagIgnoredKMS(t) { + result = append(result, t) + } + } + + return result +} + +// tagsToMap turns the list of tags into a map. 
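// Tags carrying the reserved "aws:" prefix are filtered out on the way
// through (see tagIgnoredKMS below), so AWS-managed tags never show up
// as drift in state.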
+func tagsToMapKMS(ts []*kms.Tag) map[string]string { + result := make(map[string]string) + for _, t := range ts { + if !tagIgnoredKMS(t) { + result[aws.StringValue(t.TagKey)] = aws.StringValue(t.TagValue) + } + } + + return result +} + +// compare a tag against a list of strings and checks if it should +// be ignored or not +func tagIgnoredKMS(t *kms.Tag) bool { + filter := []string{"^aws:*"} + for _, v := range filter { + log.Printf("[DEBUG] Matching %v with %v\n", v, *t.TagKey) + if r, _ := regexp.MatchString(v, *t.TagKey); r == true { + log.Printf("[DEBUG] Found AWS specific tag %s (val: %s), ignoring.\n", *t.TagKey, *t.TagValue) + return true + } + } + return false +} diff --git a/builtin/providers/aws/tagsKMS_test.go b/builtin/providers/aws/tagsKMS_test.go new file mode 100644 index 0000000000..a1d7a770e2 --- /dev/null +++ b/builtin/providers/aws/tagsKMS_test.go @@ -0,0 +1,105 @@ +package aws + +import ( + "fmt" + "reflect" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/kms" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +// go test -v -run="TestDiffKMSTags" +func TestDiffKMSTags(t *testing.T) { + cases := []struct { + Old, New map[string]interface{} + Create, Remove map[string]string + }{ + // Basic add/remove + { + Old: map[string]interface{}{ + "foo": "bar", + }, + New: map[string]interface{}{ + "bar": "baz", + }, + Create: map[string]string{ + "bar": "baz", + }, + Remove: map[string]string{ + "foo": "bar", + }, + }, + + // Modify + { + Old: map[string]interface{}{ + "foo": "bar", + }, + New: map[string]interface{}{ + "foo": "baz", + }, + Create: map[string]string{ + "foo": "baz", + }, + Remove: map[string]string{ + "foo": "bar", + }, + }, + } + + for i, tc := range cases { + c, r := diffTagsKMS(tagsFromMapKMS(tc.Old), tagsFromMapKMS(tc.New)) + cm := tagsToMapKMS(c) + rm := tagsToMapKMS(r) + if !reflect.DeepEqual(cm, tc.Create) { + t.Fatalf("%d: bad create: %#v", i, cm) + } + if !reflect.DeepEqual(rm, tc.Remove) { + t.Fatalf("%d: bad remove: %#v", i, rm) + } + } +} + +// go test -v -run="TestIgnoringTagsKMS" +func TestIgnoringTagsKMS(t *testing.T) { + var ignoredTags []*kms.Tag + ignoredTags = append(ignoredTags, &kms.Tag{ + TagKey: aws.String("aws:cloudformation:logical-id"), + TagValue: aws.String("foo"), + }) + ignoredTags = append(ignoredTags, &kms.Tag{ + TagKey: aws.String("aws:foo:bar"), + TagValue: aws.String("baz"), + }) + for _, tag := range ignoredTags { + if !tagIgnoredKMS(tag) { + t.Fatalf("Tag %v with value %v not ignored, but should be!", *tag.TagKey, *tag.TagValue) + } + } +} + +// testAccCheckTags can be used to check the tags on a resource. 
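// Passing an empty value asserts that the key is absent from the tag set.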
+func testAccCheckKMSTags( + ts []*kms.Tag, key string, value string) resource.TestCheckFunc { + return func(s *terraform.State) error { + m := tagsToMapKMS(ts) + v, ok := m[key] + if value != "" && !ok { + return fmt.Errorf("Missing tag: %s", key) + } else if value == "" && ok { + return fmt.Errorf("Extra tag: %s", key) + } + if value == "" { + return nil + } + + if v != value { + return fmt.Errorf("%s: bad value: %s", key, v) + } + + return nil + } +} diff --git a/builtin/providers/aws/validators.go b/builtin/providers/aws/validators.go index e6c6962a04..a8f9c66cfc 100644 --- a/builtin/providers/aws/validators.go +++ b/builtin/providers/aws/validators.go @@ -7,6 +7,7 @@ import ( "strings" "time" + "github.com/aws/aws-sdk-go/service/apigateway" "github.com/aws/aws-sdk-go/service/s3" "github.com/hashicorp/terraform/helper/schema" ) @@ -140,7 +141,24 @@ func validateElbName(v interface{}, k string) (ws []string, errors []error) { "%q cannot end with a hyphen: %q", k, value)) } return +} +func validateElbNamePrefix(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + if !regexp.MustCompile(`^[0-9A-Za-z-]+$`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "only alphanumeric characters and hyphens allowed in %q: %q", + k, value)) + } + if len(value) > 6 { + errors = append(errors, fmt.Errorf( + "%q cannot be longer than 6 characters: %q", k, value)) + } + if regexp.MustCompile(`^-`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "%q cannot begin with a hyphen: %q", k, value)) + } + return } func validateEcrRepositoryName(v interface{}, k string) (ws []string, errors []error) { @@ -926,7 +944,100 @@ func validateConfigExecutionFrequency(v interface{}, k string) (ws []string, err } } errors = append(errors, fmt.Errorf( - "%q contains an invalid freqency %q. Valid frequencies are %q.", + "%q contains an invalid frequency %q. 
Valid frequencies are %q.", k, frequency, validFrequencies)) return } + +func validateAccountAlias(v interface{}, k string) (ws []string, es []error) { + val := v.(string) + + if (len(val) < 3) || (len(val) > 63) { + es = append(es, fmt.Errorf("%q must contain from 3 to 63 alphanumeric characters or hyphens", k)) + } + if !regexp.MustCompile("^[a-z0-9][a-z0-9-]+$").MatchString(val) { + es = append(es, fmt.Errorf("%q must start with an alphanumeric character and only contain lowercase alphanumeric characters and hyphens", k)) + } + if strings.Contains(val, "--") { + es = append(es, fmt.Errorf("%q must not contain consecutive hyphens", k)) + } + if strings.HasSuffix(val, "-") { + es = append(es, fmt.Errorf("%q must not end in a hyphen", k)) + } + return +} + +func validateApiGatewayApiKeyValue(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + if len(value) < 30 { + errors = append(errors, fmt.Errorf( + "%q must be at least 30 characters long", k)) + } + if len(value) > 128 { + errors = append(errors, fmt.Errorf( + "%q cannot be longer than 128 characters", k)) + } + return +} + +func validateIamRolePolicyName(v interface{}, k string) (ws []string, errors []error) { + // https://github.com/boto/botocore/blob/2485f5c/botocore/data/iam/2010-05-08/service-2.json#L8291-L8296 + value := v.(string) + if len(value) > 128 { + errors = append(errors, fmt.Errorf( + "%q cannot be longer than 128 characters", k)) + } + if !regexp.MustCompile("^[\\w+=,.@-]+$").MatchString(value) { + errors = append(errors, fmt.Errorf("%q must match [\\w+=,.@-]", k)) + } + return +} + +func validateIamRolePolicyNamePrefix(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + if len(value) > 100 { + errors = append(errors, fmt.Errorf( + "%q cannot be longer than 100 characters", k)) + } + if !regexp.MustCompile("^[\\w+=,.@-]+$").MatchString(value) { + errors = append(errors, fmt.Errorf("%q must match [\\w+=,.@-]", k)) + } + return +} + +func validateApiGatewayUsagePlanQuotaSettingsPeriod(v interface{}, k string) (ws []string, errors []error) { + validPeriods := []string{ + apigateway.QuotaPeriodTypeDay, + apigateway.QuotaPeriodTypeWeek, + apigateway.QuotaPeriodTypeMonth, + } + period := v.(string) + for _, f := range validPeriods { + if period == f { + return + } + } + errors = append(errors, fmt.Errorf( + "%q contains an invalid period %q. 
Valid period are %q.", + k, period, validPeriods)) + return +} + +func validateApiGatewayUsagePlanQuotaSettings(v map[string]interface{}) (errors []error) { + period := v["period"].(string) + offset := v["offset"].(int) + + if period == apigateway.QuotaPeriodTypeDay && offset != 0 { + errors = append(errors, fmt.Errorf("Usage Plan quota offset must be zero in the DAY period")) + } + + if period == apigateway.QuotaPeriodTypeWeek && (offset < 0 || offset > 6) { + errors = append(errors, fmt.Errorf("Usage Plan quota offset must be between 0 and 6 inclusive in the WEEK period")) + } + + if period == apigateway.QuotaPeriodTypeMonth && (offset < 0 || offset > 27) { + errors = append(errors, fmt.Errorf("Usage Plan quota offset must be between 0 and 27 inclusive in the MONTH period")) + } + + return +} diff --git a/builtin/providers/aws/validators_test.go b/builtin/providers/aws/validators_test.go index 8920ae7649..0c37308fe3 100644 --- a/builtin/providers/aws/validators_test.go +++ b/builtin/providers/aws/validators_test.go @@ -1550,3 +1550,238 @@ func TestValidateDmsReplicationTaskId(t *testing.T) { } } } + +func TestValidateAccountAlias(t *testing.T) { + validAliases := []string{ + "tf-alias", + "0tf-alias1", + } + + for _, s := range validAliases { + _, errors := validateAccountAlias(s, "account_alias") + if len(errors) > 0 { + t.Fatalf("%q should be a valid account alias: %v", s, errors) + } + } + + invalidAliases := []string{ + "tf", + "-tf", + "tf-", + "TF-Alias", + "tf-alias-tf-alias-tf-alias-tf-alias-tf-alias-tf-alias-tf-alias-tf-alias", + } + + for _, s := range invalidAliases { + _, errors := validateAccountAlias(s, "account_alias") + if len(errors) == 0 { + t.Fatalf("%q should not be a valid account alias: %v", s, errors) + } + } +} + +func TestValidateIamRoleProfileName(t *testing.T) { + validNames := []string{ + "tf-test-role-profile-1", + } + + for _, s := range validNames { + _, errors := validateIamRolePolicyName(s, "name") + if len(errors) > 0 { + t.Fatalf("%q should be a valid IAM role policy name: %v", s, errors) + } + } + + invalidNames := []string{ + "invalid#name", + "this-is-a-very-long-role-policy-name-this-is-a-very-long-role-policy-name-this-is-a-very-long-role-policy-name-this-is-a-very-long", + } + + for _, s := range invalidNames { + _, errors := validateIamRolePolicyName(s, "name") + if len(errors) == 0 { + t.Fatalf("%q should not be a valid IAM role policy name: %v", s, errors) + } + } +} + +func TestValidateIamRoleProfileNamePrefix(t *testing.T) { + validNamePrefixes := []string{ + "tf-test-role-profile-", + } + + for _, s := range validNamePrefixes { + _, errors := validateIamRolePolicyNamePrefix(s, "name_prefix") + if len(errors) > 0 { + t.Fatalf("%q should be a valid IAM role policy name prefix: %v", s, errors) + } + } + + invalidNamePrefixes := []string{ + "invalid#name_prefix", + "this-is-a-very-long-role-policy-name-prefix-this-is-a-very-long-role-policy-name-prefix-this-is-a-very-", + } + + for _, s := range invalidNamePrefixes { + _, errors := validateIamRolePolicyNamePrefix(s, "name_prefix") + if len(errors) == 0 { + t.Fatalf("%q should not be a valid IAM role policy name prefix: %v", s, errors) + } + } +} + +func TestValidateApiGatewayUsagePlanQuotaSettingsPeriod(t *testing.T) { + validEntries := []string{ + "DAY", + "WEEK", + "MONTH", + } + + invalidEntries := []string{ + "fooBAR", + "foobar45Baz", + "foobar45Baz@!", + } + + for _, v := range validEntries { + _, errors := validateApiGatewayUsagePlanQuotaSettingsPeriod(v, "name") + if len(errors) != 0 { + 
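// Every entry in validEntries is one of the canonical apigateway
// QuotaPeriodType* constants, so an error here means the validator
// itself has regressed.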
t.Fatalf("%q should be a valid API Gateway Quota Settings Period: %v", v, errors) + } + } + + for _, v := range invalidEntries { + _, errors := validateApiGatewayUsagePlanQuotaSettingsPeriod(v, "name") + if len(errors) == 0 { + t.Fatalf("%q should not be a valid API Gateway Quota Settings Period", v) + } + } +} + +func TestValidateApiGatewayUsagePlanQuotaSettings(t *testing.T) { + cases := []struct { + Offset int + Period string + ErrCount int + }{ + { + Offset: 0, + Period: "DAY", + ErrCount: 0, + }, + { + Offset: -1, + Period: "DAY", + ErrCount: 1, + }, + { + Offset: 1, + Period: "DAY", + ErrCount: 1, + }, + { + Offset: 0, + Period: "WEEK", + ErrCount: 0, + }, + { + Offset: 6, + Period: "WEEK", + ErrCount: 0, + }, + { + Offset: -1, + Period: "WEEK", + ErrCount: 1, + }, + { + Offset: 7, + Period: "WEEK", + ErrCount: 1, + }, + { + Offset: 0, + Period: "MONTH", + ErrCount: 0, + }, + { + Offset: 27, + Period: "MONTH", + ErrCount: 0, + }, + { + Offset: -1, + Period: "MONTH", + ErrCount: 1, + }, + { + Offset: 28, + Period: "MONTH", + ErrCount: 1, + }, + } + + for _, tc := range cases { + m := make(map[string]interface{}) + m["offset"] = tc.Offset + m["period"] = tc.Period + + errors := validateApiGatewayUsagePlanQuotaSettings(m) + if len(errors) != tc.ErrCount { + t.Fatalf("expected %d errors for period %q with offset %d, got %d: %v", tc.ErrCount, tc.Period, tc.Offset, len(errors), errors) + } + } +} + +func TestValidateElbName(t *testing.T) { + validNames := []string{ + "tf-test-elb", + } + + for _, s := range validNames { + _, errors := validateElbName(s, "name") + if len(errors) > 0 { + t.Fatalf("%q should be a valid ELB name: %v", s, errors) + } + } + + invalidNames := []string{ + "tf.test.elb.1", + "tf-test-elb-tf-test-elb-tf-test-elb", + "-tf-test-elb", + "tf-test-elb-", + } + + for _, s := range invalidNames { + _, errors := validateElbName(s, "name") + if len(errors) == 0 { + t.Fatalf("%q should not be a valid ELB name: %v", s, errors) + } + } +} + +func TestValidateElbNamePrefix(t *testing.T) { + validNamePrefixes := []string{ + "test-", + } + + for _, s := range validNamePrefixes { + _, errors := validateElbNamePrefix(s, "name_prefix") + if len(errors) > 0 { + t.Fatalf("%q should be a valid ELB name prefix: %v", s, errors) + } + } + + invalidNamePrefixes := []string{ + "tf.test.elb.", + "tf-test", + "-test", + } + + for _, s := range invalidNamePrefixes { + _, errors := validateElbNamePrefix(s, "name_prefix") + if len(errors) == 0 { + t.Fatalf("%q should not be a valid ELB name prefix: %v", s, errors) + } + } +} diff --git a/builtin/providers/azurerm/provider.go b/builtin/providers/azurerm/provider.go index 39862aefe9..c57aae2c2f 100644 --- a/builtin/providers/azurerm/provider.go +++ b/builtin/providers/azurerm/provider.go @@ -196,6 +196,12 @@ func providerConfigure(p *schema.Provider) schema.ConfigureFunc { client.StopContext = p.StopContext() + + // Reset the stop context between test runs + p.MetaReset = func() error { + client.StopContext = p.StopContext() + return nil + } + // List all the available providers and their registration state to avoid unnecessary // requests. This also lets us check if the provider credentials are correct. 
providerList, err := client.providers.List(nil, "") diff --git a/builtin/providers/azurerm/resource_arm_virtual_machine.go b/builtin/providers/azurerm/resource_arm_virtual_machine.go index 6bbdf28bb3..5467967af2 100644 --- a/builtin/providers/azurerm/resource_arm_virtual_machine.go +++ b/builtin/providers/azurerm/resource_arm_virtual_machine.go @@ -474,6 +474,11 @@ func resourceArmVirtualMachine() *schema.Resource { Set: schema.HashString, }, + "primary_network_interface_id": { + Type: schema.TypeString, + Optional: true, + }, + "tags": tagsSchema(), }, } @@ -679,6 +684,15 @@ func resourceArmVirtualMachineRead(d *schema.ResourceData, meta interface{}) err if err := d.Set("network_interface_ids", flattenAzureRmVirtualMachineNetworkInterfaces(resp.VirtualMachineProperties.NetworkProfile)); err != nil { return fmt.Errorf("[DEBUG] Error setting Virtual Machine Storage Network Interfaces: %#v", err) } + + if resp.VirtualMachineProperties.NetworkProfile.NetworkInterfaces != nil { + for _, nic := range *resp.VirtualMachineProperties.NetworkProfile.NetworkInterfaces { + if nic.NetworkInterfaceReferenceProperties != nil && *nic.NetworkInterfaceReferenceProperties.Primary { + d.Set("primary_network_interface_id", nic.ID) + break + } + } + } } flattenAndSetTags(d, resp.Tags) @@ -1362,14 +1376,20 @@ func expandAzureRmVirtualMachineImageReference(d *schema.ResourceData) (*compute func expandAzureRmVirtualMachineNetworkProfile(d *schema.ResourceData) compute.NetworkProfile { nicIds := d.Get("network_interface_ids").(*schema.Set).List() + primaryNicId := d.Get("primary_network_interface_id").(string) network_interfaces := make([]compute.NetworkInterfaceReference, 0, len(nicIds)) network_profile := compute.NetworkProfile{} for _, nic := range nicIds { id := nic.(string) + primary := id == primaryNicId + network_interface := compute.NetworkInterfaceReference{ ID: &id, + NetworkInterfaceReferenceProperties: &compute.NetworkInterfaceReferenceProperties{ + Primary: &primary, + }, } network_interfaces = append(network_interfaces, network_interface) } diff --git a/builtin/providers/azurerm/resource_arm_virtual_machine_scale_set.go b/builtin/providers/azurerm/resource_arm_virtual_machine_scale_set.go index f063119bd0..afadf65522 100644 --- a/builtin/providers/azurerm/resource_arm_virtual_machine_scale_set.go +++ b/builtin/providers/azurerm/resource_arm_virtual_machine_scale_set.go @@ -341,6 +341,55 @@ func resourceArmVirtualMachineScaleSet() *schema.Resource { Set: resourceArmVirtualMachineScaleSetStorageProfileImageReferenceHash, }, + "extension": { + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Required: true, + }, + + "publisher": { + Type: schema.TypeString, + Required: true, + }, + + "type": { + Type: schema.TypeString, + Required: true, + }, + + "type_handler_version": { + Type: schema.TypeString, + Required: true, + }, + + "auto_upgrade_minor_version": { + Type: schema.TypeBool, + Optional: true, + }, + + "settings": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validateJsonString, + DiffSuppressFunc: suppressDiffVirtualMachineExtensionSettings, + }, + + "protected_settings": { + Type: schema.TypeString, + Optional: true, + Sensitive: true, + ValidateFunc: validateJsonString, + DiffSuppressFunc: suppressDiffVirtualMachineExtensionSettings, + }, + }, + }, + Set: resourceArmVirtualMachineScaleSetExtensionHash, + }, + "tags": tagsSchema(), }, } @@ -381,6 +430,11 @@ func 
resourceArmVirtualMachineScaleSetCreate(d *schema.ResourceData, meta interf return err } + extensions, err := expandAzureRMVirtualMachineScaleSetExtensions(d) + if err != nil { + return err + } + updatePolicy := d.Get("upgrade_policy_mode").(string) overprovision := d.Get("overprovision").(bool) scaleSetProps := compute.VirtualMachineScaleSetProperties{ @@ -388,9 +442,10 @@ func resourceArmVirtualMachineScaleSetCreate(d *schema.ResourceData, meta interf Mode: compute.UpgradeMode(updatePolicy), }, VirtualMachineProfile: &compute.VirtualMachineScaleSetVMProfile{ - NetworkProfile: expandAzureRmVirtualMachineScaleSetNetworkProfile(d), - StorageProfile: &storageProfile, - OsProfile: osProfile, + NetworkProfile: expandAzureRmVirtualMachineScaleSetNetworkProfile(d), + StorageProfile: &storageProfile, + OsProfile: osProfile, + ExtensionProfile: extensions, }, Overprovision: &overprovision, } @@ -488,6 +543,14 @@ func resourceArmVirtualMachineScaleSetRead(d *schema.ResourceData, meta interfac return fmt.Errorf("[DEBUG] Error setting Virtual Machine Scale Set Storage Profile OS Disk error: %#v", err) } + if properties.VirtualMachineProfile.ExtensionProfile != nil { + extension, err := flattenAzureRmVirtualMachineScaleSetExtensionProfile(properties.VirtualMachineProfile.ExtensionProfile) + if err != nil { + return fmt.Errorf("[DEBUG] Error setting Virtual Machine Scale Set Extension Profile error: %#v", err) + } + d.Set("extension", extension) + } + flattenAndSetTags(d, resp.Tags) return nil @@ -702,6 +765,39 @@ func flattenAzureRmVirtualMachineScaleSetSku(sku *compute.Sku) []interface{} { return []interface{}{result} } +func flattenAzureRmVirtualMachineScaleSetExtensionProfile(profile *compute.VirtualMachineScaleSetExtensionProfile) ([]map[string]interface{}, error) { + if profile.Extensions == nil { + return nil, nil + } + + result := make([]map[string]interface{}, 0, len(*profile.Extensions)) + for _, extension := range *profile.Extensions { + e := make(map[string]interface{}) + e["name"] = *extension.Name + properties := extension.VirtualMachineScaleSetExtensionProperties + if properties != nil { + e["publisher"] = *properties.Publisher + e["type"] = *properties.Type + e["type_handler_version"] = *properties.TypeHandlerVersion + if properties.AutoUpgradeMinorVersion != nil { + e["auto_upgrade_minor_version"] = *properties.AutoUpgradeMinorVersion + } + + if properties.Settings != nil { + settings, err := flattenArmVirtualMachineExtensionSettings(*properties.Settings) + if err != nil { + return nil, err + } + e["settings"] = settings + } + } + + result = append(result, e) + } + + return result, nil +} + func resourceArmVirtualMachineScaleSetStorageProfileImageReferenceHash(v interface{}) int { var buf bytes.Buffer m := v.(map[string]interface{}) @@ -776,6 +872,20 @@ func resourceArmVirtualMachineScaleSetOsProfileLWindowsConfigHash(v interface{}) return hashcode.String(buf.String()) } +func resourceArmVirtualMachineScaleSetExtensionHash(v interface{}) int { + var buf bytes.Buffer + m := v.(map[string]interface{}) + buf.WriteString(fmt.Sprintf("%s-", m["name"].(string))) + buf.WriteString(fmt.Sprintf("%s-", m["publisher"].(string))) + buf.WriteString(fmt.Sprintf("%s-", m["type"].(string))) + buf.WriteString(fmt.Sprintf("%s-", m["type_handler_version"].(string))) + if m["auto_upgrade_minor_version"] != nil { + buf.WriteString(fmt.Sprintf("%t-", m["auto_upgrade_minor_version"].(bool))) + } + + return hashcode.String(buf.String()) +} + func expandVirtualMachineScaleSetSku(d *schema.ResourceData) 
(*compute.Sku, error) { skuConfig := d.Get("sku").(*schema.Set).List() @@ -1091,3 +1201,51 @@ func expandAzureRmVirtualMachineScaleSetOsProfileSecrets(d *schema.ResourceData) return &secrets } + +func expandAzureRMVirtualMachineScaleSetExtensions(d *schema.ResourceData) (*compute.VirtualMachineScaleSetExtensionProfile, error) { + extensions := d.Get("extension").(*schema.Set).List() + resources := make([]compute.VirtualMachineScaleSetExtension, 0, len(extensions)) + for _, e := range extensions { + config := e.(map[string]interface{}) + name := config["name"].(string) + publisher := config["publisher"].(string) + t := config["type"].(string) + version := config["type_handler_version"].(string) + + extension := compute.VirtualMachineScaleSetExtension{ + Name: &name, + VirtualMachineScaleSetExtensionProperties: &compute.VirtualMachineScaleSetExtensionProperties{ + Publisher: &publisher, + Type: &t, + TypeHandlerVersion: &version, + }, + } + + if u := config["auto_upgrade_minor_version"]; u != nil { + upgrade := u.(bool) + extension.VirtualMachineScaleSetExtensionProperties.AutoUpgradeMinorVersion = &upgrade + } + + if s := config["settings"].(string); s != "" { + settings, err := expandArmVirtualMachineExtensionSettings(s) + if err != nil { + return nil, fmt.Errorf("unable to parse settings: %s", err) + } + extension.VirtualMachineScaleSetExtensionProperties.Settings = &settings + } + + if s := config["protected_settings"].(string); s != "" { + protectedSettings, err := expandArmVirtualMachineExtensionSettings(s) + if err != nil { + return nil, fmt.Errorf("unable to parse protected_settings: %s", err) + } + extension.VirtualMachineScaleSetExtensionProperties.ProtectedSettings = &protectedSettings + } + + resources = append(resources, extension) + } + + return &compute.VirtualMachineScaleSetExtensionProfile{ + Extensions: &resources, + }, nil +} diff --git a/builtin/providers/azurerm/resource_arm_virtual_machine_scale_set_test.go b/builtin/providers/azurerm/resource_arm_virtual_machine_scale_set_test.go index 8b5c874d10..e9f3d1ef9c 100644 --- a/builtin/providers/azurerm/resource_arm_virtual_machine_scale_set_test.go +++ b/builtin/providers/azurerm/resource_arm_virtual_machine_scale_set_test.go @@ -86,6 +86,44 @@ func TestAccAzureRMVirtualMachineScaleSet_overprovision(t *testing.T) { }) } +func TestAccAzureRMVirtualMachineScaleSet_extension(t *testing.T) { + ri := acctest.RandInt() + config := fmt.Sprintf(testAccAzureRMVirtualMachineScaleSetExtensionTemplate, ri, ri, ri, ri, ri, ri) + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testCheckAzureRMVirtualMachineScaleSetDestroy, + Steps: []resource.TestStep{ + { + Config: config, + Check: resource.ComposeTestCheckFunc( + testCheckAzureRMVirtualMachineScaleSetExists("azurerm_virtual_machine_scale_set.test"), + testCheckAzureRMVirtualMachineScaleSetExtension("azurerm_virtual_machine_scale_set.test"), + ), + }, + }, + }) +} + +func TestAccAzureRMVirtualMachineScaleSet_multipleExtensions(t *testing.T) { + ri := acctest.RandInt() + config := fmt.Sprintf(testAccAzureRMVirtualMachineScaleSetMultipleExtensionsTemplate, ri, ri, ri, ri, ri, ri) + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testCheckAzureRMVirtualMachineScaleSetDestroy, + Steps: []resource.TestStep{ + { + Config: config, + Check: resource.ComposeTestCheckFunc( + 
testCheckAzureRMVirtualMachineScaleSetExists("azurerm_virtual_machine_scale_set.test"), + testCheckAzureRMVirtualMachineScaleSetExtension("azurerm_virtual_machine_scale_set.test"), + ), + }, + }, + }) +} + func testCheckAzureRMVirtualMachineScaleSetExists(name string) resource.TestCheckFunc { return func(s *terraform.State) error { // Ensure we have enough information in state to look up in API @@ -240,6 +278,39 @@ func testCheckAzureRMVirtualMachineScaleSetOverprovision(name string) resource.T } } +func testCheckAzureRMVirtualMachineScaleSetExtension(name string) resource.TestCheckFunc { + return func(s *terraform.State) error { + // Ensure we have enough information in state to look up in API + rs, ok := s.RootModule().Resources[name] + if !ok { + return fmt.Errorf("Not found: %s", name) + } + + name := rs.Primary.Attributes["name"] + resourceGroup, hasResourceGroup := rs.Primary.Attributes["resource_group_name"] + if !hasResourceGroup { + return fmt.Errorf("Bad: no resource group found in state for virtual machine scale set %s", name) + } + + conn := testAccProvider.Meta().(*ArmClient).vmScaleSetClient + resp, err := conn.Get(resourceGroup, name) + if err != nil { + return fmt.Errorf("Bad: Get on vmScaleSetClient: %s", err) + } + + if resp.StatusCode == http.StatusNotFound { + return fmt.Errorf("Bad: VirtualMachineScaleSet %q (resource group: %q) does not exist", name, resourceGroup) + } + + n := resp.VirtualMachineProfile.ExtensionProfile.Extensions + if n == nil || len(*n) == 0 { + return fmt.Errorf("Bad: Could not get extensions for scale set %v", name) + } + + return nil + } +} + var testAccAzureRMVirtualMachineScaleSet_basicLinux = ` resource "azurerm_resource_group" "test" { name = "acctestRG-%d" @@ -507,3 +578,207 @@ resource "azurerm_virtual_machine_scale_set" "test" { } } ` + +var testAccAzureRMVirtualMachineScaleSetExtensionTemplate = ` +resource "azurerm_resource_group" "test" { + name = "acctestrg-%d" + location = "southcentralus" +} + +resource "azurerm_virtual_network" "test" { + name = "acctvn-%d" + address_space = ["10.0.0.0/16"] + location = "southcentralus" + resource_group_name = "${azurerm_resource_group.test.name}" +} + +resource "azurerm_subnet" "test" { + name = "acctsub-%d" + resource_group_name = "${azurerm_resource_group.test.name}" + virtual_network_name = "${azurerm_virtual_network.test.name}" + address_prefix = "10.0.2.0/24" +} + +resource "azurerm_storage_account" "test" { + name = "accsa%d" + resource_group_name = "${azurerm_resource_group.test.name}" + location = "southcentralus" + account_type = "Standard_LRS" +} + +resource "azurerm_storage_container" "test" { + name = "vhds" + resource_group_name = "${azurerm_resource_group.test.name}" + storage_account_name = "${azurerm_storage_account.test.name}" + container_access_type = "private" +} + +resource "azurerm_virtual_machine_scale_set" "test" { + name = "acctvmss-%d" + location = "southcentralus" + resource_group_name = "${azurerm_resource_group.test.name}" + upgrade_policy_mode = "Manual" + overprovision = false + + sku { + name = "Standard_A0" + tier = "Standard" + capacity = 1 + } + + os_profile { + computer_name_prefix = "testvm-%d" + admin_username = "myadmin" + admin_password = "Passwword1234" + } + + network_profile { + name = "TestNetworkProfile" + primary = true + ip_configuration { + name = "TestIPConfiguration" + subnet_id = "${azurerm_subnet.test.id}" + } + } + + storage_profile_os_disk { + name = "os-disk" + caching = "ReadWrite" + create_option = "FromImage" + vhd_containers = [ 
"${azurerm_storage_account.test.primary_blob_endpoint}${azurerm_storage_container.test.name}" ] + } + + storage_profile_image_reference { + publisher = "Canonical" + offer = "UbuntuServer" + sku = "14.04.2-LTS" + version = "latest" + } + + extension { + name = "CustomScript" + publisher = "Microsoft.Azure.Extensions" + type = "CustomScript" + type_handler_version = "2.0" + auto_upgrade_minor_version = true + settings = <&1 make -C ../../.. testacc TEST=./builtin/providers/circonus | tee test.log diff --git a/builtin/providers/circonus/check.go b/builtin/providers/circonus/check.go new file mode 100644 index 0000000000..6200e94bfa --- /dev/null +++ b/builtin/providers/circonus/check.go @@ -0,0 +1,123 @@ +package circonus + +import ( + "fmt" + "log" + + "github.com/circonus-labs/circonus-gometrics/api" + "github.com/circonus-labs/circonus-gometrics/api/config" + "github.com/hashicorp/errwrap" +) + +// The circonusCheck type is the backing store of the `circonus_check` resource. + +type circonusCheck struct { + api.CheckBundle +} + +type circonusCheckType string + +const ( + // CheckBundle.Status can be one of these values + checkStatusActive = "active" + checkStatusDisabled = "disabled" +) + +const ( + apiCheckTypeCAQL circonusCheckType = "caql" + apiCheckTypeICMPPing circonusCheckType = "ping_icmp" + apiCheckTypeHTTP circonusCheckType = "http" + apiCheckTypeJSON circonusCheckType = "json" + apiCheckTypeMySQL circonusCheckType = "mysql" + apiCheckTypeStatsd circonusCheckType = "statsd" + apiCheckTypePostgreSQL circonusCheckType = "postgres" + apiCheckTypeTCP circonusCheckType = "tcp" +) + +func newCheck() circonusCheck { + return circonusCheck{ + CheckBundle: *api.NewCheckBundle(), + } +} + +func loadCheck(ctxt *providerContext, cid api.CIDType) (circonusCheck, error) { + var c circonusCheck + cb, err := ctxt.client.FetchCheckBundle(cid) + if err != nil { + return circonusCheck{}, err + } + c.CheckBundle = *cb + + return c, nil +} + +func checkAPIStatusToBool(s string) bool { + var active bool + switch s { + case checkStatusActive: + active = true + case checkStatusDisabled: + active = false + default: + log.Printf("[ERROR] PROVIDER BUG: check status %q unsupported", s) + } + + return active +} + +func checkActiveToAPIStatus(active bool) string { + if active { + return checkStatusActive + } + + return checkStatusDisabled +} + +func (c *circonusCheck) Create(ctxt *providerContext) error { + cb, err := ctxt.client.CreateCheckBundle(&c.CheckBundle) + if err != nil { + return err + } + + c.CID = cb.CID + + return nil +} + +func (c *circonusCheck) Update(ctxt *providerContext) error { + _, err := ctxt.client.UpdateCheckBundle(&c.CheckBundle) + if err != nil { + return errwrap.Wrapf(fmt.Sprintf("Unable to update check bundle %s: {{err}}", c.CID), err) + } + + return nil +} + +func (c *circonusCheck) Fixup() error { + switch apiCheckType(c.Type) { + case apiCheckTypeCloudWatchAttr: + switch c.Period { + case 60: + c.Config[config.Granularity] = "1" + case 300: + c.Config[config.Granularity] = "5" + } + } + + return nil +} + +func (c *circonusCheck) Validate() error { + if c.Timeout > float32(c.Period) { + return fmt.Errorf("Timeout (%f) can not exceed period (%d)", c.Timeout, c.Period) + } + + switch apiCheckType(c.Type) { + case apiCheckTypeCloudWatchAttr: + if !(c.Period == 60 || c.Period == 300) { + return fmt.Errorf("Period must be either 1m or 5m for a %s check", apiCheckTypeCloudWatchAttr) + } + } + + return nil +} diff --git a/builtin/providers/circonus/consts.go 
b/builtin/providers/circonus/consts.go new file mode 100644 index 0000000000..6b505482ac --- /dev/null +++ b/builtin/providers/circonus/consts.go @@ -0,0 +1,127 @@ +package circonus + +const ( + // Provider-level constants + + // defaultAutoTag determines the default behavior of circonus.auto_tag. + defaultAutoTag = false + + // When auto_tag is enabled, the default tag category and value will be set to + // the following value unless overridden. + defaultCirconusTag circonusTag = "author:terraform" + + // When hashing a Set, default to a buffer this size + defaultHashBufSize = 512 + + providerAPIURLAttr = "api_url" + providerAutoTagAttr = "auto_tag" + providerKeyAttr = "key" + + defaultCheckJSONMethod = "GET" + defaultCheckJSONPort = "443" + defaultCheckJSONVersion = "1.1" + + defaultCheckICMPPingAvailability = 100.0 + defaultCheckICMPPingCount = 5 + defaultCheckICMPPingInterval = "2s" + + defaultCheckCAQLTarget = "q._caql" + + defaultCheckHTTPCodeRegexp = `^200$` + defaultCheckHTTPMethod = "GET" + defaultCheckHTTPVersion = "1.1" + + defaultCheckHTTPTrapAsync = false + + defaultCheckCloudWatchVersion = "2010-08-01" + + defaultCollectorDetailAttrs = 10 + + defaultGraphDatapoints = 8 + defaultGraphLineStyle = "stepped" + defaultGraphStyle = "line" + defaultGraphFunction = "gauge" + + metricUnit = "" + metricUnitRegexp = `^.*$` + + defaultRuleSetLast = "300s" + defaultRuleSetMetricType = "numeric" + defaultRuleSetRuleLen = 4 + defaultAlertSeverity = 1 + defaultRuleSetWindowFunc = "average" + ruleSetAbsentMin = "70s" +) + +// Consts and their close relative, Go pseudo-consts. + +// validMetricTypes: See `type`: https://login.circonus.com/resources/api/calls/check_bundle +var validMetricTypes = validStringValues{ + `caql`, + `composite`, + `histogram`, + `numeric`, + `text`, +} + +// validAggregateFuncs: See `aggregate_function`: https://login.circonus.com/resources/api/calls/graph +var validAggregateFuncs = validStringValues{ + `none`, + `min`, + `max`, + `sum`, + `mean`, + `geometric_mean`, +} + +// validGraphLineStyles: See `line_style`: https://login.circonus.com/resources/api/calls/graph +var validGraphLineStyles = validStringValues{ + `stepped`, + `interpolated`, +} + +// validGraphStyles: See `style`: https://login.circonus.com/resources/api/calls/graph +var validGraphStyles = validStringValues{ + `area`, + `line`, +} + +// validAxisAttrs: See `line_style`: https://login.circonus.com/resources/api/calls/graph +var validAxisAttrs = validStringValues{ + `left`, + `right`, +} + +// validGraphFunctionValues: See `derive`: https://login.circonus.com/resources/api/calls/graph +var validGraphFunctionValues = validStringValues{ + `counter`, + `derive`, + `gauge`, +} + +// validRuleSetWindowFuncs: See `derive` or `windowing_func`: https://login.circonus.com/resources/api/calls/rule_set +var validRuleSetWindowFuncs = validStringValues{ + `average`, + `stddev`, + `derive`, + `derive_stddev`, + `counter`, + `counter_stddev`, + `derive_2`, + `derive_2_stddev`, + `counter_2`, + `counter_2_stddev`, +} + +const ( + // Supported circonus_trigger.metric_types. 
See `metric_type`: + // https://login.circonus.com/resources/api/calls/rule_set + ruleSetMetricTypeNumeric = "numeric" + ruleSetMetricTypeText = "text" +) + +// validRuleSetMetricTypes: See `metric_type`: https://login.circonus.com/resources/api/calls/rule_set +var validRuleSetMetricTypes = validStringValues{ + ruleSetMetricTypeNumeric, + ruleSetMetricTypeText, +} diff --git a/builtin/providers/circonus/data_source_circonus_account.go b/builtin/providers/circonus/data_source_circonus_account.go new file mode 100644 index 0000000000..c7a9121bc2 --- /dev/null +++ b/builtin/providers/circonus/data_source_circonus_account.go @@ -0,0 +1,271 @@ +package circonus + +import ( + "fmt" + + "github.com/circonus-labs/circonus-gometrics/api" + "github.com/circonus-labs/circonus-gometrics/api/config" + "github.com/hashicorp/errwrap" + "github.com/hashicorp/terraform/helper/schema" +) + +const ( + accountAddress1Attr = "address1" + accountAddress2Attr = "address2" + accountCCEmailAttr = "cc_email" + accountCityAttr = "city" + accountContactGroupsAttr = "contact_groups" + accountCountryAttr = "country" + accountCurrentAttr = "current" + accountDescriptionAttr = "description" + accountEmailAttr = "email" + accountIDAttr = "id" + accountInvitesAttr = "invites" + accountLimitAttr = "limit" + accountNameAttr = "name" + accountOwnerAttr = "owner" + accountRoleAttr = "role" + accountStateProvAttr = "state" + accountTimezoneAttr = "timezone" + accountTypeAttr = "type" + accountUIBaseURLAttr = "ui_base_url" + accountUsageAttr = "usage" + accountUsedAttr = "used" + accountUserIDAttr = "id" + accountUsersAttr = "users" +) + +var accountDescription = map[schemaAttr]string{ + accountContactGroupsAttr: "Contact Groups in this account", + accountInvitesAttr: "Outstanding invites attached to the account", + accountUsageAttr: "Account's usage limits", + accountUsersAttr: "Users attached to this account", +} + +func dataSourceCirconusAccount() *schema.Resource { + return &schema.Resource{ + Read: dataSourceCirconusAccountRead, + + Schema: map[string]*schema.Schema{ + accountAddress1Attr: &schema.Schema{ + Type: schema.TypeString, + Computed: true, + Description: accountDescription[accountAddress1Attr], + }, + accountAddress2Attr: &schema.Schema{ + Type: schema.TypeString, + Computed: true, + Description: accountDescription[accountAddress2Attr], + }, + accountCCEmailAttr: &schema.Schema{ + Type: schema.TypeString, + Computed: true, + Description: accountDescription[accountCCEmailAttr], + }, + accountIDAttr: &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + ConflictsWith: []string{accountCurrentAttr}, + ValidateFunc: validateFuncs( + validateRegexp(accountIDAttr, config.AccountCIDRegex), + ), + Description: accountDescription[accountIDAttr], + }, + accountCityAttr: &schema.Schema{ + Type: schema.TypeString, + Computed: true, + Description: accountDescription[accountCityAttr], + }, + accountContactGroupsAttr: &schema.Schema{ + Type: schema.TypeList, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Description: accountDescription[accountContactGroupsAttr], + }, + accountCountryAttr: &schema.Schema{ + Type: schema.TypeString, + Computed: true, + Description: accountDescription[accountCountryAttr], + }, + accountCurrentAttr: &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Computed: true, + ConflictsWith: []string{accountIDAttr}, + Description: accountDescription[accountCurrentAttr], + }, + accountDescriptionAttr: &schema.Schema{ + Type: schema.TypeString, + 
Computed: true, + Description: accountDescription[accountDescriptionAttr], + }, + accountInvitesAttr: &schema.Schema{ + Type: schema.TypeList, + Computed: true, + Description: accountDescription[accountInvitesAttr], + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + accountEmailAttr: &schema.Schema{ + Type: schema.TypeString, + Computed: true, + Description: accountDescription[accountEmailAttr], + }, + accountRoleAttr: &schema.Schema{ + Type: schema.TypeString, + Computed: true, + Description: accountDescription[accountRoleAttr], + }, + }, + }, + }, + accountNameAttr: &schema.Schema{ + Type: schema.TypeString, + Computed: true, + Description: accountDescription[accountNameAttr], + }, + accountOwnerAttr: &schema.Schema{ + Type: schema.TypeString, + Computed: true, + Description: accountDescription[accountOwnerAttr], + }, + accountStateProvAttr: &schema.Schema{ + Type: schema.TypeString, + Computed: true, + Description: accountDescription[accountStateProvAttr], + }, + accountTimezoneAttr: &schema.Schema{ + Type: schema.TypeString, + Computed: true, + Description: accountDescription[accountTimezoneAttr], + }, + accountUIBaseURLAttr: &schema.Schema{ + Type: schema.TypeString, + Computed: true, + Description: accountDescription[accountUIBaseURLAttr], + }, + accountUsageAttr: &schema.Schema{ + Type: schema.TypeList, + Computed: true, + Description: accountDescription[accountUsageAttr], + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + accountLimitAttr: &schema.Schema{ + Type: schema.TypeInt, + Computed: true, + Description: accountDescription[accountLimitAttr], + }, + accountTypeAttr: &schema.Schema{ + Type: schema.TypeString, + Computed: true, + Description: accountDescription[accountTypeAttr], + }, + accountUsedAttr: &schema.Schema{ + Type: schema.TypeInt, + Computed: true, + Description: accountDescription[accountUsedAttr], + }, + }, + }, + }, + accountUsersAttr: &schema.Schema{ + Type: schema.TypeList, + Computed: true, + Description: accountDescription[accountUsersAttr], + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + accountUserIDAttr: &schema.Schema{ + Type: schema.TypeString, + Computed: true, + Description: accountDescription[accountUserIDAttr], + }, + accountRoleAttr: &schema.Schema{ + Type: schema.TypeString, + Computed: true, + Description: accountDescription[accountRoleAttr], + }, + }, + }, + }, + }, + } +} + +func dataSourceCirconusAccountRead(d *schema.ResourceData, meta interface{}) error { + c := meta.(*providerContext) + + var cid string + + var a *api.Account + var err error + if v, ok := d.GetOk(accountIDAttr); ok { + cid = v.(string) + } + + if v, ok := d.GetOk(accountCurrentAttr); ok { + if v.(bool) { + cid = "" + } + } + + a, err = c.client.FetchAccount(api.CIDType(&cid)) + if err != nil { + return err + } + + invitesList := make([]interface{}, 0, len(a.Invites)) + for i := range a.Invites { + invitesList = append(invitesList, map[string]interface{}{ + accountEmailAttr: a.Invites[i].Email, + accountRoleAttr: a.Invites[i].Role, + }) + } + + usageList := make([]interface{}, 0, len(a.Usage)) + for i := range a.Usage { + usageList = append(usageList, map[string]interface{}{ + accountLimitAttr: a.Usage[i].Limit, + accountTypeAttr: a.Usage[i].Type, + accountUsedAttr: a.Usage[i].Used, + }) + } + + usersList := make([]interface{}, 0, len(a.Users)) + for i := range a.Users { + usersList = append(usersList, map[string]interface{}{ + accountUserIDAttr: a.Users[i].UserCID, + accountRoleAttr: a.Users[i].Role, + }) + } + + d.SetId(a.CID) + + 
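// The scalar attributes here use bare d.Set calls; the structured list + // attributes below (invites, usage, users) check the returned error, since a + // mismatch between the flattened maps above and the schema would surface there. + 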
d.Set(accountAddress1Attr, a.Address1) + d.Set(accountAddress2Attr, a.Address2) + d.Set(accountCCEmailAttr, a.CCEmail) + d.Set(accountIDAttr, a.CID) + d.Set(accountCityAttr, a.City) + d.Set(accountContactGroupsAttr, a.ContactGroups) + d.Set(accountCountryAttr, a.Country) + d.Set(accountDescriptionAttr, a.Description) + + if err := d.Set(accountInvitesAttr, invitesList); err != nil { + return errwrap.Wrapf(fmt.Sprintf("Unable to store account %q attribute: {{err}}", accountInvitesAttr), err) + } + + d.Set(accountNameAttr, a.Name) + d.Set(accountOwnerAttr, a.OwnerCID) + d.Set(accountStateProvAttr, a.StateProv) + d.Set(accountTimezoneAttr, a.Timezone) + d.Set(accountUIBaseURLAttr, a.UIBaseURL) + + if err := d.Set(accountUsageAttr, usageList); err != nil { + return errwrap.Wrapf(fmt.Sprintf("Unable to store account %q attribute: {{err}}", accountUsageAttr), err) + } + + if err := d.Set(accountUsersAttr, usersList); err != nil { + return errwrap.Wrapf(fmt.Sprintf("Unable to store account %q attribute: {{err}}", accountUsersAttr), err) + } + + return nil +} diff --git a/builtin/providers/circonus/data_source_circonus_account_test.go b/builtin/providers/circonus/data_source_circonus_account_test.go new file mode 100644 index 0000000000..78b08c52d0 --- /dev/null +++ b/builtin/providers/circonus/data_source_circonus_account_test.go @@ -0,0 +1,66 @@ +package circonus + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccDataSourceCirconusAccount(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccDataSourceCirconusAccountCurrentConfig, + Check: resource.ComposeTestCheckFunc( + testAccDataSourceCirconusAccountCheck("data.circonus_account.by_current", "/account/3081"), + ), + }, + }, + }) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccDataSourceCirconusAccountIDConfig, + Check: resource.ComposeTestCheckFunc( + testAccDataSourceCirconusAccountCheck("data.circonus_account.by_id", "/account/3081"), + ), + }, + }, + }) +} + +func testAccDataSourceCirconusAccountCheck(name, cid string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[name] + if !ok { + return fmt.Errorf("root module has no resource called %s", name) + } + + attr := rs.Primary.Attributes + + if attr[accountIDAttr] != cid { + return fmt.Errorf("bad %s %s", accountIDAttr, attr[accountIDAttr]) + } + + return nil + } +} + +const testAccDataSourceCirconusAccountCurrentConfig = ` +data "circonus_account" "by_current" { + current = true +} +` + +const testAccDataSourceCirconusAccountIDConfig = ` +data "circonus_account" "by_id" { + id = "/account/3081" +} +` diff --git a/builtin/providers/circonus/data_source_circonus_collector.go b/builtin/providers/circonus/data_source_circonus_collector.go new file mode 100644 index 0000000000..6dda66a825 --- /dev/null +++ b/builtin/providers/circonus/data_source_circonus_collector.go @@ -0,0 +1,214 @@ +package circonus + +import ( + "fmt" + + "github.com/circonus-labs/circonus-gometrics/api" + "github.com/circonus-labs/circonus-gometrics/api/config" + "github.com/hashicorp/errwrap" + "github.com/hashicorp/terraform/helper/schema" +) + +const ( + collectorCNAttr = "cn" + collectorIDAttr = 
"id" + collectorDetailsAttr = "details" + collectorExternalHostAttr = "external_host" + collectorExternalPortAttr = "external_port" + collectorIPAttr = "ip" + collectorLatitudeAttr = "latitude" + collectorLongitudeAttr = "longitude" + collectorMinVersionAttr = "min_version" + collectorModulesAttr = "modules" + collectorNameAttr = "name" + collectorPortAttr = "port" + collectorSkewAttr = "skew" + collectorStatusAttr = "status" + collectorTagsAttr = "tags" + collectorTypeAttr = "type" + collectorVersionAttr = "version" +) + +var collectorDescription = map[schemaAttr]string{ + collectorDetailsAttr: "Details associated with individual collectors (a.k.a. broker)", + collectorTagsAttr: "Tags assigned to a collector", +} + +func dataSourceCirconusCollector() *schema.Resource { + return &schema.Resource{ + Read: dataSourceCirconusCollectorRead, + + Schema: map[string]*schema.Schema{ + collectorDetailsAttr: &schema.Schema{ + Type: schema.TypeList, + Computed: true, + Description: collectorDescription[collectorDetailsAttr], + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + collectorCNAttr: &schema.Schema{ + Type: schema.TypeString, + Computed: true, + Description: collectorDescription[collectorCNAttr], + }, + collectorExternalHostAttr: &schema.Schema{ + Type: schema.TypeString, + Computed: true, + Description: collectorDescription[collectorExternalHostAttr], + }, + collectorExternalPortAttr: &schema.Schema{ + Type: schema.TypeInt, + Computed: true, + Description: collectorDescription[collectorExternalPortAttr], + }, + collectorIPAttr: &schema.Schema{ + Type: schema.TypeString, + Computed: true, + Description: collectorDescription[collectorIPAttr], + }, + collectorMinVersionAttr: &schema.Schema{ + Type: schema.TypeInt, + Computed: true, + Description: collectorDescription[collectorMinVersionAttr], + }, + collectorModulesAttr: &schema.Schema{ + Type: schema.TypeList, + Computed: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + Description: collectorDescription[collectorModulesAttr], + }, + collectorPortAttr: &schema.Schema{ + Type: schema.TypeInt, + Computed: true, + Description: collectorDescription[collectorPortAttr], + }, + collectorSkewAttr: &schema.Schema{ + Type: schema.TypeString, + Computed: true, + Description: collectorDescription[collectorSkewAttr], + }, + collectorStatusAttr: &schema.Schema{ + Type: schema.TypeString, + Computed: true, + Description: collectorDescription[collectorStatusAttr], + }, + collectorVersionAttr: &schema.Schema{ + Type: schema.TypeInt, + Computed: true, + Description: collectorDescription[collectorVersionAttr], + }, + }, + }, + }, + collectorIDAttr: &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + ValidateFunc: validateRegexp(collectorIDAttr, config.BrokerCIDRegex), + Description: collectorDescription[collectorIDAttr], + }, + collectorLatitudeAttr: &schema.Schema{ + Type: schema.TypeString, + Computed: true, + Description: collectorDescription[collectorLatitudeAttr], + }, + collectorLongitudeAttr: &schema.Schema{ + Type: schema.TypeString, + Computed: true, + Description: collectorDescription[collectorLongitudeAttr], + }, + collectorNameAttr: &schema.Schema{ + Type: schema.TypeString, + Computed: true, + Description: collectorDescription[collectorNameAttr], + }, + collectorTagsAttr: tagMakeConfigSchema(collectorTagsAttr), + collectorTypeAttr: &schema.Schema{ + Type: schema.TypeString, + Computed: true, + Description: collectorDescription[collectorTypeAttr], + }, + }, + } +} + +func 
dataSourceCirconusCollectorRead(d *schema.ResourceData, meta interface{}) error { + ctxt := meta.(*providerContext) + + var collector *api.Broker + var err error + cid := d.Id() + if cidRaw, ok := d.GetOk(collectorIDAttr); ok { + cid = cidRaw.(string) + } + collector, err = ctxt.client.FetchBroker(api.CIDType(&cid)) + if err != nil { + return err + } + + d.SetId(collector.CID) + + if err := d.Set(collectorDetailsAttr, collectorDetailsToState(collector)); err != nil { + return errwrap.Wrapf(fmt.Sprintf("Unable to store collector %q attribute: {{err}}", collectorDetailsAttr), err) + } + + d.Set(collectorIDAttr, collector.CID) + d.Set(collectorLatitudeAttr, collector.Latitude) + d.Set(collectorLongitudeAttr, collector.Longitude) + d.Set(collectorNameAttr, collector.Name) + d.Set(collectorTagsAttr, collector.Tags) + d.Set(collectorTypeAttr, collector.Type) + + return nil +} + +func collectorDetailsToState(c *api.Broker) []interface{} { + details := make([]interface{}, 0, len(c.Details)) + + for _, collector := range c.Details { + collectorDetails := make(map[string]interface{}, defaultCollectorDetailAttrs) + + collectorDetails[collectorCNAttr] = collector.CN + + if collector.ExternalHost != nil { + collectorDetails[collectorExternalHostAttr] = *collector.ExternalHost + } + + if collector.ExternalPort != 0 { + collectorDetails[collectorExternalPortAttr] = collector.ExternalPort + } + + if collector.IP != nil { + collectorDetails[collectorIPAttr] = *collector.IP + } + + if collector.MinVer != 0 { + collectorDetails[collectorMinVersionAttr] = collector.MinVer + } + + if len(collector.Modules) > 0 { + collectorDetails[collectorModulesAttr] = collector.Modules + } + + if collector.Port != nil { + collectorDetails[collectorPortAttr] = *collector.Port + } + + if collector.Skew != nil { + collectorDetails[collectorSkewAttr] = *collector.Skew + } + + if collector.Status != "" { + collectorDetails[collectorStatusAttr] = collector.Status + } + + if collector.Version != nil { + collectorDetails[collectorVersionAttr] = *collector.Version + } + + details = append(details, collectorDetails) + } + + return details +} diff --git a/builtin/providers/circonus/data_source_circonus_collector_test.go b/builtin/providers/circonus/data_source_circonus_collector_test.go new file mode 100644 index 0000000000..54d5feba9f --- /dev/null +++ b/builtin/providers/circonus/data_source_circonus_collector_test.go @@ -0,0 +1,47 @@ +package circonus + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccDataSourceCirconusCollector(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccDataSourceCirconusCollectorConfig, + Check: resource.ComposeTestCheckFunc( + testAccDataSourceCirconusCollectorCheck("data.circonus_collector.by_id", "/broker/1"), + ), + }, + }, + }) +} + +func testAccDataSourceCirconusCollectorCheck(name, cid string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[name] + if !ok { + return fmt.Errorf("root module has no resource called %s", name) + } + + attr := rs.Primary.Attributes + + if attr[collectorIDAttr] != cid { + return fmt.Errorf("bad id %s", attr[collectorIDAttr]) + } + + return nil + } +} + +const testAccDataSourceCirconusCollectorConfig = ` +data "circonus_collector" "by_id" { + id = "/broker/1" +} +` diff --git 
a/builtin/providers/circonus/interface.go b/builtin/providers/circonus/interface.go new file mode 100644 index 0000000000..d5777cf816 --- /dev/null +++ b/builtin/providers/circonus/interface.go @@ -0,0 +1,83 @@ +package circonus + +import "log" + +type interfaceList []interface{} +type interfaceMap map[string]interface{} + +// newInterfaceMap returns a helper type that has methods for common operations +// for accessing data. +func newInterfaceMap(l interface{}) interfaceMap { + return interfaceMap(l.(map[string]interface{})) +} + +// CollectList returns []string of values that matched the key attrName. +// interfaceList most likely came from a schema.TypeSet. +func (l interfaceList) CollectList(attrName schemaAttr) []string { + stringList := make([]string, 0, len(l)) + + for _, mapRaw := range l { + mapAttrs := mapRaw.(map[string]interface{}) + + if v, ok := mapAttrs[string(attrName)]; ok { + stringList = append(stringList, v.(string)) + } + } + + return stringList +} + +// List returns a list of values in a Set as a string slice +func (l interfaceList) List() []string { + stringList := make([]string, 0, len(l)) + for _, e := range l { + switch e.(type) { + case string: + stringList = append(stringList, e.(string)) + case []interface{}: + for _, v := range e.([]interface{}) { + stringList = append(stringList, v.(string)) + } + default: + log.Printf("[ERROR] PROVIDER BUG: unable to convert %#v to list", e) + return nil + } + } + return stringList +} + +// CollectList returns []string of values that matched the key attrName. +// interfaceMap most likely came from a schema.TypeSet. +func (m interfaceMap) CollectList(attrName schemaAttr) []string { + stringList := make([]string, 0, len(m)) + + for _, mapRaw := range m { + mapAttrs := mapRaw.(map[string]interface{}) + + if v, ok := mapAttrs[string(attrName)]; ok { + stringList = append(stringList, v.(string)) + } + } + + return stringList +} + +// CollectMap returns map[string]string of values that matched the key attrName. +// interfaceMap most likely came from a schema.TypeSet. +func (m interfaceMap) CollectMap(attrName schemaAttr) map[string]string { + var mergedMap map[string]string + + if attrRaw, ok := m[string(attrName)]; ok { + attrMap := attrRaw.(map[string]interface{}) + mergedMap = make(map[string]string, len(m)) + for k, v := range attrMap { + mergedMap[k] = v.(string) + } + } + + if len(mergedMap) == 0 { + return nil + } + + return mergedMap +} diff --git a/builtin/providers/circonus/metric.go b/builtin/providers/circonus/metric.go new file mode 100644 index 0000000000..78483ec794 --- /dev/null +++ b/builtin/providers/circonus/metric.go @@ -0,0 +1,182 @@ +package circonus + +// The circonusMetric type is the backing store of the `circonus_metric` resource. 
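+// +// A rough lifecycle sketch (illustrative only -- the flow is inferred from the +// helper signatures below, not a verbatim call sequence): +// +// m := newMetric() +// _ = m.ParseConfig(id, d) // Terraform config -> api.CheckBundleMetric +// _ = m.SaveState(d) // api.CheckBundleMetric -> Terraform state +// +// ParseConfigMap is the map-based variant, presumably used when a metric is +// embedded in another resource's schema rather than in a standalone +// circonus_metric resource.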
+ +import ( + "bytes" + "fmt" + + "github.com/circonus-labs/circonus-gometrics/api" + "github.com/hashicorp/errwrap" + uuid "github.com/hashicorp/go-uuid" + "github.com/hashicorp/terraform/helper/hashcode" + "github.com/hashicorp/terraform/helper/schema" +) + +type circonusMetric struct { + ID metricID + api.CheckBundleMetric +} + +func newMetric() circonusMetric { + return circonusMetric{} +} + +func (m *circonusMetric) Create(d *schema.ResourceData) error { + return m.SaveState(d) +} + +func (m *circonusMetric) ParseConfig(id string, d *schema.ResourceData) error { + m.ID = metricID(id) + + if v, found := d.GetOk(metricNameAttr); found { + m.Name = v.(string) + } + + if v, found := d.GetOk(metricActiveAttr); found { + m.Status = metricActiveToAPIStatus(v.(bool)) + } + + if v, found := d.GetOk(metricTagsAttr); found { + m.Tags = derefStringList(flattenSet(v.(*schema.Set))) + } + + if v, found := d.GetOk(metricTypeAttr); found { + m.Type = v.(string) + } + + if v, found := d.GetOk(metricUnitAttr); found { + s := v.(string) + m.Units = &s + } + + if m.Units != nil && *m.Units == "" { + m.Units = nil + } + + return nil +} + +func (m *circonusMetric) ParseConfigMap(id string, attrMap map[string]interface{}) error { + m.ID = metricID(id) + + if v, found := attrMap[metricNameAttr]; found { + m.Name = v.(string) + } + + if v, found := attrMap[metricActiveAttr]; found { + m.Status = metricActiveToAPIStatus(v.(bool)) + } + + if v, found := attrMap[metricTagsAttr]; found { + m.Tags = derefStringList(flattenSet(v.(*schema.Set))) + } + + if v, found := attrMap[metricTypeAttr]; found { + m.Type = v.(string) + } + + if v, found := attrMap[metricUnitAttr]; found { + s := v.(string) + m.Units = &s + } + + if m.Units != nil && *m.Units == "" { + m.Units = nil + } + + return nil +} + +func (m *circonusMetric) SaveState(d *schema.ResourceData) error { + d.SetId(string(m.ID)) + + d.Set(metricActiveAttr, metricAPIStatusToBool(m.Status)) + d.Set(metricNameAttr, m.Name) + d.Set(metricTagsAttr, tagsToState(apiToTags(m.Tags))) + d.Set(metricTypeAttr, m.Type) + d.Set(metricUnitAttr, indirect(m.Units)) + + return nil +} + +func (m *circonusMetric) Update(d *schema.ResourceData) error { + // NOTE: there are no "updates" to be made against an API server, so we just + // pass through a call to SaveState. Keep this method around for API + // symmetry. + return m.SaveState(d) +} + +func metricAPIStatusToBool(s string) bool { + switch s { + case metricStatusActive: + return true + case metricStatusAvailable: + return false + default: + // log.Printf("PROVIDER BUG: metric status %q unsupported", s) + return false + } +} + +func metricActiveToAPIStatus(active bool) string { + if active { + return metricStatusActive + } + + return metricStatusAvailable +} + +func newMetricID() (string, error) { + id, err := uuid.GenerateUUID() + if err != nil { + return "", errwrap.Wrapf("metric ID creation failed: {{err}}", err) + } + + return id, nil +} + +func metricChecksum(m interfaceMap) int { + b := &bytes.Buffer{} + b.Grow(defaultHashBufSize) + + // Order writes to the buffer using lexically sorted list for easy visual + // reconciliation with other lists. 
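+ // Because every branch below writes in a fixed attribute order, two maps + // holding equal values always produce identical buffers, and therefore equal + // hashcode.String results -- the stability that schema.TypeSet hashing needs.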
+ if v, found := m[metricActiveAttr]; found { + fmt.Fprintf(b, "%t", v.(bool)) + } + + if v, found := m[metricNameAttr]; found { + fmt.Fprint(b, v.(string)) + } + + if v, found := m[metricTagsAttr]; found { + tags := derefStringList(flattenSet(v.(*schema.Set))) + for _, tag := range tags { + fmt.Fprint(b, tag) + } + } + + if v, found := m[metricTypeAttr]; found { + fmt.Fprint(b, v.(string)) + } + + if v, found := m[metricUnitAttr]; found { + if v != nil { + var s string + switch v.(type) { + case string: + s = v.(string) + case *string: + s = *v.(*string) + } + + if s != "" { + fmt.Fprint(b, s) + } + } + } + + s := b.String() + return hashcode.String(s) +} diff --git a/builtin/providers/circonus/metric_cluster.go b/builtin/providers/circonus/metric_cluster.go new file mode 100644 index 0000000000..6d06834f42 --- /dev/null +++ b/builtin/providers/circonus/metric_cluster.go @@ -0,0 +1,57 @@ +package circonus + +import ( + "fmt" + + "github.com/circonus-labs/circonus-gometrics/api" + "github.com/hashicorp/errwrap" +) + +type circonusMetricCluster struct { + api.MetricCluster +} + +func newMetricCluster() circonusMetricCluster { + return circonusMetricCluster{ + MetricCluster: api.MetricCluster{}, + } +} + +func loadMetricCluster(ctxt *providerContext, cid api.CIDType) (circonusMetricCluster, error) { + var mc circonusMetricCluster + cmc, err := ctxt.client.FetchMetricCluster(cid, "") + if err != nil { + return circonusMetricCluster{}, err + } + mc.MetricCluster = *cmc + + return mc, nil +} + +func (mc *circonusMetricCluster) Create(ctxt *providerContext) error { + cmc, err := ctxt.client.CreateMetricCluster(&mc.MetricCluster) + if err != nil { + return err + } + + mc.CID = cmc.CID + + return nil +} + +func (mc *circonusMetricCluster) Update(ctxt *providerContext) error { + _, err := ctxt.client.UpdateMetricCluster(&mc.MetricCluster) + if err != nil { + return errwrap.Wrapf(fmt.Sprintf("Unable to update metric cluster %s: {{err}}", mc.CID), err) + } + + return nil +} + +func (mc *circonusMetricCluster) Validate() error { + if len(mc.Queries) < 1 { + return fmt.Errorf("there must be at least one metric cluster query present") + } + + return nil +} diff --git a/builtin/providers/circonus/metric_test.go b/builtin/providers/circonus/metric_test.go new file mode 100644 index 0000000000..b7b28efb85 --- /dev/null +++ b/builtin/providers/circonus/metric_test.go @@ -0,0 +1,19 @@ +package circonus + +import "testing" + +func Test_MetricChecksum(t *testing.T) { + unit := "qty" + m := interfaceMap{ + string(metricActiveAttr): true, + string(metricNameAttr): "asdf", + string(metricTagsAttr): tagsToState(apiToTags([]string{"foo", "bar"})), + string(metricTypeAttr): "json", + string(metricUnitAttr): &unit, + } + + csum := metricChecksum(m) + if csum != 4250221491 { + t.Fatalf("Checksum mismatch") + } +} diff --git a/builtin/providers/circonus/provider.go b/builtin/providers/circonus/provider.go new file mode 100644 index 0000000000..9796d6be0e --- /dev/null +++ b/builtin/providers/circonus/provider.go @@ -0,0 +1,135 @@ +package circonus + +import ( + "bytes" + "fmt" + + "github.com/circonus-labs/circonus-gometrics/api" + "github.com/hashicorp/errwrap" + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/terraform" +) + +const ( + defaultCirconus404ErrorString = "API response code 404:" + defaultCirconusAggregationWindow = "300s" + defaultCirconusAlertMinEscalateAfter = "300s" + defaultCirconusCheckPeriodMax = "300s" + defaultCirconusCheckPeriodMin = "30s" + defaultCirconusHTTPFormat 
= "json" + defaultCirconusHTTPMethod = "POST" + defaultCirconusSlackUsername = "Circonus" + defaultCirconusTimeoutMax = "300s" + defaultCirconusTimeoutMin = "0s" + maxSeverity = 5 + minSeverity = 1 +) + +var providerDescription = map[string]string{ + providerAPIURLAttr: "URL of the Circonus API", + providerAutoTagAttr: "Signals that the provider should automatically add a tag to all API calls denoting that the resource was created by Terraform", + providerKeyAttr: "API token used to authenticate with the Circonus API", +} + +// Constants that want to be a constant but can't in Go +var ( + validContactHTTPFormats = validStringValues{"json", "params"} + validContactHTTPMethods = validStringValues{"GET", "POST"} +) + +type contactMethods string + +// globalAutoTag controls whether or not the provider should automatically add a +// tag to each resource. +// +// NOTE(sean): This is done as a global variable because the diff suppress +// functions do not have access to the providerContext, only the key, old, and +// new values. +var globalAutoTag bool + +type providerContext struct { + // Circonus API client + client *api.API + + // autoTag, when true, automatically appends defaultCirconusTag + autoTag bool + + // defaultTag is the tag applied to resources when autoTag is enabled. + defaultTag circonusTag +} + +// Provider returns a terraform.ResourceProvider. +func Provider() terraform.ResourceProvider { + return &schema.Provider{ + Schema: map[string]*schema.Schema{ + providerAPIURLAttr: { + Type: schema.TypeString, + Optional: true, + Default: "https://api.circonus.com/v2", + Description: providerDescription[providerAPIURLAttr], + }, + providerAutoTagAttr: { + Type: schema.TypeBool, + Optional: true, + Default: defaultAutoTag, + Description: providerDescription[providerAutoTagAttr], + }, + providerKeyAttr: { + Type: schema.TypeString, + Required: true, + Sensitive: true, + DefaultFunc: schema.EnvDefaultFunc("CIRCONUS_API_TOKEN", nil), + Description: providerDescription[providerKeyAttr], + }, + }, + + DataSourcesMap: map[string]*schema.Resource{ + "circonus_account": dataSourceCirconusAccount(), + "circonus_collector": dataSourceCirconusCollector(), + }, + + ResourcesMap: map[string]*schema.Resource{ + "circonus_check": resourceCheck(), + "circonus_contact_group": resourceContactGroup(), + "circonus_graph": resourceGraph(), + "circonus_metric": resourceMetric(), + "circonus_metric_cluster": resourceMetricCluster(), + "circonus_rule_set": resourceRuleSet(), + }, + + ConfigureFunc: providerConfigure, + } +} + +func providerConfigure(d *schema.ResourceData) (interface{}, error) { + globalAutoTag = d.Get(providerAutoTagAttr).(bool) + + config := &api.Config{ + URL: d.Get(providerAPIURLAttr).(string), + TokenKey: d.Get(providerKeyAttr).(string), + TokenApp: tfAppName(), + } + + client, err := api.NewAPI(config) + if err != nil { + return nil, errwrap.Wrapf("Error initializing Circonus: {{err}}", err) + } + + return &providerContext{ + client: client, + autoTag: d.Get(providerAutoTagAttr).(bool), + defaultTag: defaultCirconusTag, + }, nil +} + +func tfAppName() string { + const VersionPrerelease = terraform.VersionPrerelease + var versionString bytes.Buffer + + fmt.Fprintf(&versionString, "Terraform v%s", terraform.Version) + if VersionPrerelease != "" { + fmt.Fprintf(&versionString, "-%s", VersionPrerelease) + } + + return versionString.String() +} diff --git a/builtin/providers/circonus/provider_test.go b/builtin/providers/circonus/provider_test.go new file mode 100644 index 0000000000..4a30f4877d --- 
/dev/null +++ b/builtin/providers/circonus/provider_test.go @@ -0,0 +1,35 @@ +package circonus + +import ( + "os" + "testing" + + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/terraform" +) + +var testAccProviders map[string]terraform.ResourceProvider +var testAccProvider *schema.Provider + +func init() { + testAccProvider = Provider().(*schema.Provider) + testAccProviders = map[string]terraform.ResourceProvider{ + "circonus": testAccProvider, + } +} + +func TestProvider(t *testing.T) { + if err := Provider().(*schema.Provider).InternalValidate(); err != nil { + t.Fatalf("err: %s", err) + } +} + +func TestProvider_impl(t *testing.T) { + var _ terraform.ResourceProvider = Provider() +} + +func testAccPreCheck(t *testing.T) { + if apiToken := os.Getenv("CIRCONUS_API_TOKEN"); apiToken == "" { + t.Fatal("CIRCONUS_API_TOKEN must be set for acceptance tests") + } +} diff --git a/builtin/providers/circonus/resource_circonus_check.go b/builtin/providers/circonus/resource_circonus_check.go new file mode 100644 index 0000000000..0c2b6d5016 --- /dev/null +++ b/builtin/providers/circonus/resource_circonus_check.go @@ -0,0 +1,628 @@ +package circonus + +/* + * Note to future readers: The `circonus_check` resource is actually a facade for + * the check_bundle call. check_bundle is an implementation detail that we mask + * over and expose just a "check" even though the "check" is actually a + * check_bundle. + * + * Style note: There are two directions that information flows: + * + * 1) Terraform Config file into API Objects. *Attr named objects are Config or + * Schema attribute names. In this file, all config constants should be + * named check*Attr. + * + * 2) API Objects into Statefile data. api*Attr named constants are parameters + * that originate from the API and need to be mapped into the provider's + * vernacular. + */ + +import ( + "fmt" + "time" + + "github.com/circonus-labs/circonus-gometrics/api" + "github.com/circonus-labs/circonus-gometrics/api/config" + "github.com/hashicorp/errwrap" + "github.com/hashicorp/terraform/helper/schema" +) + +const ( + // circonus_check.* global resource attribute names + checkActiveAttr = "active" + checkCAQLAttr = "caql" + checkCloudWatchAttr = "cloudwatch" + checkCollectorAttr = "collector" + checkHTTPAttr = "http" + checkHTTPTrapAttr = "httptrap" + checkICMPPingAttr = "icmp_ping" + checkJSONAttr = "json" + checkMetricLimitAttr = "metric_limit" + checkMySQLAttr = "mysql" + checkNameAttr = "name" + checkNotesAttr = "notes" + checkPeriodAttr = "period" + checkPostgreSQLAttr = "postgresql" + checkMetricAttr = "metric" + checkStatsdAttr = "statsd" + checkTagsAttr = "tags" + checkTargetAttr = "target" + checkTCPAttr = "tcp" + checkTimeoutAttr = "timeout" + checkTypeAttr = "type" + + // circonus_check.collector.* resource attribute names + checkCollectorIDAttr = "id" + + // circonus_check.metric.* resource attribute names are aliased to + // circonus_metric.* resource attributes.
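+ + // Per the style note at the top of this file: check*Attr constants are + // inputs read from the Terraform config, while the checkOut*Attr constants + // below are computed-only attributes, presumably written back to state from + // the check_bundle API response.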
+
+	// circonus_check.metric.* resource attribute names
+	// metricIDAttr = "id"
+
+	// Out parameters for circonus_check
+	checkOutByCollectorAttr        = "check_by_collector"
+	checkOutIDAttr                 = "check_id"
+	checkOutChecksAttr             = "checks"
+	checkOutCreatedAttr            = "created"
+	checkOutLastModifiedAttr       = "last_modified"
+	checkOutLastModifiedByAttr     = "last_modified_by"
+	checkOutReverseConnectURLsAttr = "reverse_connect_urls"
+	checkOutCheckUUIDsAttr         = "uuids"
+)
+
+const (
+	// Circonus API constants from their API endpoints
+	apiCheckTypeCAQLAttr       apiCheckType = "caql"
+	apiCheckTypeCloudWatchAttr apiCheckType = "cloudwatch"
+	apiCheckTypeHTTPAttr       apiCheckType = "http"
+	apiCheckTypeHTTPTrapAttr   apiCheckType = "httptrap"
+	apiCheckTypeICMPPingAttr   apiCheckType = "ping_icmp"
+	apiCheckTypeJSONAttr       apiCheckType = "json"
+	apiCheckTypeMySQLAttr      apiCheckType = "mysql"
+	apiCheckTypePostgreSQLAttr apiCheckType = "postgres"
+	apiCheckTypeStatsdAttr     apiCheckType = "statsd"
+	apiCheckTypeTCPAttr        apiCheckType = "tcp"
+)
+
+var checkDescriptions = attrDescrs{
+	checkActiveAttr:      "Whether the check is active or disabled",
+	checkCAQLAttr:        "CAQL check configuration",
+	checkCloudWatchAttr:  "CloudWatch check configuration",
+	checkCollectorAttr:   "The collector(s) that are responsible for gathering the metrics",
+	checkHTTPAttr:        "HTTP check configuration",
+	checkHTTPTrapAttr:    "HTTP Trap check configuration",
+	checkICMPPingAttr:    "ICMP ping check configuration",
+	checkJSONAttr:        "JSON check configuration",
+	checkMetricAttr:      "Configuration for a stream of metrics",
+	checkMetricLimitAttr: `Setting a metric_limit will enable all (-1), disable (0), or allow up to the specified limit of metrics for this check ("N+", where N is a positive integer)`,
+	checkMySQLAttr:       "MySQL check configuration",
+	checkNameAttr:        "The name of the check bundle that will be displayed in the web interface",
+	checkNotesAttr:       "Notes about this check bundle",
+	checkPeriodAttr:      "The period between each time the check is made",
+	checkPostgreSQLAttr:  "PostgreSQL check configuration",
+	checkStatsdAttr:      "statsd check configuration",
+	checkTCPAttr:         "TCP check configuration",
+	checkTagsAttr:        "A list of tags assigned to the check",
+	checkTargetAttr:      "The target of the check (e.g.
hostname, URL, IP, etc)", + checkTimeoutAttr: "The length of time in seconds (and fractions of a second) before the check will timeout if no response is returned to the collector", + checkTypeAttr: "The check type", + + checkOutByCollectorAttr: "", + checkOutCheckUUIDsAttr: "", + checkOutChecksAttr: "", + checkOutCreatedAttr: "", + checkOutIDAttr: "", + checkOutLastModifiedAttr: "", + checkOutLastModifiedByAttr: "", + checkOutReverseConnectURLsAttr: "", +} + +var checkCollectorDescriptions = attrDescrs{ + checkCollectorIDAttr: "The ID of the collector", +} + +var checkMetricDescriptions = metricDescriptions + +func resourceCheck() *schema.Resource { + return &schema.Resource{ + Create: checkCreate, + Read: checkRead, + Update: checkUpdate, + Delete: checkDelete, + Exists: checkExists, + Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, + + Schema: convertToHelperSchema(checkDescriptions, map[schemaAttr]*schema.Schema{ + checkActiveAttr: &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Default: true, + }, + checkCAQLAttr: schemaCheckCAQL, + checkCloudWatchAttr: schemaCheckCloudWatch, + checkCollectorAttr: &schema.Schema{ + Type: schema.TypeSet, + Optional: true, + MinItems: 1, + Elem: &schema.Resource{ + Schema: convertToHelperSchema(checkCollectorDescriptions, map[schemaAttr]*schema.Schema{ + checkCollectorIDAttr: &schema.Schema{ + Type: schema.TypeString, + Required: true, + ValidateFunc: validateRegexp(checkCollectorIDAttr, config.BrokerCIDRegex), + }, + }), + }, + }, + checkHTTPAttr: schemaCheckHTTP, + checkHTTPTrapAttr: schemaCheckHTTPTrap, + checkJSONAttr: schemaCheckJSON, + checkICMPPingAttr: schemaCheckICMPPing, + checkMetricAttr: &schema.Schema{ + Type: schema.TypeSet, + Optional: true, + Set: checkMetricChecksum, + MinItems: 1, + Elem: &schema.Resource{ + Schema: convertToHelperSchema(checkMetricDescriptions, map[schemaAttr]*schema.Schema{ + metricActiveAttr: &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Default: true, + }, + metricNameAttr: &schema.Schema{ + Type: schema.TypeString, + Required: true, + ValidateFunc: validateRegexp(metricNameAttr, `[\S]+`), + }, + metricTagsAttr: tagMakeConfigSchema(metricTagsAttr), + metricTypeAttr: &schema.Schema{ + Type: schema.TypeString, + Required: true, + ValidateFunc: validateMetricType, + }, + metricUnitAttr: &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Default: metricUnit, + ValidateFunc: validateRegexp(metricUnitAttr, metricUnitRegexp), + }, + }), + }, + }, + checkMetricLimitAttr: &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + Computed: true, + ValidateFunc: validateFuncs( + validateIntMin(checkMetricLimitAttr, -1), + ), + }, + checkMySQLAttr: schemaCheckMySQL, + checkNameAttr: &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + checkNotesAttr: &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + StateFunc: suppressWhitespace, + }, + checkPeriodAttr: &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + StateFunc: normalizeTimeDurationStringToSeconds, + ValidateFunc: validateFuncs( + validateDurationMin(checkPeriodAttr, defaultCirconusCheckPeriodMin), + validateDurationMax(checkPeriodAttr, defaultCirconusCheckPeriodMax), + ), + }, + checkPostgreSQLAttr: schemaCheckPostgreSQL, + checkStatsdAttr: schemaCheckStatsd, + checkTagsAttr: tagMakeConfigSchema(checkTagsAttr), + checkTargetAttr: &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, 
+				ValidateFunc: validateRegexp(checkTargetAttr, `.+`),
+			},
+			checkTCPAttr: schemaCheckTCP,
+			checkTimeoutAttr: &schema.Schema{
+				Type:      schema.TypeString,
+				Optional:  true,
+				Computed:  true,
+				StateFunc: normalizeTimeDurationStringToSeconds,
+				ValidateFunc: validateFuncs(
+					validateDurationMin(checkTimeoutAttr, defaultCirconusTimeoutMin),
+					validateDurationMax(checkTimeoutAttr, defaultCirconusTimeoutMax),
+				),
+			},
+			checkTypeAttr: &schema.Schema{
+				Type:         schema.TypeString,
+				Computed:     true,
+				Optional:     true,
+				ForceNew:     true,
+				ValidateFunc: validateCheckType,
+			},
+
+			// Out parameters
+			checkOutIDAttr: &schema.Schema{
+				Type:     schema.TypeString,
+				Computed: true,
+			},
+			checkOutByCollectorAttr: &schema.Schema{
+				Type:     schema.TypeMap,
+				Computed: true,
+				Elem: &schema.Schema{
+					Type: schema.TypeString,
+				},
+			},
+			checkOutCheckUUIDsAttr: &schema.Schema{
+				Type:     schema.TypeList,
+				Computed: true,
+				Elem: &schema.Schema{
+					Type: schema.TypeString,
+				},
+			},
+			checkOutChecksAttr: &schema.Schema{
+				Type:     schema.TypeList,
+				Computed: true,
+				Elem: &schema.Schema{
+					Type: schema.TypeString,
+				},
+			},
+			checkOutCreatedAttr: &schema.Schema{
+				Type:     schema.TypeInt,
+				Computed: true,
+			},
+			checkOutLastModifiedAttr: &schema.Schema{
+				Type:     schema.TypeInt,
+				Computed: true,
+			},
+			checkOutLastModifiedByAttr: &schema.Schema{
+				Type:     schema.TypeString,
+				Computed: true,
+			},
+			checkOutReverseConnectURLsAttr: &schema.Schema{
+				Type:     schema.TypeList,
+				Computed: true,
+				Elem: &schema.Schema{
+					Type: schema.TypeString,
+				},
+			},
+		}),
+	}
+}
+
+func checkCreate(d *schema.ResourceData, meta interface{}) error {
+	ctxt := meta.(*providerContext)
+	c := newCheck()
+	if err := c.ParseConfig(d); err != nil {
+		return errwrap.Wrapf("error parsing check schema during create: {{err}}", err)
+	}
+
+	if err := c.Create(ctxt); err != nil {
+		return errwrap.Wrapf("error creating check: {{err}}", err)
+	}
+
+	d.SetId(c.CID)
+
+	return checkRead(d, meta)
+}
+
+func checkExists(d *schema.ResourceData, meta interface{}) (bool, error) {
+	ctxt := meta.(*providerContext)
+
+	cid := d.Id()
+	cb, err := ctxt.client.FetchCheckBundle(api.CIDType(&cid))
+	if err != nil {
+		return false, err
+	}
+
+	if cb.CID == "" {
+		return false, nil
+	}
+
+	return true, nil
+}
+
+// checkRead pulls data out of the CheckBundle object and stores it into the
+// appropriate place in the statefile.
+func checkRead(d *schema.ResourceData, meta interface{}) error {
+	ctxt := meta.(*providerContext)
+
+	cid := d.Id()
+	c, err := loadCheck(ctxt, api.CIDType(&cid))
+	if err != nil {
+		return err
+	}
+
+	d.SetId(c.CID)
+
+	// Global circonus_check attributes are saved first, followed by the check
+	// type specific attributes handled below in their respective checkRead*().
+
+	checkIDsByCollector := make(map[string]interface{}, len(c.Checks))
+	for i, b := range c.Brokers {
+		checkIDsByCollector[b] = c.Checks[i]
+	}
+
+	var checkID string
+	if len(c.Checks) == 1 {
+		checkID = c.Checks[0]
+	}
+
+	metrics := schema.NewSet(checkMetricChecksum, nil)
+	for _, m := range c.Metrics {
+		metricAttrs := map[string]interface{}{
+			string(metricActiveAttr): metricAPIStatusToBool(m.Status),
+			string(metricNameAttr):   m.Name,
+			string(metricTagsAttr):   tagsToState(apiToTags(m.Tags)),
+			string(metricTypeAttr):   m.Type,
+			string(metricUnitAttr):   indirect(m.Units),
+		}
+
+		metrics.Add(metricAttrs)
+	}
+
+	// Write the global circonus_check parameters followed by the check
+	// type-specific parameters.
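+	// (A note on the pattern below, added for illustration: errors from d.Set
+	// are checked and wrapped only for the aggregate attributes (sets, lists,
+	// and maps), where encoding into state can fail; for scalar attributes the
+	// return value is ignored.)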
+ + d.Set(checkActiveAttr, checkAPIStatusToBool(c.Status)) + + if err := d.Set(checkCollectorAttr, stringListToSet(c.Brokers, checkCollectorIDAttr)); err != nil { + return errwrap.Wrapf(fmt.Sprintf("Unable to store check %q attribute: {{err}}", checkCollectorAttr), err) + } + + d.Set(checkMetricLimitAttr, c.MetricLimit) + d.Set(checkNameAttr, c.DisplayName) + d.Set(checkNotesAttr, c.Notes) + d.Set(checkPeriodAttr, fmt.Sprintf("%ds", c.Period)) + + if err := d.Set(checkMetricAttr, metrics); err != nil { + return errwrap.Wrapf(fmt.Sprintf("Unable to store check %q attribute: {{err}}", checkMetricAttr), err) + } + + if err := d.Set(checkTagsAttr, c.Tags); err != nil { + return errwrap.Wrapf(fmt.Sprintf("Unable to store check %q attribute: {{err}}", checkTagsAttr), err) + } + + d.Set(checkTargetAttr, c.Target) + + { + t, _ := time.ParseDuration(fmt.Sprintf("%fs", c.Timeout)) + d.Set(checkTimeoutAttr, t.String()) + } + + d.Set(checkTypeAttr, c.Type) + + // Last step: parse a check_bundle's config into the statefile. + if err := parseCheckTypeConfig(&c, d); err != nil { + return errwrap.Wrapf("Unable to parse check config: {{err}}", err) + } + + // Out parameters + if err := d.Set(checkOutByCollectorAttr, checkIDsByCollector); err != nil { + return errwrap.Wrapf(fmt.Sprintf("Unable to store check %q attribute: {{err}}", checkOutByCollectorAttr), err) + } + + if err := d.Set(checkOutCheckUUIDsAttr, c.CheckUUIDs); err != nil { + return errwrap.Wrapf(fmt.Sprintf("Unable to store check %q attribute: {{err}}", checkOutCheckUUIDsAttr), err) + } + + if err := d.Set(checkOutChecksAttr, c.Checks); err != nil { + return errwrap.Wrapf(fmt.Sprintf("Unable to store check %q attribute: {{err}}", checkOutChecksAttr), err) + } + + if checkID != "" { + d.Set(checkOutIDAttr, checkID) + } + + d.Set(checkOutCreatedAttr, c.Created) + d.Set(checkOutLastModifiedAttr, c.LastModified) + d.Set(checkOutLastModifiedByAttr, c.LastModifedBy) + + if err := d.Set(checkOutReverseConnectURLsAttr, c.ReverseConnectURLs); err != nil { + return errwrap.Wrapf(fmt.Sprintf("Unable to store check %q attribute: {{err}}", checkOutReverseConnectURLsAttr), err) + } + + return nil +} + +func checkUpdate(d *schema.ResourceData, meta interface{}) error { + ctxt := meta.(*providerContext) + c := newCheck() + if err := c.ParseConfig(d); err != nil { + return err + } + + c.CID = d.Id() + if err := c.Update(ctxt); err != nil { + return errwrap.Wrapf(fmt.Sprintf("unable to update check %q: {{err}}", d.Id()), err) + } + + return checkRead(d, meta) +} + +func checkDelete(d *schema.ResourceData, meta interface{}) error { + ctxt := meta.(*providerContext) + + if _, err := ctxt.client.Delete(d.Id()); err != nil { + return errwrap.Wrapf(fmt.Sprintf("unable to delete check %q: {{err}}", d.Id()), err) + } + + d.SetId("") + + return nil +} + +func checkMetricChecksum(v interface{}) int { + m := v.(map[string]interface{}) + csum := metricChecksum(m) + return csum +} + +// ParseConfig reads Terraform config data and stores the information into a +// Circonus CheckBundle object. 
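+//
+// A rough usage sketch, for illustration (it mirrors checkCreate and
+// checkUpdate above):
+//
+//	c := newCheck()
+//	if err := c.ParseConfig(d); err != nil {
+//		return err
+//	}
+//	// then c.Create(ctxt) on create, or set c.CID = d.Id() and call
+//	// c.Update(ctxt) on update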
+func (c *circonusCheck) ParseConfig(d *schema.ResourceData) error { + if v, found := d.GetOk(checkActiveAttr); found { + c.Status = checkActiveToAPIStatus(v.(bool)) + } + + if v, found := d.GetOk(checkCollectorAttr); found { + l := v.(*schema.Set).List() + c.Brokers = make([]string, 0, len(l)) + + for _, mapRaw := range l { + mapAttrs := mapRaw.(map[string]interface{}) + + if mv, mapFound := mapAttrs[checkCollectorIDAttr]; mapFound { + c.Brokers = append(c.Brokers, mv.(string)) + } + } + } + + if v, found := d.GetOk(checkMetricLimitAttr); found { + c.MetricLimit = v.(int) + } + + if v, found := d.GetOk(checkNameAttr); found { + c.DisplayName = v.(string) + } + + if v, found := d.GetOk(checkNotesAttr); found { + s := v.(string) + c.Notes = &s + } + + if v, found := d.GetOk(checkPeriodAttr); found { + d, err := time.ParseDuration(v.(string)) + if err != nil { + return errwrap.Wrapf(fmt.Sprintf("unable to parse %q as a duration: {{err}}", checkPeriodAttr), err) + } + + c.Period = uint(d.Seconds()) + } + + if v, found := d.GetOk(checkMetricAttr); found { + metricList := v.(*schema.Set).List() + c.Metrics = make([]api.CheckBundleMetric, 0, len(metricList)) + + for _, metricListRaw := range metricList { + metricAttrs := metricListRaw.(map[string]interface{}) + + var id string + if av, found := metricAttrs[metricIDAttr]; found { + id = av.(string) + } else { + var err error + id, err = newMetricID() + if err != nil { + return errwrap.Wrapf("unable to create a new metric ID: {{err}}", err) + } + } + + m := newMetric() + if err := m.ParseConfigMap(id, metricAttrs); err != nil { + return errwrap.Wrapf("unable to parse config: {{err}}", err) + } + + c.Metrics = append(c.Metrics, m.CheckBundleMetric) + } + } + + if v, found := d.GetOk(checkTagsAttr); found { + c.Tags = derefStringList(flattenSet(v.(*schema.Set))) + } + + if v, found := d.GetOk(checkTargetAttr); found { + c.Target = v.(string) + } + + if v, found := d.GetOk(checkTimeoutAttr); found { + d, err := time.ParseDuration(v.(string)) + if err != nil { + return errwrap.Wrapf(fmt.Sprintf("unable to parse %q as a duration: {{err}}", checkTimeoutAttr), err) + } + + t := float32(d.Seconds()) + c.Timeout = t + } + + // Last step: parse the individual check types + if err := checkConfigToAPI(c, d); err != nil { + return errwrap.Wrapf("unable to parse check type: {{err}}", err) + } + + if err := c.Fixup(); err != nil { + return err + } + + if err := c.Validate(); err != nil { + return err + } + + return nil +} + +// checkConfigToAPI parses the Terraform config into the respective per-check +// type api.Config attributes. 
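+//
+// For example, a config block such as
+//
+//	mysql {
+//		dsn   = "..."
+//		query = "..."
+//	}
+//
+// is routed through the dispatch table below to checkConfigToAPIMySQL, which
+// sets c.Type and the corresponding api.Config keys (config.DSN and
+// config.SQL).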
+func checkConfigToAPI(c *circonusCheck, d *schema.ResourceData) error { + checkTypeParseMap := map[string]func(*circonusCheck, interfaceList) error{ + checkCAQLAttr: checkConfigToAPICAQL, + checkCloudWatchAttr: checkConfigToAPICloudWatch, + checkHTTPAttr: checkConfigToAPIHTTP, + checkHTTPTrapAttr: checkConfigToAPIHTTPTrap, + checkICMPPingAttr: checkConfigToAPIICMPPing, + checkJSONAttr: checkConfigToAPIJSON, + checkMySQLAttr: checkConfigToAPIMySQL, + checkPostgreSQLAttr: checkConfigToAPIPostgreSQL, + checkStatsdAttr: checkConfigToAPIStatsd, + checkTCPAttr: checkConfigToAPITCP, + } + + for checkType, fn := range checkTypeParseMap { + if listRaw, found := d.GetOk(checkType); found { + if err := fn(c, listRaw.(*schema.Set).List()); err != nil { + return errwrap.Wrapf(fmt.Sprintf("Unable to parse type %q: {{err}}", string(checkType)), err) + } + } + } + + return nil +} + +// parseCheckTypeConfig parses an API Config object and stores the result in the +// statefile. +func parseCheckTypeConfig(c *circonusCheck, d *schema.ResourceData) error { + checkTypeConfigHandlers := map[apiCheckType]func(*circonusCheck, *schema.ResourceData) error{ + apiCheckTypeCAQLAttr: checkAPIToStateCAQL, + apiCheckTypeCloudWatchAttr: checkAPIToStateCloudWatch, + apiCheckTypeHTTPAttr: checkAPIToStateHTTP, + apiCheckTypeHTTPTrapAttr: checkAPIToStateHTTPTrap, + apiCheckTypeICMPPingAttr: checkAPIToStateICMPPing, + apiCheckTypeJSONAttr: checkAPIToStateJSON, + apiCheckTypeMySQLAttr: checkAPIToStateMySQL, + apiCheckTypePostgreSQLAttr: checkAPIToStatePostgreSQL, + apiCheckTypeStatsdAttr: checkAPIToStateStatsd, + apiCheckTypeTCPAttr: checkAPIToStateTCP, + } + + var checkType apiCheckType = apiCheckType(c.Type) + fn, ok := checkTypeConfigHandlers[checkType] + if !ok { + return fmt.Errorf("check type %q not supported", c.Type) + } + + if err := fn(c, d); err != nil { + return errwrap.Wrapf(fmt.Sprintf("unable to parse the API config for %q: {{err}}", c.Type), err) + } + + return nil +} diff --git a/builtin/providers/circonus/resource_circonus_check_caql.go b/builtin/providers/circonus/resource_circonus_check_caql.go new file mode 100644 index 0000000000..a3d876b636 --- /dev/null +++ b/builtin/providers/circonus/resource_circonus_check_caql.go @@ -0,0 +1,89 @@ +package circonus + +import ( + "bytes" + "fmt" + "strings" + + "github.com/circonus-labs/circonus-gometrics/api/config" + "github.com/hashicorp/errwrap" + "github.com/hashicorp/terraform/helper/hashcode" + "github.com/hashicorp/terraform/helper/schema" +) + +const ( + // circonus_check.caql.* resource attribute names + checkCAQLQueryAttr = "query" +) + +var checkCAQLDescriptions = attrDescrs{ + checkCAQLQueryAttr: "The query definition", +} + +var schemaCheckCAQL = &schema.Schema{ + Type: schema.TypeSet, + Optional: true, + MaxItems: 1, + MinItems: 1, + Set: hashCheckCAQL, + Elem: &schema.Resource{ + Schema: convertToHelperSchema(checkCAQLDescriptions, map[schemaAttr]*schema.Schema{ + checkCAQLQueryAttr: &schema.Schema{ + Type: schema.TypeString, + Required: true, + ValidateFunc: validateRegexp(checkCAQLQueryAttr, `.+`), + }, + }), + }, +} + +// checkAPIToStateCAQL reads the Config data out of circonusCheck.CheckBundle +// into the statefile. 
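+//
+// This is the inverse of checkConfigToAPICAQL below: config.Query is copied
+// out of the API object into a single-element set (the schema caps caql at
+// MaxItems 1) so Terraform can diff it against the configuration.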
+func checkAPIToStateCAQL(c *circonusCheck, d *schema.ResourceData) error { + caqlConfig := make(map[string]interface{}, len(c.Config)) + + caqlConfig[string(checkCAQLQueryAttr)] = c.Config[config.Query] + + if err := d.Set(checkCAQLAttr, schema.NewSet(hashCheckCAQL, []interface{}{caqlConfig})); err != nil { + return errwrap.Wrapf(fmt.Sprintf("Unable to store check %q attribute: {{err}}", checkCAQLAttr), err) + } + + return nil +} + +// hashCheckCAQL creates a stable hash of the normalized values +func hashCheckCAQL(v interface{}) int { + m := v.(map[string]interface{}) + b := &bytes.Buffer{} + b.Grow(defaultHashBufSize) + + writeString := func(attrName schemaAttr) { + if v, ok := m[string(attrName)]; ok && v.(string) != "" { + fmt.Fprint(b, strings.TrimSpace(v.(string))) + } + } + + // Order writes to the buffer using lexically sorted list for easy visual + // reconciliation with other lists. + writeString(checkCAQLQueryAttr) + + s := b.String() + return hashcode.String(s) +} + +func checkConfigToAPICAQL(c *circonusCheck, l interfaceList) error { + c.Type = string(apiCheckTypeCAQL) + c.Target = defaultCheckCAQLTarget + + // Iterate over all `caql` attributes, even though we have a max of 1 in the + // schema. + for _, mapRaw := range l { + caqlConfig := newInterfaceMap(mapRaw) + + if v, found := caqlConfig[checkCAQLQueryAttr]; found { + c.Config[config.Query] = v.(string) + } + } + + return nil +} diff --git a/builtin/providers/circonus/resource_circonus_check_caql_test.go b/builtin/providers/circonus/resource_circonus_check_caql_test.go new file mode 100644 index 0000000000..5efcb6ad93 --- /dev/null +++ b/builtin/providers/circonus/resource_circonus_check_caql_test.go @@ -0,0 +1,74 @@ +package circonus + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" +) + +func TestAccCirconusCheckCAQL_basic(t *testing.T) { + checkName := fmt.Sprintf("Consul's Go GC latency (Merged Histogram) - %s", acctest.RandString(5)) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckDestroyCirconusCheckBundle, + Steps: []resource.TestStep{ + { + Config: fmt.Sprintf(testAccCirconusCheckCAQLConfigFmt, checkName), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("circonus_check.go_gc_latency", "active", "true"), + resource.TestCheckResourceAttr("circonus_check.go_gc_latency", "collector.#", "1"), + resource.TestCheckResourceAttr("circonus_check.go_gc_latency", "collector.36214388.id", "/broker/1490"), + resource.TestCheckResourceAttr("circonus_check.go_gc_latency", "caql.#", "1"), + resource.TestCheckResourceAttr("circonus_check.go_gc_latency", "caql.4060628048.query", `search:metric:histogram("*consul*runtime`+"`"+`gc_pause_ns* (active:1)") | histogram:merge() | histogram:percentile(99)`), + resource.TestCheckResourceAttr("circonus_check.go_gc_latency", "name", checkName), + resource.TestCheckResourceAttr("circonus_check.go_gc_latency", "period", "60s"), + resource.TestCheckResourceAttr("circonus_check.go_gc_latency", "metric.#", "1"), + + resource.TestCheckResourceAttr("circonus_check.go_gc_latency", "tags.#", "4"), + resource.TestCheckResourceAttr("circonus_check.go_gc_latency", "tags.3728194417", "app:consul"), + resource.TestCheckResourceAttr("circonus_check.go_gc_latency", "tags.2087084518", "author:terraform"), + resource.TestCheckResourceAttr("circonus_check.go_gc_latency", "tags.1401442048", 
"lifecycle:unittest"), + resource.TestCheckResourceAttr("circonus_check.go_gc_latency", "tags.3480593708", "source:goruntime"), + resource.TestCheckResourceAttr("circonus_check.go_gc_latency", "target", "q._caql"), + resource.TestCheckResourceAttr("circonus_check.go_gc_latency", "type", "caql"), + ), + }, + }, + }) +} + +const testAccCirconusCheckCAQLConfigFmt = ` +variable "test_tags" { + type = "list" + default = [ "app:consul", "author:terraform", "lifecycle:unittest", "source:goruntime" ] +} + +resource "circonus_check" "go_gc_latency" { + active = true + name = "%s" + period = "60s" + + collector { + id = "/broker/1490" + } + + caql { + query = <"), + resource.TestCheckResourceAttr("circonus_check.usage", "json.2626248092.version", "1.0"), + resource.TestCheckResourceAttr("circonus_check.usage", "json.2626248092.method", "GET"), + resource.TestCheckResourceAttr("circonus_check.usage", "json.2626248092.port", "443"), + resource.TestCheckResourceAttr("circonus_check.usage", "json.2626248092.read_limit", "1048576"), + resource.TestCheckResourceAttr("circonus_check.usage", "json.2626248092.url", "https://api.circonus.com/account/current"), + resource.TestCheckResourceAttr("circonus_check.usage", "name", "Terraform test: api.circonus.com metric usage check"), + resource.TestCheckResourceAttr("circonus_check.usage", "notes", ""), + resource.TestCheckResourceAttr("circonus_check.usage", "period", "60s"), + resource.TestCheckResourceAttr("circonus_check.usage", "metric.#", "2"), + resource.TestCheckResourceAttr("circonus_check.usage", "metric.1992097900.active", "true"), + resource.TestCheckResourceAttr("circonus_check.usage", "metric.1992097900.name", "_usage`0`_limit"), + resource.TestCheckResourceAttr("circonus_check.usage", "metric.1992097900.tags.#", "1"), + resource.TestCheckResourceAttr("circonus_check.usage", "metric.1992097900.tags.3241999189", "source:circonus"), + resource.TestCheckResourceAttr("circonus_check.usage", "metric.1992097900.type", "numeric"), + resource.TestCheckResourceAttr("circonus_check.usage", "metric.1992097900.unit", "qty"), + resource.TestCheckResourceAttr("circonus_check.usage", "metric.3280673139.active", "true"), + resource.TestCheckResourceAttr("circonus_check.usage", "metric.3280673139.name", "_usage`0`_used"), + resource.TestCheckResourceAttr("circonus_check.usage", "metric.3280673139.tags.#", "1"), + resource.TestCheckResourceAttr("circonus_check.usage", "metric.3280673139.tags.3241999189", "source:circonus"), + resource.TestCheckResourceAttr("circonus_check.usage", "metric.3280673139.type", "numeric"), + resource.TestCheckResourceAttr("circonus_check.usage", "metric.3280673139.unit", "qty"), + resource.TestCheckResourceAttr("circonus_check.usage", "tags.#", "2"), + resource.TestCheckResourceAttr("circonus_check.usage", "tags.3241999189", "source:circonus"), + resource.TestCheckResourceAttr("circonus_check.usage", "tags.1401442048", "lifecycle:unittest"), + resource.TestCheckResourceAttr("circonus_check.usage", "target", "api.circonus.com"), + resource.TestCheckResourceAttr("circonus_check.usage", "type", "json"), + ), + }, + { + Config: testAccCirconusCheckJSONConfig2, + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("circonus_check.usage", "active", "true"), + resource.TestCheckResourceAttr("circonus_check.usage", "collector.#", "1"), + resource.TestCheckResourceAttr("circonus_check.usage", "collector.2388330941.id", "/broker/1"), + resource.TestCheckResourceAttr("circonus_check.usage", "json.#", "1"), + // 
resource.TestCheckResourceAttr("circonus_check.usage", "json.3951979786.auth_method", ""), + // resource.TestCheckResourceAttr("circonus_check.usage", "json.3951979786.auth_password", ""), + // resource.TestCheckResourceAttr("circonus_check.usage", "json.3951979786.auth_user", ""), + // resource.TestCheckResourceAttr("circonus_check.usage", "json.3951979786.ca_chain", ""), + // resource.TestCheckResourceAttr("circonus_check.usage", "json.3951979786.certificate_file", ""), + // resource.TestCheckResourceAttr("circonus_check.usage", "json.3951979786.ciphers", ""), + // resource.TestCheckResourceAttr("circonus_check.usage", "json.3951979786.key_file", ""), + // resource.TestCheckResourceAttr("circonus_check.usage", "json.3951979786.payload", ""), + resource.TestCheckResourceAttr("circonus_check.usage", "json.3951979786.headers.%", "3"), + resource.TestCheckResourceAttr("circonus_check.usage", "json.3951979786.headers.Accept", "application/json"), + resource.TestCheckResourceAttr("circonus_check.usage", "json.3951979786.headers.X-Circonus-App-Name", "TerraformCheck"), + resource.TestCheckResourceAttr("circonus_check.usage", "json.3951979786.headers.X-Circonus-Auth-Token", ""), + resource.TestCheckResourceAttr("circonus_check.usage", "json.3951979786.version", "1.1"), + resource.TestCheckResourceAttr("circonus_check.usage", "json.3951979786.method", "GET"), + resource.TestCheckResourceAttr("circonus_check.usage", "json.3951979786.port", "443"), + resource.TestCheckResourceAttr("circonus_check.usage", "json.3951979786.read_limit", "1048576"), + resource.TestCheckResourceAttr("circonus_check.usage", "json.3951979786.url", "https://api.circonus.com/account/current"), + resource.TestCheckResourceAttr("circonus_check.usage", "name", "Terraform test: api.circonus.com metric usage check"), + resource.TestCheckResourceAttr("circonus_check.usage", "notes", "notes!"), + resource.TestCheckResourceAttr("circonus_check.usage", "period", "300s"), + resource.TestCheckResourceAttr("circonus_check.usage", "metric.#", "2"), + resource.TestCheckResourceAttr("circonus_check.usage", "metric.1992097900.active", "true"), + resource.TestCheckResourceAttr("circonus_check.usage", "metric.1992097900.name", "_usage`0`_limit"), + resource.TestCheckResourceAttr("circonus_check.usage", "metric.1992097900.tags.#", "1"), + resource.TestCheckResourceAttr("circonus_check.usage", "metric.1992097900.tags.3241999189", "source:circonus"), + resource.TestCheckResourceAttr("circonus_check.usage", "metric.1992097900.type", "numeric"), + resource.TestCheckResourceAttr("circonus_check.usage", "metric.1992097900.unit", "qty"), + resource.TestCheckResourceAttr("circonus_check.usage", "metric.3280673139.active", "true"), + resource.TestCheckResourceAttr("circonus_check.usage", "metric.3280673139.name", "_usage`0`_used"), + resource.TestCheckResourceAttr("circonus_check.usage", "metric.3280673139.tags.#", "1"), + resource.TestCheckResourceAttr("circonus_check.usage", "metric.3280673139.tags.3241999189", "source:circonus"), + resource.TestCheckResourceAttr("circonus_check.usage", "metric.3280673139.type", "numeric"), + resource.TestCheckResourceAttr("circonus_check.usage", "metric.3280673139.unit", "qty"), + resource.TestCheckResourceAttr("circonus_check.usage", "tags.#", "2"), + resource.TestCheckResourceAttr("circonus_check.usage", "tags.3241999189", "source:circonus"), + resource.TestCheckResourceAttr("circonus_check.usage", "tags.1401442048", "lifecycle:unittest"), + resource.TestCheckResourceAttr("circonus_check.usage", "target", 
"api.circonus.com"), + resource.TestCheckResourceAttr("circonus_check.usage", "type", "json"), + ), + }, + }, + }) +} + +const testAccCirconusCheckJSONConfig1 = ` +variable "usage_default_unit" { + default = "qty" +} + +resource "circonus_metric" "limit" { + name = "_usage` + "`0`" + `_limit" + tags = [ "source:circonus" ] + type = "numeric" + unit = "${var.usage_default_unit}" +} + +resource "circonus_metric" "used" { + name = "_usage` + "`0`" + `_used" + tags = [ "source:circonus" ] + type = "numeric" + unit = "${var.usage_default_unit}" +} + +resource "circonus_check" "usage" { + active = true + name = "Terraform test: api.circonus.com metric usage check" + period = "60s" + + collector { + id = "/broker/1" + } + + json { + url = "https://api.circonus.com/account/current" + headers = { + Accept = "application/json", + X-Circonus-App-Name = "TerraformCheck", + X-Circonus-Auth-Token = "", + } + version = "1.0" + method = "GET" + port = 443 + read_limit = 1048576 + } + + metric { + name = "${circonus_metric.used.name}" + tags = [ "${circonus_metric.used.tags}" ] + type = "${circonus_metric.used.type}" + unit = "${coalesce(circonus_metric.used.unit, var.usage_default_unit)}" + } + + metric { + name = "${circonus_metric.limit.name}" + tags = [ "${circonus_metric.limit.tags}" ] + type = "${circonus_metric.limit.type}" + unit = "${coalesce(circonus_metric.limit.unit, var.usage_default_unit)}" + } + + tags = [ "source:circonus", "lifecycle:unittest" ] +} +` + +const testAccCirconusCheckJSONConfig2 = ` +variable "usage_default_unit" { + default = "qty" +} + +resource "circonus_metric" "limit" { + name = "_usage` + "`0`" + `_limit" + tags = [ "source:circonus" ] + type = "numeric" + unit = "${var.usage_default_unit}" +} + +resource "circonus_metric" "used" { + name = "_usage` + "`0`" + `_used" + tags = [ "source:circonus" ] + type = "numeric" + unit = "${var.usage_default_unit}" +} + +resource "circonus_check" "usage" { + active = true + name = "Terraform test: api.circonus.com metric usage check" + notes = "notes!" 
+ period = "300s" + + collector { + id = "/broker/1" + } + + json { + url = "https://api.circonus.com/account/current" + headers = { + Accept = "application/json", + X-Circonus-App-Name = "TerraformCheck", + X-Circonus-Auth-Token = "", + } + version = "1.1" + method = "GET" + port = 443 + read_limit = 1048576 + } + + metric { + name = "${circonus_metric.used.name}" + tags = [ "${circonus_metric.used.tags}" ] + type = "${circonus_metric.used.type}" + unit = "${coalesce(circonus_metric.used.unit, var.usage_default_unit)}" + } + + metric { + name = "${circonus_metric.limit.name}" + tags = [ "${circonus_metric.limit.tags}" ] + type = "${circonus_metric.limit.type}" + unit = "${coalesce(circonus_metric.limit.unit, var.usage_default_unit)}" + } + + tags = [ "source:circonus", "lifecycle:unittest" ] +} +` diff --git a/builtin/providers/circonus/resource_circonus_check_mysql.go b/builtin/providers/circonus/resource_circonus_check_mysql.go new file mode 100644 index 0000000000..3fe2094ed0 --- /dev/null +++ b/builtin/providers/circonus/resource_circonus_check_mysql.go @@ -0,0 +1,102 @@ +package circonus + +import ( + "bytes" + "fmt" + "strings" + + "github.com/circonus-labs/circonus-gometrics/api/config" + "github.com/hashicorp/errwrap" + "github.com/hashicorp/terraform/helper/hashcode" + "github.com/hashicorp/terraform/helper/schema" +) + +const ( + // circonus_check.mysql.* resource attribute names + checkMySQLDSNAttr = "dsn" + checkMySQLQueryAttr = "query" +) + +var checkMySQLDescriptions = attrDescrs{ + checkMySQLDSNAttr: "The connect DSN for the MySQL instance", + checkMySQLQueryAttr: "The SQL to use as the query", +} + +var schemaCheckMySQL = &schema.Schema{ + Type: schema.TypeSet, + Optional: true, + MaxItems: 1, + MinItems: 1, + Set: hashCheckMySQL, + Elem: &schema.Resource{ + Schema: convertToHelperSchema(checkMySQLDescriptions, map[schemaAttr]*schema.Schema{ + checkMySQLDSNAttr: &schema.Schema{ + Type: schema.TypeString, + Required: true, + ValidateFunc: validateRegexp(checkMySQLDSNAttr, `^.+$`), + }, + checkMySQLQueryAttr: &schema.Schema{ + Type: schema.TypeString, + Required: true, + StateFunc: func(v interface{}) string { return strings.TrimSpace(v.(string)) }, + ValidateFunc: validateRegexp(checkMySQLQueryAttr, `.+`), + }, + }), + }, +} + +// checkAPIToStateMySQL reads the Config data out of circonusCheck.CheckBundle into the +// statefile. +func checkAPIToStateMySQL(c *circonusCheck, d *schema.ResourceData) error { + MySQLConfig := make(map[string]interface{}, len(c.Config)) + + MySQLConfig[string(checkMySQLDSNAttr)] = c.Config[config.DSN] + MySQLConfig[string(checkMySQLQueryAttr)] = c.Config[config.SQL] + + if err := d.Set(checkMySQLAttr, schema.NewSet(hashCheckMySQL, []interface{}{MySQLConfig})); err != nil { + return errwrap.Wrapf(fmt.Sprintf("Unable to store check %q attribute: {{err}}", checkMySQLAttr), err) + } + + return nil +} + +// hashCheckMySQL creates a stable hash of the normalized values +func hashCheckMySQL(v interface{}) int { + m := v.(map[string]interface{}) + b := &bytes.Buffer{} + b.Grow(defaultHashBufSize) + + writeString := func(attrName schemaAttr) { + if v, ok := m[string(attrName)]; ok && v.(string) != "" { + fmt.Fprint(b, strings.TrimSpace(v.(string))) + } + } + + // Order writes to the buffer using lexically sorted list for easy visual + // reconciliation with other lists. 
+ writeString(checkMySQLDSNAttr) + writeString(checkMySQLQueryAttr) + + s := b.String() + return hashcode.String(s) +} + +func checkConfigToAPIMySQL(c *circonusCheck, l interfaceList) error { + c.Type = string(apiCheckTypeMySQL) + + // Iterate over all `mysql` attributes, even though we have a max of 1 in the + // schema. + for _, mapRaw := range l { + mysqlConfig := newInterfaceMap(mapRaw) + + if v, found := mysqlConfig[checkMySQLDSNAttr]; found { + c.Config[config.DSN] = v.(string) + } + + if v, found := mysqlConfig[checkMySQLQueryAttr]; found { + c.Config[config.SQL] = v.(string) + } + } + + return nil +} diff --git a/builtin/providers/circonus/resource_circonus_check_mysql_test.go b/builtin/providers/circonus/resource_circonus_check_mysql_test.go new file mode 100644 index 0000000000..063cc54b4c --- /dev/null +++ b/builtin/providers/circonus/resource_circonus_check_mysql_test.go @@ -0,0 +1,80 @@ +package circonus + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" +) + +func TestAccCirconusCheckMySQL_basic(t *testing.T) { + checkName := fmt.Sprintf("MySQL binlog total - %s", acctest.RandString(5)) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckDestroyCirconusCheckBundle, + Steps: []resource.TestStep{ + { + Config: fmt.Sprintf(testAccCirconusCheckMySQLConfigFmt, checkName), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("circonus_check.table_ops", "active", "true"), + resource.TestCheckResourceAttr("circonus_check.table_ops", "collector.#", "1"), + resource.TestCheckResourceAttr("circonus_check.table_ops", "collector.2388330941.id", "/broker/1"), + resource.TestCheckResourceAttr("circonus_check.table_ops", "mysql.#", "1"), + resource.TestCheckResourceAttr("circonus_check.table_ops", "mysql.3110376931.dsn", "user=mysql host=mydb1.example.org port=3306 password=12345 sslmode=require"), + resource.TestCheckResourceAttr("circonus_check.table_ops", "mysql.3110376931.query", `select 'binlog', total from (select variable_value as total from information_schema.global_status where variable_name='BINLOG_CACHE_USE') total`), + resource.TestCheckResourceAttr("circonus_check.table_ops", "name", checkName), + resource.TestCheckResourceAttr("circonus_check.table_ops", "period", "300s"), + resource.TestCheckResourceAttr("circonus_check.table_ops", "metric.#", "1"), + + resource.TestCheckResourceAttr("circonus_check.table_ops", "metric.885029470.name", "binlog`total"), + resource.TestCheckResourceAttr("circonus_check.table_ops", "metric.885029470.tags.#", "2"), + resource.TestCheckResourceAttr("circonus_check.table_ops", "metric.885029470.tags.2087084518", "author:terraform"), + resource.TestCheckResourceAttr("circonus_check.table_ops", "metric.885029470.tags.1401442048", "lifecycle:unittest"), + resource.TestCheckResourceAttr("circonus_check.table_ops", "metric.885029470.type", "numeric"), + + resource.TestCheckResourceAttr("circonus_check.table_ops", "tags.#", "2"), + resource.TestCheckResourceAttr("circonus_check.table_ops", "tags.2087084518", "author:terraform"), + resource.TestCheckResourceAttr("circonus_check.table_ops", "tags.1401442048", "lifecycle:unittest"), + resource.TestCheckResourceAttr("circonus_check.table_ops", "target", "mydb.example.org"), + resource.TestCheckResourceAttr("circonus_check.table_ops", "type", "mysql"), + ), + }, + }, + }) +} + +const testAccCirconusCheckMySQLConfigFmt = 
` +variable "test_tags" { + type = "list" + default = [ "author:terraform", "lifecycle:unittest" ] +} + +resource "circonus_check" "table_ops" { + active = true + name = "%s" + period = "300s" + + collector { + id = "/broker/1" + } + + mysql { + dsn = "user=mysql host=mydb1.example.org port=3306 password=12345 sslmode=require" + query = < 1 { + alertOptionsList = append(alertOptionsList, *alertOptions[i]) + } + } + + return alertOptionsList +} + +func contactGroupEmailToState(cg *api.ContactGroup) []interface{} { + emailContacts := make([]interface{}, 0, len(cg.Contacts.Users)+len(cg.Contacts.External)) + + for _, ext := range cg.Contacts.External { + switch ext.Method { + case circonusMethodEmail: + emailContacts = append(emailContacts, map[string]interface{}{ + contactEmailAddressAttr: ext.Info, + }) + } + } + + for _, user := range cg.Contacts.Users { + switch user.Method { + case circonusMethodEmail: + emailContacts = append(emailContacts, map[string]interface{}{ + contactUserCIDAttr: user.UserCID, + }) + } + } + + return emailContacts +} + +func contactGroupHTTPToState(cg *api.ContactGroup) ([]interface{}, error) { + httpContacts := make([]interface{}, 0, len(cg.Contacts.External)) + + for _, ext := range cg.Contacts.External { + switch ext.Method { + case circonusMethodHTTP: + url := contactHTTPInfo{} + if err := json.Unmarshal([]byte(ext.Info), &url); err != nil { + return nil, errwrap.Wrapf(fmt.Sprintf("unable to decode external %s JSON (%q): {{err}}", contactHTTPAttr, ext.Info), err) + } + + httpContacts = append(httpContacts, map[string]interface{}{ + string(contactHTTPAddressAttr): url.Address, + string(contactHTTPFormatAttr): url.Format, + string(contactHTTPMethodAttr): url.Method, + }) + } + } + + return httpContacts, nil +} + +func getContactGroupInput(d *schema.ResourceData) (*api.ContactGroup, error) { + cg := api.NewContactGroup() + if v, ok := d.GetOk(contactAggregationWindowAttr); ok { + aggWindow, _ := time.ParseDuration(v.(string)) + cg.AggregationWindow = uint(aggWindow.Seconds()) + } + + if v, ok := d.GetOk(contactAlertOptionAttr); ok { + alertOptionsRaw := v.(*schema.Set).List() + + ensureEscalationSeverity := func(severity int) { + if cg.Escalations[severity] == nil { + cg.Escalations[severity] = &api.ContactGroupEscalation{} + } + } + + for _, alertOptionRaw := range alertOptionsRaw { + alertOptionsMap := alertOptionRaw.(map[string]interface{}) + + severityIndex := -1 + + if optRaw, ok := alertOptionsMap[contactSeverityAttr]; ok { + severityIndex = optRaw.(int) - 1 + } + + if optRaw, ok := alertOptionsMap[contactEscalateAfterAttr]; ok { + if optRaw.(string) != "" { + d, _ := time.ParseDuration(optRaw.(string)) + if d != 0 { + ensureEscalationSeverity(severityIndex) + cg.Escalations[severityIndex].After = uint(d.Seconds()) + } + } + } + + if optRaw, ok := alertOptionsMap[contactEscalateToAttr]; ok && optRaw.(string) != "" { + ensureEscalationSeverity(severityIndex) + cg.Escalations[severityIndex].ContactGroupCID = optRaw.(string) + } + + if optRaw, ok := alertOptionsMap[contactReminderAttr]; ok { + if optRaw.(string) == "" { + optRaw = "0s" + } + + d, _ := time.ParseDuration(optRaw.(string)) + cg.Reminders[severityIndex] = uint(d.Seconds()) + } + } + } + + if v, ok := d.GetOk(contactNameAttr); ok { + cg.Name = v.(string) + } + + if v, ok := d.GetOk(contactEmailAttr); ok { + emailListRaw := v.(*schema.Set).List() + for _, emailMapRaw := range emailListRaw { + emailMap := emailMapRaw.(map[string]interface{}) + + var requiredAttrFound bool + if v, ok := 
emailMap[contactEmailAddressAttr]; ok && v.(string) != "" { + requiredAttrFound = true + cg.Contacts.External = append(cg.Contacts.External, api.ContactGroupContactsExternal{ + Info: v.(string), + Method: circonusMethodEmail, + }) + } + + if v, ok := emailMap[contactUserCIDAttr]; ok && v.(string) != "" { + requiredAttrFound = true + cg.Contacts.Users = append(cg.Contacts.Users, api.ContactGroupContactsUser{ + Method: circonusMethodEmail, + UserCID: v.(string), + }) + } + + // Can't mark two attributes that are conflicting as required so we do our + // own validation check here. + if !requiredAttrFound { + return nil, fmt.Errorf("In type %s, either %s or %s must be specified", contactEmailAttr, contactEmailAddressAttr, contactUserCIDAttr) + } + } + } + + if v, ok := d.GetOk(contactHTTPAttr); ok { + httpListRaw := v.(*schema.Set).List() + for _, httpMapRaw := range httpListRaw { + httpMap := httpMapRaw.(map[string]interface{}) + + httpInfo := contactHTTPInfo{} + + if v, ok := httpMap[string(contactHTTPAddressAttr)]; ok { + httpInfo.Address = v.(string) + } + + if v, ok := httpMap[string(contactHTTPFormatAttr)]; ok { + httpInfo.Format = v.(string) + } + + if v, ok := httpMap[string(contactHTTPMethodAttr)]; ok { + httpInfo.Method = v.(string) + } + + js, err := json.Marshal(httpInfo) + if err != nil { + return nil, errwrap.Wrapf(fmt.Sprintf("error marshalling %s JSON config string: {{err}}", contactHTTPAttr), err) + } + + cg.Contacts.External = append(cg.Contacts.External, api.ContactGroupContactsExternal{ + Info: string(js), + Method: circonusMethodHTTP, + }) + } + } + + if v, ok := d.GetOk(contactIRCAttr); ok { + ircListRaw := v.(*schema.Set).List() + for _, ircMapRaw := range ircListRaw { + ircMap := ircMapRaw.(map[string]interface{}) + + if v, ok := ircMap[contactUserCIDAttr]; ok && v.(string) != "" { + cg.Contacts.Users = append(cg.Contacts.Users, api.ContactGroupContactsUser{ + Method: circonusMethodIRC, + UserCID: v.(string), + }) + } + } + } + + if v, ok := d.GetOk(contactPagerDutyAttr); ok { + pagerDutyListRaw := v.(*schema.Set).List() + for _, pagerDutyMapRaw := range pagerDutyListRaw { + pagerDutyMap := pagerDutyMapRaw.(map[string]interface{}) + + pagerDutyInfo := contactPagerDutyInfo{} + + if v, ok := pagerDutyMap[contactContactGroupFallbackAttr]; ok && v.(string) != "" { + cid := v.(string) + contactGroupID, err := failoverGroupCIDToID(api.CIDType(&cid)) + if err != nil { + return nil, errwrap.Wrapf("error reading contact group CID: {{err}}", err) + } + pagerDutyInfo.FallbackGroupCID = contactGroupID + } + + if v, ok := pagerDutyMap[string(contactPagerDutyServiceKeyAttr)]; ok { + pagerDutyInfo.ServiceKey = v.(string) + } + + if v, ok := pagerDutyMap[string(contactPagerDutyWebhookURLAttr)]; ok { + pagerDutyInfo.WebhookURL = v.(string) + } + + js, err := json.Marshal(pagerDutyInfo) + if err != nil { + return nil, errwrap.Wrapf(fmt.Sprintf("error marshalling %s JSON config string: {{err}}", contactPagerDutyAttr), err) + } + + cg.Contacts.External = append(cg.Contacts.External, api.ContactGroupContactsExternal{ + Info: string(js), + Method: circonusMethodPagerDuty, + }) + } + } + + if v, ok := d.GetOk(contactSlackAttr); ok { + slackListRaw := v.(*schema.Set).List() + for _, slackMapRaw := range slackListRaw { + slackMap := slackMapRaw.(map[string]interface{}) + + slackInfo := contactSlackInfo{} + + var buttons int + if v, ok := slackMap[contactSlackButtonsAttr]; ok { + if v.(bool) { + buttons = 1 + } + slackInfo.Buttons = buttons + } + + if v, ok := slackMap[contactSlackChannelAttr]; 
ok {
+				slackInfo.Channel = v.(string)
+			}
+
+			if v, ok := slackMap[contactContactGroupFallbackAttr]; ok && v.(string) != "" {
+				cid := v.(string)
+				contactGroupID, err := failoverGroupCIDToID(api.CIDType(&cid))
+				if err != nil {
+					return nil, errwrap.Wrapf("error reading contact group CID: {{err}}", err)
+				}
+				slackInfo.FallbackGroupCID = contactGroupID
+			}
+
+			if v, ok := slackMap[contactSlackTeamAttr]; ok {
+				slackInfo.Team = v.(string)
+			}
+
+			if v, ok := slackMap[contactSlackUsernameAttr]; ok {
+				slackInfo.Username = v.(string)
+			}
+
+			js, err := json.Marshal(slackInfo)
+			if err != nil {
+				return nil, errwrap.Wrapf(fmt.Sprintf("error marshalling %s JSON config string: {{err}}", contactSlackAttr), err)
+			}
+
+			cg.Contacts.External = append(cg.Contacts.External, api.ContactGroupContactsExternal{
+				Info:   string(js),
+				Method: circonusMethodSlack,
+			})
+		}
+	}
+
+	if v, ok := d.GetOk(contactSMSAttr); ok {
+		smsListRaw := v.(*schema.Set).List()
+		for _, smsMapRaw := range smsListRaw {
+			smsMap := smsMapRaw.(map[string]interface{})
+
+			var requiredAttrFound bool
+			if v, ok := smsMap[contactSMSAddressAttr]; ok && v.(string) != "" {
+				requiredAttrFound = true
+				cg.Contacts.External = append(cg.Contacts.External, api.ContactGroupContactsExternal{
+					Info:   v.(string),
+					Method: circonusMethodSMS,
+				})
+			}
+
+			if v, ok := smsMap[contactUserCIDAttr]; ok && v.(string) != "" {
+				requiredAttrFound = true
+				cg.Contacts.Users = append(cg.Contacts.Users, api.ContactGroupContactsUser{
+					Method:  circonusMethodSMS,
+					UserCID: v.(string),
+				})
+			}
+
+			// Can't mark two attributes that are conflicting as required so we do our
+			// own validation check here.
+			if !requiredAttrFound {
+				return nil, fmt.Errorf("In type %s, either %s or %s must be specified", contactSMSAttr, contactSMSAddressAttr, contactUserCIDAttr)
+			}
+		}
+	}
+
+	if v, ok := d.GetOk(contactVictorOpsAttr); ok {
+		victorOpsListRaw := v.(*schema.Set).List()
+		for _, victorOpsMapRaw := range victorOpsListRaw {
+			victorOpsMap := victorOpsMapRaw.(map[string]interface{})
+
+			victorOpsInfo := contactVictorOpsInfo{}
+
+			if v, ok := victorOpsMap[contactContactGroupFallbackAttr]; ok && v.(string) != "" {
+				cid := v.(string)
+				contactGroupID, err := failoverGroupCIDToID(api.CIDType(&cid))
+				if err != nil {
+					return nil, errwrap.Wrapf("error reading contact group CID: {{err}}", err)
+				}
+				victorOpsInfo.FallbackGroupCID = contactGroupID
+			}
+
+			if v, ok := victorOpsMap[contactVictorOpsAPIKeyAttr]; ok {
+				victorOpsInfo.APIKey = v.(string)
+			}
+
+			if v, ok := victorOpsMap[contactVictorOpsCriticalAttr]; ok {
+				victorOpsInfo.Critical = v.(int)
+			}
+
+			if v, ok := victorOpsMap[contactVictorOpsInfoAttr]; ok {
+				victorOpsInfo.Info = v.(int)
+			}
+
+			if v, ok := victorOpsMap[contactVictorOpsTeamAttr]; ok {
+				victorOpsInfo.Team = v.(string)
+			}
+
+			if v, ok := victorOpsMap[contactVictorOpsWarningAttr]; ok {
+				victorOpsInfo.Warning = v.(int)
+			}
+
+			js, err := json.Marshal(victorOpsInfo)
+			if err != nil {
+				return nil, errwrap.Wrapf(fmt.Sprintf("error marshalling %s JSON config string: {{err}}", contactVictorOpsAttr), err)
+			}
+
+			cg.Contacts.External = append(cg.Contacts.External, api.ContactGroupContactsExternal{
+				Info:   string(js),
+				Method: circonusMethodVictorOps,
+			})
+		}
+	}
+
+	if v, ok := d.GetOk(contactXMPPAttr); ok {
+		xmppListRaw := v.(*schema.Set).List()
+		for _, xmppMapRaw := range xmppListRaw {
+			xmppMap := xmppMapRaw.(map[string]interface{})
+
+			if v, ok := xmppMap[contactXMPPAddressAttr]; ok && v.(string) != "" {
+				cg.Contacts.External = append(cg.Contacts.External, api.ContactGroupContactsExternal{
+					Info:   v.(string),
+					Method: circonusMethodXMPP,
+				})
+			}
+
+			if v, ok := xmppMap[contactUserCIDAttr]; ok && v.(string) != "" {
+				cg.Contacts.Users = append(cg.Contacts.Users, api.ContactGroupContactsUser{
+					Method:  circonusMethodXMPP,
+					UserCID: v.(string),
+				})
+			}
+		}
+	}
+
+	if v, ok := d.GetOk(contactLongMessageAttr); ok {
+		msg := v.(string)
+		cg.AlertFormats.LongMessage = &msg
+	}
+
+	if v, ok := d.GetOk(contactLongSubjectAttr); ok {
+		msg := v.(string)
+		cg.AlertFormats.LongSubject = &msg
+	}
+
+	if v, ok := d.GetOk(contactLongSummaryAttr); ok {
+		msg := v.(string)
+		cg.AlertFormats.LongSummary = &msg
+	}
+
+	if v, ok := d.GetOk(contactShortMessageAttr); ok {
+		msg := v.(string)
+		cg.AlertFormats.ShortMessage = &msg
+	}
+
+	if v, ok := d.GetOk(contactShortSummaryAttr); ok {
+		msg := v.(string)
+		cg.AlertFormats.ShortSummary = &msg
+	}
+
+	if v, found := d.GetOk(checkTagsAttr); found {
+		cg.Tags = derefStringList(flattenSet(v.(*schema.Set)))
+	}
+
+	if err := validateContactGroup(cg); err != nil {
+		return nil, err
+	}
+
+	return cg, nil
+}
+
+func contactGroupIRCToState(cg *api.ContactGroup) []interface{} {
+	ircContacts := make([]interface{}, 0, len(cg.Contacts.Users))
+
+	for _, user := range cg.Contacts.Users {
+		switch user.Method {
+		case circonusMethodIRC:
+			ircContacts = append(ircContacts, map[string]interface{}{
+				contactUserCIDAttr: user.UserCID,
+			})
+		}
+	}
+
+	return ircContacts
+}
+
+func contactGroupPagerDutyToState(cg *api.ContactGroup) ([]interface{}, error) {
+	pdContacts := make([]interface{}, 0, len(cg.Contacts.External))
+
+	for _, ext := range cg.Contacts.External {
+		switch ext.Method {
+		case circonusMethodPagerDuty:
+			pdInfo := contactPagerDutyInfo{}
+			if err := json.Unmarshal([]byte(ext.Info), &pdInfo); err != nil {
+				return nil, errwrap.Wrapf(fmt.Sprintf("unable to decode external %s JSON (%q): {{err}}", contactPagerDutyAttr, ext.Info), err)
+			}
+
+			pdContacts = append(pdContacts, map[string]interface{}{
+				string(contactContactGroupFallbackAttr): failoverGroupIDToCID(pdInfo.FallbackGroupCID),
+				string(contactPagerDutyServiceKeyAttr):  pdInfo.ServiceKey,
+				string(contactPagerDutyWebhookURLAttr):  pdInfo.WebhookURL,
+			})
+		}
+	}
+
+	return pdContacts, nil
+}
+
+func contactGroupSlackToState(cg *api.ContactGroup) ([]interface{}, error) {
+	slackContacts := make([]interface{}, 0, len(cg.Contacts.External))
+
+	for _, ext := range cg.Contacts.External {
+		switch ext.Method {
+		case circonusMethodSlack:
+			slackInfo := contactSlackInfo{}
+			if err := json.Unmarshal([]byte(ext.Info), &slackInfo); err != nil {
+				return nil, errwrap.Wrapf(fmt.Sprintf("unable to decode external %s JSON (%q): {{err}}", contactSlackAttr, ext.Info), err)
+			}
+
+			slackContacts = append(slackContacts, map[string]interface{}{
+				contactContactGroupFallbackAttr: failoverGroupIDToCID(slackInfo.FallbackGroupCID),
+				contactSlackButtonsAttr:         slackInfo.Buttons == 1,
+				contactSlackChannelAttr:         slackInfo.Channel,
+				contactSlackTeamAttr:            slackInfo.Team,
+				contactSlackUsernameAttr:        slackInfo.Username,
+			})
+		}
+	}
+
+	return slackContacts, nil
+}
+
+func contactGroupSMSToState(cg *api.ContactGroup) ([]interface{}, error) {
+	smsContacts := make([]interface{}, 0, len(cg.Contacts.Users)+len(cg.Contacts.External))
+
+	for _, ext := range cg.Contacts.External {
+		switch ext.Method {
+		case circonusMethodSMS:
smsContacts = append(smsContacts, map[string]interface{}{ + contactSMSAddressAttr: ext.Info, + }) + } + } + + for _, user := range cg.Contacts.Users { + switch user.Method { + case circonusMethodSMS: + smsContacts = append(smsContacts, map[string]interface{}{ + contactUserCIDAttr: user.UserCID, + }) + } + } + + return smsContacts, nil +} + +func contactGroupVictorOpsToState(cg *api.ContactGroup) ([]interface{}, error) { + victorOpsContacts := make([]interface{}, 0, len(cg.Contacts.External)) + + for _, ext := range cg.Contacts.External { + switch ext.Method { + case circonusMethodVictorOps: + victorOpsInfo := contactVictorOpsInfo{} + if err := json.Unmarshal([]byte(ext.Info), &victorOpsInfo); err != nil { + return nil, errwrap.Wrapf(fmt.Sprintf("unable to decode external %s JSON (%q): {{err}}", contactVictorOpsInfoAttr, ext.Info), err) + } + + victorOpsContacts = append(victorOpsContacts, map[string]interface{}{ + contactContactGroupFallbackAttr: failoverGroupIDToCID(victorOpsInfo.FallbackGroupCID), + contactVictorOpsAPIKeyAttr: victorOpsInfo.APIKey, + contactVictorOpsCriticalAttr: victorOpsInfo.Critical, + contactVictorOpsInfoAttr: victorOpsInfo.Info, + contactVictorOpsTeamAttr: victorOpsInfo.Team, + contactVictorOpsWarningAttr: victorOpsInfo.Warning, + }) + } + } + + return victorOpsContacts, nil +} + +func contactGroupXMPPToState(cg *api.ContactGroup) ([]interface{}, error) { + xmppContacts := make([]interface{}, 0, len(cg.Contacts.Users)+len(cg.Contacts.External)) + + for _, ext := range cg.Contacts.External { + switch ext.Method { + case circonusMethodXMPP: + xmppContacts = append(xmppContacts, map[string]interface{}{ + contactXMPPAddressAttr: ext.Info, + }) + } + } + + for _, user := range cg.Contacts.Users { + switch user.Method { + case circonusMethodXMPP: + xmppContacts = append(xmppContacts, map[string]interface{}{ + contactUserCIDAttr: user.UserCID, + }) + } + } + + return xmppContacts, nil +} + +// contactGroupAlertOptionsChecksum creates a stable hash of the normalized values +func contactGroupAlertOptionsChecksum(v interface{}) int { + m := v.(map[string]interface{}) + b := &bytes.Buffer{} + b.Grow(defaultHashBufSize) + fmt.Fprintf(b, "%x", m[contactSeverityAttr].(int)) + fmt.Fprint(b, normalizeTimeDurationStringToSeconds(m[contactEscalateAfterAttr])) + fmt.Fprint(b, m[contactEscalateToAttr]) + fmt.Fprint(b, normalizeTimeDurationStringToSeconds(m[contactReminderAttr])) + return hashcode.String(b.String()) +} diff --git a/builtin/providers/circonus/resource_circonus_contact_test.go b/builtin/providers/circonus/resource_circonus_contact_test.go new file mode 100644 index 0000000000..64186f27df --- /dev/null +++ b/builtin/providers/circonus/resource_circonus_contact_test.go @@ -0,0 +1,241 @@ +package circonus + +import ( + "fmt" + "strings" + "testing" + + "github.com/circonus-labs/circonus-gometrics/api" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccCirconusContactGroup_basic(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckDestroyCirconusContactGroup, + Steps: []resource.TestStep{ + { + Config: testAccCirconusContactGroupConfig, + Check: resource.ComposeTestCheckFunc( + // testAccContactGroupExists("circonus_contact_group.staging-sev3", "foo"), + resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "name", "ops-staging-sev3"), + 
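+					// (Noted for illustration: the numeric path segments below, e.g.
+					// email.1119127802, are the schema.Set hashes that identify each
+					// element in state.)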
resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "email.#", "3"), + resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "email.1119127802.address", ""), + resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "email.1119127802.user", "/user/5469"), + resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "email.1456570992.address", ""), + resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "email.1456570992.user", "/user/6331"), + resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "email.343263208.address", "user@example.com"), + resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "email.343263208.user", ""), + resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "http.#", "1"), + resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "http.1287846151.address", "https://www.example.org/post/endpoint"), + resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "http.1287846151.format", "json"), + resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "http.1287846151.method", "POST"), + resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "irc.#", "0"), + // resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "irc.918937268.user", "/user/6331"), + resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "slack.#", "1"), + resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "slack.274933206.channel", "#ops-staging"), + resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "slack.274933206.team", "T123UT98F"), + resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "slack.274933206.username", "Circonus"), + resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "slack.274933206.buttons", "true"), + resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "sms.#", "1"), + resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "sms.1119127802.user", "/user/5469"), + + // xmpp.# will be 0 for user faux user accounts that don't have an + // XMPP address setup. 
+ resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "xmpp.#", "0"), + // resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "xmpp.1119127802.user", "/user/5469"), + resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "victorops.#", "1"), + resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "victorops.2029434450.api_key", "123"), + resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "victorops.2029434450.critical", "2"), + resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "victorops.2029434450.info", "5"), + resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "victorops.2029434450.team", "bender"), + resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "victorops.2029434450.warning", "3"), + resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "aggregation_window", "60s"), + resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "alert_option.#", "5"), + resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "alert_option.689365425.severity", "1"), + resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "alert_option.689365425.reminder", "60s"), + resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "alert_option.689365425.escalate_after", "3600s"), + resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "alert_option.689365425.escalate_to", "/contact_group/2913"), + resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "alert_option.551050940.severity", "2"), + resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "alert_option.551050940.reminder", "120s"), + resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "alert_option.551050940.escalate_after", "7200s"), + resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "alert_option.551050940.escalate_to", "/contact_group/2913"), + resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "alert_option.1292974544.severity", "3"), + resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "alert_option.1292974544.reminder", "180s"), + resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "alert_option.1292974544.escalate_after", "10800s"), + resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "alert_option.1292974544.escalate_to", "/contact_group/2913"), + resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "alert_option.1183354841.severity", "4"), + resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "alert_option.1183354841.reminder", "240s"), + resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "alert_option.1183354841.escalate_after", "14400s"), + resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "alert_option.1183354841.escalate_to", "/contact_group/2913"), + resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "alert_option.2942620849.severity", "5"), + resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "alert_option.2942620849.reminder", "300s"), + resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "alert_option.2942620849.escalate_after", "18000s"), + resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "alert_option.2942620849.escalate_to", "/contact_group/2913"), + 
resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "long_message", "a long message"), + resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "long_subject", "long subject"), + resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "long_summary", "long summary"), + resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "short_message", "short message"), + resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "short_summary", "short summary"), + resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "tags.#", "2"), + resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "tags.2087084518", "author:terraform"), + resource.TestCheckResourceAttr("circonus_contact_group.staging-sev3", "tags.393923453", "other:foo"), + ), + }, + }, + }) +} + +func testAccCheckDestroyCirconusContactGroup(s *terraform.State) error { + c := testAccProvider.Meta().(*providerContext) + + for _, rs := range s.RootModule().Resources { + if rs.Type != "circonus_contact_group" { + continue + } + + cid := rs.Primary.ID + exists, err := checkContactGroupExists(c, api.CIDType(&cid)) + switch { + case !exists: + // noop + case exists: + return fmt.Errorf("contact group still exists after destroy") + case err != nil: + return fmt.Errorf("Error checking contact group %s", err) + } + } + + return nil +} + +func checkContactGroupExists(c *providerContext, contactGroupCID api.CIDType) (bool, error) { + cb, err := c.client.FetchContactGroup(contactGroupCID) + if err != nil { + if strings.Contains(err.Error(), defaultCirconus404ErrorString) { + return false, nil + } + + return false, err + } + + if api.CIDType(&cb.CID) == contactGroupCID { + return true, nil + } + + return false, nil +} + +const testAccCirconusContactGroupConfig = ` +resource "circonus_contact_group" "staging-sev3" { + name = "ops-staging-sev3" + + email { + user = "/user/5469" + } + + email { + address = "user@example.com" + } + + email { + user = "/user/6331" + } + + http { + address = "https://www.example.org/post/endpoint" + format = "json" + method = "POST" + } + +/* + // Account needs to be setup with IRC before this can work. + irc { + user = "/user/6331" + } +*/ + +/* + pager_duty { + // NOTE(sean@): needs to be filled in + } +*/ + + slack { + channel = "#ops-staging" + team = "T123UT98F" + username = "Circonus" + buttons = true + } + + sms { + user = "/user/5469" + } + + victorops { + api_key = "123" + critical = 2 + info = 5 + team = "bender" + warning = 3 + } + + // Faux user accounts that don't have an XMPP address setup will not return a + // valid response in the future. 
+ //
+ // xmpp {
+ // user = "/user/5469"
+ // }
+
+ aggregation_window = "1m"
+
+ alert_option {
+ severity = 1
+ reminder = "60s"
+ escalate_after = "3600s"
+ escalate_to = "/contact_group/2913"
+ }
+
+ alert_option {
+ severity = 2
+ reminder = "2m"
+ escalate_after = "2h"
+ escalate_to = "/contact_group/2913"
+ }
+
+ alert_option {
+ severity = 3
+ reminder = "3m"
+ escalate_after = "3h"
+ escalate_to = "/contact_group/2913"
+ }
+
+ alert_option {
+ severity = 4
+ reminder = "4m"
+ escalate_after = "4h"
+ escalate_to = "/contact_group/2913"
+ }
+
+ alert_option {
+ severity = 5
+ reminder = "5m"
+ escalate_after = "5h"
+ escalate_to = "/contact_group/2913"
+ }
+
+ // alert_formats: omit to use defaults
+ long_message = "a long message"
+ long_subject = "long subject"
+ long_summary = "long summary"
+ short_message = "short message"
+ short_summary = "short summary"
+
+ tags = [
+ "author:terraform",
+ "other:foo",
+ ]
+}
+`
diff --git a/builtin/providers/circonus/resource_circonus_graph.go b/builtin/providers/circonus/resource_circonus_graph.go
new file mode 100644
index 0000000000..836e422636
--- /dev/null
+++ b/builtin/providers/circonus/resource_circonus_graph.go
@@ -0,0 +1,930 @@
+package circonus
+
+import (
+ "fmt"
+ "regexp"
+ "strconv"
+ "strings"
+
+ "github.com/circonus-labs/circonus-gometrics/api"
+ "github.com/circonus-labs/circonus-gometrics/api/config"
+ "github.com/hashicorp/errwrap"
+ "github.com/hashicorp/terraform/helper/schema"
+)
+
+const (
+ // circonus_graph.* resource attribute names
+ graphDescriptionAttr = "description"
+ graphLeftAttr = "left"
+ graphLineStyleAttr = "line_style"
+ graphMetricClusterAttr = "metric_cluster"
+ graphNameAttr = "name"
+ graphNotesAttr = "notes"
+ graphRightAttr = "right"
+ graphMetricAttr = "metric"
+ graphStyleAttr = "graph_style"
+ graphTagsAttr = "tags"
+
+ // circonus_graph.metric.* resource attribute names
+ graphMetricActiveAttr = "active"
+ graphMetricAlphaAttr = "alpha"
+ graphMetricAxisAttr = "axis"
+ graphMetricCAQLAttr = "caql"
+ graphMetricCheckAttr = "check"
+ graphMetricColorAttr = "color"
+ graphMetricFormulaAttr = "formula"
+ graphMetricFormulaLegendAttr = "legend_formula"
+ graphMetricFunctionAttr = "function"
+ graphMetricHumanNameAttr = "name"
+ graphMetricMetricTypeAttr = "metric_type"
+ graphMetricNameAttr = "metric_name"
+ graphMetricStackAttr = "stack"
+
+ // circonus_graph.metric_cluster.* resource attribute names
+ graphMetricClusterActiveAttr = "active"
+ graphMetricClusterAggregateAttr = "aggregate"
+ graphMetricClusterAxisAttr = "axis"
+ graphMetricClusterColorAttr = "color"
+ graphMetricClusterQueryAttr = "query"
+ graphMetricClusterHumanNameAttr = "name"
+
+ // circonus_graph.{left,right}.* resource attribute names
+ graphAxisLogarithmicAttr = "logarithmic"
+ graphAxisMaxAttr = "max"
+ graphAxisMinAttr = "min"
+)
+
+const (
+ apiGraphStyleLine = "line"
+)
+
+var graphDescriptions = attrDescrs{
+ // circonus_graph.* resource attribute names
+ graphDescriptionAttr: "",
+ graphLeftAttr: "",
+ graphLineStyleAttr: "How the line should change between points. 
A string containing either 'stepped', 'interpolated' or null.", + graphNameAttr: "", + graphNotesAttr: "", + graphRightAttr: "", + graphMetricAttr: "", + graphMetricClusterAttr: "", + graphStyleAttr: "", + graphTagsAttr: "", +} + +var graphMetricDescriptions = attrDescrs{ + // circonus_graph.metric.* resource attribute names + graphMetricActiveAttr: "", + graphMetricAlphaAttr: "", + graphMetricAxisAttr: "", + graphMetricCAQLAttr: "", + graphMetricCheckAttr: "", + graphMetricColorAttr: "", + graphMetricFormulaAttr: "", + graphMetricFormulaLegendAttr: "", + graphMetricFunctionAttr: "", + graphMetricMetricTypeAttr: "", + graphMetricHumanNameAttr: "", + graphMetricNameAttr: "", + graphMetricStackAttr: "", +} + +var graphMetricClusterDescriptions = attrDescrs{ + // circonus_graph.metric_cluster.* resource attribute names + graphMetricClusterActiveAttr: "", + graphMetricClusterAggregateAttr: "", + graphMetricClusterAxisAttr: "", + graphMetricClusterColorAttr: "", + graphMetricClusterQueryAttr: "", + graphMetricClusterHumanNameAttr: "", +} + +// NOTE(sean@): There is no way to set a description on map inputs, but if that +// does happen: +// +// var graphMetricAxisOptionDescriptions = attrDescrs{ +// // circonus_graph.if.value.over.* resource attribute names +// graphAxisLogarithmicAttr: "", +// graphAxisMaxAttr: "", +// graphAxisMinAttr: "", +// } + +func resourceGraph() *schema.Resource { + makeConflictsWith := func(in ...schemaAttr) []string { + out := make([]string, 0, len(in)) + for _, attr := range in { + out = append(out, string(graphMetricAttr)+"."+string(attr)) + } + return out + } + + return &schema.Resource{ + Create: graphCreate, + Read: graphRead, + Update: graphUpdate, + Delete: graphDelete, + Exists: graphExists, + Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, + + Schema: convertToHelperSchema(graphDescriptions, map[schemaAttr]*schema.Schema{ + graphDescriptionAttr: &schema.Schema{ + Type: schema.TypeString, + Optional: true, + StateFunc: suppressWhitespace, + }, + graphLeftAttr: &schema.Schema{ + Type: schema.TypeMap, + Elem: schema.TypeString, + Optional: true, + ValidateFunc: validateGraphAxisOptions, + }, + graphLineStyleAttr: &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Default: defaultGraphLineStyle, + ValidateFunc: validateStringIn(graphLineStyleAttr, validGraphLineStyles), + }, + graphNameAttr: &schema.Schema{ + Type: schema.TypeString, + Required: true, + ValidateFunc: validateRegexp(graphNameAttr, `.+`), + }, + graphNotesAttr: &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + graphRightAttr: &schema.Schema{ + Type: schema.TypeMap, + Elem: schema.TypeString, + Optional: true, + ValidateFunc: validateGraphAxisOptions, + }, + graphMetricAttr: &schema.Schema{ + Type: schema.TypeList, + Optional: true, + MinItems: 1, + Elem: &schema.Resource{ + Schema: convertToHelperSchema(graphMetricDescriptions, map[schemaAttr]*schema.Schema{ + graphMetricActiveAttr: &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Default: true, + }, + graphMetricAlphaAttr: &schema.Schema{ + Type: schema.TypeFloat, + Optional: true, + ValidateFunc: validateFuncs( + validateFloatMin(graphMetricAlphaAttr, 0.0), + validateFloatMax(graphMetricAlphaAttr, 1.0), + ), + }, + graphMetricAxisAttr: &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Default: "left", + ValidateFunc: validateStringIn(graphMetricAxisAttr, validAxisAttrs), + }, + graphMetricCAQLAttr: &schema.Schema{ + Type: schema.TypeString, + Optional: true, + 
ValidateFunc: validateRegexp(graphMetricCAQLAttr, `.+`),
+ ConflictsWith: makeConflictsWith(graphMetricCheckAttr, graphMetricNameAttr),
+ },
+ graphMetricCheckAttr: &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ ValidateFunc: validateRegexp(graphMetricCheckAttr, config.CheckCIDRegex),
+ ConflictsWith: makeConflictsWith(graphMetricCAQLAttr),
+ },
+ graphMetricColorAttr: &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ ValidateFunc: validateRegexp(graphMetricColorAttr, `^#[0-9a-fA-F]{6}$`),
+ },
+ graphMetricFormulaAttr: &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ ValidateFunc: validateRegexp(graphMetricFormulaAttr, `^.+$`),
+ },
+ graphMetricFormulaLegendAttr: &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ ValidateFunc: validateRegexp(graphMetricFormulaLegendAttr, `^.+$`),
+ },
+ graphMetricFunctionAttr: &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ Default: defaultGraphFunction,
+ ValidateFunc: validateStringIn(graphMetricFunctionAttr, validGraphFunctionValues),
+ },
+ graphMetricMetricTypeAttr: &schema.Schema{
+ Type: schema.TypeString,
+ Required: true,
+ ValidateFunc: validateStringIn(graphMetricMetricTypeAttr, validMetricTypes),
+ },
+ graphMetricHumanNameAttr: &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ ValidateFunc: validateRegexp(graphMetricHumanNameAttr, `.+`),
+ },
+ graphMetricNameAttr: &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ ValidateFunc: validateRegexp(graphMetricNameAttr, `^[\S]+$`),
+ },
+ graphMetricStackAttr: &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ ValidateFunc: validateRegexp(graphMetricStackAttr, `^[\d]*$`),
+ },
+ }),
+ },
+ },
+ graphMetricClusterAttr: &schema.Schema{
+ Type: schema.TypeList,
+ Optional: true,
+ MinItems: 1,
+ Elem: &schema.Resource{
+ Schema: convertToHelperSchema(graphMetricClusterDescriptions, map[schemaAttr]*schema.Schema{
+ graphMetricClusterActiveAttr: &schema.Schema{
+ Type: schema.TypeBool,
+ Optional: true,
+ Default: true,
+ },
+ graphMetricClusterAggregateAttr: &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ Default: "none",
+ ValidateFunc: validateStringIn(graphMetricClusterAggregateAttr, validAggregateFuncs),
+ },
+ graphMetricClusterAxisAttr: &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ Default: "left",
+ // Pass the axis attribute name (not the top-level metric_cluster
+ // attribute) so validation errors report the correct field.
+ ValidateFunc: validateStringIn(graphMetricClusterAxisAttr, validAxisAttrs),
+ },
+ graphMetricClusterColorAttr: &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ ValidateFunc: validateRegexp(graphMetricClusterColorAttr, `^#[0-9a-fA-F]{6}$`),
+ },
+ graphMetricClusterQueryAttr: &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ ValidateFunc: validateRegexp(graphMetricClusterQueryAttr, config.MetricClusterCIDRegex),
+ },
+ graphMetricClusterHumanNameAttr: &schema.Schema{
+ Type: schema.TypeString,
+ Required: true,
+ ValidateFunc: validateRegexp(graphMetricClusterHumanNameAttr, `.+`),
+ },
+ }),
+ },
+ },
+ graphStyleAttr: &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ Default: defaultGraphStyle,
+ ValidateFunc: validateStringIn(graphStyleAttr, validGraphStyles),
+ },
+ graphTagsAttr: tagMakeConfigSchema(graphTagsAttr),
+ }),
+ }
+}
+
+func graphCreate(d *schema.ResourceData, meta interface{}) error {
+ ctxt := meta.(*providerContext)
+ g := newGraph()
+ if err := g.ParseConfig(d); err != nil {
+ return errwrap.Wrapf("error parsing graph schema during create: {{err}}", err)
+ }
+
+ if err := g.Create(ctxt); err != nil {
+ return 
errwrap.Wrapf("error creating graph: {{err}}", err) + } + + d.SetId(g.CID) + + return graphRead(d, meta) +} + +func graphExists(d *schema.ResourceData, meta interface{}) (bool, error) { + ctxt := meta.(*providerContext) + + cid := d.Id() + g, err := ctxt.client.FetchGraph(api.CIDType(&cid)) + if err != nil { + if strings.Contains(err.Error(), defaultCirconus404ErrorString) { + return false, nil + } + + return false, err + } + + if g.CID == "" { + return false, nil + } + + return true, nil +} + +// graphRead pulls data out of the Graph object and stores it into the +// appropriate place in the statefile. +func graphRead(d *schema.ResourceData, meta interface{}) error { + ctxt := meta.(*providerContext) + + cid := d.Id() + g, err := loadGraph(ctxt, api.CIDType(&cid)) + if err != nil { + return err + } + + d.SetId(g.CID) + + metrics := make([]interface{}, 0, len(g.Datapoints)) + for _, datapoint := range g.Datapoints { + dataPointAttrs := make(map[string]interface{}, 13) // 13 == len(members in api.GraphDatapoint) + + dataPointAttrs[string(graphMetricActiveAttr)] = !datapoint.Hidden + + if datapoint.Alpha != nil && *datapoint.Alpha != 0 { + dataPointAttrs[string(graphMetricAlphaAttr)] = *datapoint.Alpha + } + + switch datapoint.Axis { + case "l", "": + dataPointAttrs[string(graphMetricAxisAttr)] = "left" + case "r": + dataPointAttrs[string(graphMetricAxisAttr)] = "right" + default: + return fmt.Errorf("PROVIDER BUG: Unsupported axis type %q", datapoint.Axis) + } + + if datapoint.CAQL != nil { + dataPointAttrs[string(graphMetricCAQLAttr)] = *datapoint.CAQL + } + + if datapoint.CheckID != 0 { + dataPointAttrs[string(graphMetricCheckAttr)] = fmt.Sprintf("%s/%d", config.CheckPrefix, datapoint.CheckID) + } + + if datapoint.Color != nil { + dataPointAttrs[string(graphMetricColorAttr)] = *datapoint.Color + } + + if datapoint.DataFormula != nil { + dataPointAttrs[string(graphMetricFormulaAttr)] = *datapoint.DataFormula + } + + switch datapoint.Derive.(type) { + case bool: + case string: + dataPointAttrs[string(graphMetricFunctionAttr)] = datapoint.Derive.(string) + default: + return fmt.Errorf("PROVIDER BUG: Unsupported type for derive: %T", datapoint.Derive) + } + + if datapoint.LegendFormula != nil { + dataPointAttrs[string(graphMetricFormulaLegendAttr)] = *datapoint.LegendFormula + } + + if datapoint.MetricName != "" { + dataPointAttrs[string(graphMetricNameAttr)] = datapoint.MetricName + } + + if datapoint.MetricType != "" { + dataPointAttrs[string(graphMetricMetricTypeAttr)] = datapoint.MetricType + } + + if datapoint.Name != "" { + dataPointAttrs[string(graphMetricHumanNameAttr)] = datapoint.Name + } + + if datapoint.Stack != nil { + dataPointAttrs[string(graphMetricStackAttr)] = fmt.Sprintf("%d", *datapoint.Stack) + } + + metrics = append(metrics, dataPointAttrs) + } + + metricClusters := make([]interface{}, 0, len(g.MetricClusters)) + for _, metricCluster := range g.MetricClusters { + metricClusterAttrs := make(map[string]interface{}, 8) // 8 == len(num struct attrs in api.GraphMetricCluster) + + metricClusterAttrs[string(graphMetricClusterActiveAttr)] = !metricCluster.Hidden + + if metricCluster.AggregateFunc != "" { + metricClusterAttrs[string(graphMetricClusterAggregateAttr)] = metricCluster.AggregateFunc + } + + switch metricCluster.Axis { + case "l", "": + metricClusterAttrs[string(graphMetricClusterAxisAttr)] = "left" + case "r": + metricClusterAttrs[string(graphMetricClusterAxisAttr)] = "right" + default: + return fmt.Errorf("PROVIDER BUG: Unsupported axis type %q", metricCluster.Axis) 
+ } + + if metricCluster.Color != nil { + metricClusterAttrs[string(graphMetricClusterColorAttr)] = *metricCluster.Color + } + + if metricCluster.DataFormula != nil { + metricClusterAttrs[string(graphMetricFormulaAttr)] = *metricCluster.DataFormula + } + + if metricCluster.LegendFormula != nil { + metricClusterAttrs[string(graphMetricFormulaLegendAttr)] = *metricCluster.LegendFormula + } + + if metricCluster.MetricCluster != "" { + metricClusterAttrs[string(graphMetricClusterQueryAttr)] = metricCluster.MetricCluster + } + + if metricCluster.Name != "" { + metricClusterAttrs[string(graphMetricHumanNameAttr)] = metricCluster.Name + } + + if metricCluster.Stack != nil { + metricClusterAttrs[string(graphMetricStackAttr)] = fmt.Sprintf("%d", *metricCluster.Stack) + } + + metricClusters = append(metricClusters, metricClusterAttrs) + } + + leftAxisMap := make(map[string]interface{}, 3) + if g.LogLeftY != nil { + leftAxisMap[string(graphAxisLogarithmicAttr)] = fmt.Sprintf("%d", *g.LogLeftY) + } + + if g.MaxLeftY != nil { + leftAxisMap[string(graphAxisMaxAttr)] = strconv.FormatFloat(*g.MaxLeftY, 'f', -1, 64) + } + + if g.MinLeftY != nil { + leftAxisMap[string(graphAxisMinAttr)] = strconv.FormatFloat(*g.MinLeftY, 'f', -1, 64) + } + + rightAxisMap := make(map[string]interface{}, 3) + if g.LogRightY != nil { + rightAxisMap[string(graphAxisLogarithmicAttr)] = fmt.Sprintf("%d", *g.LogRightY) + } + + if g.MaxRightY != nil { + rightAxisMap[string(graphAxisMaxAttr)] = strconv.FormatFloat(*g.MaxRightY, 'f', -1, 64) + } + + if g.MinRightY != nil { + rightAxisMap[string(graphAxisMinAttr)] = strconv.FormatFloat(*g.MinRightY, 'f', -1, 64) + } + + d.Set(graphDescriptionAttr, g.Description) + + if err := d.Set(graphLeftAttr, leftAxisMap); err != nil { + return errwrap.Wrapf(fmt.Sprintf("Unable to store graph %q attribute: {{err}}", graphLeftAttr), err) + } + + d.Set(graphLineStyleAttr, g.LineStyle) + d.Set(graphNameAttr, g.Title) + d.Set(graphNotesAttr, indirect(g.Notes)) + + if err := d.Set(graphRightAttr, rightAxisMap); err != nil { + return errwrap.Wrapf(fmt.Sprintf("Unable to store graph %q attribute: {{err}}", graphRightAttr), err) + } + + if err := d.Set(graphMetricAttr, metrics); err != nil { + return errwrap.Wrapf(fmt.Sprintf("Unable to store graph %q attribute: {{err}}", graphMetricAttr), err) + } + + if err := d.Set(graphMetricClusterAttr, metricClusters); err != nil { + return errwrap.Wrapf(fmt.Sprintf("Unable to store graph %q attribute: {{err}}", graphMetricClusterAttr), err) + } + + d.Set(graphStyleAttr, g.Style) + + if err := d.Set(graphTagsAttr, tagsToState(apiToTags(g.Tags))); err != nil { + return errwrap.Wrapf(fmt.Sprintf("Unable to store graph %q attribute: {{err}}", graphTagsAttr), err) + } + + return nil +} + +func graphUpdate(d *schema.ResourceData, meta interface{}) error { + ctxt := meta.(*providerContext) + g := newGraph() + if err := g.ParseConfig(d); err != nil { + return err + } + + g.CID = d.Id() + if err := g.Update(ctxt); err != nil { + return errwrap.Wrapf(fmt.Sprintf("unable to update graph %q: {{err}}", d.Id()), err) + } + + return graphRead(d, meta) +} + +func graphDelete(d *schema.ResourceData, meta interface{}) error { + ctxt := meta.(*providerContext) + + cid := d.Id() + if _, err := ctxt.client.DeleteGraphByCID(api.CIDType(&cid)); err != nil { + return errwrap.Wrapf(fmt.Sprintf("unable to delete graph %q: {{err}}", d.Id()), err) + } + + d.SetId("") + + return nil +} + +type circonusGraph struct { + api.Graph +} + +func newGraph() circonusGraph { + g := circonusGraph{ + 
Graph: *api.NewGraph(), + } + + return g +} + +func loadGraph(ctxt *providerContext, cid api.CIDType) (circonusGraph, error) { + var g circonusGraph + ng, err := ctxt.client.FetchGraph(cid) + if err != nil { + return circonusGraph{}, err + } + g.Graph = *ng + + return g, nil +} + +// ParseConfig reads Terraform config data and stores the information into a +// Circonus Graph object. ParseConfig and graphRead() must be kept in sync. +func (g *circonusGraph) ParseConfig(d *schema.ResourceData) error { + g.Datapoints = make([]api.GraphDatapoint, 0, defaultGraphDatapoints) + + if v, found := d.GetOk(graphLeftAttr); found { + listRaw := v.(map[string]interface{}) + leftAxisMap := make(map[string]interface{}, len(listRaw)) + for k, v := range listRaw { + leftAxisMap[k] = v + } + + if v, ok := leftAxisMap[string(graphAxisLogarithmicAttr)]; ok { + i64, _ := strconv.ParseInt(v.(string), 10, 64) + i := int(i64) + g.LogLeftY = &i + } + + if v, ok := leftAxisMap[string(graphAxisMaxAttr)]; ok && v.(string) != "" { + f, _ := strconv.ParseFloat(v.(string), 64) + g.MaxLeftY = &f + } + + if v, ok := leftAxisMap[string(graphAxisMinAttr)]; ok && v.(string) != "" { + f, _ := strconv.ParseFloat(v.(string), 64) + g.MinLeftY = &f + } + } + + if v, found := d.GetOk(graphRightAttr); found { + listRaw := v.(map[string]interface{}) + rightAxisMap := make(map[string]interface{}, len(listRaw)) + for k, v := range listRaw { + rightAxisMap[k] = v + } + + if v, ok := rightAxisMap[string(graphAxisLogarithmicAttr)]; ok { + i64, _ := strconv.ParseInt(v.(string), 10, 64) + i := int(i64) + g.LogRightY = &i + } + + if v, ok := rightAxisMap[string(graphAxisMaxAttr)]; ok && v.(string) != "" { + f, _ := strconv.ParseFloat(v.(string), 64) + g.MaxRightY = &f + } + + if v, ok := rightAxisMap[string(graphAxisMinAttr)]; ok && v.(string) != "" { + f, _ := strconv.ParseFloat(v.(string), 64) + g.MinRightY = &f + } + } + + if v, found := d.GetOk(graphDescriptionAttr); found { + g.Description = v.(string) + } + + if v, found := d.GetOk(graphLineStyleAttr); found { + switch v.(type) { + case string: + s := v.(string) + g.LineStyle = &s + case *string: + g.LineStyle = v.(*string) + default: + return fmt.Errorf("PROVIDER BUG: unsupported type for %q: %T", graphLineStyleAttr, v) + } + } + + if v, found := d.GetOk(graphNameAttr); found { + g.Title = v.(string) + } + + if v, found := d.GetOk(graphNotesAttr); found { + s := v.(string) + g.Notes = &s + } + + if listRaw, found := d.GetOk(graphMetricAttr); found { + metricList := listRaw.([]interface{}) + for _, metricListElem := range metricList { + metricAttrs := newInterfaceMap(metricListElem.(map[string]interface{})) + datapoint := api.GraphDatapoint{} + + if v, found := metricAttrs[graphMetricActiveAttr]; found { + datapoint.Hidden = !(v.(bool)) + } + + if v, found := metricAttrs[graphMetricAlphaAttr]; found { + f := v.(float64) + if f != 0 { + datapoint.Alpha = &f + } + } + + if v, found := metricAttrs[graphMetricAxisAttr]; found { + switch v.(string) { + case "left", "": + datapoint.Axis = "l" + case "right": + datapoint.Axis = "r" + default: + return fmt.Errorf("PROVIDER BUG: Unsupported axis attribute %q: %q", graphMetricAxisAttr, v.(string)) + } + } + + if v, found := metricAttrs[graphMetricCheckAttr]; found { + re := regexp.MustCompile(config.CheckCIDRegex) + matches := re.FindStringSubmatch(v.(string)) + if len(matches) == 3 { + checkID, _ := strconv.ParseUint(matches[2], 10, 64) + datapoint.CheckID = uint(checkID) + } + } + + if v, found := metricAttrs[graphMetricColorAttr]; found { + s 
:= v.(string) + datapoint.Color = &s + } + + if v, found := metricAttrs[graphMetricFormulaAttr]; found { + switch v.(type) { + case string: + s := v.(string) + datapoint.DataFormula = &s + case *string: + datapoint.DataFormula = v.(*string) + default: + return fmt.Errorf("PROVIDER BUG: unsupported type for %q: %T", graphMetricAttr, v) + } + } + + if v, found := metricAttrs[graphMetricFunctionAttr]; found { + s := v.(string) + if s != "" { + datapoint.Derive = s + } else { + datapoint.Derive = false + } + } else { + datapoint.Derive = false + } + + if v, found := metricAttrs[graphMetricFormulaLegendAttr]; found { + switch u := v.(type) { + case string: + datapoint.LegendFormula = &u + case *string: + datapoint.LegendFormula = u + default: + return fmt.Errorf("PROVIDER BUG: unsupported type for %q: %T", graphMetricAttr, v) + } + } + + if v, found := metricAttrs[graphMetricNameAttr]; found { + s := v.(string) + if s != "" { + datapoint.MetricName = s + } + } + + if v, found := metricAttrs[graphMetricMetricTypeAttr]; found { + s := v.(string) + if s != "" { + datapoint.MetricType = s + } + } + + if v, found := metricAttrs[graphMetricHumanNameAttr]; found { + s := v.(string) + if s != "" { + datapoint.Name = s + } + } + + if v, found := metricAttrs[graphMetricStackAttr]; found { + var stackStr string + switch u := v.(type) { + case string: + stackStr = u + case *string: + if u != nil { + stackStr = *u + } + default: + return fmt.Errorf("PROVIDER BUG: unsupported type for %q: %T", graphMetricStackAttr, v) + } + + if stackStr != "" { + u64, _ := strconv.ParseUint(stackStr, 10, 64) + u := uint(u64) + datapoint.Stack = &u + } + } + + g.Datapoints = append(g.Datapoints, datapoint) + } + } + + if listRaw, found := d.GetOk(graphMetricClusterAttr); found { + metricClusterList := listRaw.([]interface{}) + + for _, metricClusterListRaw := range metricClusterList { + metricClusterAttrs := newInterfaceMap(metricClusterListRaw.(map[string]interface{})) + + metricCluster := api.GraphMetricCluster{} + + if v, found := metricClusterAttrs[graphMetricClusterActiveAttr]; found { + metricCluster.Hidden = !(v.(bool)) + } + + if v, found := metricClusterAttrs[graphMetricClusterAggregateAttr]; found { + metricCluster.AggregateFunc = v.(string) + } + + if v, found := metricClusterAttrs[graphMetricClusterAxisAttr]; found { + switch v.(string) { + case "left", "": + metricCluster.Axis = "l" + case "right": + metricCluster.Axis = "r" + default: + return fmt.Errorf("PROVIDER BUG: Unsupported axis attribute %q: %q", graphMetricClusterAxisAttr, v.(string)) + } + } + + if v, found := metricClusterAttrs[graphMetricClusterColorAttr]; found { + s := v.(string) + if s != "" { + metricCluster.Color = &s + } + } + + if v, found := metricClusterAttrs[graphMetricFormulaAttr]; found { + switch v.(type) { + case string: + s := v.(string) + metricCluster.DataFormula = &s + case *string: + metricCluster.DataFormula = v.(*string) + default: + return fmt.Errorf("PROVIDER BUG: unsupported type for %q: %T", graphMetricFormulaAttr, v) + } + } + + if v, found := metricClusterAttrs[graphMetricFormulaLegendAttr]; found { + switch v.(type) { + case string: + s := v.(string) + metricCluster.LegendFormula = &s + case *string: + metricCluster.LegendFormula = v.(*string) + default: + return fmt.Errorf("PROVIDER BUG: unsupported type for %q: %T", graphMetricFormulaLegendAttr, v) + } + } + + if v, found := metricClusterAttrs[graphMetricClusterQueryAttr]; found { + s := v.(string) + if s != "" { + metricCluster.MetricCluster = s + } + } + + if v, found 
:= metricClusterAttrs[graphMetricHumanNameAttr]; found { + s := v.(string) + if s != "" { + metricCluster.Name = s + } + } + + if v, found := metricClusterAttrs[graphMetricStackAttr]; found { + var stackStr string + switch u := v.(type) { + case string: + stackStr = u + case *string: + if u != nil { + stackStr = *u + } + default: + return fmt.Errorf("PROVIDER BUG: unsupported type for %q: %T", graphMetricStackAttr, v) + } + + if stackStr != "" { + u64, _ := strconv.ParseUint(stackStr, 10, 64) + u := uint(u64) + metricCluster.Stack = &u + } + } + + g.MetricClusters = append(g.MetricClusters, metricCluster) + } + } + + if v, found := d.GetOk(graphStyleAttr); found { + switch v.(type) { + case string: + s := v.(string) + g.Style = &s + case *string: + g.Style = v.(*string) + default: + return fmt.Errorf("PROVIDER BUG: unsupported type for %q: %T", graphStyleAttr, v) + } + } + + if v, found := d.GetOk(graphTagsAttr); found { + g.Tags = derefStringList(flattenSet(v.(*schema.Set))) + } + + if err := g.Validate(); err != nil { + return err + } + + return nil +} + +func (g *circonusGraph) Create(ctxt *providerContext) error { + ng, err := ctxt.client.CreateGraph(&g.Graph) + if err != nil { + return err + } + + g.CID = ng.CID + + return nil +} + +func (g *circonusGraph) Update(ctxt *providerContext) error { + _, err := ctxt.client.UpdateGraph(&g.Graph) + if err != nil { + return errwrap.Wrapf(fmt.Sprintf("Unable to update graph %s: {{err}}", g.CID), err) + } + + return nil +} + +func (g *circonusGraph) Validate() error { + for i, datapoint := range g.Datapoints { + if *g.Style == apiGraphStyleLine && datapoint.Alpha != nil && *datapoint.Alpha != 0 { + return fmt.Errorf("%s can not be set on graphs with style %s", graphMetricAlphaAttr, apiGraphStyleLine) + } + + if datapoint.CheckID != 0 && datapoint.MetricName == "" { + return fmt.Errorf("Error with %s[%d] name=%q: %s is set, missing attribute %s must also be set", graphMetricAttr, i, datapoint.Name, graphMetricCheckAttr, graphMetricNameAttr) + } + + if datapoint.CheckID == 0 && datapoint.MetricName != "" { + return fmt.Errorf("Error with %s[%d] name=%q: %s is set, missing attribute %s must also be set", graphMetricAttr, i, datapoint.Name, graphMetricNameAttr, graphMetricCheckAttr) + } + + if datapoint.CAQL != nil && (datapoint.CheckID != 0 || datapoint.MetricName != "") { + return fmt.Errorf("Error with %s[%d] name=%q: %q attribute is mutually exclusive with attributes %s or %s", graphMetricAttr, i, datapoint.Name, graphMetricCAQLAttr, graphMetricNameAttr, graphMetricCheckAttr) + } + } + + for i, mc := range g.MetricClusters { + if mc.AggregateFunc != "" && (mc.Color == nil || *mc.Color == "") { + return fmt.Errorf("Error with %s[%d] name=%q: %s is a required attribute for graphs with %s set", graphMetricClusterAttr, i, mc.Name, graphMetricClusterColorAttr, graphMetricClusterAggregateAttr) + } + } + + return nil +} diff --git a/builtin/providers/circonus/resource_circonus_graph_test.go b/builtin/providers/circonus/resource_circonus_graph_test.go new file mode 100644 index 0000000000..d51d00fc87 --- /dev/null +++ b/builtin/providers/circonus/resource_circonus_graph_test.go @@ -0,0 +1,199 @@ +package circonus + +import ( + "fmt" + "strings" + "testing" + + "github.com/circonus-labs/circonus-gometrics/api" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccCirconusGraph_basic(t *testing.T) { + graphName := fmt.Sprintf("Test Graph - %s", 
acctest.RandString(5)) + checkName := fmt.Sprintf("ICMP Ping check - %s", acctest.RandString(5)) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckDestroyCirconusGraph, + Steps: []resource.TestStep{ + { + Config: fmt.Sprintf(testAccCirconusGraphConfigFmt, checkName, graphName), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("circonus_graph.mixed-points", "name", graphName), + resource.TestCheckResourceAttr("circonus_graph.mixed-points", "description", "Terraform Test: mixed graph"), + resource.TestCheckResourceAttr("circonus_graph.mixed-points", "notes", "test notes"), + resource.TestCheckResourceAttr("circonus_graph.mixed-points", "graph_style", "line"), + resource.TestCheckResourceAttr("circonus_graph.mixed-points", "left.%", "1"), + resource.TestCheckResourceAttr("circonus_graph.mixed-points", "left.max", "11"), + resource.TestCheckResourceAttr("circonus_graph.mixed-points", "right.%", "3"), + resource.TestCheckResourceAttr("circonus_graph.mixed-points", "right.logarithmic", "10"), + resource.TestCheckResourceAttr("circonus_graph.mixed-points", "right.max", "20"), + resource.TestCheckResourceAttr("circonus_graph.mixed-points", "right.min", "-1"), + + resource.TestCheckResourceAttr("circonus_graph.mixed-points", "line_style", "stepped"), + + resource.TestCheckResourceAttr("circonus_graph.mixed-points", "metric.#", "2"), + + resource.TestCheckResourceAttr("circonus_graph.mixed-points", "metric.0.caql", ""), + resource.TestCheckResourceAttrSet("circonus_graph.mixed-points", "metric.0.check"), + resource.TestCheckResourceAttr("circonus_graph.mixed-points", "metric.0.metric_name", "maximum"), + resource.TestCheckResourceAttr("circonus_graph.mixed-points", "metric.0.metric_type", "numeric"), + resource.TestCheckResourceAttr("circonus_graph.mixed-points", "metric.0.name", "Maximum Latency"), + resource.TestCheckResourceAttr("circonus_graph.mixed-points", "metric.0.axis", "left"), + resource.TestCheckResourceAttr("circonus_graph.mixed-points", "metric.0.color", "#657aa6"), + resource.TestCheckResourceAttr("circonus_graph.mixed-points", "metric.0.function", "gauge"), + resource.TestCheckResourceAttr("circonus_graph.mixed-points", "metric.0.active", "true"), + + resource.TestCheckResourceAttr("circonus_graph.mixed-points", "metric.1.caql", ""), + resource.TestCheckResourceAttrSet("circonus_graph.mixed-points", "metric.1.check"), + resource.TestCheckResourceAttr("circonus_graph.mixed-points", "metric.1.metric_name", "minimum"), + resource.TestCheckResourceAttr("circonus_graph.mixed-points", "metric.1.metric_type", "numeric"), + resource.TestCheckResourceAttr("circonus_graph.mixed-points", "metric.1.name", "Minimum Latency"), + resource.TestCheckResourceAttr("circonus_graph.mixed-points", "metric.1.axis", "right"), + resource.TestCheckResourceAttr("circonus_graph.mixed-points", "metric.1.color", "#657aa6"), + resource.TestCheckResourceAttr("circonus_graph.mixed-points", "metric.1.function", "gauge"), + resource.TestCheckResourceAttr("circonus_graph.mixed-points", "metric.1.active", "true"), + + resource.TestCheckResourceAttr("circonus_graph.mixed-points", "tags.#", "2"), + resource.TestCheckResourceAttr("circonus_graph.mixed-points", "tags.2087084518", "author:terraform"), + resource.TestCheckResourceAttr("circonus_graph.mixed-points", "tags.1401442048", "lifecycle:unittest"), + ), + }, + }, + }) +} + +func testAccCheckDestroyCirconusGraph(s *terraform.State) error { + ctxt := 
testAccProvider.Meta().(*providerContext)
+
+ for _, rs := range s.RootModule().Resources {
+ if rs.Type != "circonus_graph" {
+ continue
+ }
+
+ cid := rs.Primary.ID
+ exists, err := checkGraphExists(ctxt, api.CIDType(&cid))
+ // Check the lookup error before exists: a failed lookup also reports
+ // exists == false, so the error case must be evaluated first.
+ switch {
+ case err != nil:
+ return fmt.Errorf("error checking graph: %s", err)
+ case exists:
+ return fmt.Errorf("graph still exists after destroy")
+ }
+ }
+
+ return nil
+}
+
+func checkGraphExists(c *providerContext, graphID api.CIDType) (bool, error) {
+ g, err := c.client.FetchGraph(graphID)
+ if err != nil {
+ if strings.Contains(err.Error(), defaultCirconus404ErrorString) {
+ return false, nil
+ }
+
+ return false, err
+ }
+
+ if api.CIDType(&g.CID) == graphID {
+ return true, nil
+ }
+
+ return false, nil
+}
+
+const testAccCirconusGraphConfigFmt = `
+variable "test_tags" {
+ type = "list"
+ default = [ "author:terraform", "lifecycle:unittest" ]
+}
+
+resource "circonus_check" "api_latency" {
+ active = true
+ name = "%s"
+ period = "60s"
+
+ collector {
+ id = "/broker/1"
+ }
+
+ icmp_ping {
+ count = 5
+ }
+
+ metric {
+ name = "maximum"
+ tags = [ "${var.test_tags}" ]
+ type = "numeric"
+ unit = "seconds"
+ }
+
+ metric {
+ name = "minimum"
+ tags = [ "${var.test_tags}" ]
+ type = "numeric"
+ unit = "seconds"
+ }
+
+ tags = [ "${var.test_tags}" ]
+ target = "api.circonus.com"
+}
+
+resource "circonus_graph" "mixed-points" {
+ name = "%s"
+ description = "Terraform Test: mixed graph"
+ notes = "test notes"
+ graph_style = "line"
+ line_style = "stepped"
+
+ metric {
+ # caql = "" # conflicts with metric_name/check
+ check = "${circonus_check.api_latency.checks[0]}"
+ metric_name = "maximum"
+ metric_type = "numeric"
+ name = "Maximum Latency"
+ axis = "left" # right
+ color = "#657aa6"
+ function = "gauge"
+ active = true
+ }
+
+ metric {
+ # caql = "" # conflicts with metric_name/check
+ check = "${circonus_check.api_latency.checks[0]}"
+ metric_name = "minimum"
+ metric_type = "numeric"
+ name = "Minimum Latency"
+ axis = "right" # left
+ color = "#657aa6"
+ function = "gauge"
+ active = true
+ }
+
+ // metric_cluster {
+ // active = true
+ // aggregate = "average"
+ // axis = "left" # right
+ // color = "#657aa6"
+ // group = "${circonus_check.api_latency.checks[0]}"
+ // name = "Metrics Used"
+ // }
+
+ left {
+ max = 11
+ }
+
+ right {
+ logarithmic = 10
+ max = 20
+ min = -1
+ }
+
+ tags = [ "${var.test_tags}" ]
+}
+`
diff --git a/builtin/providers/circonus/resource_circonus_metric.go b/builtin/providers/circonus/resource_circonus_metric.go
new file mode 100644
index 0000000000..0b9bed1f26
--- /dev/null
+++ b/builtin/providers/circonus/resource_circonus_metric.go
@@ -0,0 +1,138 @@
+package circonus
+
+// The `circonus_metric` type is a synthetic, top-level resource that doesn't
+// actually exist within Circonus. The `circonus_check` resource uses
+// `circonus_metric` as input to its `metric` attribute. The `circonus_check`
+// resource can, if configured, override various parameters of the
+// `circonus_metric` resource where no value was set (e.g. an `icmp_ping`
+// check will implicitly set the metric's `unit` to `seconds`).
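+//
+// A minimal configuration sketch (the values here are illustrative only; the
+// attribute names come from the schema defined below):
+//
+//   resource "circonus_metric" "maximum" {
+//     name = "maximum"
+//     type = "numeric"
+//     unit = "seconds"
+//     tags = [ "author:terraform" ]
+//   }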
+ +import ( + "github.com/hashicorp/errwrap" + "github.com/hashicorp/terraform/helper/schema" +) + +const ( + // circonus_metric.* resource attribute names + metricActiveAttr = "active" + metricIDAttr = "id" + metricNameAttr = "name" + metricTypeAttr = "type" + metricTagsAttr = "tags" + metricUnitAttr = "unit" + + // CheckBundle.Metric.Status can be one of these values + metricStatusActive = "active" + metricStatusAvailable = "available" +) + +var metricDescriptions = attrDescrs{ + metricActiveAttr: "Enables or disables the metric", + metricNameAttr: "Name of the metric", + metricTypeAttr: "Type of metric (e.g. numeric, histogram, text)", + metricTagsAttr: "Tags assigned to the metric", + metricUnitAttr: "The unit of measurement for a metric", +} + +func resourceMetric() *schema.Resource { + return &schema.Resource{ + Create: metricCreate, + Read: metricRead, + Update: metricUpdate, + Delete: metricDelete, + Exists: metricExists, + Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, + + Schema: convertToHelperSchema(metricDescriptions, map[schemaAttr]*schema.Schema{ + metricActiveAttr: &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Default: true, + }, + metricNameAttr: &schema.Schema{ + Type: schema.TypeString, + Required: true, + ValidateFunc: validateRegexp(metricNameAttr, `[\S]+`), + }, + metricTypeAttr: &schema.Schema{ + Type: schema.TypeString, + Required: true, + ValidateFunc: validateStringIn(metricTypeAttr, validMetricTypes), + }, + metricTagsAttr: tagMakeConfigSchema(metricTagsAttr), + metricUnitAttr: &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Default: metricUnit, + ValidateFunc: validateRegexp(metricUnitAttr, metricUnitRegexp), + }, + }), + } +} + +func metricCreate(d *schema.ResourceData, meta interface{}) error { + m := newMetric() + + id := d.Id() + if id == "" { + var err error + id, err = newMetricID() + if err != nil { + return errwrap.Wrapf("metric ID creation failed: {{err}}", err) + } + } + + if err := m.ParseConfig(id, d); err != nil { + return errwrap.Wrapf("error parsing metric schema during create: {{err}}", err) + } + + if err := m.Create(d); err != nil { + return errwrap.Wrapf("error creating metric: {{err}}", err) + } + + return metricRead(d, meta) +} + +func metricRead(d *schema.ResourceData, meta interface{}) error { + m := newMetric() + + if err := m.ParseConfig(d.Id(), d); err != nil { + return errwrap.Wrapf("error parsing metric schema during read: {{err}}", err) + } + + if err := m.SaveState(d); err != nil { + return errwrap.Wrapf("error saving metric during read: {{err}}", err) + } + + return nil +} + +func metricUpdate(d *schema.ResourceData, meta interface{}) error { + m := newMetric() + + if err := m.ParseConfig(d.Id(), d); err != nil { + return errwrap.Wrapf("error parsing metric schema during update: {{err}}", err) + } + + if err := m.Update(d); err != nil { + return errwrap.Wrapf("error updating metric: {{err}}", err) + } + + return nil +} + +func metricDelete(d *schema.ResourceData, meta interface{}) error { + d.SetId("") + + return nil +} + +func metricExists(d *schema.ResourceData, meta interface{}) (bool, error) { + if id := d.Id(); id != "" { + return true, nil + } + + return false, nil +} diff --git a/builtin/providers/circonus/resource_circonus_metric_cluster.go b/builtin/providers/circonus/resource_circonus_metric_cluster.go new file mode 100644 index 0000000000..f8776099b3 --- /dev/null +++ b/builtin/providers/circonus/resource_circonus_metric_cluster.go @@ -0,0 +1,262 @@ +package 
circonus + +import ( + "bytes" + "fmt" + "strings" + + "github.com/circonus-labs/circonus-gometrics/api" + "github.com/circonus-labs/circonus-gometrics/api/config" + "github.com/hashicorp/errwrap" + "github.com/hashicorp/terraform/helper/hashcode" + "github.com/hashicorp/terraform/helper/schema" +) + +const ( + // circonus_metric_cluster.* resource attribute names + metricClusterDescriptionAttr = "description" + metricClusterNameAttr = "name" + metricClusterQueryAttr = "query" + metricClusterTagsAttr = "tags" + + // circonus_metric_cluster.* out parameters + metricClusterIDAttr = "id" + + // circonus_metric_cluster.query.* resource attribute names + metricClusterDefinitionAttr = "definition" + metricClusterTypeAttr = "type" +) + +var metricClusterDescriptions = attrDescrs{ + metricClusterDescriptionAttr: "A description of the metric cluster", + metricClusterIDAttr: "The ID of this metric cluster", + metricClusterNameAttr: "The name of the metric cluster", + metricClusterQueryAttr: "A metric cluster query definition", + metricClusterTagsAttr: "A list of tags assigned to the metric cluster", +} + +var metricClusterQueryDescriptions = attrDescrs{ + metricClusterDefinitionAttr: "A query to select a collection of metric streams", + metricClusterTypeAttr: "The operation to perform on the matching metric streams", +} + +func resourceMetricCluster() *schema.Resource { + return &schema.Resource{ + Create: metricClusterCreate, + Read: metricClusterRead, + Update: metricClusterUpdate, + Delete: metricClusterDelete, + Exists: metricClusterExists, + Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, + + Schema: convertToHelperSchema(metricClusterDescriptions, map[schemaAttr]*schema.Schema{ + metricClusterDescriptionAttr: &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + StateFunc: suppressWhitespace, + }, + metricClusterNameAttr: &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + metricClusterQueryAttr: &schema.Schema{ + Type: schema.TypeSet, + Optional: true, + MinItems: 1, + Elem: &schema.Resource{ + Schema: convertToHelperSchema(metricClusterQueryDescriptions, map[schemaAttr]*schema.Schema{ + metricClusterDefinitionAttr: &schema.Schema{ + Type: schema.TypeString, + Required: true, + ValidateFunc: validateRegexp(metricClusterDefinitionAttr, `.+`), + }, + metricClusterTypeAttr: &schema.Schema{ + Type: schema.TypeString, + Required: true, + ValidateFunc: validateStringIn(metricClusterTypeAttr, supportedMetricClusterTypes), + }, + }), + }, + }, + metricClusterTagsAttr: tagMakeConfigSchema(metricClusterTagsAttr), + + // Out parameters + metricClusterIDAttr: &schema.Schema{ + Computed: true, + Type: schema.TypeString, + ValidateFunc: validateRegexp(metricClusterIDAttr, config.MetricClusterCIDRegex), + }, + }), + } +} + +func metricClusterCreate(d *schema.ResourceData, meta interface{}) error { + ctxt := meta.(*providerContext) + mc := newMetricCluster() + + if err := mc.ParseConfig(d); err != nil { + return errwrap.Wrapf("error parsing metric cluster schema during create: {{err}}", err) + } + + if err := mc.Create(ctxt); err != nil { + return errwrap.Wrapf("error creating metric cluster: {{err}}", err) + } + + d.SetId(mc.CID) + + return metricClusterRead(d, meta) +} + +func metricClusterExists(d *schema.ResourceData, meta interface{}) (bool, error) { + ctxt := meta.(*providerContext) + + cid := d.Id() + mc, err := ctxt.client.FetchMetricCluster(api.CIDType(&cid), "") + if err != nil { + if strings.Contains(err.Error(), 
defaultCirconus404ErrorString) { + return false, nil + } + + return false, err + } + + if mc.CID == "" { + return false, nil + } + + return true, nil +} + +// metricClusterRead pulls data out of the MetricCluster object and stores it +// into the appropriate place in the statefile. +func metricClusterRead(d *schema.ResourceData, meta interface{}) error { + ctxt := meta.(*providerContext) + + cid := d.Id() + mc, err := loadMetricCluster(ctxt, api.CIDType(&cid)) + if err != nil { + return err + } + + d.SetId(mc.CID) + + queries := schema.NewSet(metricClusterQueryChecksum, nil) + for _, query := range mc.Queries { + queryAttrs := map[string]interface{}{ + string(metricClusterDefinitionAttr): query.Query, + string(metricClusterTypeAttr): query.Type, + } + + queries.Add(queryAttrs) + } + + d.Set(metricClusterDescriptionAttr, mc.Description) + d.Set(metricClusterNameAttr, mc.Name) + + if err := d.Set(metricClusterTagsAttr, tagsToState(apiToTags(mc.Tags))); err != nil { + return errwrap.Wrapf(fmt.Sprintf("Unable to store metric cluster %q attribute: {{err}}", metricClusterTagsAttr), err) + } + + d.Set(metricClusterIDAttr, mc.CID) + + return nil +} + +func metricClusterUpdate(d *schema.ResourceData, meta interface{}) error { + ctxt := meta.(*providerContext) + mc := newMetricCluster() + + if err := mc.ParseConfig(d); err != nil { + return err + } + + mc.CID = d.Id() + if err := mc.Update(ctxt); err != nil { + return errwrap.Wrapf(fmt.Sprintf("unable to update metric cluster %q: {{err}}", d.Id()), err) + } + + return metricClusterRead(d, meta) +} + +func metricClusterDelete(d *schema.ResourceData, meta interface{}) error { + ctxt := meta.(*providerContext) + + cid := d.Id() + if _, err := ctxt.client.DeleteMetricClusterByCID(api.CIDType(&cid)); err != nil { + return errwrap.Wrapf(fmt.Sprintf("unable to delete metric cluster %q: {{err}}", d.Id()), err) + } + + d.SetId("") + + return nil +} + +func metricClusterQueryChecksum(v interface{}) int { + m := v.(map[string]interface{}) + + b := &bytes.Buffer{} + b.Grow(defaultHashBufSize) + + // Order writes to the buffer using lexically sorted list for easy visual + // reconciliation with other lists. + if v, found := m[metricClusterDefinitionAttr]; found { + fmt.Fprint(b, v.(string)) + } + + if v, found := m[metricClusterTypeAttr]; found { + fmt.Fprint(b, v.(string)) + } + + s := b.String() + return hashcode.String(s) +} + +// ParseConfig reads Terraform config data and stores the information into a +// Circonus MetricCluster object. 
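+//
+// For example, a single query block such as the one exercised by the
+// acceptance test below (the hash-indexed attributes there are its flattened
+// set form):
+//
+//   query {
+//     definition = "*`nomad-jobname`memory`rss"
+//     type       = "average"
+//   }
+//
+// parses into one api.MetricQuery entry in mc.Queries.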
+func (mc *circonusMetricCluster) ParseConfig(d *schema.ResourceData) error {
+ if v, found := d.GetOk(metricClusterDescriptionAttr); found {
+ mc.Description = v.(string)
+ }
+
+ if v, found := d.GetOk(metricClusterNameAttr); found {
+ mc.Name = v.(string)
+ }
+
+ if queryListRaw, found := d.GetOk(metricClusterQueryAttr); found {
+ queryList := queryListRaw.(*schema.Set).List()
+
+ mc.Queries = make([]api.MetricQuery, 0, len(queryList))
+
+ for _, queryRaw := range queryList {
+ queryAttrs := newInterfaceMap(queryRaw)
+
+ var query string
+ if v, found := queryAttrs[metricClusterDefinitionAttr]; found {
+ query = v.(string)
+ }
+
+ var queryType string
+ if v, found := queryAttrs[metricClusterTypeAttr]; found {
+ queryType = v.(string)
+ }
+
+ mc.Queries = append(mc.Queries, api.MetricQuery{
+ Query: query,
+ Type: queryType,
+ })
+ }
+ }
+
+ if v, found := d.GetOk(metricClusterTagsAttr); found {
+ mc.Tags = derefStringList(flattenSet(v.(*schema.Set)))
+ }
+
+ if err := mc.Validate(); err != nil {
+ return err
+ }
+
+ return nil
+}
diff --git a/builtin/providers/circonus/resource_circonus_metric_cluster_test.go b/builtin/providers/circonus/resource_circonus_metric_cluster_test.go
new file mode 100644
index 0000000000..8c501041d1
--- /dev/null
+++ b/builtin/providers/circonus/resource_circonus_metric_cluster_test.go
@@ -0,0 +1,95 @@
+package circonus
+
+import (
+ "fmt"
+ "strings"
+ "testing"
+
+ "github.com/circonus-labs/circonus-gometrics/api"
+ "github.com/hashicorp/terraform/helper/acctest"
+ "github.com/hashicorp/terraform/helper/resource"
+ "github.com/hashicorp/terraform/terraform"
+)
+
+func TestAccCirconusMetricCluster_basic(t *testing.T) {
+ metricClusterName := fmt.Sprintf("job1-stream-agg - %s", acctest.RandString(5))
+
+ resource.Test(t, resource.TestCase{
+ PreCheck: func() { testAccPreCheck(t) },
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckDestroyCirconusMetricCluster,
+ Steps: []resource.TestStep{
+ {
+ Config: fmt.Sprintf(testAccCirconusMetricClusterConfigFmt, metricClusterName),
+ Check: resource.ComposeTestCheckFunc(
+ resource.TestCheckResourceAttr("circonus_metric_cluster.nomad-job1", "description", `Metric Cluster Description`),
+ resource.TestCheckResourceAttrSet("circonus_metric_cluster.nomad-job1", "id"),
+ resource.TestCheckResourceAttr("circonus_metric_cluster.nomad-job1", "name", metricClusterName),
+ resource.TestCheckResourceAttr("circonus_metric_cluster.nomad-job1", "query.236803225.definition", "*`nomad-jobname`memory`rss"),
+ resource.TestCheckResourceAttr("circonus_metric_cluster.nomad-job1", "query.236803225.type", "average"),
+ resource.TestCheckResourceAttr("circonus_metric_cluster.nomad-job1", "tags.2087084518", "author:terraform"),
+ resource.TestCheckResourceAttr("circonus_metric_cluster.nomad-job1", "tags.3354173695", "source:nomad"),
+ ),
+ },
+ },
+ })
+}
+
+func testAccCheckDestroyCirconusMetricCluster(s *terraform.State) error {
+ ctxt := testAccProvider.Meta().(*providerContext)
+
+ for _, rs := range s.RootModule().Resources {
+ if rs.Type != "circonus_metric_cluster" {
+ continue
+ }
+
+ cid := rs.Primary.ID
+ exists, err := checkMetricClusterExists(ctxt, api.CIDType(&cid))
+ // Check the lookup error before exists: a failed lookup also reports
+ // exists == false, so the error case must be evaluated first.
+ switch {
+ case err != nil:
+ return fmt.Errorf("error checking metric cluster: %v", err)
+ case exists:
+ return fmt.Errorf("metric cluster still exists after destroy")
+ }
+ }
+
+ return nil
+}
+
+func checkMetricClusterExists(c *providerContext, metricClusterCID api.CIDType) (bool, error) {
+ cmc, err := 
c.client.FetchMetricCluster(metricClusterCID, "") + if err != nil { + if strings.Contains(err.Error(), defaultCirconus404ErrorString) { + return false, nil + } + + return false, err + } + + if api.CIDType(&cmc.CID) == metricClusterCID { + return true, nil + } + + return false, nil +} + +const testAccCirconusMetricClusterConfigFmt = ` +resource "circonus_metric_cluster" "nomad-job1" { + description = < 0 { + thenAttrs[string(ruleSetAfterAttr)] = fmt.Sprintf("%ds", 60*rule.Wait) + } + thenAttrs[string(ruleSetSeverityAttr)] = int(rule.Severity) + + if rule.WindowingFunction != nil { + valueOverAttrs[string(ruleSetUsingAttr)] = *rule.WindowingFunction + + // NOTE: Only save the window duration if a function was specified + valueOverAttrs[string(ruleSetLastAttr)] = fmt.Sprintf("%ds", rule.WindowingDuration) + } + valueOverSet := schema.NewSet(ruleSetValueOverChecksum, nil) + valueOverSet.Add(valueOverAttrs) + valueAttrs[string(ruleSetOverAttr)] = valueOverSet + + if contactGroups, ok := rs.ContactGroups[uint8(rule.Severity)]; ok { + sort.Strings(contactGroups) + thenAttrs[string(ruleSetNotifyAttr)] = contactGroups + } + thenSet := schema.NewSet(ruleSetThenChecksum, nil) + thenSet.Add(thenAttrs) + + valueSet := schema.NewSet(ruleSetValueChecksum, nil) + valueSet.Add(valueAttrs) + ifAttrs[string(ruleSetThenAttr)] = thenSet + ifAttrs[string(ruleSetValueAttr)] = valueSet + + ifRules = append(ifRules, ifAttrs) + } + + d.Set(ruleSetCheckAttr, rs.CheckCID) + + if err := d.Set(ruleSetIfAttr, ifRules); err != nil { + return errwrap.Wrapf(fmt.Sprintf("Unable to store rule set %q attribute: {{err}}", ruleSetIfAttr), err) + } + + d.Set(ruleSetLinkAttr, indirect(rs.Link)) + d.Set(ruleSetMetricNameAttr, rs.MetricName) + d.Set(ruleSetMetricTypeAttr, rs.MetricType) + d.Set(ruleSetNotesAttr, indirect(rs.Notes)) + d.Set(ruleSetParentAttr, indirect(rs.Parent)) + + if err := d.Set(ruleSetTagsAttr, tagsToState(apiToTags(rs.Tags))); err != nil { + return errwrap.Wrapf(fmt.Sprintf("Unable to store rule set %q attribute: {{err}}", ruleSetTagsAttr), err) + } + + return nil +} + +func ruleSetUpdate(d *schema.ResourceData, meta interface{}) error { + ctxt := meta.(*providerContext) + rs := newRuleSet() + + if err := rs.ParseConfig(d); err != nil { + return err + } + + rs.CID = d.Id() + if err := rs.Update(ctxt); err != nil { + return errwrap.Wrapf(fmt.Sprintf("unable to update rule set %q: {{err}}", d.Id()), err) + } + + return ruleSetRead(d, meta) +} + +func ruleSetDelete(d *schema.ResourceData, meta interface{}) error { + ctxt := meta.(*providerContext) + + cid := d.Id() + if _, err := ctxt.client.DeleteRuleSetByCID(api.CIDType(&cid)); err != nil { + return errwrap.Wrapf(fmt.Sprintf("unable to delete rule set %q: {{err}}", d.Id()), err) + } + + d.SetId("") + + return nil +} + +type circonusRuleSet struct { + api.RuleSet +} + +func newRuleSet() circonusRuleSet { + rs := circonusRuleSet{ + RuleSet: *api.NewRuleSet(), + } + + rs.ContactGroups = make(map[uint8][]string, config.NumSeverityLevels) + for i := uint8(0); i < config.NumSeverityLevels; i++ { + rs.ContactGroups[i+1] = make([]string, 0, 1) + } + + rs.Rules = make([]api.RuleSetRule, 0, 1) + + return rs +} + +func loadRuleSet(ctxt *providerContext, cid api.CIDType) (circonusRuleSet, error) { + var rs circonusRuleSet + crs, err := ctxt.client.FetchRuleSet(cid) + if err != nil { + return circonusRuleSet{}, err + } + rs.RuleSet = *crs + + return rs, nil +} + +func ruleSetThenChecksum(v interface{}) int { + b := &bytes.Buffer{} + b.Grow(defaultHashBufSize) + + writeInt 
:= func(m map[string]interface{}, attrName string) { + if v, found := m[attrName]; found { + i := v.(int) + if i != 0 { + fmt.Fprintf(b, "%x", i) + } + } + } + + writeString := func(m map[string]interface{}, attrName string) { + if v, found := m[attrName]; found { + s := strings.TrimSpace(v.(string)) + if s != "" { + fmt.Fprint(b, s) + } + } + } + + writeStringArray := func(m map[string]interface{}, attrName string) { + if v, found := m[attrName]; found { + a := v.([]string) + if a != nil { + sort.Strings(a) + for _, s := range a { + fmt.Fprint(b, strings.TrimSpace(s)) + } + } + } + } + + m := v.(map[string]interface{}) + + writeString(m, ruleSetAfterAttr) + writeStringArray(m, ruleSetNotifyAttr) + writeInt(m, ruleSetSeverityAttr) + + s := b.String() + return hashcode.String(s) +} + +func ruleSetValueChecksum(v interface{}) int { + b := &bytes.Buffer{} + b.Grow(defaultHashBufSize) + + writeBool := func(m map[string]interface{}, attrName string) { + if v, found := m[attrName]; found { + fmt.Fprintf(b, "%t", v.(bool)) + } + } + + writeDuration := func(m map[string]interface{}, attrName string) { + if v, found := m[attrName]; found { + s := v.(string) + if s != "" { + d, _ := time.ParseDuration(s) + fmt.Fprint(b, d.String()) + } + } + } + + writeString := func(m map[string]interface{}, attrName string) { + if v, found := m[attrName]; found { + s := strings.TrimSpace(v.(string)) + if s != "" { + fmt.Fprint(b, s) + } + } + } + + m := v.(map[string]interface{}) + + if v, found := m[ruleSetValueAttr]; found { + valueMap := v.(map[string]interface{}) + if valueMap != nil { + writeDuration(valueMap, ruleSetAbsentAttr) + writeBool(valueMap, ruleSetChangedAttr) + writeString(valueMap, ruleSetContainsAttr) + writeString(valueMap, ruleSetMatchAttr) + writeString(valueMap, ruleSetNotMatchAttr) + writeString(valueMap, ruleSetMinValueAttr) + writeString(valueMap, ruleSetNotContainAttr) + writeString(valueMap, ruleSetMaxValueAttr) + + if v, found := valueMap[ruleSetOverAttr]; found { + overMap := v.(map[string]interface{}) + writeDuration(overMap, ruleSetLastAttr) + writeString(overMap, ruleSetUsingAttr) + } + } + } + + s := b.String() + return hashcode.String(s) +} + +func ruleSetValueOverChecksum(v interface{}) int { + b := &bytes.Buffer{} + b.Grow(defaultHashBufSize) + + writeString := func(m map[string]interface{}, attrName string) { + if v, found := m[attrName]; found { + s := strings.TrimSpace(v.(string)) + if s != "" { + fmt.Fprint(b, s) + } + } + } + + m := v.(map[string]interface{}) + + writeString(m, ruleSetLastAttr) + writeString(m, ruleSetUsingAttr) + + s := b.String() + return hashcode.String(s) +} + +// ParseConfig reads Terraform config data and stores the information into a +// Circonus RuleSet object. ParseConfig, ruleSetRead(), and ruleSetChecksum +// must be kept in sync. 
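+//
+// The coupling matters because Terraform identifies set elements by the
+// checksum of their attributes: if ParseConfig normalizes a value (say, a
+// duration) differently than the checksum functions hash it, every plan
+// shows a phantom diff. A minimal sketch of the invariant, using a
+// hypothetical attribute map purely for illustration:
+//
+//	attrs := map[string]interface{}{"last": "120s", "using": "average"}
+//	// Hashing identical logical content must be stable across calls:
+//	same := ruleSetValueOverChecksum(attrs) == ruleSetValueOverChecksum(attrs)
+//	_ = same // always true; drift here would churn the "over" set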
+func (rs *circonusRuleSet) ParseConfig(d *schema.ResourceData) error { + if v, found := d.GetOk(ruleSetCheckAttr); found { + rs.CheckCID = v.(string) + } + + if v, found := d.GetOk(ruleSetLinkAttr); found { + s := v.(string) + rs.Link = &s + } + + if v, found := d.GetOk(ruleSetMetricTypeAttr); found { + rs.MetricType = v.(string) + } + + if v, found := d.GetOk(ruleSetNotesAttr); found { + s := v.(string) + rs.Notes = &s + } + + if v, found := d.GetOk(ruleSetParentAttr); found { + s := v.(string) + rs.Parent = &s + } + + if v, found := d.GetOk(ruleSetMetricNameAttr); found { + rs.MetricName = v.(string) + } + + rs.Rules = make([]api.RuleSetRule, 0, defaultRuleSetRuleLen) + if ifListRaw, found := d.GetOk(ruleSetIfAttr); found { + ifList := ifListRaw.([]interface{}) + for _, ifListElem := range ifList { + ifAttrs := newInterfaceMap(ifListElem.(map[string]interface{})) + + rule := api.RuleSetRule{} + + if thenListRaw, found := ifAttrs[ruleSetThenAttr]; found { + thenList := thenListRaw.(*schema.Set).List() + + for _, thenListRaw := range thenList { + thenAttrs := newInterfaceMap(thenListRaw) + + if v, found := thenAttrs[ruleSetAfterAttr]; found { + s := v.(string) + if s != "" { + d, err := time.ParseDuration(v.(string)) + if err != nil { + return errwrap.Wrapf(fmt.Sprintf("unable to parse %q duration %q: {{err}}", ruleSetAfterAttr, v.(string)), err) + } + rule.Wait = uint(d.Minutes()) + } + } + + // NOTE: break from convention of alpha sorting attributes and handle Notify after Severity + + if i, found := thenAttrs[ruleSetSeverityAttr]; found { + rule.Severity = uint(i.(int)) + } + + if notifyListRaw, found := thenAttrs[ruleSetNotifyAttr]; found { + notifyList := interfaceList(notifyListRaw.([]interface{})) + + sev := uint8(rule.Severity) + for _, contactGroupCID := range notifyList.List() { + var found bool + if contactGroups, ok := rs.ContactGroups[sev]; ok { + for _, contactGroup := range contactGroups { + if contactGroup == contactGroupCID { + found = true + break + } + } + } + if !found { + rs.ContactGroups[sev] = append(rs.ContactGroups[sev], contactGroupCID) + } + } + } + } + } + + if ruleSetValueListRaw, found := ifAttrs[ruleSetValueAttr]; found { + ruleSetValueList := ruleSetValueListRaw.(*schema.Set).List() + + for _, valueListRaw := range ruleSetValueList { + valueAttrs := newInterfaceMap(valueListRaw) + + METRIC_TYPE: + switch rs.MetricType { + case ruleSetMetricTypeNumeric: + if v, found := valueAttrs[ruleSetAbsentAttr]; found { + s := v.(string) + if s != "" { + d, _ := time.ParseDuration(s) + rule.Criteria = apiRuleSetAbsent + rule.Value = float64(d.Seconds()) + break METRIC_TYPE + } + } + + if v, found := valueAttrs[ruleSetChangedAttr]; found { + b := v.(bool) + if b { + rule.Criteria = apiRuleSetChanged + break METRIC_TYPE + } + } + + if v, found := valueAttrs[ruleSetMinValueAttr]; found { + s := v.(string) + if s != "" { + rule.Criteria = apiRuleSetMinValue + rule.Value = s + break METRIC_TYPE + } + } + + if v, found := valueAttrs[ruleSetMaxValueAttr]; found { + s := v.(string) + if s != "" { + rule.Criteria = apiRuleSetMaxValue + rule.Value = s + break METRIC_TYPE + } + } + case ruleSetMetricTypeText: + if v, found := valueAttrs[ruleSetAbsentAttr]; found { + s := v.(string) + if s != "" { + d, _ := time.ParseDuration(s) + rule.Criteria = apiRuleSetAbsent + rule.Value = float64(d.Seconds()) + break METRIC_TYPE + } + } + + if v, found := valueAttrs[ruleSetChangedAttr]; found { + b := v.(bool) + if b { + rule.Criteria = apiRuleSetChanged + break METRIC_TYPE + } + } + + if v, 
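+					// The remaining text-type probes (contains, match,
+					// not_match, not_contain) follow the same first-match
+					// pattern as above: the first attribute found with a
+					// non-empty value claims the rule's criteria, and the
+					// labeled break METRIC_TYPE skips the rest, so at most
+					// one criteria is ever set per rule.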
found := valueAttrs[ruleSetContainsAttr]; found { + s := v.(string) + if s != "" { + rule.Criteria = apiRuleSetContains + rule.Value = s + break METRIC_TYPE + } + } + + if v, found := valueAttrs[ruleSetMatchAttr]; found { + s := v.(string) + if s != "" { + rule.Criteria = apiRuleSetMatch + rule.Value = s + break METRIC_TYPE + } + } + + if v, found := valueAttrs[ruleSetNotMatchAttr]; found { + s := v.(string) + if s != "" { + rule.Criteria = apiRuleSetNotMatch + rule.Value = s + break METRIC_TYPE + } + } + + if v, found := valueAttrs[ruleSetNotContainAttr]; found { + s := v.(string) + if s != "" { + rule.Criteria = apiRuleSetNotContains + rule.Value = s + break METRIC_TYPE + } + } + default: + return fmt.Errorf("PROVIDER BUG: unsupported rule set metric type: %q", rs.MetricType) + } + + if ruleSetOverListRaw, found := valueAttrs[ruleSetOverAttr]; found { + overList := ruleSetOverListRaw.(*schema.Set).List() + + for _, overListRaw := range overList { + overAttrs := newInterfaceMap(overListRaw) + + if v, found := overAttrs[ruleSetLastAttr]; found { + last, err := time.ParseDuration(v.(string)) + if err != nil { + return errwrap.Wrapf(fmt.Sprintf("unable to parse duration %s attribute", ruleSetLastAttr), err) + } + rule.WindowingDuration = uint(last.Seconds()) + } + + if v, found := overAttrs[ruleSetUsingAttr]; found { + s := v.(string) + rule.WindowingFunction = &s + } + } + } + } + } + rs.Rules = append(rs.Rules, rule) + } + } + + if v, found := d.GetOk(ruleSetTagsAttr); found { + rs.Tags = derefStringList(flattenSet(v.(*schema.Set))) + } + + if err := rs.Validate(); err != nil { + return err + } + + return nil +} + +func (rs *circonusRuleSet) Create(ctxt *providerContext) error { + crs, err := ctxt.client.CreateRuleSet(&rs.RuleSet) + if err != nil { + return err + } + + rs.CID = crs.CID + + return nil +} + +func (rs *circonusRuleSet) Update(ctxt *providerContext) error { + _, err := ctxt.client.UpdateRuleSet(&rs.RuleSet) + if err != nil { + return errwrap.Wrapf(fmt.Sprintf("Unable to update rule set %s: {{err}}", rs.CID), err) + } + + return nil +} + +func (rs *circonusRuleSet) Validate() error { + // TODO(sean@): From https://login.circonus.com/resources/api/calls/rule_set + // under `value`: + // + // For an 'on absence' rule this is the number of seconds the metric must not + // have been collected for, and should not be lower than either the period or + // timeout of the metric being collected. 
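+	//
+	// A guard along those lines could look like the sketch below, assuming
+	// the check's collection period were threaded into Validate as a
+	// hypothetical periodSeconds value (it currently is not):
+	//
+	//	if rule.Criteria == apiRuleSetAbsent {
+	//		if absent, ok := rule.Value.(float64); ok && absent < periodSeconds {
+	//			return fmt.Errorf("absent window %.0fs is shorter than the collection period", absent)
+	//		}
+	//	}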
+ + for i, rule := range rs.Rules { + if rule.Criteria == "" { + return fmt.Errorf("rule %d for check ID %s has an empty criteria", i, rs.CheckCID) + } + } + + return nil +} diff --git a/builtin/providers/circonus/resource_circonus_rule_set_test.go b/builtin/providers/circonus/resource_circonus_rule_set_test.go new file mode 100644 index 0000000000..71cf94ceb0 --- /dev/null +++ b/builtin/providers/circonus/resource_circonus_rule_set_test.go @@ -0,0 +1,226 @@ +package circonus + +import ( + "fmt" + "strings" + "testing" + + "github.com/circonus-labs/circonus-gometrics/api" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccCirconusRuleSet_basic(t *testing.T) { + checkName := fmt.Sprintf("ICMP Ping check - %s", acctest.RandString(5)) + contactGroupName := fmt.Sprintf("ops-staging-sev3 - %s", acctest.RandString(5)) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckDestroyCirconusRuleSet, + Steps: []resource.TestStep{ + { + Config: fmt.Sprintf(testAccCirconusRuleSetConfigFmt, contactGroupName, checkName), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttrSet("circonus_rule_set.icmp-latency-alarm", "check"), + resource.TestCheckResourceAttr("circonus_rule_set.icmp-latency-alarm", "metric_name", "maximum"), + resource.TestCheckResourceAttr("circonus_rule_set.icmp-latency-alarm", "metric_type", "numeric"), + resource.TestCheckResourceAttr("circonus_rule_set.icmp-latency-alarm", "notes", "Simple check to create notifications based on ICMP performance."), + resource.TestCheckResourceAttr("circonus_rule_set.icmp-latency-alarm", "link", "https://wiki.example.org/playbook/what-to-do-when-high-latency-strikes"), + // resource.TestCheckResourceAttr("circonus_rule_set.icmp-latency-alarm", "parent", "some check ID"), + resource.TestCheckResourceAttr("circonus_rule_set.icmp-latency-alarm", "if.#", "4"), + + resource.TestCheckResourceAttr("circonus_rule_set.icmp-latency-alarm", "if.0.value.#", "1"), + resource.TestCheckResourceAttr("circonus_rule_set.icmp-latency-alarm", "if.0.value.360613670.absent", "70s"), + resource.TestCheckResourceAttr("circonus_rule_set.icmp-latency-alarm", "if.0.value.360613670.over.#", "0"), + resource.TestCheckResourceAttr("circonus_rule_set.icmp-latency-alarm", "if.0.then.#", "1"), + // Computed: + // resource.TestCheckResourceAttr("circonus_rule_set.icmp-latency-alarm", "if.0.then..notify.#", "1"), + // resource.TestCheckResourceAttrSet("circonus_rule_set.icmp-latency-alarm", "if.0.then..notify.0"), + // resource.TestCheckResourceAttr("circonus_rule_set.icmp-latency-alarm", "if.0.then..severity", "1"), + + resource.TestCheckResourceAttr("circonus_rule_set.icmp-latency-alarm", "if.1.value.#", "1"), + resource.TestCheckResourceAttr("circonus_rule_set.icmp-latency-alarm", "if.1.value.2300199732.over.#", "1"), + resource.TestCheckResourceAttr("circonus_rule_set.icmp-latency-alarm", "if.1.value.2300199732.over.689776960.last", "120s"), + resource.TestCheckResourceAttr("circonus_rule_set.icmp-latency-alarm", "if.1.value.2300199732.over.689776960.using", "average"), + resource.TestCheckResourceAttr("circonus_rule_set.icmp-latency-alarm", "if.1.value.2300199732.min_value", "2"), + resource.TestCheckResourceAttr("circonus_rule_set.icmp-latency-alarm", "if.1.then.#", "1"), + // Computed: + // resource.TestCheckResourceAttr("circonus_rule_set.icmp-latency-alarm", 
"if.1.then..notify.#", "1"), + // resource.TestCheckResourceAttrSet("circonus_rule_set.icmp-latency-alarm", "if.1.then..notify.0"), + // resource.TestCheckResourceAttr("circonus_rule_set.icmp-latency-alarm", "if.1.then..severity", "2"), + + resource.TestCheckResourceAttr("circonus_rule_set.icmp-latency-alarm", "if.2.value.#", "1"), + resource.TestCheckResourceAttr("circonus_rule_set.icmp-latency-alarm", "if.2.value.2842654150.over.#", "1"), + resource.TestCheckResourceAttr("circonus_rule_set.icmp-latency-alarm", "if.2.value.2842654150.over.999877839.last", "180s"), + resource.TestCheckResourceAttr("circonus_rule_set.icmp-latency-alarm", "if.2.value.2842654150.over.999877839.using", "average"), + resource.TestCheckResourceAttr("circonus_rule_set.icmp-latency-alarm", "if.2.value.2842654150.max_value", "300"), + resource.TestCheckResourceAttr("circonus_rule_set.icmp-latency-alarm", "if.2.then.#", "1"), + // Computed: + // resource.TestCheckResourceAttr("circonus_rule_set.icmp-latency-alarm", "if.2.then..notify.#", "1"), + // resource.TestCheckResourceAttrSet("circonus_rule_set.icmp-latency-alarm", "if.2.then..notify.0"), + // resource.TestCheckResourceAttr("circonus_rule_set.icmp-latency-alarm", "if.2.then..severity", "3"), + + resource.TestCheckResourceAttr("circonus_rule_set.icmp-latency-alarm", "if.3.value.#", "1"), + resource.TestCheckResourceAttr("circonus_rule_set.icmp-latency-alarm", "if.3.value.803690187.over.#", "0"), + resource.TestCheckResourceAttr("circonus_rule_set.icmp-latency-alarm", "if.3.value.803690187.max_value", "400"), + // Computed: + // resource.TestCheckResourceAttr("circonus_rule_set.icmp-latency-alarm", "if.3.then..notify.#", "1"), + // resource.TestCheckResourceAttrSet("circonus_rule_set.icmp-latency-alarm", "if.3.then..notify.0"), + // resource.TestCheckResourceAttr("circonus_rule_set.icmp-latency-alarm", "if.3.then..after", "2400s"), + // resource.TestCheckResourceAttr("circonus_rule_set.icmp-latency-alarm", "if.3.then..severity", "4"), + + resource.TestCheckResourceAttr("circonus_rule_set.icmp-latency-alarm", "tags.#", "2"), + resource.TestCheckResourceAttr("circonus_rule_set.icmp-latency-alarm", "tags.2087084518", "author:terraform"), + resource.TestCheckResourceAttr("circonus_rule_set.icmp-latency-alarm", "tags.1401442048", "lifecycle:unittest"), + ), + }, + }, + }) +} + +func testAccCheckDestroyCirconusRuleSet(s *terraform.State) error { + ctxt := testAccProvider.Meta().(*providerContext) + + for _, rs := range s.RootModule().Resources { + if rs.Type != "circonus_rule_set" { + continue + } + + cid := rs.Primary.ID + exists, err := checkRuleSetExists(ctxt, api.CIDType(&cid)) + switch { + case !exists: + // noop + case exists: + return fmt.Errorf("rule set still exists after destroy") + case err != nil: + return fmt.Errorf("Error checking rule set: %v", err) + } + } + + return nil +} + +func checkRuleSetExists(c *providerContext, ruleSetCID api.CIDType) (bool, error) { + rs, err := c.client.FetchRuleSet(ruleSetCID) + if err != nil { + if strings.Contains(err.Error(), defaultCirconus404ErrorString) { + return false, nil + } + + return false, err + } + + if api.CIDType(&rs.CID) == ruleSetCID { + return true, nil + } + + return false, nil +} + +const testAccCirconusRuleSetConfigFmt = ` +variable "test_tags" { + type = "list" + default = [ "author:terraform", "lifecycle:unittest" ] +} + +resource "circonus_contact_group" "test-trigger" { + name = "%s" + tags = [ "${var.test_tags}" ] +} + +resource "circonus_check" "api_latency" { + active = true + name = "%s" + 
period = "60s" + + collector { + id = "/broker/1" + } + + icmp_ping { + count = 1 + } + + metric { + name = "maximum" + tags = [ "${var.test_tags}" ] + type = "numeric" + unit = "seconds" + } + + tags = [ "${var.test_tags}" ] + target = "api.circonus.com" +} + +resource "circonus_rule_set" "icmp-latency-alarm" { + check = "${circonus_check.api_latency.checks[0]}" + metric_name = "maximum" + // metric_name = "${circonus_check.api_latency.metric["maximum"].name}" + // metric_type = "${circonus_check.api_latency.metric["maximum"].type}" + notes = <", v.(string), err) + } + + return fmt.Sprintf("%ds", int(d.Seconds())) + default: + return fmt.Sprintf("", v) + } +} + +func indirect(v interface{}) interface{} { + switch v.(type) { + case string: + return v + case *string: + p := v.(*string) + if p == nil { + return nil + } + return *p + default: + return v + } +} + +func suppressEquivalentTimeDurations(k, old, new string, d *schema.ResourceData) bool { + d1, err := time.ParseDuration(old) + if err != nil { + return false + } + + d2, err := time.ParseDuration(new) + if err != nil { + return false + } + + return d1 == d2 +} + +func suppressWhitespace(v interface{}) string { + return strings.TrimSpace(v.(string)) +} diff --git a/builtin/providers/circonus/validators.go b/builtin/providers/circonus/validators.go new file mode 100644 index 0000000000..dca2de36c8 --- /dev/null +++ b/builtin/providers/circonus/validators.go @@ -0,0 +1,376 @@ +package circonus + +import ( + "fmt" + "net/url" + "regexp" + "strings" + "time" + + "github.com/circonus-labs/circonus-gometrics/api" + "github.com/circonus-labs/circonus-gometrics/api/config" + "github.com/hashicorp/errwrap" +) + +var knownCheckTypes map[circonusCheckType]struct{} +var knownContactMethods map[contactMethods]struct{} + +var userContactMethods map[contactMethods]struct{} +var externalContactMethods map[contactMethods]struct{} +var supportedHTTPVersions = validStringValues{"0.9", "1.0", "1.1", "2.0"} +var supportedMetricClusterTypes = validStringValues{ + "average", "count", "counter", "counter2", "counter2_stddev", + "counter_stddev", "derive", "derive2", "derive2_stddev", "derive_stddev", + "histogram", "stddev", "text", +} + +func init() { + checkTypes := []circonusCheckType{ + "caql", "cim", "circonuswindowsagent", "circonuswindowsagent,nad", + "collectd", "composite", "dcm", "dhcp", "dns", "elasticsearch", + "external", "ganglia", "googleanalytics", "haproxy", "http", + "http,apache", "httptrap", "imap", "jmx", "json", "json,couchdb", + "json,mongodb", "json,nad", "json,riak", "ldap", "memcached", + "munin", "mysql", "newrelic_rpm", "nginx", "nrpe", "ntp", + "oracle", "ping_icmp", "pop3", "postgres", "redis", "resmon", + "smtp", "snmp", "snmp,momentum", "sqlserver", "ssh2", "statsd", + "tcp", "varnish", "keynote", "keynote_pulse", "cloudwatch", + "ec_console", "mongodb", + } + + knownCheckTypes = make(map[circonusCheckType]struct{}, len(checkTypes)) + for _, k := range checkTypes { + knownCheckTypes[k] = struct{}{} + } + + userMethods := []contactMethods{"email", "sms", "xmpp"} + externalMethods := []contactMethods{"slack"} + + knownContactMethods = make(map[contactMethods]struct{}, len(externalContactMethods)+len(userContactMethods)) + + externalContactMethods = make(map[contactMethods]struct{}, len(externalMethods)) + for _, k := range externalMethods { + knownContactMethods[k] = struct{}{} + externalContactMethods[k] = struct{}{} + } + + userContactMethods = make(map[contactMethods]struct{}, len(userMethods)) + for _, k := range userMethods 
{ + knownContactMethods[k] = struct{}{} + userContactMethods[k] = struct{}{} + } +} + +func validateCheckType(v interface{}, key string) (warnings []string, errors []error) { + if _, ok := knownCheckTypes[circonusCheckType(v.(string))]; !ok { + warnings = append(warnings, fmt.Sprintf("Possibly unsupported check type: %s", v.(string))) + } + + return warnings, errors +} + +func validateCheckCloudWatchDimmensions(v interface{}, key string) (warnings []string, errors []error) { + validDimmensionName := regexp.MustCompile(`^[\S]+$`) + validDimmensionValue := regexp.MustCompile(`^[\S]+$`) + + dimmensions := v.(map[string]interface{}) + for k, vRaw := range dimmensions { + if !validDimmensionName.MatchString(k) { + errors = append(errors, fmt.Errorf("Invalid CloudWatch Dimmension Name specified: %q", k)) + continue + } + + v := vRaw.(string) + if !validDimmensionValue.MatchString(v) { + errors = append(errors, fmt.Errorf("Invalid value for CloudWatch Dimmension %q specified: %q", k, v)) + } + } + + return warnings, errors +} + +func validateContactGroup(cg *api.ContactGroup) error { + for i := range cg.Reminders { + if cg.Reminders[i] != 0 && cg.AggregationWindow > cg.Reminders[i] { + return fmt.Errorf("severity %d reminder (%ds) is shorter than the aggregation window (%ds)", i+1, cg.Reminders[i], cg.AggregationWindow) + } + } + + for severityIndex := range cg.Escalations { + switch { + case cg.Escalations[severityIndex] == nil: + continue + case cg.Escalations[severityIndex].After > 0 && cg.Escalations[severityIndex].ContactGroupCID == "", + cg.Escalations[severityIndex].After == 0 && cg.Escalations[severityIndex].ContactGroupCID != "": + return fmt.Errorf("severity %d escalation requires both and %s and %s be set", severityIndex+1, contactEscalateToAttr, contactEscalateAfterAttr) + } + } + + return nil +} + +func validateContactGroupCID(attrName schemaAttr) func(v interface{}, key string) (warnings []string, errors []error) { + return func(v interface{}, key string) (warnings []string, errors []error) { + validContactGroupCID := regexp.MustCompile(config.ContactGroupCIDRegex) + + if !validContactGroupCID.MatchString(v.(string)) { + errors = append(errors, fmt.Errorf("Invalid %s specified (%q)", attrName, v.(string))) + } + + return warnings, errors + } +} + +func validateDurationMin(attrName schemaAttr, minDuration string) func(v interface{}, key string) (warnings []string, errors []error) { + var min time.Duration + { + var err error + min, err = time.ParseDuration(minDuration) + if err != nil { + return func(interface{}, string) (warnings []string, errors []error) { + errors = []error{errwrap.Wrapf(fmt.Sprintf("Invalid time +%q: {{err}}", minDuration), err)} + return warnings, errors + } + } + } + + return func(v interface{}, key string) (warnings []string, errors []error) { + d, err := time.ParseDuration(v.(string)) + switch { + case err != nil: + errors = append(errors, errwrap.Wrapf(fmt.Sprintf("Invalid %s specified (%q): {{err}}", attrName, v.(string)), err)) + case d < min: + errors = append(errors, fmt.Errorf("Invalid %s specified (%q): minimum value must be %s", attrName, v.(string), min)) + } + + return warnings, errors + } +} + +func validateDurationMax(attrName schemaAttr, maxDuration string) func(v interface{}, key string) (warnings []string, errors []error) { + var max time.Duration + { + var err error + max, err = time.ParseDuration(maxDuration) + if err != nil { + return func(interface{}, string) (warnings []string, errors []error) { + errors = 
[]error{errwrap.Wrapf(fmt.Sprintf("Invalid time +%q: {{err}}", maxDuration), err)} + return warnings, errors + } + } + } + + return func(v interface{}, key string) (warnings []string, errors []error) { + d, err := time.ParseDuration(v.(string)) + switch { + case err != nil: + errors = append(errors, errwrap.Wrapf(fmt.Sprintf("Invalid %s specified (%q): {{err}}", attrName, v.(string)), err)) + case d > max: + errors = append(errors, fmt.Errorf("Invalid %s specified (%q): maximum value must be less than or equal to %s", attrName, v.(string), max)) + } + + return warnings, errors + } +} + +func validateFloatMin(attrName schemaAttr, min float64) func(v interface{}, key string) (warnings []string, errors []error) { + return func(v interface{}, key string) (warnings []string, errors []error) { + if v.(float64) < min { + errors = append(errors, fmt.Errorf("Invalid %s specified (%f): minimum value must be %f", attrName, v.(float64), min)) + } + + return warnings, errors + } +} + +func validateFloatMax(attrName schemaAttr, max float64) func(v interface{}, key string) (warnings []string, errors []error) { + return func(v interface{}, key string) (warnings []string, errors []error) { + if v.(float64) > max { + errors = append(errors, fmt.Errorf("Invalid %s specified (%f): maximum value must be %f", attrName, v.(float64), max)) + } + + return warnings, errors + } +} + +// validateFuncs takes a list of functions and runs them in serial until either +// a warning or error is returned from the first validation function argument. +func validateFuncs(fns ...func(v interface{}, key string) (warnings []string, errors []error)) func(v interface{}, key string) (warnings []string, errors []error) { + return func(v interface{}, key string) (warnings []string, errors []error) { + for _, fn := range fns { + warnings, errors = fn(v, key) + if len(warnings) > 0 || len(errors) > 0 { + break + } + } + return warnings, errors + } +} + +func validateHTTPHeaders(v interface{}, key string) (warnings []string, errors []error) { + validHTTPHeader := regexp.MustCompile(`.+`) + validHTTPValue := regexp.MustCompile(`.+`) + + headers := v.(map[string]interface{}) + for k, vRaw := range headers { + if !validHTTPHeader.MatchString(k) { + errors = append(errors, fmt.Errorf("Invalid HTTP Header specified: %q", k)) + continue + } + + v := vRaw.(string) + if !validHTTPValue.MatchString(v) { + errors = append(errors, fmt.Errorf("Invalid value for HTTP Header %q specified: %q", k, v)) + } + } + + return warnings, errors +} + +func validateGraphAxisOptions(v interface{}, key string) (warnings []string, errors []error) { + axisOptionsMap := v.(map[string]interface{}) + validOpts := map[schemaAttr]struct{}{ + graphAxisLogarithmicAttr: struct{}{}, + graphAxisMaxAttr: struct{}{}, + graphAxisMinAttr: struct{}{}, + } + + for k := range axisOptionsMap { + if _, ok := validOpts[schemaAttr(k)]; !ok { + errors = append(errors, fmt.Errorf("Invalid axis option specified: %q", k)) + continue + } + } + + return warnings, errors +} + +func validateIntMin(attrName schemaAttr, min int) func(v interface{}, key string) (warnings []string, errors []error) { + return func(v interface{}, key string) (warnings []string, errors []error) { + if v.(int) < min { + errors = append(errors, fmt.Errorf("Invalid %s specified (%d): minimum value must be %d", attrName, v.(int), min)) + } + + return warnings, errors + } +} + +func validateIntMax(attrName schemaAttr, max int) func(v interface{}, key string) (warnings []string, errors []error) { + return func(v 
interface{}, key string) (warnings []string, errors []error) {
+		if v.(int) > max {
+			errors = append(errors, fmt.Errorf("Invalid %s specified (%d): maximum value must be %d", attrName, v.(int), max))
+		}
+
+		return warnings, errors
+	}
+}
+
+func validateMetricType(v interface{}, key string) (warnings []string, errors []error) {
+	value := v.(string)
+	switch value {
+	case "caql", "composite", "histogram", "numeric", "text":
+	default:
+		errors = append(errors, fmt.Errorf("unsupported metric type %s", value))
+	}
+
+	return warnings, errors
+}
+
+func validateRegexp(attrName schemaAttr, reString string) func(v interface{}, key string) (warnings []string, errors []error) {
+	re := regexp.MustCompile(reString)
+
+	return func(v interface{}, key string) (warnings []string, errors []error) {
+		if !re.MatchString(v.(string)) {
+			errors = append(errors, fmt.Errorf("Invalid %s specified (%q): regexp failed to match string", attrName, v.(string)))
+		}
+
+		return warnings, errors
+	}
+}
+
+func validateTag(v interface{}, key string) (warnings []string, errors []error) {
+	tag := v.(string)
+	if !strings.ContainsRune(tag, ':') {
+		errors = append(errors, fmt.Errorf("tag %q is missing a category", tag))
+	}
+
+	return warnings, errors
+}
+
+func validateUserCID(attrName string) func(v interface{}, key string) (warnings []string, errors []error) {
+	return func(v interface{}, key string) (warnings []string, errors []error) {
+		valid := regexp.MustCompile(config.UserCIDRegex)
+
+		if !valid.MatchString(v.(string)) {
+			errors = append(errors, fmt.Errorf("Invalid %s specified (%q)", attrName, v.(string)))
+		}
+
+		return warnings, errors
+	}
+}
+
+type urlParseFlags int
+
+const (
+	urlIsAbs urlParseFlags = 1 << iota
+	urlOptional
+	urlWithoutPort
+	urlWithoutSchema
+)
+
+const urlBasicCheck urlParseFlags = 0
+
+func validateHTTPURL(attrName schemaAttr, checkFlags urlParseFlags) func(v interface{}, key string) (warnings []string, errors []error) {
+	return func(v interface{}, key string) (warnings []string, errors []error) {
+		s := v.(string)
+		if checkFlags&urlOptional != 0 && s == "" {
+			return warnings, errors
+		}
+
+		u, err := url.Parse(v.(string))
+		switch {
+		case err != nil:
+			errors = append(errors, errwrap.Wrapf(fmt.Sprintf("Invalid %s specified (%q): {{err}}", attrName, v.(string)), err))
+		case u.Host == "":
+			errors = append(errors, fmt.Errorf("Invalid %s specified: host can not be empty", attrName))
+		case !(u.Scheme == "http" || u.Scheme == "https"):
+			errors = append(errors, fmt.Errorf("Invalid %s specified: unsupported scheme (only http and https are supported)", attrName))
+		}
+
+		if checkFlags&urlIsAbs != 0 && !u.IsAbs() {
+			errors = append(errors, fmt.Errorf("Scheme is missing from URL %q (HINT: https://%s)", v.(string), v.(string)))
+		}
+
+		if checkFlags&urlWithoutSchema != 0 && u.IsAbs() {
+			errors = append(errors, fmt.Errorf("Scheme is present on URL %q (HINT: drop the https://%s)", v.(string), v.(string)))
+		}
+
+		if checkFlags&urlWithoutPort != 0 {
+			hostParts := strings.SplitN(u.Host, ":", 2)
+			if len(hostParts) != 1 {
+				errors = append(errors, fmt.Errorf("Port is present on URL %q (HINT: drop the :%s)", v.(string), hostParts[1]))
+			}
+		}
+
+		return warnings, errors
+	}
+}
+
+func validateStringIn(attrName schemaAttr, valid validStringValues) func(v interface{}, key string) (warnings []string, errors []error) {
+	return func(v interface{}, key string) (warnings []string, errors []error) {
+		s := v.(string)
+		var found bool
+		for i := range valid {
+			if s == string(valid[i]) {
+				found =
true + break + } + } + + if !found { + errors = append(errors, fmt.Errorf("Invalid %q specified: %q not found in list %#v", string(attrName), s, valid)) + } + + return warnings, errors + } +} diff --git a/builtin/providers/cloudstack/resource_cloudstack_ipaddress.go b/builtin/providers/cloudstack/resource_cloudstack_ipaddress.go index ffee80f4a0..9bdd4ab4a7 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_ipaddress.go +++ b/builtin/providers/cloudstack/resource_cloudstack_ipaddress.go @@ -28,6 +28,12 @@ func resourceCloudStackIPAddress() *schema.Resource { ForceNew: true, }, + "zone_id": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + "project": &schema.Schema{ Type: schema.TypeString, Optional: true, @@ -63,6 +69,11 @@ func resourceCloudStackIPAddressCreate(d *schema.ResourceData, meta interface{}) p.SetVpcid(vpcid.(string)) } + if zoneid, ok := d.GetOk("zone_id"); ok { + // Set the vpcid + p.SetZoneid(zoneid.(string)) + } + // If there is a project supplied, we retrieve and set the project id if err := setProjectid(p, cs, d); err != nil { return err @@ -109,6 +120,10 @@ func resourceCloudStackIPAddressRead(d *schema.ResourceData, meta interface{}) e d.Set("vpc_id", ip.Vpcid) } + if _, ok := d.GetOk("zone_id"); ok { + d.Set("zone_id", ip.Zoneid) + } + setValueOrID(d, "project", ip.Project, ip.Projectid) return nil diff --git a/builtin/providers/consul/config.go b/builtin/providers/consul/config.go index c048b5ddac..959293f84f 100644 --- a/builtin/providers/consul/config.go +++ b/builtin/providers/consul/config.go @@ -3,6 +3,7 @@ package consul import ( "log" "net/http" + "strings" consulapi "github.com/hashicorp/consul/api" ) @@ -11,6 +12,7 @@ type Config struct { Datacenter string `mapstructure:"datacenter"` Address string `mapstructure:"address"` Scheme string `mapstructure:"scheme"` + HttpAuth string `mapstructure:"http_auth"` Token string `mapstructure:"token"` CAFile string `mapstructure:"ca_file"` CertFile string `mapstructure:"cert_file"` @@ -41,6 +43,18 @@ func (c *Config) Client() (*consulapi.Client, error) { } config.HttpClient.Transport.(*http.Transport).TLSClientConfig = cc + if c.HttpAuth != "" { + var username, password string + if strings.Contains(c.HttpAuth, ":") { + split := strings.SplitN(c.HttpAuth, ":", 2) + username = split[0] + password = split[1] + } else { + username = c.HttpAuth + } + config.HttpAuth = &consulapi.HttpBasicAuth{username, password} + } + if c.Token != "" { config.Token = c.Token } diff --git a/builtin/providers/consul/resource_provider.go b/builtin/providers/consul/resource_provider.go index fb316adccf..dc800e3661 100644 --- a/builtin/providers/consul/resource_provider.go +++ b/builtin/providers/consul/resource_provider.go @@ -35,6 +35,12 @@ func Provider() terraform.ResourceProvider { }, "http"), }, + "http_auth": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + DefaultFunc: schema.EnvDefaultFunc("CONSUL_HTTP_AUTH", ""), + }, + "ca_file": &schema.Schema{ Type: schema.TypeString, Optional: true, diff --git a/builtin/providers/datadog/import_datadog_downtime_test.go b/builtin/providers/datadog/import_datadog_downtime_test.go new file mode 100644 index 0000000000..4c5e3454ce --- /dev/null +++ b/builtin/providers/datadog/import_datadog_downtime_test.go @@ -0,0 +1,37 @@ +package datadog + +import ( + "testing" + + "github.com/hashicorp/terraform/helper/resource" +) + +func TestDatadogDowntime_import(t *testing.T) { + resourceName := "datadog_downtime.foo" + + resource.Test(t, 
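+		// Two-step import test: the first step applies the config, the
+		// second re-reads the same resource by ID via ImportState and,
+		// with ImportStateVerify, asserts that the imported attributes
+		// match the state the apply produced.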
resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckDatadogDowntimeDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccCheckDatadogDowntimeConfigImported, + }, + resource.TestStep{ + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +const testAccCheckDatadogDowntimeConfigImported = ` +resource "datadog_downtime" "foo" { + scope = ["host:X", "host:Y"] + start = 1735707600 + end = 1735765200 + + message = "Example Datadog downtime message." +} +` diff --git a/builtin/providers/datadog/provider.go b/builtin/providers/datadog/provider.go index 60b4cef277..0c97f80172 100644 --- a/builtin/providers/datadog/provider.go +++ b/builtin/providers/datadog/provider.go @@ -25,6 +25,7 @@ func Provider() terraform.ResourceProvider { }, ResourcesMap: map[string]*schema.Resource{ + "datadog_downtime": resourceDatadogDowntime(), "datadog_monitor": resourceDatadogMonitor(), "datadog_timeboard": resourceDatadogTimeboard(), "datadog_user": resourceDatadogUser(), diff --git a/builtin/providers/datadog/resource_datadog_downtime.go b/builtin/providers/datadog/resource_datadog_downtime.go new file mode 100644 index 0000000000..29bd3240fd --- /dev/null +++ b/builtin/providers/datadog/resource_datadog_downtime.go @@ -0,0 +1,339 @@ +package datadog + +import ( + "fmt" + "log" + "strconv" + "strings" + + "github.com/hashicorp/terraform/helper/schema" + "gopkg.in/zorkian/go-datadog-api.v2" +) + +func resourceDatadogDowntime() *schema.Resource { + return &schema.Resource{ + Create: resourceDatadogDowntimeCreate, + Read: resourceDatadogDowntimeRead, + Update: resourceDatadogDowntimeUpdate, + Delete: resourceDatadogDowntimeDelete, + Exists: resourceDatadogDowntimeExists, + Importer: &schema.ResourceImporter{ + State: resourceDatadogDowntimeImport, + }, + + Schema: map[string]*schema.Schema{ + "active": { + Type: schema.TypeBool, + Optional: true, + }, + "disabled": { + Type: schema.TypeBool, + Optional: true, + }, + "end": { + Type: schema.TypeInt, + Optional: true, + }, + "message": { + Type: schema.TypeString, + Optional: true, + StateFunc: func(val interface{}) string { + return strings.TrimSpace(val.(string)) + }, + }, + "recurrence": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "period": { + Type: schema.TypeInt, + Required: true, + }, + "type": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validateDatadogDowntimeRecurrenceType, + }, + "until_date": { + Type: schema.TypeInt, + Optional: true, + ConflictsWith: []string{"recurrence.until_occurrences"}, + }, + "until_occurrences": { + Type: schema.TypeInt, + Optional: true, + ConflictsWith: []string{"recurrence.until_date"}, + }, + "week_days": { + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validateDatadogDowntimeRecurrenceWeekDays, + }, + }, + }, + }, + }, + "scope": { + Type: schema.TypeList, + Required: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "start": { + Type: schema.TypeInt, + Optional: true, + }, + }, + } +} + +func buildDowntimeStruct(d *schema.ResourceData) *datadog.Downtime { + var dt datadog.Downtime + + if attr, ok := d.GetOk("active"); ok { + dt.SetActive(attr.(bool)) + } + if attr, ok := d.GetOk("disabled"); ok { + dt.SetDisabled(attr.(bool)) + } + if attr, ok := d.GetOk("end"); ok { + dt.SetEnd(attr.(int)) + } + if attr, ok := 
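+	// The schema's StateFunc trims the message before it is stored, so the
+	// same TrimSpace is applied here on the way to the API; otherwise a
+	// config value with surrounding whitespace would diff against the
+	// trimmed state on every plan.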
d.GetOk("message"); ok { + dt.SetMessage(strings.TrimSpace(attr.(string))) + } + if _, ok := d.GetOk("recurrence"); ok { + var recurrence datadog.Recurrence + + if attr, ok := d.GetOk("recurrence.0.period"); ok { + recurrence.SetPeriod(attr.(int)) + } + if attr, ok := d.GetOk("recurrence.0.type"); ok { + recurrence.SetType(attr.(string)) + } + if attr, ok := d.GetOk("recurrence.0.until_date"); ok { + recurrence.SetUntilDate(attr.(int)) + } + if attr, ok := d.GetOk("recurrence.0.until_occurrences"); ok { + recurrence.SetUntilOccurrences(attr.(int)) + } + if attr, ok := d.GetOk("recurrence.0.week_days"); ok { + weekDays := make([]string, 0, len(attr.([]interface{}))) + for _, weekDay := range attr.([]interface{}) { + weekDays = append(weekDays, weekDay.(string)) + } + recurrence.WeekDays = weekDays + } + + dt.SetRecurrence(recurrence) + } + scope := []string{} + for _, s := range d.Get("scope").([]interface{}) { + scope = append(scope, s.(string)) + } + dt.Scope = scope + if attr, ok := d.GetOk("start"); ok { + dt.SetStart(attr.(int)) + } + + return &dt +} + +func resourceDatadogDowntimeExists(d *schema.ResourceData, meta interface{}) (b bool, e error) { + // Exists - This is called to verify a resource still exists. It is called prior to Read, + // and lowers the burden of Read to be able to assume the resource exists. + client := meta.(*datadog.Client) + + id, err := strconv.Atoi(d.Id()) + if err != nil { + return false, err + } + + if _, err = client.GetDowntime(id); err != nil { + if strings.Contains(err.Error(), "404 Not Found") { + return false, nil + } + return false, err + } + + return true, nil +} + +func resourceDatadogDowntimeCreate(d *schema.ResourceData, meta interface{}) error { + client := meta.(*datadog.Client) + + dts := buildDowntimeStruct(d) + dt, err := client.CreateDowntime(dts) + if err != nil { + return fmt.Errorf("error updating downtime: %s", err.Error()) + } + + d.SetId(strconv.Itoa(dt.GetId())) + + return nil +} + +func resourceDatadogDowntimeRead(d *schema.ResourceData, meta interface{}) error { + client := meta.(*datadog.Client) + + id, err := strconv.Atoi(d.Id()) + if err != nil { + return err + } + + dt, err := client.GetDowntime(id) + if err != nil { + return err + } + + log.Printf("[DEBUG] downtime: %v", dt) + d.Set("active", dt.GetActive()) + d.Set("disabled", dt.GetDisabled()) + d.Set("end", dt.GetEnd()) + d.Set("message", dt.GetMessage()) + if r, ok := dt.GetRecurrenceOk(); ok { + recurrence := make(map[string]interface{}) + recurrenceList := make([]map[string]interface{}, 0, 1) + + if attr, ok := r.GetPeriodOk(); ok { + recurrence["period"] = strconv.Itoa(attr) + } + if attr, ok := r.GetTypeOk(); ok { + recurrence["type"] = attr + } + if attr, ok := r.GetUntilDateOk(); ok { + recurrence["until_date"] = strconv.Itoa(attr) + } + if attr, ok := r.GetUntilOccurrencesOk(); ok { + recurrence["until_occurrences"] = strconv.Itoa(attr) + } + if r.WeekDays != nil { + weekDays := make([]string, 0, len(r.WeekDays)) + for _, weekDay := range r.WeekDays { + weekDays = append(weekDays, weekDay) + } + recurrence["week_days"] = weekDays + } + recurrenceList = append(recurrenceList, recurrence) + d.Set("recurrence", recurrenceList) + } + d.Set("scope", dt.Scope) + d.Set("start", dt.GetStart()) + + return nil +} + +func resourceDatadogDowntimeUpdate(d *schema.ResourceData, meta interface{}) error { + client := meta.(*datadog.Client) + + var dt datadog.Downtime + + id, err := strconv.Atoi(d.Id()) + if err != nil { + return err + } + + dt.SetId(id) + if attr, ok := 
d.GetOk("active"); ok { + dt.SetActive(attr.(bool)) + } + if attr, ok := d.GetOk("disabled"); ok { + dt.SetDisabled(attr.(bool)) + } + if attr, ok := d.GetOk("end"); ok { + dt.SetEnd(attr.(int)) + } + if attr, ok := d.GetOk("message"); ok { + dt.SetMessage(attr.(string)) + } + + if _, ok := d.GetOk("recurrence"); ok { + var recurrence datadog.Recurrence + + if attr, ok := d.GetOk("recurrence.0.period"); ok { + recurrence.SetPeriod(attr.(int)) + } + if attr, ok := d.GetOk("recurrence.0.type"); ok { + recurrence.SetType(attr.(string)) + } + if attr, ok := d.GetOk("recurrence.0.until_date"); ok { + recurrence.SetUntilDate(attr.(int)) + } + if attr, ok := d.GetOk("recurrence.0.until_occurrences"); ok { + recurrence.SetUntilOccurrences(attr.(int)) + } + if attr, ok := d.GetOk("recurrence.0.week_days"); ok { + weekDays := make([]string, 0, len(attr.([]interface{}))) + for _, weekDay := range attr.([]interface{}) { + weekDays = append(weekDays, weekDay.(string)) + } + recurrence.WeekDays = weekDays + } + + dt.SetRecurrence(recurrence) + } + + scope := make([]string, 0) + for _, v := range d.Get("scope").([]interface{}) { + scope = append(scope, v.(string)) + } + dt.Scope = scope + if attr, ok := d.GetOk("start"); ok { + dt.SetStart(attr.(int)) + } + + if err = client.UpdateDowntime(&dt); err != nil { + return fmt.Errorf("error updating downtime: %s", err.Error()) + } + + return resourceDatadogDowntimeRead(d, meta) +} + +func resourceDatadogDowntimeDelete(d *schema.ResourceData, meta interface{}) error { + client := meta.(*datadog.Client) + + id, err := strconv.Atoi(d.Id()) + if err != nil { + return err + } + + if err = client.DeleteDowntime(id); err != nil { + return err + } + + return nil +} + +func resourceDatadogDowntimeImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { + if err := resourceDatadogDowntimeRead(d, meta); err != nil { + return nil, err + } + return []*schema.ResourceData{d}, nil +} + +func validateDatadogDowntimeRecurrenceType(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + switch value { + case "days", "months", "weeks", "years": + break + default: + errors = append(errors, fmt.Errorf( + "%q contains an invalid recurrence type parameter %q. Valid parameters are days, months, weeks, or years", k, value)) + } + return +} + +func validateDatadogDowntimeRecurrenceWeekDays(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + switch value { + case "Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun": + break + default: + errors = append(errors, fmt.Errorf( + "%q contains an invalid recurrence week day parameter %q. 
Valid parameters are Mon, Tue, Wed, Thu, Fri, Sat, or Sun", k, value)) + } + return +} diff --git a/builtin/providers/datadog/resource_datadog_downtime_test.go b/builtin/providers/datadog/resource_datadog_downtime_test.go new file mode 100644 index 0000000000..e44c69b9bb --- /dev/null +++ b/builtin/providers/datadog/resource_datadog_downtime_test.go @@ -0,0 +1,527 @@ +package datadog + +import ( + "fmt" + "strconv" + "strings" + "testing" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" + "gopkg.in/zorkian/go-datadog-api.v2" +) + +func TestAccDatadogDowntime_Basic(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckDatadogDowntimeDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccCheckDatadogDowntimeConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckDatadogDowntimeExists("datadog_downtime.foo"), + resource.TestCheckResourceAttr( + "datadog_downtime.foo", "scope.0", "*"), + resource.TestCheckResourceAttr( + "datadog_downtime.foo", "start", "1735707600"), + resource.TestCheckResourceAttr( + "datadog_downtime.foo", "end", "1735765200"), + resource.TestCheckResourceAttr( + "datadog_downtime.foo", "recurrence.0.type", "days"), + resource.TestCheckResourceAttr( + "datadog_downtime.foo", "recurrence.0.period", "1"), + resource.TestCheckResourceAttr( + "datadog_downtime.foo", "message", "Example Datadog downtime message."), + ), + }, + }, + }) +} + +func TestAccDatadogDowntime_BasicMultiScope(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckDatadogDowntimeDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccCheckDatadogDowntimeConfigMultiScope, + Check: resource.ComposeTestCheckFunc( + testAccCheckDatadogDowntimeExists("datadog_downtime.foo"), + resource.TestCheckResourceAttr( + "datadog_downtime.foo", "scope.0", "host:A"), + resource.TestCheckResourceAttr( + "datadog_downtime.foo", "scope.1", "host:B"), + resource.TestCheckResourceAttr( + "datadog_downtime.foo", "start", "1735707600"), + resource.TestCheckResourceAttr( + "datadog_downtime.foo", "end", "1735765200"), + resource.TestCheckResourceAttr( + "datadog_downtime.foo", "recurrence.0.type", "days"), + resource.TestCheckResourceAttr( + "datadog_downtime.foo", "recurrence.0.period", "1"), + resource.TestCheckResourceAttr( + "datadog_downtime.foo", "message", "Example Datadog downtime message."), + ), + }, + }, + }) +} + +func TestAccDatadogDowntime_BasicNoRecurrence(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckDatadogDowntimeDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccCheckDatadogDowntimeConfigNoRecurrence, + Check: resource.ComposeTestCheckFunc( + testAccCheckDatadogDowntimeExists("datadog_downtime.foo"), + resource.TestCheckResourceAttr( + "datadog_downtime.foo", "scope.0", "host:NoRecurrence"), + resource.TestCheckResourceAttr( + "datadog_downtime.foo", "start", "1735707600"), + resource.TestCheckResourceAttr( + "datadog_downtime.foo", "end", "1735765200"), + resource.TestCheckResourceAttr( + "datadog_downtime.foo", "message", "Example Datadog downtime message."), + ), + }, + }, + }) +} + +func TestAccDatadogDowntime_BasicUntilDateRecurrence(t *testing.T) { + resource.Test(t, 
resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckDatadogDowntimeDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccCheckDatadogDowntimeConfigUntilDateRecurrence, + Check: resource.ComposeTestCheckFunc( + testAccCheckDatadogDowntimeExists("datadog_downtime.foo"), + resource.TestCheckResourceAttr( + "datadog_downtime.foo", "scope.0", "host:UntilDateRecurrence"), + resource.TestCheckResourceAttr( + "datadog_downtime.foo", "start", "1735707600"), + resource.TestCheckResourceAttr( + "datadog_downtime.foo", "end", "1735765200"), + resource.TestCheckResourceAttr( + "datadog_downtime.foo", "recurrence.0.type", "days"), + resource.TestCheckResourceAttr( + "datadog_downtime.foo", "recurrence.0.period", "1"), + resource.TestCheckResourceAttr( + "datadog_downtime.foo", "recurrence.0.until_date", "1736226000"), + resource.TestCheckResourceAttr( + "datadog_downtime.foo", "message", "Example Datadog downtime message."), + ), + }, + }, + }) +} + +func TestAccDatadogDowntime_BasicUntilOccurrencesRecurrence(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckDatadogDowntimeDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccCheckDatadogDowntimeConfigUntilOccurrencesRecurrence, + Check: resource.ComposeTestCheckFunc( + testAccCheckDatadogDowntimeExists("datadog_downtime.foo"), + resource.TestCheckResourceAttr( + "datadog_downtime.foo", "scope.0", "host:UntilOccurrencesRecurrence"), + resource.TestCheckResourceAttr( + "datadog_downtime.foo", "start", "1735707600"), + resource.TestCheckResourceAttr( + "datadog_downtime.foo", "end", "1735765200"), + resource.TestCheckResourceAttr( + "datadog_downtime.foo", "recurrence.0.type", "days"), + resource.TestCheckResourceAttr( + "datadog_downtime.foo", "recurrence.0.period", "1"), + resource.TestCheckResourceAttr( + "datadog_downtime.foo", "recurrence.0.until_occurrences", "5"), + resource.TestCheckResourceAttr( + "datadog_downtime.foo", "message", "Example Datadog downtime message."), + ), + }, + }, + }) +} + +func TestAccDatadogDowntime_WeekDayRecurring(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckDatadogDowntimeDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccCheckDatadogDowntimeConfigWeekDaysRecurrence, + Check: resource.ComposeTestCheckFunc( + testAccCheckDatadogDowntimeExists("datadog_downtime.foo"), + resource.TestCheckResourceAttr( + "datadog_downtime.foo", "scope.0", "WeekDaysRecurrence"), + resource.TestCheckResourceAttr( + "datadog_downtime.foo", "start", "1735646400"), + resource.TestCheckResourceAttr( + "datadog_downtime.foo", "end", "1735732799"), + resource.TestCheckResourceAttr( + "datadog_downtime.foo", "recurrence.0.type", "weeks"), + resource.TestCheckResourceAttr( + "datadog_downtime.foo", "recurrence.0.period", "1"), + resource.TestCheckResourceAttr( + "datadog_downtime.foo", "recurrence.0.week_days.0", "Sat"), + resource.TestCheckResourceAttr( + "datadog_downtime.foo", "recurrence.0.week_days.1", "Sun"), + resource.TestCheckResourceAttr( + "datadog_downtime.foo", "message", "Example Datadog downtime message."), + ), + }, + }, + }) +} + +func TestAccDatadogDowntime_Updated(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: 
testAccProviders, + CheckDestroy: testAccCheckDatadogDowntimeDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccCheckDatadogDowntimeConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckDatadogDowntimeExists("datadog_downtime.foo"), + resource.TestCheckResourceAttr( + "datadog_downtime.foo", "scope.0", "*"), + resource.TestCheckResourceAttr( + "datadog_downtime.foo", "start", "1735707600"), + resource.TestCheckResourceAttr( + "datadog_downtime.foo", "end", "1735765200"), + resource.TestCheckResourceAttr( + "datadog_downtime.foo", "recurrence.0.type", "days"), + resource.TestCheckResourceAttr( + "datadog_downtime.foo", "recurrence.0.period", "1"), + resource.TestCheckResourceAttr( + "datadog_downtime.foo", "message", "Example Datadog downtime message."), + ), + }, + resource.TestStep{ + Config: testAccCheckDatadogDowntimeConfigUpdated, + Check: resource.ComposeTestCheckFunc( + testAccCheckDatadogDowntimeExists("datadog_downtime.foo"), + resource.TestCheckResourceAttr( + "datadog_downtime.foo", "scope.0", "Updated"), + resource.TestCheckResourceAttr( + "datadog_downtime.foo", "start", "1735707600"), + resource.TestCheckResourceAttr( + "datadog_downtime.foo", "end", "1735765200"), + resource.TestCheckResourceAttr( + "datadog_downtime.foo", "recurrence.0.type", "days"), + resource.TestCheckResourceAttr( + "datadog_downtime.foo", "recurrence.0.period", "3"), + resource.TestCheckResourceAttr( + "datadog_downtime.foo", "message", "Example Datadog downtime message."), + ), + }, + }, + }) +} + +func TestAccDatadogDowntime_TrimWhitespace(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckDatadogDowntimeDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccCheckDatadogDowntimeConfigWhitespace, + Check: resource.ComposeTestCheckFunc( + testAccCheckDatadogDowntimeExists("datadog_downtime.foo"), + resource.TestCheckResourceAttr( + "datadog_downtime.foo", "scope.0", "host:Whitespace"), + resource.TestCheckResourceAttr( + "datadog_downtime.foo", "start", "1735707600"), + resource.TestCheckResourceAttr( + "datadog_downtime.foo", "end", "1735765200"), + resource.TestCheckResourceAttr( + "datadog_downtime.foo", "recurrence.0.type", "days"), + resource.TestCheckResourceAttr( + "datadog_downtime.foo", "recurrence.0.period", "1"), + resource.TestCheckResourceAttr( + "datadog_downtime.foo", "message", "Example Datadog downtime message."), + ), + }, + }, + }) +} + +func testAccCheckDatadogDowntimeDestroy(s *terraform.State) error { + client := testAccProvider.Meta().(*datadog.Client) + + if err := datadogDowntimeDestroyHelper(s, client); err != nil { + return err + } + return nil +} + +func testAccCheckDatadogDowntimeExists(n string) resource.TestCheckFunc { + return func(s *terraform.State) error { + client := testAccProvider.Meta().(*datadog.Client) + if err := datadogDowntimeExistsHelper(s, client); err != nil { + return err + } + return nil + } +} + +const testAccCheckDatadogDowntimeConfig = ` +resource "datadog_downtime" "foo" { + scope = ["*"] + start = 1735707600 + end = 1735765200 + + recurrence { + type = "days" + period = 1 + } + + message = "Example Datadog downtime message." +} +` + +const testAccCheckDatadogDowntimeConfigMultiScope = ` +resource "datadog_downtime" "foo" { + scope = ["host:A", "host:B"] + start = 1735707600 + end = 1735765200 + + recurrence { + type = "days" + period = 1 + } + + message = "Example Datadog downtime message." 
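+  # scope is an ordered TypeList, which is what the indexed scope.0 and
+  # scope.1 assertions in TestAccDatadogDowntime_BasicMultiScope rely on.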
+}
+`
+
+const testAccCheckDatadogDowntimeConfigNoRecurrence = `
+resource "datadog_downtime" "foo" {
+  scope = ["host:NoRecurrence"]
+  start = 1735707600
+  end = 1735765200
+  message = "Example Datadog downtime message."
+}
+`
+
+const testAccCheckDatadogDowntimeConfigUntilDateRecurrence = `
+resource "datadog_downtime" "foo" {
+  scope = ["host:UntilDateRecurrence"]
+  start = 1735707600
+  end = 1735765200
+
+  recurrence {
+    type = "days"
+    period = 1
+    until_date = 1736226000
+  }
+
+  message = "Example Datadog downtime message."
+}
+`
+
+const testAccCheckDatadogDowntimeConfigUntilOccurrencesRecurrence = `
+resource "datadog_downtime" "foo" {
+  scope = ["host:UntilOccurrencesRecurrence"]
+  start = 1735707600
+  end = 1735765200
+
+  recurrence {
+    type = "days"
+    period = 1
+    until_occurrences = 5
+  }
+
+  message = "Example Datadog downtime message."
+}
+`
+
+const testAccCheckDatadogDowntimeConfigWeekDaysRecurrence = `
+resource "datadog_downtime" "foo" {
+  scope = ["WeekDaysRecurrence"]
+  start = 1735646400
+  end = 1735732799
+
+  recurrence {
+    period = 1
+    type = "weeks"
+    week_days = ["Sat", "Sun"]
+  }
+
+  message = "Example Datadog downtime message."
+}
+`
+
+const testAccCheckDatadogDowntimeConfigUpdated = `
+resource "datadog_downtime" "foo" {
+  scope = ["Updated"]
+  start = 1735707600
+  end = 1735765200
+
+  recurrence {
+    type = "days"
+    period = 3
+  }
+
+  message = "Example Datadog downtime message."
+}
+`
+
+const testAccCheckDatadogDowntimeConfigWhitespace = `
+resource "datadog_downtime" "foo" {
+  scope = ["host:Whitespace"]
+  start = 1735707600
+  end = 1735765200
+
+  recurrence {
+    type = "days"
+    period = 1
+  }
+
+  message = <<EOF
+    Example Datadog downtime message.
+EOF
+}
+`
+	if len(records) > 0 {
+		d.Set("record", records[0])
+	} else {
+		d.Set("record", "")
+	}
+	d.Set("records", records)
+	return nil
+}
diff --git a/builtin/providers/dns/provider.go b/builtin/providers/dns/provider.go
index 785621782e..8d960dac69 100644
--- a/builtin/providers/dns/provider.go
+++ b/builtin/providers/dns/provider.go
@@ -49,6 +49,12 @@ func Provider() terraform.ResourceProvider {
 			},
 		},
 
+		DataSourcesMap: map[string]*schema.Resource{
+			"dns_a_record_set":     dataSourceDnsARecordSet(),
+			"dns_cname_record_set": dataSourceDnsCnameRecordSet(),
+			"dns_txt_record_set":   dataSourceDnsTxtRecordSet(),
+		},
+
 		ResourcesMap: map[string]*schema.Resource{
 			"dns_a_record_set":    resourceDnsARecordSet(),
 			"dns_aaaa_record_set": resourceDnsAAAARecordSet(),
diff --git a/builtin/providers/dns/resource_dns_a_record_set_test.go b/builtin/providers/dns/resource_dns_a_record_set_test.go
index 5eb632e426..45fc1a5250 100644
--- a/builtin/providers/dns/resource_dns_a_record_set_test.go
+++ b/builtin/providers/dns/resource_dns_a_record_set_test.go
@@ -10,7 +10,7 @@ import (
 	"github.com/miekg/dns"
 )
 
-func TestAccDnsARecordSet_basic(t *testing.T) {
+func TestAccDnsARecordSet_Basic(t *testing.T) {
 	resource.Test(t, resource.TestCase{
 
 		PreCheck: func() { testAccPreCheck(t) },
diff --git a/builtin/providers/dns/test_check_attr_string_array.go b/builtin/providers/dns/test_check_attr_string_array.go
new file mode 100644
index 0000000000..344c7cb8de
--- /dev/null
+++ b/builtin/providers/dns/test_check_attr_string_array.go
@@ -0,0 +1,52 @@
+package dns
+
+import (
+	"fmt"
+	"strconv"
+
+	r "github.com/hashicorp/terraform/helper/resource"
+	"github.com/hashicorp/terraform/terraform"
+)
+
+func testCheckAttrStringArray(name, key string, value []string) r.TestCheckFunc {
+	return func(s *terraform.State) error {
+		ms := s.RootModule()
+		rs, ok := ms.Resources[name]
+		if !ok {
+			return fmt.Errorf("Not found: 
%s", name) + } + + is := rs.Primary + if is == nil { + return fmt.Errorf("No primary instance: %s", name) + } + + attrKey := fmt.Sprintf("%s.#", key) + count, ok := is.Attributes[attrKey] + if !ok { + return fmt.Errorf("Attributes not found for %s", attrKey) + } + + got, _ := strconv.Atoi(count) + if got != len(value) { + return fmt.Errorf("Mismatch array count for %s: got %s, wanted %d", key, count, len(value)) + } + + for i, want := range value { + attrKey = fmt.Sprintf("%s.%d", key, i) + got, ok := is.Attributes[attrKey] + if !ok { + return fmt.Errorf("Missing array item for %s", attrKey) + } + if got != want { + return fmt.Errorf( + "Mismatched array item for %s: got %s, want %s", + attrKey, + got, + want) + } + } + + return nil + } +} diff --git a/builtin/providers/dnsimple/provider.go b/builtin/providers/dnsimple/provider.go index b06eef76d3..1c73c1a5b3 100644 --- a/builtin/providers/dnsimple/provider.go +++ b/builtin/providers/dnsimple/provider.go @@ -14,7 +14,7 @@ func Provider() terraform.ResourceProvider { "email": &schema.Schema{ Type: schema.TypeString, Optional: true, - DefaultFunc: schema.EnvDefaultFunc("DNSIMPLE_EMAIL", nil), + DefaultFunc: schema.EnvDefaultFunc("DNSIMPLE_EMAIL", ""), Description: "The DNSimple account email address.", }, diff --git a/builtin/providers/dnsimple/resource_dnsimple_record.go b/builtin/providers/dnsimple/resource_dnsimple_record.go index 4a9f7aa2ed..a5e39472c6 100644 --- a/builtin/providers/dnsimple/resource_dnsimple_record.go +++ b/builtin/providers/dnsimple/resource_dnsimple_record.go @@ -58,6 +58,7 @@ func resourceDNSimpleRecord() *schema.Resource { "priority": { Type: schema.TypeString, Computed: true, + Optional: true, }, }, } @@ -76,6 +77,10 @@ func resourceDNSimpleRecordCreate(d *schema.ResourceData, meta interface{}) erro newRecord.TTL, _ = strconv.Atoi(attr.(string)) } + if attr, ok := d.GetOk("priority"); ok { + newRecord.Priority, _ = strconv.Atoi(attr.(string)) + } + log.Printf("[DEBUG] DNSimple Record create configuration: %#v", newRecord) resp, err := provider.client.Zones.CreateRecord(provider.config.Account, d.Get("domain").(string), newRecord) @@ -142,6 +147,10 @@ func resourceDNSimpleRecordUpdate(d *schema.ResourceData, meta interface{}) erro updateRecord.TTL, _ = strconv.Atoi(attr.(string)) } + if attr, ok := d.GetOk("priority"); ok { + updateRecord.Priority, _ = strconv.Atoi(attr.(string)) + } + log.Printf("[DEBUG] DNSimple Record update configuration: %#v", updateRecord) _, err = provider.client.Zones.UpdateRecord(provider.config.Account, d.Get("domain").(string), recordID, updateRecord) diff --git a/builtin/providers/dnsimple/resource_dnsimple_record_test.go b/builtin/providers/dnsimple/resource_dnsimple_record_test.go index 7195ba2c41..e7e5e876f5 100644 --- a/builtin/providers/dnsimple/resource_dnsimple_record_test.go +++ b/builtin/providers/dnsimple/resource_dnsimple_record_test.go @@ -37,6 +37,33 @@ func TestAccDNSimpleRecord_Basic(t *testing.T) { }) } +func TestAccDNSimpleRecord_CreateMxWithPriority(t *testing.T) { + var record dnsimple.ZoneRecord + domain := os.Getenv("DNSIMPLE_DOMAIN") + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckDNSimpleRecordDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: fmt.Sprintf(testAccCheckDNSimpleRecordConfig_mx, domain), + Check: resource.ComposeTestCheckFunc( + testAccCheckDNSimpleRecordExists("dnsimple_record.foobar", &record), + resource.TestCheckResourceAttr( + 
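
The `priority` handling added above converts through `strconv.Atoi` and discards the error, because the schema stores the attribute as a string. A minimal sketch of a stricter variant under the same schema assumptions (the helper name is ours, not the provider's):

package dnsimple

import (
	"fmt"
	"strconv"

	"github.com/hashicorp/terraform/helper/schema"
)

// parsePriority mirrors the GetOk + Atoi pattern from the diff, but
// surfaces a malformed value instead of silently treating it as zero.
func parsePriority(d *schema.ResourceData) (int, error) {
	attr, ok := d.GetOk("priority")
	if !ok {
		return 0, nil // attribute unset; let the API apply its default
	}
	p, err := strconv.Atoi(attr.(string))
	if err != nil {
		return 0, fmt.Errorf("priority %q is not a number: %s", attr, err)
	}
	return p, nil
}
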
"dnsimple_record.foobar", "name", ""), + resource.TestCheckResourceAttr( + "dnsimple_record.foobar", "domain", domain), + resource.TestCheckResourceAttr( + "dnsimple_record.foobar", "value", "mx.example.com"), + resource.TestCheckResourceAttr( + "dnsimple_record.foobar", "priority", "5"), + ), + }, + }, + }) +} + func TestAccDNSimpleRecord_Updated(t *testing.T) { var record dnsimple.ZoneRecord domain := os.Getenv("DNSIMPLE_DOMAIN") @@ -76,6 +103,47 @@ func TestAccDNSimpleRecord_Updated(t *testing.T) { }) } +func TestAccDNSimpleRecord_UpdatedMx(t *testing.T) { + var record dnsimple.ZoneRecord + domain := os.Getenv("DNSIMPLE_DOMAIN") + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckDNSimpleRecordDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: fmt.Sprintf(testAccCheckDNSimpleRecordConfig_mx, domain), + Check: resource.ComposeTestCheckFunc( + testAccCheckDNSimpleRecordExists("dnsimple_record.foobar", &record), + resource.TestCheckResourceAttr( + "dnsimple_record.foobar", "name", ""), + resource.TestCheckResourceAttr( + "dnsimple_record.foobar", "domain", domain), + resource.TestCheckResourceAttr( + "dnsimple_record.foobar", "value", "mx.example.com"), + resource.TestCheckResourceAttr( + "dnsimple_record.foobar", "priority", "5"), + ), + }, + resource.TestStep{ + Config: fmt.Sprintf(testAccCheckDNSimpleRecordConfig_mx_new_value, domain), + Check: resource.ComposeTestCheckFunc( + testAccCheckDNSimpleRecordExists("dnsimple_record.foobar", &record), + resource.TestCheckResourceAttr( + "dnsimple_record.foobar", "name", ""), + resource.TestCheckResourceAttr( + "dnsimple_record.foobar", "domain", domain), + resource.TestCheckResourceAttr( + "dnsimple_record.foobar", "value", "mx2.example.com"), + resource.TestCheckResourceAttr( + "dnsimple_record.foobar", "priority", "10"), + ), + }, + }, + }) +} + func testAccCheckDNSimpleRecordDestroy(s *terraform.State) error { provider := testAccProvider.Meta().(*Client) @@ -166,3 +234,25 @@ resource "dnsimple_record" "foobar" { type = "A" ttl = 3600 }` + +const testAccCheckDNSimpleRecordConfig_mx = ` +resource "dnsimple_record" "foobar" { + domain = "%s" + + name = "" + value = "mx.example.com" + type = "MX" + ttl = 3600 + priority = 5 +}` + +const testAccCheckDNSimpleRecordConfig_mx_new_value = ` +resource "dnsimple_record" "foobar" { + domain = "%s" + + name = "" + value = "mx2.example.com" + type = "MX" + ttl = 3600 + priority = 10 +}` diff --git a/builtin/providers/fastly/resource_fastly_service_v1.go b/builtin/providers/fastly/resource_fastly_service_v1.go index 1fd709c6f9..da734c9f0d 100644 --- a/builtin/providers/fastly/resource_fastly_service_v1.go +++ b/builtin/providers/fastly/resource_fastly_service_v1.go @@ -189,6 +189,19 @@ func resourceServiceV1() *schema.Resource { Optional: true, Default: "", Description: "SSL certificate hostname", + Deprecated: "Use ssl_cert_hostname and ssl_sni_hostname instead.", + }, + "ssl_cert_hostname": { + Type: schema.TypeString, + Optional: true, + Default: "", + Description: "SSL certificate hostname for cert verification", + }, + "ssl_sni_hostname": { + Type: schema.TypeString, + Optional: true, + Default: "", + Description: "SSL certificate hostname for SNI verification", }, // UseSSL is something we want to support in the future, but // requires SSL setup we don't yet have @@ -1011,6 +1024,8 @@ func resourceServiceV1Update(d *schema.ResourceData, meta interface{}) error { AutoLoadbalance: 
gofastly.CBool(df["auto_loadbalance"].(bool)), SSLCheckCert: gofastly.CBool(df["ssl_check_cert"].(bool)), SSLHostname: df["ssl_hostname"].(string), + SSLCertHostname: df["ssl_cert_hostname"].(string), + SSLSNIHostname: df["ssl_sni_hostname"].(string), Shield: df["shield"].(string), Port: uint(df["port"].(int)), BetweenBytesTimeout: uint(df["between_bytes_timeout"].(int)), @@ -1917,6 +1932,8 @@ func flattenBackends(backendList []*gofastly.Backend) []map[string]interface{} { "shield": b.Shield, "ssl_check_cert": gofastly.CBool(b.SSLCheckCert), "ssl_hostname": b.SSLHostname, + "ssl_cert_hostname": b.SSLCertHostname, + "ssl_sni_hostname": b.SSLSNIHostname, "weight": int(b.Weight), "request_condition": b.RequestCondition, } diff --git a/builtin/providers/fastly/resource_fastly_service_v1_test.go b/builtin/providers/fastly/resource_fastly_service_v1_test.go index 0dba2f8a64..c05006138d 100644 --- a/builtin/providers/fastly/resource_fastly_service_v1_test.go +++ b/builtin/providers/fastly/resource_fastly_service_v1_test.go @@ -73,6 +73,8 @@ func TestResourceFastlyFlattenBackend(t *testing.T) { RequestCondition: "", SSLCheckCert: true, SSLHostname: "", + SSLCertHostname: "", + SSLSNIHostname: "", Shield: "New York", Weight: uint(100), }, @@ -91,6 +93,8 @@ func TestResourceFastlyFlattenBackend(t *testing.T) { "request_condition": "", "ssl_check_cert": gofastly.CBool(true), "ssl_hostname": "", + "ssl_cert_hostname": "", + "ssl_sni_hostname": "", "shield": "New York", "weight": 100, }, diff --git a/builtin/providers/github/provider.go b/builtin/providers/github/provider.go index 9d3c6ee7e3..b3fd81d514 100644 --- a/builtin/providers/github/provider.go +++ b/builtin/providers/github/provider.go @@ -37,6 +37,8 @@ func Provider() terraform.ResourceProvider { "github_team_repository": resourceGithubTeamRepository(), "github_membership": resourceGithubMembership(), "github_repository": resourceGithubRepository(), + "github_repository_webhook": resourceGithubRepositoryWebhook(), + "github_organization_webhook": resourceGithubOrganizationWebhook(), "github_repository_collaborator": resourceGithubRepositoryCollaborator(), "github_issue_label": resourceGithubIssueLabel(), }, diff --git a/builtin/providers/github/resource_github_issue_label.go b/builtin/providers/github/resource_github_issue_label.go index f0b5e2b8d4..0d89c0343b 100644 --- a/builtin/providers/github/resource_github_issue_label.go +++ b/builtin/providers/github/resource_github_issue_label.go @@ -1,6 +1,9 @@ package github import ( + "context" + "log" + "github.com/google/go-github/github" "github.com/hashicorp/terraform/helper/schema" ) @@ -16,20 +19,20 @@ func resourceGithubIssueLabel() *schema.Resource { }, Schema: map[string]*schema.Schema{ - "repository": &schema.Schema{ + "repository": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "name": &schema.Schema{ + "name": { Type: schema.TypeString, Required: true, }, - "color": &schema.Schema{ + "color": { Type: schema.TypeString, Required: true, }, - "url": &schema.Schema{ + "url": { Type: schema.TypeString, Computed: true, }, @@ -42,11 +45,14 @@ func resourceGithubIssueLabelCreate(d *schema.ResourceData, meta interface{}) er r := d.Get("repository").(string) n := d.Get("name").(string) c := d.Get("color").(string) - - _, _, err := client.Issues.CreateLabel(meta.(*Organization).name, r, &github.Label{ + label := github.Label{ Name: &n, Color: &c, - }) + } + + log.Printf("[DEBUG] Creating label: %#v", label) + _, resp, err := client.Issues.CreateLabel(context.TODO(), 
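
The `context.TODO()` arguments threaded through every go-github call in this diff track the library's move to context-aware signatures; `TODO` is a placeholder that call sites can later swap for a real context to gain cancellation and deadlines. One caution about the debug line added just below: it dereferences `resp` before `err` is checked, and go-github can return a nil response on transport failures, so a nil guard would be safer there. A sketch of threading a real context, reusing the call shape from this file:

package github

import (
	"context"
	"time"

	"github.com/google/go-github/github"
)

// getLabelWithTimeout sketches replacing context.TODO() with a deadline
// so a hung API call fails after 30 seconds instead of blocking forever.
func getLabelWithTimeout(client *github.Client, org, repo, name string) (*github.Label, error) {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	label, _, err := client.Issues.GetLabel(ctx, org, repo, name)
	return label, err
}
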
meta.(*Organization).name, r, &label) + log.Printf("[DEBUG] Response from creating label: %s", *resp) if err != nil { return err } @@ -60,7 +66,7 @@ func resourceGithubIssueLabelRead(d *schema.ResourceData, meta interface{}) erro client := meta.(*Organization).client r, n := parseTwoPartID(d.Id()) - githubLabel, _, err := client.Issues.GetLabel(meta.(*Organization).name, r, n) + githubLabel, _, err := client.Issues.GetLabel(context.TODO(), meta.(*Organization).name, r, n) if err != nil { d.SetId("") return nil @@ -81,7 +87,7 @@ func resourceGithubIssueLabelUpdate(d *schema.ResourceData, meta interface{}) er c := d.Get("color").(string) _, originalName := parseTwoPartID(d.Id()) - _, _, err := client.Issues.EditLabel(meta.(*Organization).name, r, originalName, &github.Label{ + _, _, err := client.Issues.EditLabel(context.TODO(), meta.(*Organization).name, r, originalName, &github.Label{ Name: &n, Color: &c, }) @@ -99,6 +105,6 @@ func resourceGithubIssueLabelDelete(d *schema.ResourceData, meta interface{}) er r := d.Get("repository").(string) n := d.Get("name").(string) - _, err := client.Issues.DeleteLabel(meta.(*Organization).name, r, n) + _, err := client.Issues.DeleteLabel(context.TODO(), meta.(*Organization).name, r, n) return err } diff --git a/builtin/providers/github/resource_github_issue_label_test.go b/builtin/providers/github/resource_github_issue_label_test.go index f279bc00af..d3b3a0597f 100644 --- a/builtin/providers/github/resource_github_issue_label_test.go +++ b/builtin/providers/github/resource_github_issue_label_test.go @@ -1,6 +1,7 @@ package github import ( + "context" "fmt" "testing" @@ -17,14 +18,14 @@ func TestAccGithubIssueLabel_basic(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccGithubIssueLabelDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccGithubIssueLabelConfig, Check: resource.ComposeTestCheckFunc( testAccCheckGithubIssueLabelExists("github_issue_label.test", &label), testAccCheckGithubIssueLabelAttributes(&label, "foo", "000000"), ), }, - resource.TestStep{ + { Config: testAccGithubIssueLabelUpdateConfig, Check: resource.ComposeTestCheckFunc( testAccCheckGithubIssueLabelExists("github_issue_label.test", &label), @@ -41,10 +42,10 @@ func TestAccGithubIssueLabel_importBasic(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccGithubIssueLabelDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccGithubIssueLabelConfig, }, - resource.TestStep{ + { ResourceName: "github_issue_label.test", ImportState: true, ImportStateVerify: true, @@ -68,7 +69,7 @@ func testAccCheckGithubIssueLabelExists(n string, label *github.Label) resource. 
o := testAccProvider.Meta().(*Organization).name r, n := parseTwoPartID(rs.Primary.ID) - githubLabel, _, err := conn.Issues.GetLabel(o, r, n) + githubLabel, _, err := conn.Issues.GetLabel(context.TODO(), o, r, n) if err != nil { return err } @@ -102,7 +103,7 @@ func testAccGithubIssueLabelDestroy(s *terraform.State) error { o := testAccProvider.Meta().(*Organization).name r, n := parseTwoPartID(rs.Primary.ID) - label, res, err := conn.Issues.GetLabel(o, r, n) + label, res, err := conn.Issues.GetLabel(context.TODO(), o, r, n) if err == nil { if label != nil && diff --git a/builtin/providers/github/resource_github_membership.go b/builtin/providers/github/resource_github_membership.go index e13b0025ce..50bc2f164c 100644 --- a/builtin/providers/github/resource_github_membership.go +++ b/builtin/providers/github/resource_github_membership.go @@ -1,6 +1,8 @@ package github import ( + "context" + "github.com/google/go-github/github" "github.com/hashicorp/terraform/helper/schema" ) @@ -17,12 +19,12 @@ func resourceGithubMembership() *schema.Resource { }, Schema: map[string]*schema.Schema{ - "username": &schema.Schema{ + "username": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "role": &schema.Schema{ + "role": { Type: schema.TypeString, Optional: true, ValidateFunc: validateValueFunc([]string{"member", "admin"}), @@ -37,7 +39,7 @@ func resourceGithubMembershipCreate(d *schema.ResourceData, meta interface{}) er n := d.Get("username").(string) r := d.Get("role").(string) - membership, _, err := client.Organizations.EditOrgMembership(n, meta.(*Organization).name, + membership, _, err := client.Organizations.EditOrgMembership(context.TODO(), n, meta.(*Organization).name, &github.Membership{Role: &r}) if err != nil { return err @@ -52,7 +54,7 @@ func resourceGithubMembershipRead(d *schema.ResourceData, meta interface{}) erro client := meta.(*Organization).client _, n := parseTwoPartID(d.Id()) - membership, _, err := client.Organizations.GetOrgMembership(n, meta.(*Organization).name) + membership, _, err := client.Organizations.GetOrgMembership(context.TODO(), n, meta.(*Organization).name) if err != nil { d.SetId("") return nil @@ -68,7 +70,7 @@ func resourceGithubMembershipUpdate(d *schema.ResourceData, meta interface{}) er n := d.Get("username").(string) r := d.Get("role").(string) - membership, _, err := client.Organizations.EditOrgMembership(n, meta.(*Organization).name, &github.Membership{ + membership, _, err := client.Organizations.EditOrgMembership(context.TODO(), n, meta.(*Organization).name, &github.Membership{ Role: &r, }) if err != nil { @@ -83,7 +85,7 @@ func resourceGithubMembershipDelete(d *schema.ResourceData, meta interface{}) er client := meta.(*Organization).client n := d.Get("username").(string) - _, err := client.Organizations.RemoveOrgMembership(n, meta.(*Organization).name) + _, err := client.Organizations.RemoveOrgMembership(context.TODO(), n, meta.(*Organization).name) return err } diff --git a/builtin/providers/github/resource_github_membership_test.go b/builtin/providers/github/resource_github_membership_test.go index b6e1f19f54..0caed0e046 100644 --- a/builtin/providers/github/resource_github_membership_test.go +++ b/builtin/providers/github/resource_github_membership_test.go @@ -1,6 +1,7 @@ package github import ( + "context" "fmt" "testing" @@ -17,7 +18,7 @@ func TestAccGithubMembership_basic(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckGithubMembershipDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: 
testAccGithubMembershipConfig, Check: resource.ComposeTestCheckFunc( testAccCheckGithubMembershipExists("github_membership.test_org_membership", &membership), @@ -34,10 +35,10 @@ func TestAccGithubMembership_importBasic(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckGithubMembershipDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccGithubMembershipConfig, }, - resource.TestStep{ + { ResourceName: "github_membership.test_org_membership", ImportState: true, ImportStateVerify: true, @@ -55,7 +56,7 @@ func testAccCheckGithubMembershipDestroy(s *terraform.State) error { } o, u := parseTwoPartID(rs.Primary.ID) - membership, resp, err := conn.Organizations.GetOrgMembership(u, o) + membership, resp, err := conn.Organizations.GetOrgMembership(context.TODO(), u, o) if err == nil { if membership != nil && @@ -85,7 +86,7 @@ func testAccCheckGithubMembershipExists(n string, membership *github.Membership) conn := testAccProvider.Meta().(*Organization).client o, u := parseTwoPartID(rs.Primary.ID) - githubMembership, _, err := conn.Organizations.GetOrgMembership(u, o) + githubMembership, _, err := conn.Organizations.GetOrgMembership(context.TODO(), u, o) if err != nil { return err } @@ -108,7 +109,7 @@ func testAccCheckGithubMembershipRoleState(n string, membership *github.Membersh conn := testAccProvider.Meta().(*Organization).client o, u := parseTwoPartID(rs.Primary.ID) - githubMembership, _, err := conn.Organizations.GetOrgMembership(u, o) + githubMembership, _, err := conn.Organizations.GetOrgMembership(context.TODO(), u, o) if err != nil { return err } diff --git a/builtin/providers/github/resource_github_organization_webhook.go b/builtin/providers/github/resource_github_organization_webhook.go new file mode 100644 index 0000000000..5eed3dd44f --- /dev/null +++ b/builtin/providers/github/resource_github_organization_webhook.go @@ -0,0 +1,137 @@ +package github + +import ( + "context" + "fmt" + "strconv" + + "github.com/google/go-github/github" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceGithubOrganizationWebhook() *schema.Resource { + + return &schema.Resource{ + Create: resourceGithubOrganizationWebhookCreate, + Read: resourceGithubOrganizationWebhookRead, + Update: resourceGithubOrganizationWebhookUpdate, + Delete: resourceGithubOrganizationWebhookDelete, + + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validateGithubOrganizationWebhookName, + }, + "events": &schema.Schema{ + Type: schema.TypeSet, + Required: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + }, + "configuration": { + Type: schema.TypeMap, + Optional: true, + }, + "url": { + Type: schema.TypeString, + Computed: true, + }, + "active": { + Type: schema.TypeBool, + Optional: true, + Default: true, + }, + }, + } +} + +func validateGithubOrganizationWebhookName(v interface{}, k string) (ws []string, errors []error) { + if v.(string) != "web" { + errors = append(errors, fmt.Errorf("Github: name can only be web")) + } + return +} + +func resourceGithubOrganizationWebhookObject(d *schema.ResourceData) *github.Hook { + url := d.Get("url").(string) + active := d.Get("active").(bool) + events := []string{} + eventSet := d.Get("events").(*schema.Set) + for _, v := range eventSet.List() { + events = append(events, v.(string)) + } + name := d.Get("name").(string) + + hook := &github.Hook{ + Name: &name, + URL: &url, + Events: events, + Active: &active, + Config: 
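
The `*github.Hook` construction here has to convert a `schema.TypeSet` into a plain `[]string`, since set elements come back from `List()` as `[]interface{}` and must be asserted element by element. That step in isolation (the helper name is ours):

package github

import "github.com/hashicorp/terraform/helper/schema"

// expandEventSet mirrors the TypeSet-to-slice conversion used when
// building the webhook object in this diff.
func expandEventSet(d *schema.ResourceData) []string {
	events := []string{}
	for _, v := range d.Get("events").(*schema.Set).List() {
		events = append(events, v.(string))
	}
	return events
}
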
d.Get("configuration").(map[string]interface{}), + } + + return hook +} + +func resourceGithubOrganizationWebhookCreate(d *schema.ResourceData, meta interface{}) error { + client := meta.(*Organization).client + hk := resourceGithubOrganizationWebhookObject(d) + + hook, _, err := client.Organizations.CreateHook(context.TODO(), meta.(*Organization).name, hk) + if err != nil { + return err + } + d.SetId(strconv.Itoa(*hook.ID)) + + return resourceGithubOrganizationWebhookRead(d, meta) +} + +func resourceGithubOrganizationWebhookRead(d *schema.ResourceData, meta interface{}) error { + client := meta.(*Organization).client + hookID, _ := strconv.Atoi(d.Id()) + + hook, resp, err := client.Organizations.GetHook(context.TODO(), meta.(*Organization).name, hookID) + if err != nil { + if resp.StatusCode == 404 { + d.SetId("") + return nil + } + return err + } + d.Set("name", hook.Name) + d.Set("url", hook.URL) + d.Set("active", hook.Active) + d.Set("events", hook.Events) + d.Set("configuration", hook.Config) + + return nil +} + +func resourceGithubOrganizationWebhookUpdate(d *schema.ResourceData, meta interface{}) error { + client := meta.(*Organization).client + hk := resourceGithubOrganizationWebhookObject(d) + hookID, err := strconv.Atoi(d.Id()) + if err != nil { + return err + } + + _, _, err = client.Organizations.EditHook(context.TODO(), meta.(*Organization).name, hookID, hk) + if err != nil { + return err + } + + return resourceGithubOrganizationWebhookRead(d, meta) +} + +func resourceGithubOrganizationWebhookDelete(d *schema.ResourceData, meta interface{}) error { + client := meta.(*Organization).client + hookID, err := strconv.Atoi(d.Id()) + if err != nil { + return err + } + + _, err = client.Organizations.DeleteHook(context.TODO(), meta.(*Organization).name, hookID) + return err +} diff --git a/builtin/providers/github/resource_github_organization_webhook_test.go b/builtin/providers/github/resource_github_organization_webhook_test.go new file mode 100644 index 0000000000..6f29dbc92b --- /dev/null +++ b/builtin/providers/github/resource_github_organization_webhook_test.go @@ -0,0 +1,166 @@ +package github + +import ( + "context" + "fmt" + "reflect" + "strconv" + "strings" + "testing" + + "github.com/google/go-github/github" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccGithubOrganizationWebhook_basic(t *testing.T) { + var hook github.Hook + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckGithubOrganizationWebhookDestroy, + Steps: []resource.TestStep{ + { + Config: testAccGithubOrganizationWebhookConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckGithubOrganizationWebhookExists("github_organization_webhook.foo", &hook), + testAccCheckGithubOrganizationWebhookAttributes(&hook, &testAccGithubOrganizationWebhookExpectedAttributes{ + Name: "web", + Events: []string{"pull_request"}, + Configuration: map[string]interface{}{ + "url": "https://google.de/webhook", + "content_type": "json", + "insecure_ssl": "1", + }, + Active: true, + }), + ), + }, + { + Config: testAccGithubOrganizationWebhookUpdateConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckGithubOrganizationWebhookExists("github_organization_webhook.foo", &hook), + testAccCheckGithubOrganizationWebhookAttributes(&hook, &testAccGithubOrganizationWebhookExpectedAttributes{ + Name: "web", + Events: []string{"issues"}, + Configuration: map[string]interface{}{ + 
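
Both webhook resources in this diff share the same drift-handling idiom in their Read functions: a 404 from the API clears the resource ID, which makes Terraform plan a re-create instead of erroring out. A condensed sketch, with a nil-response guard added as a defensive assumption (go-github returns a nil `resp` on transport errors, which the version above would dereference):

package github

import (
	"context"
	"strconv"

	"github.com/hashicorp/terraform/helper/schema"
)

// readHook sketches the 404-means-gone pattern from the Read functions
// above; on a 404 the ID is cleared so Terraform schedules a re-create.
func readHook(d *schema.ResourceData, meta interface{}) error {
	client := meta.(*Organization).client
	hookID, err := strconv.Atoi(d.Id())
	if err != nil {
		return err
	}
	hook, resp, err := client.Organizations.GetHook(context.TODO(), meta.(*Organization).name, hookID)
	if err != nil {
		if resp != nil && resp.StatusCode == 404 {
			d.SetId("") // gone upstream; plan a re-create
			return nil
		}
		return err
	}
	d.Set("active", hook.Active)
	return nil
}
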
"url": "https://google.de/webhooks", + "content_type": "form", + "insecure_ssl": "0", + }, + Active: false, + }), + ), + }, + }, + }) +} + +func testAccCheckGithubOrganizationWebhookExists(n string, hook *github.Hook) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not Found: %s", n) + } + + hookID, _ := strconv.Atoi(rs.Primary.ID) + if hookID == 0 { + return fmt.Errorf("No repository name is set") + } + + org := testAccProvider.Meta().(*Organization) + conn := org.client + getHook, _, err := conn.Organizations.GetHook(context.TODO(), org.name, hookID) + if err != nil { + return err + } + *hook = *getHook + return nil + } +} + +type testAccGithubOrganizationWebhookExpectedAttributes struct { + Name string + Events []string + Configuration map[string]interface{} + Active bool +} + +func testAccCheckGithubOrganizationWebhookAttributes(hook *github.Hook, want *testAccGithubOrganizationWebhookExpectedAttributes) resource.TestCheckFunc { + return func(s *terraform.State) error { + + if *hook.Name != want.Name { + return fmt.Errorf("got hook %q; want %q", *hook.Name, want.Name) + } + if *hook.Active != want.Active { + return fmt.Errorf("got hook %t; want %t", *hook.Active, want.Active) + } + if !strings.HasPrefix(*hook.URL, "https://") { + return fmt.Errorf("got http URL %q; want to start with 'https://'", *hook.URL) + } + if !reflect.DeepEqual(hook.Events, want.Events) { + return fmt.Errorf("got hook events %q; want %q", hook.Events, want.Events) + } + if !reflect.DeepEqual(hook.Config, want.Configuration) { + return fmt.Errorf("got hook configuration %q; want %q", hook.Config, want.Configuration) + } + + return nil + } +} + +func testAccCheckGithubOrganizationWebhookDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*Organization).client + orgName := testAccProvider.Meta().(*Organization).name + + for _, rs := range s.RootModule().Resources { + if rs.Type != "github_organization_webhook" { + continue + } + + id, err := strconv.Atoi(rs.Primary.ID) + if err != nil { + return err + } + + gotHook, resp, err := conn.Organizations.GetHook(context.TODO(), orgName, id) + if err == nil { + if gotHook != nil && *gotHook.ID == id { + return fmt.Errorf("Webhook still exists") + } + } + if resp.StatusCode != 404 { + return err + } + return nil + } + return nil +} + +const testAccGithubOrganizationWebhookConfig = ` +resource "github_organization_webhook" "foo" { + name = "web" + configuration { + url = "https://google.de/webhook" + content_type = "json" + insecure_ssl = true + } + + events = ["pull_request"] +} +` + +const testAccGithubOrganizationWebhookUpdateConfig = ` +resource "github_organization_webhook" "foo" { + name = "web" + configuration { + url = "https://google.de/webhooks" + content_type = "form" + insecure_ssl = false + } + active = false + + events = ["issues"] +} +` diff --git a/builtin/providers/github/resource_github_repository.go b/builtin/providers/github/resource_github_repository.go index 726fe17766..ca889bc4fc 100644 --- a/builtin/providers/github/resource_github_repository.go +++ b/builtin/providers/github/resource_github_repository.go @@ -1,6 +1,7 @@ package github import ( + "context" "log" "github.com/google/go-github/github" @@ -19,61 +20,61 @@ func resourceGithubRepository() *schema.Resource { }, Schema: map[string]*schema.Schema{ - "name": &schema.Schema{ + "name": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "description": &schema.Schema{ + 
"description": { Type: schema.TypeString, Optional: true, }, - "homepage_url": &schema.Schema{ + "homepage_url": { Type: schema.TypeString, Optional: true, }, - "private": &schema.Schema{ + "private": { Type: schema.TypeBool, Optional: true, }, - "has_issues": &schema.Schema{ + "has_issues": { Type: schema.TypeBool, Optional: true, }, - "has_wiki": &schema.Schema{ + "has_wiki": { Type: schema.TypeBool, Optional: true, }, - "has_downloads": &schema.Schema{ + "has_downloads": { Type: schema.TypeBool, Optional: true, }, - "auto_init": &schema.Schema{ + "auto_init": { Type: schema.TypeBool, Optional: true, }, - "full_name": &schema.Schema{ + "full_name": { Type: schema.TypeString, Computed: true, }, - "default_branch": &schema.Schema{ + "default_branch": { Type: schema.TypeString, Computed: true, }, - "ssh_clone_url": &schema.Schema{ + "ssh_clone_url": { Type: schema.TypeString, Computed: true, }, - "svn_url": &schema.Schema{ + "svn_url": { Type: schema.TypeString, Computed: true, }, - "git_clone_url": &schema.Schema{ + "git_clone_url": { Type: schema.TypeString, Computed: true, }, - "http_clone_url": &schema.Schema{ + "http_clone_url": { Type: schema.TypeString, Computed: true, }, @@ -110,7 +111,7 @@ func resourceGithubRepositoryCreate(d *schema.ResourceData, meta interface{}) er repoReq := resourceGithubRepositoryObject(d) log.Printf("[DEBUG] create github repository %s/%s", meta.(*Organization).name, *repoReq.Name) - repo, _, err := client.Repositories.Create(meta.(*Organization).name, repoReq) + repo, _, err := client.Repositories.Create(context.TODO(), meta.(*Organization).name, repoReq) if err != nil { return err } @@ -124,7 +125,7 @@ func resourceGithubRepositoryRead(d *schema.ResourceData, meta interface{}) erro repoName := d.Id() log.Printf("[DEBUG] read github repository %s/%s", meta.(*Organization).name, repoName) - repo, resp, err := client.Repositories.Get(meta.(*Organization).name, repoName) + repo, resp, err := client.Repositories.Get(context.TODO(), meta.(*Organization).name, repoName) if err != nil { if resp.StatusCode == 404 { log.Printf( @@ -158,7 +159,7 @@ func resourceGithubRepositoryUpdate(d *schema.ResourceData, meta interface{}) er repoReq := resourceGithubRepositoryObject(d) repoName := d.Id() log.Printf("[DEBUG] update github repository %s/%s", meta.(*Organization).name, repoName) - repo, _, err := client.Repositories.Edit(meta.(*Organization).name, repoName, repoReq) + repo, _, err := client.Repositories.Edit(context.TODO(), meta.(*Organization).name, repoName, repoReq) if err != nil { return err } @@ -171,6 +172,6 @@ func resourceGithubRepositoryDelete(d *schema.ResourceData, meta interface{}) er client := meta.(*Organization).client repoName := d.Id() log.Printf("[DEBUG] delete github repository %s/%s", meta.(*Organization).name, repoName) - _, err := client.Repositories.Delete(meta.(*Organization).name, repoName) + _, err := client.Repositories.Delete(context.TODO(), meta.(*Organization).name, repoName) return err } diff --git a/builtin/providers/github/resource_github_repository_collaborator.go b/builtin/providers/github/resource_github_repository_collaborator.go index cde09c6f81..84667c35b3 100644 --- a/builtin/providers/github/resource_github_repository_collaborator.go +++ b/builtin/providers/github/resource_github_repository_collaborator.go @@ -1,6 +1,8 @@ package github import ( + "context" + "github.com/google/go-github/github" "github.com/hashicorp/terraform/helper/schema" ) @@ -16,17 +18,17 @@ func resourceGithubRepositoryCollaborator() *schema.Resource { 
}, Schema: map[string]*schema.Schema{ - "username": &schema.Schema{ + "username": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "repository": &schema.Schema{ + "repository": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "permission": &schema.Schema{ + "permission": { Type: schema.TypeString, Optional: true, ForceNew: true, @@ -43,7 +45,7 @@ func resourceGithubRepositoryCollaboratorCreate(d *schema.ResourceData, meta int r := d.Get("repository").(string) p := d.Get("permission").(string) - _, err := client.Repositories.AddCollaborator(meta.(*Organization).name, r, u, + _, err := client.Repositories.AddCollaborator(context.TODO(), meta.(*Organization).name, r, u, &github.RepositoryAddCollaboratorOptions{Permission: p}) if err != nil { @@ -59,36 +61,52 @@ func resourceGithubRepositoryCollaboratorRead(d *schema.ResourceData, meta inter client := meta.(*Organization).client r, u := parseTwoPartID(d.Id()) - isCollaborator, _, err := client.Repositories.IsCollaborator(meta.(*Organization).name, r, u) + // First, check if the user has been invited but has not yet accepted + invitation, err := findRepoInvitation(client, meta.(*Organization).name, r, u) + if err != nil { + return err + } else if invitation != nil { + permName, err := getInvitationPermission(invitation) + if err != nil { + return err + } - if !isCollaborator || err != nil { - d.SetId("") + d.Set("repository", r) + d.Set("username", u) + d.Set("permission", permName) return nil } - collaborators, _, err := client.Repositories.ListCollaborators(meta.(*Organization).name, r, - &github.ListOptions{}) - - if err != nil { - return err - } - - for _, c := range collaborators { - if *c.Login == u { - permName, err := getRepoPermission(c.Permissions) - - if err != nil { - return err - } - - d.Set("repository", r) - d.Set("username", u) - d.Set("permission", permName) - - return nil + // Next, check if the user has accepted the invite and is a full collaborator + opt := &github.ListOptions{PerPage: maxPerPage} + for { + collaborators, resp, err := client.Repositories.ListCollaborators(context.TODO(), meta.(*Organization).name, r, opt) + if err != nil { + return err } + + for _, c := range collaborators { + if *c.Login == u { + permName, err := getRepoPermission(c.Permissions) + if err != nil { + return err + } + + d.Set("repository", r) + d.Set("username", u) + d.Set("permission", permName) + return nil + } + } + + if resp.NextPage == 0 { + break + } + opt.Page = resp.NextPage } + // The user is neither invited nor a collaborator + d.SetId("") return nil } @@ -97,7 +115,37 @@ func resourceGithubRepositoryCollaboratorDelete(d *schema.ResourceData, meta int u := d.Get("username").(string) r := d.Get("repository").(string) - _, err := client.Repositories.RemoveCollaborator(meta.(*Organization).name, r, u) + // Delete any pending invitations + invitation, err := findRepoInvitation(client, meta.(*Organization).name, r, u) + if err != nil { + return err + } else if invitation != nil { + _, err = client.Repositories.DeleteInvitation(context.TODO(), meta.(*Organization).name, r, *invitation.ID) + return err + } + _, err = client.Repositories.RemoveCollaborator(context.TODO(), meta.(*Organization).name, r, u) return err } + +func findRepoInvitation(client *github.Client, owner string, repo string, collaborator string) (*github.RepositoryInvitation, error) { + opt := &github.ListOptions{PerPage: maxPerPage} + for { + invitations, resp, err := client.Repositories.ListInvitations(context.TODO(), owner, repo, opt) + if 
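
`findRepoInvitation` above uses the standard go-github pagination idiom: request a page, then follow `resp.NextPage` until it reaches zero. The same loop in isolation, accumulating all invitations (the helper name and the page size of 100 are our assumptions; the diff uses a `maxPerPage` constant defined elsewhere in the package):

package github

import (
	"context"

	"github.com/google/go-github/github"
)

// listAllInvitations sketches the pagination loop from findRepoInvitation,
// collecting every page of results before returning.
func listAllInvitations(client *github.Client, owner, repo string) ([]*github.RepositoryInvitation, error) {
	var all []*github.RepositoryInvitation
	opt := &github.ListOptions{PerPage: 100}
	for {
		page, resp, err := client.Repositories.ListInvitations(context.TODO(), owner, repo, opt)
		if err != nil {
			return nil, err
		}
		all = append(all, page...)
		if resp.NextPage == 0 {
			break // last page reached
		}
		opt.Page = resp.NextPage
	}
	return all, nil
}
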
err != nil { + return nil, err + } + + for _, i := range invitations { + if *i.Invitee.Login == collaborator { + return i, nil + } + } + + if resp.NextPage == 0 { + break + } + opt.Page = resp.NextPage + } + return nil, nil +} diff --git a/builtin/providers/github/resource_github_repository_collaborator_test.go b/builtin/providers/github/resource_github_repository_collaborator_test.go index 18f5cb9efc..065cadeabd 100644 --- a/builtin/providers/github/resource_github_repository_collaborator_test.go +++ b/builtin/providers/github/resource_github_repository_collaborator_test.go @@ -1,10 +1,10 @@ package github import ( + "context" "fmt" "testing" - "github.com/google/go-github/github" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" ) @@ -17,7 +17,7 @@ func TestAccGithubRepositoryCollaborator_basic(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckGithubRepositoryCollaboratorDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccGithubRepositoryCollaboratorConfig, Check: resource.ComposeTestCheckFunc( testAccCheckGithubRepositoryCollaboratorExists("github_repository_collaborator.test_repo_collaborator"), @@ -34,10 +34,10 @@ func TestAccGithubRepositoryCollaborator_importBasic(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckGithubRepositoryCollaboratorDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccGithubRepositoryCollaboratorConfig, }, - resource.TestStep{ + { ResourceName: "github_repository_collaborator.test_repo_collaborator", ImportState: true, ImportStateVerify: true, @@ -56,7 +56,7 @@ func testAccCheckGithubRepositoryCollaboratorDestroy(s *terraform.State) error { o := testAccProvider.Meta().(*Organization).name r, u := parseTwoPartID(rs.Primary.ID) - isCollaborator, _, err := conn.Repositories.IsCollaborator(o, r, u) + isCollaborator, _, err := conn.Repositories.IsCollaborator(context.TODO(), o, r, u) if err != nil { return err @@ -87,14 +87,21 @@ func testAccCheckGithubRepositoryCollaboratorExists(n string) resource.TestCheck o := testAccProvider.Meta().(*Organization).name r, u := parseTwoPartID(rs.Primary.ID) - isCollaborator, _, err := conn.Repositories.IsCollaborator(o, r, u) - + invitations, _, err := conn.Repositories.ListInvitations(context.TODO(), o, r, nil) if err != nil { return err } - if !isCollaborator { - return fmt.Errorf("Repository collaborator does not exist") + hasInvitation := false + for _, i := range invitations { + if *i.Invitee.Login == u { + hasInvitation = true + break + } + } + + if !hasInvitation { + return fmt.Errorf("Repository collaboration invitation does not exist") } return nil @@ -116,15 +123,14 @@ func testAccCheckGithubRepositoryCollaboratorPermission(n string) resource.TestC o := testAccProvider.Meta().(*Organization).name r, u := parseTwoPartID(rs.Primary.ID) - collaborators, _, err := conn.Repositories.ListCollaborators(o, r, &github.ListOptions{}) - + invitations, _, err := conn.Repositories.ListInvitations(context.TODO(), o, r, nil) if err != nil { return err } - for _, c := range collaborators { - if *c.Login == u { - permName, err := getRepoPermission(c.Permissions) + for _, i := range invitations { + if *i.Invitee.Login == u { + permName, err := getInvitationPermission(i) if err != nil { return err diff --git a/builtin/providers/github/resource_github_repository_test.go b/builtin/providers/github/resource_github_repository_test.go index 685337195f..03101c89f0 100644 --- 
a/builtin/providers/github/resource_github_repository_test.go +++ b/builtin/providers/github/resource_github_repository_test.go @@ -1,30 +1,35 @@ package github import ( + "context" "fmt" "strings" "testing" "github.com/google/go-github/github" + "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" ) func TestAccGithubRepository_basic(t *testing.T) { var repo github.Repository + randString := acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum) + name := fmt.Sprintf("tf-acc-test-%s", randString) + description := fmt.Sprintf("Terraform acceptance tests %s", randString) resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckGithubRepositoryDestroy, Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccGithubRepositoryConfig, + { + Config: testAccGithubRepositoryConfig(randString), Check: resource.ComposeTestCheckFunc( testAccCheckGithubRepositoryExists("github_repository.foo", &repo), testAccCheckGithubRepositoryAttributes(&repo, &testAccGithubRepositoryExpectedAttributes{ - Name: "foo", - Description: "Terraform acceptance tests", + Name: name, + Description: description, Homepage: "http://example.com/", HasIssues: true, HasWiki: true, @@ -33,13 +38,13 @@ func TestAccGithubRepository_basic(t *testing.T) { }), ), }, - resource.TestStep{ - Config: testAccGithubRepositoryUpdateConfig, + { + Config: testAccGithubRepositoryUpdateConfig(randString), Check: resource.ComposeTestCheckFunc( testAccCheckGithubRepositoryExists("github_repository.foo", &repo), testAccCheckGithubRepositoryAttributes(&repo, &testAccGithubRepositoryExpectedAttributes{ - Name: "foo", - Description: "Terraform acceptance tests!", + Name: name, + Description: "Updated " + description, Homepage: "http://example.com/", DefaultBranch: "master", }), @@ -50,15 +55,17 @@ func TestAccGithubRepository_basic(t *testing.T) { } func TestAccGithubRepository_importBasic(t *testing.T) { + randString := acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum) + resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckGithubRepositoryDestroy, Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccGithubRepositoryConfig, + { + Config: testAccGithubRepositoryConfig(randString), }, - resource.TestStep{ + { ResourceName: "github_repository.foo", ImportState: true, ImportStateVerify: true, @@ -81,7 +88,7 @@ func testAccCheckGithubRepositoryExists(n string, repo *github.Repository) resou org := testAccProvider.Meta().(*Organization) conn := org.client - gotRepo, _, err := conn.Repositories.Get(org.name, repoName) + gotRepo, _, err := conn.Repositories.Get(context.TODO(), org.name, repoName) if err != nil { return err } @@ -174,10 +181,10 @@ func testAccCheckGithubRepositoryDestroy(s *terraform.State) error { continue } - gotRepo, resp, err := conn.Repositories.Get(orgName, rs.Primary.ID) + gotRepo, resp, err := conn.Repositories.Get(context.TODO(), orgName, rs.Primary.ID) if err == nil { if gotRepo != nil && *gotRepo.Name == rs.Primary.ID { - return fmt.Errorf("Repository still exists") + return fmt.Errorf("Repository %s/%s still exists", orgName, *gotRepo.Name) } } if resp.StatusCode != 404 { @@ -188,10 +195,11 @@ func testAccCheckGithubRepositoryDestroy(s *terraform.State) error { return nil } -const testAccGithubRepositoryConfig = ` +func testAccGithubRepositoryConfig(randString 
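
The repository test fixtures above switch from a fixed name (`foo`) to generated ones so that concurrent or aborted acceptance runs cannot collide on the same GitHub repository. The naming scheme in isolation (the helper name is ours):

package github

import (
	"fmt"

	"github.com/hashicorp/terraform/helper/acctest"
)

// randomRepoName reproduces the tf-acc-test-<random> convention the
// fixtures above adopt, using the same acctest helper.
func randomRepoName() string {
	rand := acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)
	return fmt.Sprintf("tf-acc-test-%s", rand)
}
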
string) string { + return fmt.Sprintf(` resource "github_repository" "foo" { - name = "foo" - description = "Terraform acceptance tests" + name = "tf-acc-test-%s" + description = "Terraform acceptance tests %s" homepage_url = "http://example.com/" # So that acceptance tests can be run in a github organization @@ -202,12 +210,14 @@ resource "github_repository" "foo" { has_wiki = true has_downloads = true } -` +`, randString, randString) +} -const testAccGithubRepositoryUpdateConfig = ` +func testAccGithubRepositoryUpdateConfig(randString string) string { + return fmt.Sprintf(` resource "github_repository" "foo" { - name = "foo" - description = "Terraform acceptance tests!" + name = "tf-acc-test-%s" + description = "Updated Terraform acceptance tests %s" homepage_url = "http://example.com/" # So that acceptance tests can be run in a github organization @@ -218,4 +228,5 @@ resource "github_repository" "foo" { has_wiki = false has_downloads = false } -` +`, randString, randString) +} diff --git a/builtin/providers/github/resource_github_repository_webhook.go b/builtin/providers/github/resource_github_repository_webhook.go new file mode 100644 index 0000000000..503e61c95c --- /dev/null +++ b/builtin/providers/github/resource_github_repository_webhook.go @@ -0,0 +1,132 @@ +package github + +import ( + "context" + "strconv" + + "github.com/google/go-github/github" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceGithubRepositoryWebhook() *schema.Resource { + return &schema.Resource{ + Create: resourceGithubRepositoryWebhookCreate, + Read: resourceGithubRepositoryWebhookRead, + Update: resourceGithubRepositoryWebhookUpdate, + Delete: resourceGithubRepositoryWebhookDelete, + + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "repository": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "events": &schema.Schema{ + Type: schema.TypeSet, + Required: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + }, + "configuration": { + Type: schema.TypeMap, + Optional: true, + }, + "url": { + Type: schema.TypeString, + Computed: true, + }, + "active": { + Type: schema.TypeBool, + Optional: true, + Default: true, + }, + }, + } +} + +func resourceGithubRepositoryWebhookObject(d *schema.ResourceData) *github.Hook { + url := d.Get("url").(string) + active := d.Get("active").(bool) + events := []string{} + eventSet := d.Get("events").(*schema.Set) + for _, v := range eventSet.List() { + events = append(events, v.(string)) + } + name := d.Get("name").(string) + + hook := &github.Hook{ + Name: &name, + URL: &url, + Events: events, + Active: &active, + Config: d.Get("configuration").(map[string]interface{}), + } + + return hook +} + +func resourceGithubRepositoryWebhookCreate(d *schema.ResourceData, meta interface{}) error { + client := meta.(*Organization).client + hk := resourceGithubRepositoryWebhookObject(d) + + hook, _, err := client.Repositories.CreateHook(context.TODO(), meta.(*Organization).name, d.Get("repository").(string), hk) + if err != nil { + return err + } + d.SetId(strconv.Itoa(*hook.ID)) + + return resourceGithubRepositoryWebhookRead(d, meta) +} + +func resourceGithubRepositoryWebhookRead(d *schema.ResourceData, meta interface{}) error { + client := meta.(*Organization).client + hookID, _ := strconv.Atoi(d.Id()) + + hook, resp, err := client.Repositories.GetHook(context.TODO(), meta.(*Organization).name, d.Get("repository").(string), hookID) + if err != nil { + 
if resp.StatusCode == 404 { + d.SetId("") + return nil + } + return err + } + d.Set("name", hook.Name) + d.Set("url", hook.URL) + d.Set("active", hook.Active) + d.Set("events", hook.Events) + d.Set("configuration", hook.Config) + + return nil +} + +func resourceGithubRepositoryWebhookUpdate(d *schema.ResourceData, meta interface{}) error { + client := meta.(*Organization).client + hk := resourceGithubRepositoryWebhookObject(d) + hookID, err := strconv.Atoi(d.Id()) + if err != nil { + return err + } + + _, _, err = client.Repositories.EditHook(context.TODO(), meta.(*Organization).name, d.Get("repository").(string), hookID, hk) + if err != nil { + return err + } + + return resourceGithubRepositoryWebhookRead(d, meta) +} + +func resourceGithubRepositoryWebhookDelete(d *schema.ResourceData, meta interface{}) error { + client := meta.(*Organization).client + hookID, err := strconv.Atoi(d.Id()) + if err != nil { + return err + } + + _, err = client.Repositories.DeleteHook(context.TODO(), meta.(*Organization).name, d.Get("repository").(string), hookID) + return err +} diff --git a/builtin/providers/github/resource_github_repository_webhook_test.go b/builtin/providers/github/resource_github_repository_webhook_test.go new file mode 100644 index 0000000000..189cae5c3e --- /dev/null +++ b/builtin/providers/github/resource_github_repository_webhook_test.go @@ -0,0 +1,206 @@ +package github + +import ( + "context" + "fmt" + "reflect" + "strconv" + "strings" + "testing" + + "github.com/google/go-github/github" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccGithubRepositoryWebhook_basic(t *testing.T) { + randString := acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum) + var hook github.Hook + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckGithubRepositoryWebhookDestroy, + Steps: []resource.TestStep{ + { + Config: testAccGithubRepositoryWebhookConfig(randString), + Check: resource.ComposeTestCheckFunc( + testAccCheckGithubRepositoryWebhookExists("github_repository_webhook.foo", fmt.Sprintf("foo-%s", randString), &hook), + testAccCheckGithubRepositoryWebhookAttributes(&hook, &testAccGithubRepositoryWebhookExpectedAttributes{ + Name: "web", + Events: []string{"pull_request"}, + Configuration: map[string]interface{}{ + "url": "https://google.de/webhook", + "content_type": "json", + "insecure_ssl": "1", + }, + Active: true, + }), + ), + }, + { + Config: testAccGithubRepositoryWebhookUpdateConfig(randString), + Check: resource.ComposeTestCheckFunc( + testAccCheckGithubRepositoryWebhookExists("github_repository_webhook.foo", fmt.Sprintf("foo-%s", randString), &hook), + testAccCheckGithubRepositoryWebhookAttributes(&hook, &testAccGithubRepositoryWebhookExpectedAttributes{ + Name: "web", + Events: []string{"issues"}, + Configuration: map[string]interface{}{ + "url": "https://google.de/webhooks", + "content_type": "form", + "insecure_ssl": "0", + }, + Active: false, + }), + ), + }, + }, + }) +} + +func testAccCheckGithubRepositoryWebhookExists(n string, repoName string, hook *github.Hook) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not Found: %s", n) + } + + hookID, _ := strconv.Atoi(rs.Primary.ID) + if hookID == 0 { + return fmt.Errorf("No repository name is set") + } + + org := 
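
Hook IDs are integers in the GitHub API, but Terraform state IDs are strings, hence the `strconv` round-trips in the CRUD functions above. Note that Read discards the `Atoi` error while Update and Delete check it; a sketch of doing it consistently (the helper name is ours):

package github

import (
	"fmt"
	"strconv"
)

// hookIDFromState converts a state ID back to the API's integer hook ID,
// surfacing corrupted state instead of silently reading hook 0.
func hookIDFromState(id string) (int, error) {
	n, err := strconv.Atoi(id)
	if err != nil {
		return 0, fmt.Errorf("invalid hook ID %q in state: %s", id, err)
	}
	return n, nil
}
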
testAccProvider.Meta().(*Organization) + conn := org.client + getHook, _, err := conn.Repositories.GetHook(context.TODO(), org.name, repoName, hookID) + if err != nil { + return err + } + *hook = *getHook + return nil + } +} + +type testAccGithubRepositoryWebhookExpectedAttributes struct { + Name string + Events []string + Configuration map[string]interface{} + Active bool +} + +func testAccCheckGithubRepositoryWebhookAttributes(hook *github.Hook, want *testAccGithubRepositoryWebhookExpectedAttributes) resource.TestCheckFunc { + return func(s *terraform.State) error { + + if *hook.Name != want.Name { + return fmt.Errorf("got hook %q; want %q", *hook.Name, want.Name) + } + if *hook.Active != want.Active { + return fmt.Errorf("got hook %t; want %t", *hook.Active, want.Active) + } + if !strings.HasPrefix(*hook.URL, "https://") { + return fmt.Errorf("got http URL %q; want to start with 'https://'", *hook.URL) + } + if !reflect.DeepEqual(hook.Events, want.Events) { + return fmt.Errorf("got hook events %q; want %q", hook.Events, want.Events) + } + if !reflect.DeepEqual(hook.Config, want.Configuration) { + return fmt.Errorf("got hook configuration %q; want %q", hook.Config, want.Configuration) + } + + return nil + } +} + +func testAccCheckGithubRepositoryWebhookDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*Organization).client + orgName := testAccProvider.Meta().(*Organization).name + + for _, rs := range s.RootModule().Resources { + if rs.Type != "github_repository_webhook" { + continue + } + + id, err := strconv.Atoi(rs.Primary.ID) + if err != nil { + return err + } + + gotHook, resp, err := conn.Repositories.GetHook(context.TODO(), orgName, rs.Primary.Attributes["repository"], id) + if err == nil { + if gotHook != nil && *gotHook.ID == id { + return fmt.Errorf("Webhook still exists") + } + } + if resp.StatusCode != 404 { + return err + } + return nil + } + return nil +} + +func testAccGithubRepositoryWebhookConfig(randString string) string { + return fmt.Sprintf(` + resource "github_repository" "foo" { + name = "foo-%s" + description = "Terraform acceptance tests" + homepage_url = "http://example.com/" + + # So that acceptance tests can be run in a github organization + # with no billing + private = false + + has_issues = true + has_wiki = true + has_downloads = true + } + + resource "github_repository_webhook" "foo" { + depends_on = ["github_repository.foo"] + repository = "foo-%s" + + name = "web" + configuration { + url = "https://google.de/webhook" + content_type = "json" + insecure_ssl = true + } + + events = ["pull_request"] + } + `, randString, randString) +} + +func testAccGithubRepositoryWebhookUpdateConfig(randString string) string { + return fmt.Sprintf(` +resource "github_repository" "foo" { + name = "foo-%s" + description = "Terraform acceptance tests" + homepage_url = "http://example.com/" + + # So that acceptance tests can be run in a github organization + # with no billing + private = false + + has_issues = true + has_wiki = true + has_downloads = true +} + +resource "github_repository_webhook" "foo" { + depends_on = ["github_repository.foo"] + repository = "foo-%s" + + name = "web" + configuration { + url = "https://google.de/webhooks" + content_type = "form" + insecure_ssl = false + } + active = false + + events = ["issues"] +} +`, randString, randString) +} diff --git a/builtin/providers/github/resource_github_team.go b/builtin/providers/github/resource_github_team.go index dc6ad2c5ea..71c01d266b 100644 --- 
a/builtin/providers/github/resource_github_team.go +++ b/builtin/providers/github/resource_github_team.go @@ -1,6 +1,8 @@ package github import ( + "context" + "github.com/google/go-github/github" "github.com/hashicorp/terraform/helper/schema" ) @@ -17,15 +19,15 @@ func resourceGithubTeam() *schema.Resource { }, Schema: map[string]*schema.Schema{ - "name": &schema.Schema{ + "name": { Type: schema.TypeString, Required: true, }, - "description": &schema.Schema{ + "description": { Type: schema.TypeString, Optional: true, }, - "privacy": &schema.Schema{ + "privacy": { Type: schema.TypeString, Optional: true, Default: "secret", @@ -40,7 +42,7 @@ func resourceGithubTeamCreate(d *schema.ResourceData, meta interface{}) error { n := d.Get("name").(string) desc := d.Get("description").(string) p := d.Get("privacy").(string) - githubTeam, _, err := client.Organizations.CreateTeam(meta.(*Organization).name, &github.Team{ + githubTeam, _, err := client.Organizations.CreateTeam(context.TODO(), meta.(*Organization).name, &github.Team{ Name: &n, Description: &desc, Privacy: &p, @@ -82,7 +84,7 @@ func resourceGithubTeamUpdate(d *schema.ResourceData, meta interface{}) error { team.Name = &name team.Privacy = &privacy - team, _, err = client.Organizations.EditTeam(*team.ID, team) + team, _, err = client.Organizations.EditTeam(context.TODO(), *team.ID, team) if err != nil { return err } @@ -93,12 +95,12 @@ func resourceGithubTeamUpdate(d *schema.ResourceData, meta interface{}) error { func resourceGithubTeamDelete(d *schema.ResourceData, meta interface{}) error { client := meta.(*Organization).client id := toGithubID(d.Id()) - _, err := client.Organizations.DeleteTeam(id) + _, err := client.Organizations.DeleteTeam(context.TODO(), id) return err } func getGithubTeam(d *schema.ResourceData, github *github.Client) (*github.Team, error) { id := toGithubID(d.Id()) - team, _, err := github.Organizations.GetTeam(id) + team, _, err := github.Organizations.GetTeam(context.TODO(), id) return team, err } diff --git a/builtin/providers/github/resource_github_team_membership.go b/builtin/providers/github/resource_github_team_membership.go index e6f38b6759..ca54f1e95e 100644 --- a/builtin/providers/github/resource_github_team_membership.go +++ b/builtin/providers/github/resource_github_team_membership.go @@ -1,6 +1,7 @@ package github import ( + "context" "strings" "github.com/google/go-github/github" @@ -19,17 +20,17 @@ func resourceGithubTeamMembership() *schema.Resource { }, Schema: map[string]*schema.Schema{ - "team_id": &schema.Schema{ + "team_id": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "username": &schema.Schema{ + "username": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "role": &schema.Schema{ + "role": { Type: schema.TypeString, Optional: true, ForceNew: true, @@ -46,7 +47,7 @@ func resourceGithubTeamMembershipCreate(d *schema.ResourceData, meta interface{} n := d.Get("username").(string) r := d.Get("role").(string) - _, _, err := client.Organizations.AddTeamMembership(toGithubID(t), n, + _, _, err := client.Organizations.AddTeamMembership(context.TODO(), toGithubID(t), n, &github.OrganizationAddTeamMembershipOptions{Role: r}) if err != nil { @@ -62,7 +63,7 @@ func resourceGithubTeamMembershipRead(d *schema.ResourceData, meta interface{}) client := meta.(*Organization).client t, n := parseTwoPartID(d.Id()) - membership, _, err := client.Organizations.GetTeamMembership(toGithubID(t), n) + membership, _, err := client.Organizations.GetTeamMembership(context.TODO(), 
toGithubID(t), n) if err != nil { d.SetId("") @@ -81,7 +82,7 @@ func resourceGithubTeamMembershipDelete(d *schema.ResourceData, meta interface{} t := d.Get("team_id").(string) n := d.Get("username").(string) - _, err := client.Organizations.RemoveTeamMembership(toGithubID(t), n) + _, err := client.Organizations.RemoveTeamMembership(context.TODO(), toGithubID(t), n) return err } diff --git a/builtin/providers/github/resource_github_team_membership_test.go b/builtin/providers/github/resource_github_team_membership_test.go index 9cf2dd788c..d344b0598c 100644 --- a/builtin/providers/github/resource_github_team_membership_test.go +++ b/builtin/providers/github/resource_github_team_membership_test.go @@ -1,16 +1,19 @@ package github import ( + "context" "fmt" "testing" "github.com/google/go-github/github" + "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" ) func TestAccGithubTeamMembership_basic(t *testing.T) { var membership github.Membership + randString := acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum) testAccGithubTeamMembershipUpdateConfig := fmt.Sprintf(` resource "github_membership" "test_org_membership" { @@ -19,7 +22,7 @@ func TestAccGithubTeamMembership_basic(t *testing.T) { } resource "github_team" "test_team" { - name = "foo" + name = "tf-acc-test-team-membership-%s" description = "Terraform acc test group" } @@ -28,21 +31,21 @@ func TestAccGithubTeamMembership_basic(t *testing.T) { username = "%s" role = "maintainer" } - `, testUser, testUser) + `, testUser, randString, testUser) resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckGithubTeamMembershipDestroy, Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccGithubTeamMembershipConfig, + { + Config: testAccGithubTeamMembershipConfig(randString, testUser), Check: resource.ComposeTestCheckFunc( testAccCheckGithubTeamMembershipExists("github_team_membership.test_team_membership", &membership), testAccCheckGithubTeamMembershipRoleState("github_team_membership.test_team_membership", "member", &membership), ), }, - resource.TestStep{ + { Config: testAccGithubTeamMembershipUpdateConfig, Check: resource.ComposeTestCheckFunc( testAccCheckGithubTeamMembershipExists("github_team_membership.test_team_membership", &membership), @@ -54,15 +57,17 @@ func TestAccGithubTeamMembership_basic(t *testing.T) { } func TestAccGithubTeamMembership_importBasic(t *testing.T) { + randString := acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum) + resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckGithubTeamMembershipDestroy, Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccGithubTeamMembershipConfig, + { + Config: testAccGithubTeamMembershipConfig(randString, testUser), }, - resource.TestStep{ + { ResourceName: "github_team_membership.test_team_membership", ImportState: true, ImportStateVerify: true, @@ -80,7 +85,7 @@ func testAccCheckGithubTeamMembershipDestroy(s *terraform.State) error { } t, u := parseTwoPartID(rs.Primary.ID) - membership, resp, err := conn.Organizations.GetTeamMembership(toGithubID(t), u) + membership, resp, err := conn.Organizations.GetTeamMembership(context.TODO(), toGithubID(t), u) if err == nil { if membership != nil { return fmt.Errorf("Team membership still exists") @@ -108,7 +113,7 @@ func testAccCheckGithubTeamMembershipExists(n 
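
`parseTwoPartID`, used throughout these tests, reflects the provider's convention of packing two identifiers (team and username, repository and collaborator) into a single state ID. A sketch of the helper pair, assuming the provider's colon separator:

package github

import (
	"fmt"
	"strings"
)

// buildTwoPartID and splitTwoPartID sketch the composite-ID convention
// behind parseTwoPartID; the colon separator is an assumption here.
func buildTwoPartID(a, b string) string {
	return fmt.Sprintf("%s:%s", a, b)
}

func splitTwoPartID(id string) (string, string) {
	parts := strings.SplitN(id, ":", 2)
	if len(parts) != 2 {
		return id, "" // malformed ID; caller should treat as missing
	}
	return parts[0], parts[1]
}
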
string, membership *github.Members conn := testAccProvider.Meta().(*Organization).client t, u := parseTwoPartID(rs.Primary.ID) - teamMembership, _, err := conn.Organizations.GetTeamMembership(toGithubID(t), u) + teamMembership, _, err := conn.Organizations.GetTeamMembership(context.TODO(), toGithubID(t), u) if err != nil { return err @@ -132,7 +137,7 @@ func testAccCheckGithubTeamMembershipRoleState(n, expected string, membership *g conn := testAccProvider.Meta().(*Organization).client t, u := parseTwoPartID(rs.Primary.ID) - teamMembership, _, err := conn.Organizations.GetTeamMembership(toGithubID(t), u) + teamMembership, _, err := conn.Organizations.GetTeamMembership(context.TODO(), toGithubID(t), u) if err != nil { return err } @@ -151,14 +156,15 @@ func testAccCheckGithubTeamMembershipRoleState(n, expected string, membership *g } } -var testAccGithubTeamMembershipConfig string = fmt.Sprintf(` +func testAccGithubTeamMembershipConfig(randString, username string) string { + return fmt.Sprintf(` resource "github_membership" "test_org_membership" { username = "%s" role = "member" } resource "github_team" "test_team" { - name = "foo" + name = "tf-acc-test-team-membership-%s" description = "Terraform acc test group" } @@ -167,4 +173,5 @@ var testAccGithubTeamMembershipConfig string = fmt.Sprintf(` username = "%s" role = "member" } -`, testUser, testUser) +`, username, randString, username) +} diff --git a/builtin/providers/github/resource_github_team_repository.go b/builtin/providers/github/resource_github_team_repository.go index fa8b70e75a..7a13cef1be 100644 --- a/builtin/providers/github/resource_github_team_repository.go +++ b/builtin/providers/github/resource_github_team_repository.go @@ -1,6 +1,8 @@ package github import ( + "context" + "github.com/google/go-github/github" "github.com/hashicorp/terraform/helper/schema" ) @@ -16,17 +18,17 @@ func resourceGithubTeamRepository() *schema.Resource { }, Schema: map[string]*schema.Schema{ - "team_id": &schema.Schema{ + "team_id": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "repository": &schema.Schema{ + "repository": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "permission": &schema.Schema{ + "permission": { Type: schema.TypeString, Optional: true, Default: "pull", @@ -42,7 +44,7 @@ func resourceGithubTeamRepositoryCreate(d *schema.ResourceData, meta interface{} r := d.Get("repository").(string) p := d.Get("permission").(string) - _, err := client.Organizations.AddTeamRepo(toGithubID(t), meta.(*Organization).name, r, + _, err := client.Organizations.AddTeamRepo(context.TODO(), toGithubID(t), meta.(*Organization).name, r, &github.OrganizationAddTeamRepoOptions{Permission: p}) if err != nil { @@ -58,7 +60,7 @@ func resourceGithubTeamRepositoryRead(d *schema.ResourceData, meta interface{}) client := meta.(*Organization).client t, r := parseTwoPartID(d.Id()) - repo, _, repoErr := client.Organizations.IsTeamRepo(toGithubID(t), meta.(*Organization).name, r) + repo, _, repoErr := client.Organizations.IsTeamRepo(context.TODO(), toGithubID(t), meta.(*Organization).name, r) if repoErr != nil { d.SetId("") @@ -88,7 +90,7 @@ func resourceGithubTeamRepositoryUpdate(d *schema.ResourceData, meta interface{} p := d.Get("permission").(string) // the go-github library's AddTeamRepo method uses the add/update endpoint from Github API - _, err := client.Organizations.AddTeamRepo(toGithubID(t), meta.(*Organization).name, r, + _, err := client.Organizations.AddTeamRepo(context.TODO(), toGithubID(t), meta.(*Organization).name, 
r, &github.OrganizationAddTeamRepoOptions{Permission: p}) if err != nil { @@ -104,7 +106,7 @@ func resourceGithubTeamRepositoryDelete(d *schema.ResourceData, meta interface{} t := d.Get("team_id").(string) r := d.Get("repository").(string) - _, err := client.Organizations.RemoveTeamRepo(toGithubID(t), meta.(*Organization).name, r) + _, err := client.Organizations.RemoveTeamRepo(context.TODO(), toGithubID(t), meta.(*Organization).name, r) return err } diff --git a/builtin/providers/github/resource_github_team_repository_test.go b/builtin/providers/github/resource_github_team_repository_test.go index 3d764305bb..9f1007a3d4 100644 --- a/builtin/providers/github/resource_github_team_repository_test.go +++ b/builtin/providers/github/resource_github_team_repository_test.go @@ -1,31 +1,34 @@ package github import ( + "context" "fmt" "testing" "github.com/google/go-github/github" + "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" ) func TestAccGithubTeamRepository_basic(t *testing.T) { var repository github.Repository + randString := acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum) resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckGithubTeamRepositoryDestroy, Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccGithubTeamRepositoryConfig, + { + Config: testAccGithubTeamRepositoryConfig(randString, testRepo), Check: resource.ComposeTestCheckFunc( testAccCheckGithubTeamRepositoryExists("github_team_repository.test_team_test_repo", &repository), testAccCheckGithubTeamRepositoryRoleState("pull", &repository), ), }, - resource.TestStep{ - Config: testAccGithubTeamRepositoryUpdateConfig, + { + Config: testAccGithubTeamRepositoryUpdateConfig(randString, testRepo), Check: resource.ComposeTestCheckFunc( testAccCheckGithubTeamRepositoryExists("github_team_repository.test_team_test_repo", &repository), testAccCheckGithubTeamRepositoryRoleState("push", &repository), @@ -36,15 +39,17 @@ func TestAccGithubTeamRepository_basic(t *testing.T) { } func TestAccGithubTeamRepository_importBasic(t *testing.T) { + randString := acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum) + resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckGithubTeamRepositoryDestroy, Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccGithubTeamRepositoryConfig, + { + Config: testAccGithubTeamRepositoryConfig(randString, testRepo), }, - resource.TestStep{ + { ResourceName: "github_team_repository.test_team_test_repo", ImportState: true, ImportStateVerify: true, @@ -108,7 +113,8 @@ func testAccCheckGithubTeamRepositoryExists(n string, repository *github.Reposit conn := testAccProvider.Meta().(*Organization).client t, r := parseTwoPartID(rs.Primary.ID) - repo, _, err := conn.Organizations.IsTeamRepo(toGithubID(t), + repo, _, err := conn.Organizations.IsTeamRepo(context.TODO(), + toGithubID(t), testAccProvider.Meta().(*Organization).name, r) if err != nil { @@ -128,7 +134,8 @@ func testAccCheckGithubTeamRepositoryDestroy(s *terraform.State) error { } t, r := parseTwoPartID(rs.Primary.ID) - repo, resp, err := conn.Organizations.IsTeamRepo(toGithubID(t), + repo, resp, err := conn.Organizations.IsTeamRepo(context.TODO(), + toGithubID(t), testAccProvider.Meta().(*Organization).name, r) if err == nil { @@ -145,9 +152,10 @@ func 
testAccCheckGithubTeamRepositoryDestroy(s *terraform.State) error { return nil } -var testAccGithubTeamRepositoryConfig string = fmt.Sprintf(` +func testAccGithubTeamRepositoryConfig(randString, repoName string) string { + return fmt.Sprintf(` resource "github_team" "test_team" { - name = "foo" + name = "tf-acc-test-team-repo-%s" description = "Terraform acc test group" } @@ -156,11 +164,13 @@ resource "github_team_repository" "test_team_test_repo" { repository = "%s" permission = "pull" } -`, testRepo) +`, randString, repoName) +} -var testAccGithubTeamRepositoryUpdateConfig string = fmt.Sprintf(` +func testAccGithubTeamRepositoryUpdateConfig(randString, repoName string) string { + return fmt.Sprintf(` resource "github_team" "test_team" { - name = "foo" + name = "tf-acc-test-team-repo-%s" description = "Terraform acc test group" } @@ -169,4 +179,5 @@ resource "github_team_repository" "test_team_test_repo" { repository = "%s" permission = "push" } -`, testRepo) +`, randString, repoName) +} diff --git a/builtin/providers/github/resource_github_team_test.go b/builtin/providers/github/resource_github_team_test.go index 1077e96559..b597031895 100644 --- a/builtin/providers/github/resource_github_team_test.go +++ b/builtin/providers/github/resource_github_team_test.go @@ -1,34 +1,39 @@ package github import ( + "context" "fmt" "testing" "github.com/google/go-github/github" + "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" ) func TestAccGithubTeam_basic(t *testing.T) { var team github.Team + randString := acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum) + name := fmt.Sprintf("tf-acc-test-%s", randString) + updatedName := fmt.Sprintf("tf-acc-test-updated-%s", randString) resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckGithubTeamDestroy, Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccGithubTeamConfig, + { + Config: testAccGithubTeamConfig(randString), Check: resource.ComposeTestCheckFunc( testAccCheckGithubTeamExists("github_team.foo", &team), - testAccCheckGithubTeamAttributes(&team, "foo", "Terraform acc test group"), + testAccCheckGithubTeamAttributes(&team, name, "Terraform acc test group"), ), }, - resource.TestStep{ - Config: testAccGithubTeamUpdateConfig, + { + Config: testAccGithubTeamUpdateConfig(randString), Check: resource.ComposeTestCheckFunc( testAccCheckGithubTeamExists("github_team.foo", &team), - testAccCheckGithubTeamAttributes(&team, "foo2", "Terraform acc test group - updated"), + testAccCheckGithubTeamAttributes(&team, updatedName, "Terraform acc test group - updated"), ), }, }, @@ -36,15 +41,17 @@ func TestAccGithubTeam_basic(t *testing.T) { } func TestAccGithubTeam_importBasic(t *testing.T) { + randString := acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum) + resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckGithubTeamDestroy, Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccGithubTeamConfig, + { + Config: testAccGithubTeamConfig(randString), }, - resource.TestStep{ + { ResourceName: "github_team.foo", ImportState: true, ImportStateVerify: true, @@ -65,7 +72,7 @@ func testAccCheckGithubTeamExists(n string, team *github.Team) resource.TestChec } conn := testAccProvider.Meta().(*Organization).client - githubTeam, _, err := 
conn.Organizations.GetTeam(toGithubID(rs.Primary.ID)) + githubTeam, _, err := conn.Organizations.GetTeam(context.TODO(), toGithubID(rs.Primary.ID)) if err != nil { return err } @@ -96,7 +103,7 @@ func testAccCheckGithubTeamDestroy(s *terraform.State) error { continue } - team, resp, err := conn.Organizations.GetTeam(toGithubID(rs.Primary.ID)) + team, resp, err := conn.Organizations.GetTeam(context.TODO(), toGithubID(rs.Primary.ID)) if err == nil { if team != nil && fromGithubID(team.ID) == rs.Primary.ID { @@ -111,18 +118,22 @@ func testAccCheckGithubTeamDestroy(s *terraform.State) error { return nil } -const testAccGithubTeamConfig = ` +func testAccGithubTeamConfig(randString string) string { + return fmt.Sprintf(` resource "github_team" "foo" { - name = "foo" + name = "tf-acc-test-%s" description = "Terraform acc test group" privacy = "secret" } -` +`, randString) +} -const testAccGithubTeamUpdateConfig = ` +func testAccGithubTeamUpdateConfig(randString string) string { + return fmt.Sprintf(` resource "github_team" "foo" { - name = "foo2" + name = "tf-acc-test-updated-%s" description = "Terraform acc test group - updated" privacy = "closed" } -` +`, randString) +} diff --git a/builtin/providers/github/util.go b/builtin/providers/github/util.go index 96256a0545..d8f07df5a2 100644 --- a/builtin/providers/github/util.go +++ b/builtin/providers/github/util.go @@ -8,6 +8,11 @@ import ( "github.com/hashicorp/terraform/helper/schema" ) +const ( + // https://developer.github.com/guides/traversing-with-pagination/#basics-of-pagination + maxPerPage = 100 +) + func toGithubID(id string) int { githubID, _ := strconv.Atoi(id) return githubID diff --git a/builtin/providers/github/util_permissions.go b/builtin/providers/github/util_permissions.go index 43dd2744df..edd8b164a4 100644 --- a/builtin/providers/github/util_permissions.go +++ b/builtin/providers/github/util_permissions.go @@ -1,10 +1,20 @@ package github -import "errors" +import ( + "errors" + "fmt" -const pullPermission string = "pull" -const pushPermission string = "push" -const adminPermission string = "admin" + "github.com/google/go-github/github" +) + +const ( + pullPermission string = "pull" + pushPermission string = "push" + adminPermission string = "admin" + + writePermission string = "write" + readPermission string = "read" +) func getRepoPermission(p *map[string]bool) (string, error) { @@ -22,3 +32,18 @@ func getRepoPermission(p *map[string]bool) (string, error) { return "", errors.New("At least one permission expected from permissions map.") } } + +func getInvitationPermission(i *github.RepositoryInvitation) (string, error) { + // Permissions for some GitHub API routes are expressed as "read", + // "write", and "admin"; in other places, they are expressed as "pull", + // "push", and "admin". + if *i.Permissions == readPermission { + return pullPermission, nil + } else if *i.Permissions == writePermission { + return pushPermission, nil + } else if *i.Permissions == adminPermission { + return adminPermission, nil + } + + return "", fmt.Errorf("unexpected permission value: %v", *i.Permissions) +} diff --git a/builtin/providers/google/image.go b/builtin/providers/google/image.go index e4a50905ef..d21210d991 100644 --- a/builtin/providers/google/image.go +++ b/builtin/providers/google/image.go @@ -2,96 +2,193 @@ package google import ( "fmt" + "regexp" "strings" + + "google.golang.org/api/googleapi" ) -// If the given name is a URL, return it. 
-// If it is of the form project/name, search the specified project first, then -// search image families in the specified project. -// If it is of the form name then look in the configured project, then hosted -// image projects, and lastly at image families in hosted image projects. -func resolveImage(c *Config, name string) (string, error) { +const ( + resolveImageProjectRegex = "[-_a-zA-Z0-9]*" + resolveImageFamilyRegex = "[-_a-zA-Z0-9]*" + resolveImageImageRegex = "[-_a-zA-Z0-9]*" +) - if strings.HasPrefix(name, "https://www.googleapis.com/compute/v1/") { - return name, nil +var ( + resolveImageProjectImage = regexp.MustCompile(fmt.Sprintf("^projects/(%s)/global/images/(%s)$", resolveImageProjectRegex, resolveImageImageRegex)) + resolveImageProjectFamily = regexp.MustCompile(fmt.Sprintf("^projects/(%s)/global/images/family/(%s)$", resolveImageProjectRegex, resolveImageFamilyRegex)) + resolveImageGlobalImage = regexp.MustCompile(fmt.Sprintf("^global/images/(%s)$", resolveImageImageRegex)) + resolveImageGlobalFamily = regexp.MustCompile(fmt.Sprintf("^global/images/family/(%s)$", resolveImageFamilyRegex)) + resolveImageFamilyFamily = regexp.MustCompile(fmt.Sprintf("^family/(%s)$", resolveImageFamilyRegex)) + resolveImageProjectImageShorthand = regexp.MustCompile(fmt.Sprintf("^(%s)/(%s)$", resolveImageProjectRegex, resolveImageImageRegex)) + resolveImageProjectFamilyShorthand = regexp.MustCompile(fmt.Sprintf("^(%s)/(%s)$", resolveImageProjectRegex, resolveImageFamilyRegex)) + resolveImageFamily = regexp.MustCompile(fmt.Sprintf("^(%s)$", resolveImageFamilyRegex)) + resolveImageImage = regexp.MustCompile(fmt.Sprintf("^(%s)$", resolveImageImageRegex)) + resolveImageLink = regexp.MustCompile(fmt.Sprintf("^https://www.googleapis.com/compute/v1/projects/(%s)/global/images/(%s)", resolveImageProjectRegex, resolveImageImageRegex)) +) +func resolveImageImageExists(c *Config, project, name string) (bool, error) { + if _, err := c.clientCompute.Images.Get(project, name).Do(); err == nil { + return true, nil + } else if gerr, ok := err.(*googleapi.Error); ok && gerr.Code == 404 { + return false, nil } else { - splitName := strings.Split(name, "/") - if len(splitName) == 1 { + return false, fmt.Errorf("Error checking if image %s exists: %s", name, err) + } +} - // Must infer the project name: +func resolveImageFamilyExists(c *Config, project, name string) (bool, error) { + if _, err := c.clientCompute.Images.GetFromFamily(project, name).Do(); err == nil { + return true, nil + } else if gerr, ok := err.(*googleapi.Error); ok && gerr.Code == 404 { + return false, nil + } else { + return false, fmt.Errorf("Error checking if family %s exists: %s", name, err) + } +} - // First, try the configured project for a specific image: - image, err := c.clientCompute.Images.Get(c.Project, name).Do() - if err == nil { - return image.SelfLink, nil - } +func sanityTestRegexMatches(expected int, got []string, regexType, name string) error { + if len(got)-1 != expected { // subtract one, index zero is the entire matched expression + return fmt.Errorf("Expected %d %s regex matches, got %d for %s", expected, regexType, len(got)-1, name) + } + return nil +} - // If it doesn't exist, try to see if it works as an image family: - image, err = c.clientCompute.Images.GetFromFamily(c.Project, name).Do() - if err == nil { - return image.SelfLink, nil - } - - // If we match a lookup for an alternate project, then try that next. - // If not, we return the original error. 
- - // If the image name contains the left hand side, we use the project from - // the right hand side. - imageMap := map[string]string{ - "centos": "centos-cloud", - "coreos": "coreos-cloud", - "debian": "debian-cloud", - "opensuse": "opensuse-cloud", - "rhel": "rhel-cloud", - "sles": "suse-cloud", - "ubuntu": "ubuntu-os-cloud", - "windows": "windows-cloud", - } - var project string - for k, v := range imageMap { - if strings.Contains(name, k) { - project = v - break - } - } - if project == "" { - return "", err - } - - // There was a match, but the image still may not exist, so check it: - image, err = c.clientCompute.Images.Get(project, name).Do() - if err == nil { - return image.SelfLink, nil - } - - // If it doesn't exist, try to see if it works as an image family: - image, err = c.clientCompute.Images.GetFromFamily(project, name).Do() - if err == nil { - return image.SelfLink, nil - } - - return "", err - - } else if len(splitName) == 2 { - - // Check if image exists in the specified project: - image, err := c.clientCompute.Images.Get(splitName[0], splitName[1]).Do() - if err == nil { - return image.SelfLink, nil - } - - // If it doesn't, check if it exists as an image family: - image, err = c.clientCompute.Images.GetFromFamily(splitName[0], splitName[1]).Do() - if err == nil { - return image.SelfLink, nil - } - - return "", err - - } else { - return "", fmt.Errorf("Invalid image name, require URL, project/name, or just name: %s", name) +// If the given name is a URL, return it. +// If it's in the form projects/{project}/global/images/{image}, return it +// If it's in the form projects/{project}/global/images/family/{family}, return it +// If it's in the form global/images/{image}, return it +// If it's in the form global/images/family/{family}, return it +// If it's in the form family/{family}, check if it's a family in the current project. If it is, return it as global/images/family/{family}. +// If not, check if it could be a GCP-provided family, and if it exists. If it does, return it as projects/{project}/global/images/family/{family}. +// If it's in the form {project}/{family-or-image}, check if it's an image in the named project. If it is, return it as projects/{project}/global/images/{image}. +// If not, check if it's a family in the named project. If it is, return it as projects/{project}/global/images/family/{family}. +// If it's in the form {family-or-image}, check if it's an image in the current project. If it is, return it as global/images/{image}. +// If not, check if it could be a GCP-provided image, and if it exists. If it does, return it as projects/{project}/global/images/{image}. +// If not, check if it's a family in the current project. If it is, return it as global/images/family/{family}. +// If not, check if it could be a GCP-provided family, and if it exists. 
If it does, return it as projects/{project}/global/images/family/{family}. +func resolveImage(c *Config, name string) (string, error) { + // built-in projects to look for images/families containing the string + // on the left in the map below + imageMap := map[string]string{ + "centos": "centos-cloud", + "coreos": "coreos-cloud", + "debian": "debian-cloud", + "opensuse": "opensuse-cloud", + "rhel": "rhel-cloud", + "sles": "suse-cloud", + "ubuntu": "ubuntu-os-cloud", + "windows": "windows-cloud", + } + var builtInProject string + for k, v := range imageMap { + if strings.Contains(name, k) { + builtInProject = v + break } } - + switch { + case resolveImageLink.MatchString(name): // https://www.googleapis.com/compute/v1/projects/xyz/global/images/xyz + return name, nil + case resolveImageProjectImage.MatchString(name): // projects/xyz/global/images/xyz + res := resolveImageProjectImage.FindStringSubmatch(name) + if err := sanityTestRegexMatches(2, res, "project image", name); err != nil { + return "", err + } + return fmt.Sprintf("projects/%s/global/images/%s", res[1], res[2]), nil + case resolveImageProjectFamily.MatchString(name): // projects/xyz/global/images/family/xyz + res := resolveImageProjectFamily.FindStringSubmatch(name) + if err := sanityTestRegexMatches(2, res, "project family", name); err != nil { + return "", err + } + return fmt.Sprintf("projects/%s/global/images/family/%s", res[1], res[2]), nil + case resolveImageGlobalImage.MatchString(name): // global/images/xyz + res := resolveImageGlobalImage.FindStringSubmatch(name) + if err := sanityTestRegexMatches(1, res, "global image", name); err != nil { + return "", err + } + return fmt.Sprintf("global/images/%s", res[1]), nil + case resolveImageGlobalFamily.MatchString(name): // global/images/family/xyz + res := resolveImageGlobalFamily.FindStringSubmatch(name) + if err := sanityTestRegexMatches(1, res, "global family", name); err != nil { + return "", err + } + return fmt.Sprintf("global/images/family/%s", res[1]), nil + case resolveImageFamilyFamily.MatchString(name): // family/xyz + res := resolveImageFamilyFamily.FindStringSubmatch(name) + if err := sanityTestRegexMatches(1, res, "family family", name); err != nil { + return "", err + } + if ok, err := resolveImageFamilyExists(c, c.Project, res[1]); err != nil { + return "", err + } else if ok { + return fmt.Sprintf("global/images/family/%s", res[1]), nil + } + if builtInProject != "" { + if ok, err := resolveImageFamilyExists(c, builtInProject, res[1]); err != nil { + return "", err + } else if ok { + return fmt.Sprintf("projects/%s/global/images/family/%s", builtInProject, res[1]), nil + } + } + case resolveImageProjectImageShorthand.MatchString(name): // xyz/xyz + res := resolveImageProjectImageShorthand.FindStringSubmatch(name) + if err := sanityTestRegexMatches(2, res, "project image shorthand", name); err != nil { + return "", err + } + if ok, err := resolveImageImageExists(c, res[1], res[2]); err != nil { + return "", err + } else if ok { + return fmt.Sprintf("projects/%s/global/images/%s", res[1], res[2]), nil + } + fallthrough // check if it's a family + case resolveImageProjectFamilyShorthand.MatchString(name): // xyz/xyz + res := resolveImageProjectFamilyShorthand.FindStringSubmatch(name) + if err := sanityTestRegexMatches(2, res, "project family shorthand", name); err != nil { + return "", err + } + if ok, err := resolveImageFamilyExists(c, res[1], res[2]); err != nil { + return "", err + } else if ok { + return fmt.Sprintf("projects/%s/global/images/family/%s", res[1], res[2]),
nil + } + case resolveImageImage.MatchString(name): // xyz + res := resolveImageImage.FindStringSubmatch(name) + if err := sanityTestRegexMatches(1, res, "image", name); err != nil { + return "", err + } + if ok, err := resolveImageImageExists(c, c.Project, res[1]); err != nil { + return "", err + } else if ok { + return fmt.Sprintf("global/images/%s", res[1]), nil + } + if builtInProject != "" { + // check the images GCP provides + if ok, err := resolveImageImageExists(c, builtInProject, res[1]); err != nil { + return "", err + } else if ok { + return fmt.Sprintf("projects/%s/global/images/%s", builtInProject, res[1]), nil + } + } + fallthrough // check if the name is a family, instead of an image + case resolveImageFamily.MatchString(name): // xyz + res := resolveImageFamily.FindStringSubmatch(name) + if err := sanityTestRegexMatches(1, res, "family", name); err != nil { + return "", err + } + if ok, err := resolveImageFamilyExists(c, c.Project, res[1]); err != nil { + return "", err + } else if ok { + return fmt.Sprintf("global/images/family/%s", res[1]), nil + } + if builtInProject != "" { + // check the families GCP provides + if ok, err := resolveImageFamilyExists(c, builtInProject, res[1]); err != nil { + return "", err + } else if ok { + return fmt.Sprintf("projects/%s/global/images/family/%s", builtInProject, res[1]), nil + } + } + } + return "", fmt.Errorf("Could not find image or family %s", name) } diff --git a/builtin/providers/google/image_test.go b/builtin/providers/google/image_test.go new file mode 100644 index 0000000000..e0f56518af --- /dev/null +++ b/builtin/providers/google/image_test.go @@ -0,0 +1,107 @@ +package google + +import ( + "fmt" + "testing" + + compute "google.golang.org/api/compute/v1" + + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccComputeImage_resolveImage(t *testing.T) { + var image compute.Image + rand := acctest.RandString(10) + name := fmt.Sprintf("test-image-%s", rand) + fam := fmt.Sprintf("test-image-family-%s", rand) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckComputeImageDestroy, + Steps: []resource.TestStep{ + { + Config: testAccComputeImage_resolving(name, fam), + Check: resource.ComposeTestCheckFunc( + testAccCheckComputeImageExists( + "google_compute_image.foobar", &image), + testAccCheckComputeImageResolution("google_compute_image.foobar"), + ), + }, + }, + }) +} + +func testAccCheckComputeImageResolution(n string) resource.TestCheckFunc { + return func(s *terraform.State) error { + config := testAccProvider.Meta().(*Config) + project := config.Project + + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Resource not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + if rs.Primary.Attributes["name"] == "" { + return fmt.Errorf("No image name is set") + } + if rs.Primary.Attributes["family"] == "" { + return fmt.Errorf("No image family is set") + } + if rs.Primary.Attributes["self_link"] == "" { + return fmt.Errorf("No self_link is set") + } + + name := rs.Primary.Attributes["name"] + family := rs.Primary.Attributes["family"] + link := rs.Primary.Attributes["self_link"] + + images := map[string]string{ + "family/debian-8": "projects/debian-cloud/global/images/family/debian-8", + "projects/debian-cloud/global/images/debian-8-jessie-v20170110": 
"projects/debian-cloud/global/images/debian-8-jessie-v20170110", + "debian-8": "projects/debian-cloud/global/images/family/debian-8", + "debian-8-jessie-v20170110": "projects/debian-cloud/global/images/debian-8-jessie-v20170110", + "https://www.googleapis.com/compute/v1/projects/debian-cloud/global/images/debian-8-jessie-v20170110": "https://www.googleapis.com/compute/v1/projects/debian-cloud/global/images/debian-8-jessie-v20170110", + + "global/images/" + name: "global/images/" + name, + "global/images/family/" + family: "global/images/family/" + family, + name: "global/images/" + name, + family: "global/images/family/" + family, + "family/" + family: "global/images/family/" + family, + project + "/" + name: "projects/" + project + "/global/images/" + name, + project + "/" + family: "projects/" + project + "/global/images/family/" + family, + link: link, + } + + for input, expectation := range images { + result, err := resolveImage(config, input) + if err != nil { + return fmt.Errorf("Error resolving input %s to image: %+v\n", input, err) + } + if result != expectation { + return fmt.Errorf("Expected input '%s' to resolve to '%s', it resolved to '%s' instead.\n", input, expectation, result) + } + } + return nil + } +} + +func testAccComputeImage_resolving(name, family string) string { + return fmt.Sprintf(` +resource "google_compute_disk" "foobar" { + name = "%s" + zone = "us-central1-a" + image = "debian-8-jessie-v20160803" +} +resource "google_compute_image" "foobar" { + name = "%s" + family = "%s" + source_disk = "${google_compute_disk.foobar.self_link}" +} +`, name, name, family) +} diff --git a/builtin/providers/google/provider.go b/builtin/providers/google/provider.go index 7984a1f225..7562609c38 100644 --- a/builtin/providers/google/provider.go +++ b/builtin/providers/google/provider.go @@ -5,7 +5,6 @@ import ( "fmt" "strings" - "github.com/hashicorp/terraform/helper/pathorcontents" "github.com/hashicorp/terraform/helper/schema" "github.com/hashicorp/terraform/terraform" "google.golang.org/api/compute/v1" @@ -16,14 +15,6 @@ import ( func Provider() terraform.ResourceProvider { return &schema.Provider{ Schema: map[string]*schema.Schema{ - "account_file": &schema.Schema{ - Type: schema.TypeString, - Optional: true, - DefaultFunc: schema.EnvDefaultFunc("GOOGLE_ACCOUNT_FILE", nil), - ValidateFunc: validateAccountFile, - Deprecated: "Use the credentials field instead", - }, - "credentials": &schema.Schema{ Type: schema.TypeString, Optional: true, @@ -115,9 +106,6 @@ func Provider() terraform.ResourceProvider { func providerConfigure(d *schema.ResourceData) (interface{}, error) { credentials := d.Get("credentials").(string) - if credentials == "" { - credentials = d.Get("account_file").(string) - } config := Config{ Credentials: credentials, Project: d.Get("project").(string), @@ -131,36 +119,6 @@ func providerConfigure(d *schema.ResourceData) (interface{}, error) { return &config, nil } -func validateAccountFile(v interface{}, k string) (warnings []string, errors []error) { - if v == nil { - return - } - - value := v.(string) - - if value == "" { - return - } - - contents, wasPath, err := pathorcontents.Read(value) - if err != nil { - errors = append(errors, fmt.Errorf("Error loading Account File: %s", err)) - } - if wasPath { - warnings = append(warnings, `account_file was provided as a path instead of -as file contents. This support will be removed in the future. 
Please update -your configuration to use ${file("filename.json")} instead.`) - } - - var account accountFile - if err := json.Unmarshal([]byte(contents), &account); err != nil { - errors = append(errors, - fmt.Errorf("account_file not valid JSON '%s': %s", contents, err)) - } - - return -} - func validateCredentials(v interface{}, k string) (warnings []string, errors []error) { if v == nil || v.(string) == "" { return @@ -271,17 +229,20 @@ func getNetworkLink(d *schema.ResourceData, config *Config, field string) (strin func getNetworkName(d *schema.ResourceData, field string) (string, error) { if v, ok := d.GetOk(field); ok { network := v.(string) - - if strings.HasPrefix(network, "https://www.googleapis.com/compute/") { - // extract the network name from SelfLink URL - networkName := network[strings.LastIndex(network, "/")+1:] - if networkName == "" { - return "", fmt.Errorf("network url not valid") - } - return networkName, nil - } - - return network, nil + return getNetworkNameFromSelfLink(network) } return "", nil } + +func getNetworkNameFromSelfLink(network string) (string, error) { + if strings.HasPrefix(network, "https://www.googleapis.com/compute/") { + // extract the network name from SelfLink URL + networkName := network[strings.LastIndex(network, "/")+1:] + if networkName == "" { + return "", fmt.Errorf("network url not valid") + } + return networkName, nil + } + + return network, nil +} diff --git a/builtin/providers/google/resource_compute_backend_service.go b/builtin/providers/google/resource_compute_backend_service.go index 94b05fe44c..cd4d9bd135 100644 --- a/builtin/providers/google/resource_compute_backend_service.go +++ b/builtin/providers/google/resource_compute_backend_service.go @@ -118,10 +118,10 @@ func resourceComputeBackendService() *schema.Resource { }, "region": &schema.Schema{ - Type: schema.TypeString, - Optional: true, - ForceNew: true, - Deprecated: "This parameter has been removed as it was never used", + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Removed: "region has been removed as it was never used", }, "self_link": &schema.Schema{ diff --git a/builtin/providers/google/resource_compute_disk.go b/builtin/providers/google/resource_compute_disk.go index 44efb5b02f..36554ca73c 100644 --- a/builtin/providers/google/resource_compute_disk.go +++ b/builtin/providers/google/resource_compute_disk.go @@ -112,6 +112,7 @@ func resourceComputeDiskCreate(d *schema.ResourceData, meta interface{}) error { } disk.SourceImage = imageUrl + log.Printf("[DEBUG] Image name resolved to: %s", imageUrl) } if v, ok := d.GetOk("type"); ok { diff --git a/builtin/providers/google/resource_compute_forwarding_rule.go b/builtin/providers/google/resource_compute_forwarding_rule.go index 5db038110a..b4bd4a7792 100644 --- a/builtin/providers/google/resource_compute_forwarding_rule.go +++ b/builtin/providers/google/resource_compute_forwarding_rule.go @@ -76,6 +76,12 @@ func resourceComputeForwardingRule() *schema.Resource { Type: schema.TypeString, Optional: true, ForceNew: true, + // The API may return a single port (e.g. "80") as the equivalent range ("80-80"); + // suppress the diff in that case. + DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { + if old == new+"-"+new { + return true + } + return false + }, }, "ports": &schema.Schema{ diff --git a/builtin/providers/google/resource_compute_forwarding_rule_test.go b/builtin/providers/google/resource_compute_forwarding_rule_test.go index 2ae4a10026..349ebd82c2 100644 --- a/builtin/providers/google/resource_compute_forwarding_rule_test.go +++ b/builtin/providers/google/resource_compute_forwarding_rule_test.go @@
-29,6 +29,26 @@ func TestAccComputeForwardingRule_basic(t *testing.T) { }) } +func TestAccComputeForwardingRule_singlePort(t *testing.T) { + poolName := fmt.Sprintf("tf-%s", acctest.RandString(10)) + ruleName := fmt.Sprintf("tf-%s", acctest.RandString(10)) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckComputeForwardingRuleDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccComputeForwardingRule_singlePort(poolName, ruleName), + Check: resource.ComposeTestCheckFunc( + testAccCheckComputeForwardingRuleExists( + "google_compute_forwarding_rule.foobar"), + ), + }, + }, + }) +} + func TestAccComputeForwardingRule_ip(t *testing.T) { addrName := fmt.Sprintf("tf-%s", acctest.RandString(10)) poolName := fmt.Sprintf("tf-%s", acctest.RandString(10)) @@ -133,6 +153,23 @@ resource "google_compute_forwarding_rule" "foobar" { `, poolName, ruleName) } +func testAccComputeForwardingRule_singlePort(poolName, ruleName string) string { + return fmt.Sprintf(` +resource "google_compute_target_pool" "foobar-tp" { + description = "Resource created for Terraform acceptance testing" + instances = ["us-central1-a/foo", "us-central1-b/bar"] + name = "%s" +} +resource "google_compute_forwarding_rule" "foobar" { + description = "Resource created for Terraform acceptance testing" + ip_protocol = "UDP" + name = "%s" + port_range = "80" + target = "${google_compute_target_pool.foobar-tp.self_link}" +} +`, poolName, ruleName) +} + func testAccComputeForwardingRule_ip(addrName, poolName, ruleName string) string { return fmt.Sprintf(` resource "google_compute_address" "foo" { diff --git a/builtin/providers/google/resource_compute_instance_group.go b/builtin/providers/google/resource_compute_instance_group.go index a6ece3a416..1f2b93e063 100644 --- a/builtin/providers/google/resource_compute_instance_group.go +++ b/builtin/providers/google/resource_compute_instance_group.go @@ -18,6 +18,8 @@ func resourceComputeInstanceGroup() *schema.Resource { Update: resourceComputeInstanceGroupUpdate, Delete: resourceComputeInstanceGroupDelete, + SchemaVersion: 1, + Schema: map[string]*schema.Schema{ "name": &schema.Schema{ Type: schema.TypeString, @@ -38,9 +40,10 @@ func resourceComputeInstanceGroup() *schema.Resource { }, "instances": &schema.Schema{ - Type: schema.TypeList, + Type: schema.TypeSet, Optional: true, Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, }, "named_port": &schema.Schema{ @@ -142,7 +145,7 @@ func resourceComputeInstanceGroupCreate(d *schema.ResourceData, meta interface{} } if v, ok := d.GetOk("instances"); ok { - instanceUrls := convertStringArr(v.([]interface{})) + instanceUrls := convertStringArr(v.(*schema.Set).List()) if !validInstanceURLs(instanceUrls) { return fmt.Errorf("Error invalid instance URLs: %v", instanceUrls) } @@ -239,8 +242,8 @@ func resourceComputeInstanceGroupUpdate(d *schema.ResourceData, meta interface{} // to-do check for no instances from_, to_ := d.GetChange("instances") - from := convertStringArr(from_.([]interface{})) - to := convertStringArr(to_.([]interface{})) + from := convertStringArr(from_.(*schema.Set).List()) + to := convertStringArr(to_.(*schema.Set).List()) if !validInstanceURLs(from) { return fmt.Errorf("Error invalid instance URLs: %v", from) diff --git a/builtin/providers/google/resource_compute_instance_group_migrate.go b/builtin/providers/google/resource_compute_instance_group_migrate.go new file mode 100644 index 
0000000000..1db04c22a2 --- /dev/null +++ b/builtin/providers/google/resource_compute_instance_group_migrate.go @@ -0,0 +1,74 @@ +package google + +import ( + "fmt" + "log" + "strconv" + "strings" + + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/terraform" +) + +func resourceComputeInstanceGroupMigrateState( + v int, is *terraform.InstanceState, meta interface{}) (*terraform.InstanceState, error) { + if is.Empty() { + log.Println("[DEBUG] Empty InstanceState; nothing to migrate.") + return is, nil + } + + switch v { + case 0: + log.Println("[INFO] Found Compute Instance Group State v0; migrating to v1") + is, err := migrateInstanceGroupStateV0toV1(is) + if err != nil { + return is, err + } + return is, nil + default: + return is, fmt.Errorf("Unexpected schema version: %d", v) + } +} + +func migrateInstanceGroupStateV0toV1(is *terraform.InstanceState) (*terraform.InstanceState, error) { + log.Printf("[DEBUG] Attributes before migration: %#v", is.Attributes) + + newInstances := []string{} + + for k, v := range is.Attributes { + if !strings.HasPrefix(k, "instances.") { + continue + } + + if k == "instances.#" { + continue + } + + // Key is now of the form instances.%d + kParts := strings.Split(k, ".") + + // Sanity check: there should be exactly two parts, and the second should be a number + badFormat := false + if len(kParts) != 2 { + badFormat = true + } else if _, err := strconv.Atoi(kParts[1]); err != nil { + badFormat = true + } + + if badFormat { + return is, fmt.Errorf("migration error: found instances key in unexpected format: %s", k) + } + + newInstances = append(newInstances, v) + delete(is.Attributes, k) + } + + for _, v := range newInstances { + hash := schema.HashString(v) + newKey := fmt.Sprintf("instances.%d", hash) + is.Attributes[newKey] = v + } + + log.Printf("[DEBUG] Attributes after migration: %#v", is.Attributes) + return is, nil +} diff --git a/builtin/providers/google/resource_compute_instance_group_migrate_test.go b/builtin/providers/google/resource_compute_instance_group_migrate_test.go new file mode 100644 index 0000000000..88057d99e5 --- /dev/null +++ b/builtin/providers/google/resource_compute_instance_group_migrate_test.go @@ -0,0 +1,75 @@ +package google + +import ( + "testing" + + "github.com/hashicorp/terraform/terraform" +) + +func TestComputeInstanceGroupMigrateState(t *testing.T) { + cases := map[string]struct { + StateVersion int + Attributes map[string]string + Expected map[string]string + Meta interface{} + }{ + "change instances from list to set": { + StateVersion: 0, + Attributes: map[string]string{ + "instances.#": "1", + "instances.0": "https://www.googleapis.com/compute/v1/projects/project_name/zones/zone_name/instances/instancegroup-test-1", + "instances.1": "https://www.googleapis.com/compute/v1/projects/project_name/zones/zone_name/instances/instancegroup-test-0", + }, + Expected: map[string]string{ + "instances.#": "1", + "instances.764135222": "https://www.googleapis.com/compute/v1/projects/project_name/zones/zone_name/instances/instancegroup-test-1", + "instances.1519187872": "https://www.googleapis.com/compute/v1/projects/project_name/zones/zone_name/instances/instancegroup-test-0", + }, + Meta: &Config{}, + }, + } + + for tn, tc := range cases { + is := &terraform.InstanceState{ + ID: "i-abc123", + Attributes: tc.Attributes, + } + is, err := resourceComputeInstanceGroupMigrateState( + tc.StateVersion, is, tc.Meta) + + if err != nil { + t.Fatalf("bad: %s, err: %#v", tn, err) + } + + for k, v := range tc.Expected { + if
is.Attributes[k] != v { + t.Fatalf( + "bad: %s\n\n expected: %#v -> %#v\n got: %#v -> %#v\n in: %#v", + tn, k, v, k, is.Attributes[k], is.Attributes) + } + } + } +} + +func TestComputeInstanceGroupMigrateState_empty(t *testing.T) { + var is *terraform.InstanceState + var meta *Config + + // should handle nil + is, err := resourceComputeInstanceGroupMigrateState(0, is, meta) + + if err != nil { + t.Fatalf("err: %#v", err) + } + if is != nil { + t.Fatalf("expected nil instancestate, got: %#v", is) + } + + // should handle non-nil but empty + is = &terraform.InstanceState{} + is, err = resourceComputeInstanceGroupMigrateState(0, is, meta) + + if err != nil { + t.Fatalf("err: %#v", err) + } +} diff --git a/builtin/providers/google/resource_compute_instance_group_test.go b/builtin/providers/google/resource_compute_instance_group_test.go index 4435454c1c..2dfe63d345 100644 --- a/builtin/providers/google/resource_compute_instance_group_test.go +++ b/builtin/providers/google/resource_compute_instance_group_test.go @@ -70,6 +70,26 @@ func TestAccComputeInstanceGroup_update(t *testing.T) { }) } +func TestAccComputeInstanceGroup_outOfOrderInstances(t *testing.T) { + var instanceGroup compute.InstanceGroup + var instanceName = fmt.Sprintf("instancegroup-test-%s", acctest.RandString(10)) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccComputeInstanceGroup_destroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccComputeInstanceGroup_outOfOrderInstances(instanceName), + Check: resource.ComposeTestCheckFunc( + testAccComputeInstanceGroup_exists( + "google_compute_instance_group.group", &instanceGroup), + ), + }, + }, + }) +} + func testAccComputeInstanceGroup_destroy(s *terraform.State) error { config := testAccProvider.Meta().(*Config) @@ -297,3 +317,51 @@ func testAccComputeInstanceGroup_update2(instance string) string { } }`, instance, instance) } + +func testAccComputeInstanceGroup_outOfOrderInstances(instance string) string { + return fmt.Sprintf(` + resource "google_compute_instance" "ig_instance" { + name = "%s-1" + machine_type = "n1-standard-1" + can_ip_forward = false + zone = "us-central1-c" + + disk { + image = "debian-8-jessie-v20160803" + } + + network_interface { + network = "default" + } + } + + resource "google_compute_instance" "ig_instance_2" { + name = "%s-2" + machine_type = "n1-standard-1" + can_ip_forward = false + zone = "us-central1-c" + + disk { + image = "debian-8-jessie-v20160803" + } + + network_interface { + network = "default" + } + } + + resource "google_compute_instance_group" "group" { + description = "Terraform test instance group" + name = "%s" + zone = "us-central1-c" + instances = [ "${google_compute_instance.ig_instance_2.self_link}", "${google_compute_instance.ig_instance.self_link}" ] + named_port { + name = "http" + port = "8080" + } + named_port { + name = "https" + port = "8443" + } + }`, instance, instance, instance) +} diff --git a/builtin/providers/google/resource_container_cluster.go b/builtin/providers/google/resource_container_cluster.go index 1337e0d920..203a990b85 100644 --- a/builtin/providers/google/resource_container_cluster.go +++ b/builtin/providers/google/resource_container_cluster.go @@ -11,6 +11,10 @@ import ( "google.golang.org/api/googleapi" ) +var ( + instanceGroupManagerURL = 
regexp.MustCompile("^https://www.googleapis.com/compute/v1/projects/([a-z][a-z0-9-]{5}(?:[-a-z0-9]{0,23}[a-z0-9])?)/zones/([a-z0-9-]*)/instanceGroupManagers/([^/]*)") +) + func resourceContainerCluster() *schema.Resource { return &schema.Resource{ Create: resourceContainerClusterCreate, @@ -227,6 +231,22 @@ func resourceContainerCluster() *schema.Resource { }, }, + "local_ssd_count": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + Computed: true, + ForceNew: true, + ValidateFunc: func(v interface{}, k string) (ws []string, errors []error) { + value := v.(int) + + if value < 0 { + errors = append(errors, fmt.Errorf( + "%q cannot be negative", k)) + } + return + }, + }, + "oauth_scopes": &schema.Schema{ Type: schema.TypeList, Optional: true, @@ -239,6 +259,27 @@ }, }, }, + + "service_account": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, + + "metadata": &schema.Schema{ + Type: schema.TypeMap, + Optional: true, + ForceNew: true, + Elem: schema.TypeString, + }, + + "image_type": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, }, }, }, @@ -365,6 +406,10 @@ func resourceContainerClusterCreate(d *schema.ResourceData, meta interface{}) er cluster.NodeConfig.DiskSizeGb = int64(v.(int)) } + if v, ok = nodeConfig["local_ssd_count"]; ok { + cluster.NodeConfig.LocalSsdCount = int64(v.(int)) + } + if v, ok := nodeConfig["oauth_scopes"]; ok { scopesList := v.([]interface{}) scopes := []string{} @@ -374,6 +419,22 @@ func resourceContainerClusterCreate(d *schema.ResourceData, meta interface{}) er cluster.NodeConfig.OauthScopes = scopes } + + if v, ok = nodeConfig["service_account"]; ok { + cluster.NodeConfig.ServiceAccount = v.(string) + } + + if v, ok = nodeConfig["metadata"]; ok { + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + cluster.NodeConfig.Metadata = m + } + + if v, ok = nodeConfig["image_type"]; ok { + cluster.NodeConfig.ImageType = v.(string) + } } req := &container.CreateClusterRequest{ @@ -460,7 +521,12 @@ func resourceContainerClusterRead(d *schema.ResourceData, meta interface{}) erro d.Set("network", d.Get("network").(string)) d.Set("subnetwork", cluster.Subnetwork) d.Set("node_config", flattenClusterNodeConfig(cluster.NodeConfig)) - d.Set("instance_group_urls", cluster.InstanceGroupUrls) + + if igUrls, err := getInstanceGroupUrlsFromManagerUrls(config, cluster.InstanceGroupUrls); err != nil { + return err + } else { + d.Set("instance_group_urls", igUrls) + } return nil } @@ -531,11 +597,39 @@ func resourceContainerClusterDelete(d *schema.ResourceData, meta interface{}) er return nil } +// Container Engine's API currently mistakenly returns the instance group manager's +// URL instead of the instance group's URL in its responses. This shim detects and +// corrects that error: it matches URLs that name an instance group manager, fetches +// that manager, and substitutes the URL of the instance group it controls. +// +// This should be removed when the API response is fixed.
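+// As an illustrative sketch (the project, zone, and manager names below are +// hypothetical), a returned URL such as +// https://www.googleapis.com/compute/v1/projects/my-project/zones/us-central1-a/instanceGroupManagers/my-igm +// matches the pattern above; the shim then fetches that instance group manager and +// appends its InstanceGroup self link instead, while URLs that do not match the +// pattern are passed through unchanged.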
+func getInstanceGroupUrlsFromManagerUrls(config *Config, igmUrls []string) ([]string, error) { + instanceGroupURLs := make([]string, 0, len(igmUrls)) + for _, u := range igmUrls { + if !instanceGroupManagerURL.MatchString(u) { + instanceGroupURLs = append(instanceGroupURLs, u) + continue + } + matches := instanceGroupManagerURL.FindStringSubmatch(u) + instanceGroupManager, err := config.clientCompute.InstanceGroupManagers.Get(matches[1], matches[2], matches[3]).Do() + if err != nil { + return nil, fmt.Errorf("Error reading instance group manager returned as an instance group URL: %s", err) + } + instanceGroupURLs = append(instanceGroupURLs, instanceGroupManager.InstanceGroup) + } + return instanceGroupURLs, nil +} + func flattenClusterNodeConfig(c *container.NodeConfig) []map[string]interface{} { config := []map[string]interface{}{ map[string]interface{}{ - "machine_type": c.MachineType, - "disk_size_gb": c.DiskSizeGb, + "machine_type": c.MachineType, + "disk_size_gb": c.DiskSizeGb, + "local_ssd_count": c.LocalSsdCount, + "service_account": c.ServiceAccount, + "metadata": c.Metadata, + "image_type": c.ImageType, }, } diff --git a/builtin/providers/google/resource_container_cluster_test.go b/builtin/providers/google/resource_container_cluster_test.go index 4f4ff82010..f0723dcb12 100644 --- a/builtin/providers/google/resource_container_cluster_test.go +++ b/builtin/providers/google/resource_container_cluster_test.go @@ -4,10 +4,11 @@ import ( "fmt" "testing" + "strconv" + "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" - "strconv" ) func TestAccContainerCluster_basic(t *testing.T) { @@ -19,7 +20,7 @@ func TestAccContainerCluster_basic(t *testing.T) { resource.TestStep{ Config: testAccContainerCluster_basic, Check: resource.ComposeTestCheckFunc( - testAccCheckContainerClusterExists( + testAccCheckContainerCluster( "google_container_cluster.primary"), ), }, @@ -36,10 +37,8 @@ func TestAccContainerCluster_withAdditionalZones(t *testing.T) { resource.TestStep{ Config: testAccContainerCluster_withAdditionalZones, Check: resource.ComposeTestCheckFunc( - testAccCheckContainerClusterExists( + testAccCheckContainerCluster( "google_container_cluster.with_additional_zones"), - testAccCheckContainerClusterAdditionalZonesExist( - "google_container_cluster.with_additional_zones", 2), ), }, }, @@ -55,7 +54,7 @@ func TestAccContainerCluster_withVersion(t *testing.T) { resource.TestStep{ Config: testAccContainerCluster_withVersion, Check: resource.ComposeTestCheckFunc( - testAccCheckContainerClusterExists( + testAccCheckContainerCluster( "google_container_cluster.with_version"), ), }, @@ -72,7 +71,7 @@ func TestAccContainerCluster_withNodeConfig(t *testing.T) { resource.TestStep{ Config: testAccContainerCluster_withNodeConfig, Check: resource.ComposeTestCheckFunc( - testAccCheckContainerClusterExists( + testAccCheckContainerCluster( "google_container_cluster.with_node_config"), ), }, @@ -89,7 +88,7 @@ func TestAccContainerCluster_withNodeConfigScopeAlias(t *testing.T) { resource.TestStep{ Config: testAccContainerCluster_withNodeConfigScopeAlias, Check: resource.ComposeTestCheckFunc( - testAccCheckContainerClusterExists( + testAccCheckContainerCluster( "google_container_cluster.with_node_config_scope_alias"), ), }, @@ -106,9 +105,9 @@ func TestAccContainerCluster_network(t *testing.T) { resource.TestStep{ Config: testAccContainerCluster_networkRef, Check: resource.ComposeTestCheckFunc( - 
testAccCheckContainerClusterExists( + testAccCheckContainerCluster( "google_container_cluster.with_net_ref_by_url"), - testAccCheckContainerClusterExists( + testAccCheckContainerCluster( "google_container_cluster.with_net_ref_by_name"), ), }, @@ -116,6 +115,23 @@ func TestAccContainerCluster_network(t *testing.T) { }) } +func TestAccContainerCluster_backend(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckContainerClusterDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccContainerCluster_backendRef, + Check: resource.ComposeTestCheckFunc( + testAccCheckContainerCluster( + "google_container_cluster.primary"), + ), + }, + }, + }) +} + func testAccCheckContainerClusterDestroy(s *terraform.State) error { config := testAccProvider.Meta().(*Config) @@ -135,51 +151,165 @@ func testAccCheckContainerClusterDestroy(s *terraform.State) error { return nil } -func testAccCheckContainerClusterExists(n string) resource.TestCheckFunc { +func testAccCheckContainerCluster(n string) resource.TestCheckFunc { return func(s *terraform.State) error { - rs, ok := s.RootModule().Resources[n] - if !ok { - return fmt.Errorf("Not found: %s", n) - } - - if rs.Primary.ID == "" { - return fmt.Errorf("No ID is set") + attributes, err := getResourceAttributes(n, s) + if err != nil { + return err } config := testAccProvider.Meta().(*Config) - - attributes := rs.Primary.Attributes - found, err := config.clientContainer.Projects.Zones.Clusters.Get( + cluster, err := config.clientContainer.Projects.Zones.Clusters.Get( config.Project, attributes["zone"], attributes["name"]).Do() if err != nil { return err } - if found.Name != attributes["name"] { - return fmt.Errorf("Cluster not found") + if cluster.Name != attributes["name"] { + return fmt.Errorf("Cluster %s not found, found %s instead", attributes["name"], cluster.Name) + } + + type clusterTestField struct { + tf_attr string + gcp_attr interface{} + } + + var igUrls []string + if igUrls, err = getInstanceGroupUrlsFromManagerUrls(config, cluster.InstanceGroupUrls); err != nil { + return err + } + clusterTests := []clusterTestField{ + {"initial_node_count", strconv.FormatInt(cluster.InitialNodeCount, 10)}, + {"master_auth.0.client_certificate", cluster.MasterAuth.ClientCertificate}, + {"master_auth.0.client_key", cluster.MasterAuth.ClientKey}, + {"master_auth.0.cluster_ca_certificate", cluster.MasterAuth.ClusterCaCertificate}, + {"master_auth.0.password", cluster.MasterAuth.Password}, + {"master_auth.0.username", cluster.MasterAuth.Username}, + {"zone", cluster.Zone}, + {"cluster_ipv4_cidr", cluster.ClusterIpv4Cidr}, + {"description", cluster.Description}, + {"endpoint", cluster.Endpoint}, + {"instance_group_urls", igUrls}, + {"logging_service", cluster.LoggingService}, + {"monitoring_service", cluster.MonitoringService}, + {"subnetwork", cluster.Subnetwork}, + {"node_config.0.machine_type", cluster.NodeConfig.MachineType}, + {"node_config.0.disk_size_gb", strconv.FormatInt(cluster.NodeConfig.DiskSizeGb, 10)}, + {"node_config.0.local_ssd_count", strconv.FormatInt(cluster.NodeConfig.LocalSsdCount, 10)}, + {"node_config.0.oauth_scopes", cluster.NodeConfig.OauthScopes}, + {"node_config.0.service_account", cluster.NodeConfig.ServiceAccount}, + {"node_config.0.metadata", cluster.NodeConfig.Metadata}, + {"node_config.0.image_type", cluster.NodeConfig.ImageType}, + {"node_version", cluster.CurrentNodeVersion}, + } + + // Remove Zone from 
additional_zones since that's what the resource writes in state + additionalZones := []string{} + for _, location := range cluster.Locations { + if location != cluster.Zone { + additionalZones = append(additionalZones, location) + } + } + clusterTests = append(clusterTests, clusterTestField{"additional_zones", additionalZones}) + + // AddonsConfig is neither Required nor Computed, so the API may return nil for it + if cluster.AddonsConfig != nil { + if cluster.AddonsConfig.HttpLoadBalancing != nil { + clusterTests = append(clusterTests, clusterTestField{"addons_config.0.http_load_balancing.0.disabled", strconv.FormatBool(cluster.AddonsConfig.HttpLoadBalancing.Disabled)}) + } + if cluster.AddonsConfig.HorizontalPodAutoscaling != nil { + clusterTests = append(clusterTests, clusterTestField{"addons_config.0.horizontal_pod_autoscaling.0.disabled", strconv.FormatBool(cluster.AddonsConfig.HorizontalPodAutoscaling.Disabled)}) + } + } + + for _, attrs := range clusterTests { + if c := checkMatch(attributes, attrs.tf_attr, attrs.gcp_attr); c != "" { + return fmt.Errorf(c) + } + } + + // Network has to be done separately in order to normalize the two values + tf, err := getNetworkNameFromSelfLink(attributes["network"]) + if err != nil { + return err + } + gcp, err := getNetworkNameFromSelfLink(cluster.Network) + if err != nil { + return err + } + if tf != gcp { + return fmt.Errorf(matchError("network", tf, gcp)) } return nil } } -func testAccCheckContainerClusterAdditionalZonesExist(n string, num int) resource.TestCheckFunc { - return func(s *terraform.State) error { - rs, ok := s.RootModule().Resources[n] - if !ok { - return fmt.Errorf("Not found: %s", n) - } - - additionalZonesSize, err := strconv.Atoi(rs.Primary.Attributes["additional_zones.#"]) - if err != nil { - return err - } - if additionalZonesSize != num { - return fmt.Errorf("number of additional zones did not match %d, was %d", num, additionalZonesSize) - } - - return nil +func getResourceAttributes(n string, s *terraform.State) (map[string]string, error) { + rs, ok := s.RootModule().Resources[n] + if !ok { + return nil, fmt.Errorf("Not found: %s", n) } + + if rs.Primary.ID == "" { + return nil, fmt.Errorf("No ID is set") + } + + return rs.Primary.Attributes, nil +} + +func checkMatch(attributes map[string]string, attr string, gcp interface{}) string { + if gcpList, ok := gcp.([]string); ok { + return checkListMatch(attributes, attr, gcpList) + } + if gcpMap, ok := gcp.(map[string]string); ok { + return checkMapMatch(attributes, attr, gcpMap) + } + tf := attributes[attr] + if tf != gcp { + return matchError(attr, tf, gcp) + } + return "" +} + +func checkListMatch(attributes map[string]string, attr string, gcpList []string) string { + num, err := strconv.Atoi(attributes[attr+".#"]) + if err != nil { + return fmt.Sprintf("Error in number conversion for attribute %s: %s", attr, err) + } + if num != len(gcpList) { + return fmt.Sprintf("Cluster has mismatched %s size.\nTF Size: %d\nGCP Size: %d", attr, num, len(gcpList)) + } + + for i, gcp := range gcpList { + if tf := attributes[fmt.Sprintf("%s.%d", attr, i)]; tf != gcp { + return matchError(fmt.Sprintf("%s[%d]", attr, i), tf, gcp) + } + } + + return "" +} + +func checkMapMatch(attributes map[string]string, attr string, gcpMap map[string]string) string { + num, err := strconv.Atoi(attributes[attr+".%"]) + if err != nil { + return fmt.Sprintf("Error in number conversion for attribute %s: %s", attr, err) + } + if num != len(gcpMap) { + return fmt.Sprintf("Cluster has mismatched %s size.\nTF
Size: %d\nGCP Size: %d", attr, num, len(gcpMap)) + } + + for k, gcp := range gcpMap { + if tf := attributes[fmt.Sprintf("%s.%s", attr, k)]; tf != gcp { + return matchError(fmt.Sprintf("%s[%s]", attr, k), tf, gcp) + } + } + + return "" +} + +func matchError(attr, tf string, gcp interface{}) string { + return fmt.Sprintf("Cluster has mismatched %s.\nTF State: %+v\nGCP State: %+v", attr, tf, gcp) } var testAccContainerCluster_basic = fmt.Sprintf(` @@ -236,14 +366,20 @@ resource "google_container_cluster" "with_node_config" { } node_config { - machine_type = "g1-small" + machine_type = "n1-standard-1" disk_size_gb = 15 + local_ssd_count = 1 oauth_scopes = [ "https://www.googleapis.com/auth/compute", "https://www.googleapis.com/auth/devstorage.read_only", "https://www.googleapis.com/auth/logging.write", "https://www.googleapis.com/auth/monitoring" ] + service_account = "default" + metadata { + foo = "bar" + } + image_type = "CONTAINER_VM" } }`, acctest.RandString(10)) @@ -296,3 +432,49 @@ resource "google_container_cluster" "with_net_ref_by_name" { network = "${google_compute_network.container_network.name}" }`, acctest.RandString(10), acctest.RandString(10), acctest.RandString(10)) + +var testAccContainerCluster_backendRef = fmt.Sprintf(` +resource "google_compute_backend_service" "my-backend-service" { + name = "terraform-test-%s" + port_name = "http" + protocol = "HTTP" + + backend { + group = "${element(google_container_cluster.primary.instance_group_urls, 1)}" + } + + health_checks = ["${google_compute_http_health_check.default.self_link}"] +} + +resource "google_compute_http_health_check" "default" { + name = "terraform-test-%s" + request_path = "/" + check_interval_sec = 1 + timeout_sec = 1 +} + +resource "google_container_cluster" "primary" { + name = "terraform-test-%s" + zone = "us-central1-a" + initial_node_count = 3 + + additional_zones = [ + "us-central1-b", + "us-central1-c", + ] + + master_auth { + username = "mr.yoda" + password = "adoy.rm" + } + + node_config { + oauth_scopes = [ + "https://www.googleapis.com/auth/compute", + "https://www.googleapis.com/auth/devstorage.read_only", + "https://www.googleapis.com/auth/logging.write", + "https://www.googleapis.com/auth/monitoring", + ] + } +} +`, acctest.RandString(10), acctest.RandString(10), acctest.RandString(10)) diff --git a/builtin/providers/google/resource_google_project.go b/builtin/providers/google/resource_google_project.go index b4bcb9c4f6..9b947a66c6 100644 --- a/builtin/providers/google/resource_google_project.go +++ b/builtin/providers/google/resource_google_project.go @@ -1,7 +1,6 @@ package google import ( - "encoding/json" "fmt" "log" "net/http" @@ -16,13 +15,6 @@ import ( // resourceGoogleProject returns a *schema.Resource that allows a customer // to declare a Google Cloud Project resource. -// -// This example shows a project with a policy declared in config: -// -// resource "google_project" "my-project" { -// project = "a-project-id" -// policy = "${data.google_iam_policy.admin.policy}" -// } func resourceGoogleProject() *schema.Resource { return &schema.Resource{ SchemaVersion: 1, @@ -39,22 +31,15 @@ func resourceGoogleProject() *schema.Resource { Schema: map[string]*schema.Schema{ "id": &schema.Schema{ - Type: schema.TypeString, - Optional: true, - Computed: true, - Deprecated: "The id field has unexpected behaviour and probably doesn't do what you expect. See https://www.terraform.io/docs/providers/google/r/google_project.html#id-field for more information. 
Please use project_id instead; future versions of Terraform will remove the id field.", + Type: schema.TypeString, + Optional: true, + Computed: true, + Removed: "The id field has been removed. Use project_id instead.", }, "project_id": &schema.Schema{ Type: schema.TypeString, - Optional: true, + Required: true, ForceNew: true, - DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { - // This suppresses the diff if project_id is not set - if new == "" { - return true - } - return false - }, }, "skip_delete": &schema.Schema{ Type: schema.TypeBool, @@ -63,26 +48,23 @@ func resourceGoogleProject() *schema.Resource { }, "name": &schema.Schema{ Type: schema.TypeString, - Optional: true, - Computed: true, + Required: true, }, "org_id": &schema.Schema{ Type: schema.TypeString, - Optional: true, - Computed: true, + Required: true, ForceNew: true, }, "policy_data": &schema.Schema{ - Type: schema.TypeString, - Optional: true, - Computed: true, - Deprecated: "Use the 'google_project_iam_policy' resource to define policies for a Google Project", - DiffSuppressFunc: jsonPolicyDiffSuppress, + Type: schema.TypeString, + Optional: true, + Computed: true, + Removed: "Use the 'google_project_iam_policy' resource to define policies for a Google Project", }, "policy_etag": &schema.Schema{ - Type: schema.TypeString, - Computed: true, - Deprecated: "Use the the 'google_project_iam_policy' resource to define policies for a Google Project", + Type: schema.TypeString, + Computed: true, + Removed: "Use the the 'google_project_iam_policy' resource to define policies for a Google Project", }, "number": &schema.Schema{ Type: schema.TypeString, @@ -102,27 +84,6 @@ func resourceGoogleProjectCreate(d *schema.ResourceData, meta interface{}) error var pid string var err error pid = d.Get("project_id").(string) - if pid == "" { - pid, err = getProject(d, config) - if err != nil { - return fmt.Errorf("Error getting project ID: %v", err) - } - if pid == "" { - return fmt.Errorf("'project_id' must be set in the config") - } - } - - // we need to check if name and org_id are set, and throw an error if they aren't - // we can't just set these as required on the object, however, as that would break - // all configs that used previous iterations of the resource. - // TODO(paddy): remove this for 0.9 and set these attributes as required. - name, org_id := d.Get("name").(string), d.Get("org_id").(string) - if name == "" { - return fmt.Errorf("`name` must be set in the config if you're creating a project.") - } - if org_id == "" { - return fmt.Errorf("`org_id` must be set in the config if you're creating a project.") - } log.Printf("[DEBUG]: Creating new project %q", pid) project := &cloudresourcemanager.Project{ @@ -147,37 +108,6 @@ func resourceGoogleProjectCreate(d *schema.ResourceData, meta interface{}) error return waitErr } - // Apply the IAM policy if it is set - if pString, ok := d.GetOk("policy_data"); ok { - // The policy string is just a marshaled cloudresourcemanager.Policy. - // Unmarshal it to a struct. - var policy cloudresourcemanager.Policy - if err := json.Unmarshal([]byte(pString.(string)), &policy); err != nil { - return err - } - log.Printf("[DEBUG] Got policy from config: %#v", policy.Bindings) - - // Retrieve existing IAM policy from project. This will be merged - // with the policy defined here. 
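
The block being removed here reconciled IAM policies inline by collapsing bindings into a role-to-members map, merging old and new, and rebuilding the binding list; that responsibility now lives with the `google_project_iam_policy` resource. Below is a minimal, self-contained sketch of that merge idea only. The `Binding` struct stands in for `cloudresourcemanager.Binding`, and both helpers are illustrative stand-ins, not the provider's actual functions.

package main

import "fmt"

// Binding stands in for cloudresourcemanager.Binding in this sketch.
type Binding struct {
	Role    string
	Members []string
}

// rolesToMembers collapses bindings into a role -> set-of-members map,
// deduplicating roles and members along the way.
func rolesToMembers(bindings []Binding) map[string]map[string]bool {
	m := map[string]map[string]bool{}
	for _, b := range bindings {
		if m[b.Role] == nil {
			m[b.Role] = map[string]bool{}
		}
		for _, member := range b.Members {
			m[b.Role][member] = true
		}
	}
	return m
}

// mergeBindings rebuilds a deduplicated binding list from the map form.
func mergeBindings(bindings []Binding) []Binding {
	var out []Binding
	for role, members := range rolesToMembers(bindings) {
		b := Binding{Role: role}
		for member := range members {
			b.Members = append(b.Members, member)
		}
		out = append(out, b)
	}
	return out
}

func main() {
	merged := mergeBindings([]Binding{
		{Role: "roles/viewer", Members: []string{"user:a@example.com"}},
		{Role: "roles/viewer", Members: []string{"user:b@example.com"}},
	})
	fmt.Println(merged) // one roles/viewer binding containing both members
}
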
- p, err := getProjectIamPolicy(pid, config) - if err != nil { - return err - } - log.Printf("[DEBUG] Got existing bindings from project: %#v", p.Bindings) - - // Merge the existing policy bindings with those defined in this manifest. - p.Bindings = mergeBindings(append(p.Bindings, policy.Bindings...)) - - // Apply the merged policy - log.Printf("[DEBUG] Setting new policy for project: %#v", p) - _, err = config.clientResourceManager.Projects.SetIamPolicy(pid, - &cloudresourcemanager.SetIamPolicyRequest{Policy: p}).Do() - - if err != nil { - return fmt.Errorf("Error applying IAM policy for project %q: %s", pid, err) - } - } - // Set the billing account if v, ok := d.GetOk("billing_account"); ok { name := v.(string) @@ -242,6 +172,7 @@ func resourceGoogleProjectRead(d *schema.ResourceData, meta interface{}) error { func prefixedProject(pid string) string { return "projects/" + pid } + func resourceGoogleProjectUpdate(d *schema.ResourceData, meta interface{}) error { config := meta.(*Config) pid := d.Id() @@ -282,7 +213,7 @@ func resourceGoogleProjectUpdate(d *schema.ResourceData, meta interface{}) error return fmt.Errorf("Error updating billing account %q for project %q: %v", name, prefixedProject(pid), err) } } - return updateProjectIamPolicy(d, config, pid) + return nil } func resourceGoogleProjectDelete(d *schema.ResourceData, meta interface{}) error { @@ -298,97 +229,3 @@ func resourceGoogleProjectDelete(d *schema.ResourceData, meta interface{}) error d.SetId("") return nil } - -func updateProjectIamPolicy(d *schema.ResourceData, config *Config, pid string) error { - // Policy has changed - if ok := d.HasChange("policy_data"); ok { - // The policy string is just a marshaled cloudresourcemanager.Policy. - // Unmarshal it to a struct that contains the old and new policies - oldP, newP := d.GetChange("policy_data") - oldPString := oldP.(string) - newPString := newP.(string) - - // JSON Unmarshaling would fail - if oldPString == "" { - oldPString = "{}" - } - if newPString == "" { - newPString = "{}" - } - - log.Printf("[DEBUG]: Old policy: %q\nNew policy: %q", oldPString, newPString) - - var oldPolicy, newPolicy cloudresourcemanager.Policy - if err := json.Unmarshal([]byte(newPString), &newPolicy); err != nil { - return err - } - if err := json.Unmarshal([]byte(oldPString), &oldPolicy); err != nil { - return err - } - - // Find any Roles and Members that were removed (i.e., those that are present - // in the old but absent in the new - oldMap := rolesToMembersMap(oldPolicy.Bindings) - newMap := rolesToMembersMap(newPolicy.Bindings) - deleted := make(map[string]map[string]bool) - - // Get each role and its associated members in the old state - for role, members := range oldMap { - // Initialize map for role - if _, ok := deleted[role]; !ok { - deleted[role] = make(map[string]bool) - } - // The role exists in the new state - if _, ok := newMap[role]; ok { - // Check each memeber - for member, _ := range members { - // Member does not exist in new state, so it was deleted - if _, ok = newMap[role][member]; !ok { - deleted[role][member] = true - } - } - } else { - // This indicates an entire role was deleted. Mark all members - // for delete. - for member, _ := range members { - deleted[role][member] = true - } - } - } - log.Printf("[DEBUG] Roles and Members to be deleted: %#v", deleted) - - // Retrieve existing IAM policy from project. 
This will be merged - // with the policy in the current state - // TODO(evanbrown): Add an 'authoritative' flag that allows policy - // in manifest to overwrite existing policy. - p, err := getProjectIamPolicy(pid, config) - if err != nil { - return err - } - log.Printf("[DEBUG] Got existing bindings from project: %#v", p.Bindings) - - // Merge existing policy with policy in the current state - log.Printf("[DEBUG] Merging new bindings from project: %#v", newPolicy.Bindings) - mergedBindings := mergeBindings(append(p.Bindings, newPolicy.Bindings...)) - - // Remove any roles and members that were explicitly deleted - mergedBindingsMap := rolesToMembersMap(mergedBindings) - for role, members := range deleted { - for member, _ := range members { - delete(mergedBindingsMap[role], member) - } - } - - p.Bindings = rolesToMembersBinding(mergedBindingsMap) - dump, _ := json.MarshalIndent(p.Bindings, " ", " ") - log.Printf("[DEBUG] Setting new policy for project: %#v:\n%s", p, string(dump)) - - _, err = config.clientResourceManager.Projects.SetIamPolicy(pid, - &cloudresourcemanager.SetIamPolicyRequest{Policy: p}).Do() - - if err != nil { - return fmt.Errorf("Error applying IAM policy for project %q: %s", pid, err) - } - } - return nil -} diff --git a/builtin/providers/google/resource_google_project_iam_policy.go b/builtin/providers/google/resource_google_project_iam_policy.go index cf9c87ef8a..4b2ec79b79 100644 --- a/builtin/providers/google/resource_google_project_iam_policy.go +++ b/builtin/providers/google/resource_google_project_iam_policy.go @@ -373,6 +373,8 @@ func jsonPolicyDiffSuppress(k, old, new string, d *schema.ResourceData) bool { log.Printf("[ERROR] Could not unmarshal new policy %s: %v", new, err) return false } + oldPolicy.Bindings = mergeBindings(oldPolicy.Bindings) + newPolicy.Bindings = mergeBindings(newPolicy.Bindings) if newPolicy.Etag != oldPolicy.Etag { return false } diff --git a/builtin/providers/google/resource_google_project_iam_policy_test.go b/builtin/providers/google/resource_google_project_iam_policy_test.go index 59903ca8ae..24052c9613 100644 --- a/builtin/providers/google/resource_google_project_iam_policy_test.go +++ b/builtin/providers/google/resource_google_project_iam_policy_test.go @@ -254,53 +254,99 @@ func TestAccGoogleProjectIamPolicy_basic(t *testing.T) { }) } -func testAccCheckGoogleProjectIamPolicyIsMerged(projectRes, policyRes, pid string) resource.TestCheckFunc { +// Test that a non-collapsed IAM policy doesn't perpetually diff +func TestAccGoogleProjectIamPolicy_expanded(t *testing.T) { + pid := "terraform-" + acctest.RandString(10) + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccGoogleProjectAssociatePolicyExpanded(pid, pname, org), + Check: resource.ComposeTestCheckFunc( + testAccCheckGoogleProjectIamPolicyExists("google_project_iam_policy.acceptance", "data.google_iam_policy.expanded", pid), + ), + }, + }, + }) +} + +func getStatePrimaryResource(s *terraform.State, res, expectedID string) (*terraform.InstanceState, error) { + // Get the project resource + resource, ok := s.RootModule().Resources[res] + if !ok { + return nil, fmt.Errorf("Not found: %s", res) + } + if resource.Primary.Attributes["id"] != expectedID && expectedID != "" { + return nil, fmt.Errorf("Expected project %q to match ID %q in state", resource.Primary.ID, expectedID) + } + return resource.Primary, nil +} + +func 
getGoogleProjectIamPolicyFromResource(resource *terraform.InstanceState) (cloudresourcemanager.Policy, error) { + var p cloudresourcemanager.Policy + ps, ok := resource.Attributes["policy_data"] + if !ok { + return p, fmt.Errorf("Resource %q did not have a 'policy_data' attribute. Attributes were %#v", resource.ID, resource.Attributes) + } + if err := json.Unmarshal([]byte(ps), &p); err != nil { + return p, fmt.Errorf("Could not unmarshal %s:\n: %v", ps, err) + } + return p, nil +} + +func getGoogleProjectIamPolicyFromState(s *terraform.State, res, expectedID string) (cloudresourcemanager.Policy, error) { + project, err := getStatePrimaryResource(s, res, expectedID) + if err != nil { + return cloudresourcemanager.Policy{}, err + } + return getGoogleProjectIamPolicyFromResource(project) +} + +func compareBindings(a, b []*cloudresourcemanager.Binding) bool { + a = mergeBindings(a) + b = mergeBindings(b) + sort.Sort(sortableBindings(a)) + sort.Sort(sortableBindings(b)) + return reflect.DeepEqual(derefBindings(a), derefBindings(b)) +} + +func testAccCheckGoogleProjectIamPolicyExists(projectRes, policyRes, pid string) resource.TestCheckFunc { return func(s *terraform.State) error { - // Get the project resource - project, ok := s.RootModule().Resources[projectRes] - if !ok { - return fmt.Errorf("Not found: %s", projectRes) + projectPolicy, err := getGoogleProjectIamPolicyFromState(s, projectRes, pid) + if err != nil { + return fmt.Errorf("Error retrieving IAM policy for project from state: %s", err) } - // The project ID should match the config's project ID - if project.Primary.ID != pid { - return fmt.Errorf("Expected project %q to match ID %q in state", pid, project.Primary.ID) - } - - var projectP, policyP cloudresourcemanager.Policy - // The project should have a policy - ps, ok := project.Primary.Attributes["policy_data"] - if !ok { - return fmt.Errorf("Project resource %q did not have a 'policy_data' attribute. Attributes were %#v", project.Primary.Attributes["id"], project.Primary.Attributes) - } - if err := json.Unmarshal([]byte(ps), &projectP); err != nil { - return fmt.Errorf("Could not unmarshal %s:\n: %v", ps, err) - } - - // The data policy resource should have a policy - policy, ok := s.RootModule().Resources[policyRes] - if !ok { - return fmt.Errorf("Not found: %s", policyRes) - } - ps, ok = policy.Primary.Attributes["policy_data"] - if !ok { - return fmt.Errorf("Data policy resource %q did not have a 'policy_data' attribute. 
Attributes were %#v", policy.Primary.Attributes["id"], project.Primary.Attributes) - } - if err := json.Unmarshal([]byte(ps), &policyP); err != nil { - return err + policyPolicy, err := getGoogleProjectIamPolicyFromState(s, policyRes, "") + if err != nil { + return fmt.Errorf("Error retrieving IAM policy for data_policy from state: %s", err) } // The bindings in both policies should be identical - sort.Sort(sortableBindings(projectP.Bindings)) - sort.Sort(sortableBindings(policyP.Bindings)) - if !reflect.DeepEqual(derefBindings(projectP.Bindings), derefBindings(policyP.Bindings)) { - return fmt.Errorf("Project and data source policies do not match: project policy is %+v, data resource policy is %+v", derefBindings(projectP.Bindings), derefBindings(policyP.Bindings)) + if !compareBindings(projectPolicy.Bindings, policyPolicy.Bindings) { + return fmt.Errorf("Project and data source policies do not match: project policy is %+v, data resource policy is %+v", derefBindings(projectPolicy.Bindings), derefBindings(policyPolicy.Bindings)) + } + return nil + } +} + +func testAccCheckGoogleProjectIamPolicyIsMerged(projectRes, policyRes, pid string) resource.TestCheckFunc { + return func(s *terraform.State) error { + err := testAccCheckGoogleProjectIamPolicyExists(projectRes, policyRes, pid)(s) + if err != nil { + return err + } + + projectPolicy, err := getGoogleProjectIamPolicyFromState(s, projectRes, pid) + if err != nil { + return fmt.Errorf("Error retrieving IAM policy for project from state: %s", err) } // Merge the project policy in Terraform state with the policy the project had before the config was applied - expected := make([]*cloudresourcemanager.Binding, 0) + var expected []*cloudresourcemanager.Binding expected = append(expected, originalPolicy.Bindings...) - expected = append(expected, projectP.Bindings...) - expectedM := mergeBindings(expected) + expected = append(expected, projectPolicy.Bindings...) 
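
The new `compareBindings` helper above makes the state-vs-API check insensitive to binding order and to duplicate roles by merging and sorting both sides before `reflect.DeepEqual`. A toy, self-contained version of that normalize-then-compare step follows; the `Binding` struct and `normalize` helper are illustrative stand-ins for the dereferenced `cloudresourcemanager.Binding` values and the test file's own sorting machinery.

package main

import (
	"fmt"
	"reflect"
	"sort"
)

// Binding stands in for a dereferenced cloudresourcemanager.Binding.
type Binding struct {
	Role    string
	Members []string
}

// normalize sorts members within each binding and bindings by role so two
// semantically identical policies compare equal under reflect.DeepEqual.
// Members are copied before sorting so the inputs are left untouched.
func normalize(bs []Binding) []Binding {
	out := append([]Binding(nil), bs...)
	for i := range out {
		ms := append([]string(nil), out[i].Members...)
		sort.Strings(ms)
		out[i].Members = ms
	}
	sort.Slice(out, func(i, j int) bool { return out[i].Role < out[j].Role })
	return out
}

func main() {
	a := []Binding{{Role: "roles/editor", Members: []string{"user:b", "user:a"}}}
	b := []Binding{{Role: "roles/editor", Members: []string{"user:a", "user:b"}}}
	fmt.Println(reflect.DeepEqual(normalize(a), normalize(b))) // true
}
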
+ expected = mergeBindings(expected) // Retrieve the actual policy from the project c := testAccProvider.Meta().(*Config) @@ -308,13 +354,9 @@ func testAccCheckGoogleProjectIamPolicyIsMerged(projectRes, policyRes, pid strin if err != nil { return fmt.Errorf("Failed to retrieve IAM Policy for project %q: %s", pid, err) } - actualM := mergeBindings(actual.Bindings) - - sort.Sort(sortableBindings(actualM)) - sort.Sort(sortableBindings(expectedM)) // The bindings should match, indicating the policy was successfully applied and merged - if !reflect.DeepEqual(derefBindings(actualM), derefBindings(expectedM)) { - return fmt.Errorf("Actual and expected project policies do not match: actual policy is %+v, expected policy is %+v", derefBindings(actualM), derefBindings(expectedM)) + if !compareBindings(actual.Bindings, expected) { + return fmt.Errorf("Actual and expected project policies do not match: actual policy is %+v, expected policy is %+v", derefBindings(actual.Bindings), derefBindings(expected)) } return nil @@ -591,8 +633,8 @@ func testAccGoogleProjectAssociatePolicyBasic(pid, name, org string) string { return fmt.Sprintf(` resource "google_project" "acceptance" { project_id = "%s" - name = "%s" - org_id = "%s" + name = "%s" + org_id = "%s" } resource "google_project_iam_policy" "acceptance" { project = "${google_project.acceptance.id}" @@ -620,8 +662,8 @@ func testAccGoogleProject_create(pid, name, org string) string { return fmt.Sprintf(` resource "google_project" "acceptance" { project_id = "%s" - name = "%s" - org_id = "%s" + name = "%s" + org_id = "%s" }`, pid, name, org) } @@ -629,8 +671,37 @@ func testAccGoogleProject_createBilling(pid, name, org, billing string) string { return fmt.Sprintf(` resource "google_project" "acceptance" { project_id = "%s" - name = "%s" - org_id = "%s" - billing_account = "%s" + name = "%s" + org_id = "%s" + billing_account = "%s" }`, pid, name, org, billing) } + +func testAccGoogleProjectAssociatePolicyExpanded(pid, name, org string) string { + return fmt.Sprintf(` +resource "google_project" "acceptance" { + project_id = "%s" + name = "%s" + org_id = "%s" +} +resource "google_project_iam_policy" "acceptance" { + project = "${google_project.acceptance.id}" + policy_data = "${data.google_iam_policy.expanded.policy_data}" + authoritative = false +} +data "google_iam_policy" "expanded" { + binding { + role = "roles/viewer" + members = [ + "user:paddy@carvers.co", + ] + } + + binding { + role = "roles/viewer" + members = [ + "user:paddy@hashicorp.com", + ] + } +}`, pid, name, org) +} diff --git a/builtin/providers/google/resource_google_project_test.go b/builtin/providers/google/resource_google_project_test.go index 8381cb3347..fea4c74655 100644 --- a/builtin/providers/google/resource_google_project_test.go +++ b/builtin/providers/google/resource_google_project_test.go @@ -205,44 +205,16 @@ func testAccCheckGoogleProjectHasMoreBindingsThan(pid string, count int) resourc } } -func testAccGoogleProjectImportExisting(pid string) string { - return fmt.Sprintf(` -resource "google_project" "acceptance" { - project_id = "%s" - -} -`, pid) -} - -func testAccGoogleProjectImportExistingWithIam(pid string) string { - return fmt.Sprintf(` -resource "google_project" "acceptance" { - project_id = "%v" - policy_data = "${data.google_iam_policy.admin.policy_data}" -} -data "google_iam_policy" "admin" { - binding { - role = "roles/storage.objectViewer" - members = [ - "user:evanbrown@google.com", - ] - } - binding { - role = "roles/compute.instanceAdmin" - members = [ - 
"user:evanbrown@google.com", - "user:evandbrown@gmail.com", - ] - } -}`, pid) -} - func testAccGoogleProject_toMerge(pid, name, org string) string { return fmt.Sprintf(` resource "google_project" "acceptance" { project_id = "%s" name = "%s" org_id = "%s" +} + +resource "google_project_iam_policy" "acceptance" { + project = "${google_project.acceptance.project_id}" policy_data = "${data.google_iam_policy.acceptance.policy_data}" } diff --git a/builtin/providers/heroku/resource_heroku_addon.go b/builtin/providers/heroku/resource_heroku_addon.go index 3555fbea44..ca9123514d 100644 --- a/builtin/providers/heroku/resource_heroku_addon.go +++ b/builtin/providers/heroku/resource_heroku_addon.go @@ -1,6 +1,7 @@ package heroku import ( + "context" "fmt" "log" "strings" @@ -23,18 +24,18 @@ func resourceHerokuAddon() *schema.Resource { Delete: resourceHerokuAddonDelete, Schema: map[string]*schema.Schema{ - "app": &schema.Schema{ + "app": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "plan": &schema.Schema{ + "plan": { Type: schema.TypeString, Required: true, }, - "config": &schema.Schema{ + "config": { Type: schema.TypeList, Optional: true, ForceNew: true, @@ -43,12 +44,12 @@ func resourceHerokuAddon() *schema.Resource { }, }, - "provider_id": &schema.Schema{ + "provider_id": { Type: schema.TypeString, Computed: true, }, - "config_vars": &schema.Schema{ + "config_vars": { Type: schema.TypeList, Computed: true, Elem: &schema.Schema{ @@ -66,7 +67,7 @@ func resourceHerokuAddonCreate(d *schema.ResourceData, meta interface{}) error { client := meta.(*heroku.Service) app := d.Get("app").(string) - opts := heroku.AddonCreateOpts{Plan: d.Get("plan").(string)} + opts := heroku.AddOnCreateOpts{Plan: d.Get("plan").(string)} if v := d.Get("config"); v != nil { config := make(map[string]string) @@ -80,7 +81,7 @@ func resourceHerokuAddonCreate(d *schema.ResourceData, meta interface{}) error { } log.Printf("[DEBUG] Addon create configuration: %#v, %#v", app, opts) - a, err := client.AddonCreate(app, opts) + a, err := client.AddOnCreate(context.TODO(), app, opts) if err != nil { return err } @@ -129,8 +130,8 @@ func resourceHerokuAddonUpdate(d *schema.ResourceData, meta interface{}) error { app := d.Get("app").(string) if d.HasChange("plan") { - ad, err := client.AddonUpdate( - app, d.Id(), heroku.AddonUpdateOpts{Plan: d.Get("plan").(string)}) + ad, err := client.AddOnUpdate( + context.TODO(), app, d.Id(), heroku.AddOnUpdateOpts{Plan: d.Get("plan").(string)}) if err != nil { return err } @@ -148,7 +149,7 @@ func resourceHerokuAddonDelete(d *schema.ResourceData, meta interface{}) error { log.Printf("[INFO] Deleting Addon: %s", d.Id()) // Destroy the app - err := client.AddonDelete(d.Get("app").(string), d.Id()) + _, err := client.AddOnDelete(context.TODO(), d.Get("app").(string), d.Id()) if err != nil { return fmt.Errorf("Error deleting addon: %s", err) } @@ -157,8 +158,8 @@ func resourceHerokuAddonDelete(d *schema.ResourceData, meta interface{}) error { return nil } -func resourceHerokuAddonRetrieve(app string, id string, client *heroku.Service) (*heroku.Addon, error) { - addon, err := client.AddonInfo(app, id) +func resourceHerokuAddonRetrieve(app string, id string, client *heroku.Service) (*heroku.AddOnInfoResult, error) { + addon, err := client.AddOnInfo(context.TODO(), app, id) if err != nil { return nil, fmt.Errorf("Error retrieving addon: %s", err) diff --git a/builtin/providers/heroku/resource_heroku_addon_test.go b/builtin/providers/heroku/resource_heroku_addon_test.go index 
c707e0ed63..2ff61eff19 100644 --- a/builtin/providers/heroku/resource_heroku_addon_test.go +++ b/builtin/providers/heroku/resource_heroku_addon_test.go @@ -1,6 +1,7 @@ package heroku import ( + "context" "fmt" "testing" @@ -11,7 +12,7 @@ import ( ) func TestAccHerokuAddon_Basic(t *testing.T) { - var addon heroku.Addon + var addon heroku.AddOnInfoResult appName := fmt.Sprintf("tftest-%s", acctest.RandString(10)) resource.Test(t, resource.TestCase{ @@ -19,7 +20,7 @@ func TestAccHerokuAddon_Basic(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckHerokuAddonDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccCheckHerokuAddonConfig_basic(appName), Check: resource.ComposeTestCheckFunc( testAccCheckHerokuAddonExists("heroku_addon.foobar", &addon), @@ -38,7 +39,7 @@ func TestAccHerokuAddon_Basic(t *testing.T) { // GH-198 func TestAccHerokuAddon_noPlan(t *testing.T) { - var addon heroku.Addon + var addon heroku.AddOnInfoResult appName := fmt.Sprintf("tftest-%s", acctest.RandString(10)) resource.Test(t, resource.TestCase{ @@ -46,7 +47,7 @@ func TestAccHerokuAddon_noPlan(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckHerokuAddonDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccCheckHerokuAddonConfig_no_plan(appName), Check: resource.ComposeTestCheckFunc( testAccCheckHerokuAddonExists("heroku_addon.foobar", &addon), @@ -57,7 +58,7 @@ func TestAccHerokuAddon_noPlan(t *testing.T) { "heroku_addon.foobar", "plan", "memcachier"), ), }, - resource.TestStep{ + { Config: testAccCheckHerokuAddonConfig_no_plan(appName), Check: resource.ComposeTestCheckFunc( testAccCheckHerokuAddonExists("heroku_addon.foobar", &addon), @@ -80,7 +81,7 @@ func testAccCheckHerokuAddonDestroy(s *terraform.State) error { continue } - _, err := client.AddonInfo(rs.Primary.Attributes["app"], rs.Primary.ID) + _, err := client.AddOnInfo(context.TODO(), rs.Primary.Attributes["app"], rs.Primary.ID) if err == nil { return fmt.Errorf("Addon still exists") @@ -90,7 +91,7 @@ func testAccCheckHerokuAddonDestroy(s *terraform.State) error { return nil } -func testAccCheckHerokuAddonAttributes(addon *heroku.Addon, n string) resource.TestCheckFunc { +func testAccCheckHerokuAddonAttributes(addon *heroku.AddOnInfoResult, n string) resource.TestCheckFunc { return func(s *terraform.State) error { if addon.Plan.Name != n { @@ -101,7 +102,7 @@ func testAccCheckHerokuAddonAttributes(addon *heroku.Addon, n string) resource.T } } -func testAccCheckHerokuAddonExists(n string, addon *heroku.Addon) resource.TestCheckFunc { +func testAccCheckHerokuAddonExists(n string, addon *heroku.AddOnInfoResult) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] @@ -115,7 +116,7 @@ func testAccCheckHerokuAddonExists(n string, addon *heroku.Addon) resource.TestC client := testAccProvider.Meta().(*heroku.Service) - foundAddon, err := client.AddonInfo(rs.Primary.Attributes["app"], rs.Primary.ID) + foundAddon, err := client.AddOnInfo(context.TODO(), rs.Primary.Attributes["app"], rs.Primary.ID) if err != nil { return err diff --git a/builtin/providers/heroku/resource_heroku_app.go b/builtin/providers/heroku/resource_heroku_app.go index b63be836bf..20a6c9c0d0 100644 --- a/builtin/providers/heroku/resource_heroku_app.go +++ b/builtin/providers/heroku/resource_heroku_app.go @@ -1,11 +1,12 @@ package heroku import ( + "context" "fmt" "log" "github.com/cyberdelia/heroku-go/v3" - "github.com/hashicorp/go-multierror" + multierror 
"github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform/helper/schema" ) @@ -38,7 +39,7 @@ func (a *application) Update() error { var err error if !a.Organization { - app, err := a.Client.AppInfo(a.Id) + app, err := a.Client.AppInfo(context.TODO(), a.Id) if err != nil { errs = append(errs, err) } else { @@ -50,7 +51,7 @@ func (a *application) Update() error { a.App.WebURL = app.WebURL } } else { - app, err := a.Client.OrganizationAppInfo(a.Id) + app, err := a.Client.OrganizationAppInfo(context.TODO(), a.Id) if err != nil { errs = append(errs, err) } else { @@ -90,25 +91,25 @@ func resourceHerokuApp() *schema.Resource { Delete: resourceHerokuAppDelete, Schema: map[string]*schema.Schema{ - "name": &schema.Schema{ + "name": { Type: schema.TypeString, Required: true, }, - "region": &schema.Schema{ + "region": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "stack": &schema.Schema{ + "stack": { Type: schema.TypeString, Optional: true, Computed: true, ForceNew: true, }, - "config_vars": &schema.Schema{ + "config_vars": { Type: schema.TypeList, Optional: true, Elem: &schema.Schema{ @@ -116,43 +117,43 @@ func resourceHerokuApp() *schema.Resource { }, }, - "all_config_vars": &schema.Schema{ + "all_config_vars": { Type: schema.TypeMap, Computed: true, }, - "git_url": &schema.Schema{ + "git_url": { Type: schema.TypeString, Computed: true, }, - "web_url": &schema.Schema{ + "web_url": { Type: schema.TypeString, Computed: true, }, - "heroku_hostname": &schema.Schema{ + "heroku_hostname": { Type: schema.TypeString, Computed: true, }, - "organization": &schema.Schema{ + "organization": { Type: schema.TypeList, Optional: true, ForceNew: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "name": &schema.Schema{ + "name": { Type: schema.TypeString, Required: true, }, - "locked": &schema.Schema{ + "locked": { Type: schema.TypeBool, Optional: true, }, - "personal": &schema.Schema{ + "personal": { Type: schema.TypeBool, Optional: true, }, @@ -199,7 +200,7 @@ func resourceHerokuAppCreate(d *schema.ResourceData, meta interface{}) error { } log.Printf("[DEBUG] Creating Heroku app...") - a, err := client.AppCreate(opts) + a, err := client.AppCreate(context.TODO(), opts) if err != nil { return err } @@ -263,7 +264,7 @@ func resourceHerokuOrgAppCreate(d *schema.ResourceData, meta interface{}) error } log.Printf("[DEBUG] Creating Heroku app...") - a, err := client.OrganizationAppCreate(opts) + a, err := client.OrganizationAppCreate(context.TODO(), opts) if err != nil { return err } @@ -287,7 +288,7 @@ func resourceHerokuAppRead(d *schema.ResourceData, meta interface{}) error { configVars := make(map[string]string) care := make(map[string]struct{}) for _, v := range d.Get("config_vars").([]interface{}) { - for k, _ := range v.(map[string]interface{}) { + for k := range v.(map[string]interface{}) { care[k] = struct{}{} } } @@ -347,7 +348,7 @@ func resourceHerokuAppUpdate(d *schema.ResourceData, meta interface{}) error { Name: &v, } - renamedApp, err := client.AppUpdate(d.Id(), opts) + renamedApp, err := client.AppUpdate(context.TODO(), d.Id(), opts) if err != nil { return err } @@ -380,7 +381,7 @@ func resourceHerokuAppDelete(d *schema.ResourceData, meta interface{}) error { client := meta.(*heroku.Service) log.Printf("[INFO] Deleting App: %s", d.Id()) - err := client.AppDelete(d.Id()) + _, err := client.AppDelete(context.TODO(), d.Id()) if err != nil { return fmt.Errorf("Error deleting App: %s", err) } @@ -402,13 +403,20 @@ func resourceHerokuAppRetrieve(id string, 
organization bool, client *heroku.Serv } func retrieveConfigVars(id string, client *heroku.Service) (map[string]string, error) { - vars, err := client.ConfigVarInfo(id) + vars, err := client.ConfigVarInfoForApp(context.TODO(), id) if err != nil { return nil, err } - return vars, nil + nonNullVars := map[string]string{} + for k, v := range vars { + if v != nil { + nonNullVars[k] = *v + } + } + + return nonNullVars, nil } // Updates the config vars for from an expanded configuration. @@ -421,7 +429,7 @@ func updateConfigVars( for _, v := range o { if v != nil { - for k, _ := range v.(map[string]interface{}) { + for k := range v.(map[string]interface{}) { vars[k] = nil } } @@ -436,7 +444,7 @@ func updateConfigVars( } log.Printf("[INFO] Updating config vars: *%#v", vars) - if _, err := client.ConfigVarUpdate(id, vars); err != nil { + if _, err := client.ConfigVarUpdate(context.TODO(), id, vars); err != nil { return fmt.Errorf("Error updating config vars: %s", err) } diff --git a/builtin/providers/heroku/resource_heroku_app_test.go b/builtin/providers/heroku/resource_heroku_app_test.go index cc3dd08eec..caeade8f2c 100644 --- a/builtin/providers/heroku/resource_heroku_app_test.go +++ b/builtin/providers/heroku/resource_heroku_app_test.go @@ -1,6 +1,7 @@ package heroku import ( + "context" "fmt" "os" "testing" @@ -12,7 +13,7 @@ import ( ) func TestAccHerokuApp_Basic(t *testing.T) { - var app heroku.App + var app heroku.AppInfoResult appName := fmt.Sprintf("tftest-%s", acctest.RandString(10)) resource.Test(t, resource.TestCase{ @@ -20,7 +21,7 @@ func TestAccHerokuApp_Basic(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckHerokuAppDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccCheckHerokuAppConfig_basic(appName), Check: resource.ComposeTestCheckFunc( testAccCheckHerokuAppExists("heroku_app.foobar", &app), @@ -36,7 +37,7 @@ func TestAccHerokuApp_Basic(t *testing.T) { } func TestAccHerokuApp_NameChange(t *testing.T) { - var app heroku.App + var app heroku.AppInfoResult appName := fmt.Sprintf("tftest-%s", acctest.RandString(10)) appName2 := fmt.Sprintf("%s-v2", appName) @@ -45,7 +46,7 @@ func TestAccHerokuApp_NameChange(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckHerokuAppDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccCheckHerokuAppConfig_basic(appName), Check: resource.ComposeTestCheckFunc( testAccCheckHerokuAppExists("heroku_app.foobar", &app), @@ -56,7 +57,7 @@ func TestAccHerokuApp_NameChange(t *testing.T) { "heroku_app.foobar", "config_vars.0.FOO", "bar"), ), }, - resource.TestStep{ + { Config: testAccCheckHerokuAppConfig_updated(appName2), Check: resource.ComposeTestCheckFunc( testAccCheckHerokuAppExists("heroku_app.foobar", &app), @@ -74,7 +75,7 @@ func TestAccHerokuApp_NameChange(t *testing.T) { } func TestAccHerokuApp_NukeVars(t *testing.T) { - var app heroku.App + var app heroku.AppInfoResult appName := fmt.Sprintf("tftest-%s", acctest.RandString(10)) resource.Test(t, resource.TestCase{ @@ -82,7 +83,7 @@ func TestAccHerokuApp_NukeVars(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckHerokuAppDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccCheckHerokuAppConfig_basic(appName), Check: resource.ComposeTestCheckFunc( testAccCheckHerokuAppExists("heroku_app.foobar", &app), @@ -93,7 +94,7 @@ func TestAccHerokuApp_NukeVars(t *testing.T) { "heroku_app.foobar", "config_vars.0.FOO", "bar"), ), }, - resource.TestStep{ + { Config: 
testAccCheckHerokuAppConfig_no_vars(appName), Check: resource.ComposeTestCheckFunc( testAccCheckHerokuAppExists("heroku_app.foobar", &app), @@ -123,7 +124,7 @@ func TestAccHerokuApp_Organization(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckHerokuAppDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccCheckHerokuAppConfig_organization(appName, org), Check: resource.ComposeTestCheckFunc( testAccCheckHerokuAppExistsOrg("heroku_app.foobar", &app), @@ -142,7 +143,7 @@ func testAccCheckHerokuAppDestroy(s *terraform.State) error { continue } - _, err := client.AppInfo(rs.Primary.ID) + _, err := client.AppInfo(context.TODO(), rs.Primary.ID) if err == nil { return fmt.Errorf("App still exists") @@ -152,7 +153,7 @@ func testAccCheckHerokuAppDestroy(s *terraform.State) error { return nil } -func testAccCheckHerokuAppAttributes(app *heroku.App, appName string) resource.TestCheckFunc { +func testAccCheckHerokuAppAttributes(app *heroku.AppInfoResult, appName string) resource.TestCheckFunc { return func(s *terraform.State) error { client := testAccProvider.Meta().(*heroku.Service) @@ -168,12 +169,12 @@ func testAccCheckHerokuAppAttributes(app *heroku.App, appName string) resource.T return fmt.Errorf("Bad name: %s", app.Name) } - vars, err := client.ConfigVarInfo(app.Name) + vars, err := client.ConfigVarInfoForApp(context.TODO(), app.Name) if err != nil { return err } - if vars["FOO"] != "bar" { + if vars["FOO"] == nil || *vars["FOO"] != "bar" { return fmt.Errorf("Bad config vars: %v", vars) } @@ -181,7 +182,7 @@ func testAccCheckHerokuAppAttributes(app *heroku.App, appName string) resource.T } } -func testAccCheckHerokuAppAttributesUpdated(app *heroku.App, appName string) resource.TestCheckFunc { +func testAccCheckHerokuAppAttributesUpdated(app *heroku.AppInfoResult, appName string) resource.TestCheckFunc { return func(s *terraform.State) error { client := testAccProvider.Meta().(*heroku.Service) @@ -189,17 +190,17 @@ func testAccCheckHerokuAppAttributesUpdated(app *heroku.App, appName string) res return fmt.Errorf("Bad name: %s", app.Name) } - vars, err := client.ConfigVarInfo(app.Name) + vars, err := client.ConfigVarInfoForApp(context.TODO(), app.Name) if err != nil { return err } // Make sure we kept the old one - if vars["FOO"] != "bing" { + if vars["FOO"] == nil || *vars["FOO"] != "bing" { return fmt.Errorf("Bad config vars: %v", vars) } - if vars["BAZ"] != "bar" { + if vars["BAZ"] == nil || *vars["BAZ"] != "bar" { return fmt.Errorf("Bad config vars: %v", vars) } @@ -208,7 +209,7 @@ func testAccCheckHerokuAppAttributesUpdated(app *heroku.App, appName string) res } } -func testAccCheckHerokuAppAttributesNoVars(app *heroku.App, appName string) resource.TestCheckFunc { +func testAccCheckHerokuAppAttributesNoVars(app *heroku.AppInfoResult, appName string) resource.TestCheckFunc { return func(s *terraform.State) error { client := testAccProvider.Meta().(*heroku.Service) @@ -216,7 +217,7 @@ func testAccCheckHerokuAppAttributesNoVars(app *heroku.App, appName string) reso return fmt.Errorf("Bad name: %s", app.Name) } - vars, err := client.ConfigVarInfo(app.Name) + vars, err := client.ConfigVarInfoForApp(context.TODO(), app.Name) if err != nil { return err } @@ -249,12 +250,12 @@ func testAccCheckHerokuAppAttributesOrg(app *heroku.OrganizationApp, appName str return fmt.Errorf("Bad org: %v", app.Organization) } - vars, err := client.ConfigVarInfo(app.Name) + vars, err := client.ConfigVarInfoForApp(context.TODO(), app.Name) if err != nil { return err } - if 
vars["FOO"] != "bar" { + if vars["FOO"] == nil || *vars["FOO"] != "bar" { return fmt.Errorf("Bad config vars: %v", vars) } @@ -262,7 +263,7 @@ func testAccCheckHerokuAppAttributesOrg(app *heroku.OrganizationApp, appName str } } -func testAccCheckHerokuAppExists(n string, app *heroku.App) resource.TestCheckFunc { +func testAccCheckHerokuAppExists(n string, app *heroku.AppInfoResult) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] @@ -276,7 +277,7 @@ func testAccCheckHerokuAppExists(n string, app *heroku.App) resource.TestCheckFu client := testAccProvider.Meta().(*heroku.Service) - foundApp, err := client.AppInfo(rs.Primary.ID) + foundApp, err := client.AppInfo(context.TODO(), rs.Primary.ID) if err != nil { return err @@ -306,7 +307,7 @@ func testAccCheckHerokuAppExistsOrg(n string, app *heroku.OrganizationApp) resou client := testAccProvider.Meta().(*heroku.Service) - foundApp, err := client.OrganizationAppInfo(rs.Primary.ID) + foundApp, err := client.OrganizationAppInfo(context.TODO(), rs.Primary.ID) if err != nil { return err diff --git a/builtin/providers/heroku/resource_heroku_cert.go b/builtin/providers/heroku/resource_heroku_cert.go index d6c7a94901..a6390e4e20 100644 --- a/builtin/providers/heroku/resource_heroku_cert.go +++ b/builtin/providers/heroku/resource_heroku_cert.go @@ -1,6 +1,7 @@ package heroku import ( + "context" "fmt" "log" @@ -16,28 +17,28 @@ func resourceHerokuCert() *schema.Resource { Delete: resourceHerokuCertDelete, Schema: map[string]*schema.Schema{ - "app": &schema.Schema{ + "app": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "certificate_chain": &schema.Schema{ + "certificate_chain": { Type: schema.TypeString, Required: true, }, - "private_key": &schema.Schema{ + "private_key": { Type: schema.TypeString, Required: true, }, - "cname": &schema.Schema{ + "cname": { Type: schema.TypeString, Computed: true, }, - "name": &schema.Schema{ + "name": { Type: schema.TypeString, Computed: true, }, @@ -56,7 +57,7 @@ func resourceHerokuCertCreate(d *schema.ResourceData, meta interface{}) error { PrivateKey: d.Get("private_key").(string)} log.Printf("[DEBUG] SSL Certificate create configuration: %#v, %#v", app, opts) - a, err := client.SSLEndpointCreate(app, opts) + a, err := client.SSLEndpointCreate(context.TODO(), app, opts) if err != nil { return fmt.Errorf("Error creating SSL endpoint: %s", err) } @@ -92,7 +93,7 @@ func resourceHerokuCertUpdate(d *schema.ResourceData, meta interface{}) error { preprocess := true rollback := false ad, err := client.SSLEndpointUpdate( - app, d.Id(), heroku.SSLEndpointUpdateOpts{ + context.TODO(), app, d.Id(), heroku.SSLEndpointUpdateOpts{ CertificateChain: d.Get("certificate_chain").(*string), Preprocess: &preprocess, PrivateKey: d.Get("private_key").(*string), @@ -114,7 +115,7 @@ func resourceHerokuCertDelete(d *schema.ResourceData, meta interface{}) error { log.Printf("[INFO] Deleting SSL Cert: %s", d.Id()) // Destroy the app - err := client.SSLEndpointDelete(d.Get("app").(string), d.Id()) + _, err := client.SSLEndpointDelete(context.TODO(), d.Get("app").(string), d.Id()) if err != nil { return fmt.Errorf("Error deleting SSL Cert: %s", err) } @@ -123,8 +124,8 @@ func resourceHerokuCertDelete(d *schema.ResourceData, meta interface{}) error { return nil } -func resourceHerokuSSLCertRetrieve(app string, id string, client *heroku.Service) (*heroku.SSLEndpoint, error) { - addon, err := client.SSLEndpointInfo(app, id) +func resourceHerokuSSLCertRetrieve(app string, id 
string, client *heroku.Service) (*heroku.SSLEndpointInfoResult, error) { + addon, err := client.SSLEndpointInfo(context.TODO(), app, id) if err != nil { return nil, fmt.Errorf("Error retrieving SSL Cert: %s", err) diff --git a/builtin/providers/heroku/resource_heroku_cert_test.go b/builtin/providers/heroku/resource_heroku_cert_test.go index 1228c3d2ee..e40fe4b03c 100644 --- a/builtin/providers/heroku/resource_heroku_cert_test.go +++ b/builtin/providers/heroku/resource_heroku_cert_test.go @@ -1,6 +1,7 @@ package heroku import ( + "context" "fmt" "io/ioutil" "os" @@ -13,7 +14,7 @@ import ( ) func TestAccHerokuCert_Basic(t *testing.T) { - var endpoint heroku.SSLEndpoint + var endpoint heroku.SSLEndpointInfoResult wd, _ := os.Getwd() certificateChainFile := wd + "/test-fixtures/terraform.cert" certificateChainBytes, _ := ioutil.ReadFile(certificateChainFile) @@ -43,7 +44,7 @@ func TestAccHerokuCert_Basic(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckHerokuCertDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccCheckHerokuCertConfig_basic, Check: resource.ComposeTestCheckFunc( testAccCheckHerokuCertExists("heroku_cert.ssl_certificate", &endpoint), @@ -65,7 +66,7 @@ func testAccCheckHerokuCertDestroy(s *terraform.State) error { continue } - _, err := client.SSLEndpointInfo(rs.Primary.Attributes["app"], rs.Primary.ID) + _, err := client.SSLEndpointInfo(context.TODO(), rs.Primary.Attributes["app"], rs.Primary.ID) if err == nil { return fmt.Errorf("Cerfificate still exists") @@ -75,7 +76,7 @@ func testAccCheckHerokuCertDestroy(s *terraform.State) error { return nil } -func testAccCheckHerokuCertificateChain(endpoint *heroku.SSLEndpoint, chain string) resource.TestCheckFunc { +func testAccCheckHerokuCertificateChain(endpoint *heroku.SSLEndpointInfoResult, chain string) resource.TestCheckFunc { return func(s *terraform.State) error { if endpoint.CertificateChain != chain { @@ -86,7 +87,7 @@ func testAccCheckHerokuCertificateChain(endpoint *heroku.SSLEndpoint, chain stri } } -func testAccCheckHerokuCertExists(n string, endpoint *heroku.SSLEndpoint) resource.TestCheckFunc { +func testAccCheckHerokuCertExists(n string, endpoint *heroku.SSLEndpointInfoResult) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] @@ -100,7 +101,7 @@ func testAccCheckHerokuCertExists(n string, endpoint *heroku.SSLEndpoint) resour client := testAccProvider.Meta().(*heroku.Service) - foundEndpoint, err := client.SSLEndpointInfo(rs.Primary.Attributes["app"], rs.Primary.ID) + foundEndpoint, err := client.SSLEndpointInfo(context.TODO(), rs.Primary.Attributes["app"], rs.Primary.ID) if err != nil { return err diff --git a/builtin/providers/heroku/resource_heroku_domain.go b/builtin/providers/heroku/resource_heroku_domain.go index f45d6795a0..da2a8ab17f 100644 --- a/builtin/providers/heroku/resource_heroku_domain.go +++ b/builtin/providers/heroku/resource_heroku_domain.go @@ -1,6 +1,7 @@ package heroku import ( + "context" "fmt" "log" @@ -15,19 +16,19 @@ func resourceHerokuDomain() *schema.Resource { Delete: resourceHerokuDomainDelete, Schema: map[string]*schema.Schema{ - "hostname": &schema.Schema{ + "hostname": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "app": &schema.Schema{ + "app": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "cname": &schema.Schema{ + "cname": { Type: schema.TypeString, Computed: true, }, @@ -43,7 +44,7 @@ func resourceHerokuDomainCreate(d *schema.ResourceData, meta 
interface{}) error log.Printf("[DEBUG] Domain create configuration: %#v, %#v", app, hostname) - do, err := client.DomainCreate(app, heroku.DomainCreateOpts{Hostname: hostname}) + do, err := client.DomainCreate(context.TODO(), app, heroku.DomainCreateOpts{Hostname: hostname}) if err != nil { return err } @@ -62,7 +63,7 @@ func resourceHerokuDomainDelete(d *schema.ResourceData, meta interface{}) error log.Printf("[INFO] Deleting Domain: %s", d.Id()) // Destroy the domain - err := client.DomainDelete(d.Get("app").(string), d.Id()) + _, err := client.DomainDelete(context.TODO(), d.Get("app").(string), d.Id()) if err != nil { return fmt.Errorf("Error deleting domain: %s", err) } @@ -74,7 +75,7 @@ func resourceHerokuDomainRead(d *schema.ResourceData, meta interface{}) error { client := meta.(*heroku.Service) app := d.Get("app").(string) - do, err := client.DomainInfo(app, d.Id()) + do, err := client.DomainInfo(context.TODO(), app, d.Id()) if err != nil { return fmt.Errorf("Error retrieving domain: %s", err) } diff --git a/builtin/providers/heroku/resource_heroku_domain_test.go b/builtin/providers/heroku/resource_heroku_domain_test.go index 2d600b4e85..9e1abe8623 100644 --- a/builtin/providers/heroku/resource_heroku_domain_test.go +++ b/builtin/providers/heroku/resource_heroku_domain_test.go @@ -1,6 +1,7 @@ package heroku import ( + "context" "fmt" "testing" @@ -11,7 +12,7 @@ import ( ) func TestAccHerokuDomain_Basic(t *testing.T) { - var domain heroku.Domain + var domain heroku.DomainInfoResult appName := fmt.Sprintf("tftest-%s", acctest.RandString(10)) resource.Test(t, resource.TestCase{ @@ -19,7 +20,7 @@ func TestAccHerokuDomain_Basic(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckHerokuDomainDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccCheckHerokuDomainConfig_basic(appName), Check: resource.ComposeTestCheckFunc( testAccCheckHerokuDomainExists("heroku_domain.foobar", &domain), @@ -45,7 +46,7 @@ func testAccCheckHerokuDomainDestroy(s *terraform.State) error { continue } - _, err := client.DomainInfo(rs.Primary.Attributes["app"], rs.Primary.ID) + _, err := client.DomainInfo(context.TODO(), rs.Primary.Attributes["app"], rs.Primary.ID) if err == nil { return fmt.Errorf("Domain still exists") @@ -55,7 +56,7 @@ func testAccCheckHerokuDomainDestroy(s *terraform.State) error { return nil } -func testAccCheckHerokuDomainAttributes(Domain *heroku.Domain) resource.TestCheckFunc { +func testAccCheckHerokuDomainAttributes(Domain *heroku.DomainInfoResult) resource.TestCheckFunc { return func(s *terraform.State) error { if Domain.Hostname != "terraform.example.com" { @@ -66,7 +67,7 @@ func testAccCheckHerokuDomainAttributes(Domain *heroku.Domain) resource.TestChec } } -func testAccCheckHerokuDomainExists(n string, Domain *heroku.Domain) resource.TestCheckFunc { +func testAccCheckHerokuDomainExists(n string, Domain *heroku.DomainInfoResult) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] @@ -80,7 +81,7 @@ func testAccCheckHerokuDomainExists(n string, Domain *heroku.Domain) resource.Te client := testAccProvider.Meta().(*heroku.Service) - foundDomain, err := client.DomainInfo(rs.Primary.Attributes["app"], rs.Primary.ID) + foundDomain, err := client.DomainInfo(context.TODO(), rs.Primary.Attributes["app"], rs.Primary.ID) if err != nil { return err diff --git a/builtin/providers/heroku/resource_heroku_drain.go b/builtin/providers/heroku/resource_heroku_drain.go index 6735cdb0fb..38b768d5ed 100644 --- 
a/builtin/providers/heroku/resource_heroku_drain.go +++ b/builtin/providers/heroku/resource_heroku_drain.go @@ -1,6 +1,7 @@ package heroku import ( + "context" "fmt" "log" "strings" @@ -18,19 +19,19 @@ func resourceHerokuDrain() *schema.Resource { Delete: resourceHerokuDrainDelete, Schema: map[string]*schema.Schema{ - "url": &schema.Schema{ + "url": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "app": &schema.Schema{ + "app": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "token": &schema.Schema{ + "token": { Type: schema.TypeString, Computed: true, }, @@ -48,9 +49,9 @@ func resourceHerokuDrainCreate(d *schema.ResourceData, meta interface{}) error { log.Printf("[DEBUG] Drain create configuration: %#v, %#v", app, url) - var dr *heroku.LogDrain + var dr *heroku.LogDrainCreateResult err := resource.Retry(2*time.Minute, func() *resource.RetryError { - d, err := client.LogDrainCreate(app, heroku.LogDrainCreateOpts{URL: url}) + d, err := client.LogDrainCreate(context.TODO(), app, heroku.LogDrainCreateOpts{URL: url}) if err != nil { if strings.Contains(err.Error(), retryableError) { return resource.RetryableError(err) @@ -78,7 +79,7 @@ func resourceHerokuDrainDelete(d *schema.ResourceData, meta interface{}) error { log.Printf("[INFO] Deleting drain: %s", d.Id()) // Destroy the drain - err := client.LogDrainDelete(d.Get("app").(string), d.Id()) + _, err := client.LogDrainDelete(context.TODO(), d.Get("app").(string), d.Id()) if err != nil { return fmt.Errorf("Error deleting drain: %s", err) } @@ -89,7 +90,7 @@ func resourceHerokuDrainDelete(d *schema.ResourceData, meta interface{}) error { func resourceHerokuDrainRead(d *schema.ResourceData, meta interface{}) error { client := meta.(*heroku.Service) - dr, err := client.LogDrainInfo(d.Get("app").(string), d.Id()) + dr, err := client.LogDrainInfo(context.TODO(), d.Get("app").(string), d.Id()) if err != nil { return fmt.Errorf("Error retrieving drain: %s", err) } diff --git a/builtin/providers/heroku/resource_heroku_drain_test.go b/builtin/providers/heroku/resource_heroku_drain_test.go index 60db1db6ee..123160bd69 100644 --- a/builtin/providers/heroku/resource_heroku_drain_test.go +++ b/builtin/providers/heroku/resource_heroku_drain_test.go @@ -1,6 +1,7 @@ package heroku import ( + "context" "fmt" "testing" @@ -11,7 +12,7 @@ import ( ) func TestAccHerokuDrain_Basic(t *testing.T) { - var drain heroku.LogDrain + var drain heroku.LogDrainInfoResult appName := fmt.Sprintf("tftest-%s", acctest.RandString(10)) resource.Test(t, resource.TestCase{ @@ -19,7 +20,7 @@ func TestAccHerokuDrain_Basic(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckHerokuDrainDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccCheckHerokuDrainConfig_basic(appName), Check: resource.ComposeTestCheckFunc( testAccCheckHerokuDrainExists("heroku_drain.foobar", &drain), @@ -42,7 +43,7 @@ func testAccCheckHerokuDrainDestroy(s *terraform.State) error { continue } - _, err := client.LogDrainInfo(rs.Primary.Attributes["app"], rs.Primary.ID) + _, err := client.LogDrainInfo(context.TODO(), rs.Primary.Attributes["app"], rs.Primary.ID) if err == nil { return fmt.Errorf("Drain still exists") @@ -52,7 +53,7 @@ func testAccCheckHerokuDrainDestroy(s *terraform.State) error { return nil } -func testAccCheckHerokuDrainAttributes(Drain *heroku.LogDrain) resource.TestCheckFunc { +func testAccCheckHerokuDrainAttributes(Drain *heroku.LogDrainInfoResult) resource.TestCheckFunc { return func(s *terraform.State) error { if 
diff --git a/builtin/providers/heroku/resource_heroku_drain_test.go b/builtin/providers/heroku/resource_heroku_drain_test.go
index 60db1db6ee..123160bd69 100644
--- a/builtin/providers/heroku/resource_heroku_drain_test.go
+++ b/builtin/providers/heroku/resource_heroku_drain_test.go
@@ -1,6 +1,7 @@
 package heroku
 
 import (
+	"context"
 	"fmt"
 	"testing"
 
@@ -11,7 +12,7 @@ import (
 )
 
 func TestAccHerokuDrain_Basic(t *testing.T) {
-	var drain heroku.LogDrain
+	var drain heroku.LogDrainInfoResult
 	appName := fmt.Sprintf("tftest-%s", acctest.RandString(10))
 
 	resource.Test(t, resource.TestCase{
@@ -19,7 +20,7 @@ func TestAccHerokuDrain_Basic(t *testing.T) {
 		Providers:    testAccProviders,
 		CheckDestroy: testAccCheckHerokuDrainDestroy,
 		Steps: []resource.TestStep{
-			resource.TestStep{
+			{
 				Config: testAccCheckHerokuDrainConfig_basic(appName),
 				Check: resource.ComposeTestCheckFunc(
 					testAccCheckHerokuDrainExists("heroku_drain.foobar", &drain),
@@ -42,7 +43,7 @@ func testAccCheckHerokuDrainDestroy(s *terraform.State) error {
 			continue
 		}
 
-		_, err := client.LogDrainInfo(rs.Primary.Attributes["app"], rs.Primary.ID)
+		_, err := client.LogDrainInfo(context.TODO(), rs.Primary.Attributes["app"], rs.Primary.ID)
 
 		if err == nil {
 			return fmt.Errorf("Drain still exists")
@@ -52,7 +53,7 @@ func testAccCheckHerokuDrainDestroy(s *terraform.State) error {
 	return nil
 }
 
-func testAccCheckHerokuDrainAttributes(Drain *heroku.LogDrain) resource.TestCheckFunc {
+func testAccCheckHerokuDrainAttributes(Drain *heroku.LogDrainInfoResult) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
 		if Drain.URL != "syslog://terraform.example.com:1234" {
@@ -67,7 +68,7 @@ func testAccCheckHerokuDrainAttributes(Drain *heroku.LogDrain) resource.TestChec
 	}
 }
 
-func testAccCheckHerokuDrainExists(n string, Drain *heroku.LogDrain) resource.TestCheckFunc {
+func testAccCheckHerokuDrainExists(n string, Drain *heroku.LogDrainInfoResult) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
 		rs, ok := s.RootModule().Resources[n]
 
@@ -81,7 +82,7 @@ func testAccCheckHerokuDrainExists(n string, Drain *heroku.LogDrain) resource.Te
 
 		client := testAccProvider.Meta().(*heroku.Service)
 
-		foundDrain, err := client.LogDrainInfo(rs.Primary.Attributes["app"], rs.Primary.ID)
+		foundDrain, err := client.LogDrainInfo(context.TODO(), rs.Primary.Attributes["app"], rs.Primary.ID)
 
 		if err != nil {
 			return err
diff --git a/builtin/providers/ignition/resource_ignition_config_test.go b/builtin/providers/ignition/resource_ignition_config_test.go
index 094ca31bc6..977ee7516b 100644
--- a/builtin/providers/ignition/resource_ignition_config_test.go
+++ b/builtin/providers/ignition/resource_ignition_config_test.go
@@ -3,6 +3,7 @@ package ignition
 import (
 	"encoding/json"
 	"fmt"
+	"regexp"
 	"testing"
 
 	"github.com/coreos/ignition/config/types"
@@ -67,6 +68,18 @@ func TestIngnitionFileAppend(t *testing.T) {
 	})
 }
 
+func testIgnitionError(t *testing.T, input string, expectedErr *regexp.Regexp) {
+	resource.Test(t, resource.TestCase{
+		Providers: testProviders,
+		Steps: []resource.TestStep{
+			{
+				Config:      fmt.Sprintf(testTemplate, input),
+				ExpectError: expectedErr,
+			},
+		},
+	})
+}
+
 func testIgnition(t *testing.T, input string, assert func(*types.Config) error) {
 	check := func(s *terraform.State) error {
 		got := s.RootModule().Outputs["rendered"].Value.(string)
diff --git a/builtin/providers/ignition/resource_ignition_filesystem.go b/builtin/providers/ignition/resource_ignition_filesystem.go
index 4f19b15e79..a26c3f7005 100644
--- a/builtin/providers/ignition/resource_ignition_filesystem.go
+++ b/builtin/providers/ignition/resource_ignition_filesystem.go
@@ -34,6 +34,11 @@ func resourceFilesystem() *schema.Resource {
 				Required: true,
 				ForceNew: true,
 			},
+			"create": &schema.Schema{
+				Type:     schema.TypeBool,
+				Optional: true,
+				ForceNew: true,
+			},
 			"force": &schema.Schema{
 				Type:     schema.TypeBool,
 				Optional: true,
@@ -84,14 +89,19 @@ func buildFilesystem(d *schema.ResourceData, c *cache) (string, error) {
 			Format: types.FilesystemFormat(d.Get("mount.0.format").(string)),
 		}
 
+		create, hasCreate := d.GetOk("mount.0.create")
 		force, hasForce := d.GetOk("mount.0.force")
 		options, hasOptions := d.GetOk("mount.0.options")
-		if hasOptions || hasForce {
+		if hasCreate || hasOptions || hasForce {
 			mount.Create = &types.FilesystemCreate{
 				Force:   force.(bool),
 				Options: castSliceInterface(options.([]interface{})),
 			}
 		}
+
+		if !create.(bool) && (hasForce || hasOptions) {
+			return "", fmt.Errorf("create should be true when force or options is used")
+		}
 	}
 
 	var path *types.Path
diff --git a/builtin/providers/ignition/resource_ignition_filesystem_test.go b/builtin/providers/ignition/resource_ignition_filesystem_test.go
index bd3926d4de..cfb6985547 100644
--- a/builtin/providers/ignition/resource_ignition_filesystem_test.go
+++ b/builtin/providers/ignition/resource_ignition_filesystem_test.go
@@ -2,6 +2,7 @@ package ignition
 
 import (
 	"fmt"
+	"regexp"
 	"testing"
 
 	"github.com/coreos/ignition/config/types"
@@ -22,11 +23,21 @@ func TestIngnitionFilesystem(t *testing.T) {
 			}
 		}
 
+		data "ignition_filesystem" "baz" {
+			name = "baz"
+			mount {
+				device = "/baz"
+				format = "ext4"
+				create = true
+			}
+		}
+
 		data "ignition_filesystem" "bar" {
 			name = "bar"
 			mount {
 				device = "/bar"
 				format = "ext4"
+				create = true
 				force = true
 				options = ["rw"]
 			}
@@ -36,11 +47,12 @@ func TestIngnitionFilesystem(t *testing.T) {
 			filesystems = [
 				"${data.ignition_filesystem.foo.id}",
 				"${data.ignition_filesystem.qux.id}",
+				"${data.ignition_filesystem.baz.id}",
 				"${data.ignition_filesystem.bar.id}",
 			]
 		}
 	`, func(c *types.Config) error {
-		if len(c.Storage.Filesystems) != 3 {
+		if len(c.Storage.Filesystems) != 4 {
 			return fmt.Errorf("disks, found %d", len(c.Storage.Filesystems))
 		}
 
@@ -75,6 +87,23 @@ func TestIngnitionFilesystem(t *testing.T) {
 		}
 
 		f = c.Storage.Filesystems[2]
+		if f.Name != "baz" {
+			return fmt.Errorf("name, found %q", f.Name)
+		}
+
+		if f.Mount.Device != "/baz" {
+			return fmt.Errorf("mount.0.device, found %q", f.Mount.Device)
+		}
+
+		if f.Mount.Format != "ext4" {
+			return fmt.Errorf("mount.0.format, found %q", f.Mount.Format)
+		}
+
+		if f.Mount.Create.Force != false {
+			return fmt.Errorf("mount.0.force, found %t", f.Mount.Create.Force)
+		}
+
+		f = c.Storage.Filesystems[3]
 		if f.Name != "bar" {
 			return fmt.Errorf("name, found %q", f.Name)
 		}
@@ -98,3 +127,22 @@ func TestIngnitionFilesystem(t *testing.T) {
 		return nil
 	})
 }
+
+func TestIngnitionFilesystemMissingCreate(t *testing.T) {
+	testIgnitionError(t, `
+		data "ignition_filesystem" "bar" {
+			name = "bar"
+			mount {
+				device = "/bar"
+				format = "ext4"
+				force = true
+			}
+		}
+
+		data "ignition_config" "test" {
+			filesystems = [
+				"${data.ignition_filesystem.bar.id}",
+			]
+		}
+	`, regexp.MustCompile("create should be true when force or options is used"))
+}
diff --git a/builtin/providers/kubernetes/provider.go b/builtin/providers/kubernetes/provider.go
new file mode 100644
index 0000000000..9d0d23cc30
--- /dev/null
+++ b/builtin/providers/kubernetes/provider.go
@@ -0,0 +1,172 @@
+package kubernetes
+
+import (
+	"bytes"
+	"fmt"
+	"log"
+	"os"
+
+	"github.com/hashicorp/terraform/helper/schema"
+	"github.com/hashicorp/terraform/terraform"
+	"github.com/mitchellh/go-homedir"
+	kubernetes "k8s.io/kubernetes/pkg/client/clientset_generated/release_1_5"
+	"k8s.io/kubernetes/pkg/client/restclient"
+	"k8s.io/kubernetes/pkg/client/unversioned/clientcmd"
+	clientcmdapi "k8s.io/kubernetes/pkg/client/unversioned/clientcmd/api"
+)
+
+func Provider() terraform.ResourceProvider {
+	return &schema.Provider{
+		Schema: map[string]*schema.Schema{
+			"host": {
+				Type:        schema.TypeString,
+				Optional:    true,
+				DefaultFunc: schema.EnvDefaultFunc("KUBE_HOST", ""),
+				Description: "The hostname (in form of URI) of Kubernetes master.",
+			},
+			"username": {
+				Type:        schema.TypeString,
+				Optional:    true,
+				DefaultFunc: schema.EnvDefaultFunc("KUBE_USER", ""),
+				Description: "The username to use for HTTP basic authentication when accessing the Kubernetes master endpoint.",
+			},
+			"password": {
+				Type:        schema.TypeString,
+				Optional:    true,
+				DefaultFunc: schema.EnvDefaultFunc("KUBE_PASSWORD", ""),
+				Description: "The password to use for HTTP basic authentication when accessing the Kubernetes master endpoint.",
+			},
+			"insecure": {
+				Type:        schema.TypeBool,
+				Optional:    true,
+				DefaultFunc: schema.EnvDefaultFunc("KUBE_INSECURE", false),
+				Description: "Whether server should be accessed without verifying the TLS certificate.",
+			},
+			"client_certificate": {
+				Type:        schema.TypeString,
+				Optional:    true,
+				DefaultFunc: schema.EnvDefaultFunc("KUBE_CLIENT_CERT_DATA", ""),
+				Description: "PEM-encoded client certificate for TLS authentication.",
+			},
+			"client_key": {
+				Type:        schema.TypeString,
+				Optional:    true,
+				DefaultFunc: schema.EnvDefaultFunc("KUBE_CLIENT_KEY_DATA", ""),
+				Description: "PEM-encoded client certificate key for TLS authentication.",
+			},
+			"cluster_ca_certificate": {
+				Type:        schema.TypeString,
+				Optional:    true,
+				DefaultFunc: schema.EnvDefaultFunc("KUBE_CLUSTER_CA_CERT_DATA", ""),
+				Description: "PEM-encoded root certificates bundle for TLS authentication.",
+			},
+			"config_path": {
+				Type:        schema.TypeString,
+				Optional:    true,
+				DefaultFunc: schema.EnvDefaultFunc("KUBE_CONFIG", "~/.kube/config"),
+				Description: "Path to the kube config file, defaults to ~/.kube/config",
+			},
+			"config_context_auth_info": {
+				Type:        schema.TypeString,
+				Optional:    true,
+				DefaultFunc: schema.EnvDefaultFunc("KUBE_CTX_AUTH_INFO", ""),
+				Description: "",
+			},
+			"config_context_cluster": {
+				Type:        schema.TypeString,
+				Optional:    true,
+				DefaultFunc: schema.EnvDefaultFunc("KUBE_CTX_CLUSTER", ""),
+				Description: "",
+			},
+		},
+
+		ResourcesMap: map[string]*schema.Resource{
+			"kubernetes_config_map": resourceKubernetesConfigMap(),
+			"kubernetes_namespace":  resourceKubernetesNamespace(),
+		},
+		ConfigureFunc: providerConfigure,
+	}
+}
+
+func providerConfigure(d *schema.ResourceData) (interface{}, error) {
+	// Config file loading
+	cfg, err := tryLoadingConfigFile(d)
+	if err != nil {
+		return nil, err
+	}
+	if cfg == nil {
+		cfg = &restclient.Config{}
+	}
+
+	// Overriding with static configuration
+	cfg.UserAgent = fmt.Sprintf("HashiCorp/1.0 Terraform/%s", terraform.VersionString())
+
+	if v, ok := d.GetOk("host"); ok {
+		cfg.Host = v.(string)
+	}
+	if v, ok := d.GetOk("username"); ok {
+		cfg.Username = v.(string)
+	}
+	if v, ok := d.GetOk("password"); ok {
+		cfg.Password = v.(string)
+	}
+	if v, ok := d.GetOk("insecure"); ok {
+		cfg.Insecure = v.(bool)
+	}
+	if v, ok := d.GetOk("cluster_ca_certificate"); ok {
+		cfg.CAData = bytes.NewBufferString(v.(string)).Bytes()
+	}
+	if v, ok := d.GetOk("client_certificate"); ok {
+		cfg.CertData = bytes.NewBufferString(v.(string)).Bytes()
+	}
+	if v, ok := d.GetOk("client_key"); ok {
+		cfg.KeyData = bytes.NewBufferString(v.(string)).Bytes()
+	}
+
+	k, err := kubernetes.NewForConfig(cfg)
+	if err != nil {
+		return nil, fmt.Errorf("Failed to configure: %s", err)
+	}
+
+	return k, nil
+}
+
+func tryLoadingConfigFile(d *schema.ResourceData) (*restclient.Config, error) {
+	path, err := homedir.Expand(d.Get("config_path").(string))
+	if err != nil {
+		return nil, err
+	}
+
+	loader := &clientcmd.ClientConfigLoadingRules{
+		ExplicitPath: path,
+	}
+
+	overrides := &clientcmd.ConfigOverrides{}
+	ctxSuffix := "; no context"
+	authInfo, authInfoOk := d.GetOk("config_context_auth_info")
+	cluster, clusterOk := d.GetOk("config_context_cluster")
+	if authInfoOk || clusterOk {
+		overrides.Context = clientcmdapi.Context{}
+		if authInfoOk {
+			overrides.Context.AuthInfo = authInfo.(string)
+		}
+		if clusterOk {
+			overrides.Context.Cluster = cluster.(string)
+		}
+		ctxSuffix = fmt.Sprintf("; auth_info: %s, cluster: %s",
+			overrides.Context.AuthInfo, overrides.Context.Cluster)
+	}
+	log.Printf("[DEBUG] Using override context: %#v", *overrides)
+
+	cc := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(loader, overrides)
+	cfg, err := cc.ClientConfig()
+	if err != nil {
+		if pathErr, ok := err.(*os.PathError); ok && os.IsNotExist(pathErr.Err) {
+			log.Printf("[INFO] Unable to load config file as it doesn't exist at %q", path)
+			return nil, nil
+		}
+		return nil, fmt.Errorf("Failed to load config (%s%s): %s", path, ctxSuffix, err)
+	}
+
+	log.Printf("[INFO] Successfully loaded config file (%s%s)", path, ctxSuffix)
+	return cfg, nil
+}
diff --git a/builtin/providers/kubernetes/provider_test.go b/builtin/providers/kubernetes/provider_test.go
new file mode 100644
index 0000000000..fbea586a91
--- /dev/null
+++ b/builtin/providers/kubernetes/provider_test.go
@@ -0,0 +1,53 @@
+package kubernetes
+
+import (
+	"os"
+	"strings"
+	"testing"
+
+	"github.com/hashicorp/terraform/helper/schema"
+	"github.com/hashicorp/terraform/terraform"
+)
+
+var testAccProviders map[string]terraform.ResourceProvider
+var testAccProvider *schema.Provider
+
+func init() {
+	testAccProvider = Provider().(*schema.Provider)
+	testAccProviders = map[string]terraform.ResourceProvider{
+		"kubernetes": testAccProvider,
+	}
+}
+
+func TestProvider(t *testing.T) {
+	if err := Provider().(*schema.Provider).InternalValidate(); err != nil {
+		t.Fatalf("err: %s", err)
+	}
+}
+
+func TestProvider_impl(t *testing.T) {
+	var _ terraform.ResourceProvider = Provider()
+}
+
+func testAccPreCheck(t *testing.T) {
+	hasFileCfg := (os.Getenv("KUBE_CTX_AUTH_INFO") != "" && os.Getenv("KUBE_CTX_CLUSTER") != "")
+	hasStaticCfg := (os.Getenv("KUBE_HOST") != "" &&
+		os.Getenv("KUBE_USER") != "" &&
+		os.Getenv("KUBE_PASSWORD") != "" &&
+		os.Getenv("KUBE_CLIENT_CERT_DATA") != "" &&
+		os.Getenv("KUBE_CLIENT_KEY_DATA") != "" &&
+		os.Getenv("KUBE_CLUSTER_CA_CERT_DATA") != "")
+
+	if !hasFileCfg && !hasStaticCfg {
+		t.Fatalf("File config (KUBE_CTX_AUTH_INFO and KUBE_CTX_CLUSTER) or static configuration"+
+			" (%s) must be set for acceptance tests",
+			strings.Join([]string{
+				"KUBE_HOST",
+				"KUBE_USER",
+				"KUBE_PASSWORD",
+				"KUBE_CLIENT_CERT_DATA",
+				"KUBE_CLIENT_KEY_DATA",
+				"KUBE_CLUSTER_CA_CERT_DATA",
+			}, ", "))
+	}
+}
diff --git a/builtin/providers/kubernetes/resource_kubernetes_config_map.go b/builtin/providers/kubernetes/resource_kubernetes_config_map.go
new file mode 100644
index 0000000000..460ca638e7
--- /dev/null
+++ b/builtin/providers/kubernetes/resource_kubernetes_config_map.go
@@ -0,0 +1,125 @@
+package kubernetes
+
+import (
+	"log"
+
+	"github.com/hashicorp/terraform/helper/schema"
+	"k8s.io/kubernetes/pkg/api/errors"
+	api "k8s.io/kubernetes/pkg/api/v1"
+	kubernetes "k8s.io/kubernetes/pkg/client/clientset_generated/release_1_5"
+)
+
+func resourceKubernetesConfigMap() *schema.Resource {
+	return &schema.Resource{
+		Create: resourceKubernetesConfigMapCreate,
+		Read:   resourceKubernetesConfigMapRead,
+		Exists: resourceKubernetesConfigMapExists,
+		Update: resourceKubernetesConfigMapUpdate,
+		Delete: resourceKubernetesConfigMapDelete,
+		Importer: &schema.ResourceImporter{
+			State: schema.ImportStatePassthrough,
+		},
+
+		Schema: map[string]*schema.Schema{
+			"metadata": namespacedMetadataSchema("config map", true),
+			"data": {
+				Type:        schema.TypeMap,
+				Description: "A map of the configuration data.",
+				Optional:    true,
+			},
+		},
+	}
+}
+
+func resourceKubernetesConfigMapCreate(d *schema.ResourceData, meta interface{}) error {
+	conn := meta.(*kubernetes.Clientset)
+
+	metadata := expandMetadata(d.Get("metadata").([]interface{}))
+	cfgMap := api.ConfigMap{
+		ObjectMeta: metadata,
+		Data:       expandStringMap(d.Get("data").(map[string]interface{})),
+	}
+	log.Printf("[INFO] Creating new config map: %#v", cfgMap)
+	out, err := conn.CoreV1().ConfigMaps(metadata.Namespace).Create(&cfgMap)
+	if err != nil {
+		return err
+	}
+	log.Printf("[INFO] Submitted new config map: %#v", out)
+	d.SetId(buildId(out.ObjectMeta))
+
+	return resourceKubernetesConfigMapRead(d, meta)
+}
+
+func resourceKubernetesConfigMapRead(d *schema.ResourceData, meta interface{}) error {
+	conn := meta.(*kubernetes.Clientset)
+
+	namespace, name := idParts(d.Id())
+	log.Printf("[INFO] Reading config map %s", name)
+	cfgMap, err := conn.CoreV1().ConfigMaps(namespace).Get(name)
+	if err != nil {
+		log.Printf("[DEBUG] Received error: %#v", err)
+		return err
+	}
+	log.Printf("[INFO] Received config map: %#v", cfgMap)
+	err = d.Set("metadata", flattenMetadata(cfgMap.ObjectMeta))
+	if err != nil {
+		return err
+	}
+	d.Set("data", cfgMap.Data)
+
+	return nil
+}
+
+func resourceKubernetesConfigMapUpdate(d *schema.ResourceData, meta interface{}) error {
+	conn := meta.(*kubernetes.Clientset)
+
+	metadata := expandMetadata(d.Get("metadata").([]interface{}))
+	namespace, name := idParts(d.Id())
+	// This is necessary in case the name is generated
+	metadata.Name = name
+
+	cfgMap := api.ConfigMap{
+		ObjectMeta: metadata,
+		Data:       expandStringMap(d.Get("data").(map[string]interface{})),
+	}
+	log.Printf("[INFO] Updating config map: %#v", cfgMap)
+	out, err := conn.CoreV1().ConfigMaps(namespace).Update(&cfgMap)
+	if err != nil {
+		return err
+	}
+	log.Printf("[INFO] Submitted updated config map: %#v", out)
+	d.SetId(buildId(out.ObjectMeta))
+
+	return resourceKubernetesConfigMapRead(d, meta)
+}
+
+func resourceKubernetesConfigMapDelete(d *schema.ResourceData, meta interface{}) error {
+	conn := meta.(*kubernetes.Clientset)
+
+	namespace, name := idParts(d.Id())
+	log.Printf("[INFO] Deleting config map: %#v", name)
+	err := conn.CoreV1().ConfigMaps(namespace).Delete(name, &api.DeleteOptions{})
+	if err != nil {
+		return err
+	}
+
+	log.Printf("[INFO] Config map %s deleted", name)
+
+	d.SetId("")
+	return nil
+}
+
+func resourceKubernetesConfigMapExists(d *schema.ResourceData, meta interface{}) (bool, error) {
+	conn := meta.(*kubernetes.Clientset)
+
+	namespace, name := idParts(d.Id())
+	log.Printf("[INFO] Checking config map %s", name)
+	_, err := conn.CoreV1().ConfigMaps(namespace).Get(name)
+	if err != nil {
+		if statusErr, ok := err.(*errors.StatusError); ok && statusErr.ErrStatus.Code == 404 {
+			return false, nil
+		}
+		log.Printf("[DEBUG] Received error: %#v", err)
+	}
+	return true, err
+}
diff --git a/builtin/providers/kubernetes/resource_kubernetes_config_map_test.go b/builtin/providers/kubernetes/resource_kubernetes_config_map_test.go
new file mode 100644
index 0000000000..e3d0e50975
--- /dev/null
+++ b/builtin/providers/kubernetes/resource_kubernetes_config_map_test.go
@@ -0,0 +1,284 @@
+package kubernetes
+
+import (
+	"fmt"
+	"reflect"
+	"regexp"
+	"testing"
+
+	"github.com/hashicorp/terraform/helper/acctest"
+	"github.com/hashicorp/terraform/helper/resource"
+	"github.com/hashicorp/terraform/terraform"
+	api "k8s.io/kubernetes/pkg/api/v1"
+	kubernetes "k8s.io/kubernetes/pkg/client/clientset_generated/release_1_5"
+)
+
+func TestAccKubernetesConfigMap_basic(t *testing.T) {
+	var conf api.ConfigMap
+	name := fmt.Sprintf("tf-acc-test-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum))
+
+	resource.Test(t, resource.TestCase{
+		PreCheck:      func() { testAccPreCheck(t) },
+		IDRefreshName: "kubernetes_config_map.test",
+		Providers:     testAccProviders,
+		CheckDestroy:  testAccCheckKubernetesConfigMapDestroy,
+		Steps: []resource.TestStep{
+			{
+				Config: testAccKubernetesConfigMapConfig_basic(name),
+				Check: resource.ComposeAggregateTestCheckFunc(
+					testAccCheckKubernetesConfigMapExists("kubernetes_config_map.test", &conf),
+					resource.TestCheckResourceAttr("kubernetes_config_map.test", "metadata.0.annotations.%", "2"),
+					resource.TestCheckResourceAttr("kubernetes_config_map.test", "metadata.0.annotations.TestAnnotationOne", "one"),
+					resource.TestCheckResourceAttr("kubernetes_config_map.test", "metadata.0.annotations.TestAnnotationTwo", "two"),
+					testAccCheckMetaAnnotations(&conf.ObjectMeta, map[string]string{"TestAnnotationOne": "one", "TestAnnotationTwo": "two"}),
+					resource.TestCheckResourceAttr("kubernetes_config_map.test", "metadata.0.labels.%", "3"),
+					resource.TestCheckResourceAttr("kubernetes_config_map.test", "metadata.0.labels.TestLabelOne", "one"),
+					resource.TestCheckResourceAttr("kubernetes_config_map.test", "metadata.0.labels.TestLabelTwo", "two"),
+					resource.TestCheckResourceAttr("kubernetes_config_map.test", "metadata.0.labels.TestLabelThree", "three"),
+					testAccCheckMetaLabels(&conf.ObjectMeta, map[string]string{"TestLabelOne": "one", "TestLabelTwo": "two", "TestLabelThree": "three"}),
+					resource.TestCheckResourceAttr("kubernetes_config_map.test", "metadata.0.name", name),
+					resource.TestCheckResourceAttrSet("kubernetes_config_map.test", "metadata.0.generation"),
+					resource.TestCheckResourceAttrSet("kubernetes_config_map.test", "metadata.0.resource_version"),
+					resource.TestCheckResourceAttrSet("kubernetes_config_map.test", "metadata.0.self_link"),
+					resource.TestCheckResourceAttrSet("kubernetes_config_map.test", "metadata.0.uid"),
+					resource.TestCheckResourceAttr("kubernetes_config_map.test", "data.%", "2"),
+					resource.TestCheckResourceAttr("kubernetes_config_map.test", "data.one", "first"),
+					resource.TestCheckResourceAttr("kubernetes_config_map.test", "data.two", "second"),
+					testAccCheckConfigMapData(&conf, map[string]string{"one": "first", "two": "second"}),
+				),
+			},
+			{
+				Config: testAccKubernetesConfigMapConfig_modified(name),
+				Check: resource.ComposeAggregateTestCheckFunc(
+					testAccCheckKubernetesConfigMapExists("kubernetes_config_map.test", &conf),
+					resource.TestCheckResourceAttr("kubernetes_config_map.test", "metadata.0.annotations.%", "2"),
+					resource.TestCheckResourceAttr("kubernetes_config_map.test", "metadata.0.annotations.TestAnnotationOne", "one"),
+					resource.TestCheckResourceAttr("kubernetes_config_map.test", "metadata.0.annotations.Different", "1234"),
+					testAccCheckMetaAnnotations(&conf.ObjectMeta, map[string]string{"TestAnnotationOne": "one", "Different": "1234"}),
+					resource.TestCheckResourceAttr("kubernetes_config_map.test", "metadata.0.labels.%", "2"),
+					resource.TestCheckResourceAttr("kubernetes_config_map.test", "metadata.0.labels.TestLabelOne", "one"),
+					resource.TestCheckResourceAttr("kubernetes_config_map.test", "metadata.0.labels.TestLabelThree", "three"),
+					testAccCheckMetaLabels(&conf.ObjectMeta, map[string]string{"TestLabelOne": "one", "TestLabelThree": "three"}),
+					resource.TestCheckResourceAttr("kubernetes_config_map.test", "metadata.0.name", name),
+					resource.TestCheckResourceAttrSet("kubernetes_config_map.test", "metadata.0.generation"),
+					resource.TestCheckResourceAttrSet("kubernetes_config_map.test", "metadata.0.resource_version"),
+					resource.TestCheckResourceAttrSet("kubernetes_config_map.test", "metadata.0.self_link"),
+					resource.TestCheckResourceAttrSet("kubernetes_config_map.test", "metadata.0.uid"),
+					resource.TestCheckResourceAttr("kubernetes_config_map.test", "data.%", "3"),
+					resource.TestCheckResourceAttr("kubernetes_config_map.test", "data.one", "first"),
+					resource.TestCheckResourceAttr("kubernetes_config_map.test", "data.two", "second"),
+					resource.TestCheckResourceAttr("kubernetes_config_map.test", "data.nine", "ninth"),
+					testAccCheckConfigMapData(&conf, map[string]string{"one": "first", "two": "second", "nine": "ninth"}),
+				),
+			},
+			{
+				Config: testAccKubernetesConfigMapConfig_noData(name),
+				Check: resource.ComposeAggregateTestCheckFunc(
+					testAccCheckKubernetesConfigMapExists("kubernetes_config_map.test", &conf),
+					resource.TestCheckResourceAttr("kubernetes_config_map.test", "metadata.0.annotations.%", "0"),
+					testAccCheckMetaAnnotations(&conf.ObjectMeta, map[string]string{}),
+					resource.TestCheckResourceAttr("kubernetes_config_map.test", "metadata.0.labels.%", "0"),
+					testAccCheckMetaLabels(&conf.ObjectMeta, map[string]string{}),
+					resource.TestCheckResourceAttr("kubernetes_config_map.test", "metadata.0.name", name),
+					resource.TestCheckResourceAttrSet("kubernetes_config_map.test", "metadata.0.generation"),
+					resource.TestCheckResourceAttrSet("kubernetes_config_map.test", "metadata.0.resource_version"),
+					resource.TestCheckResourceAttrSet("kubernetes_config_map.test", "metadata.0.self_link"),
+					resource.TestCheckResourceAttrSet("kubernetes_config_map.test", "metadata.0.uid"),
+					resource.TestCheckResourceAttr("kubernetes_config_map.test", "data.%", "0"),
+					testAccCheckConfigMapData(&conf, map[string]string{}),
+				),
+			},
+		},
+	})
+}
+
+func TestAccKubernetesConfigMap_importBasic(t *testing.T) {
+	resourceName := "kubernetes_config_map.test"
+	name := fmt.Sprintf("tf-acc-test-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum))
+
+	resource.Test(t, resource.TestCase{
+		PreCheck:     func() { testAccPreCheck(t) },
+		Providers:    testAccProviders,
+		CheckDestroy: testAccCheckKubernetesConfigMapDestroy,
+		Steps: []resource.TestStep{
+			{
+				Config: testAccKubernetesConfigMapConfig_basic(name),
+			},
+
+			{
+				ResourceName:      resourceName,
+				ImportState:       true,
+				ImportStateVerify: true,
+			},
+		},
+	})
+}
+
+func TestAccKubernetesConfigMap_generatedName(t *testing.T) {
+	var conf api.ConfigMap
+	prefix := "tf-acc-test-gen-"
+
+	resource.Test(t, resource.TestCase{
+		PreCheck:      func() { testAccPreCheck(t) },
+		IDRefreshName: "kubernetes_config_map.test",
+		Providers:     testAccProviders,
+		CheckDestroy:  testAccCheckKubernetesConfigMapDestroy,
+		Steps: []resource.TestStep{
+			{
+				Config: testAccKubernetesConfigMapConfig_generatedName(prefix),
+				Check: resource.ComposeAggregateTestCheckFunc(
+					testAccCheckKubernetesConfigMapExists("kubernetes_config_map.test", &conf),
+					resource.TestCheckResourceAttr("kubernetes_config_map.test", "metadata.0.annotations.%", "0"),
+					testAccCheckMetaAnnotations(&conf.ObjectMeta, map[string]string{}),
+					resource.TestCheckResourceAttr("kubernetes_config_map.test", "metadata.0.labels.%", "0"),
+					testAccCheckMetaLabels(&conf.ObjectMeta, map[string]string{}),
+					resource.TestCheckResourceAttr("kubernetes_config_map.test", "metadata.0.generate_name", prefix),
+					resource.TestMatchResourceAttr("kubernetes_config_map.test", "metadata.0.name", regexp.MustCompile("^"+prefix)),
+					resource.TestCheckResourceAttrSet("kubernetes_config_map.test", "metadata.0.generation"),
+					resource.TestCheckResourceAttrSet("kubernetes_config_map.test", "metadata.0.resource_version"),
+					resource.TestCheckResourceAttrSet("kubernetes_config_map.test", "metadata.0.self_link"),
+					resource.TestCheckResourceAttrSet("kubernetes_config_map.test", "metadata.0.uid"),
+				),
+			},
+		},
+	})
+}
+
+func TestAccKubernetesConfigMap_importGeneratedName(t *testing.T) {
+	resourceName := "kubernetes_config_map.test"
+	prefix := "tf-acc-test-gen-import-"
+
+	resource.Test(t, resource.TestCase{
+		PreCheck:     func() { testAccPreCheck(t) },
+		Providers:    testAccProviders,
+		CheckDestroy: testAccCheckKubernetesConfigMapDestroy,
+		Steps: []resource.TestStep{
+			{
+				Config: testAccKubernetesConfigMapConfig_generatedName(prefix),
+			},
+
+			{
+				ResourceName:      resourceName,
+				ImportState:       true,
+				ImportStateVerify: true,
+			},
+		},
+	})
+}
+
+func testAccCheckConfigMapData(m *api.ConfigMap, expected map[string]string) resource.TestCheckFunc {
+	return func(s *terraform.State) error {
+		if len(expected) == 0 && len(m.Data) == 0 {
+			return nil
+		}
+		if !reflect.DeepEqual(m.Data, expected) {
+			return fmt.Errorf("%s data don't match.\nExpected: %q\nGiven: %q",
+				m.Name, expected, m.Data)
+		}
+		return nil
+	}
+}
+
+func testAccCheckKubernetesConfigMapDestroy(s *terraform.State) error {
+	conn := testAccProvider.Meta().(*kubernetes.Clientset)
+
+	for _, rs := range s.RootModule().Resources {
+		if rs.Type != "kubernetes_config_map" {
+			continue
+		}
+		namespace, name := idParts(rs.Primary.ID)
+		resp, err := conn.CoreV1().ConfigMaps(namespace).Get(name)
+		if err == nil {
+			if resp.Name == rs.Primary.ID {
+				return fmt.Errorf("Config Map still exists: %s", rs.Primary.ID)
+			}
+		}
+	}
+
+	return nil
+}
+
+func testAccCheckKubernetesConfigMapExists(n string, obj *api.ConfigMap) resource.TestCheckFunc {
+	return func(s *terraform.State) error {
+		rs, ok := s.RootModule().Resources[n]
+		if !ok {
+			return fmt.Errorf("Not found: %s", n)
+		}
+
+		conn := testAccProvider.Meta().(*kubernetes.Clientset)
+		namespace, name := idParts(rs.Primary.ID)
+		out, err := conn.CoreV1().ConfigMaps(namespace).Get(name)
+		if err != nil {
+			return err
+		}
+
+		*obj = *out
+		return nil
+	}
+}
+
+func testAccKubernetesConfigMapConfig_basic(name string) string {
+	return fmt.Sprintf(`
+resource "kubernetes_config_map" "test" {
+	metadata {
+		annotations {
+			TestAnnotationOne = "one"
+			TestAnnotationTwo = "two"
+		}
+		labels {
+			TestLabelOne = "one"
+			TestLabelTwo = "two"
+			TestLabelThree = "three"
+		}
+		name = "%s"
+	}
+	data {
+		one = "first"
+		two = "second"
+	}
+}`, name)
+}
+
+func testAccKubernetesConfigMapConfig_modified(name string) string {
+	return fmt.Sprintf(`
+resource "kubernetes_config_map" "test" {
+	metadata {
+		annotations {
+			TestAnnotationOne = "one"
+			Different = "1234"
+		}
+		labels {
+			TestLabelOne = "one"
+			TestLabelThree = "three"
+		}
+		name = "%s"
+	}
+	data {
+		one = "first"
+		two = "second"
+		nine = "ninth"
+	}
+}`, name)
+}
+
+func testAccKubernetesConfigMapConfig_noData(name string) string {
+	return fmt.Sprintf(`
+resource "kubernetes_config_map" "test" {
+	metadata {
+		name = "%s"
+	}
+}`, name)
+}
+
+func testAccKubernetesConfigMapConfig_generatedName(prefix string) string {
+	return fmt.Sprintf(`
+resource "kubernetes_config_map" "test" {
+	metadata {
+		generate_name = "%s"
+	}
+	data {
+		one = "first"
+		two = "second"
+	}
+}`, prefix)
}
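A detail worth noting in the config map resource: the Exists function has to distinguish "gone" from "broken". A Kubernetes *errors.StatusError with code 404 means the object was deleted out of band, so Terraform should drop it from state and plan a re-create; any other error is a genuine API failure. Condensed into a standalone sketch (helper name hypothetical; client and idParts as in the files above):

    // configMapExists mirrors resourceKubernetesConfigMapExists without the
    // schema plumbing.
    func configMapExists(conn *kubernetes.Clientset, id string) (bool, error) {
    	namespace, name := idParts(id)
    	_, err := conn.CoreV1().ConfigMaps(namespace).Get(name)
    	if err != nil {
    		// 404: deleted outside Terraform; report absence, not failure.
    		if statusErr, ok := err.(*errors.StatusError); ok && statusErr.ErrStatus.Code == 404 {
    			return false, nil
    		}
    		return false, err // anything else is a real API error
    	}
    	return true, nil
    }
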
diff --git a/builtin/providers/kubernetes/resource_kubernetes_namespace.go b/builtin/providers/kubernetes/resource_kubernetes_namespace.go
new file mode 100644
index 0000000000..9e6160e51b
--- /dev/null
+++ b/builtin/providers/kubernetes/resource_kubernetes_namespace.go
@@ -0,0 +1,143 @@
+package kubernetes
+
+import (
+	"fmt"
+	"log"
+	"time"
+
+	"github.com/hashicorp/terraform/helper/resource"
+	"github.com/hashicorp/terraform/helper/schema"
+	"k8s.io/kubernetes/pkg/api/errors"
+	api "k8s.io/kubernetes/pkg/api/v1"
+	kubernetes "k8s.io/kubernetes/pkg/client/clientset_generated/release_1_5"
+)
+
+func resourceKubernetesNamespace() *schema.Resource {
+	return &schema.Resource{
+		Create: resourceKubernetesNamespaceCreate,
+		Read:   resourceKubernetesNamespaceRead,
+		Exists: resourceKubernetesNamespaceExists,
+		Update: resourceKubernetesNamespaceUpdate,
+		Delete: resourceKubernetesNamespaceDelete,
+		Importer: &schema.ResourceImporter{
+			State: schema.ImportStatePassthrough,
+		},
+
+		Schema: map[string]*schema.Schema{
+			"metadata": metadataSchema("namespace"),
+		},
+	}
+}
+
+func resourceKubernetesNamespaceCreate(d *schema.ResourceData, meta interface{}) error {
+	conn := meta.(*kubernetes.Clientset)
+
+	metadata := expandMetadata(d.Get("metadata").([]interface{}))
+	namespace := api.Namespace{
+		ObjectMeta: metadata,
+	}
+	log.Printf("[INFO] Creating new namespace: %#v", namespace)
+	out, err := conn.CoreV1().Namespaces().Create(&namespace)
+	if err != nil {
+		return err
+	}
+	log.Printf("[INFO] Submitted new namespace: %#v", out)
+	d.SetId(out.Name)
+
+	return resourceKubernetesNamespaceRead(d, meta)
+}
+
+func resourceKubernetesNamespaceRead(d *schema.ResourceData, meta interface{}) error {
+	conn := meta.(*kubernetes.Clientset)
+
+	name := d.Id()
+	log.Printf("[INFO] Reading namespace %s", name)
+	namespace, err := conn.CoreV1().Namespaces().Get(name)
+	if err != nil {
+		log.Printf("[DEBUG] Received error: %#v", err)
+		return err
+	}
+	log.Printf("[INFO] Received namespace: %#v", namespace)
+	err = d.Set("metadata", flattenMetadata(namespace.ObjectMeta))
+	if err != nil {
+		return err
+	}
+
+	return nil
+}
+
+func resourceKubernetesNamespaceUpdate(d *schema.ResourceData, meta interface{}) error {
+	conn := meta.(*kubernetes.Clientset)
+
+	metadata := expandMetadata(d.Get("metadata").([]interface{}))
+	// This is necessary in case the name is generated
+	metadata.Name = d.Id()
+
+	namespace := api.Namespace{
+		ObjectMeta: metadata,
+	}
+	log.Printf("[INFO] Updating namespace: %#v", namespace)
+	out, err := conn.CoreV1().Namespaces().Update(&namespace)
+	if err != nil {
+		return err
+	}
+	log.Printf("[INFO] Submitted updated namespace: %#v", out)
+	d.SetId(out.Name)
+
+	return resourceKubernetesNamespaceRead(d, meta)
+}
+
+func resourceKubernetesNamespaceDelete(d *schema.ResourceData, meta interface{}) error {
+	conn := meta.(*kubernetes.Clientset)
+
+	name := d.Id()
+	log.Printf("[INFO] Deleting namespace: %#v", name)
+	err := conn.CoreV1().Namespaces().Delete(name, &api.DeleteOptions{})
+	if err != nil {
+		return err
+	}
+
+	stateConf := &resource.StateChangeConf{
+		Target:  []string{},
+		Pending: []string{"Terminating"},
+		Timeout: 5 * time.Minute,
+		Refresh: func() (interface{}, string, error) {
+			out, err := conn.CoreV1().Namespaces().Get(name)
+			if err != nil {
+				if statusErr, ok := err.(*errors.StatusError); ok && statusErr.ErrStatus.Code == 404 {
+					return nil, "", nil
+				}
+				log.Printf("[ERROR] Received error: %#v", err)
+				return out, "Error", err
+			}
+
+			statusPhase := fmt.Sprintf("%v", out.Status.Phase)
+			log.Printf("[DEBUG] Namespace %s status received: %#v", out.Name, statusPhase)
+			return out, statusPhase, nil
+		},
+	}
+	_, err = stateConf.WaitForState()
+	if err != nil {
+		return err
+	}
+	log.Printf("[INFO] Namespace %s deleted", name)
+
+	d.SetId("")
+	return nil
+}
+
+func resourceKubernetesNamespaceExists(d *schema.ResourceData, meta interface{}) (bool, error) {
+	conn := meta.(*kubernetes.Clientset)
+
+	name := d.Id()
+	log.Printf("[INFO] Checking namespace %s", name)
+	_, err := conn.CoreV1().Namespaces().Get(name)
+	if err != nil {
+		if statusErr, ok := err.(*errors.StatusError); ok && statusErr.ErrStatus.Code == 404 {
+			return false, nil
+		}
+		log.Printf("[DEBUG] Received error: %#v", err)
+	}
+	log.Printf("[INFO] Namespace %s exists", name)
+	return true, err
+}
diff --git a/builtin/providers/kubernetes/resource_kubernetes_namespace_test.go b/builtin/providers/kubernetes/resource_kubernetes_namespace_test.go
new file mode 100644
index 0000000000..561f8a01f2
--- /dev/null
+++ b/builtin/providers/kubernetes/resource_kubernetes_namespace_test.go
@@ -0,0 +1,272 @@
+package kubernetes
+
+import (
+	"fmt"
+	"reflect"
+	"regexp"
+	"testing"
+
+	"github.com/hashicorp/terraform/helper/acctest"
+	"github.com/hashicorp/terraform/helper/resource"
+	"github.com/hashicorp/terraform/terraform"
+	api "k8s.io/kubernetes/pkg/api/v1"
+	kubernetes "k8s.io/kubernetes/pkg/client/clientset_generated/release_1_5"
+)
+
+func TestAccKubernetesNamespace_basic(t *testing.T) {
+	var conf api.Namespace
+	nsName := fmt.Sprintf("tf-acc-test-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum))
+
+	resource.Test(t, resource.TestCase{
+		PreCheck:      func() { testAccPreCheck(t) },
+		IDRefreshName: "kubernetes_namespace.test",
+		Providers:     testAccProviders,
+		CheckDestroy:  testAccCheckKubernetesNamespaceDestroy,
+		Steps: []resource.TestStep{
+			{
+				Config: testAccKubernetesNamespaceConfig_basic(nsName),
+				Check: resource.ComposeAggregateTestCheckFunc(
+					testAccCheckKubernetesNamespaceExists("kubernetes_namespace.test", &conf),
+					resource.TestCheckResourceAttr("kubernetes_namespace.test", "metadata.0.annotations.%", "2"),
+					resource.TestCheckResourceAttr("kubernetes_namespace.test", "metadata.0.annotations.TestAnnotationOne", "one"),
+					resource.TestCheckResourceAttr("kubernetes_namespace.test", "metadata.0.annotations.TestAnnotationTwo", "two"),
+					testAccCheckMetaAnnotations(&conf.ObjectMeta, map[string]string{"TestAnnotationOne": "one", "TestAnnotationTwo": "two"}),
+					resource.TestCheckResourceAttr("kubernetes_namespace.test", "metadata.0.labels.%", "3"),
+					resource.TestCheckResourceAttr("kubernetes_namespace.test", "metadata.0.labels.TestLabelOne", "one"),
+					resource.TestCheckResourceAttr("kubernetes_namespace.test", "metadata.0.labels.TestLabelTwo", "two"),
+					resource.TestCheckResourceAttr("kubernetes_namespace.test", "metadata.0.labels.TestLabelThree", "three"),
+					testAccCheckMetaLabels(&conf.ObjectMeta, map[string]string{"TestLabelOne": "one", "TestLabelTwo": "two", "TestLabelThree": "three"}),
+					resource.TestCheckResourceAttr("kubernetes_namespace.test", "metadata.0.name", nsName),
+					resource.TestCheckResourceAttrSet("kubernetes_namespace.test", "metadata.0.generation"),
+					resource.TestCheckResourceAttrSet("kubernetes_namespace.test", "metadata.0.resource_version"),
+					resource.TestCheckResourceAttrSet("kubernetes_namespace.test", "metadata.0.self_link"),
+					resource.TestCheckResourceAttrSet("kubernetes_namespace.test", "metadata.0.uid"),
+				),
+			},
+			{
+				Config: testAccKubernetesNamespaceConfig_smallerLists(nsName),
+				Check: resource.ComposeAggregateTestCheckFunc(
+					testAccCheckKubernetesNamespaceExists("kubernetes_namespace.test", &conf),
+					resource.TestCheckResourceAttr("kubernetes_namespace.test", "metadata.0.annotations.%", "2"),
+					resource.TestCheckResourceAttr("kubernetes_namespace.test", "metadata.0.annotations.TestAnnotationOne", "one"),
+					resource.TestCheckResourceAttr("kubernetes_namespace.test", "metadata.0.annotations.Different", "1234"),
+					testAccCheckMetaAnnotations(&conf.ObjectMeta, map[string]string{"TestAnnotationOne": "one", "Different": "1234"}),
+					resource.TestCheckResourceAttr("kubernetes_namespace.test", "metadata.0.labels.%", "2"),
+					resource.TestCheckResourceAttr("kubernetes_namespace.test", "metadata.0.labels.TestLabelOne", "one"),
+					resource.TestCheckResourceAttr("kubernetes_namespace.test", "metadata.0.labels.TestLabelThree", "three"),
+					testAccCheckMetaLabels(&conf.ObjectMeta, map[string]string{"TestLabelOne": "one", "TestLabelThree": "three"}),
+					resource.TestCheckResourceAttr("kubernetes_namespace.test", "metadata.0.name", nsName),
+					resource.TestCheckResourceAttrSet("kubernetes_namespace.test", "metadata.0.generation"),
+					resource.TestCheckResourceAttrSet("kubernetes_namespace.test", "metadata.0.resource_version"),
+					resource.TestCheckResourceAttrSet("kubernetes_namespace.test", "metadata.0.self_link"),
+					resource.TestCheckResourceAttrSet("kubernetes_namespace.test", "metadata.0.uid"),
+				),
+			},
+			{
+				Config: testAccKubernetesNamespaceConfig_noLists(nsName),
+				Check: resource.ComposeAggregateTestCheckFunc(
+					testAccCheckKubernetesNamespaceExists("kubernetes_namespace.test", &conf),
+					resource.TestCheckResourceAttr("kubernetes_namespace.test", "metadata.0.annotations.%", "0"),
+					testAccCheckMetaAnnotations(&conf.ObjectMeta, map[string]string{}),
+					resource.TestCheckResourceAttr("kubernetes_namespace.test", "metadata.0.labels.%", "0"),
+					testAccCheckMetaLabels(&conf.ObjectMeta, map[string]string{}),
+					resource.TestCheckResourceAttr("kubernetes_namespace.test", "metadata.0.name", nsName),
+					resource.TestCheckResourceAttrSet("kubernetes_namespace.test", "metadata.0.generation"),
+					resource.TestCheckResourceAttrSet("kubernetes_namespace.test", "metadata.0.resource_version"),
+					resource.TestCheckResourceAttrSet("kubernetes_namespace.test", "metadata.0.self_link"),
+					resource.TestCheckResourceAttrSet("kubernetes_namespace.test", "metadata.0.uid"),
+				),
+			},
+		},
+	})
+}
+
+func TestAccKubernetesNamespace_importBasic(t *testing.T) {
+	resourceName := "kubernetes_namespace.test"
+	nsName := fmt.Sprintf("tf-acc-test-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum))
+
+	resource.Test(t, resource.TestCase{
+		PreCheck:     func() { testAccPreCheck(t) },
+		Providers:    testAccProviders,
+		CheckDestroy: testAccCheckKubernetesNamespaceDestroy,
+		Steps: []resource.TestStep{
+			{
+				Config: testAccKubernetesNamespaceConfig_basic(nsName),
+			},
+
+			{
+				ResourceName:      resourceName,
+				ImportState:       true,
+				ImportStateVerify: true,
+			},
+		},
+	})
+}
+
+func TestAccKubernetesNamespace_generatedName(t *testing.T) {
+	var conf api.Namespace
+	prefix := "tf-acc-test-gen-"
+
+	resource.Test(t, resource.TestCase{
+		PreCheck:      func() { testAccPreCheck(t) },
+		IDRefreshName: "kubernetes_namespace.test",
+		Providers:     testAccProviders,
+		CheckDestroy:  testAccCheckKubernetesNamespaceDestroy,
+		Steps: []resource.TestStep{
+			{
+				Config: testAccKubernetesNamespaceConfig_generatedName(prefix),
+				Check: resource.ComposeAggregateTestCheckFunc(
+					testAccCheckKubernetesNamespaceExists("kubernetes_namespace.test", &conf),
+					resource.TestCheckResourceAttr("kubernetes_namespace.test", "metadata.0.annotations.%", "0"),
+					testAccCheckMetaAnnotations(&conf.ObjectMeta, map[string]string{}),
+					resource.TestCheckResourceAttr("kubernetes_namespace.test", "metadata.0.labels.%", "0"),
+					testAccCheckMetaLabels(&conf.ObjectMeta, map[string]string{}),
+					resource.TestCheckResourceAttr("kubernetes_namespace.test", "metadata.0.generate_name", prefix),
+					resource.TestMatchResourceAttr("kubernetes_namespace.test", "metadata.0.name", regexp.MustCompile("^"+prefix)),
+					resource.TestCheckResourceAttrSet("kubernetes_namespace.test", "metadata.0.generation"),
+					resource.TestCheckResourceAttrSet("kubernetes_namespace.test", "metadata.0.resource_version"),
+					resource.TestCheckResourceAttrSet("kubernetes_namespace.test", "metadata.0.self_link"),
+					resource.TestCheckResourceAttrSet("kubernetes_namespace.test", "metadata.0.uid"),
+				),
+			},
+		},
+	})
+}
+
+func TestAccKubernetesNamespace_importGeneratedName(t *testing.T) {
+	resourceName := "kubernetes_namespace.test"
+	prefix := "tf-acc-test-gen-import-"
+
+	resource.Test(t, resource.TestCase{
+		PreCheck:     func() { testAccPreCheck(t) },
+		Providers:    testAccProviders,
+		CheckDestroy: testAccCheckKubernetesNamespaceDestroy,
+		Steps: []resource.TestStep{
+			{
+				Config: testAccKubernetesNamespaceConfig_generatedName(prefix),
+			},
+
+			{
+				ResourceName:      resourceName,
+				ImportState:       true,
+				ImportStateVerify: true,
+			},
+		},
+	})
+}
+
+func testAccCheckMetaAnnotations(om *api.ObjectMeta, expected map[string]string) resource.TestCheckFunc {
+	return func(s *terraform.State) error {
+		if len(expected) == 0 && len(om.Annotations) == 0 {
+			return nil
+		}
+		if !reflect.DeepEqual(om.Annotations, expected) {
+			return fmt.Errorf("%s annotations don't match.\nExpected: %q\nGiven: %q",
+				om.Name, expected, om.Annotations)
+		}
+		return nil
+	}
+}
+
+func testAccCheckMetaLabels(om *api.ObjectMeta, expected map[string]string) resource.TestCheckFunc {
+	return func(s *terraform.State) error {
+		if len(expected) == 0 && len(om.Labels) == 0 {
+			return nil
+		}
+		if !reflect.DeepEqual(om.Labels, expected) {
+			return fmt.Errorf("%s labels don't match.\nExpected: %q\nGiven: %q",
+				om.Name, expected, om.Labels)
+		}
+		return nil
+	}
+}
+
+func testAccCheckKubernetesNamespaceDestroy(s *terraform.State) error {
+	conn := testAccProvider.Meta().(*kubernetes.Clientset)
+
+	for _, rs := range s.RootModule().Resources {
+		if rs.Type != "kubernetes_namespace" {
+			continue
+		}
+
+		resp, err := conn.CoreV1().Namespaces().Get(rs.Primary.ID)
+		if err == nil {
+			if resp.Name == rs.Primary.ID {
+				return fmt.Errorf("Namespace still exists: %s", rs.Primary.ID)
+			}
+		}
+	}
+
+	return nil
+}
+
+func testAccCheckKubernetesNamespaceExists(n string, obj *api.Namespace) resource.TestCheckFunc {
+	return func(s *terraform.State) error {
+		rs, ok := s.RootModule().Resources[n]
+		if !ok {
+			return fmt.Errorf("Not found: %s", n)
+		}
+
+		conn := testAccProvider.Meta().(*kubernetes.Clientset)
+		out, err := conn.CoreV1().Namespaces().Get(rs.Primary.ID)
+		if err != nil {
+			return err
+		}
+
+		*obj = *out
+		return nil
+	}
+}
+
+func testAccKubernetesNamespaceConfig_basic(nsName string) string {
+	return fmt.Sprintf(`
+resource "kubernetes_namespace" "test" {
+	metadata {
+		annotations {
+			TestAnnotationOne = "one"
+			TestAnnotationTwo = "two"
+		}
+		labels {
+			TestLabelOne = "one"
+			TestLabelTwo = "two"
+			TestLabelThree = "three"
+		}
+		name = "%s"
+	}
+}`, nsName)
+}
+
+func testAccKubernetesNamespaceConfig_smallerLists(nsName string) string {
+	return fmt.Sprintf(`
+resource "kubernetes_namespace" "test" {
+	metadata {
+		annotations {
+			TestAnnotationOne = "one"
+			Different = "1234"
+		}
+		labels {
+			TestLabelOne = "one"
+			TestLabelThree = "three"
+		}
+		name = "%s"
+	}
+}`, nsName)
+}
+
+func testAccKubernetesNamespaceConfig_noLists(nsName string) string {
+	return fmt.Sprintf(`
+resource "kubernetes_namespace" "test" {
+	metadata {
+		name = "%s"
+	}
+}`, nsName)
+}
+
+func testAccKubernetesNamespaceConfig_generatedName(prefix string) string {
+	return fmt.Sprintf(`
+resource "kubernetes_namespace" "test" {
+	metadata {
+		generate_name = "%s"
+	}
+}`, prefix)
+}
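Namespace deletion is asynchronous: the API object lingers in phase Terminating while the server tears down its contents, so the Delete function above polls until the 404 arrives. The skeleton of that wait, with the non-obvious part called out: in this vintage of helper/resource, a nil refresh result combined with an empty Target list is treated by WaitForState as "the thing is gone", which is exactly the success condition here.

    stateConf := &resource.StateChangeConf{
    	Pending: []string{"Terminating"},
    	Target:  []string{}, // success == object no longer found
    	Timeout: 5 * time.Minute,
    	Refresh: func() (interface{}, string, error) {
    		out, err := conn.CoreV1().Namespaces().Get(name)
    		if err != nil {
    			if statusErr, ok := err.(*errors.StatusError); ok && statusErr.ErrStatus.Code == 404 {
    				return nil, "", nil // nil result + empty Target: treated as deleted
    			}
    			return out, "Error", err
    		}
    		return out, fmt.Sprintf("%v", out.Status.Phase), nil // e.g. "Terminating"
    	},
    }
    if _, err := stateConf.WaitForState(); err != nil {
    	return err
    }
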
diff --git a/builtin/providers/kubernetes/schema_metadata.go b/builtin/providers/kubernetes/schema_metadata.go
new file mode 100644
index 0000000000..27644f83ad
--- /dev/null
+++ b/builtin/providers/kubernetes/schema_metadata.go
@@ -0,0 +1,106 @@
+package kubernetes
+
+import (
+	"fmt"
+
+	"github.com/hashicorp/terraform/helper/schema"
+)
+
+func metadataFields(objectName string) map[string]*schema.Schema {
+	return map[string]*schema.Schema{
+		"annotations": {
+			Type:         schema.TypeMap,
+			Description:  fmt.Sprintf("An unstructured key value map stored with the %s that may be used to store arbitrary metadata. More info: http://kubernetes.io/docs/user-guide/annotations", objectName),
+			Optional:     true,
+			ValidateFunc: validateAnnotations,
+		},
+		"generation": {
+			Type:        schema.TypeInt,
+			Description: "A sequence number representing a specific generation of the desired state.",
+			Computed:    true,
+		},
+		"labels": {
+			Type:         schema.TypeMap,
+			Description:  fmt.Sprintf("Map of string keys and values that can be used to organize and categorize (scope and select) the %s. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels", objectName),
+			Optional:     true,
+			ValidateFunc: validateLabels,
+		},
+		"name": {
+			Type:          schema.TypeString,
+			Description:   fmt.Sprintf("Name of the %s, must be unique. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names", objectName),
+			Optional:      true,
+			ForceNew:      true,
+			Computed:      true,
+			ValidateFunc:  validateName,
+			ConflictsWith: []string{"metadata.generate_name"},
+		},
+		"resource_version": {
+			Type:        schema.TypeString,
+			Description: fmt.Sprintf("An opaque value that represents the internal version of this %s that can be used by clients to determine when %s has changed. Read more: https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md#concurrency-control-and-consistency", objectName, objectName),
+			Computed:    true,
+		},
+		"self_link": {
+			Type:        schema.TypeString,
+			Description: fmt.Sprintf("A URL representing this %s.", objectName),
+			Computed:    true,
+		},
+		"uid": {
+			Type:        schema.TypeString,
+			Description: fmt.Sprintf("The unique in time and space value for this %s. More info: http://kubernetes.io/docs/user-guide/identifiers#uids", objectName),
+			Computed:    true,
+		},
+	}
+}
+
+func metadataSchema(objectName string) *schema.Schema {
+	fields := metadataFields(objectName)
+	fields["generate_name"] = &schema.Schema{
+		Type:          schema.TypeString,
+		Description:   "Prefix, used by the server, to generate a unique name ONLY IF the `name` field has not been provided. This value will also be combined with a unique suffix. Read more: https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md#idempotency",
+		Optional:      true,
+		ForceNew:      true,
+		ValidateFunc:  validateGenerateName,
+		ConflictsWith: []string{"metadata.name"},
+	}
+
+	return &schema.Schema{
+		Type:        schema.TypeList,
+		Description: fmt.Sprintf("Standard %s's metadata. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md#metadata", objectName),
+		Required:    true,
+		MaxItems:    1,
+		Elem: &schema.Resource{
+			Schema: fields,
+		},
+	}
+}
+
+func namespacedMetadataSchema(objectName string, generatableName bool) *schema.Schema {
+	fields := metadataFields(objectName)
+	fields["namespace"] = &schema.Schema{
+		Type:        schema.TypeString,
+		Description: fmt.Sprintf("Namespace defines the space within which name of the %s must be unique.", objectName),
+		Optional:    true,
+		ForceNew:    true,
+		Default:     "default",
+	}
+	if generatableName {
+		fields["generate_name"] = &schema.Schema{
+			Type:          schema.TypeString,
+			Description:   "Prefix, used by the server, to generate a unique name ONLY IF the `name` field has not been provided. This value will also be combined with a unique suffix. Read more: https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md#idempotency",
+			Optional:      true,
+			ForceNew:      true,
+			ValidateFunc:  validateGenerateName,
+			ConflictsWith: []string{"metadata.name"},
+		}
+	}
+
+	return &schema.Schema{
+		Type:        schema.TypeList,
+		Description: fmt.Sprintf("Standard %s's metadata. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md#metadata", objectName),
+		Required:    true,
+		MaxItems:    1,
+		Elem: &schema.Resource{
+			Schema: fields,
+		},
+	}
+}
diff --git a/builtin/providers/kubernetes/structures.go b/builtin/providers/kubernetes/structures.go
new file mode 100644
index 0000000000..8b98cee327
--- /dev/null
+++ b/builtin/providers/kubernetes/structures.go
@@ -0,0 +1,66 @@
+package kubernetes
+
+import (
+	"fmt"
+	"strings"
+
+	api "k8s.io/kubernetes/pkg/api/v1"
+)
+
+func idParts(id string) (string, string) {
+	parts := strings.Split(id, "/")
+	return parts[0], parts[1]
+}
+
+func buildId(meta api.ObjectMeta) string {
+	return meta.Namespace + "/" + meta.Name
+}
+
+func expandMetadata(in []interface{}) api.ObjectMeta {
+	meta := api.ObjectMeta{}
+	if len(in) < 1 {
+		return meta
+	}
+	m := in[0].(map[string]interface{})
+
+	meta.Annotations = expandStringMap(m["annotations"].(map[string]interface{}))
+	meta.Labels = expandStringMap(m["labels"].(map[string]interface{}))
+
+	if v, ok := m["generate_name"]; ok {
+		meta.GenerateName = v.(string)
+	}
+	if v, ok := m["name"]; ok {
+		meta.Name = v.(string)
+	}
+	if v, ok := m["namespace"]; ok {
+		meta.Namespace = v.(string)
+	}
+
+	return meta
+}
+
+func expandStringMap(m map[string]interface{}) map[string]string {
+	result := make(map[string]string)
+	for k, v := range m {
+		result[k] = v.(string)
+	}
+	return result
+}
+
+func flattenMetadata(meta api.ObjectMeta) []map[string]interface{} {
+	m := make(map[string]interface{})
+	m["annotations"] = meta.Annotations
+	m["generate_name"] = meta.GenerateName
+	m["labels"] = meta.Labels
+	m["name"] = meta.Name
+	m["resource_version"] = meta.ResourceVersion
+	m["self_link"] = meta.SelfLink
+	m["uid"] = fmt.Sprintf("%v", meta.UID)
+	m["generation"] = meta.Generation
+
+	if meta.Namespace != "" {
+		m["namespace"] = meta.Namespace
+	}
+
+	return []map[string]interface{}{m}
+}
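structures.go fixes the state ID convention for namespaced objects: buildId emits namespace/name on the way out, and idParts splits on "/" on the way in. A quick round-trip, with an illustrative id (note that idParts assumes a well-formed two-part id; importing a bare name with no "/" would panic on parts[1]):

    meta := api.ObjectMeta{Namespace: "default", Name: "my-config"}
    id := buildId(meta)            // "default/my-config"
    namespace, name := idParts(id) // "default", "my-config"
    fmt.Println(namespace, name)
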
"random_id" "cluster_name" { + byte_length = 10 +} +resource "random_id" "username" { + byte_length = 14 +} +resource "random_id" "password" { + byte_length = 16 +} + +resource "google_container_cluster" "primary" { + name = "tf-acc-test-${random_id.cluster_name.hex}" + zone = "${data.google_compute_zones.available.names[0]}" + initial_node_count = 3 + + additional_zones = [ + "${data.google_compute_zones.available.names[1]}" + ] + + master_auth { + username = "${random_id.username.hex}" + password = "${random_id.password.hex}" + } + + node_config { + oauth_scopes = [ + "https://www.googleapis.com/auth/compute", + "https://www.googleapis.com/auth/devstorage.read_only", + "https://www.googleapis.com/auth/logging.write", + "https://www.googleapis.com/auth/monitoring" + ] + } +} + +output "endpoint" { + value = "${google_container_cluster.primary.endpoint}" +} + +output "username" { + value = "${google_container_cluster.primary.master_auth.0.username}" +} + +output "password" { + value = "${google_container_cluster.primary.master_auth.0.password}" +} + +output "client_certificate_b64" { + value = "${google_container_cluster.primary.master_auth.0.client_certificate}" +} + +output "client_key_b64" { + value = "${google_container_cluster.primary.master_auth.0.client_key}" +} + +output "cluster_ca_certificate_b64" { + value = "${google_container_cluster.primary.master_auth.0.cluster_ca_certificate}" +} diff --git a/builtin/providers/kubernetes/validators.go b/builtin/providers/kubernetes/validators.go new file mode 100644 index 0000000000..22309a34e2 --- /dev/null +++ b/builtin/providers/kubernetes/validators.go @@ -0,0 +1,60 @@ +package kubernetes + +import ( + "fmt" + "strings" + + apiValidation "k8s.io/kubernetes/pkg/api/validation" + utilValidation "k8s.io/kubernetes/pkg/util/validation" +) + +func validateAnnotations(value interface{}, key string) (ws []string, es []error) { + m := value.(map[string]interface{}) + for k, _ := range m { + errors := utilValidation.IsQualifiedName(strings.ToLower(k)) + if len(errors) > 0 { + for _, e := range errors { + es = append(es, fmt.Errorf("%s (%q) %s", key, k, e)) + } + } + } + return +} + +func validateName(value interface{}, key string) (ws []string, es []error) { + v := value.(string) + + errors := apiValidation.NameIsDNSLabel(v, false) + if len(errors) > 0 { + for _, err := range errors { + es = append(es, fmt.Errorf("%s %s", key, err)) + } + } + return +} + +func validateGenerateName(value interface{}, key string) (ws []string, es []error) { + v := value.(string) + + errors := apiValidation.NameIsDNSLabel(v, true) + if len(errors) > 0 { + for _, err := range errors { + es = append(es, fmt.Errorf("%s %s", key, err)) + } + } + return +} + +func validateLabels(value interface{}, key string) (ws []string, es []error) { + m := value.(map[string]interface{}) + for k, v := range m { + for _, msg := range utilValidation.IsQualifiedName(k) { + es = append(es, fmt.Errorf("%s (%q) %s", key, k, msg)) + } + val := v.(string) + for _, msg := range utilValidation.IsValidLabelValue(val) { + es = append(es, fmt.Errorf("%s (%q) %s", key, val, msg)) + } + } + return +} diff --git a/builtin/providers/librato/resource_librato_alert.go b/builtin/providers/librato/resource_librato_alert.go index ac61f45448..88ca52dc52 100644 --- a/builtin/providers/librato/resource_librato_alert.go +++ b/builtin/providers/librato/resource_librato_alert.go @@ -5,6 +5,7 @@ import ( "fmt" "log" "math" + "reflect" "strconv" "time" @@ -27,10 +28,6 @@ func resourceLibratoAlert() 
diff --git a/builtin/providers/librato/resource_librato_alert.go b/builtin/providers/librato/resource_librato_alert.go
index ac61f45448..88ca52dc52 100644
--- a/builtin/providers/librato/resource_librato_alert.go
+++ b/builtin/providers/librato/resource_librato_alert.go
@@ -5,6 +5,7 @@ import (
 	"fmt"
 	"log"
 	"math"
+	"reflect"
 	"strconv"
 	"time"
 
@@ -27,10 +28,6 @@ func resourceLibratoAlert() *schema.Resource {
 				Required: true,
 				ForceNew: false,
 			},
-			"id": &schema.Schema{
-				Type:     schema.TypeInt,
-				Computed: true,
-			},
 			"description": &schema.Schema{
 				Type:     schema.TypeString,
 				Optional: true,
@@ -214,6 +211,7 @@ func resourceLibratoAlertCreate(d *schema.ResourceData, meta interface{}) error
 	if err != nil {
 		return fmt.Errorf("Error creating Librato alert %s: %s", *alert.Name, err)
 	}
+	log.Printf("[INFO] Created Librato alert: %s", *alertResult)
 
 	resource.Retry(1*time.Minute, func() *resource.RetryError {
 		_, _, err := client.Alerts.Get(*alertResult.ID)
@@ -226,7 +224,9 @@ func resourceLibratoAlertCreate(d *schema.ResourceData, meta interface{}) error
 		return nil
 	})
 
-	return resourceLibratoAlertReadResult(d, alertResult)
+	d.SetId(strconv.FormatUint(uint64(*alertResult.ID), 10))
+
+	return resourceLibratoAlertRead(d, meta)
 }
 
 func resourceLibratoAlertRead(d *schema.ResourceData, meta interface{}) error {
@@ -236,6 +236,7 @@ func resourceLibratoAlertRead(d *schema.ResourceData, meta interface{}) error {
 		return err
 	}
 
+	log.Printf("[INFO] Reading Librato Alert: %d", id)
 	alert, _, err := client.Alerts.Get(uint(id))
 	if err != nil {
 		if errResp, ok := err.(*librato.ErrorResponse); ok && errResp.Response.StatusCode == 404 {
@@ -244,23 +245,22 @@ func resourceLibratoAlertRead(d *schema.ResourceData, meta interface{}) error {
 		}
 		return fmt.Errorf("Error reading Librato Alert %s: %s", d.Id(), err)
 	}
+	log.Printf("[INFO] Received Librato Alert: %s", *alert)
 
 	return resourceLibratoAlertReadResult(d, alert)
 }
 
 func resourceLibratoAlertReadResult(d *schema.ResourceData, alert *librato.Alert) error {
-	d.SetId(strconv.FormatUint(uint64(*alert.ID), 10))
-	d.Set("id", *alert.ID)
 	d.Set("name", *alert.Name)
 	d.Set("description", *alert.Description)
 	d.Set("active", *alert.Active)
 	d.Set("rearm_seconds", *alert.RearmSeconds)
 
 	services := resourceLibratoAlertServicesGather(d, alert.Services.([]interface{}))
-	d.Set("services", services)
+	d.Set("services", schema.NewSet(schema.HashString, services))
 
 	conditions := resourceLibratoAlertConditionsGather(d, alert.Conditions)
-	d.Set("condition", conditions)
+	d.Set("condition", schema.NewSet(resourceLibratoAlertConditionsHash, conditions))
 
 	attributes := resourceLibratoAlertAttributesGather(d, alert.Attributes)
 	d.Set("attributes", attributes)
@@ -268,8 +268,8 @@ func resourceLibratoAlertReadResult(d *schema.ResourceData, alert *librato.Alert
 	return nil
 }
 
-func resourceLibratoAlertServicesGather(d *schema.ResourceData, services []interface{}) []string {
-	retServices := make([]string, 0, len(services))
+func resourceLibratoAlertServicesGather(d *schema.ResourceData, services []interface{}) []interface{} {
+	retServices := make([]interface{}, 0, len(services))
 
 	for _, s := range services {
 		serviceData := s.(map[string]interface{})
@@ -280,8 +280,8 @@ func resourceLibratoAlertServicesGather(d *schema.ResourceData, services []inter
 	return retServices
 }
 
-func resourceLibratoAlertConditionsGather(d *schema.ResourceData, conditions []librato.AlertCondition) []map[string]interface{} {
-	retConditions := make([]map[string]interface{}, 0, len(conditions))
+func resourceLibratoAlertConditionsGather(d *schema.ResourceData, conditions []librato.AlertCondition) []interface{} {
+	retConditions := make([]interface{}, 0, len(conditions))
 	for _, c := range conditions {
 		condition := make(map[string]interface{})
 		if c.Type != nil {
@@ -300,7 +300,7 @@ func resourceLibratoAlertConditionsGather(d *schema.ResourceData, conditions []l
 			condition["detect_reset"] = *c.MetricName
 		}
 		if c.Duration != nil {
-			condition["duration"] = *c.Duration
+			condition["duration"] = int(*c.Duration)
 		}
 		if c.SummaryFunction != nil {
 			condition["summary_function"] = *c.SummaryFunction
@@ -334,16 +334,25 @@ func resourceLibratoAlertUpdate(d *schema.ResourceData, meta interface{}) error
 		return err
 	}
 
+	// Just to have whole object for comparison before/after update
+	fullAlert, _, err := client.Alerts.Get(uint(alertID))
+	if err != nil {
+		return err
+	}
+
 	alert := new(librato.Alert)
 	alert.Name = librato.String(d.Get("name").(string))
 	if d.HasChange("description") {
 		alert.Description = librato.String(d.Get("description").(string))
+		fullAlert.Description = alert.Description
 	}
 	if d.HasChange("active") {
 		alert.Active = librato.Bool(d.Get("active").(bool))
+		fullAlert.Active = alert.Active
 	}
 	if d.HasChange("rearm_seconds") {
 		alert.RearmSeconds = librato.Uint(uint(d.Get("rearm_seconds").(int)))
+		fullAlert.RearmSeconds = alert.RearmSeconds
 	}
 	if d.HasChange("services") {
 		vs := d.Get("services").(*schema.Set)
@@ -352,6 +361,7 @@ func resourceLibratoAlertUpdate(d *schema.ResourceData, meta interface{}) error
 			services[i] = librato.String(serviceData.(string))
 		}
 		alert.Services = services
+		fullAlert.Services = alert.Services
 	}
 
 	vs := d.Get("condition").(*schema.Set)
@@ -382,6 +392,7 @@ func resourceLibratoAlertUpdate(d *schema.ResourceData, meta interface{}) error
 		}
 		conditions[i] = condition
 		alert.Conditions = conditions
+		fullAlert.Conditions = conditions
 	}
 	if d.HasChange("attributes") {
 		attributeData := d.Get("attributes").([]interface{})
@@ -397,14 +408,42 @@ func resourceLibratoAlertUpdate(d *schema.ResourceData, meta interface{}) error
 			attributes.RunbookURL = librato.String(v)
 		}
 		alert.Attributes = attributes
+		fullAlert.Attributes = attributes
 	}
 }
 
+	log.Printf("[INFO] Updating Librato alert: %s", alert)
 	_, err = client.Alerts.Edit(uint(alertID), alert)
 	if err != nil {
 		return fmt.Errorf("Error updating Librato alert: %s", err)
 	}
 
+	log.Printf("[INFO] Updated Librato alert %d", alertID)
+
+	// Wait for propagation since Librato updates are eventually consistent
+	wait := resource.StateChangeConf{
+		Pending:                   []string{fmt.Sprintf("%t", false)},
+		Target:                    []string{fmt.Sprintf("%t", true)},
+		Timeout:                   5 * time.Minute,
+		MinTimeout:                2 * time.Second,
+		ContinuousTargetOccurence: 5,
+		Refresh: func() (interface{}, string, error) {
+			log.Printf("[DEBUG] Checking if Librato Alert %d was updated yet", alertID)
+			changedAlert, _, err := client.Alerts.Get(uint(alertID))
+			if err != nil {
+				return changedAlert, "", err
+			}
+			isEqual := reflect.DeepEqual(*fullAlert, *changedAlert)
+			log.Printf("[DEBUG] Updated Librato Alert %d match: %t", alertID, isEqual)
+			return changedAlert, fmt.Sprintf("%t", isEqual), nil
+		},
+	}
+
+	_, err = wait.WaitForState()
+	if err != nil {
+		return fmt.Errorf("Failed updating Librato Alert %d: %s", alertID, err)
+	}
+
	return resourceLibratoAlertRead(d, meta)
 }
diff --git a/builtin/providers/librato/resource_librato_service.go b/builtin/providers/librato/resource_librato_service.go
index 786d8c7d8e..e289fee0d7 100644
--- a/builtin/providers/librato/resource_librato_service.go
+++ b/builtin/providers/librato/resource_librato_service.go
@@ -4,6 +4,7 @@ import (
 	"encoding/json"
 	"fmt"
 	"log"
+	"reflect"
 	"strconv"
 	"time"
 
@@ -124,6 +125,7 @@ func resourceLibratoServiceRead(d *schema.ResourceData, meta interface{}) error
 		return err
 	}
 
+	log.Printf("[INFO] Reading Librato Service: %d", id)
 	service, _, err := client.Services.Get(uint(id))
 	if err != nil {
 		if errResp, ok := err.(*librato.ErrorResponse); ok && errResp.Response.StatusCode == 404 {
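All three Librato resources in this patch adopt the same convergence idiom: the Librato API acknowledges an update before the change is readable back, so after Edit the update function polls until the object read from the API is DeepEqual to the expected post-update state, and requires five consecutive matches (ContinuousTargetOccurence: 5) to ride out flapping reads. The reusable shape of that wait, reduced from the alert version above with the moving parts commented:

    wait := resource.StateChangeConf{
    	Pending:                   []string{"false"},
    	Target:                    []string{"true"},
    	Timeout:                   5 * time.Minute,
    	MinTimeout:                2 * time.Second,
    	ContinuousTargetOccurence: 5, // demand several matching reads in a row
    	Refresh: func() (interface{}, string, error) {
    		current, _, err := client.Alerts.Get(uint(alertID))
    		if err != nil {
    			return current, "", err
    		}
    		// The "state" is simply whether the remote object matches what we wrote.
    		return current, fmt.Sprintf("%t", reflect.DeepEqual(*fullAlert, *current)), nil
    	},
    }
    if _, err := wait.WaitForState(); err != nil {
    	return fmt.Errorf("Failed updating Librato Alert %d: %s", alertID, err)
    }
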
&& errResp.Response.StatusCode == 404 { @@ -132,6 +134,7 @@ func resourceLibratoServiceRead(d *schema.ResourceData, meta interface{}) error } return fmt.Errorf("Error reading Librato Service %s: %s", d.Id(), err) } + log.Printf("[INFO] Received Librato Service: %s", service) return resourceLibratoServiceReadResult(d, service) } @@ -155,12 +158,20 @@ func resourceLibratoServiceUpdate(d *schema.ResourceData, meta interface{}) erro return err } + // Just to have whole object for comparison before/after update + fullService, _, err := client.Services.Get(uint(serviceID)) + if err != nil { + return err + } + service := new(librato.Service) if d.HasChange("type") { service.Type = librato.String(d.Get("type").(string)) + fullService.Type = service.Type } if d.HasChange("title") { service.Title = librato.String(d.Get("title").(string)) + fullService.Title = service.Title } if d.HasChange("settings") { res, err := resourceLibratoServicesExpandSettings(normalizeJson(d.Get("settings").(string))) @@ -168,12 +179,39 @@ func resourceLibratoServiceUpdate(d *schema.ResourceData, meta interface{}) erro return fmt.Errorf("Error expanding Librato service settings: %s", err) } service.Settings = res + fullService.Settings = res } + log.Printf("[INFO] Updating Librato Service %d: %s", serviceID, service) _, err = client.Services.Edit(uint(serviceID), service) if err != nil { return fmt.Errorf("Error updating Librato service: %s", err) } + log.Printf("[INFO] Updated Librato Service %d", serviceID) + + // Wait for propagation since Librato updates are eventually consistent + wait := resource.StateChangeConf{ + Pending: []string{fmt.Sprintf("%t", false)}, + Target: []string{fmt.Sprintf("%t", true)}, + Timeout: 5 * time.Minute, + MinTimeout: 2 * time.Second, + ContinuousTargetOccurence: 5, + Refresh: func() (interface{}, string, error) { + log.Printf("[DEBUG] Checking if Librato Service %d was updated yet", serviceID) + changedService, _, err := client.Services.Get(uint(serviceID)) + if err != nil { + return changedService, "", err + } + isEqual := reflect.DeepEqual(*fullService, *changedService) + log.Printf("[DEBUG] Updated Librato Service %d match: %t", serviceID, isEqual) + return changedService, fmt.Sprintf("%t", isEqual), nil + }, + } + + _, err = wait.WaitForState() + if err != nil { + return fmt.Errorf("Failed updating Librato Service %d: %s", serviceID, err) + } return resourceLibratoServiceRead(d, meta) } diff --git a/builtin/providers/librato/resource_librato_space_chart.go b/builtin/providers/librato/resource_librato_space_chart.go index dea499974d..a010efc9f4 100644 --- a/builtin/providers/librato/resource_librato_space_chart.go +++ b/builtin/providers/librato/resource_librato_space_chart.go @@ -5,6 +5,7 @@ import ( "fmt" "log" "math" + "reflect" "strconv" "time" @@ -339,9 +340,16 @@ func resourceLibratoSpaceChartUpdate(d *schema.ResourceData, meta interface{}) e return err } + // Just to have whole object for comparison before/after update + fullChart, _, err := client.Spaces.GetChart(spaceID, uint(chartID)) + if err != nil { + return err + } + spaceChart := new(librato.SpaceChart) if d.HasChange("name") { spaceChart.Name = librato.String(d.Get("name").(string)) + fullChart.Name = spaceChart.Name } if d.HasChange("min") { if math.IsNaN(d.Get("min").(float64)) { @@ -349,6 +357,7 @@ func resourceLibratoSpaceChartUpdate(d *schema.ResourceData, meta interface{}) e } else { spaceChart.Min = librato.Float(d.Get("min").(float64)) } + fullChart.Min = spaceChart.Min } if d.HasChange("max") { if 
math.IsNaN(d.Get("max").(float64)) { @@ -356,12 +365,15 @@ func resourceLibratoSpaceChartUpdate(d *schema.ResourceData, meta interface{}) e } else { spaceChart.Max = librato.Float(d.Get("max").(float64)) } + fullChart.Max = spaceChart.Max } if d.HasChange("label") { spaceChart.Label = librato.String(d.Get("label").(string)) + fullChart.Label = spaceChart.Label } if d.HasChange("related_space") { spaceChart.RelatedSpace = librato.Uint(d.Get("related_space").(uint)) + fullChart.RelatedSpace = spaceChart.RelatedSpace } if d.HasChange("stream") { vs := d.Get("stream").(*schema.Set) @@ -405,6 +417,7 @@ func resourceLibratoSpaceChartUpdate(d *schema.ResourceData, meta interface{}) e streams[i] = stream } spaceChart.Streams = streams + fullChart.Streams = streams } _, err = client.Spaces.EditChart(spaceID, uint(chartID), spaceChart) @@ -412,6 +425,30 @@ func resourceLibratoSpaceChartUpdate(d *schema.ResourceData, meta interface{}) e return fmt.Errorf("Error updating Librato space chart %s: %s", *spaceChart.Name, err) } + // Wait for propagation since Librato updates are eventually consistent + wait := resource.StateChangeConf{ + Pending: []string{fmt.Sprintf("%t", false)}, + Target: []string{fmt.Sprintf("%t", true)}, + Timeout: 5 * time.Minute, + MinTimeout: 2 * time.Second, + ContinuousTargetOccurence: 5, + Refresh: func() (interface{}, string, error) { + log.Printf("[DEBUG] Checking if Librato Space Chart %d was updated yet", chartID) + changedChart, _, err := client.Spaces.GetChart(spaceID, uint(chartID)) + if err != nil { + return changedChart, "", err + } + isEqual := reflect.DeepEqual(*fullChart, *changedChart) + log.Printf("[DEBUG] Updated Librato Space Chart %d match: %t", chartID, isEqual) + return changedChart, fmt.Sprintf("%t", isEqual), nil + }, + } + + _, err = wait.WaitForState() + if err != nil { + return fmt.Errorf("Failed updating Librato Space Chart %d: %s", chartID, err) + } + return resourceLibratoSpaceChartRead(d, meta) } diff --git a/builtin/providers/mysql/resource_grant.go b/builtin/providers/mysql/resource_grant.go index 5513628850..0414fe4418 100644 --- a/builtin/providers/mysql/resource_grant.go +++ b/builtin/providers/mysql/resource_grant.go @@ -88,7 +88,18 @@ func CreateGrant(d *schema.ResourceData, meta interface{}) error { } func ReadGrant(d *schema.ResourceData, meta interface{}) error { - // At this time, all attributes are supplied by the user + conn := meta.(*providerConfiguration).Conn + + stmtSQL := fmt.Sprintf("SHOW GRANTS FOR '%s'@'%s'", + d.Get("user").(string), + d.Get("host").(string)) + + log.Println("Executing statement:", stmtSQL) + + _, _, err := conn.Query(stmtSQL) + if err != nil { + d.SetId("") + } return nil } diff --git a/builtin/providers/mysql/resource_user.go b/builtin/providers/mysql/resource_user.go index 7cdf5b8123..ce9bec1186 100644 --- a/builtin/providers/mysql/resource_user.go +++ b/builtin/providers/mysql/resource_user.go @@ -95,7 +95,21 @@ func UpdateUser(d *schema.ResourceData, meta interface{}) error { } func ReadUser(d *schema.ResourceData, meta interface{}) error { - // At this time, all attributes are supplied by the user + conn := meta.(*providerConfiguration).Conn + + stmtSQL := fmt.Sprintf("SELECT USER FROM mysql.user WHERE USER='%s'", + d.Get("user").(string)) + + log.Println("Executing statement:", stmtSQL) + + rows, _, err := conn.Query(stmtSQL) + log.Println("Returned rows:", len(rows)) + if err != nil { + return err + } + if len(rows) == 0 { + d.SetId("") + } return nil } diff --git a/builtin/providers/ns1/config.go 
b/builtin/providers/ns1/config.go new file mode 100644 index 0000000000..92f0806870 --- /dev/null +++ b/builtin/providers/ns1/config.go @@ -0,0 +1,46 @@ +package ns1 + +import ( + "crypto/tls" + "errors" + "log" + "net/http" + + ns1 "gopkg.in/ns1/ns1-go.v2/rest" +) + +type Config struct { + Key string + Endpoint string + IgnoreSSL bool +} + +// Client returns a new NS1 client. +func (c *Config) Client() (*ns1.Client, error) { + httpClient := &http.Client{} + decos := []func(*ns1.Client){} + + if c.Key == "" { + return nil, errors.New(`No valid credential sources found for NS1 Provider. + Please see https://terraform.io/docs/providers/ns1/index.html for more information on + providing credentials for the NS1 Provider`) + } + + decos = append(decos, ns1.SetAPIKey(c.Key)) + if c.Endpoint != "" { + decos = append(decos, ns1.SetEndpoint(c.Endpoint)) + } + if c.IgnoreSSL { + tr := &http.Transport{ + TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, + } + httpClient.Transport = tr + } + + client := ns1.NewClient(httpClient, decos...) + client.RateLimitStrategySleep() + + log.Printf("[INFO] NS1 Client configured for Endpoint: %s", client.Endpoint.String()) + + return client, nil +} diff --git a/builtin/providers/ns1/provider.go b/builtin/providers/ns1/provider.go index 2f0e383445..ab0f546113 100644 --- a/builtin/providers/ns1/provider.go +++ b/builtin/providers/ns1/provider.go @@ -1,13 +1,8 @@ package ns1 import ( - "crypto/tls" - "net/http" - "github.com/hashicorp/terraform/helper/schema" "github.com/hashicorp/terraform/terraform" - - ns1 "gopkg.in/ns1/ns1-go.v2/rest" ) // Provider returns a terraform.ResourceProvider. @@ -49,22 +44,18 @@ func Provider() terraform.ResourceProvider { } func ns1Configure(d *schema.ResourceData) (interface{}, error) { - httpClient := &http.Client{} - decos := []func(*ns1.Client){} - decos = append(decos, ns1.SetAPIKey(d.Get("apikey").(string))) - if v, ok := d.GetOk("endpoint"); ok { - decos = append(decos, ns1.SetEndpoint(v.(string))) - } - if _, ok := d.GetOk("ignore_ssl"); ok { - tr := &http.Transport{ - TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, - } - httpClient.Transport = tr + config := Config{ + Key: d.Get("apikey").(string), } - n := ns1.NewClient(httpClient, decos...) 
- n.RateLimitStrategySleep() - return n, nil + if v, ok := d.GetOk("endpoint"); ok { + config.Endpoint = v.(string) + } + if v, ok := d.GetOk("ignore_ssl"); ok { + config.IgnoreSSL = v.(bool) + } + + return config.Client() } var descriptions map[string]string diff --git a/builtin/providers/ns1/resource_record_test.go b/builtin/providers/ns1/resource_record_test.go index 294e747069..e73a143cc4 100644 --- a/builtin/providers/ns1/resource_record_test.go +++ b/builtin/providers/ns1/resource_record_test.go @@ -120,9 +120,9 @@ func testAccCheckRecordDestroy(s *terraform.State) error { } } - foundRecord, _, err := client.Records.Get(recordDomain, recordZone, recordType) - if err != nil { - return fmt.Errorf("Record still exists: %#v", foundRecord) + foundRecord, _, err := client.Records.Get(recordZone, recordDomain, recordType) + if err != ns1.ErrRecordMissing { + return fmt.Errorf("Record still exists: %#v %#v", foundRecord, err) } return nil diff --git a/builtin/providers/ns1/resource_user.go b/builtin/providers/ns1/resource_user.go index 0add2b4291..012d021f9c 100644 --- a/builtin/providers/ns1/resource_user.go +++ b/builtin/providers/ns1/resource_user.go @@ -28,14 +28,7 @@ func userResource() *schema.Resource { "notify": &schema.Schema{ Type: schema.TypeMap, Optional: true, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "billing": &schema.Schema{ - Type: schema.TypeBool, - Required: true, - }, - }, - }, + Elem: schema.TypeBool, }, "teams": &schema.Schema{ Type: schema.TypeList, diff --git a/builtin/providers/ns1/resource_user_test.go b/builtin/providers/ns1/resource_user_test.go new file mode 100644 index 0000000000..b32d7e4537 --- /dev/null +++ b/builtin/providers/ns1/resource_user_test.go @@ -0,0 +1,102 @@ +package ns1 + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" + + ns1 "gopkg.in/ns1/ns1-go.v2/rest" + "gopkg.in/ns1/ns1-go.v2/rest/model/account" +) + +func TestAccUser_basic(t *testing.T) { + var user account.User + rString := acctest.RandStringFromCharSet(15, acctest.CharSetAlphaNum) + name := fmt.Sprintf("terraform acc test user %s", rString) + username := fmt.Sprintf("tf_acc_test_user_%s", rString) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckUserDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccUserBasic(rString), + Check: resource.ComposeTestCheckFunc( + testAccCheckUserExists("ns1_user.u", &user), + resource.TestCheckResourceAttr("ns1_user.u", "email", "tf_acc_test_ns1@hashicorp.com"), + resource.TestCheckResourceAttr("ns1_user.u", "name", name), + resource.TestCheckResourceAttr("ns1_user.u", "teams.#", "1"), + resource.TestCheckResourceAttr("ns1_user.u", "notify.%", "1"), + resource.TestCheckResourceAttr("ns1_user.u", "notify.billing", "true"), + resource.TestCheckResourceAttr("ns1_user.u", "username", username), + ), + }, + }, + }) +} + +func testAccCheckUserDestroy(s *terraform.State) error { + client := testAccProvider.Meta().(*ns1.Client) + + for _, rs := range s.RootModule().Resources { + if rs.Type != "ns1_user" { + continue + } + + user, _, err := client.Users.Get(rs.Primary.Attributes["id"]) + if err == nil { + return fmt.Errorf("User still exists: %#v: %#v", err, user.Name) + } + } + + return nil +} + +func testAccCheckUserExists(n string, user *account.User) resource.TestCheckFunc { + return func(s 
*terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + client := testAccProvider.Meta().(*ns1.Client) + + foundUser, _, err := client.Users.Get(rs.Primary.ID) + if err != nil { + return err + } + + if foundUser.Username != rs.Primary.ID { + return fmt.Errorf("User not found (%#v != %s)", foundUser, rs.Primary.ID) + } + + *user = *foundUser + + return nil + } +} + +func testAccUserBasic(rString string) string { + return fmt.Sprintf(`resource "ns1_team" "t" { + name = "terraform acc test team %s" +} + +resource "ns1_user" "u" { + name = "terraform acc test user %s" + username = "tf_acc_test_user_%s" + email = "tf_acc_test_ns1@hashicorp.com" + teams = ["${ns1_team.t.id}"] + notify { + billing = true + } +} +`, rString, rString, rString) +} diff --git a/builtin/providers/openstack/data_source_openstack_networking_network_v2.go b/builtin/providers/openstack/data_source_openstack_networking_network_v2.go index 53e7e1a9ff..f7615c41a1 100644 --- a/builtin/providers/openstack/data_source_openstack_networking_network_v2.go +++ b/builtin/providers/openstack/data_source_openstack_networking_network_v2.go @@ -17,6 +17,10 @@ func dataSourceNetworkingNetworkV2() *schema.Resource { Read: dataSourceNetworkingNetworkV2Read, Schema: map[string]*schema.Schema{ + "network_id": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, "name": &schema.Schema{ Type: schema.TypeString, Optional: true, @@ -56,6 +60,7 @@ func dataSourceNetworkingNetworkV2Read(d *schema.ResourceData, meta interface{}) networkingClient, err := config.networkingV2Client(GetRegion(d)) listOpts := networks.ListOpts{ + ID: d.Get("network_id").(string), Name: d.Get("name").(string), TenantID: d.Get("tenant_id").(string), Status: "ACTIVE", diff --git a/builtin/providers/openstack/data_source_openstack_networking_network_v2_test.go b/builtin/providers/openstack/data_source_openstack_networking_network_v2_test.go index e3dfc860dc..db721d15a7 100644 --- a/builtin/providers/openstack/data_source_openstack_networking_network_v2_test.go +++ b/builtin/providers/openstack/data_source_openstack_networking_network_v2_test.go @@ -52,6 +52,28 @@ func TestAccOpenStackNetworkingNetworkV2DataSource_subnet(t *testing.T) { }) } +func TestAccOpenStackNetworkingNetworkV2DataSource_networkID(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccOpenStackNetworkingNetworkV2DataSource_network, + }, + resource.TestStep{ + Config: testAccOpenStackNetworkingNetworkV2DataSource_networkID, + Check: resource.ComposeTestCheckFunc( + testAccCheckNetworkingNetworkV2DataSourceID("data.openstack_networking_network_v2.net"), + resource.TestCheckResourceAttr( + "data.openstack_networking_network_v2.net", "name", "tf_test_network"), + resource.TestCheckResourceAttr( + "data.openstack_networking_network_v2.net", "admin_state_up", "true"), + ), + }, + }, + }) +} + func testAccCheckNetworkingNetworkV2DataSourceID(n string) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] @@ -96,3 +118,11 @@ data "openstack_networking_network_v2" "net" { matching_subnet_cidr = "${openstack_networking_subnet_v2.subnet.cidr}" } `, testAccOpenStackNetworkingNetworkV2DataSource_network) + +var testAccOpenStackNetworkingNetworkV2DataSource_networkID = fmt.Sprintf(` 
+%s + +data "openstack_networking_network_v2" "net" { + network_id = "${openstack_networking_network_v2.net.id}" +} +`, testAccOpenStackNetworkingNetworkV2DataSource_network) diff --git a/builtin/providers/openstack/resource_openstack_blockstorage_volume_attach_v2.go b/builtin/providers/openstack/resource_openstack_blockstorage_volume_attach_v2.go index 94b501ef84..4dd28e7bc9 100644 --- a/builtin/providers/openstack/resource_openstack_blockstorage_volume_attach_v2.go +++ b/builtin/providers/openstack/resource_openstack_blockstorage_volume_attach_v2.go @@ -19,6 +19,11 @@ func resourceBlockStorageVolumeAttachV2() *schema.Resource { Read: resourceBlockStorageVolumeAttachV2Read, Delete: resourceBlockStorageVolumeAttachV2Delete, + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(10 * time.Minute), + Delete: schema.DefaultTimeout(10 * time.Minute), + }, + Schema: map[string]*schema.Schema{ "region": &schema.Schema{ Type: schema.TypeString, @@ -231,7 +236,7 @@ func resourceBlockStorageVolumeAttachV2Create(d *schema.ResourceData, meta inter Pending: []string{"available", "attaching"}, Target: []string{"in-use"}, Refresh: VolumeV2StateRefreshFunc(client, volumeId), - Timeout: 10 * time.Minute, + Timeout: d.Timeout(schema.TimeoutCreate), Delay: 10 * time.Second, MinTimeout: 3 * time.Second, } @@ -369,7 +374,7 @@ func resourceBlockStorageVolumeAttachV2Delete(d *schema.ResourceData, meta inter Pending: []string{"in-use", "attaching", "detaching"}, Target: []string{"available"}, Refresh: VolumeV2StateRefreshFunc(client, volumeId), - Timeout: 10 * time.Minute, + Timeout: d.Timeout(schema.TimeoutDelete), Delay: 10 * time.Second, MinTimeout: 3 * time.Second, } diff --git a/builtin/providers/openstack/resource_openstack_blockstorage_volume_attach_v2_test.go b/builtin/providers/openstack/resource_openstack_blockstorage_volume_attach_v2_test.go index 53fd72fc85..d6b54c4476 100644 --- a/builtin/providers/openstack/resource_openstack_blockstorage_volume_attach_v2_test.go +++ b/builtin/providers/openstack/resource_openstack_blockstorage_volume_attach_v2_test.go @@ -29,6 +29,24 @@ func TestAccBlockStorageVolumeAttachV2_basic(t *testing.T) { }) } +func TestAccBlockStorageVolumeAttachV2_timeout(t *testing.T) { + var va volumes.Attachment + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckBlockStorageVolumeAttachV2Destroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccBlockStorageVolumeAttachV2_timeout, + Check: resource.ComposeTestCheckFunc( + testAccCheckBlockStorageVolumeAttachV2Exists("openstack_blockstorage_volume_attach_v2.va_1", &va), + ), + }, + }, + }) +} + func testAccCheckBlockStorageVolumeAttachV2Destroy(s *terraform.State) error { config := testAccProvider.Meta().(*Config) client, err := config.blockStorageV2Client(OS_REGION_NAME) @@ -124,3 +142,25 @@ resource "openstack_blockstorage_volume_attach_v2" "va_1" { platform = "x86_64" } ` + +const testAccBlockStorageVolumeAttachV2_timeout = ` +resource "openstack_blockstorage_volume_v2" "volume_1" { + name = "volume_1" + size = 1 +} + +resource "openstack_blockstorage_volume_attach_v2" "va_1" { + volume_id = "${openstack_blockstorage_volume_v2.volume_1.id}" + device = "auto" + + host_name = "devstack" + ip_address = "192.168.255.10" + initiator = "iqn.1993-08.org.debian:01:e9861fb1859" + os_type = "linux2" + platform = "x86_64" + + timeouts { + create = "5m" + } +} +` diff --git 
a/builtin/providers/openstack/resource_openstack_blockstorage_volume_v1.go b/builtin/providers/openstack/resource_openstack_blockstorage_volume_v1.go index ed3ef561d4..8c84a08e86 100644 --- a/builtin/providers/openstack/resource_openstack_blockstorage_volume_v1.go +++ b/builtin/providers/openstack/resource_openstack_blockstorage_volume_v1.go @@ -24,6 +24,11 @@ func resourceBlockStorageVolumeV1() *schema.Resource { State: schema.ImportStatePassthrough, }, + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(10 * time.Minute), + Delete: schema.DefaultTimeout(10 * time.Minute), + }, + Schema: map[string]*schema.Schema{ "region": &schema.Schema{ Type: schema.TypeString, @@ -139,7 +144,7 @@ func resourceBlockStorageVolumeV1Create(d *schema.ResourceData, meta interface{} Pending: []string{"downloading", "creating"}, Target: []string{"available"}, Refresh: VolumeV1StateRefreshFunc(blockStorageClient, v.ID), - Timeout: 10 * time.Minute, + Timeout: d.Timeout(schema.TimeoutCreate), Delay: 10 * time.Second, MinTimeout: 3 * time.Second, } @@ -278,7 +283,7 @@ func resourceBlockStorageVolumeV1Delete(d *schema.ResourceData, meta interface{} Pending: []string{"deleting", "downloading", "available"}, Target: []string{"deleted"}, Refresh: VolumeV1StateRefreshFunc(blockStorageClient, d.Id()), - Timeout: 10 * time.Minute, + Timeout: d.Timeout(schema.TimeoutDelete), Delay: 10 * time.Second, MinTimeout: 3 * time.Second, } diff --git a/builtin/providers/openstack/resource_openstack_blockstorage_volume_v1_test.go b/builtin/providers/openstack/resource_openstack_blockstorage_volume_v1_test.go index 85d82f489d..7dd16169e6 100644 --- a/builtin/providers/openstack/resource_openstack_blockstorage_volume_v1_test.go +++ b/builtin/providers/openstack/resource_openstack_blockstorage_volume_v1_test.go @@ -61,6 +61,24 @@ func TestAccBlockStorageV1Volume_image(t *testing.T) { }) } +func TestAccBlockStorageV1Volume_timeout(t *testing.T) { + var volume volumes.Volume + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckBlockStorageV1VolumeDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccBlockStorageV1Volume_timeout, + Check: resource.ComposeTestCheckFunc( + testAccCheckBlockStorageV1VolumeExists("openstack_blockstorage_volume_v1.volume_1", &volume), + ), + }, + }, + }) +} + func testAccCheckBlockStorageV1VolumeDestroy(s *terraform.State) error { config := testAccProvider.Meta().(*Config) blockStorageClient, err := config.blockStorageV1Client(OS_REGION_NAME) @@ -188,3 +206,16 @@ resource "openstack_blockstorage_volume_v1" "volume_1" { image_id = "%s" } `, OS_IMAGE_ID) + +const testAccBlockStorageV1Volume_timeout = ` +resource "openstack_blockstorage_volume_v1" "volume_1" { + name = "volume_1" + description = "first test volume" + size = 1 + + timeouts { + create = "5m" + delete = "5m" + } +} +` diff --git a/builtin/providers/openstack/resource_openstack_blockstorage_volume_v2.go b/builtin/providers/openstack/resource_openstack_blockstorage_volume_v2.go index 3a889c3019..5944cac04d 100644 --- a/builtin/providers/openstack/resource_openstack_blockstorage_volume_v2.go +++ b/builtin/providers/openstack/resource_openstack_blockstorage_volume_v2.go @@ -24,6 +24,11 @@ func resourceBlockStorageVolumeV2() *schema.Resource { State: schema.ImportStatePassthrough, }, + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(10 * time.Minute), + Delete: schema.DefaultTimeout(10 * time.Minute), + }, 
+ Schema: map[string]*schema.Schema{ "region": &schema.Schema{ Type: schema.TypeString, @@ -151,7 +156,7 @@ func resourceBlockStorageVolumeV2Create(d *schema.ResourceData, meta interface{} Pending: []string{"downloading", "creating"}, Target: []string{"available"}, Refresh: VolumeV2StateRefreshFunc(blockStorageClient, v.ID), - Timeout: 10 * time.Minute, + Timeout: d.Timeout(schema.TimeoutCreate), Delay: 10 * time.Second, MinTimeout: 3 * time.Second, } @@ -289,7 +294,7 @@ func resourceBlockStorageVolumeV2Delete(d *schema.ResourceData, meta interface{} Pending: []string{"deleting", "downloading", "available"}, Target: []string{"deleted"}, Refresh: VolumeV2StateRefreshFunc(blockStorageClient, d.Id()), - Timeout: 10 * time.Minute, + Timeout: d.Timeout(schema.TimeoutDelete), Delay: 10 * time.Second, MinTimeout: 3 * time.Second, } diff --git a/builtin/providers/openstack/resource_openstack_blockstorage_volume_v2_test.go b/builtin/providers/openstack/resource_openstack_blockstorage_volume_v2_test.go index 43a1289793..a9991a71e6 100644 --- a/builtin/providers/openstack/resource_openstack_blockstorage_volume_v2_test.go +++ b/builtin/providers/openstack/resource_openstack_blockstorage_volume_v2_test.go @@ -61,6 +61,24 @@ func TestAccBlockStorageV2Volume_image(t *testing.T) { }) } +func TestAccBlockStorageV2Volume_timeout(t *testing.T) { + var volume volumes.Volume + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckBlockStorageV2VolumeDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccBlockStorageV2Volume_timeout, + Check: resource.ComposeTestCheckFunc( + testAccCheckBlockStorageV2VolumeExists("openstack_blockstorage_volume_v2.volume_1", &volume), + ), + }, + }, + }) +} + func testAccCheckBlockStorageV2VolumeDestroy(s *terraform.State) error { config := testAccProvider.Meta().(*Config) blockStorageClient, err := config.blockStorageV2Client(OS_REGION_NAME) @@ -186,3 +204,16 @@ resource "openstack_blockstorage_volume_v2" "volume_1" { image_id = "%s" } `, OS_IMAGE_ID) + +const testAccBlockStorageV2Volume_timeout = ` +resource "openstack_blockstorage_volume_v2" "volume_1" { + name = "volume_1" + description = "first test volume" + size = 1 + + timeouts { + create = "5m" + delete = "5m" + } +} +` diff --git a/builtin/providers/openstack/resource_openstack_compute_instance_v2.go b/builtin/providers/openstack/resource_openstack_compute_instance_v2.go index 210627da7b..1fa514c29d 100644 --- a/builtin/providers/openstack/resource_openstack_compute_instance_v2.go +++ b/builtin/providers/openstack/resource_openstack_compute_instance_v2.go @@ -10,6 +10,7 @@ import ( "time" "github.com/gophercloud/gophercloud" + "github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/availabilityzones" "github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/bootfromvolume" "github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/floatingips" "github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/keypairs" @@ -33,6 +34,12 @@ func resourceComputeInstanceV2() *schema.Resource { Update: resourceComputeInstanceV2Update, Delete: resourceComputeInstanceV2Delete, + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(30 * time.Minute), + Update: schema.DefaultTimeout(30 * time.Minute), + Delete: schema.DefaultTimeout(30 * time.Minute), + }, + Schema: map[string]*schema.Schema{ "region": &schema.Schema{ Type: schema.TypeString, @@ -103,6 +110,7 @@ func 
resourceComputeInstanceV2() *schema.Resource { Type: schema.TypeString, Optional: true, ForceNew: true, + Computed: true, }, "network": &schema.Schema{ Type: schema.TypeList, @@ -322,6 +330,11 @@ func resourceComputeInstanceV2() *schema.Resource { Optional: true, Default: false, }, + "force_delete": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Default: false, + }, }, } } @@ -450,7 +463,7 @@ func resourceComputeInstanceV2Create(d *schema.ResourceData, meta interface{}) e Pending: []string{"BUILD"}, Target: []string{"ACTIVE"}, Refresh: ServerV2StateRefreshFunc(computeClient, server.ID), - Timeout: 30 * time.Minute, + Timeout: d.Timeout(schema.TimeoutCreate), Delay: 10 * time.Second, MinTimeout: 3 * time.Second, } @@ -571,6 +584,21 @@ func resourceComputeInstanceV2Read(d *schema.ResourceData, meta interface{}) err return err } + // Build a custom struct for the availability zone extension + var serverWithAZ struct { + servers.Server + availabilityzones.ServerExt + } + + // Do another Get so the above work is not disturbed. + err = servers.Get(computeClient, d.Id()).ExtractInto(&serverWithAZ) + if err != nil { + return CheckDeleted(d, err, "server") + } + + // Set the availability zone + d.Set("availability_zone", serverWithAZ.AvailabilityZone) + return nil } @@ -786,7 +814,7 @@ func resourceComputeInstanceV2Update(d *schema.ResourceData, meta interface{}) e Pending: []string{"RESIZE"}, Target: []string{"VERIFY_RESIZE"}, Refresh: ServerV2StateRefreshFunc(computeClient, d.Id()), - Timeout: 30 * time.Minute, + Timeout: d.Timeout(schema.TimeoutUpdate), Delay: 10 * time.Second, MinTimeout: 3 * time.Second, } @@ -807,7 +835,7 @@ func resourceComputeInstanceV2Update(d *schema.ResourceData, meta interface{}) e Pending: []string{"VERIFY_RESIZE"}, Target: []string{"ACTIVE"}, Refresh: ServerV2StateRefreshFunc(computeClient, d.Id()), - Timeout: 30 * time.Minute, + Timeout: d.Timeout(schema.TimeoutUpdate), Delay: 10 * time.Second, MinTimeout: 3 * time.Second, } @@ -865,9 +893,18 @@ func resourceComputeInstanceV2Delete(d *schema.ResourceData, meta interface{}) e } } - err = servers.Delete(computeClient, d.Id()).ExtractErr() - if err != nil { - return fmt.Errorf("Error deleting OpenStack server: %s", err) + if d.Get("force_delete").(bool) { + log.Printf("[DEBUG] Force deleting OpenStack Instance %s", d.Id()) + err = servers.ForceDelete(computeClient, d.Id()).ExtractErr() + if err != nil { + return fmt.Errorf("Error deleting OpenStack server: %s", err) + } + } else { + log.Printf("[DEBUG] Deleting OpenStack Instance %s", d.Id()) + err = servers.Delete(computeClient, d.Id()).ExtractErr() + if err != nil { + return fmt.Errorf("Error deleting OpenStack server: %s", err) + } } // Wait for the instance to delete before moving on. 
@@ -877,7 +914,7 @@ func resourceComputeInstanceV2Delete(d *schema.ResourceData, meta interface{}) e Pending: []string{"ACTIVE", "SHUTOFF"}, Target: []string{"DELETED"}, Refresh: ServerV2StateRefreshFunc(computeClient, d.Id()), - Timeout: 30 * time.Minute, + Timeout: d.Timeout(schema.TimeoutDelete), Delay: 10 * time.Second, MinTimeout: 3 * time.Second, } diff --git a/builtin/providers/openstack/resource_openstack_compute_instance_v2_test.go b/builtin/providers/openstack/resource_openstack_compute_instance_v2_test.go index 3665ff99de..cfe807e4f2 100644 --- a/builtin/providers/openstack/resource_openstack_compute_instance_v2_test.go +++ b/builtin/providers/openstack/resource_openstack_compute_instance_v2_test.go @@ -29,6 +29,8 @@ func TestAccComputeV2Instance_basic(t *testing.T) { Check: resource.ComposeTestCheckFunc( testAccCheckComputeV2InstanceExists("openstack_compute_instance_v2.instance_1", &instance), testAccCheckComputeV2InstanceMetadata(&instance, "foo", "bar"), + resource.TestCheckResourceAttr( + "openstack_compute_instance_v2.instance_1", "availability_zone", "nova"), ), }, }, @@ -620,6 +622,40 @@ func TestAccComputeV2Instance_metadataRemove(t *testing.T) { }) } +func TestAccComputeV2Instance_forceDelete(t *testing.T) { + var instance servers.Server + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckComputeV2InstanceDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccComputeV2Instance_forceDelete, + Check: resource.ComposeTestCheckFunc( + testAccCheckComputeV2InstanceExists("openstack_compute_instance_v2.instance_1", &instance), + ), + }, + }, + }) +} + +func TestAccComputeV2Instance_timeout(t *testing.T) { + var instance servers.Server + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckComputeV2InstanceDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccComputeV2Instance_timeout, + Check: resource.ComposeTestCheckFunc( + testAccCheckComputeV2InstanceExists("openstack_compute_instance_v2.instance_1", &instance), + ), + }, + }, + }) +} + func testAccCheckComputeV2InstanceDestroy(s *terraform.State) error { config := testAccProvider.Meta().(*Config) computeClient, err := config.computeV2Client(OS_REGION_NAME) @@ -1513,3 +1549,22 @@ resource "openstack_compute_instance_v2" "instance_1" { } } ` + +const testAccComputeV2Instance_forceDelete = ` +resource "openstack_compute_instance_v2" "instance_1" { + name = "instance_1" + security_groups = ["default"] + force_delete = true +} +` + +const testAccComputeV2Instance_timeout = ` +resource "openstack_compute_instance_v2" "instance_1" { + name = "instance_1" + security_groups = ["default"] + + timeouts { + create = "10m" + } +} +` diff --git a/builtin/providers/openstack/resource_openstack_compute_secgroup_v2.go b/builtin/providers/openstack/resource_openstack_compute_secgroup_v2.go index dfedc04177..99887a2dac 100644 --- a/builtin/providers/openstack/resource_openstack_compute_secgroup_v2.go +++ b/builtin/providers/openstack/resource_openstack_compute_secgroup_v2.go @@ -24,6 +24,10 @@ func resourceComputeSecGroupV2() *schema.Resource { State: schema.ImportStatePassthrough, }, + Timeouts: &schema.ResourceTimeout{ + Delete: schema.DefaultTimeout(10 * time.Minute), + }, + Schema: map[string]*schema.Schema{ "region": &schema.Schema{ Type: schema.TypeString, @@ -224,7 +228,7 @@ func resourceComputeSecGroupV2Delete(d 
*schema.ResourceData, meta interface{}) e Pending: []string{"ACTIVE"}, Target: []string{"DELETED"}, Refresh: SecGroupV2StateRefreshFunc(computeClient, d), - Timeout: 10 * time.Minute, + Timeout: d.Timeout(schema.TimeoutDelete), Delay: 10 * time.Second, MinTimeout: 3 * time.Second, } diff --git a/builtin/providers/openstack/resource_openstack_compute_secgroup_v2_test.go b/builtin/providers/openstack/resource_openstack_compute_secgroup_v2_test.go index f0381e3373..f4a0d3ddc9 100644 --- a/builtin/providers/openstack/resource_openstack_compute_secgroup_v2_test.go +++ b/builtin/providers/openstack/resource_openstack_compute_secgroup_v2_test.go @@ -144,6 +144,24 @@ func TestAccComputeV2SecGroup_lowerCaseCIDR(t *testing.T) { }) } +func TestAccComputeV2SecGroup_timeout(t *testing.T) { + var secgroup secgroups.SecurityGroup + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckComputeV2SecGroupDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccComputeV2SecGroup_timeout, + Check: resource.ComposeTestCheckFunc( + testAccCheckComputeV2SecGroupExists("openstack_compute_secgroup_v2.sg_1", &secgroup), + ), + }, + }, + }) +} + func testAccCheckComputeV2SecGroupDestroy(s *terraform.State) error { config := testAccProvider.Meta().(*Config) computeClient, err := config.computeV2Client(OS_REGION_NAME) @@ -373,3 +391,20 @@ resource "openstack_compute_secgroup_v2" "sg_1" { } } ` + +const testAccComputeV2SecGroup_timeout = ` +resource "openstack_compute_secgroup_v2" "sg_1" { + name = "sg_1" + description = "first test security group" + rule { + from_port = 0 + to_port = 0 + ip_protocol = "icmp" + cidr = "0.0.0.0/0" + } + + timeouts { + delete = "5m" + } +} +` diff --git a/builtin/providers/openstack/resource_openstack_compute_volume_attach_v2.go b/builtin/providers/openstack/resource_openstack_compute_volume_attach_v2.go index 34404ee853..1eb8506e60 100644 --- a/builtin/providers/openstack/resource_openstack_compute_volume_attach_v2.go +++ b/builtin/providers/openstack/resource_openstack_compute_volume_attach_v2.go @@ -22,6 +22,11 @@ func resourceComputeVolumeAttachV2() *schema.Resource { State: schema.ImportStatePassthrough, }, + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(10 * time.Minute), + Delete: schema.DefaultTimeout(10 * time.Minute), + }, + Schema: map[string]*schema.Schema{ "region": &schema.Schema{ Type: schema.TypeString, @@ -82,7 +87,7 @@ func resourceComputeVolumeAttachV2Create(d *schema.ResourceData, meta interface{ Pending: []string{"ATTACHING"}, Target: []string{"ATTACHED"}, Refresh: resourceComputeVolumeAttachV2AttachFunc(computeClient, instanceId, attachment.ID), - Timeout: 10 * time.Minute, + Timeout: d.Timeout(schema.TimeoutCreate), Delay: 30 * time.Second, MinTimeout: 15 * time.Second, } @@ -145,7 +150,7 @@ func resourceComputeVolumeAttachV2Delete(d *schema.ResourceData, meta interface{ Pending: []string{""}, Target: []string{"DETACHED"}, Refresh: resourceComputeVolumeAttachV2DetachFunc(computeClient, instanceId, attachmentId), - Timeout: 10 * time.Minute, + Timeout: d.Timeout(schema.TimeoutDelete), Delay: 15 * time.Second, MinTimeout: 15 * time.Second, } diff --git a/builtin/providers/openstack/resource_openstack_compute_volume_attach_v2_test.go b/builtin/providers/openstack/resource_openstack_compute_volume_attach_v2_test.go index a32e3ad3df..fb5b6baa38 100644 --- a/builtin/providers/openstack/resource_openstack_compute_volume_attach_v2_test.go +++ 
b/builtin/providers/openstack/resource_openstack_compute_volume_attach_v2_test.go @@ -47,6 +47,24 @@ func TestAccComputeV2VolumeAttach_device(t *testing.T) { }) } +func TestAccComputeV2VolumeAttach_timeout(t *testing.T) { + var va volumeattach.VolumeAttachment + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckComputeV2VolumeAttachDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccComputeV2VolumeAttach_timeout, + Check: resource.ComposeTestCheckFunc( + testAccCheckComputeV2VolumeAttachExists("openstack_compute_volume_attach_v2.va_1", &va), + ), + }, + }, + }) +} + func testAccCheckComputeV2VolumeAttachDestroy(s *terraform.State) error { config := testAccProvider.Meta().(*Config) computeClient, err := config.computeV2Client(OS_REGION_NAME) @@ -156,3 +174,25 @@ resource "openstack_compute_volume_attach_v2" "va_1" { device = "/dev/vdc" } ` + +const testAccComputeV2VolumeAttach_timeout = ` +resource "openstack_blockstorage_volume_v2" "volume_1" { + name = "volume_1" + size = 1 +} + +resource "openstack_compute_instance_v2" "instance_1" { + name = "instance_1" + security_groups = ["default"] +} + +resource "openstack_compute_volume_attach_v2" "va_1" { + instance_id = "${openstack_compute_instance_v2.instance_1.id}" + volume_id = "${openstack_blockstorage_volume_v2.volume_1.id}" + + timeouts { + create = "5m" + delete = "5m" + } +} +` diff --git a/builtin/providers/openstack/resource_openstack_fw_firewall_v1.go b/builtin/providers/openstack/resource_openstack_fw_firewall_v1.go index 425a6c5e10..7fb5055eea 100644 --- a/builtin/providers/openstack/resource_openstack_fw_firewall_v1.go +++ b/builtin/providers/openstack/resource_openstack_fw_firewall_v1.go @@ -21,6 +21,12 @@ func resourceFWFirewallV1() *schema.Resource { State: schema.ImportStatePassthrough, }, + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(10 * time.Minute), + Update: schema.DefaultTimeout(10 * time.Minute), + Delete: schema.DefaultTimeout(10 * time.Minute), + }, + Schema: map[string]*schema.Schema{ "region": &schema.Schema{ Type: schema.TypeString, @@ -94,7 +100,7 @@ func resourceFWFirewallV1Create(d *schema.ResourceData, meta interface{}) error Pending: []string{"PENDING_CREATE"}, Target: []string{"ACTIVE"}, Refresh: waitForFirewallActive(networkingClient, firewall.ID), - Timeout: 30 * time.Second, + Timeout: d.Timeout(schema.TimeoutCreate), Delay: 0, MinTimeout: 2 * time.Second, } @@ -165,7 +171,7 @@ func resourceFWFirewallV1Update(d *schema.ResourceData, meta interface{}) error Pending: []string{"PENDING_CREATE", "PENDING_UPDATE"}, Target: []string{"ACTIVE"}, Refresh: waitForFirewallActive(networkingClient, d.Id()), - Timeout: 30 * time.Second, + Timeout: d.Timeout(schema.TimeoutUpdate), Delay: 0, MinTimeout: 2 * time.Second, } @@ -189,11 +195,12 @@ func resourceFWFirewallV1Delete(d *schema.ResourceData, meta interface{}) error return fmt.Errorf("Error creating OpenStack networking client: %s", err) } + // Ensure the firewall was fully created/updated before being deleted. 
stateConf := &resource.StateChangeConf{ Pending: []string{"PENDING_CREATE", "PENDING_UPDATE"}, Target: []string{"ACTIVE"}, Refresh: waitForFirewallActive(networkingClient, d.Id()), - Timeout: 30 * time.Second, + Timeout: d.Timeout(schema.TimeoutUpdate), Delay: 0, MinTimeout: 2 * time.Second, } @@ -210,7 +217,7 @@ func resourceFWFirewallV1Delete(d *schema.ResourceData, meta interface{}) error Pending: []string{"DELETING"}, Target: []string{"DELETED"}, Refresh: waitForFirewallDeletion(networkingClient, d.Id()), - Timeout: 2 * time.Minute, + Timeout: d.Timeout(schema.TimeoutDelete), Delay: 0, MinTimeout: 2 * time.Second, } diff --git a/builtin/providers/openstack/resource_openstack_fw_firewall_v1_test.go b/builtin/providers/openstack/resource_openstack_fw_firewall_v1_test.go index 3d8431f1bc..c476a77b75 100644 --- a/builtin/providers/openstack/resource_openstack_fw_firewall_v1_test.go +++ b/builtin/providers/openstack/resource_openstack_fw_firewall_v1_test.go @@ -36,6 +36,24 @@ func TestAccFWFirewallV1_basic(t *testing.T) { }) } +func TestAccFWFirewallV1_timeout(t *testing.T) { + var policyID *string + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckFWFirewallV1Destroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccFWFirewallV1_timeout, + Check: resource.ComposeTestCheckFunc( + testAccCheckFWFirewallV1Exists("openstack_fw_firewall_v1.fw_1", "", "", policyID), + ), + }, + }, + }) +} + func testAccCheckFWFirewallV1Destroy(s *terraform.State) error { config := testAccProvider.Meta().(*Config) networkingClient, err := config.networkingV2Client(OS_REGION_NAME) @@ -135,3 +153,19 @@ resource "openstack_fw_policy_v1" "policy_2" { name = "policy_2" } ` + +const testAccFWFirewallV1_timeout = ` +resource "openstack_fw_firewall_v1" "fw_1" { + policy_id = "${openstack_fw_policy_v1.policy_1.id}" + + timeouts { + create = "5m" + update = "5m" + delete = "5m" + } +} + +resource "openstack_fw_policy_v1" "policy_1" { + name = "policy_1" +} +` diff --git a/builtin/providers/openstack/resource_openstack_fw_policy_v1.go b/builtin/providers/openstack/resource_openstack_fw_policy_v1.go index 5488fa763c..a810e360e5 100644 --- a/builtin/providers/openstack/resource_openstack_fw_policy_v1.go +++ b/builtin/providers/openstack/resource_openstack_fw_policy_v1.go @@ -21,6 +21,10 @@ func resourceFWPolicyV1() *schema.Resource { State: schema.ImportStatePassthrough, }, + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(10 * time.Minute), + }, + Schema: map[string]*schema.Schema{ "region": &schema.Schema{ Type: schema.TypeString, @@ -194,7 +198,7 @@ func resourceFWPolicyV1Delete(d *schema.ResourceData, meta interface{}) error { Pending: []string{"ACTIVE"}, Target: []string{"DELETED"}, Refresh: waitForFirewallPolicyDeletion(networkingClient, d.Id()), - Timeout: 120 * time.Second, + Timeout: d.Timeout(schema.TimeoutCreate), Delay: 0, MinTimeout: 2 * time.Second, } diff --git a/builtin/providers/openstack/resource_openstack_fw_policy_v1_test.go b/builtin/providers/openstack/resource_openstack_fw_policy_v1_test.go index d0dc43a79d..7302db3e32 100644 --- a/builtin/providers/openstack/resource_openstack_fw_policy_v1_test.go +++ b/builtin/providers/openstack/resource_openstack_fw_policy_v1_test.go @@ -62,6 +62,23 @@ func TestAccFWPolicyV1_deleteRules(t *testing.T) { }) } +func TestAccFWPolicyV1_timeout(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + 
Providers: testAccProviders, + CheckDestroy: testAccCheckFWPolicyV1Destroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccFWPolicyV1_timeout, + Check: resource.ComposeTestCheckFunc( + testAccCheckFWPolicyV1Exists( + "openstack_fw_policy_v1.policy_1", "", "", 0), + ), + }, + }, + }) +} + func testAccCheckFWPolicyV1Destroy(s *terraform.State) error { config := testAccProvider.Meta().(*Config) networkingClient, err := config.networkingV2Client(OS_REGION_NAME) @@ -172,3 +189,11 @@ resource "openstack_fw_rule_v1" "udp_deny" { action = "deny" } ` + +const testAccFWPolicyV1_timeout = ` +resource "openstack_fw_policy_v1" "policy_1" { + timeouts { + create = "5m" + } +} +` diff --git a/builtin/providers/openstack/resource_openstack_images_image_v2.go b/builtin/providers/openstack/resource_openstack_images_image_v2.go index 5b61e54bbc..483494334a 100644 --- a/builtin/providers/openstack/resource_openstack_images_image_v2.go +++ b/builtin/providers/openstack/resource_openstack_images_image_v2.go @@ -29,6 +29,10 @@ func resourceImagesImageV2() *schema.Resource { State: schema.ImportStatePassthrough, }, + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(30 * time.Minute), + }, + Schema: map[string]*schema.Schema{ "checksum": &schema.Schema{ Type: schema.TypeString, @@ -226,7 +230,7 @@ func resourceImagesImageV2Create(d *schema.ResourceData, meta interface{}) error Pending: []string{string(images.ImageStatusQueued), string(images.ImageStatusSaving)}, Target: []string{string(images.ImageStatusActive)}, Refresh: resourceImagesImageV2RefreshFunc(imageClient, d.Id(), fileSize, fileChecksum), - Timeout: 30 * time.Minute, + Timeout: d.Timeout(schema.TimeoutCreate), Delay: 10 * time.Second, MinTimeout: 3 * time.Second, } diff --git a/builtin/providers/openstack/resource_openstack_images_image_v2_test.go b/builtin/providers/openstack/resource_openstack_images_image_v2_test.go index cbaf716ba9..b1201040ee 100644 --- a/builtin/providers/openstack/resource_openstack_images_image_v2_test.go +++ b/builtin/providers/openstack/resource_openstack_images_image_v2_test.go @@ -135,6 +135,24 @@ func TestAccImagesImageV2_visibility(t *testing.T) { }) } +func TestAccImagesImageV2_timeout(t *testing.T) { + var image images.Image + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckImagesImageV2Destroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccImagesImageV2_timeout, + Check: resource.ComposeTestCheckFunc( + testAccCheckImagesImageV2Exists("openstack_images_image_v2.image_1", &image), + ), + }, + }, + }) +} + func testAccCheckImagesImageV2Destroy(s *terraform.State) error { config := testAccProvider.Meta().(*Config) imageClient, err := config.imageV2Client(OS_REGION_NAME) @@ -326,3 +344,15 @@ var testAccImagesImageV2_visibility_2 = ` disk_format = "qcow2" visibility = "public" }` + +var testAccImagesImageV2_timeout = ` + resource "openstack_images_image_v2" "image_1" { + name = "Rancher TerraformAccTest" + image_source_url = "https://releases.rancher.com/os/latest/rancheros-openstack.img" + container_format = "bare" + disk_format = "qcow2" + + timeouts { + create = "10m" + } + }` diff --git a/builtin/providers/openstack/resource_openstack_lb_loadbalancer_v2.go b/builtin/providers/openstack/resource_openstack_lb_loadbalancer_v2.go index a4489e52a9..c4e17995fd 100644 --- a/builtin/providers/openstack/resource_openstack_lb_loadbalancer_v2.go +++ 
b/builtin/providers/openstack/resource_openstack_lb_loadbalancer_v2.go @@ -20,6 +20,11 @@ func resourceLoadBalancerV2() *schema.Resource { Update: resourceLoadBalancerV2Update, Delete: resourceLoadBalancerV2Delete, + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(20 * time.Minute), + Delete: schema.DefaultTimeout(10 * time.Minute), + }, + Schema: map[string]*schema.Schema{ "region": &schema.Schema{ Type: schema.TypeString, @@ -138,7 +143,7 @@ func resourceLoadBalancerV2Create(d *schema.ResourceData, meta interface{}) erro Pending: []string{"PENDING_CREATE"}, Target: []string{"ACTIVE"}, Refresh: waitForLoadBalancerActive(networkingClient, lb.ID), - Timeout: 20 * time.Minute, + Timeout: d.Timeout(schema.TimeoutCreate), Delay: 5 * time.Second, MinTimeout: 3 * time.Second, } @@ -245,7 +250,7 @@ func resourceLoadBalancerV2Delete(d *schema.ResourceData, meta interface{}) erro Pending: []string{"ACTIVE", "PENDING_DELETE"}, Target: []string{"DELETED"}, Refresh: waitForLoadBalancerDelete(networkingClient, d.Id()), - Timeout: 2 * time.Minute, + Timeout: d.Timeout(schema.TimeoutDelete), Delay: 5 * time.Second, MinTimeout: 3 * time.Second, } diff --git a/builtin/providers/openstack/resource_openstack_lb_loadbalancer_v2_test.go b/builtin/providers/openstack/resource_openstack_lb_loadbalancer_v2_test.go index 6668943141..0b157b16ad 100644 --- a/builtin/providers/openstack/resource_openstack_lb_loadbalancer_v2_test.go +++ b/builtin/providers/openstack/resource_openstack_lb_loadbalancer_v2_test.go @@ -97,6 +97,24 @@ func TestAccLBV2LoadBalancer_secGroup(t *testing.T) { }) } +func TestAccLBV2LoadBalancer_timeout(t *testing.T) { + var lb loadbalancers.LoadBalancer + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckLBV2LoadBalancerDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccLBV2LoadBalancerConfig_timeout, + Check: resource.ComposeTestCheckFunc( + testAccCheckLBV2LoadBalancerExists("openstack_lb_loadbalancer_v2.loadbalancer_1", &lb), + ), + }, + }, + }) +} + func testAccCheckLBV2LoadBalancerDestroy(s *terraform.State) error { config := testAccProvider.Meta().(*Config) networkingClient, err := config.networkingV2Client(OS_REGION_NAME) @@ -310,3 +328,28 @@ resource "openstack_lb_loadbalancer_v2" "loadbalancer_1" { depends_on = ["openstack_networking_secgroup_v2.secgroup_1"] } ` + +const testAccLBV2LoadBalancerConfig_timeout = ` +resource "openstack_networking_network_v2" "network_1" { + name = "network_1" + admin_state_up = "true" +} + +resource "openstack_networking_subnet_v2" "subnet_1" { + name = "subnet_1" + cidr = "192.168.199.0/24" + ip_version = 4 + network_id = "${openstack_networking_network_v2.network_1.id}" +} + +resource "openstack_lb_loadbalancer_v2" "loadbalancer_1" { + name = "loadbalancer_1" + loadbalancer_provider = "haproxy" + vip_subnet_id = "${openstack_networking_subnet_v2.subnet_1.id}" + + timeouts { + create = "5m" + delete = "5m" + } +} +` diff --git a/builtin/providers/openstack/resource_openstack_lb_member_v1.go b/builtin/providers/openstack/resource_openstack_lb_member_v1.go index 44300b8dc5..e6dc3da9f4 100644 --- a/builtin/providers/openstack/resource_openstack_lb_member_v1.go +++ b/builtin/providers/openstack/resource_openstack_lb_member_v1.go @@ -22,6 +22,11 @@ func resourceLBMemberV1() *schema.Resource { State: schema.ImportStatePassthrough, }, + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(10 * time.Minute), + 
Delete: schema.DefaultTimeout(10 * time.Minute), + }, + Schema: map[string]*schema.Schema{ "region": &schema.Schema{ Type: schema.TypeString, @@ -91,7 +96,7 @@ func resourceLBMemberV1Create(d *schema.ResourceData, meta interface{}) error { Pending: []string{"PENDING_CREATE"}, Target: []string{"ACTIVE", "INACTIVE", "CREATED", "DOWN"}, Refresh: waitForLBMemberActive(networkingClient, m.ID), - Timeout: 2 * time.Minute, + Timeout: d.Timeout(schema.TimeoutCreate), Delay: 5 * time.Second, MinTimeout: 3 * time.Second, } @@ -181,7 +186,7 @@ func resourceLBMemberV1Delete(d *schema.ResourceData, meta interface{}) error { Pending: []string{"ACTIVE", "PENDING_DELETE"}, Target: []string{"DELETED"}, Refresh: waitForLBMemberDelete(networkingClient, d.Id()), - Timeout: 2 * time.Minute, + Timeout: d.Timeout(schema.TimeoutDelete), Delay: 5 * time.Second, MinTimeout: 3 * time.Second, } diff --git a/builtin/providers/openstack/resource_openstack_lb_member_v1_test.go b/builtin/providers/openstack/resource_openstack_lb_member_v1_test.go index 099f500161..af840a5b59 100644 --- a/builtin/providers/openstack/resource_openstack_lb_member_v1_test.go +++ b/builtin/providers/openstack/resource_openstack_lb_member_v1_test.go @@ -33,6 +33,24 @@ func TestAccLBV1Member_basic(t *testing.T) { }) } +func TestAccLBV1Member_timeout(t *testing.T) { + var member members.Member + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckLBV1MemberDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccLBV1Member_timeout, + Check: resource.ComposeTestCheckFunc( + testAccCheckLBV1MemberExists("openstack_lb_member_v1.member_1", &member), + ), + }, + }, + }) +} + func testAccCheckLBV1MemberDestroy(s *terraform.State) error { config := testAccProvider.Meta().(*Config) networkingClient, err := config.networkingV2Client(OS_REGION_NAME) @@ -139,3 +157,35 @@ resource "openstack_lb_member_v1" "member_1" { pool_id = "${openstack_lb_pool_v1.pool_1.id}" } ` + +const testAccLBV1Member_timeout = ` +resource "openstack_networking_network_v2" "network_1" { + name = "network_1" + admin_state_up = "true" +} + +resource "openstack_networking_subnet_v2" "subnet_1" { + cidr = "192.168.199.0/24" + ip_version = 4 + network_id = "${openstack_networking_network_v2.network_1.id}" +} + +resource "openstack_lb_pool_v1" "pool_1" { + name = "pool_1" + protocol = "HTTP" + lb_method = "ROUND_ROBIN" + subnet_id = "${openstack_networking_subnet_v2.subnet_1.id}" +} + +resource "openstack_lb_member_v1" "member_1" { + address = "192.168.199.10" + port = 80 + admin_state_up = true + pool_id = "${openstack_lb_pool_v1.pool_1.id}" + + timeouts { + create = "5m" + delete = "5m" + } +} +` diff --git a/builtin/providers/openstack/resource_openstack_lb_member_v2.go b/builtin/providers/openstack/resource_openstack_lb_member_v2.go index 9145fcc0ee..61326bac3f 100644 --- a/builtin/providers/openstack/resource_openstack_lb_member_v2.go +++ b/builtin/providers/openstack/resource_openstack_lb_member_v2.go @@ -19,6 +19,11 @@ func resourceMemberV2() *schema.Resource { Update: resourceMemberV2Update, Delete: resourceMemberV2Delete, + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(10 * time.Minute), + Delete: schema.DefaultTimeout(10 * time.Minute), + }, + Schema: map[string]*schema.Schema{ "region": &schema.Schema{ Type: schema.TypeString, @@ -124,15 +129,20 @@ func resourceMemberV2Create(d *schema.ResourceData, meta interface{}) error { log.Printf("[DEBUG] 
Attempting to create LBaaSV2 member") member, err = pools.CreateMember(networkingClient, poolID, createOpts).Extract() if err != nil { - if errCode, ok := err.(gophercloud.ErrUnexpectedResponseCode); ok { - if errCode.Actual == 409 || errCode.Actual == 500 { + switch errCode := err.(type) { + case gophercloud.ErrDefault500: + log.Printf("[DEBUG] OpenStack LBaaSV2 member is still creating.") + return resource.RetryableError(err) + case gophercloud.ErrUnexpectedResponseCode: + if errCode.Actual == 409 { log.Printf("[DEBUG] OpenStack LBaaSV2 member is still creating.") return resource.RetryableError(err) } + // Treat any other unexpected response code as a genuine failure. + return resource.NonRetryableError(err) - } - return resource.NonRetryableError(err) - } + default: + return resource.NonRetryableError(err) + } + } return nil }) @@ -147,7 +157,7 @@ func resourceMemberV2Create(d *schema.ResourceData, meta interface{}) error { Pending: []string{"PENDING_CREATE"}, Target: []string{"ACTIVE"}, Refresh: waitForMemberActive(networkingClient, poolID, member.ID), - Timeout: 2 * time.Minute, + Timeout: d.Timeout(schema.TimeoutCreate), Delay: 5 * time.Second, MinTimeout: 3 * time.Second, } @@ -228,7 +238,7 @@ func resourceMemberV2Delete(d *schema.ResourceData, meta interface{}) error { Pending: []string{"ACTIVE", "PENDING_DELETE"}, Target: []string{"DELETED"}, Refresh: waitForMemberDelete(networkingClient, d.Get("pool_id").(string), d.Id()), - Timeout: 2 * time.Minute, + Timeout: d.Timeout(schema.TimeoutDelete), Delay: 5 * time.Second, MinTimeout: 3 * time.Second, } @@ -271,19 +281,22 @@ func waitForMemberDelete(networkingClient *gophercloud.ServiceClient, poolID str log.Printf("[DEBUG] Openstack LBaaSV2 Member: %+v", member) err = pools.DeleteMember(networkingClient, poolID, memberID).ExtractErr() if err != nil { - if _, ok := err.(gophercloud.ErrDefault404); ok { + switch errCode := err.(type) { + case gophercloud.ErrDefault404: log.Printf("[DEBUG] Successfully deleted OpenStack LBaaSV2 Member %s", memberID) return member, "DELETED", nil - } - - if errCode, ok := err.(gophercloud.ErrUnexpectedResponseCode); ok { + case gophercloud.ErrDefault500: + log.Printf("[DEBUG] OpenStack LBaaSV2 Member (%s) is still in use.", memberID) + return member, "PENDING_DELETE", nil + case gophercloud.ErrUnexpectedResponseCode: if errCode.Actual == 409 { log.Printf("[DEBUG] OpenStack LBaaSV2 Member (%s) is still in use.", memberID) return member, "PENDING_DELETE", nil } + // Surface any other unexpected response code instead of reporting ACTIVE. + return member, "ACTIVE", err - } - return member, "ACTIVE", err + default: + return member, "ACTIVE", err + } } log.Printf("[DEBUG] OpenStack LBaaSV2 Member %s still active.", memberID) diff --git a/builtin/providers/openstack/resource_openstack_lb_member_v2_test.go b/builtin/providers/openstack/resource_openstack_lb_member_v2_test.go index b75d401c9d..488b9fefc9 100644 --- a/builtin/providers/openstack/resource_openstack_lb_member_v2_test.go +++ b/builtin/providers/openstack/resource_openstack_lb_member_v2_test.go @@ -33,6 +33,24 @@ func TestAccLBV2Member_basic(t *testing.T) { }) } +func TestAccLBV2Member_timeout(t *testing.T) { + var member pools.Member + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckLBV2MemberDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: TestAccLBV2MemberConfig_timeout, + Check: resource.ComposeTestCheckFunc( + testAccCheckLBV2MemberExists("openstack_lb_member_v2.member_1", &member), + ), + }, + }, + }) +} + func testAccCheckLBV2MemberDestroy(s *terraform.State) error { config := testAccProvider.Meta().(*Config) networkingClient, err := 
config.networkingV2Client(OS_REGION_NAME) @@ -169,3 +187,48 @@ resource "openstack_lb_member_v2" "member_1" { subnet_id = "${openstack_networking_subnet_v2.subnet_1.id}" } ` + +const TestAccLBV2MemberConfig_timeout = ` +resource "openstack_networking_network_v2" "network_1" { + name = "network_1" + admin_state_up = "true" +} + +resource "openstack_networking_subnet_v2" "subnet_1" { + name = "subnet_1" + network_id = "${openstack_networking_network_v2.network_1.id}" + cidr = "192.168.199.0/24" + ip_version = 4 +} + +resource "openstack_lb_loadbalancer_v2" "loadbalancer_1" { + name = "loadbalancer_1" + vip_subnet_id = "${openstack_networking_subnet_v2.subnet_1.id}" +} + +resource "openstack_lb_listener_v2" "listener_1" { + name = "listener_1" + protocol = "HTTP" + protocol_port = 8080 + loadbalancer_id = "${openstack_lb_loadbalancer_v2.loadbalancer_1.id}" +} + +resource "openstack_lb_pool_v2" "pool_1" { + name = "pool_1" + protocol = "HTTP" + lb_method = "ROUND_ROBIN" + listener_id = "${openstack_lb_listener_v2.listener_1.id}" +} + +resource "openstack_lb_member_v2" "member_1" { + address = "192.168.199.10" + protocol_port = 8080 + pool_id = "${openstack_lb_pool_v2.pool_1.id}" + subnet_id = "${openstack_networking_subnet_v2.subnet_1.id}" + + timeouts { + create = "5m" + delete = "5m" + } +} +` diff --git a/builtin/providers/openstack/resource_openstack_lb_monitor_v1.go b/builtin/providers/openstack/resource_openstack_lb_monitor_v1.go index 13c67cb961..26066cbea5 100644 --- a/builtin/providers/openstack/resource_openstack_lb_monitor_v1.go +++ b/builtin/providers/openstack/resource_openstack_lb_monitor_v1.go @@ -23,6 +23,11 @@ func resourceLBMonitorV1() *schema.Resource { State: schema.ImportStatePassthrough, }, + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(10 * time.Minute), + Delete: schema.DefaultTimeout(10 * time.Minute), + }, + Schema: map[string]*schema.Schema{ "region": &schema.Schema{ Type: schema.TypeString, @@ -125,7 +130,7 @@ func resourceLBMonitorV1Create(d *schema.ResourceData, meta interface{}) error { Pending: []string{"PENDING_CREATE"}, Target: []string{"ACTIVE"}, Refresh: waitForLBMonitorActive(networkingClient, m.ID), - Timeout: 2 * time.Minute, + Timeout: d.Timeout(schema.TimeoutCreate), Delay: 5 * time.Second, MinTimeout: 3 * time.Second, } @@ -216,7 +221,7 @@ func resourceLBMonitorV1Delete(d *schema.ResourceData, meta interface{}) error { Pending: []string{"ACTIVE", "PENDING_DELETE"}, Target: []string{"DELETED"}, Refresh: waitForLBMonitorDelete(networkingClient, d.Id()), - Timeout: 2 * time.Minute, + Timeout: d.Timeout(schema.TimeoutDelete), Delay: 5 * time.Second, MinTimeout: 3 * time.Second, } diff --git a/builtin/providers/openstack/resource_openstack_lb_monitor_v1_test.go b/builtin/providers/openstack/resource_openstack_lb_monitor_v1_test.go index 6b97ed44c2..3da3b66236 100644 --- a/builtin/providers/openstack/resource_openstack_lb_monitor_v1_test.go +++ b/builtin/providers/openstack/resource_openstack_lb_monitor_v1_test.go @@ -34,6 +34,24 @@ func TestAccLBV1Monitor_basic(t *testing.T) { }) } +func TestAccLBV1Monitor_timeout(t *testing.T) { + var monitor monitors.Monitor + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckLBV1MonitorDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccLBV1Monitor_timeout, + Check: resource.ComposeTestCheckFunc( + testAccCheckLBV1MonitorExists("openstack_lb_monitor_v1.monitor_1", &monitor), + ), + }, + 
}, + }) +} + func testAccCheckLBV1MonitorDestroy(s *terraform.State) error { config := testAccProvider.Meta().(*Config) networkingClient, err := config.networkingV2Client(OS_REGION_NAME) @@ -89,7 +107,6 @@ func testAccCheckLBV1MonitorExists(n string, monitor *monitors.Monitor) resource const testAccLBV1Monitor_basic = ` resource "openstack_lb_monitor_v1" "monitor_1" { - region = "%s" type = "PING" delay = 30 timeout = 5 @@ -100,7 +117,6 @@ resource "openstack_lb_monitor_v1" "monitor_1" { const testAccLBV1Monitor_update = ` resource "openstack_lb_monitor_v1" "monitor_1" { - region = "%s" type = "PING" delay = 20 timeout = 5 @@ -108,3 +124,18 @@ resource "openstack_lb_monitor_v1" "monitor_1" { admin_state_up = "true" } ` + +const testAccLBV1Monitor_timeout = ` +resource "openstack_lb_monitor_v1" "monitor_1" { + type = "PING" + delay = 30 + timeout = 5 + max_retries = 3 + admin_state_up = "true" + + timeouts { + create = "5m" + delete = "5m" + } +} +` diff --git a/builtin/providers/openstack/resource_openstack_lb_monitor_v2.go b/builtin/providers/openstack/resource_openstack_lb_monitor_v2.go index 736417c2d7..061c270e57 100644 --- a/builtin/providers/openstack/resource_openstack_lb_monitor_v2.go +++ b/builtin/providers/openstack/resource_openstack_lb_monitor_v2.go @@ -19,6 +19,11 @@ func resourceMonitorV2() *schema.Resource { Update: resourceMonitorV2Update, Delete: resourceMonitorV2Delete, + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(10 * time.Minute), + Delete: schema.DefaultTimeout(10 * time.Minute), + }, + Schema: map[string]*schema.Schema{ "region": &schema.Schema{ Type: schema.TypeString, @@ -127,7 +132,7 @@ func resourceMonitorV2Create(d *schema.ResourceData, meta interface{}) error { Pending: []string{"PENDING_CREATE"}, Target: []string{"ACTIVE"}, Refresh: waitForMonitorActive(networkingClient, monitor.ID), - Timeout: 2 * time.Minute, + Timeout: d.Timeout(schema.TimeoutCreate), Delay: 5 * time.Second, MinTimeout: 3 * time.Second, } @@ -226,7 +231,7 @@ func resourceMonitorV2Delete(d *schema.ResourceData, meta interface{}) error { Pending: []string{"ACTIVE", "PENDING_DELETE"}, Target: []string{"DELETED"}, Refresh: waitForMonitorDelete(networkingClient, d.Id()), - Timeout: 2 * time.Minute, + Timeout: d.Timeout(schema.TimeoutDelete), Delay: 5 * time.Second, MinTimeout: 3 * time.Second, } diff --git a/builtin/providers/openstack/resource_openstack_lb_monitor_v2_test.go b/builtin/providers/openstack/resource_openstack_lb_monitor_v2_test.go index f293b4d678..a7f095301e 100644 --- a/builtin/providers/openstack/resource_openstack_lb_monitor_v2_test.go +++ b/builtin/providers/openstack/resource_openstack_lb_monitor_v2_test.go @@ -36,6 +36,24 @@ func TestAccLBV2Monitor_basic(t *testing.T) { }) } +func TestAccLBV2Monitor_timeout(t *testing.T) { + var monitor monitors.Monitor + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckLBV2MonitorDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: TestAccLBV2MonitorConfig_timeout, + Check: resource.ComposeTestCheckFunc( + testAccCheckLBV2MonitorExists(t, "openstack_lb_monitor_v2.monitor_1", &monitor), + ), + }, + }, + }) +} + func testAccCheckLBV2MonitorDestroy(s *terraform.State) error { config := testAccProvider.Meta().(*Config) networkingClient, err := config.networkingV2Client(OS_REGION_NAME) @@ -173,3 +191,50 @@ resource "openstack_lb_monitor_v2" "monitor_1" { pool_id = "${openstack_lb_pool_v2.pool_1.id}" } ` + +const 
TestAccLBV2MonitorConfig_timeout = ` +resource "openstack_networking_network_v2" "network_1" { + name = "network_1" + admin_state_up = "true" +} + +resource "openstack_networking_subnet_v2" "subnet_1" { + name = "subnet_1" + cidr = "192.168.199.0/24" + ip_version = 4 + network_id = "${openstack_networking_network_v2.network_1.id}" +} + +resource "openstack_lb_loadbalancer_v2" "loadbalancer_1" { + name = "loadbalancer_1" + vip_subnet_id = "${openstack_networking_subnet_v2.subnet_1.id}" +} + +resource "openstack_lb_listener_v2" "listener_1" { + name = "listener_1" + protocol = "HTTP" + protocol_port = 8080 + loadbalancer_id = "${openstack_lb_loadbalancer_v2.loadbalancer_1.id}" +} + +resource "openstack_lb_pool_v2" "pool_1" { + name = "pool_1" + protocol = "HTTP" + lb_method = "ROUND_ROBIN" + listener_id = "${openstack_lb_listener_v2.listener_1.id}" +} + +resource "openstack_lb_monitor_v2" "monitor_1" { + name = "monitor_1" + type = "PING" + delay = 20 + timeout = 10 + max_retries = 5 + pool_id = "${openstack_lb_pool_v2.pool_1.id}" + + timeouts { + create = "5m" + delete = "5m" + } +} +` diff --git a/builtin/providers/openstack/resource_openstack_lb_pool_v1.go b/builtin/providers/openstack/resource_openstack_lb_pool_v1.go index a49acf74d6..eb0436ddf6 100644 --- a/builtin/providers/openstack/resource_openstack_lb_pool_v1.go +++ b/builtin/providers/openstack/resource_openstack_lb_pool_v1.go @@ -26,6 +26,11 @@ func resourceLBPoolV1() *schema.Resource { State: schema.ImportStatePassthrough, }, + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(10 * time.Minute), + Delete: schema.DefaultTimeout(10 * time.Minute), + }, + Schema: map[string]*schema.Schema{ "region": &schema.Schema{ Type: schema.TypeString, @@ -150,7 +155,7 @@ func resourceLBPoolV1Create(d *schema.ResourceData, meta interface{}) error { Pending: []string{"PENDING_CREATE"}, Target: []string{"ACTIVE"}, Refresh: waitForLBPoolActive(networkingClient, p.ID), - Timeout: 2 * time.Minute, + Timeout: d.Timeout(schema.TimeoutCreate), Delay: 5 * time.Second, MinTimeout: 3 * time.Second, } @@ -331,7 +336,7 @@ func resourceLBPoolV1Delete(d *schema.ResourceData, meta interface{}) error { Pending: []string{"ACTIVE", "PENDING_DELETE"}, Target: []string{"DELETED"}, Refresh: waitForLBPoolDelete(networkingClient, d.Id()), - Timeout: 2 * time.Minute, + Timeout: d.Timeout(schema.TimeoutDelete), Delay: 5 * time.Second, MinTimeout: 3 * time.Second, } diff --git a/builtin/providers/openstack/resource_openstack_lb_pool_v1_test.go b/builtin/providers/openstack/resource_openstack_lb_pool_v1_test.go index 353d6df21c..c21a74b0d2 100644 --- a/builtin/providers/openstack/resource_openstack_lb_pool_v1_test.go +++ b/builtin/providers/openstack/resource_openstack_lb_pool_v1_test.go @@ -85,6 +85,25 @@ func TestAccLBV1Pool_fullstack(t *testing.T) { }) } +func TestAccLBV1Pool_timeout(t *testing.T) { + var pool pools.Pool + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckLBV1PoolDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccLBV1Pool_timeout, + Check: resource.ComposeTestCheckFunc( + testAccCheckLBV1PoolExists("openstack_lb_pool_v1.pool_1", &pool), + resource.TestCheckResourceAttr("openstack_lb_pool_v1.pool_1", "lb_provider", "haproxy"), + ), + }, + }, + }) +} + func testAccCheckLBV1PoolDestroy(s *terraform.State) error { config := testAccProvider.Meta().(*Config) networkingClient, err := config.networkingV2Client(OS_REGION_NAME) @@ 
-357,3 +376,29 @@ resource "openstack_lb_vip_v1" "vip_1" { pool_id = "${openstack_lb_pool_v1.pool_1.id}" } ` + +const testAccLBV1Pool_timeout = ` +resource "openstack_networking_network_v2" "network_1" { + name = "network_1" + admin_state_up = "true" +} + +resource "openstack_networking_subnet_v2" "subnet_1" { + cidr = "192.168.199.0/24" + ip_version = 4 + network_id = "${openstack_networking_network_v2.network_1.id}" +} + +resource "openstack_lb_pool_v1" "pool_1" { + name = "pool_1" + protocol = "HTTP" + lb_method = "ROUND_ROBIN" + lb_provider = "haproxy" + subnet_id = "${openstack_networking_subnet_v2.subnet_1.id}" + + timeouts { + create = "5m" + delete = "5m" + } +} +` diff --git a/builtin/providers/openstack/resource_openstack_lb_pool_v2.go b/builtin/providers/openstack/resource_openstack_lb_pool_v2.go index a9dc8dea47..73742c6686 100644 --- a/builtin/providers/openstack/resource_openstack_lb_pool_v2.go +++ b/builtin/providers/openstack/resource_openstack_lb_pool_v2.go @@ -19,6 +19,11 @@ func resourcePoolV2() *schema.Resource { Update: resourcePoolV2Update, Delete: resourcePoolV2Delete, + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(10 * time.Minute), + Delete: schema.DefaultTimeout(10 * time.Minute), + }, + Schema: map[string]*schema.Schema{ "region": &schema.Schema{ Type: schema.TypeString, @@ -174,7 +179,7 @@ func resourcePoolV2Create(d *schema.ResourceData, meta interface{}) error { Pending: []string{"PENDING_CREATE"}, Target: []string{"ACTIVE"}, Refresh: waitForPoolActive(networkingClient, pool.ID), - Timeout: 2 * time.Minute, + Timeout: d.Timeout(schema.TimeoutCreate), Delay: 5 * time.Second, MinTimeout: 3 * time.Second, } @@ -258,7 +263,7 @@ func resourcePoolV2Delete(d *schema.ResourceData, meta interface{}) error { Pending: []string{"ACTIVE", "PENDING_DELETE"}, Target: []string{"DELETED"}, Refresh: waitForPoolDelete(networkingClient, d.Id()), - Timeout: 2 * time.Minute, + Timeout: d.Timeout(schema.TimeoutDelete), Delay: 5 * time.Second, MinTimeout: 3 * time.Second, } diff --git a/builtin/providers/openstack/resource_openstack_lb_pool_v2_test.go b/builtin/providers/openstack/resource_openstack_lb_pool_v2_test.go index 286dda2a64..6af15374a1 100644 --- a/builtin/providers/openstack/resource_openstack_lb_pool_v2_test.go +++ b/builtin/providers/openstack/resource_openstack_lb_pool_v2_test.go @@ -33,6 +33,24 @@ func TestAccLBV2Pool_basic(t *testing.T) { }) } +func TestAccLBV2Pool_timeout(t *testing.T) { + var pool pools.Pool + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckLBV2PoolDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: TestAccLBV2PoolConfig_timeout, + Check: resource.ComposeTestCheckFunc( + testAccCheckLBV2PoolExists("openstack_lb_pool_v2.pool_1", &pool), + ), + }, + }, + }) +} + func testAccCheckLBV2PoolDestroy(s *terraform.State) error { config := testAccProvider.Meta().(*Config) networkingClient, err := config.networkingV2Client(OS_REGION_NAME) @@ -152,3 +170,41 @@ resource "openstack_lb_pool_v2" "pool_1" { listener_id = "${openstack_lb_listener_v2.listener_1.id}" } ` + +const TestAccLBV2PoolConfig_timeout = ` +resource "openstack_networking_network_v2" "network_1" { + name = "network_1" + admin_state_up = "true" +} + +resource "openstack_networking_subnet_v2" "subnet_1" { + name = "subnet_1" + cidr = "192.168.199.0/24" + ip_version = 4 + network_id = "${openstack_networking_network_v2.network_1.id}" +} + +resource 
"openstack_lb_loadbalancer_v2" "loadbalancer_1" { + name = "loadbalancer_1" + vip_subnet_id = "${openstack_networking_subnet_v2.subnet_1.id}" +} + +resource "openstack_lb_listener_v2" "listener_1" { + name = "listener_1" + protocol = "HTTP" + protocol_port = 8080 + loadbalancer_id = "${openstack_lb_loadbalancer_v2.loadbalancer_1.id}" +} + +resource "openstack_lb_pool_v2" "pool_1" { + name = "pool_1" + protocol = "HTTP" + lb_method = "ROUND_ROBIN" + listener_id = "${openstack_lb_listener_v2.listener_1.id}" + + timeouts { + create = "5m" + delete = "5m" + } +} +` diff --git a/builtin/providers/openstack/resource_openstack_lb_vip_v1.go b/builtin/providers/openstack/resource_openstack_lb_vip_v1.go index 39c935ff30..6e6d46d893 100644 --- a/builtin/providers/openstack/resource_openstack_lb_vip_v1.go +++ b/builtin/providers/openstack/resource_openstack_lb_vip_v1.go @@ -22,6 +22,11 @@ func resourceLBVipV1() *schema.Resource { State: schema.ImportStatePassthrough, }, + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(10 * time.Minute), + Delete: schema.DefaultTimeout(10 * time.Minute), + }, + Schema: map[string]*schema.Schema{ "region": &schema.Schema{ Type: schema.TypeString, @@ -139,7 +144,7 @@ func resourceLBVipV1Create(d *schema.ResourceData, meta interface{}) error { Pending: []string{"PENDING_CREATE"}, Target: []string{"ACTIVE"}, Refresh: waitForLBVIPActive(networkingClient, p.ID), - Timeout: 2 * time.Minute, + Timeout: d.Timeout(schema.TimeoutCreate), Delay: 5 * time.Second, MinTimeout: 3 * time.Second, } @@ -291,7 +296,7 @@ func resourceLBVipV1Delete(d *schema.ResourceData, meta interface{}) error { Pending: []string{"ACTIVE", "PENDING_DELETE"}, Target: []string{"DELETED"}, Refresh: waitForLBVIPDelete(networkingClient, d.Id()), - Timeout: 2 * time.Minute, + Timeout: d.Timeout(schema.TimeoutDelete), Delay: 5 * time.Second, MinTimeout: 3 * time.Second, } diff --git a/builtin/providers/openstack/resource_openstack_lb_vip_v1_test.go b/builtin/providers/openstack/resource_openstack_lb_vip_v1_test.go index 2d253c4ba4..8fda99c835 100644 --- a/builtin/providers/openstack/resource_openstack_lb_vip_v1_test.go +++ b/builtin/providers/openstack/resource_openstack_lb_vip_v1_test.go @@ -34,6 +34,24 @@ func TestAccLBV1VIP_basic(t *testing.T) { }) } +func TestAccLBV1VIP_timeout(t *testing.T) { + var vip vips.VirtualIP + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckLBV1VIPDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccLBV1VIP_timeout, + Check: resource.ComposeTestCheckFunc( + testAccCheckLBV1VIPExists("openstack_lb_vip_v1.vip_1", &vip), + ), + }, + }, + }) +} + func testAccCheckLBV1VIPDestroy(s *terraform.State) error { config := testAccProvider.Meta().(*Config) networkingClient, err := config.networkingV2Client(OS_REGION_NAME) @@ -152,3 +170,41 @@ resource "openstack_lb_vip_v1" "vip_1" { } } ` + +const testAccLBV1VIP_timeout = ` +resource "openstack_networking_network_v2" "network_1" { + name = "network_1" + admin_state_up = "true" +} + +resource "openstack_networking_subnet_v2" "subnet_1" { + cidr = "192.168.199.0/24" + ip_version = 4 + network_id = "${openstack_networking_network_v2.network_1.id}" +} + +resource "openstack_lb_pool_v1" "pool_1" { + name = "pool_1" + protocol = "HTTP" + lb_method = "ROUND_ROBIN" + subnet_id = "${openstack_networking_subnet_v2.subnet_1.id}" +} + +resource "openstack_lb_vip_v1" "vip_1" { + name = "vip_1" + protocol = "HTTP" + port = 
80 + admin_state_up = true + pool_id = "${openstack_lb_pool_v1.pool_1.id}" + subnet_id = "${openstack_networking_subnet_v2.subnet_1.id}" + + persistence { + type = "SOURCE_IP" + } + + timeouts { + create = "5m" + delete = "5m" + } +} +` diff --git a/builtin/providers/openstack/resource_openstack_networking_floatingip_v2.go b/builtin/providers/openstack/resource_openstack_networking_floatingip_v2.go index b22301c93f..9712dd1562 100644 --- a/builtin/providers/openstack/resource_openstack_networking_floatingip_v2.go +++ b/builtin/providers/openstack/resource_openstack_networking_floatingip_v2.go @@ -24,6 +24,11 @@ func resourceNetworkingFloatingIPV2() *schema.Resource { State: schema.ImportStatePassthrough, }, + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(10 * time.Minute), + Delete: schema.DefaultTimeout(10 * time.Minute), + }, + Schema: map[string]*schema.Schema{ "region": &schema.Schema{ Type: schema.TypeString, @@ -101,7 +106,7 @@ func resourceNetworkFloatingIPV2Create(d *schema.ResourceData, meta interface{}) stateConf := &resource.StateChangeConf{ Target: []string{"ACTIVE"}, Refresh: waitForFloatingIPActive(networkingClient, floatingIP.ID), - Timeout: 2 * time.Minute, + Timeout: d.Timeout(schema.TimeoutCreate), Delay: 5 * time.Second, MinTimeout: 3 * time.Second, } @@ -175,7 +180,7 @@ func resourceNetworkFloatingIPV2Delete(d *schema.ResourceData, meta interface{}) Pending: []string{"ACTIVE"}, Target: []string{"DELETED"}, Refresh: waitForFloatingIPDelete(networkingClient, d.Id()), - Timeout: 2 * time.Minute, + Timeout: d.Timeout(schema.TimeoutDelete), Delay: 5 * time.Second, MinTimeout: 3 * time.Second, } diff --git a/builtin/providers/openstack/resource_openstack_networking_floatingip_v2_test.go b/builtin/providers/openstack/resource_openstack_networking_floatingip_v2_test.go index b8b31e8b91..1eefea90be 100644 --- a/builtin/providers/openstack/resource_openstack_networking_floatingip_v2_test.go +++ b/builtin/providers/openstack/resource_openstack_networking_floatingip_v2_test.go @@ -69,6 +69,24 @@ func TestAccNetworkingV2FloatingIP_fixedip_bind(t *testing.T) { }) } +func TestAccNetworkingV2FloatingIP_timeout(t *testing.T) { + var fip floatingips.FloatingIP + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckNetworkingV2FloatingIPDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccNetworkingV2FloatingIP_timeout, + Check: resource.ComposeTestCheckFunc( + testAccCheckNetworkingV2FloatingIPExists("openstack_networking_floatingip_v2.fip_1", &fip), + ), + }, + }, + }) +} + func testAccCheckNetworkingV2FloatingIPDestroy(s *terraform.State) error { config := testAccProvider.Meta().(*Config) networkClient, err := config.networkingV2Client(OS_REGION_NAME) @@ -218,3 +236,12 @@ resource "openstack_networking_floatingip_v2" "fip_1" { fixed_ip = "${openstack_networking_port_v2.port_1.fixed_ip.1.ip_address}" } `, OS_EXTGW_ID, OS_POOL_NAME) + +const testAccNetworkingV2FloatingIP_timeout = ` +resource "openstack_networking_floatingip_v2" "fip_1" { + timeouts { + create = "5m" + delete = "5m" + } +} +` diff --git a/builtin/providers/openstack/resource_openstack_networking_network_v2.go b/builtin/providers/openstack/resource_openstack_networking_network_v2.go index 96ff47d728..9f1af3e28f 100644 --- a/builtin/providers/openstack/resource_openstack_networking_network_v2.go +++ b/builtin/providers/openstack/resource_openstack_networking_network_v2.go @@ -23,6 +23,11 @@ 
func resourceNetworkingNetworkV2() *schema.Resource { State: schema.ImportStatePassthrough, }, + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(10 * time.Minute), + Delete: schema.DefaultTimeout(10 * time.Minute), + }, + Schema: map[string]*schema.Schema{ "region": &schema.Schema{ Type: schema.TypeString, @@ -108,7 +113,7 @@ func resourceNetworkingNetworkV2Create(d *schema.ResourceData, meta interface{}) Pending: []string{"BUILD"}, Target: []string{"ACTIVE"}, Refresh: waitForNetworkActive(networkingClient, n.ID), - Timeout: 2 * time.Minute, + Timeout: d.Timeout(schema.TimeoutCreate), Delay: 5 * time.Second, MinTimeout: 3 * time.Second, } @@ -196,7 +201,7 @@ func resourceNetworkingNetworkV2Delete(d *schema.ResourceData, meta interface{}) Pending: []string{"ACTIVE"}, Target: []string{"DELETED"}, Refresh: waitForNetworkDelete(networkingClient, d.Id()), - Timeout: 2 * time.Minute, + Timeout: d.Timeout(schema.TimeoutDelete), Delay: 5 * time.Second, MinTimeout: 3 * time.Second, } diff --git a/builtin/providers/openstack/resource_openstack_networking_network_v2_test.go b/builtin/providers/openstack/resource_openstack_networking_network_v2_test.go index 14415f884d..b2e9ac32b2 100644 --- a/builtin/providers/openstack/resource_openstack_networking_network_v2_test.go +++ b/builtin/providers/openstack/resource_openstack_networking_network_v2_test.go @@ -90,6 +90,24 @@ func TestAccNetworkingV2Network_fullstack(t *testing.T) { }) } +func TestAccNetworkingV2Network_timeout(t *testing.T) { + var network networks.Network + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckNetworkingV2NetworkDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccNetworkingV2Network_timeout, + Check: resource.ComposeTestCheckFunc( + testAccCheckNetworkingV2NetworkExists("openstack_networking_network_v2.network_1", &network), + ), + }, + }, + }) +} + func testAccCheckNetworkingV2NetworkDestroy(s *terraform.State) error { config := testAccProvider.Meta().(*Config) networkingClient, err := config.networkingV2Client(OS_REGION_NAME) @@ -225,3 +243,15 @@ resource "openstack_compute_instance_v2" "instance_1" { } } ` + +const testAccNetworkingV2Network_timeout = ` +resource "openstack_networking_network_v2" "network_1" { + name = "network_1" + admin_state_up = "true" + + timeouts { + create = "5m" + delete = "5m" + } +} +` diff --git a/builtin/providers/openstack/resource_openstack_networking_port_v2.go b/builtin/providers/openstack/resource_openstack_networking_port_v2.go index 6ad80d4a33..aea9cb8ddb 100644 --- a/builtin/providers/openstack/resource_openstack_networking_port_v2.go +++ b/builtin/providers/openstack/resource_openstack_networking_port_v2.go @@ -24,6 +24,11 @@ func resourceNetworkingPortV2() *schema.Resource { State: schema.ImportStatePassthrough, }, + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(10 * time.Minute), + Delete: schema.DefaultTimeout(10 * time.Minute), + }, + Schema: map[string]*schema.Schema{ "region": &schema.Schema{ Type: schema.TypeString, @@ -80,7 +85,7 @@ func resourceNetworkingPortV2() *schema.Resource { Computed: true, }, "fixed_ip": &schema.Schema{ - Type: schema.TypeList, + Type: schema.TypeSet, Optional: true, ForceNew: false, Computed: true, @@ -162,7 +167,7 @@ func resourceNetworkingPortV2Create(d *schema.ResourceData, meta interface{}) er stateConf := &resource.StateChangeConf{ Target: []string{"ACTIVE"}, Refresh: 
waitForNetworkPortActive(networkingClient, p.ID), - Timeout: 2 * time.Minute, + Timeout: d.Timeout(schema.TimeoutCreate), Delay: 5 * time.Second, MinTimeout: 3 * time.Second, } @@ -280,7 +285,7 @@ func resourceNetworkingPortV2Delete(d *schema.ResourceData, meta interface{}) er Pending: []string{"ACTIVE"}, Target: []string{"DELETED"}, Refresh: waitForNetworkPortDelete(networkingClient, d.Id()), - Timeout: 2 * time.Minute, + Timeout: d.Timeout(schema.TimeoutDelete), Delay: 5 * time.Second, MinTimeout: 3 * time.Second, } @@ -304,7 +309,7 @@ func resourcePortSecurityGroupsV2(d *schema.ResourceData) []string { } func resourcePortFixedIpsV2(d *schema.ResourceData) interface{} { - rawIP := d.Get("fixed_ip").([]interface{}) + rawIP := d.Get("fixed_ip").(*schema.Set).List() if len(rawIP) == 0 { return nil diff --git a/builtin/providers/openstack/resource_openstack_networking_port_v2_test.go b/builtin/providers/openstack/resource_openstack_networking_port_v2_test.go index 7eb2bdb07b..ec731fe7ae 100644 --- a/builtin/providers/openstack/resource_openstack_networking_port_v2_test.go +++ b/builtin/providers/openstack/resource_openstack_networking_port_v2_test.go @@ -80,6 +80,50 @@ func TestAccNetworkingV2Port_allowedAddressPairs(t *testing.T) { }) } +func TestAccNetworkingV2Port_multipleFixedIPs(t *testing.T) { + var network networks.Network + var port ports.Port + var subnet subnets.Subnet + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckNetworkingV2PortDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccNetworkingV2Port_multipleFixedIPs, + Check: resource.ComposeTestCheckFunc( + testAccCheckNetworkingV2SubnetExists("openstack_networking_subnet_v2.subnet_1", &subnet), + testAccCheckNetworkingV2NetworkExists("openstack_networking_network_v2.network_1", &network), + testAccCheckNetworkingV2PortExists("openstack_networking_port_v2.port_1", &port), + ), + }, + }, + }) +} + +func TestAccNetworkingV2Port_timeout(t *testing.T) { + var network networks.Network + var port ports.Port + var subnet subnets.Subnet + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckNetworkingV2PortDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccNetworkingV2Port_timeout, + Check: resource.ComposeTestCheckFunc( + testAccCheckNetworkingV2SubnetExists("openstack_networking_subnet_v2.subnet_1", &subnet), + testAccCheckNetworkingV2NetworkExists("openstack_networking_network_v2.network_1", &network), + testAccCheckNetworkingV2PortExists("openstack_networking_port_v2.port_1", &port), + ), + }, + }, + }) +} + func testAccCheckNetworkingV2PortDestroy(s *terraform.State) error { config := testAccProvider.Meta().(*Config) networkingClient, err := config.networkingV2Client(OS_REGION_NAME) @@ -247,3 +291,68 @@ resource "openstack_networking_port_v2" "instance_port" { } } ` + +const testAccNetworkingV2Port_multipleFixedIPs = ` +resource "openstack_networking_network_v2" "network_1" { + name = "network_1" + admin_state_up = "true" +} + +resource "openstack_networking_subnet_v2" "subnet_1" { + name = "subnet_1" + cidr = "192.168.199.0/24" + ip_version = 4 + network_id = "${openstack_networking_network_v2.network_1.id}" +} + +resource "openstack_networking_port_v2" "port_1" { + name = "port_1" + admin_state_up = "true" + network_id = "${openstack_networking_network_v2.network_1.id}" + + fixed_ip { + subnet_id = 
"${openstack_networking_subnet_v2.subnet_1.id}" + ip_address = "192.168.199.23" + } + + fixed_ip { + subnet_id = "${openstack_networking_subnet_v2.subnet_1.id}" + ip_address = "192.168.199.20" + } + + fixed_ip { + subnet_id = "${openstack_networking_subnet_v2.subnet_1.id}" + ip_address = "192.168.199.40" + } +} +` + +const testAccNetworkingV2Port_timeout = ` +resource "openstack_networking_network_v2" "network_1" { + name = "network_1" + admin_state_up = "true" +} + +resource "openstack_networking_subnet_v2" "subnet_1" { + name = "subnet_1" + cidr = "192.168.199.0/24" + ip_version = 4 + network_id = "${openstack_networking_network_v2.network_1.id}" +} + +resource "openstack_networking_port_v2" "port_1" { + name = "port_1" + admin_state_up = "true" + network_id = "${openstack_networking_network_v2.network_1.id}" + + fixed_ip { + subnet_id = "${openstack_networking_subnet_v2.subnet_1.id}" + ip_address = "192.168.199.23" + } + + timeouts { + create = "5m" + delete = "5m" + } +} +` diff --git a/builtin/providers/openstack/resource_openstack_networking_router_interface_v2.go b/builtin/providers/openstack/resource_openstack_networking_router_interface_v2.go index 6ca90e8877..4a4ae8685f 100644 --- a/builtin/providers/openstack/resource_openstack_networking_router_interface_v2.go +++ b/builtin/providers/openstack/resource_openstack_networking_router_interface_v2.go @@ -19,6 +19,11 @@ func resourceNetworkingRouterInterfaceV2() *schema.Resource { Read: resourceNetworkingRouterInterfaceV2Read, Delete: resourceNetworkingRouterInterfaceV2Delete, + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(10 * time.Minute), + Delete: schema.DefaultTimeout(10 * time.Minute), + }, + Schema: map[string]*schema.Schema{ "region": &schema.Schema{ Type: schema.TypeString, @@ -70,7 +75,7 @@ func resourceNetworkingRouterInterfaceV2Create(d *schema.ResourceData, meta inte Pending: []string{"BUILD", "PENDING_CREATE", "PENDING_UPDATE"}, Target: []string{"ACTIVE"}, Refresh: waitForRouterInterfaceActive(networkingClient, n.PortID), - Timeout: 2 * time.Minute, + Timeout: d.Timeout(schema.TimeoutCreate), Delay: 5 * time.Second, MinTimeout: 3 * time.Second, } @@ -115,7 +120,7 @@ func resourceNetworkingRouterInterfaceV2Delete(d *schema.ResourceData, meta inte Pending: []string{"ACTIVE"}, Target: []string{"DELETED"}, Refresh: waitForRouterInterfaceDelete(networkingClient, d), - Timeout: 5 * time.Minute, + Timeout: d.Timeout(schema.TimeoutDelete), Delay: 5 * time.Second, MinTimeout: 3 * time.Second, } diff --git a/builtin/providers/openstack/resource_openstack_networking_router_interface_v2_test.go b/builtin/providers/openstack/resource_openstack_networking_router_interface_v2_test.go index 0f41db18bb..c6289050ca 100644 --- a/builtin/providers/openstack/resource_openstack_networking_router_interface_v2_test.go +++ b/builtin/providers/openstack/resource_openstack_networking_router_interface_v2_test.go @@ -61,6 +61,29 @@ func TestAccNetworkingV2RouterInterface_basic_port(t *testing.T) { }) } +func TestAccNetworkingV2RouterInterface_timeout(t *testing.T) { + var network networks.Network + var router routers.Router + var subnet subnets.Subnet + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckNetworkingV2RouterInterfaceDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccNetworkingV2RouterInterface_timeout, + Check: resource.ComposeTestCheckFunc( + 
testAccCheckNetworkingV2NetworkExists("openstack_networking_network_v2.network_1", &network), + testAccCheckNetworkingV2SubnetExists("openstack_networking_subnet_v2.subnet_1", &subnet), + testAccCheckNetworkingV2RouterExists("openstack_networking_router_v2.router_1", &router), + testAccCheckNetworkingV2RouterInterfaceExists("openstack_networking_router_interface_v2.int_1"), + ), + }, + }, + }) +} + func testAccCheckNetworkingV2RouterInterfaceDestroy(s *terraform.State) error { config := testAccProvider.Meta().(*Config) networkingClient, err := config.networkingV2Client(OS_REGION_NAME) @@ -168,3 +191,31 @@ resource "openstack_networking_port_v2" "port_1" { } } ` + +const testAccNetworkingV2RouterInterface_timeout = ` +resource "openstack_networking_router_v2" "router_1" { + name = "router_1" + admin_state_up = "true" +} + +resource "openstack_networking_router_interface_v2" "int_1" { + subnet_id = "${openstack_networking_subnet_v2.subnet_1.id}" + router_id = "${openstack_networking_router_v2.router_1.id}" + + timeouts { + create = "5m" + delete = "5m" + } +} + +resource "openstack_networking_network_v2" "network_1" { + name = "network_1" + admin_state_up = "true" +} + +resource "openstack_networking_subnet_v2" "subnet_1" { + cidr = "192.168.199.0/24" + ip_version = 4 + network_id = "${openstack_networking_network_v2.network_1.id}" +} +` diff --git a/builtin/providers/openstack/resource_openstack_networking_router_v2.go b/builtin/providers/openstack/resource_openstack_networking_router_v2.go index efbab162e9..d979a53e6c 100644 --- a/builtin/providers/openstack/resource_openstack_networking_router_v2.go +++ b/builtin/providers/openstack/resource_openstack_networking_router_v2.go @@ -19,6 +19,11 @@ func resourceNetworkingRouterV2() *schema.Resource { Update: resourceNetworkingRouterV2Update, Delete: resourceNetworkingRouterV2Delete, + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(10 * time.Minute), + Delete: schema.DefaultTimeout(10 * time.Minute), + }, + Schema: map[string]*schema.Schema{ "region": &schema.Schema{ Type: schema.TypeString, @@ -108,7 +113,7 @@ func resourceNetworkingRouterV2Create(d *schema.ResourceData, meta interface{}) Pending: []string{"BUILD", "PENDING_CREATE", "PENDING_UPDATE"}, Target: []string{"ACTIVE"}, Refresh: waitForRouterActive(networkingClient, n.ID), - Timeout: 10 * time.Minute, + Timeout: d.Timeout(schema.TimeoutCreate), Delay: 5 * time.Second, MinTimeout: 3 * time.Second, } @@ -198,7 +203,7 @@ func resourceNetworkingRouterV2Delete(d *schema.ResourceData, meta interface{}) Pending: []string{"ACTIVE"}, Target: []string{"DELETED"}, Refresh: waitForRouterDelete(networkingClient, d.Id()), - Timeout: 10 * time.Minute, + Timeout: d.Timeout(schema.TimeoutDelete), Delay: 5 * time.Second, MinTimeout: 3 * time.Second, } diff --git a/builtin/providers/openstack/resource_openstack_networking_router_v2_test.go b/builtin/providers/openstack/resource_openstack_networking_router_v2_test.go index bff9c4fbd7..2c08f9b924 100644 --- a/builtin/providers/openstack/resource_openstack_networking_router_v2_test.go +++ b/builtin/providers/openstack/resource_openstack_networking_router_v2_test.go @@ -60,6 +60,24 @@ func TestAccNetworkingV2Router_update_external_gw(t *testing.T) { }) } +func TestAccNetworkingV2Router_timeout(t *testing.T) { + var router routers.Router + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckNetworkingV2RouterDestroy, + Steps: []resource.TestStep{ + 
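// A single apply step exercises the create timeout; the destroy at the end of the test exercises the delete timeout. +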
resource.TestStep{ + Config: testAccNetworkingV2Router_timeout, + Check: resource.ComposeTestCheckFunc( + testAccCheckNetworkingV2RouterExists("openstack_networking_router_v2.router_1", &router), + ), + }, + }, + }) +} + func testAccCheckNetworkingV2RouterDestroy(s *terraform.State) error { config := testAccProvider.Meta().(*Config) networkingClient, err := config.networkingV2Client(OS_REGION_NAME) @@ -145,3 +163,16 @@ resource "openstack_networking_router_v2" "router_1" { external_gateway = "%s" } `, OS_EXTGW_ID) + +const testAccNetworkingV2Router_timeout = ` +resource "openstack_networking_router_v2" "router_1" { + name = "router_1" + admin_state_up = "true" + distributed = "false" + + timeouts { + create = "5m" + delete = "5m" + } +} +` diff --git a/builtin/providers/openstack/resource_openstack_networking_secgroup_rule_v2.go b/builtin/providers/openstack/resource_openstack_networking_secgroup_rule_v2.go index bb7017c401..39a675967e 100644 --- a/builtin/providers/openstack/resource_openstack_networking_secgroup_rule_v2.go +++ b/builtin/providers/openstack/resource_openstack_networking_secgroup_rule_v2.go @@ -22,6 +22,10 @@ func resourceNetworkingSecGroupRuleV2() *schema.Resource { State: schema.ImportStatePassthrough, }, + Timeouts: &schema.ResourceTimeout{ + Delete: schema.DefaultTimeout(10 * time.Minute), + }, + Schema: map[string]*schema.Schema{ "region": &schema.Schema{ Type: schema.TypeString, @@ -185,7 +189,7 @@ func resourceNetworkingSecGroupRuleV2Delete(d *schema.ResourceData, meta interfa Pending: []string{"ACTIVE"}, Target: []string{"DELETED"}, Refresh: waitForSecGroupRuleDelete(networkingClient, d.Id()), - Timeout: 2 * time.Minute, + Timeout: d.Timeout(schema.TimeoutDelete), Delay: 5 * time.Second, MinTimeout: 3 * time.Second, } diff --git a/builtin/providers/openstack/resource_openstack_networking_secgroup_rule_v2_test.go b/builtin/providers/openstack/resource_openstack_networking_secgroup_rule_v2_test.go index bdc787c063..dae6dc3f7f 100644 --- a/builtin/providers/openstack/resource_openstack_networking_secgroup_rule_v2_test.go +++ b/builtin/providers/openstack/resource_openstack_networking_secgroup_rule_v2_test.go @@ -63,6 +63,28 @@ func TestAccNetworkingV2SecGroupRule_lowerCaseCIDR(t *testing.T) { }) } +func TestAccNetworkingV2SecGroupRule_timeout(t *testing.T) { + var secgroup_1 groups.SecGroup + var secgroup_2 groups.SecGroup + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckNetworkingV2SecGroupRuleDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccNetworkingV2SecGroupRule_timeout, + Check: resource.ComposeTestCheckFunc( + testAccCheckNetworkingV2SecGroupExists( + "openstack_networking_secgroup_v2.secgroup_1", &secgroup_1), + testAccCheckNetworkingV2SecGroupExists( + "openstack_networking_secgroup_v2.secgroup_2", &secgroup_2), + ), + }, + }, + }) +} + func testAccCheckNetworkingV2SecGroupRuleDestroy(s *terraform.State) error { config := testAccProvider.Meta().(*Config) networkingClient, err := config.networkingV2Client(OS_REGION_NAME) @@ -164,3 +186,43 @@ resource "openstack_networking_secgroup_rule_v2" "secgroup_rule_1" { security_group_id = "${openstack_networking_secgroup_v2.secgroup_1.id}" } ` + +const testAccNetworkingV2SecGroupRule_timeout = ` +resource "openstack_networking_secgroup_v2" "secgroup_1" { + name = "secgroup_1" + description = "terraform security group rule acceptance test" +} + +resource "openstack_networking_secgroup_v2" "secgroup_2" 
{ + name = "secgroup_2" + description = "terraform security group rule acceptance test" +} + +resource "openstack_networking_secgroup_rule_v2" "secgroup_rule_1" { + direction = "ingress" + ethertype = "IPv4" + port_range_max = 22 + port_range_min = 22 + protocol = "tcp" + remote_ip_prefix = "0.0.0.0/0" + security_group_id = "${openstack_networking_secgroup_v2.secgroup_1.id}" + + timeouts { + create = "5m" + } +} + +resource "openstack_networking_secgroup_rule_v2" "secgroup_rule_2" { + direction = "ingress" + ethertype = "IPv4" + port_range_max = 80 + port_range_min = 80 + protocol = "tcp" + remote_group_id = "${openstack_networking_secgroup_v2.secgroup_1.id}" + security_group_id = "${openstack_networking_secgroup_v2.secgroup_2.id}" + + timeouts { + delete = "5m" + } +} +` diff --git a/builtin/providers/openstack/resource_openstack_networking_secgroup_v2.go b/builtin/providers/openstack/resource_openstack_networking_secgroup_v2.go index 0023193ab4..f76d24c577 100644 --- a/builtin/providers/openstack/resource_openstack_networking_secgroup_v2.go +++ b/builtin/providers/openstack/resource_openstack_networking_secgroup_v2.go @@ -22,6 +22,10 @@ func resourceNetworkingSecGroupV2() *schema.Resource { State: schema.ImportStatePassthrough, }, + Timeouts: &schema.ResourceTimeout{ + Delete: schema.DefaultTimeout(10 * time.Minute), + }, + Schema: map[string]*schema.Schema{ "region": &schema.Schema{ Type: schema.TypeString, @@ -130,7 +134,7 @@ func resourceNetworkingSecGroupV2Delete(d *schema.ResourceData, meta interface{} Pending: []string{"ACTIVE"}, Target: []string{"DELETED"}, Refresh: waitForSecGroupDelete(networkingClient, d.Id()), - Timeout: 2 * time.Minute, + Timeout: d.Timeout(schema.TimeoutDelete), Delay: 5 * time.Second, MinTimeout: 3 * time.Second, } diff --git a/builtin/providers/openstack/resource_openstack_networking_secgroup_v2_test.go b/builtin/providers/openstack/resource_openstack_networking_secgroup_v2_test.go index b4ac4b43e4..aa6e7fff85 100644 --- a/builtin/providers/openstack/resource_openstack_networking_secgroup_v2_test.go +++ b/builtin/providers/openstack/resource_openstack_networking_secgroup_v2_test.go @@ -57,6 +57,25 @@ func TestAccNetworkingV2SecGroup_noDefaultRules(t *testing.T) { }) } +func TestAccNetworkingV2SecGroup_timeout(t *testing.T) { + var security_group groups.SecGroup + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckNetworkingV2SecGroupDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccNetworkingV2SecGroup_timeout, + Check: resource.ComposeTestCheckFunc( + testAccCheckNetworkingV2SecGroupExists( + "openstack_networking_secgroup_v2.secgroup_1", &security_group), + ), + }, + }, + }) +} + func testAccCheckNetworkingV2SecGroupDestroy(s *terraform.State) error { config := testAccProvider.Meta().(*Config) networkingClient, err := config.networkingV2Client(OS_REGION_NAME) @@ -143,3 +162,14 @@ resource "openstack_networking_secgroup_v2" "secgroup_1" { delete_default_rules = true } ` + +const testAccNetworkingV2SecGroup_timeout = ` +resource "openstack_networking_secgroup_v2" "secgroup_1" { + name = "security_group" + description = "terraform security group acceptance test" + + timeouts { + delete = "5m" + } +} +` diff --git a/builtin/providers/openstack/resource_openstack_networking_subnet_v2.go b/builtin/providers/openstack/resource_openstack_networking_subnet_v2.go index cbe88877ee..e0b2fcab96 100644 --- 
a/builtin/providers/openstack/resource_openstack_networking_subnet_v2.go +++ b/builtin/providers/openstack/resource_openstack_networking_subnet_v2.go @@ -22,6 +22,11 @@ func resourceNetworkingSubnetV2() *schema.Resource { State: schema.ImportStatePassthrough, }, + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(10 * time.Minute), + Delete: schema.DefaultTimeout(10 * time.Minute), + }, + Schema: map[string]*schema.Schema{ "region": &schema.Schema{ Type: schema.TypeString, @@ -178,7 +183,7 @@ func resourceNetworkingSubnetV2Create(d *schema.ResourceData, meta interface{}) stateConf := &resource.StateChangeConf{ Target: []string{"ACTIVE"}, Refresh: waitForSubnetActive(networkingClient, s.ID), - Timeout: 10 * time.Minute, + Timeout: d.Timeout(schema.TimeoutCreate), Delay: 5 * time.Second, MinTimeout: 3 * time.Second, } @@ -309,7 +314,7 @@ func resourceNetworkingSubnetV2Delete(d *schema.ResourceData, meta interface{}) Pending: []string{"ACTIVE"}, Target: []string{"DELETED"}, Refresh: waitForSubnetDelete(networkingClient, d.Id()), - Timeout: 10 * time.Minute, + Timeout: d.Timeout(schema.TimeoutDelete), Delay: 5 * time.Second, MinTimeout: 3 * time.Second, } diff --git a/builtin/providers/openstack/resource_openstack_networking_subnet_v2_test.go b/builtin/providers/openstack/resource_openstack_networking_subnet_v2_test.go index 942d30755a..1c9d645c1c 100644 --- a/builtin/providers/openstack/resource_openstack_networking_subnet_v2_test.go +++ b/builtin/providers/openstack/resource_openstack_networking_subnet_v2_test.go @@ -119,6 +119,24 @@ func TestAccNetworkingV2Subnet_impliedGateway(t *testing.T) { }) } +func TestAccNetworkingV2Subnet_timeout(t *testing.T) { + var subnet subnets.Subnet + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckNetworkingV2SubnetDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccNetworkingV2Subnet_timeout, + Check: resource.ComposeTestCheckFunc( + testAccCheckNetworkingV2SubnetExists("openstack_networking_subnet_v2.subnet_1", &subnet), + ), + }, + }, + }) +} + func testAccCheckNetworkingV2SubnetDestroy(s *terraform.State) error { config := testAccProvider.Meta().(*Config) networkingClient, err := config.networkingV2Client(OS_REGION_NAME) @@ -262,3 +280,25 @@ resource "openstack_networking_subnet_v2" "subnet_1" { network_id = "${openstack_networking_network_v2.network_1.id}" } ` + +const testAccNetworkingV2Subnet_timeout = ` +resource "openstack_networking_network_v2" "network_1" { + name = "network_1" + admin_state_up = "true" +} + +resource "openstack_networking_subnet_v2" "subnet_1" { + cidr = "192.168.199.0/24" + network_id = "${openstack_networking_network_v2.network_1.id}" + + allocation_pools { + start = "192.168.199.100" + end = "192.168.199.200" + } + + timeouts { + create = "5m" + delete = "5m" + } +} +` diff --git a/builtin/providers/packet/resource_packet_ssh_key_test.go b/builtin/providers/packet/resource_packet_ssh_key_test.go index 43cd4a54b0..cfd85ae1ab 100644 --- a/builtin/providers/packet/resource_packet_ssh_key_test.go +++ b/builtin/providers/packet/resource_packet_ssh_key_test.go @@ -5,6 +5,7 @@ import ( "strings" "testing" + "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" "github.com/packethost/packngo" @@ -12,19 +13,19 @@ import ( func TestAccPacketSSHKey_Basic(t *testing.T) { var key packngo.SSHKey + rInt := acctest.RandInt() 
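+ // The random suffix above keeps repeated acceptance runs from colliding on the key name.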
resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckPacketSSHKeyDestroy, Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccCheckPacketSSHKeyConfig_basic, + { + Config: testAccCheckPacketSSHKeyConfig_basic(rInt), Check: resource.ComposeTestCheckFunc( testAccCheckPacketSSHKeyExists("packet_ssh_key.foobar", &key), - testAccCheckPacketSSHKeyAttributes(&key), resource.TestCheckResourceAttr( - "packet_ssh_key.foobar", "name", "foobar"), + "packet_ssh_key.foobar", "name", fmt.Sprintf("foobar-%d", rInt)), resource.TestCheckResourceAttr( "packet_ssh_key.foobar", "public_key", testAccValidPublicKey), ), @@ -48,15 +49,6 @@ func testAccCheckPacketSSHKeyDestroy(s *terraform.State) error { return nil } -func testAccCheckPacketSSHKeyAttributes(key *packngo.SSHKey) resource.TestCheckFunc { - return func(s *terraform.State) error { - if key.Label != "foobar" { - return fmt.Errorf("Bad name: %s", key.Label) - } - return nil - } -} - func testAccCheckPacketSSHKeyExists(n string, key *packngo.SSHKey) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] @@ -84,11 +76,13 @@ func testAccCheckPacketSSHKeyExists(n string, key *packngo.SSHKey) resource.Test } } -var testAccCheckPacketSSHKeyConfig_basic = fmt.Sprintf(` +func testAccCheckPacketSSHKeyConfig_basic(rInt int) string { + return fmt.Sprintf(` resource "packet_ssh_key" "foobar" { - name = "foobar" + name = "foobar-%d" public_key = "%s" -}`, testAccValidPublicKey) +}`, rInt, testAccValidPublicKey) +} var testAccValidPublicKey = strings.TrimSpace(` ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCKVmnMOlHKcZK8tpt3MP1lqOLAcqcJzhsvJcjscgVERRN7/9484SOBJ3HSKxxNG5JN8owAjy5f9yYwcUg+JaUVuytn5Pv3aeYROHGGg+5G346xaq3DAwX6Y5ykr2fvjObgncQBnuU5KHWCECO/4h8uWuwh/kfniXPVjFToc+gnkqA+3RKpAecZhFXwfalQ9mMuYGFxn+fwn8cYEApsJbsEmb0iJwPiZ5hjFC8wREuiTlhPHDgkBLOiycd20op2nXzDbHfCHInquEe/gYxEitALONxm0swBOwJZwlTDOB7C6y2dzlrtxr1L59m7pCkWI4EtTRLvleehBoj3u7jB4usR diff --git a/builtin/providers/pagerduty/config.go b/builtin/providers/pagerduty/config.go index 00f55e71cb..d8b083bfc9 100644 --- a/builtin/providers/pagerduty/config.go +++ b/builtin/providers/pagerduty/config.go @@ -1,6 +1,7 @@ package pagerduty import ( + "fmt" "log" "github.com/PagerDuty/go-pagerduty" @@ -8,13 +9,40 @@ import ( // Config defines the configuration options for the PagerDuty client type Config struct { + // The PagerDuty API V2 token Token string + + // Skip validation of the token against the PagerDuty API + SkipCredsValidation bool } +const invalidCreds = ` + +No valid credentials found for PagerDuty provider. +Please see https://www.terraform.io/docs/providers/pagerduty/index.html +for more information on providing credentials for this provider. 
+` + // Client returns a new PagerDuty client func (c *Config) Client() (*pagerduty.Client, error) { + // Validate that the PagerDuty token is set + if c.Token == "" { + return nil, fmt.Errorf(invalidCreds) + } + client := pagerduty.NewClient(c.Token) + if !c.SkipCredsValidation { + // Validate the credentials by calling the abilities endpoint, + // if we get a 401 response back we return an error to the user + if _, err := client.ListAbilities(); err != nil { + if isUnauthorized(err) { + return nil, fmt.Errorf(fmt.Sprintf("%s\n%s", err, invalidCreds)) + } + return nil, err + } + } + log.Printf("[INFO] PagerDuty client configured") return client, nil diff --git a/builtin/providers/pagerduty/config_test.go b/builtin/providers/pagerduty/config_test.go new file mode 100644 index 0000000000..4f43d736cd --- /dev/null +++ b/builtin/providers/pagerduty/config_test.go @@ -0,0 +1,28 @@ +package pagerduty + +import ( + "testing" +) + +// Test config with an empty token +func TestConfigEmptyToken(t *testing.T) { + config := Config{ + Token: "", + } + + if _, err := config.Client(); err == nil { + t.Fatalf("expected error, but got nil") + } +} + +// Test config with invalid token but with SkipCredsValidation +func TestConfigSkipCredsValidation(t *testing.T) { + config := Config{ + Token: "foo", + SkipCredsValidation: true, + } + + if _, err := config.Client(); err != nil { + t.Fatalf("error: expected the client to not fail: %v", err) + } +} diff --git a/builtin/providers/pagerduty/data_source_pagerduty_vendor.go b/builtin/providers/pagerduty/data_source_pagerduty_vendor.go index 2c3360c6a4..b3e3659ac6 100644 --- a/builtin/providers/pagerduty/data_source_pagerduty_vendor.go +++ b/builtin/providers/pagerduty/data_source_pagerduty_vendor.go @@ -15,16 +15,13 @@ func dataSourcePagerDutyVendor() *schema.Resource { Schema: map[string]*schema.Schema{ "name_regex": { - Type: schema.TypeString, - Optional: true, - Deprecated: "Use field name instead", - ConflictsWith: []string{"name"}, + Type: schema.TypeString, + Optional: true, + Removed: "Use `name` instead. 
This attribute will be removed in a future version", }, "name": { - Type: schema.TypeString, - Computed: true, - Optional: true, - ConflictsWith: []string{"name_regex"}, + Type: schema.TypeString, + Required: true, }, "type": { Type: schema.TypeString, @@ -37,19 +34,6 @@ func dataSourcePagerDutyVendor() *schema.Resource { func dataSourcePagerDutyVendorRead(d *schema.ResourceData, meta interface{}) error { client := meta.(*pagerduty.Client) - // Check if we're doing a normal or legacy lookup - _, ok := d.GetOk("name") - _, legacyOk := d.GetOk("name_regex") - - if !ok && !legacyOk { - return fmt.Errorf("Either name or name_regex must be set") - } - - // If name_regex is set, we're doing a legacy lookup - if legacyOk { - return dataSourcePagerDutyVendorLegacyRead(d, meta) - } - log.Printf("[INFO] Reading PagerDuty vendor") searchName := d.Get("name").(string) @@ -84,50 +68,3 @@ func dataSourcePagerDutyVendorRead(d *schema.ResourceData, meta interface{}) err return nil } - -func dataSourcePagerDutyVendorLegacyRead(d *schema.ResourceData, meta interface{}) error { - client := meta.(*pagerduty.Client) - - log.Printf("[INFO] Reading PagerDuty vendor (legacy)") - - resp, err := getVendors(client) - - if err != nil { - return err - } - - r := regexp.MustCompile("(?i)" + d.Get("name_regex").(string)) - - var vendors []pagerduty.Vendor - var vendorNames []string - - for _, v := range resp { - if r.MatchString(v.Name) { - vendors = append(vendors, v) - vendorNames = append(vendorNames, v.Name) - } - } - - if len(vendors) == 0 { - return fmt.Errorf("Unable to locate any vendor using the regex string: %s", r.String()) - } else if len(vendors) > 1 { - return fmt.Errorf("Your query returned more than one result using the regex string: %#v. Found vendors: %#v", r.String(), vendorNames) - } - - vendor := vendors[0] - - genericServiceType := vendor.GenericServiceType - - switch { - case genericServiceType == "email": - genericServiceType = "generic_email_inbound_integration" - case genericServiceType == "api": - genericServiceType = "generic_events_api_inbound_integration" - } - - d.SetId(vendor.ID) - d.Set("name", vendor.Name) - d.Set("type", genericServiceType) - - return nil -} diff --git a/builtin/providers/pagerduty/data_source_pagerduty_vendor_test.go b/builtin/providers/pagerduty/data_source_pagerduty_vendor_test.go index c70d54ef16..59b639aa20 100644 --- a/builtin/providers/pagerduty/data_source_pagerduty_vendor_test.go +++ b/builtin/providers/pagerduty/data_source_pagerduty_vendor_test.go @@ -14,7 +14,7 @@ func TestAccDataSourcePagerDutyVendor_Basic(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckPagerDutyScheduleDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccDataSourcePagerDutyVendorConfig, Check: resource.ComposeTestCheckFunc( testAccDataSourcePagerDutyVendor("data.pagerduty_vendor.foo"), @@ -24,22 +24,6 @@ func TestAccDataSourcePagerDutyVendor_Basic(t *testing.T) { }) } -func TestAccDataSourcePagerDutyVendorLegacy_Basic(t *testing.T) { - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccCheckPagerDutyScheduleDestroy, - Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccDataSourcePagerDutyVendorLegacyConfig, - Check: resource.ComposeTestCheckFunc( - testAccDataSourcePagerDutyVendorLegacy("data.pagerduty_vendor.foo"), - ), - }, - }, - }) -} - func testAccDataSourcePagerDutyVendor(n string) resource.TestCheckFunc { return func(s *terraform.State) 
error { @@ -66,40 +50,8 @@ func testAccDataSourcePagerDutyVendor(n string) resource.TestCheckFunc { } } -func testAccDataSourcePagerDutyVendorLegacy(n string) resource.TestCheckFunc { - return func(s *terraform.State) error { - - r := s.RootModule().Resources[n] - a := r.Primary.Attributes - - if a["id"] == "" { - return fmt.Errorf("Expected to get a vendor ID from PagerDuty") - } - - if a["id"] != "PAM4FGS" { - return fmt.Errorf("Expected the Datadog Vendor ID to be: PAM4FGS, but got: %s", a["id"]) - } - - if a["name"] != "Datadog" { - return fmt.Errorf("Expected the Datadog Vendor Name to be: Datadog, but got: %s", a["name"]) - } - - if a["type"] != "generic_events_api_inbound_integration" { - return fmt.Errorf("Expected the Datadog Vendor Type to be: generic_events_api_inbound_integration, but got: %s", a["type"]) - } - - return nil - } -} - const testAccDataSourcePagerDutyVendorConfig = ` data "pagerduty_vendor" "foo" { name = "cloudwatch" } ` - -const testAccDataSourcePagerDutyVendorLegacyConfig = ` -data "pagerduty_vendor" "foo" { - name_regex = "Datadog" -} -` diff --git a/builtin/providers/pagerduty/errors.go b/builtin/providers/pagerduty/errors.go index 2e8efee5b5..fc9d579d66 100644 --- a/builtin/providers/pagerduty/errors.go +++ b/builtin/providers/pagerduty/errors.go @@ -9,3 +9,7 @@ func isNotFound(err error) bool { return false } + +func isUnauthorized(err error) bool { + return strings.Contains(err.Error(), "HTTP response code: 401") +} diff --git a/builtin/providers/pagerduty/provider.go b/builtin/providers/pagerduty/provider.go index 55dc018198..96a2f338eb 100644 --- a/builtin/providers/pagerduty/provider.go +++ b/builtin/providers/pagerduty/provider.go @@ -16,6 +16,12 @@ func Provider() terraform.ResourceProvider { Required: true, DefaultFunc: schema.EnvDefaultFunc("PAGERDUTY_TOKEN", nil), }, + + "skip_credentials_validation": { + Type: schema.TypeBool, + Optional: true, + Default: false, + }, }, DataSourcesMap: map[string]*schema.Resource{ @@ -40,7 +46,11 @@ func Provider() terraform.ResourceProvider { } func providerConfigure(data *schema.ResourceData) (interface{}, error) { - config := Config{Token: data.Get("token").(string)} + config := Config{ + Token: data.Get("token").(string), + SkipCredsValidation: data.Get("skip_credentials_validation").(bool), + } + log.Println("[INFO] Initializing PagerDuty client") return config.Client() } diff --git a/builtin/providers/pagerduty/util.go b/builtin/providers/pagerduty/util.go index 20b1e70db0..68181f9a79 100644 --- a/builtin/providers/pagerduty/util.go +++ b/builtin/providers/pagerduty/util.go @@ -3,7 +3,6 @@ package pagerduty import ( "fmt" - pagerduty "github.com/PagerDuty/go-pagerduty" "github.com/hashicorp/terraform/helper/schema" ) @@ -25,44 +24,3 @@ func validateValueFunc(values []string) schema.SchemaValidateFunc { return } } - -// getVendors retrieves all PagerDuty vendors and returns a list of []pagerduty.Vendor -func getVendors(client *pagerduty.Client) ([]pagerduty.Vendor, error) { - var offset uint - var totalCount int - var vendors []pagerduty.Vendor - - for { - o := &pagerduty.ListVendorOptions{ - APIListObject: pagerduty.APIListObject{ - Limit: 100, - Total: 1, - Offset: offset, - }, - } - - resp, err := client.ListVendors(*o) - - if err != nil { - return nil, err - } - - for _, v := range resp.Vendors { - totalCount++ - vendors = append(vendors, v) - } - - rOffset := uint(resp.Offset) - returnedCount := uint(len(resp.Vendors)) - rTotal := uint(resp.Total) - - if resp.More && uint(totalCount) != uint(rTotal) { - 
offset = returnedCount + rOffset - continue - } - - break - } - - return vendors, nil -} diff --git a/builtin/providers/profitbricks/resource_profitbricks_server.go b/builtin/providers/profitbricks/resource_profitbricks_server.go index 021072a2a8..ff29aef035 100644 --- a/builtin/providers/profitbricks/resource_profitbricks_server.go +++ b/builtin/providers/profitbricks/resource_profitbricks_server.go @@ -449,37 +449,39 @@ func resourceProfitBricksServerRead(d *schema.ResourceData, meta interface{}) er serverId := d.Id() server := profitbricks.GetServer(dcId, serverId) - primarynic := d.Get("primary_nic").(string) d.Set("name", server.Properties.Name) d.Set("cores", server.Properties.Cores) d.Set("ram", server.Properties.Ram) d.Set("availability_zone", server.Properties.AvailabilityZone) - d.Set("primary_nic", primarynic) - nic := profitbricks.GetNic(dcId, serverId, primarynic) + if primarynic, ok := d.GetOk("primary_nic"); ok { + d.Set("primary_nic", primarynic.(string)) - if len(nic.Properties.Ips) > 0 { - d.Set("primary_ip", nic.Properties.Ips[0]) - } + nic := profitbricks.GetNic(dcId, serverId, primarynic.(string)) - if nRaw, ok := d.GetOk("nic"); ok { - log.Printf("[DEBUG] parsing nic") - - nicRaw := nRaw.(*schema.Set).List() - - for _, raw := range nicRaw { - - rawMap := raw.(map[string]interface{}) - - rawMap["lan"] = nic.Properties.Lan - rawMap["name"] = nic.Properties.Name - rawMap["dhcp"] = nic.Properties.Dhcp - rawMap["nat"] = nic.Properties.Nat - rawMap["firewall_active"] = nic.Properties.FirewallActive - rawMap["ips"] = nic.Properties.Ips + if len(nic.Properties.Ips) > 0 { + d.Set("primary_ip", nic.Properties.Ips[0]) + } + + if nRaw, ok := d.GetOk("nic"); ok { + log.Printf("[DEBUG] parsing nic") + + nicRaw := nRaw.(*schema.Set).List() + + for _, raw := range nicRaw { + + rawMap := raw.(map[string]interface{}) + + rawMap["lan"] = nic.Properties.Lan + rawMap["name"] = nic.Properties.Name + rawMap["dhcp"] = nic.Properties.Dhcp + rawMap["nat"] = nic.Properties.Nat + rawMap["firewall_active"] = nic.Properties.FirewallActive + rawMap["ips"] = nic.Properties.Ips + } + d.Set("nic", nicRaw) } - d.Set("nic", nicRaw) } if server.Properties.BootVolume != nil { diff --git a/builtin/providers/rancher/provider.go b/builtin/providers/rancher/provider.go index 13a6b7ce47..9c176943f5 100644 --- a/builtin/providers/rancher/provider.go +++ b/builtin/providers/rancher/provider.go @@ -49,7 +49,9 @@ func Provider() terraform.ResourceProvider { }, ResourcesMap: map[string]*schema.Resource{ + "rancher_certificate": resourceRancherCertificate(), "rancher_environment": resourceRancherEnvironment(), + "rancher_host": resourceRancherHost(), "rancher_registration_token": resourceRancherRegistrationToken(), "rancher_registry": resourceRancherRegistry(), "rancher_registry_credential": resourceRancherRegistryCredential(), diff --git a/builtin/providers/rancher/resource_rancher_certificate.go b/builtin/providers/rancher/resource_rancher_certificate.go new file mode 100644 index 0000000000..09692465c4 --- /dev/null +++ b/builtin/providers/rancher/resource_rancher_certificate.go @@ -0,0 +1,277 @@ +package rancher + +import ( + "fmt" + "log" + "time" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" + rancher "github.com/rancher/go-rancher/client" +) + +func resourceRancherCertificate() *schema.Resource { + return &schema.Resource{ + Create: resourceRancherCertificateCreate, + Read: resourceRancherCertificateRead, + Update: resourceRancherCertificateUpdate, + 
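// Full CRUD is wired up for the new certificate resource; import accepts an <environment_id>/<certificate_id> style ID (see resourceRancherCertificateImport below). +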
Delete: resourceRancherCertificateDelete,
+		Importer: &schema.ResourceImporter{
+			State: resourceRancherCertificateImport,
+		},
+
+		Schema: map[string]*schema.Schema{
+			"id": &schema.Schema{
+				Type:     schema.TypeString,
+				Computed: true,
+			},
+			"name": &schema.Schema{
+				Type:     schema.TypeString,
+				Required: true,
+			},
+			"description": &schema.Schema{
+				Type:     schema.TypeString,
+				Optional: true,
+			},
+			"environment_id": &schema.Schema{
+				Type:     schema.TypeString,
+				Required: true,
+			},
+			"cert": &schema.Schema{
+				Type:     schema.TypeString,
+				Required: true,
+			},
+			"cert_chain": &schema.Schema{
+				Type:     schema.TypeString,
+				Optional: true,
+			},
+			"key": &schema.Schema{
+				Type:     schema.TypeString,
+				Required: true,
+			},
+			"cn": &schema.Schema{
+				Type:     schema.TypeString,
+				Computed: true,
+			},
+			"algorithm": &schema.Schema{
+				Type:     schema.TypeString,
+				Computed: true,
+			},
+			"cert_fingerprint": &schema.Schema{
+				Type:     schema.TypeString,
+				Computed: true,
+			},
+			"expires_at": &schema.Schema{
+				Type:     schema.TypeString,
+				Computed: true,
+			},
+			"issued_at": &schema.Schema{
+				Type:     schema.TypeString,
+				Computed: true,
+			},
+			"issuer": &schema.Schema{
+				Type:     schema.TypeString,
+				Computed: true,
+			},
+			"key_size": &schema.Schema{
+				Type:     schema.TypeString,
+				Computed: true,
+			},
+			"serial_number": &schema.Schema{
+				Type:     schema.TypeString,
+				Computed: true,
+			},
+			"subject_alternative_names": &schema.Schema{
+				Type:     schema.TypeSet,
+				Elem:     &schema.Schema{Type: schema.TypeString},
+				Computed: true,
+			},
+			"version": &schema.Schema{
+				Type:     schema.TypeString,
+				Computed: true,
+			},
+		},
+	}
+}
+
+func resourceRancherCertificateCreate(d *schema.ResourceData, meta interface{}) error {
+	log.Printf("[INFO][rancher] Creating Certificate: %s", d.Id())
+	client, err := meta.(*Config).EnvironmentClient(d.Get("environment_id").(string))
+	if err != nil {
+		return err
+	}
+
+	name := d.Get("name").(string)
+	description := d.Get("description").(string)
+	cert := d.Get("cert").(string)
+	certChain := d.Get("cert_chain").(string)
+	key := d.Get("key").(string)
+
+	certificate := rancher.Certificate{
+		Name:        name,
+		Description: description,
+		Cert:        cert,
+		CertChain:   certChain,
+		Key:         key,
+	}
+	newCertificate, err := client.Certificate.Create(&certificate)
+	if err != nil {
+		return err
+	}
+
+	stateConf := &resource.StateChangeConf{
+		Pending:    []string{"active", "removed", "removing"},
+		Target:     []string{"active"},
+		Refresh:    CertificateStateRefreshFunc(client, newCertificate.Id),
+		Timeout:    10 * time.Minute,
+		Delay:      1 * time.Second,
+		MinTimeout: 3 * time.Second,
+	}
+	_, waitErr := stateConf.WaitForState()
+	if waitErr != nil {
+		return fmt.Errorf(
+			"Error waiting for certificate (%s) to be created: %s", newCertificate.Id, waitErr)
+	}
+
+	d.SetId(newCertificate.Id)
+	log.Printf("[INFO] Certificate ID: %s", d.Id())
+
+	return resourceRancherCertificateUpdate(d, meta)
+}
+
+func resourceRancherCertificateRead(d *schema.ResourceData, meta interface{}) error {
+	log.Printf("[INFO] Refreshing Certificate: %s", d.Id())
+	client, err := meta.(*Config).EnvironmentClient(d.Get("environment_id").(string))
+	if err != nil {
+		return err
+	}
+
+	certificate, err := client.Certificate.ById(d.Id())
+	if err != nil {
+		return err
+	}
+
+	log.Printf("[INFO] Certificate Name: %s", certificate.Name)
+
+	d.Set("description", certificate.Description)
+	d.Set("name", certificate.Name)
+
+	// Computed values
+	d.Set("cn", certificate.CN)
+	d.Set("algorithm", certificate.Algorithm)
+	d.Set("cert_fingerprint",
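
For reference, a configuration exercising the new resource, written the way this package's acceptance tests write theirs; the environment ID and file names are placeholders. Note the design choice above: Create waits for the certificate to reach "active" and then delegates to Update, so both paths end in the same read.

const testAccRancherCertificateConfig = `
resource "rancher_certificate" "foo" {
  name           = "foo"
  description    = "acceptance-test certificate"
  environment_id = "1a5"
  cert           = "${file("server.crt")}"
  cert_chain     = "${file("ca.crt")}"
  key            = "${file("server.key")}"
}
`
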
certificate.CertFingerprint) + d.Set("expires_at", certificate.ExpiresAt) + d.Set("issued_at", certificate.IssuedAt) + d.Set("issuer", certificate.Issuer) + d.Set("key_size", certificate.KeySize) + d.Set("serial_number", certificate.SerialNumber) + d.Set("subject_alternative_names", certificate.SubjectAlternativeNames) + d.Set("version", certificate.Version) + + return nil +} + +func resourceRancherCertificateUpdate(d *schema.ResourceData, meta interface{}) error { + log.Printf("[INFO] Updating Certificate: %s", d.Id()) + client, err := meta.(*Config).EnvironmentClient(d.Get("environment_id").(string)) + if err != nil { + return err + } + + certificate, err := client.Certificate.ById(d.Id()) + if err != nil { + return err + } + + name := d.Get("name").(string) + description := d.Get("description").(string) + cert := d.Get("cert").(string) + certChain := d.Get("cert_chain").(string) + key := d.Get("key").(string) + + data := map[string]interface{}{ + "name": &name, + "description": &description, + "cert": &cert, + "cert_chain": &certChain, + "key": &key, + } + + var newCertificate rancher.Certificate + if err := client.Update("certificate", &certificate.Resource, data, &newCertificate); err != nil { + return err + } + + return resourceRancherCertificateRead(d, meta) +} + +func resourceRancherCertificateDelete(d *schema.ResourceData, meta interface{}) error { + log.Printf("[INFO] Deleting Certificate: %s", d.Id()) + id := d.Id() + client, err := meta.(*Config).EnvironmentClient(d.Get("environment_id").(string)) + if err != nil { + return err + } + + certificate, err := client.Certificate.ById(id) + if err != nil { + return err + } + + if err := client.Certificate.Delete(certificate); err != nil { + return fmt.Errorf("Error deleting Certificate: %s", err) + } + + log.Printf("[DEBUG] Waiting for certificate (%s) to be removed", id) + + stateConf := &resource.StateChangeConf{ + Pending: []string{"active", "removed", "removing"}, + Target: []string{"removed"}, + Refresh: CertificateStateRefreshFunc(client, id), + Timeout: 10 * time.Minute, + Delay: 1 * time.Second, + MinTimeout: 3 * time.Second, + } + + _, waitErr := stateConf.WaitForState() + if waitErr != nil { + return fmt.Errorf( + "Error waiting for certificate (%s) to be removed: %s", id, waitErr) + } + + d.SetId("") + return nil +} + +func resourceRancherCertificateImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { + envID, resourceID := splitID(d.Id()) + d.SetId(resourceID) + if envID != "" { + d.Set("environment_id", envID) + } else { + client, err := meta.(*Config).GlobalClient() + if err != nil { + return []*schema.ResourceData{}, err + } + stack, err := client.Environment.ById(d.Id()) + if err != nil { + return []*schema.ResourceData{}, err + } + d.Set("environment_id", stack.AccountId) + } + return []*schema.ResourceData{d}, nil +} + +// CertificateStateRefreshFunc returns a resource.StateRefreshFunc that is used to watch +// a Rancher Certificate. 
+func CertificateStateRefreshFunc(client *rancher.RancherClient, certificateID string) resource.StateRefreshFunc { + return func() (interface{}, string, error) { + cert, err := client.Certificate.ById(certificateID) + + if err != nil { + return nil, "", err + } + + return cert, cert.State, nil + } +} diff --git a/builtin/providers/rancher/resource_rancher_host.go b/builtin/providers/rancher/resource_rancher_host.go new file mode 100644 index 0000000000..326423d40e --- /dev/null +++ b/builtin/providers/rancher/resource_rancher_host.go @@ -0,0 +1,200 @@ +package rancher + +import ( + "fmt" + "log" + "time" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" + rancher "github.com/rancher/go-rancher/client" +) + +// ro_labels are used internally by Rancher +// They are not documented and should not be set in Terraform +var ro_labels = []string{ + "io.rancher.host.agent_image", + "io.rancher.host.docker_version", + "io.rancher.host.kvm", + "io.rancher.host.linux_kernel_version", +} + +func resourceRancherHost() *schema.Resource { + return &schema.Resource{ + Create: resourceRancherHostCreate, + Read: resourceRancherHostRead, + Update: resourceRancherHostUpdate, + Delete: resourceRancherHostDelete, + Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, + + Schema: map[string]*schema.Schema{ + "id": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + "description": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + "environment_id": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + "hostname": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + "labels": { + Type: schema.TypeMap, + Optional: true, + }, + }, + } +} + +func resourceRancherHostCreate(d *schema.ResourceData, meta interface{}) error { + log.Printf("[INFO][rancher] Creating Host: %s", d.Id()) + client, err := meta.(*Config).EnvironmentClient(d.Get("environment_id").(string)) + if err != nil { + return err + } + + hosts, _ := client.Host.List(NewListOpts()) + hostname := d.Get("hostname").(string) + var host rancher.Host + + for _, h := range hosts.Data { + if h.Hostname == hostname { + host = h + break + } + } + + if host.Hostname == "" { + return fmt.Errorf("Failed to find host %s", hostname) + } + + d.SetId(host.Id) + + return resourceRancherHostUpdate(d, meta) +} + +func resourceRancherHostRead(d *schema.ResourceData, meta interface{}) error { + log.Printf("[INFO] Refreshing Host: %s", d.Id()) + client, err := meta.(*Config).EnvironmentClient(d.Get("environment_id").(string)) + if err != nil { + return err + } + + host, err := client.Host.ById(d.Id()) + if err != nil { + return err + } + + log.Printf("[INFO] Host Name: %s", host.Name) + + d.Set("description", host.Description) + d.Set("name", host.Name) + d.Set("hostname", host.Hostname) + + labels := host.Labels + // Remove read-only labels + for _, lbl := range ro_labels { + delete(labels, lbl) + } + d.Set("labels", host.Labels) + + return nil +} + +func resourceRancherHostUpdate(d *schema.ResourceData, meta interface{}) error { + log.Printf("[INFO] Updating Host: %s", d.Id()) + client, err := meta.(*Config).EnvironmentClient(d.Get("environment_id").(string)) + if err != nil { + return err + } + + name := d.Get("name").(string) + description := d.Get("description").(string) + + // Process labels: merge ro_labels into new labels + labels := 
d.Get("labels").(map[string]interface{}) + host, err := client.Host.ById(d.Id()) + if err != nil { + return err + } + for _, lbl := range ro_labels { + labels[lbl] = host.Labels[lbl] + } + + data := map[string]interface{}{ + "name": &name, + "description": &description, + "labels": &labels, + } + + var newHost rancher.Host + if err := client.Update("host", &host.Resource, data, &newHost); err != nil { + return err + } + + return resourceRancherHostRead(d, meta) +} + +func resourceRancherHostDelete(d *schema.ResourceData, meta interface{}) error { + log.Printf("[INFO] Deleting Host: %s", d.Id()) + id := d.Id() + client, err := meta.(*Config).EnvironmentClient(d.Get("environment_id").(string)) + if err != nil { + return err + } + + host, err := client.Host.ById(id) + if err != nil { + return err + } + + if err := client.Host.Delete(host); err != nil { + return fmt.Errorf("Error deleting Host: %s", err) + } + + log.Printf("[DEBUG] Waiting for host (%s) to be removed", id) + + stateConf := &resource.StateChangeConf{ + Pending: []string{"active", "removed", "removing"}, + Target: []string{"removed"}, + Refresh: HostStateRefreshFunc(client, id), + Timeout: 10 * time.Minute, + Delay: 1 * time.Second, + MinTimeout: 3 * time.Second, + } + + _, waitErr := stateConf.WaitForState() + if waitErr != nil { + return fmt.Errorf( + "Error waiting for host (%s) to be removed: %s", id, waitErr) + } + + d.SetId("") + return nil +} + +// HostStateRefreshFunc returns a resource.StateRefreshFunc that is used to watch +// a Rancher Host. +func HostStateRefreshFunc(client *rancher.RancherClient, hostID string) resource.StateRefreshFunc { + return func() (interface{}, string, error) { + host, err := client.Host.ById(hostID) + + if err != nil { + return nil, "", err + } + + return host, host.State, nil + } +} diff --git a/builtin/providers/rancher/resource_rancher_stack.go b/builtin/providers/rancher/resource_rancher_stack.go index 4e53df457e..e8a1b10528 100644 --- a/builtin/providers/rancher/resource_rancher_stack.go +++ b/builtin/providers/rancher/resource_rancher_stack.go @@ -3,9 +3,11 @@ package rancher import ( "fmt" "log" + "reflect" "strings" "time" + compose "github.com/docker/libcompose/config" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" "github.com/hashicorp/terraform/helper/validation" @@ -41,12 +43,14 @@ func resourceRancherStack() *schema.Resource { ForceNew: true, }, "docker_compose": { - Type: schema.TypeString, - Optional: true, + Type: schema.TypeString, + Optional: true, + DiffSuppressFunc: suppressComposeDiff, }, "rancher_compose": { - Type: schema.TypeString, - Optional: true, + Type: schema.TypeString, + Optional: true, + DiffSuppressFunc: suppressComposeDiff, }, "environment": { Type: schema.TypeMap, @@ -72,12 +76,14 @@ func resourceRancherStack() *schema.Resource { Optional: true, }, "rendered_docker_compose": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + DiffSuppressFunc: suppressComposeDiff, }, "rendered_rancher_compose": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + DiffSuppressFunc: suppressComposeDiff, }, }, } @@ -183,6 +189,7 @@ func resourceRancherStackRead(d *schema.ResourceData, meta interface{}) error { } d.Set("start_on_create", stack.StartOnCreate) + d.Set("finish_upgrade", d.Get("finish_upgrade").(bool)) return nil } @@ -431,3 +438,19 @@ func makeStackData(d *schema.ResourceData, meta interface{}) (data map[string]in return data, nil } 
+ +func suppressComposeDiff(k, old, new string, d *schema.ResourceData) bool { + cOld, err := compose.CreateConfig([]byte(old)) + if err != nil { + // TODO: log? + return false + } + + cNew, err := compose.CreateConfig([]byte(new)) + if err != nil { + // TODO: log? + return false + } + + return reflect.DeepEqual(cOld, cNew) +} diff --git a/builtin/providers/rancher/util.go b/builtin/providers/rancher/util.go index 60317bf578..efa4a4c3f7 100644 --- a/builtin/providers/rancher/util.go +++ b/builtin/providers/rancher/util.go @@ -37,3 +37,8 @@ func splitID(id string) (envID, resourceID string) { } return "", id } + +// NewListOpts wraps around client.NewListOpts() +func NewListOpts() *client.ListOpts { + return client.NewListOpts() +} diff --git a/builtin/providers/scaleway/resource_scaleway_ip.go b/builtin/providers/scaleway/resource_scaleway_ip.go index 96572e62b2..27cb6fb47b 100644 --- a/builtin/providers/scaleway/resource_scaleway_ip.go +++ b/builtin/providers/scaleway/resource_scaleway_ip.go @@ -2,6 +2,7 @@ package scaleway import ( "log" + "sync" "github.com/hashicorp/terraform/helper/schema" "github.com/scaleway/scaleway-cli/pkg/api" @@ -30,8 +31,12 @@ func resourceScalewayIP() *schema.Resource { } } +var mu = sync.Mutex{} + func resourceScalewayIPCreate(d *schema.ResourceData, m interface{}) error { scaleway := m.(*Client).scaleway + mu.Lock() + defer mu.Unlock() resp, err := scaleway.NewIP() if err != nil { return err diff --git a/builtin/providers/scaleway/resource_scaleway_ip_test.go b/builtin/providers/scaleway/resource_scaleway_ip_test.go index f32cae1f55..f3381cedfd 100644 --- a/builtin/providers/scaleway/resource_scaleway_ip_test.go +++ b/builtin/providers/scaleway/resource_scaleway_ip_test.go @@ -8,6 +8,23 @@ import ( "github.com/hashicorp/terraform/terraform" ) +func TestAccScalewayIP_Count(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckScalewayIPDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccCheckScalewayIPConfig_Count, + Check: resource.ComposeTestCheckFunc( + testAccCheckScalewayIPExists("scaleway_ip.base.0"), + testAccCheckScalewayIPExists("scaleway_ip.base.1"), + ), + }, + }, + }) +} + func TestAccScalewayIP_Basic(t *testing.T) { resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -129,6 +146,12 @@ resource "scaleway_ip" "base" { } ` +var testAccCheckScalewayIPConfig_Count = ` +resource "scaleway_ip" "base" { + count = 2 +} +` + var testAccCheckScalewayIPAttachConfig = fmt.Sprintf(` resource "scaleway_server" "base" { name = "test" diff --git a/builtin/providers/scaleway/resource_scaleway_server.go b/builtin/providers/scaleway/resource_scaleway_server.go index 1d479144d1..57183c152d 100644 --- a/builtin/providers/scaleway/resource_scaleway_server.go +++ b/builtin/providers/scaleway/resource_scaleway_server.go @@ -88,6 +88,10 @@ func resourceScalewayServer() *schema.Resource { Type: schema.TypeString, Computed: true, }, + "public_ipv6": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, "state": &schema.Schema{ Type: schema.TypeString, Optional: true, @@ -194,6 +198,10 @@ func resourceScalewayServerRead(d *schema.ResourceData, m interface{}) error { d.Set("private_ip", server.PrivateIP) d.Set("public_ip", server.PublicAddress.IP) + if server.EnableIPV6 { + d.Set("public_ipv6", server.IPV6.Address) + } + d.Set("state", server.State) d.Set("state_detail", server.StateDetail) d.Set("tags", 
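
`suppressComposeDiff` treats two compose documents as equal when libcompose parses them to the same structure, so purely syntactic edits (key order, flow versus block YAML style) no longer show up as spurious diffs. A hedged test sketch, assuming `compose.CreateConfig` normalizes both forms to `reflect.DeepEqual`-identical structs, which is the premise of the function:

func TestSuppressComposeDiff_semanticEquality(t *testing.T) {
	old := `version: '2'
services:
  web:
    image: nginx
    ports:
      - "80:80"
`
	// Same structure, different YAML style and key order.
	new := `version: '2'
services:
  web:
    ports: ["80:80"]
    image: nginx
`
	if !suppressComposeDiff("docker_compose", old, new, nil) {
		t.Fatal("expected semantically identical compose configs to be suppressed")
	}

	changed := `version: '2'
services:
  web:
    image: httpd
`
	if suppressComposeDiff("docker_compose", old, changed, nil) {
		t.Fatal("expected a real change to produce a diff")
	}
}
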
server.Tags) diff --git a/command/apply_test.go b/command/apply_test.go index 7f21aa1c2f..01c230326e 100644 --- a/command/apply_test.go +++ b/command/apply_test.go @@ -603,7 +603,6 @@ func TestApply_plan_backup(t *testing.T) { if err != nil { t.Fatal(err) } - args := []string{ "-state-out", statePath, "-backup", backupPath, @@ -1531,6 +1530,91 @@ func TestApply_disableBackup(t *testing.T) { } } +// Test that the Terraform env is passed through +func TestApply_terraformEnv(t *testing.T) { + statePath := testTempFile(t) + + p := testProvider() + ui := new(cli.MockUi) + c := &ApplyCommand{ + Meta: Meta{ + ContextOpts: testCtxConfig(p), + Ui: ui, + }, + } + + args := []string{ + "-state", statePath, + testFixturePath("apply-terraform-env"), + } + if code := c.Run(args); code != 0 { + t.Fatalf("bad: %d\n\n%s", code, ui.ErrorWriter.String()) + } + + expected := strings.TrimSpace(` + +Outputs: + +output = default + `) + testStateOutput(t, statePath, expected) +} + +// Test that the Terraform env is passed through +func TestApply_terraformEnvNonDefault(t *testing.T) { + // Create a temporary working directory that is empty + td := tempDir(t) + os.MkdirAll(td, 0755) + defer os.RemoveAll(td) + defer testChdir(t, td)() + + // Create new env + { + ui := new(cli.MockUi) + newCmd := &EnvNewCommand{} + newCmd.Meta = Meta{Ui: ui} + if code := newCmd.Run([]string{"test"}); code != 0 { + t.Fatalf("bad: %d\n\n%s", code, ui.ErrorWriter) + } + } + + // Switch to it + { + args := []string{"test"} + ui := new(cli.MockUi) + selCmd := &EnvSelectCommand{} + selCmd.Meta = Meta{Ui: ui} + if code := selCmd.Run(args); code != 0 { + t.Fatalf("bad: %d\n\n%s", code, ui.ErrorWriter) + } + } + + p := testProvider() + ui := new(cli.MockUi) + c := &ApplyCommand{ + Meta: Meta{ + ContextOpts: testCtxConfig(p), + Ui: ui, + }, + } + + args := []string{ + testFixturePath("apply-terraform-env"), + } + if code := c.Run(args); code != 0 { + t.Fatalf("bad: %d\n\n%s", code, ui.ErrorWriter.String()) + } + + statePath := filepath.Join("terraform.tfstate.d", "test", "terraform.tfstate") + expected := strings.TrimSpace(` + +Outputs: + +output = test + `) + testStateOutput(t, statePath, expected) +} + func testHttpServer(t *testing.T) net.Listener { ln, err := net.Listen("tcp", "127.0.0.1:0") if err != nil { diff --git a/command/env_command_test.go b/command/env_command_test.go index 356c8d66aa..0b28beb014 100644 --- a/command/env_command_test.go +++ b/command/env_command_test.go @@ -64,6 +64,16 @@ func TestEnv_createAndList(t *testing.T) { defer os.RemoveAll(td) defer testChdir(t, td)() + // make sure a vars file doesn't interfere + err := ioutil.WriteFile( + DefaultVarsFilename, + []byte(`foo = "bar"`), + 0644, + ) + if err != nil { + t.Fatal(err) + } + newCmd := &EnvNewCommand{} envs := []string{"test_a", "test_b", "test_c"} diff --git a/command/env_list.go b/command/env_list.go index 219b32bd05..12d768e802 100644 --- a/command/env_list.go +++ b/command/env_list.go @@ -19,6 +19,7 @@ func (c *EnvListCommand) Run(args []string) int { return 1 } + args = cmdFlags.Args() configPath, err := ModulePath(args) if err != nil { c.Ui.Error(err.Error()) diff --git a/command/hook_ui.go b/command/hook_ui.go index 1ebcc578da..1cc1a9bbe1 100644 --- a/command/hook_ui.go +++ b/command/hook_ui.go @@ -15,14 +15,15 @@ import ( "github.com/mitchellh/colorstring" ) -const periodicUiTimer = 10 * time.Second +const defaultPeriodicUiTimer = 10 * time.Second const maxIdLen = 20 type UiHook struct { terraform.NilHook - Colorize *colorstring.Colorize - Ui cli.Ui + 
Colorize *colorstring.Colorize + Ui cli.Ui + PeriodicUiTimer time.Duration l sync.Mutex once sync.Once @@ -38,6 +39,8 @@ type uiResourceState struct { Start time.Time DoneCh chan struct{} // To be used for cancellation + + done chan struct{} // used to coordinate tests } // uiResourceOp is an enum for operations on a resource @@ -145,6 +148,7 @@ func (h *UiHook) PreApply( Op: op, Start: time.Now().Round(time.Second), DoneCh: make(chan struct{}), + done: make(chan struct{}), } h.l.Lock() @@ -158,12 +162,13 @@ func (h *UiHook) PreApply( } func (h *UiHook) stillApplying(state uiResourceState) { + defer close(state.done) for { select { case <-state.DoneCh: return - case <-time.After(periodicUiTimer): + case <-time.After(h.PeriodicUiTimer): // Timer up, show status } @@ -330,6 +335,9 @@ func (h *UiHook) init() { if h.Colorize == nil { panic("colorize not given") } + if h.PeriodicUiTimer == 0 { + h.PeriodicUiTimer = defaultPeriodicUiTimer + } h.resources = make(map[string]uiResourceState) diff --git a/command/hook_ui_test.go b/command/hook_ui_test.go index 1c6476efe0..cf618969f0 100644 --- a/command/hook_ui_test.go +++ b/command/hook_ui_test.go @@ -1,10 +1,204 @@ package command import ( + "bytes" "fmt" "testing" + "time" + + "github.com/hashicorp/terraform/terraform" + "github.com/mitchellh/cli" + "github.com/mitchellh/colorstring" ) +func TestUiHookPreApply_periodicTimer(t *testing.T) { + ui := &cli.MockUi{ + InputReader: bytes.NewReader([]byte{}), + ErrorWriter: bytes.NewBuffer([]byte{}), + OutputWriter: bytes.NewBuffer([]byte{}), + } + h := &UiHook{ + Colorize: &colorstring.Colorize{ + Colors: colorstring.DefaultColors, + Disable: true, + Reset: true, + }, + Ui: ui, + PeriodicUiTimer: 1 * time.Second, + } + h.init() + h.resources = map[string]uiResourceState{ + "data.aws_availability_zones.available": uiResourceState{ + Op: uiResourceDestroy, + Start: time.Now(), + }, + } + + n := &terraform.InstanceInfo{ + Id: "data.aws_availability_zones.available", + ModulePath: []string{"root"}, + Type: "aws_availability_zones", + } + + s := &terraform.InstanceState{ + ID: "2017-03-05 10:56:59.298784526 +0000 UTC", + Attributes: map[string]string{ + "id": "2017-03-05 10:56:59.298784526 +0000 UTC", + "names.#": "4", + "names.0": "us-east-1a", + "names.1": "us-east-1b", + "names.2": "us-east-1c", + "names.3": "us-east-1d", + }, + } + d := &terraform.InstanceDiff{ + Destroy: true, + } + + action, err := h.PreApply(n, s, d) + if err != nil { + t.Fatal(err) + } + if action != terraform.HookActionContinue { + t.Fatalf("Expected hook to continue, given: %#v", action) + } + + time.Sleep(3100 * time.Millisecond) + + // stop the background writer + uiState := h.resources[n.HumanId()] + close(uiState.DoneCh) + <-uiState.done + + expectedOutput := `data.aws_availability_zones.available: Destroying... (ID: 2017-03-0...0000 UTC) +data.aws_availability_zones.available: Still destroying... (ID: 2017-03-0...0000 UTC, 1s elapsed) +data.aws_availability_zones.available: Still destroying... (ID: 2017-03-0...0000 UTC, 2s elapsed) +data.aws_availability_zones.available: Still destroying... 
(ID: 2017-03-0...0000 UTC, 3s elapsed) +` + output := ui.OutputWriter.String() + if output != expectedOutput { + t.Fatalf("Output didn't match.\nExpected: %q\nGiven: %q", expectedOutput, output) + } + + expectedErrOutput := "" + errOutput := ui.ErrorWriter.String() + if errOutput != expectedErrOutput { + t.Fatalf("Error output didn't match.\nExpected: %q\nGiven: %q", expectedErrOutput, errOutput) + } +} + +func TestUiHookPreApply_destroy(t *testing.T) { + ui := &cli.MockUi{ + InputReader: bytes.NewReader([]byte{}), + ErrorWriter: bytes.NewBuffer([]byte{}), + OutputWriter: bytes.NewBuffer([]byte{}), + } + h := &UiHook{ + Colorize: &colorstring.Colorize{ + Colors: colorstring.DefaultColors, + Disable: true, + Reset: true, + }, + Ui: ui, + } + h.init() + h.resources = map[string]uiResourceState{ + "data.aws_availability_zones.available": uiResourceState{ + Op: uiResourceDestroy, + Start: time.Now(), + }, + } + + n := &terraform.InstanceInfo{ + Id: "data.aws_availability_zones.available", + ModulePath: []string{"root"}, + Type: "aws_availability_zones", + } + + s := &terraform.InstanceState{ + ID: "2017-03-05 10:56:59.298784526 +0000 UTC", + Attributes: map[string]string{ + "id": "2017-03-05 10:56:59.298784526 +0000 UTC", + "names.#": "4", + "names.0": "us-east-1a", + "names.1": "us-east-1b", + "names.2": "us-east-1c", + "names.3": "us-east-1d", + }, + } + d := &terraform.InstanceDiff{ + Destroy: true, + } + + action, err := h.PreApply(n, s, d) + if err != nil { + t.Fatal(err) + } + if action != terraform.HookActionContinue { + t.Fatalf("Expected hook to continue, given: %#v", action) + } + + expectedOutput := "data.aws_availability_zones.available: Destroying... (ID: 2017-03-0...0000 UTC)\n" + output := ui.OutputWriter.String() + if output != expectedOutput { + t.Fatalf("Output didn't match.\nExpected: %q\nGiven: %q", expectedOutput, output) + } + + expectedErrOutput := "" + errOutput := ui.ErrorWriter.String() + if errOutput != expectedErrOutput { + t.Fatalf("Error output didn't match.\nExpected: %q\nGiven: %q", expectedErrOutput, errOutput) + } +} + +func TestUiHookPostApply_emptyState(t *testing.T) { + ui := &cli.MockUi{ + InputReader: bytes.NewReader([]byte{}), + ErrorWriter: bytes.NewBuffer([]byte{}), + OutputWriter: bytes.NewBuffer([]byte{}), + } + h := &UiHook{ + Colorize: &colorstring.Colorize{ + Colors: colorstring.DefaultColors, + Disable: true, + Reset: true, + }, + Ui: ui, + } + h.init() + h.resources = map[string]uiResourceState{ + "data.google_compute_zones.available": uiResourceState{ + Op: uiResourceDestroy, + Start: time.Now(), + }, + } + + n := &terraform.InstanceInfo{ + Id: "data.google_compute_zones.available", + ModulePath: []string{"root"}, + Type: "google_compute_zones", + } + action, err := h.PostApply(n, nil, nil) + if err != nil { + t.Fatal(err) + } + if action != terraform.HookActionContinue { + t.Fatalf("Expected hook to continue, given: %#v", action) + } + + expectedOutput := "data.google_compute_zones.available: Destruction complete\n" + output := ui.OutputWriter.String() + if output != expectedOutput { + t.Fatalf("Output didn't match.\nExpected: %q\nGiven: %q", expectedOutput, output) + } + + expectedErrOutput := "" + errOutput := ui.ErrorWriter.String() + if errOutput != expectedErrOutput { + t.Fatalf("Error output didn't match.\nExpected: %q\nGiven: %q", expectedErrOutput, errOutput) + } +} + func TestTruncateId(t *testing.T) { testCases := []struct { Input string diff --git a/command/init.go b/command/init.go index d20e883bc2..d2a9e835c8 100644 --- 
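
The `DoneCh`/`done` pair threaded through `UiHook` above is a standard two-channel shutdown handshake: the test closes `DoneCh` to stop the ticker goroutine, then blocks on `done` (closed via `defer` in `stillApplying`) so no "Still destroying..." line can race with the assertions. The pattern in isolation, as a standalone sketch:

package main

import "time"

// startTicker runs work every interval until stop is closed, and reports
// full teardown by closing the returned channel.
func startTicker(interval time.Duration, stop <-chan struct{}, work func()) <-chan struct{} {
	done := make(chan struct{})
	go func() {
		defer close(done) // observers can wait for the goroutine to really exit
		for {
			select {
			case <-stop:
				return
			case <-time.After(interval):
				work()
			}
		}
	}()
	return done
}

func main() {
	stop := make(chan struct{})
	done := startTicker(time.Second, stop, func() { /* periodic status line */ })
	time.Sleep(3100 * time.Millisecond) // let a few ticks fire, as the test does
	close(stop)
	<-done // only now is it safe to assert on the accumulated output
}
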
a/command/init.go +++ b/command/init.go @@ -9,6 +9,7 @@ import ( "github.com/hashicorp/go-getter" "github.com/hashicorp/terraform/config" "github.com/hashicorp/terraform/config/module" + "github.com/hashicorp/terraform/helper/variables" ) // InitCommand is a Command implementation that takes a Terraform @@ -19,12 +20,15 @@ type InitCommand struct { func (c *InitCommand) Run(args []string) int { var flagBackend, flagGet bool - var flagConfigFile string + var flagConfigExtra map[string]interface{} + args = c.Meta.process(args, false) cmdFlags := c.flagSet("init") cmdFlags.BoolVar(&flagBackend, "backend", true, "") - cmdFlags.StringVar(&flagConfigFile, "backend-config", "", "") + cmdFlags.Var((*variables.FlagAny)(&flagConfigExtra), "backend-config", "") cmdFlags.BoolVar(&flagGet, "get", true, "") + cmdFlags.BoolVar(&c.forceInitCopy, "force-copy", false, "suppress prompts about copying state data") + cmdFlags.Usage = func() { c.Ui.Error(c.Help()) } if err := cmdFlags.Parse(args); err != nil { return 1 @@ -138,9 +142,9 @@ func (c *InitCommand) Run(args []string) int { } opts := &BackendOpts{ - ConfigPath: path, - ConfigFile: flagConfigFile, - Init: true, + ConfigPath: path, + ConfigExtra: flagConfigExtra, + Init: true, } if _, err := c.Backend(opts); err != nil { c.Ui.Error(err.Error()) @@ -210,8 +214,12 @@ Options: -backend=true Configure the backend for this environment. - -backend-config=path A path to load additional configuration for the backend. - This is merged with what is in the configuration file. + -backend-config=path This can be either a path to an HCL file with key/value + assignments (same format as terraform.tfvars) or a + 'key=value' format. This is merged with what is in the + configuration file. This can be specified multiple + times. The backend type must be in the configuration + itself. -get=true Download any modules for this configuration. @@ -220,6 +228,10 @@ Options: -no-color If specified, output won't contain any color. + -force-copy Suppress prompts about copying state data. This is + equivalent to providing a "yes" to all confirmation + prompts. 
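
Since `-backend-config` now accumulates values, the help text above maps onto a few equivalent invocations; a hypothetical helper (not in this commit) listing argument sets as they would be passed to `InitCommand.Run`, with fixture names mirroring the tests that follow:

func exampleInitArgs() [][]string {
	return [][]string{
		// inline key=value form, repeatable on one command line:
		{"-backend-config", "path=hello"},
		// a file of key/value assignments (terraform.tfvars format):
		{"-backend-config", "input.config"},
		// suppress copy prompts during state migration:
		{"-backend-config", "path=hello", "-force-copy"},
	}
}
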
+ ` return strings.TrimSpace(helpText) } diff --git a/command/init_test.go b/command/init_test.go index 6030b877e9..dee54495d1 100644 --- a/command/init_test.go +++ b/command/init_test.go @@ -270,9 +270,6 @@ func TestInit_backendUnset(t *testing.T) { t.Fatalf("err: %s", err) } - // Run it again - defer testInteractiveInput(t, []string{"yes", "yes"})() - ui := new(cli.MockUi) c := &InitCommand{ Meta: Meta{ @@ -281,7 +278,7 @@ func TestInit_backendUnset(t *testing.T) { }, } - args := []string{} + args := []string{"-force-copy"} if code := c.Run(args); code != 0 { t.Fatalf("bad: \n%s", ui.ErrorWriter.String()) } @@ -321,6 +318,65 @@ func TestInit_backendConfigFile(t *testing.T) { } } +func TestInit_backendConfigFileChange(t *testing.T) { + // Create a temporary working directory that is empty + td := tempDir(t) + copy.CopyDir(testFixturePath("init-backend-config-file-change"), td) + defer os.RemoveAll(td) + defer testChdir(t, td)() + + // Ask input + defer testInputMap(t, map[string]string{ + "backend-migrate-to-new": "no", + })() + + ui := new(cli.MockUi) + c := &InitCommand{ + Meta: Meta{ + ContextOpts: testCtxConfig(testProvider()), + Ui: ui, + }, + } + + args := []string{"-backend-config", "input.config"} + if code := c.Run(args); code != 0 { + t.Fatalf("bad: \n%s", ui.ErrorWriter.String()) + } + + // Read our saved backend config and verify we have our settings + state := testStateRead(t, filepath.Join(DefaultDataDir, DefaultStateFilename)) + if v := state.Backend.Config["path"]; v != "hello" { + t.Fatalf("bad: %#v", v) + } +} + +func TestInit_backendConfigKV(t *testing.T) { + // Create a temporary working directory that is empty + td := tempDir(t) + copy.CopyDir(testFixturePath("init-backend-config-kv"), td) + defer os.RemoveAll(td) + defer testChdir(t, td)() + + ui := new(cli.MockUi) + c := &InitCommand{ + Meta: Meta{ + ContextOpts: testCtxConfig(testProvider()), + Ui: ui, + }, + } + + args := []string{"-backend-config", "path=hello"} + if code := c.Run(args); code != 0 { + t.Fatalf("bad: \n%s", ui.ErrorWriter.String()) + } + + // Read our saved backend config and verify we have our settings + state := testStateRead(t, filepath.Join(DefaultDataDir, DefaultStateFilename)) + if v := state.Backend.Config["path"]; v != "hello" { + t.Fatalf("bad: %#v", v) + } +} + func TestInit_copyBackendDst(t *testing.T) { // Create a temporary working directory that is empty td := tempDir(t) diff --git a/command/internal_plugin_list.go b/command/internal_plugin_list.go index 82e2f1df58..2f48908c7d 100644 --- a/command/internal_plugin_list.go +++ b/command/internal_plugin_list.go @@ -15,6 +15,7 @@ import ( azurermprovider "github.com/hashicorp/terraform/builtin/providers/azurerm" bitbucketprovider "github.com/hashicorp/terraform/builtin/providers/bitbucket" chefprovider "github.com/hashicorp/terraform/builtin/providers/chef" + circonusprovider "github.com/hashicorp/terraform/builtin/providers/circonus" clcprovider "github.com/hashicorp/terraform/builtin/providers/clc" cloudflareprovider "github.com/hashicorp/terraform/builtin/providers/cloudflare" cloudstackprovider "github.com/hashicorp/terraform/builtin/providers/cloudstack" @@ -36,6 +37,7 @@ import ( icinga2provider "github.com/hashicorp/terraform/builtin/providers/icinga2" ignitionprovider "github.com/hashicorp/terraform/builtin/providers/ignition" influxdbprovider "github.com/hashicorp/terraform/builtin/providers/influxdb" + kubernetesprovider "github.com/hashicorp/terraform/builtin/providers/kubernetes" libratoprovider 
"github.com/hashicorp/terraform/builtin/providers/librato" logentriesprovider "github.com/hashicorp/terraform/builtin/providers/logentries" mailgunprovider "github.com/hashicorp/terraform/builtin/providers/mailgun" @@ -89,6 +91,7 @@ var InternalProviders = map[string]plugin.ProviderFunc{ "azurerm": azurermprovider.Provider, "bitbucket": bitbucketprovider.Provider, "chef": chefprovider.Provider, + "circonus": circonusprovider.Provider, "clc": clcprovider.Provider, "cloudflare": cloudflareprovider.Provider, "cloudstack": cloudstackprovider.Provider, @@ -110,6 +113,7 @@ var InternalProviders = map[string]plugin.ProviderFunc{ "icinga2": icinga2provider.Provider, "ignition": ignitionprovider.Provider, "influxdb": influxdbprovider.Provider, + "kubernetes": kubernetesprovider.Provider, "librato": libratoprovider.Provider, "logentries": logentriesprovider.Provider, "mailgun": mailgunprovider.Provider, diff --git a/command/meta.go b/command/meta.go index c18c77fc74..0dd4c78843 100644 --- a/command/meta.go +++ b/command/meta.go @@ -87,14 +87,18 @@ type Meta struct { // // provider is to specify specific resource providers // - // lockState is set to false to disable state locking - statePath string - stateOutPath string - backupPath string - parallelism int - shadow bool - provider string - stateLock bool + // stateLock is set to false to disable state locking + // + // forceInitCopy suppresses confirmation for copying state data during + // init. + statePath string + stateOutPath string + backupPath string + parallelism int + shadow bool + provider string + stateLock bool + forceInitCopy bool } // initStatePaths is used to initialize the default values for @@ -208,11 +212,16 @@ func (m *Meta) contextOpts() *terraform.ContextOpts { vs[k] = v } opts.Variables = vs + opts.Targets = m.targets opts.UIInput = m.UIInput() opts.Parallelism = m.parallelism opts.Shadow = m.shadow + opts.Meta = &terraform.ContextMeta{ + Env: m.Env(), + } + return &opts } diff --git a/command/meta_backend.go b/command/meta_backend.go index ea96e7455f..5019c0242c 100644 --- a/command/meta_backend.go +++ b/command/meta_backend.go @@ -23,7 +23,6 @@ import ( "github.com/hashicorp/terraform/terraform" "github.com/mitchellh/mapstructure" - backendlegacy "github.com/hashicorp/terraform/backend/legacy" backendlocal "github.com/hashicorp/terraform/backend/local" ) @@ -38,6 +37,10 @@ type BackendOpts struct { // from a file. ConfigFile string + // ConfigExtra is extra configuration to merge into the backend + // configuration after the extra file above. + ConfigExtra map[string]interface{} + // Plan is a plan that is being used. If this is set, the backend // configuration and output configuration will come from this plan. Plan *terraform.Plan @@ -138,6 +141,20 @@ func (m *Meta) Backend(opts *BackendOpts) (backend.Enhanced, error) { return local, nil } +// IsLocalBackend returns true if the backend is a local backend. We use this +// for some checks that require a remote backend. +func (m *Meta) IsLocalBackend(b backend.Backend) bool { + // Is it a local backend? + bLocal, ok := b.(*backendlocal.Local) + + // If it is, does it not have an alternate state backend? + if ok { + ok = bLocal.Backend == nil + } + + return ok +} + // Operation initializes a new backend.Operation struct. // // This prepares the operation. 
After calling this, the caller is expected @@ -237,6 +254,20 @@ func (m *Meta) backendConfig(opts *BackendOpts) (*config.Backend, error) { backend.RawConfig = backend.RawConfig.Merge(rc) } + // If we have extra config values, merge that + if len(opts.ConfigExtra) > 0 { + log.Printf( + "[DEBUG] command: adding extra backend config from CLI") + rc, err := config.NewRawConfig(opts.ConfigExtra) + if err != nil { + return nil, fmt.Errorf( + "Error adding extra configuration file for backend: %s", err) + } + + // Merge in the configuration + backend.RawConfig = backend.RawConfig.Merge(rc) + } + // Validate the backend early. We have to do this before the normal // config validation pass since backend loading happens earlier. if errs := backend.Validate(); len(errs) > 0 { @@ -288,6 +319,16 @@ func (m *Meta) backendFromConfig(opts *BackendOpts) (backend.Backend, error) { return nil, fmt.Errorf("Error loading backend config: %s", err) } + // cHash defaults to zero unless c is set + var cHash uint64 + if c != nil { + // We need to rehash to get the value since we may have merged the + // config with an extra ConfigFile. We don't do this when merging + // because we do want the ORIGINAL value on c so that we store + // that to not detect drift. This is covered in tests. + cHash = c.Rehash() + } + // Get the path to where we store a local cache of backend configuration // if we're using a remote backend. This may not yet exist which means // we haven't used a non-local backend before. That is okay. @@ -370,7 +411,7 @@ func (m *Meta) backendFromConfig(opts *BackendOpts) (backend.Backend, error) { case c != nil && s.Remote.Empty() && !s.Backend.Empty(): // If our configuration is the same, then we're just initializing // a previously configured remote backend. - if !s.Backend.Empty() && s.Backend.Hash == c.Hash { + if !s.Backend.Empty() && s.Backend.Hash == cHash { return m.backend_C_r_S_unchanged(c, sMgr) } @@ -384,7 +425,7 @@ func (m *Meta) backendFromConfig(opts *BackendOpts) (backend.Backend, error) { log.Printf( "[WARN] command: backend config change! saved: %d, new: %d", - s.Backend.Hash, c.Hash) + s.Backend.Hash, cHash) return m.backend_C_r_S_changed(c, sMgr, true) // Configuring a backend for the first time while having legacy @@ -406,7 +447,7 @@ func (m *Meta) backendFromConfig(opts *BackendOpts) (backend.Backend, error) { case c != nil && !s.Remote.Empty() && !s.Backend.Empty(): // If the hashes are the same, we have a legacy remote state with // an unchanged stored backend state. - if s.Backend.Hash == c.Hash { + if s.Backend.Hash == cHash { if !opts.Init { initReason := fmt.Sprintf( "Legacy remote state found with configured backend %q", @@ -457,6 +498,7 @@ func (m *Meta) backendFromPlan(opts *BackendOpts) (backend.Backend, error) { "and specify the state path when creating the plan.") } + planBackend := opts.Plan.Backend planState := opts.Plan.State if planState == nil { // The state can be nil, we just have to make it empty for the logic @@ -465,7 +507,7 @@ func (m *Meta) backendFromPlan(opts *BackendOpts) (backend.Backend, error) { } // Validation only for non-local plans - local := planState.Remote.Empty() && planState.Backend.Empty() + local := planState.Remote.Empty() && planBackend.Empty() if !local { // We currently don't allow "-state-out" to be specified. 
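
The `cHash` indirection above is subtle enough to restate: `c.Hash` is computed when the backend block is parsed from the configuration file, but `-backend-config` values are merged in afterwards, so the drift check has to compare against a freshly computed `c.Rehash()` while the file-only value is what gets persisted. In outline, with names as in `backendFromConfig` and hedged as a restatement rather than new behavior:

// hash of the effective (file + CLI-merged) configuration
var cHash uint64
if c != nil {
	cHash = c.Rehash() // reflects -backend-config merges from the CLI
}

// compare against what was saved on the last init
unchanged := c != nil && !s.Backend.Empty() && s.Backend.Hash == cHash
_ = unchanged // c.Hash (the unmerged value) is what later gets stored,
// per the comment above, so the merge itself is not recorded as drift
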
if m.stateOutPath != "" { @@ -476,7 +518,7 @@ func (m *Meta) backendFromPlan(opts *BackendOpts) (backend.Backend, error) { /* // Determine the path where we'd be writing state path := DefaultStateFilename - if !planState.Remote.Empty() || !planState.Backend.Empty() { + if !planState.Remote.Empty() || !planBackend.Empty() { path = filepath.Join(m.DataDir(), DefaultStateFilename) } @@ -505,16 +547,26 @@ func (m *Meta) backendFromPlan(opts *BackendOpts) (backend.Backend, error) { var err error switch { // No remote state at all, all local - case planState.Remote.Empty() && planState.Backend.Empty(): + case planState.Remote.Empty() && planBackend.Empty(): + log.Printf("[INFO] command: initializing local backend from plan (not set)") + // Get the local backend b, err = m.Backend(&BackendOpts{ForceLocal: true}) // New backend configuration set - case planState.Remote.Empty() && !planState.Backend.Empty(): - b, err = m.backendInitFromSaved(planState.Backend) + case planState.Remote.Empty() && !planBackend.Empty(): + log.Printf( + "[INFO] command: initializing backend from plan: %s", + planBackend.Type) + + b, err = m.backendInitFromSaved(planBackend) // Legacy remote state set - case !planState.Remote.Empty() && planState.Backend.Empty(): + case !planState.Remote.Empty() && planBackend.Empty(): + log.Printf( + "[INFO] command: initializing legacy remote backend from plan: %s", + planState.Remote.Type) + // Write our current state to an inmemory state just so that we // have it in the format of state.State inmem := &state.InmemState{} @@ -524,7 +576,7 @@ func (m *Meta) backendFromPlan(opts *BackendOpts) (backend.Backend, error) { b, err = m.backend_c_R_s(nil, inmem) // Both set, this can't happen in a plan. - case !planState.Remote.Empty() && !planState.Backend.Empty(): + case !planState.Remote.Empty() && !planBackend.Empty(): return nil, fmt.Errorf(strings.TrimSpace(errBackendPlanBoth)) } @@ -632,16 +684,20 @@ func (m *Meta) backend_c_r_S( // Get the backend type for output backendType := s.Backend.Type - // Confirm with the user that the copy should occur - copy, err := m.confirm(&terraform.InputOpts{ - Id: "backend-migrate-to-local", - Query: fmt.Sprintf("Do you want to copy the state from %q?", s.Backend.Type), - Description: fmt.Sprintf( - strings.TrimSpace(inputBackendMigrateLocal), s.Backend.Type), - }) - if err != nil { - return nil, fmt.Errorf( - "Error asking for state copy action: %s", err) + copy := m.forceInitCopy + if !copy { + var err error + // Confirm with the user that the copy should occur + copy, err = m.confirm(&terraform.InputOpts{ + Id: "backend-migrate-to-local", + Query: fmt.Sprintf("Do you want to copy the state from %q?", s.Backend.Type), + Description: fmt.Sprintf( + strings.TrimSpace(inputBackendMigrateLocal), s.Backend.Type), + }) + if err != nil { + return nil, fmt.Errorf( + "Error asking for state copy action: %s", err) + } } // If we're copying, perform the migration @@ -753,16 +809,19 @@ func (m *Meta) backend_c_R_S( s := sMgr.State() // Ask the user if they want to migrate their existing remote state - copy, err := m.confirm(&terraform.InputOpts{ - Id: "backend-migrate-to-new", - Query: fmt.Sprintf( - "Do you want to copy the legacy remote state from %q?", - s.Remote.Type), - Description: strings.TrimSpace(inputBackendMigrateLegacyLocal), - }) - if err != nil { - return nil, fmt.Errorf( - "Error asking for state copy action: %s", err) + copy := m.forceInitCopy + if !copy { + copy, err = m.confirm(&terraform.InputOpts{ + Id: "backend-migrate-to-new", + Query: 
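
The same three-line guard now fronts every migration prompt in this file; a hypothetical helper, not part of this commit, shows the shape once:

func (m *Meta) confirmOrForceCopy(opts *terraform.InputOpts) (bool, error) {
	// `terraform init -force-copy` answers "yes" to every copy prompt
	if m.forceInitCopy {
		return true, nil
	}
	return m.confirm(opts)
}

Each call site below would then collapse to `copy, err := m.confirmOrForceCopy(...)`.
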
fmt.Sprintf( + "Do you want to copy the legacy remote state from %q?", + s.Remote.Type), + Description: strings.TrimSpace(inputBackendMigrateLegacyLocal), + }) + if err != nil { + return nil, fmt.Errorf( + "Error asking for state copy action: %s", err) + } } // If the user wants a copy, copy! @@ -846,16 +905,19 @@ func (m *Meta) backend_C_R_s( // Finally, ask the user if they want to copy the state from // their old remote state location. - copy, err := m.confirm(&terraform.InputOpts{ - Id: "backend-migrate-to-new", - Query: fmt.Sprintf( - "Do you want to copy the legacy remote state from %q?", - s.Remote.Type), - Description: strings.TrimSpace(inputBackendMigrateLegacy), - }) - if err != nil { - return nil, fmt.Errorf( - "Error asking for state copy action: %s", err) + copy := m.forceInitCopy + if !copy { + copy, err = m.confirm(&terraform.InputOpts{ + Id: "backend-migrate-to-new", + Query: fmt.Sprintf( + "Do you want to copy the legacy remote state from %q?", + s.Remote.Type), + Description: strings.TrimSpace(inputBackendMigrateLegacy), + }) + if err != nil { + return nil, fmt.Errorf( + "Error asking for state copy action: %s", err) + } } // If the user wants a copy, copy! @@ -1003,14 +1065,17 @@ func (m *Meta) backend_C_r_S_changed( } // Check with the user if we want to migrate state - copy, err := m.confirm(&terraform.InputOpts{ - Id: "backend-migrate-to-new", - Query: fmt.Sprintf("Do you want to copy the state from %q?", c.Type), - Description: strings.TrimSpace(fmt.Sprintf(inputBackendMigrateChange, c.Type, s.Backend.Type)), - }) - if err != nil { - return nil, fmt.Errorf( - "Error asking for state copy action: %s", err) + copy := m.forceInitCopy + if !copy { + copy, err = m.confirm(&terraform.InputOpts{ + Id: "backend-migrate-to-new", + Query: fmt.Sprintf("Do you want to copy the state from %q?", c.Type), + Description: strings.TrimSpace(fmt.Sprintf(inputBackendMigrateChange, c.Type, s.Backend.Type)), + }) + if err != nil { + return nil, fmt.Errorf( + "Error asking for state copy action: %s", err) + } } // If we are, then we need to initialize the old backend and @@ -1146,16 +1211,19 @@ func (m *Meta) backend_C_R_S_unchanged( } // Ask if the user wants to move their legacy remote state - copy, err := m.confirm(&terraform.InputOpts{ - Id: "backend-migrate-to-new", - Query: fmt.Sprintf( - "Do you want to copy the legacy remote state from %q?", - s.Remote.Type), - Description: strings.TrimSpace(inputBackendMigrateLegacy), - }) - if err != nil { - return nil, fmt.Errorf( - "Error asking for state copy action: %s", err) + copy := m.forceInitCopy + if !copy { + copy, err = m.confirm(&terraform.InputOpts{ + Id: "backend-migrate-to-new", + Query: fmt.Sprintf( + "Do you want to copy the legacy remote state from %q?", + s.Remote.Type), + Description: strings.TrimSpace(inputBackendMigrateLegacy), + }) + if err != nil { + return nil, fmt.Errorf( + "Error asking for state copy action: %s", err) + } } // If the user wants a copy, copy! 
@@ -1272,8 +1340,12 @@ func (m *Meta) backendInitFromLegacy(s *terraform.RemoteState) (backend.Backend,
 	}
 	config := terraform.NewResourceConfig(rawC)
 
-	// Initialize the legacy remote backend
-	b := &backendlegacy.Backend{Type: s.Type}
+	// Get the backend
+	f := backendinit.Backend(s.Type)
+	if f == nil {
+		return nil, fmt.Errorf(strings.TrimSpace(errBackendLegacyUnknown), s.Type)
+	}
+	b := f()
 
 	// Configure
 	if err := b.Configure(config); err != nil {
@@ -1328,7 +1400,7 @@ If fixing these errors requires changing your remote state configuration, you
 must switch your configuration to the new remote backend configuration. You
 can learn more about remote backends at the URL below:
 
-TODO: URL
+https://www.terraform.io/docs/backends/index.html
 
 The error(s) configuring the legacy remote state:
 
@@ -1338,7 +1410,7 @@ The error(s) configuring the legacy remote state:
 const errBackendLegacyUnknown = `
 The legacy remote state type %q could not be found.
 
-Terraform 0.9.0 shipped with backwards compatible for all built-in
+Terraform 0.9.0 shipped with backwards compatibility for all built-in
 legacy remote state types. This error may mean that you were using a custom
 Terraform build that perhaps supported a different type of remote state.
 
diff --git a/command/meta_backend_migrate.go b/command/meta_backend_migrate.go
index 1531b4a925..b9133a0523 100644
--- a/command/meta_backend_migrate.go
+++ b/command/meta_backend_migrate.go
@@ -162,27 +162,36 @@ func (m *Meta) backendMigrateState_S_S(opts *backendMigrateOpts) error {
 func (m *Meta) backendMigrateState_S_s(opts *backendMigrateOpts) error {
 	currentEnv := m.Env()
 
-	// Ask the user if they want to migrate their existing remote state
-	migrate, err := m.confirm(&terraform.InputOpts{
-		Id: "backend-migrate-multistate-to-single",
-		Query: fmt.Sprintf(
-			"Destination state %q doesn't support environments (named states).\n"+
-				"Do you want to copy only your current environment?",
-			opts.TwoType),
-		Description: fmt.Sprintf(
-			strings.TrimSpace(inputBackendMigrateMultiToSingle),
-			opts.OneType, opts.TwoType, currentEnv),
-	})
-	if err != nil {
-		return fmt.Errorf(
-			"Error asking for state migration action: %s", err)
+	migrate := m.forceInitCopy
+	if !migrate {
+		var err error
+		// Ask the user if they want to migrate their existing remote state
+		migrate, err = m.confirm(&terraform.InputOpts{
+			Id: "backend-migrate-multistate-to-single",
+			Query: fmt.Sprintf(
+				"Destination state %q doesn't support environments (named states).\n"+
+					"Do you want to copy only your current environment?",
+				opts.TwoType),
+			Description: fmt.Sprintf(
+				strings.TrimSpace(inputBackendMigrateMultiToSingle),
+				opts.OneType, opts.TwoType, currentEnv),
+		})
+		if err != nil {
+			return fmt.Errorf(
+				"Error asking for state migration action: %s", err)
+		}
 	}
+
 	if !migrate {
 		return fmt.Errorf("Migration aborted by user.")
 	}
 
 	// Copy the default state
 	opts.oneEnv = currentEnv
+
+	// Now switch back to the default env so we can access the new backend
+	m.SetEnv(backend.DefaultStateName)
+
 	return m.backendMigrateState_s_s(opts)
 }
 
@@ -231,6 +240,15 @@ func (m *Meta) backendMigrateState_s_s(opts *backendMigrateOpts) error {
 	one := stateOne.State()
 	two := stateTwo.State()
 
+	// Clear the legacy remote state in both cases. If we're at the migration
+	// step then this won't be used anymore.
+ if one != nil { + one.Remote = nil + } + if two != nil { + two.Remote = nil + } + var confirmFunc func(state.State, state.State, *backendMigrateOpts) (bool, error) switch { // No migration necessary @@ -282,6 +300,10 @@ func (m *Meta) backendMigrateState_s_s(opts *backendMigrateOpts) error { } func (m *Meta) backendMigrateEmptyConfirm(one, two state.State, opts *backendMigrateOpts) (bool, error) { + if m.forceInitCopy { + return true, nil + } + inputOpts := &terraform.InputOpts{ Id: "backend-migrate-copy-to-empty", Query: fmt.Sprintf( @@ -344,6 +366,10 @@ func (m *Meta) backendMigrateNonEmptyConfirm( return false, fmt.Errorf("Error saving temporary state: %s", err) } + if m.forceInitCopy { + return true, nil + } + // Ask for confirmation inputOpts := &terraform.InputOpts{ Id: "backend-migrate-to-backend", diff --git a/command/meta_backend_test.go b/command/meta_backend_test.go index dda5db644c..8c1fe2d668 100644 --- a/command/meta_backend_test.go +++ b/command/meta_backend_test.go @@ -480,11 +480,10 @@ func TestMetaBackend_configureNewWithStateExisting(t *testing.T) { defer os.RemoveAll(td) defer testChdir(t, td)() - // Ask input - defer testInteractiveInput(t, []string{"yes"})() - // Setup the meta m := testMetaBackend(t, nil) + // suppress input + m.forceInitCopy = true // Get the backend b, err := m.Backend(&BackendOpts{Init: true}) @@ -722,12 +721,12 @@ func TestMetaBackend_configureNewLegacyCopy(t *testing.T) { defer os.RemoveAll(td) defer testChdir(t, td)() - // Ask input - defer testInteractiveInput(t, []string{"yes", "yes"})() - // Setup the meta m := testMetaBackend(t, nil) + // suppress input + m.forceInitCopy = true + // Get the backend b, err := m.Backend(&BackendOpts{Init: true}) if err != nil { @@ -771,6 +770,13 @@ func TestMetaBackend_configureNewLegacyCopy(t *testing.T) { } } + // Verify we have no configured legacy in the state itself + { + if !state.Remote.Empty() { + t.Fatalf("legacy has remote state: %#v", state.Remote) + } + } + // Write some state state = terraform.NewState() state.Lineage = "changing" @@ -1144,6 +1150,11 @@ func TestMetaBackend_configuredChangeCopy_multiToSingle(t *testing.T) { if _, err := os.Stat(envPath); err != nil { t.Fatal("env should exist") } + + // Verify we are now in the default env, or we may not be able to access the new backend + if env := m.Env(); env != backend.DefaultStateName { + t.Fatal("using non-default env with single-env backend") + } } // Changing a configured backend that supports multi-state to a @@ -1581,11 +1592,9 @@ func TestMetaBackend_configuredUnchangedLegacyCopy(t *testing.T) { defer os.RemoveAll(td) defer testChdir(t, td)() - // Ask input - defer testInteractiveInput(t, []string{"yes", "yes"})() - // Setup the meta m := testMetaBackend(t, nil) + m.forceInitCopy = true // Get the backend b, err := m.Backend(&BackendOpts{Init: true}) @@ -2858,12 +2867,12 @@ func TestMetaBackend_planBackendEmptyDir(t *testing.T) { testFixturePath("backend-plan-backend-empty-config"), DefaultDataDir, DefaultStateFilename)) planState := original.DeepCopy() - planState.Backend = backendState.Backend // Create the plan plan := &terraform.Plan{ - Module: testModule(t, "backend-plan-backend-empty-config"), - State: planState, + Module: testModule(t, "backend-plan-backend-empty-config"), + State: planState, + Backend: backendState.Backend, } // Setup the meta @@ -2960,12 +2969,12 @@ func TestMetaBackend_planBackendMatch(t *testing.T) { testFixturePath("backend-plan-backend-empty-config"), DefaultDataDir, DefaultStateFilename)) planState := 
original.DeepCopy() - planState.Backend = backendState.Backend // Create the plan plan := &terraform.Plan{ - Module: testModule(t, "backend-plan-backend-empty-config"), - State: planState, + Module: testModule(t, "backend-plan-backend-empty-config"), + State: planState, + Backend: backendState.Backend, } // Setup the meta @@ -3062,15 +3071,15 @@ func TestMetaBackend_planBackendMismatchLineage(t *testing.T) { testFixturePath("backend-plan-backend-empty-config"), DefaultDataDir, DefaultStateFilename)) planState := original.DeepCopy() - planState.Backend = backendState.Backend // Get the real original original = testStateRead(t, "local-state.tfstate") // Create the plan plan := &terraform.Plan{ - Module: testModule(t, "backend-plan-backend-empty-config"), - State: planState, + Module: testModule(t, "backend-plan-backend-empty-config"), + State: planState, + Backend: backendState.Backend, } // Setup the meta diff --git a/command/push.go b/command/push.go index f3e173ec47..3a4c2060e3 100644 --- a/command/push.go +++ b/command/push.go @@ -71,24 +71,6 @@ func (c *PushCommand) Run(args []string) int { return 1 } - /* - // Verify the state is remote, we can't push without a remote state - s, err := c.State() - if err != nil { - c.Ui.Error(fmt.Sprintf("Failed to read state: %s", err)) - return 1 - } - if !s.State().IsRemote() { - c.Ui.Error( - "Remote state is not enabled. For Atlas to run Terraform\n" + - "for you, remote state must be used and configured. Remote\n" + - "state via any backend is accepted, not just Atlas. To\n" + - "configure remote state, use the `terraform remote config`\n" + - "command.") - return 1 - } - */ - // Check if the path is a plan plan, err := c.Plan(configPath) if err != nil { @@ -125,6 +107,17 @@ func (c *PushCommand) Run(args []string) int { return 1 } + // We require a non-local backend + if c.IsLocalBackend(b) { + c.Ui.Error( + "A remote backend is not enabled. For Atlas to run Terraform\n" + + "for you, remote state must be used and configured. Remote \n" + + "state via any backend is accepted, not just Atlas. 
To configure\n" + + "a backend, please see the documentation at the URL below:\n\n" + + "https://www.terraform.io/docs/state/remote.html") + return 1 + } + // We require a local backend local, ok := b.(backend.Local) if !ok { diff --git a/command/push_test.go b/command/push_test.go index 94573765fc..4afeba3b36 100644 --- a/command/push_test.go +++ b/command/push_test.go @@ -9,9 +9,11 @@ import ( "path/filepath" "reflect" "sort" + "strings" "testing" atlas "github.com/hashicorp/atlas-go/v1" + "github.com/hashicorp/terraform/helper/copy" "github.com/hashicorp/terraform/terraform" "github.com/mitchellh/cli" ) @@ -73,6 +75,70 @@ func TestPush_good(t *testing.T) { } } +func TestPush_goodBackendInit(t *testing.T) { + // Create a temporary working directory that is empty + td := tempDir(t) + copy.CopyDir(testFixturePath("push-backend-new"), td) + defer os.RemoveAll(td) + defer testChdir(t, td)() + + // init backend + ui := new(cli.MockUi) + ci := &InitCommand{ + Meta: Meta{ + Ui: ui, + }, + } + if code := ci.Run(nil); code != 0 { + t.Fatalf("bad: %d\n%s", code, ui.ErrorWriter) + } + + // Path where the archive will be "uploaded" to + archivePath := testTempFile(t) + defer os.Remove(archivePath) + + client := &mockPushClient{File: archivePath} + ui = new(cli.MockUi) + c := &PushCommand{ + Meta: Meta{ + ContextOpts: testCtxConfig(testProvider()), + Ui: ui, + }, + + client: client, + } + + args := []string{ + "-vcs=false", + td, + } + if code := c.Run(args); code != 0 { + t.Fatalf("bad: %d\n\n%s", code, ui.ErrorWriter.String()) + } + + actual := testArchiveStr(t, archivePath) + expected := []string{ + // Expected weird behavior, doesn't affect unpackaging + ".terraform/", + ".terraform/", + ".terraform/terraform.tfstate", + ".terraform/terraform.tfstate", + "main.tf", + } + if !reflect.DeepEqual(actual, expected) { + t.Fatalf("bad: %#v", actual) + } + + variables := make(map[string]interface{}) + if !reflect.DeepEqual(client.UpsertOptions.Variables, variables) { + t.Fatalf("bad: %#v", client.UpsertOptions) + } + + if client.UpsertOptions.Name != "hello" { + t.Fatalf("bad: %#v", client.UpsertOptions) + } +} + func TestPush_noUploadModules(t *testing.T) { // Path where the archive will be "uploaded" to archivePath := testTempFile(t) @@ -662,6 +728,12 @@ func TestPush_noState(t *testing.T) { } func TestPush_noRemoteState(t *testing.T) { + // Create a temporary working directory that is empty + td := tempDir(t) + copy.CopyDir(testFixturePath("push-no-remote"), td) + defer os.RemoveAll(td) + defer testChdir(t, td)() + state := &terraform.State{ Modules: []*terraform.ModuleState{ &terraform.ModuleState{ @@ -679,19 +751,32 @@ func TestPush_noRemoteState(t *testing.T) { } statePath := testStateFile(t, state) + // Path where the archive will be "uploaded" to + archivePath := testTempFile(t) + defer os.Remove(archivePath) + + client := &mockPushClient{File: archivePath} ui := new(cli.MockUi) c := &PushCommand{ Meta: Meta{ Ui: ui, }, + client: client, } args := []string{ + "-vcs=false", "-state", statePath, + td, } if code := c.Run(args); code != 1 { t.Fatalf("bad: %d\n\n%s", code, ui.ErrorWriter.String()) } + + errStr := ui.ErrorWriter.String() + if !strings.Contains(errStr, "remote backend") { + t.Fatalf("bad: %s", errStr) + } } func TestPush_plan(t *testing.T) { diff --git a/command/refresh_test.go b/command/refresh_test.go index 2defaff457..12241e8097 100644 --- a/command/refresh_test.go +++ b/command/refresh_test.go @@ -9,6 +9,7 @@ import ( "strings" "testing" + 
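
`terraform push` now rejects purely local state via `IsLocalBackend`, but any non-local backend passes, not just Atlas-managed state. A placeholder configuration that would satisfy the check, written as a fixture constant in the style of these tests; the consul address and path are illustrative values only:

const testPushBackendConfig = `
terraform {
  backend "consul" {
    address = "demo.consul.io"
    path    = "terraform/push-demo"
  }
}
`
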
"github.com/hashicorp/terraform/helper/copy" "github.com/hashicorp/terraform/terraform" "github.com/mitchellh/cli" ) @@ -59,6 +60,37 @@ func TestRefresh(t *testing.T) { } } +func TestRefresh_empty(t *testing.T) { + // Create a temporary working directory that is empty + td := tempDir(t) + copy.CopyDir(testFixturePath("refresh-empty"), td) + defer os.RemoveAll(td) + defer testChdir(t, td)() + + p := testProvider() + ui := new(cli.MockUi) + c := &RefreshCommand{ + Meta: Meta{ + ContextOpts: testCtxConfig(p), + Ui: ui, + }, + } + + p.RefreshFn = nil + p.RefreshReturn = &terraform.InstanceState{ID: "yes"} + + args := []string{ + td, + } + if code := c.Run(args); code != 0 { + t.Fatalf("bad: %d\n\n%s", code, ui.ErrorWriter.String()) + } + + if p.RefreshCalled { + t.Fatal("refresh should not be called") + } +} + func TestRefresh_lockedState(t *testing.T) { state := testState() statePath := testStateFile(t, state) @@ -96,25 +128,6 @@ func TestRefresh_lockedState(t *testing.T) { } } -func TestRefresh_badState(t *testing.T) { - p := testProvider() - ui := new(cli.MockUi) - c := &RefreshCommand{ - Meta: Meta{ - ContextOpts: testCtxConfig(p), - Ui: ui, - }, - } - - args := []string{ - "-state", "i-should-not-exist-ever", - testFixturePath("refresh"), - } - if code := c.Run(args); code != 1 { - t.Fatalf("bad: %d\n\n%s", code, ui.ErrorWriter.String()) - } -} - func TestRefresh_cwd(t *testing.T) { cwd, err := os.Getwd() if err != nil { diff --git a/command/test-fixtures/apply-terraform-env/main.tf b/command/test-fixtures/apply-terraform-env/main.tf new file mode 100644 index 0000000000..6fc63dbc55 --- /dev/null +++ b/command/test-fixtures/apply-terraform-env/main.tf @@ -0,0 +1 @@ +output "output" { value = "${terraform.env}" } diff --git a/command/test-fixtures/backend-change-multi-to-single/.terraform/environment b/command/test-fixtures/backend-change-multi-to-single/.terraform/environment new file mode 100644 index 0000000000..e5e6010956 --- /dev/null +++ b/command/test-fixtures/backend-change-multi-to-single/.terraform/environment @@ -0,0 +1 @@ +env1 diff --git a/command/test-fixtures/backend-change-multi-to-single/local-state.tfstate b/command/test-fixtures/backend-change-multi-to-single/terraform.tfstate.d/env1/terraform.tfstate similarity index 100% rename from command/test-fixtures/backend-change-multi-to-single/local-state.tfstate rename to command/test-fixtures/backend-change-multi-to-single/terraform.tfstate.d/env1/terraform.tfstate diff --git a/command/test-fixtures/backend-new-legacy/local-state-old.tfstate b/command/test-fixtures/backend-new-legacy/local-state-old.tfstate index 0af594cc40..8f312596d1 100644 --- a/command/test-fixtures/backend-new-legacy/local-state-old.tfstate +++ b/command/test-fixtures/backend-new-legacy/local-state-old.tfstate @@ -2,5 +2,11 @@ "version": 3, "terraform_version": "0.8.2", "serial": 7, - "lineage": "backend-new-legacy" + "lineage": "backend-new-legacy", + "remote": { + "type": "local", + "config": { + "path": "local-state-old.tfstate" + } + } } diff --git a/command/test-fixtures/init-backend-config-file-change/.terraform/terraform.tfstate b/command/test-fixtures/init-backend-config-file-change/.terraform/terraform.tfstate new file mode 100644 index 0000000000..073bd7a822 --- /dev/null +++ b/command/test-fixtures/init-backend-config-file-change/.terraform/terraform.tfstate @@ -0,0 +1,22 @@ +{ + "version": 3, + "serial": 0, + "lineage": "666f9301-7e65-4b19-ae23-71184bb19b03", + "backend": { + "type": "local", + "config": { + "path": "local-state.tfstate" + }, 
+ "hash": 9073424445967744180 + }, + "modules": [ + { + "path": [ + "root" + ], + "outputs": {}, + "resources": {}, + "depends_on": [] + } + ] +} diff --git a/command/test-fixtures/init-backend-config-file-change/input.config b/command/test-fixtures/init-backend-config-file-change/input.config new file mode 100644 index 0000000000..6cd14f4a3d --- /dev/null +++ b/command/test-fixtures/init-backend-config-file-change/input.config @@ -0,0 +1 @@ +path = "hello" diff --git a/command/test-fixtures/init-backend-config-file-change/main.tf b/command/test-fixtures/init-backend-config-file-change/main.tf new file mode 100644 index 0000000000..ca1bd3921e --- /dev/null +++ b/command/test-fixtures/init-backend-config-file-change/main.tf @@ -0,0 +1,5 @@ +terraform { + backend "local" { + path = "local-state.tfstate" + } +} diff --git a/command/test-fixtures/init-backend-config-kv/main.tf b/command/test-fixtures/init-backend-config-kv/main.tf new file mode 100644 index 0000000000..c08b42fb03 --- /dev/null +++ b/command/test-fixtures/init-backend-config-kv/main.tf @@ -0,0 +1,3 @@ +terraform { + backend "local" {} +} diff --git a/command/test-fixtures/push-backend-new/main.tf b/command/test-fixtures/push-backend-new/main.tf new file mode 100644 index 0000000000..68a49b44a5 --- /dev/null +++ b/command/test-fixtures/push-backend-new/main.tf @@ -0,0 +1,5 @@ +terraform { + backend "inmem" {} +} + +atlas { name = "hello" } diff --git a/command/test-fixtures/push-no-remote/main.tf b/command/test-fixtures/push-no-remote/main.tf new file mode 100644 index 0000000000..2651626363 --- /dev/null +++ b/command/test-fixtures/push-no-remote/main.tf @@ -0,0 +1,5 @@ +resource "aws_instance" "foo" {} + +atlas { + name = "foo" +} diff --git a/command/test-fixtures/refresh-empty/main.tf b/command/test-fixtures/refresh-empty/main.tf new file mode 100644 index 0000000000..fec56017dc --- /dev/null +++ b/command/test-fixtures/refresh-empty/main.tf @@ -0,0 +1 @@ +# Hello diff --git a/config/append.go b/config/append.go index a421df4a0d..5f4e89eef7 100644 --- a/config/append.go +++ b/config/append.go @@ -35,8 +35,13 @@ func Append(c1, c2 *Config) (*Config, error) { c.Atlas = c2.Atlas } - c.Terraform = c1.Terraform - if c2.Terraform != nil { + // merge Terraform blocks + if c1.Terraform != nil { + c.Terraform = c1.Terraform + if c2.Terraform != nil { + c.Terraform.Merge(c2.Terraform) + } + } else { c.Terraform = c2.Terraform } diff --git a/config/append_test.go b/config/append_test.go index aecb80e66a..17cca25b72 100644 --- a/config/append_test.go +++ b/config/append_test.go @@ -118,6 +118,31 @@ func TestAppend(t *testing.T) { }, false, }, + + // appending configs merges terraform blocks + { + &Config{ + Terraform: &Terraform{ + RequiredVersion: "A", + }, + }, + &Config{ + Terraform: &Terraform{ + Backend: &Backend{ + Type: "test", + }, + }, + }, + &Config{ + Terraform: &Terraform{ + RequiredVersion: "A", + Backend: &Backend{ + Type: "test", + }, + }, + }, + false, + }, } for i, tc := range cases { diff --git a/config/config.go b/config/config.go index bdae585242..bf064e57a8 100644 --- a/config/config.go +++ b/config/config.go @@ -501,10 +501,13 @@ func (c *Config) Validate() error { // Good case *ModuleVariable: case *ResourceVariable: + case *TerraformVariable: case *UserVariable: default: - panic(fmt.Sprintf("Unknown type in count var in %s: %T", n, v)) + errs = append(errs, fmt.Errorf( + "Internal error. 
Unknown type in count var in %s: %T", + n, v)) } } diff --git a/config/config_terraform.go b/config/config_terraform.go index 952d59cc4e..a547cc798d 100644 --- a/config/config_terraform.go +++ b/config/config_terraform.go @@ -47,6 +47,18 @@ func (t *Terraform) Validate() []error { return errs } +// Merge t with t2. +// Any conflicting fields are overwritten by t2. +func (t *Terraform) Merge(t2 *Terraform) { + if t2.RequiredVersion != "" { + t.RequiredVersion = t2.RequiredVersion + } + + if t2.Backend != nil { + t.Backend = t2.Backend + } +} + // Backend is the configuration for the "backend" to use with Terraform. // A backend is responsible for all major behavior of Terraform's core. // The abstraction layer above the core (the "backend") allows for behavior diff --git a/config/config_test.go b/config/config_test.go index 2ef68dae97..b391295c86 100644 --- a/config/config_test.go +++ b/config/config_test.go @@ -201,6 +201,12 @@ func TestConfigValidate_table(t *testing.T) { true, "cannot contain interp", }, + { + "nested types in variable default", + "validate-var-nested", + false, + "", + }, } for i, tc := range cases { diff --git a/config/interpolate.go b/config/interpolate.go index 5867c6333c..bbb3555418 100644 --- a/config/interpolate.go +++ b/config/interpolate.go @@ -84,6 +84,13 @@ type SimpleVariable struct { Key string } +// TerraformVariable is a "terraform."-prefixed variable used to access +// metadata about the Terraform run. +type TerraformVariable struct { + Field string + key string +} + // A UserVariable is a variable that is referencing a user variable // that is inputted from outside the configuration. This looks like // "${var.foo}" @@ -101,6 +108,8 @@ func NewInterpolatedVariable(v string) (InterpolatedVariable, error) { return NewPathVariable(v) } else if strings.HasPrefix(v, "self.") { return NewSelfVariable(v) + } else if strings.HasPrefix(v, "terraform.") { + return NewTerraformVariable(v) } else if strings.HasPrefix(v, "var.") { return NewUserVariable(v) } else if strings.HasPrefix(v, "module.") { @@ -278,6 +287,22 @@ func (v *SimpleVariable) GoString() string { return fmt.Sprintf("*%#v", *v) } +func NewTerraformVariable(key string) (*TerraformVariable, error) { + field := key[len("terraform."):] + return &TerraformVariable{ + Field: field, + key: key, + }, nil +} + +func (v *TerraformVariable) FullKey() string { + return v.key +} + +func (v *TerraformVariable) GoString() string { + return fmt.Sprintf("*%#v", *v) +} + func NewUserVariable(key string) (*UserVariable, error) { name := key[len("var."):] elem := "" diff --git a/config/interpolate_funcs.go b/config/interpolate_funcs.go index ad543c3086..e9c1ea4e24 100644 --- a/config/interpolate_funcs.go +++ b/config/interpolate_funcs.go @@ -88,6 +88,7 @@ func Funcs() map[string]ast.Function { "slice": interpolationFuncSlice(), "sort": interpolationFuncSort(), "split": interpolationFuncSplit(), + "substr": interpolationFuncSubstr(), "timestamp": interpolationFuncTimestamp(), "title": interpolationFuncTitle(), "trimspace": interpolationFuncTrimSpace(), @@ -1183,3 +1184,48 @@ func interpolationFuncTitle() ast.Function { }, } } + +// interpolationFuncSubstr implements the "substr" function that allows strings +// to be truncated. 
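+// For example, substr("foobar", 0, 3) returns "foo" and substr("foobar", -3, 3)
+// returns "bar": a negative offset counts back from the end of the string, and
+// a length of -1 means "from offset through the end of the string".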
+func interpolationFuncSubstr() ast.Function { + return ast.Function{ + ArgTypes: []ast.Type{ + ast.TypeString, // input string + ast.TypeInt, // offset + ast.TypeInt, // length + }, + ReturnType: ast.TypeString, + Callback: func(args []interface{}) (interface{}, error) { + str := args[0].(string) + offset := args[1].(int) + length := args[2].(int) + + // Interpret a negative offset as being equivalent to a positive + // offset taken from the end of the string. + if offset < 0 { + offset += len(str) + } + + // Interpret a length of `-1` as indicating that the substring + // should start at `offset` and continue until the end of the + // string. Any other negative length (other than `-1`) is invalid. + if length == -1 { + length = len(str) + } else if length >= 0 { + length += offset + } else { + return nil, fmt.Errorf("length should be a non-negative integer") + } + + if offset > len(str) { + return nil, fmt.Errorf("offset cannot be larger than the length of the string") + } + + if length > len(str) { + return nil, fmt.Errorf("'offset + length' cannot be larger than the length of the string") + } + + return str[offset:length], nil + }, + } +} diff --git a/config/interpolate_funcs_test.go b/config/interpolate_funcs_test.go index 193fcd1474..c5ef36da50 100644 --- a/config/interpolate_funcs_test.go +++ b/config/interpolate_funcs_test.go @@ -2002,7 +2002,7 @@ func TestInterpolateFuncTimestamp(t *testing.T) { } if resultTime.Sub(currentTime).Seconds() > 10.0 { - t.Fatalf("Timestamp Diff too large. Expected: %s\nRecieved: %s", currentTime.Format(time.RFC3339), result.Value.(string)) + t.Fatalf("Timestamp Diff too large. Expected: %s\nReceived: %s", currentTime.Format(time.RFC3339), result.Value.(string)) } } @@ -2071,3 +2071,61 @@ func TestInterpolateFuncPathExpand(t *testing.T) { }, }) } + +func TestInterpolateFuncSubstr(t *testing.T) { + testFunction(t, testFunctionConfig{ + Cases: []testFunctionCase{ + { + `${substr("foobar", 0, 0)}`, + "", + false, + }, + { + `${substr("foobar", 0, -1)}`, + "foobar", + false, + }, + { + `${substr("foobar", 0, 3)}`, + "foo", + false, + }, + { + `${substr("foobar", 3, 3)}`, + "bar", + false, + }, + { + `${substr("foobar", -3, 3)}`, + "bar", + false, + }, + + // empty string + { + `${substr("", 0, 0)}`, + "", + false, + }, + + // invalid offset + { + `${substr("", 1, 0)}`, + nil, + true, + }, + + // invalid length + { + `${substr("", 0, 1)}`, + nil, + true, + }, + { + `${substr("", 0, -2)}`, + nil, + true, + }, + }, + }) +} diff --git a/config/interpolate_test.go b/config/interpolate_test.go index e5224ee0d6..0cdb18b69d 100644 --- a/config/interpolate_test.go +++ b/config/interpolate_test.go @@ -63,6 +63,14 @@ func TestNewInterpolatedVariable(t *testing.T) { }, false, }, + { + "terraform.env", + &TerraformVariable{ + Field: "env", + key: "terraform.env", + }, + false, + }, } for i, tc := range cases { diff --git a/config/interpolate_walk.go b/config/interpolate_walk.go index 81fa812087..ead3d102e1 100644 --- a/config/interpolate_walk.go +++ b/config/interpolate_walk.go @@ -206,6 +206,12 @@ func (w *interpolationWalker) Primitive(v reflect.Value) error { } func (w *interpolationWalker) replaceCurrent(v reflect.Value) { + // if we don't have at least 2 values, we're not going to find a map, but + // we could panic. 
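+	// (w.cs is the stack of containers the walker has entered; the enclosing
+	// map, when present, is the second entry from the top, as indexed below.)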
+ if len(w.cs) < 2 { + return + } + c := w.cs[len(w.cs)-2] switch c.Kind() { case reflect.Map: diff --git a/config/loader_hcl.go b/config/loader_hcl.go index 8e0d62c7ba..a40ad5ba77 100644 --- a/config/loader_hcl.go +++ b/config/loader_hcl.go @@ -209,6 +209,19 @@ func loadTerraformHcl(list *ast.ObjectList) (*Terraform, error) { // Get our one item item := list.Items[0] + // This block should have an empty top level ObjectItem. If there are keys + // here, it's likely because we have a flattened JSON object, and we can + // lift this into a nested ObjectList to decode properly. + if len(item.Keys) > 0 { + item = &ast.ObjectItem{ + Val: &ast.ObjectType{ + List: &ast.ObjectList{ + Items: []*ast.ObjectItem{item}, + }, + }, + } + } + // We need the item value as an ObjectList var listVal *ast.ObjectList if ot, ok := item.Val.(*ast.ObjectType); ok { diff --git a/config/loader_test.go b/config/loader_test.go index f49ce48c15..ace70d90e4 100644 --- a/config/loader_test.go +++ b/config/loader_test.go @@ -359,6 +359,57 @@ backend (s3) } } +func TestLoadFile_terraformBackendJSON(t *testing.T) { + c, err := LoadFile(filepath.Join(fixtureDir, "terraform-backend.tf.json")) + if err != nil { + t.Fatalf("err: %s", err) + } + + if c == nil { + t.Fatal("config should not be nil") + } + + if c.Dir != "" { + t.Fatalf("bad: %#v", c.Dir) + } + + { + actual := terraformStr(c.Terraform) + expected := strings.TrimSpace(` +backend (s3) + foo`) + if actual != expected { + t.Fatalf("bad:\n%s", actual) + } + } +} + +// test that the alternate, more obvious JSON format also decodes properly +func TestLoadFile_terraformBackendJSON2(t *testing.T) { + c, err := LoadFile(filepath.Join(fixtureDir, "terraform-backend-2.tf.json")) + if err != nil { + t.Fatalf("err: %s", err) + } + + if c == nil { + t.Fatal("config should not be nil") + } + + if c.Dir != "" { + t.Fatalf("bad: %#v", c.Dir) + } + + { + actual := terraformStr(c.Terraform) + expected := strings.TrimSpace(` +backend (s3) + foo`) + if actual != expected { + t.Fatalf("bad:\n%s", actual) + } + } +} + func TestLoadFile_terraformBackendMulti(t *testing.T) { _, err := LoadFile(filepath.Join(fixtureDir, "terraform-backend-multi.tf")) if err == nil { diff --git a/config/merge.go b/config/merge.go index 2e7686594d..db214be456 100644 --- a/config/merge.go +++ b/config/merge.go @@ -32,9 +32,13 @@ func Merge(c1, c2 *Config) (*Config, error) { c.Atlas = c2.Atlas } - // Merge the Terraform configuration, which is a complete overwrite. 
- c.Terraform = c1.Terraform - if c2.Terraform != nil { + // Merge the Terraform configuration + if c1.Terraform != nil { + c.Terraform = c1.Terraform + if c2.Terraform != nil { + c.Terraform.Merge(c2.Terraform) + } + } else { c.Terraform = c2.Terraform } diff --git a/config/merge_test.go b/config/merge_test.go index b1d27b6dbc..5cd87aca66 100644 --- a/config/merge_test.go +++ b/config/merge_test.go @@ -434,6 +434,31 @@ func TestMerge(t *testing.T) { }, false, }, + + // terraform blocks are merged, not overwritten + { + &Config{ + Terraform: &Terraform{ + RequiredVersion: "A", + }, + }, + &Config{ + Terraform: &Terraform{ + Backend: &Backend{ + Type: "test", + }, + }, + }, + &Config{ + Terraform: &Terraform{ + RequiredVersion: "A", + Backend: &Backend{ + Type: "test", + }, + }, + }, + false, + }, } for i, tc := range cases { diff --git a/config/module/test-fixtures/validate-module-unknown/main.tf b/config/module/test-fixtures/validate-module-unknown/main.tf new file mode 100644 index 0000000000..29b3c01bc7 --- /dev/null +++ b/config/module/test-fixtures/validate-module-unknown/main.tf @@ -0,0 +1,3 @@ +resource "null_resource" "var" { + key = "${module.unknown.value}" +} diff --git a/config/module/test-fixtures/validate-required-var/child/main.tf b/config/module/test-fixtures/validate-required-var/child/main.tf index 618ae3c42e..00b6c4e5bd 100644 --- a/config/module/test-fixtures/validate-required-var/child/main.tf +++ b/config/module/test-fixtures/validate-required-var/child/main.tf @@ -1 +1,2 @@ variable "memory" {} +variable "feature" {} diff --git a/config/module/tree.go b/config/module/tree.go index d20f163a49..b6f90fd930 100644 --- a/config/module/tree.go +++ b/config/module/tree.go @@ -259,7 +259,7 @@ func (t *Tree) Validate() error { } // If something goes wrong, here is our error template - newErr := &TreeError{Name: []string{t.Name()}} + newErr := &treeError{Name: []string{t.Name()}} // Terraform core does not handle root module children named "root". // We plan to fix this in the future but this bug was brought up in @@ -271,15 +271,14 @@ func (t *Tree) Validate() error { // Validate our configuration first. if err := t.config.Validate(); err != nil { - newErr.Err = err - return newErr + newErr.Add(err) } // If we're the root, we do extra validation. This validation usually // requires the entire tree (since children don't have parent pointers). if len(t.path) == 0 { if err := t.validateProviderAlias(); err != nil { - return err + newErr.Add(err) } } @@ -293,7 +292,7 @@ func (t *Tree) Validate() error { continue } - verr, ok := err.(*TreeError) + verr, ok := err.(*treeError) if !ok { // Unknown error, just return... return err @@ -301,7 +300,7 @@ func (t *Tree) Validate() error { // Append ourselves to the error and then return verr.Name = append(verr.Name, t.Name()) - return verr + newErr.AddChild(verr) } // Go over all the modules and verify that any parameters are valid @@ -327,10 +326,9 @@ func (t *Tree) Validate() error { // Compare to the keys in our raw config for the module for k, _ := range m.RawConfig.Raw { if _, ok := varMap[k]; !ok { - newErr.Err = fmt.Errorf( + newErr.Add(fmt.Errorf( "module %s: %s is not a valid parameter", - m.Name, k) - return newErr + m.Name, k)) } // Remove the required @@ -339,10 +337,9 @@ func (t *Tree) Validate() error { // If we have any required left over, they aren't set. 
for k, _ := range requiredMap {
-			newErr.Err = fmt.Errorf(
-				"module %s: required variable %s not set",
-				m.Name, k)
-			return newErr
+			newErr.Add(fmt.Errorf(
+				"module %s: required variable %q not set",
+				m.Name, k))
 		}
 	}

@@ -357,8 +354,10 @@ func (t *Tree) Validate() error {
 		tree, ok := children[mv.Name]
 		if !ok {
-			// This should never happen because Load watches us
-			panic("module not found in children: " + mv.Name)
+			newErr.Add(fmt.Errorf(
+				"%s: undefined module referenced %s",
+				source, mv.Name))
+			continue
 		}

 		found := false
@@ -369,33 +368,61 @@ func (t *Tree) Validate() error {
 			}
 		}
 		if !found {
-			newErr.Err = fmt.Errorf(
+			newErr.Add(fmt.Errorf(
 				"%s: %s is not a valid output for module %s",
-				source, mv.Field, mv.Name)
-			return newErr
+				source, mv.Field, mv.Name))
 		}
 	}
 }

+	return newErr.ErrOrNil()
+}
+
+// treeError is an error used by Tree.Validate to accumulate all
+// validation errors.
+type treeError struct {
+	Name     []string
+	Errs     []error
+	Children []*treeError
+}
+
+func (e *treeError) Add(err error) {
+	e.Errs = append(e.Errs, err)
+}
+
+func (e *treeError) AddChild(err *treeError) {
+	e.Children = append(e.Children, err)
+}
+
+func (e *treeError) ErrOrNil() error {
+	if len(e.Errs) > 0 || len(e.Children) > 0 {
+		return e
+	}
 	return nil
 }

-// TreeError is an error returned by Tree.Validate if an error occurs
-// with validation.
-type TreeError struct {
-	Name []string
-	Err  error
-}
+func (e *treeError) Error() string {
+	name := strings.Join(e.Name, ".")
+	var out bytes.Buffer
+	fmt.Fprintf(&out, "module %s: ", name)

-func (e *TreeError) Error() string {
-	// Build up the name
-	var buf bytes.Buffer
-	for _, n := range e.Name {
-		buf.WriteString(n)
-		buf.WriteString(".")
+	if len(e.Errs) == 1 {
+		// single line error
+		out.WriteString(e.Errs[0].Error())
+	} else {
+		// multi-line error
+		for _, err := range e.Errs {
+			fmt.Fprintf(&out, "\n    %s", err)
+		}
 	}
-	buf.Truncate(buf.Len() - 1)

-	// Format the value
-	return fmt.Sprintf("module %s: %s", buf.String(), e.Err)
+	if len(e.Children) > 0 {
+		// start the next error on a new line
+		out.WriteString("\n  ")
+	}
+	for _, child := range e.Children {
+		out.WriteString(child.Error())
+	}
+
+	return out.String()
 }
diff --git a/config/module/tree_test.go b/config/module/tree_test.go
index 6ca5f2a72e..87bf1df67a 100644
--- a/config/module/tree_test.go
+++ b/config/module/tree_test.go
@@ -410,6 +410,27 @@ func TestTreeValidate_requiredChildVar(t *testing.T) {
 		t.Fatalf("err: %s", err)
 	}

+	err := tree.Validate()
+	if err == nil {
+		t.Fatal("should error")
+	}
+
+	// ensure both variables are mentioned in the output
+	errMsg := err.Error()
+	for _, v := range []string{"feature", "memory"} {
+		if !strings.Contains(errMsg, v) {
+			t.Fatalf("no mention of missing variable %q", v)
+		}
+	}
+}
+
+func TestTreeValidate_unknownModule(t *testing.T) {
+	tree := NewTree("", testConfig(t, "validate-module-unknown"))
+
+	if err := tree.Load(testStorage(t), GetModeNone); err != nil {
+		t.Fatalf("err: %s", err)
+	}
+
 	if err := tree.Validate(); err == nil {
 		t.Fatal("should error")
 	}
diff --git a/config/test-fixtures/terraform-backend-2.tf.json b/config/test-fixtures/terraform-backend-2.tf.json
new file mode 100644
index 0000000000..d705fe85ae
--- /dev/null
+++ b/config/test-fixtures/terraform-backend-2.tf.json
@@ -0,0 +1,9 @@
+{
+  "terraform": {
+    "backend": {
+      "s3": {
+        "foo": "bar"
+      }
+    }
+  }
+}
diff --git a/config/test-fixtures/terraform-backend.tf.json b/config/test-fixtures/terraform-backend.tf.json
new file mode 100644
index 0000000000..39c110b8cd
--- /dev/null
+++ b/config/test-fixtures/terraform-backend.tf.json
@@ -0,0 +1,9 @@
+{
+  "terraform": [{
+    "backend": [{
+      "s3": {
+        "foo": "bar"
+      }
+    }]
+  }]
+}
diff --git a/config/test-fixtures/validate-var-nested/main.tf b/config/test-fixtures/validate-var-nested/main.tf
new file mode 100644
index 0000000000..a3d64647b1
--- /dev/null
+++ b/config/test-fixtures/validate-var-nested/main.tf
@@ -0,0 +1,6 @@
+variable "foo" {
+  default = [["foo", "bar"]]
+}
+variable "bar" {
+  default = [{foo = "bar"}]
+}
diff --git a/helper/acctest/random.go b/helper/acctest/random.go
index fbc4428d79..1a6fc8d199 100644
--- a/helper/acctest/random.go
+++ b/helper/acctest/random.go
@@ -1,8 +1,17 @@
 package acctest

 import (
+	"bytes"
+	crand "crypto/rand"
+	"crypto/rsa"
+	"crypto/x509"
+	"encoding/pem"
+	"fmt"
 	"math/rand"
+	"strings"
 	"time"
+
+	"golang.org/x/crypto/ssh"
 )

 // Helpers for generating random tidbits for use in identifiers to prevent
@@ -30,6 +39,28 @@ func RandStringFromCharSet(strlen int, charSet string) string {
 	return string(result)
 }

+// RandSSHKeyPair generates a public and private SSH key pair. The public key is
+// returned in OpenSSH format, and the private key is PEM encoded.
+func RandSSHKeyPair(comment string) (string, string, error) {
+	privateKey, err := rsa.GenerateKey(crand.Reader, 1024)
+	if err != nil {
+		return "", "", err
+	}
+
+	var privateKeyBuffer bytes.Buffer
+	privateKeyPEM := &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(privateKey)}
+	// Encode directly into the buffer; an unflushed bufio.Writer here would
+	// leave the buffer empty.
+	if err := pem.Encode(&privateKeyBuffer, privateKeyPEM); err != nil {
+		return "", "", err
+	}
+
+	publicKey, err := ssh.NewPublicKey(&privateKey.PublicKey)
+	if err != nil {
+		return "", "", err
+	}
+	keyMaterial := strings.TrimSpace(string(ssh.MarshalAuthorizedKey(publicKey)))
+	return fmt.Sprintf("%s %s", keyMaterial, comment), privateKeyBuffer.String(), nil
+}
+
 // Seeds random with current timestamp
 func reseed() {
 	rand.Seed(time.Now().UTC().UnixNano())
diff --git a/helper/resource/testing.go b/helper/resource/testing.go
index 6d2deb9e53..9557207c3e 100644
--- a/helper/resource/testing.go
+++ b/helper/resource/testing.go
@@ -22,6 +22,13 @@ import (

 const TestEnvVar = "TF_ACC"

+// TestProvider can be implemented by any ResourceProvider to provide custom
+// reset functionality at the start of an acceptance test.
+// The helper/schema Provider implements this interface.
+type TestProvider interface {
+	TestReset() error
+}
+
 // TestCheckFunc is the callback type used with acceptance tests to check
 // the state of a resource. The state passed in is the latest state known,
 // or in the case of being after a destroy, it is the last known state when
@@ -144,6 +151,11 @@ type TestStep struct {
 	// test to pass.
 	ExpectError *regexp.Regexp

+	// PlanOnly can be set to only run `plan` with this configuration, and not
+	// actually apply it. This is useful for ensuring config changes result in
+	// no-op plans.
+	PlanOnly bool
+
 	// PreventPostDestroyRefresh can be set to true for cases where data sources
 	// are tested alongside real resources
 	PreventPostDestroyRefresh bool
@@ -216,13 +228,9 @@ func Test(t TestT, c TestCase) {
 		c.PreCheck()
 	}

-	// Build our context options that we can
-	ctxProviders := c.ProviderFactories
-	if ctxProviders == nil {
-		ctxProviders = make(map[string]terraform.ResourceProviderFactory)
-		for k, p := range c.Providers {
-			ctxProviders[k] = terraform.ResourceProviderFactoryFixed(p)
-		}
+	ctxProviders, err := testProviderFactories(c)
+	if err != nil {
+		t.Fatal(err)
 	}

 	opts := terraform.ContextOpts{Providers: ctxProviders}
@@ -333,6 +341,43 @@ func Test(t TestT, c TestCase) {
 	}
 }

+// testProviderFactories is a helper to build the ResourceProviderFactory map
+// with pre-instantiated ResourceProviders, so that we can reset them for the
+// test, while only calling the factory function once.
+// Any errors are stored so that they can be returned by the factory in
+// terraform to match non-test behavior.
+func testProviderFactories(c TestCase) (map[string]terraform.ResourceProviderFactory, error) {
+	ctxProviders := make(map[string]terraform.ResourceProviderFactory)
+
+	// add any fixed providers
+	for k, p := range c.Providers {
+		ctxProviders[k] = terraform.ResourceProviderFactoryFixed(p)
+	}
+
+	// call any factory functions and store the result.
+	for k, pf := range c.ProviderFactories {
+		p, err := pf()
+		ctxProviders[k] = func() (terraform.ResourceProvider, error) {
+			return p, err
+		}
+	}
+
+	// reset the providers if needed
+	for k, pf := range ctxProviders {
+		// we can ignore any errors here, if we don't have a provider to reset
+		// the error will be handled later
+		p, _ := pf()
+		if p, ok := p.(TestProvider); ok {
+			err := p.TestReset()
+			if err != nil {
+				return nil, fmt.Errorf("[ERROR] failed to reset provider %q: %s", k, err)
+			}
+		}
+	}
+
+	return ctxProviders, nil
+}
+
 // UnitTest is a helper to force the acceptance testing harness to run in the
 // normal unit test suite. This should only be used for resources that don't
 // have any external dependencies.
diff --git a/helper/resource/testing_config.go b/helper/resource/testing_config.go
index b49fdc7940..537a11c34a 100644
--- a/helper/resource/testing_config.go
+++ b/helper/resource/testing_config.go
@@ -53,34 +53,38 @@ func testStep(
 			"Error refreshing: %s", err)
 	}

-	// Plan!
-	if p, err := ctx.Plan(); err != nil {
-		return state, fmt.Errorf(
-			"Error planning: %s", err)
-	} else {
-		log.Printf("[WARN] Test: Step plan: %s", p)
-	}
-
-	// We need to keep a copy of the state prior to destroying
-	// such that destroy steps can verify their behaviour in the check
-	// function
-	stateBeforeApplication := state.DeepCopy()
-
-	// Apply!
-	state, err = ctx.Apply()
-	if err != nil {
-		return state, fmt.Errorf("Error applying: %s", err)
-	}
-
-	// Check! Excitement!
-	if step.Check != nil {
-		if step.Destroy {
-			if err := step.Check(stateBeforeApplication); err != nil {
-				return state, fmt.Errorf("Check failed: %s", err)
-			}
+	// If this step is a PlanOnly step, skip over this first Plan and subsequent
+	// Apply, and use the follow-up Plan that checks for perpetual diffs
+	if !step.PlanOnly {
+		// Plan!
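+		// (a successful plan is logged at WARN level below so the step's diff
+		// appears in the test output)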
+ if p, err := ctx.Plan(); err != nil { + return state, fmt.Errorf( + "Error planning: %s", err) } else { - if err := step.Check(state); err != nil { - return state, fmt.Errorf("Check failed: %s", err) + log.Printf("[WARN] Test: Step plan: %s", p) + } + + // We need to keep a copy of the state prior to destroying + // such that destroy steps can verify their behaviour in the check + // function + stateBeforeApplication := state.DeepCopy() + + // Apply! + state, err = ctx.Apply() + if err != nil { + return state, fmt.Errorf("Error applying: %s", err) + } + + // Check! Excitement! + if step.Check != nil { + if step.Destroy { + if err := step.Check(stateBeforeApplication); err != nil { + return state, fmt.Errorf("Check failed: %s", err) + } + } else { + if err := step.Check(state); err != nil { + return state, fmt.Errorf("Check failed: %s", err) + } } } } diff --git a/helper/resource/testing_test.go b/helper/resource/testing_test.go index d2e05c0c52..7c64f9eb8c 100644 --- a/helper/resource/testing_test.go +++ b/helper/resource/testing_test.go @@ -4,7 +4,9 @@ import ( "errors" "fmt" "os" + "regexp" "strings" + "sync" "sync/atomic" "testing" @@ -25,8 +27,26 @@ func init() { } } +// wrap the mock provider to implement TestProvider +type resetProvider struct { + *terraform.MockResourceProvider + mu sync.Mutex + TestResetCalled bool + TestResetError error +} + +func (p *resetProvider) TestReset() error { + p.mu.Lock() + defer p.mu.Unlock() + p.TestResetCalled = true + return p.TestResetError +} + func TestTest(t *testing.T) { - mp := testProvider() + mp := &resetProvider{ + MockResourceProvider: testProvider(), + } + mp.DiffReturn = nil mp.ApplyFn = func( @@ -95,6 +115,61 @@ func TestTest(t *testing.T) { if !checkDestroy { t.Fatal("didn't call check for destroy") } + if !mp.TestResetCalled { + t.Fatal("didn't call TestReset") + } +} + +func TestTest_plan_only(t *testing.T) { + mp := testProvider() + mp.ApplyReturn = &terraform.InstanceState{ + ID: "foo", + } + + checkDestroy := false + + checkDestroyFn := func(*terraform.State) error { + checkDestroy = true + return nil + } + + mt := new(mockT) + Test(mt, TestCase{ + Providers: map[string]terraform.ResourceProvider{ + "test": mp, + }, + CheckDestroy: checkDestroyFn, + Steps: []TestStep{ + TestStep{ + Config: testConfigStr, + PlanOnly: true, + ExpectNonEmptyPlan: false, + }, + }, + }) + + if !mt.failed() { + t.Fatal("test should've failed") + } + + expected := `Step 0 error: After applying this step, the plan was not empty: + +DIFF: + +CREATE: test_instance.foo + foo: "" => "bar" + +STATE: + +` + + if mt.failMessage() != expected { + t.Fatalf("Expected message: %s\n\ngot:\n\n%s", expected, mt.failMessage()) + } + + if !checkDestroy { + t.Fatal("didn't call check for destroy") + } } func TestTest_idRefresh(t *testing.T) { @@ -355,6 +430,53 @@ func TestTest_stepError(t *testing.T) { } } +func TestTest_factoryError(t *testing.T) { + resourceFactoryError := fmt.Errorf("resource factory error") + + factory := func() (terraform.ResourceProvider, error) { + return nil, resourceFactoryError + } + + mt := new(mockT) + Test(mt, TestCase{ + ProviderFactories: map[string]terraform.ResourceProviderFactory{ + "test": factory, + }, + Steps: []TestStep{ + TestStep{ + ExpectError: regexp.MustCompile("resource factory error"), + }, + }, + }) + + if !mt.failed() { + t.Fatal("test should've failed") + } +} + +func TestTest_resetError(t *testing.T) { + mp := &resetProvider{ + MockResourceProvider: testProvider(), + TestResetError: fmt.Errorf("provider reset error"), + 
}

+	mt := new(mockT)
+	Test(mt, TestCase{
+		Providers: map[string]terraform.ResourceProvider{
+			"test": mp,
+		},
+		Steps: []TestStep{
+			TestStep{
+				ExpectError: regexp.MustCompile("provider reset error"),
+			},
+		},
+	})
+
+	if !mt.failed() {
+		t.Fatal("test should've failed")
+	}
+}
+
 func TestComposeAggregateTestCheckFunc(t *testing.T) {
 	check1 := func(s *terraform.State) error {
 		return errors.New("Error 1")
diff --git a/helper/schema/provider.go b/helper/schema/provider.go
index 5b50d54a1f..d52d2f5f06 100644
--- a/helper/schema/provider.go
+++ b/helper/schema/provider.go
@@ -50,8 +50,15 @@ type Provider struct {
 	// See the ConfigureFunc documentation for more information.
 	ConfigureFunc ConfigureFunc

+	// MetaReset is called by TestReset to reset any state stored in the meta
+	// interface. This is especially important if the StopContext is stored by
+	// the provider.
+	MetaReset func() error
+
 	meta interface{}

+	// a mutex is required because TestReset can directly replace the stopCtx
+	stopMu        sync.Mutex
 	stopCtx       context.Context
 	stopCtxCancel context.CancelFunc
 	stopOnce      sync.Once
@@ -124,20 +131,43 @@ func (p *Provider) Stopped() bool {

 // StopCh returns a channel that is closed once the provider is stopped.
 func (p *Provider) StopContext() context.Context {
 	p.stopOnce.Do(p.stopInit)
+
+	p.stopMu.Lock()
+	defer p.stopMu.Unlock()
+
 	return p.stopCtx
 }

 func (p *Provider) stopInit() {
+	p.stopMu.Lock()
+	defer p.stopMu.Unlock()
+
 	p.stopCtx, p.stopCtxCancel = context.WithCancel(context.Background())
 }

 // Stop implementation of terraform.ResourceProvider interface.
 func (p *Provider) Stop() error {
 	p.stopOnce.Do(p.stopInit)
+
+	p.stopMu.Lock()
+	defer p.stopMu.Unlock()
+
 	p.stopCtxCancel()
 	return nil
 }

+// TestReset resets any state stored in the Provider, and will call the
+// MetaReset function, if set, to reset any state stored in the meta.
+// This may be used to reset the schema.Provider at the start of a test, and is
+// automatically called by resource.Test.
+func (p *Provider) TestReset() error {
+	p.stopInit()
+	if p.MetaReset != nil {
+		return p.MetaReset()
+	}
+	return nil
+}
+
 // Input implementation of terraform.ResourceProvider interface.
func (p *Provider) Input( input terraform.UIInput, diff --git a/helper/schema/provider_test.go b/helper/schema/provider_test.go index ed5918844b..5b06c5e576 100644 --- a/helper/schema/provider_test.go +++ b/helper/schema/provider_test.go @@ -381,3 +381,29 @@ func TestProviderStop_stopFirst(t *testing.T) { t.Fatal("should be stopped") } } + +func TestProviderReset(t *testing.T) { + var p Provider + stopCtx := p.StopContext() + p.MetaReset = func() error { + stopCtx = p.StopContext() + return nil + } + + // cancel the current context + p.Stop() + + if err := p.TestReset(); err != nil { + t.Fatal(err) + } + + // the first context should have been replaced + if err := stopCtx.Err(); err != nil { + t.Fatal(err) + } + + // we should not get a canceled context here either + if err := p.StopContext().Err(); err != nil { + t.Fatal(err) + } +} diff --git a/helper/schema/resource_test.go b/helper/schema/resource_test.go index f98aa5c431..67dfaa4352 100644 --- a/helper/schema/resource_test.go +++ b/helper/schema/resource_test.go @@ -156,7 +156,7 @@ func TestResourceDiff_Timeout_diff(t *testing.T) { raw, err := config.NewRawConfig( map[string]interface{}{ "foo": 42, - "timeout": []map[string]interface{}{ + "timeouts": []map[string]interface{}{ map[string]interface{}{ "create": "2h", }}, diff --git a/helper/schema/resource_timeout.go b/helper/schema/resource_timeout.go index 908d3e4060..445819f0f9 100644 --- a/helper/schema/resource_timeout.go +++ b/helper/schema/resource_timeout.go @@ -10,6 +10,7 @@ import ( ) const TimeoutKey = "e2bfb730-ecaa-11e6-8f88-34363bc7c4c0" +const TimeoutsConfigKey = "timeouts" const ( TimeoutCreate = "create" @@ -60,7 +61,7 @@ func (t *ResourceTimeout) ConfigDecode(s *Resource, c *terraform.ResourceConfig) *t = *raw.(*ResourceTimeout) } - if raw, ok := c.Config["timeout"]; ok { + if raw, ok := c.Config[TimeoutsConfigKey]; ok { if configTimeouts, ok := raw.([]map[string]interface{}); ok { for _, timeoutValues := range configTimeouts { // loop through each Timeout given in the configuration and validate they diff --git a/helper/schema/resource_timeout_test.go b/helper/schema/resource_timeout_test.go index 6e6b2604ac..ad036600b5 100644 --- a/helper/schema/resource_timeout_test.go +++ b/helper/schema/resource_timeout_test.go @@ -63,8 +63,8 @@ func TestResourceTimeout_ConfigDecode_badkey(t *testing.T) { raw, err := config.NewRawConfig( map[string]interface{}{ - "foo": "bar", - "timeout": c.Config, + "foo": "bar", + TimeoutsConfigKey: c.Config, }) if err != nil { t.Fatalf("err: %s", err) @@ -104,7 +104,7 @@ func TestResourceTimeout_ConfigDecode(t *testing.T) { raw, err := config.NewRawConfig( map[string]interface{}{ "foo": "bar", - "timeout": []map[string]interface{}{ + TimeoutsConfigKey: []map[string]interface{}{ map[string]interface{}{ "create": "2m", }, diff --git a/helper/schema/schema.go b/helper/schema/schema.go index 05d21c7ff1..9f103d1857 100644 --- a/helper/schema/schema.go +++ b/helper/schema/schema.go @@ -477,7 +477,9 @@ func (m schemaMap) Input( // Skip things that don't require config, if that is even valid // for a provider schema. - if !v.Required && !v.Optional { + // Required XOR Optional must always be true to validate, so we only + // need to check one. 
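+		// With that guarantee, skipping every Optional field leaves exactly
+		// the Required fields to prompt for.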
+		if v.Optional {
 			continue
 		}

@@ -1262,8 +1264,15 @@ func (m schemaMap) validateMap(
 		return nil, []error{fmt.Errorf("%s: should be a map", k)}
 	}

-	// If it is not a slice, it is valid
+	// If it is not a slice, validate directly
 	if rawV.Kind() != reflect.Slice {
+		mapIface := rawV.Interface()
+		if _, errs := validateMapValues(k, mapIface.(map[string]interface{}), schema); len(errs) > 0 {
+			return nil, errs
+		}
+		if schema.ValidateFunc != nil {
+			return schema.ValidateFunc(mapIface, k)
+		}
 		return nil, nil
 	}

@@ -1279,6 +1288,10 @@ func (m schemaMap) validateMap(
 			return nil, []error{fmt.Errorf(
 				"%s: should be a map", k)}
 		}
+		mapIface := v.Interface()
+		if _, errs := validateMapValues(k, mapIface.(map[string]interface{}), schema); len(errs) > 0 {
+			return nil, errs
+		}
 	}

 	if schema.ValidateFunc != nil {
@@ -1295,6 +1308,67 @@ func (m schemaMap) validateMap(
 	return nil, nil
 }

+func validateMapValues(k string, m map[string]interface{}, schema *Schema) ([]string, []error) {
+	for key, raw := range m {
+		valueType, err := getValueType(k, schema)
+		if err != nil {
+			return nil, []error{err}
+		}
+
+		switch valueType {
+		case TypeBool:
+			var n bool
+			if err := mapstructure.WeakDecode(raw, &n); err != nil {
+				return nil, []error{fmt.Errorf("%s (%s): %s", k, key, err)}
+			}
+		case TypeInt:
+			var n int
+			if err := mapstructure.WeakDecode(raw, &n); err != nil {
+				return nil, []error{fmt.Errorf("%s (%s): %s", k, key, err)}
+			}
+		case TypeFloat:
+			var n float64
+			if err := mapstructure.WeakDecode(raw, &n); err != nil {
+				return nil, []error{fmt.Errorf("%s (%s): %s", k, key, err)}
+			}
+		case TypeString:
+			var n string
+			if err := mapstructure.WeakDecode(raw, &n); err != nil {
+				return nil, []error{fmt.Errorf("%s (%s): %s", k, key, err)}
+			}
+		default:
+			panic(fmt.Sprintf("Unknown validation type: %#v", schema.Type))
+		}
+	}
+	return nil, nil
+}
+
+func getValueType(k string, schema *Schema) (ValueType, error) {
+	if schema.Elem == nil {
+		return TypeString, nil
+	}
+	if vt, ok := schema.Elem.(ValueType); ok {
+		return vt, nil
+	}
+
+	if s, ok := schema.Elem.(*Schema); ok {
+		if s.Elem == nil {
+			return TypeString, nil
+		}
+		if vt, ok := s.Elem.(ValueType); ok {
+			return vt, nil
+		}
+	}
+
+	if _, ok := schema.Elem.(*Resource); ok {
+		// TODO: We don't actually support this (yet)
+		// but silently pass the validation, until we decide
+		// how to handle nested structures in maps
+		return TypeString, nil
+	}
+	return 0, fmt.Errorf("%s: unexpected map value type: %#v", k, schema.Elem)
+}
+
 func (m schemaMap) validateObject(
 	k string,
 	schema map[string]*Schema,
@@ -1327,7 +1401,7 @@ func (m schemaMap) validateObject(
 	if m, ok := raw.(map[string]interface{}); ok {
 		for subk, _ := range m {
 			if _, ok := schema[subk]; !ok {
-				if subk == "timeout" {
+				if subk == TimeoutsConfigKey {
 					continue
 				}
 				es = append(es, fmt.Errorf(
@@ -1372,28 +1446,28 @@ func (m schemaMap) validatePrimitive(
 		// Verify that we can parse this as the correct type
 		var n bool
 		if err := mapstructure.WeakDecode(raw, &n); err != nil {
-			return nil, []error{err}
+			return nil, []error{fmt.Errorf("%s: %s", k, err)}
 		}
 		decoded = n
 	case TypeInt:
 		// Verify that we can parse this as an int
 		var n int
 		if err := mapstructure.WeakDecode(raw, &n); err != nil {
-			return nil, []error{err}
+			return nil, []error{fmt.Errorf("%s: %s", k, err)}
 		}
 		decoded = n
 	case TypeFloat:
 		// Verify that we can parse this as a float
 		var n float64
 		if err := mapstructure.WeakDecode(raw, &n); err != nil {
-			return nil, []error{err}
+			return nil, []error{fmt.Errorf("%s: %s", k, err)}
 		}
 		decoded = n
case TypeString: // Verify that we can parse this as a string var n string if err := mapstructure.WeakDecode(raw, &n); err != nil { - return nil, []error{err} + return nil, []error{fmt.Errorf("%s: %s", k, err)} } decoded = n default: diff --git a/helper/schema/schema_test.go b/helper/schema/schema_test.go index 4119b7ff58..2d79341b30 100644 --- a/helper/schema/schema_test.go +++ b/helper/schema/schema_test.go @@ -3173,7 +3173,7 @@ func TestSchemaMap_Input(t *testing.T) { * String decode */ - "uses input on optional field with no config": { + "no input on optional field with no config": { Schema: map[string]*Schema{ "availability_zone": &Schema{ Type: TypeString, @@ -3181,15 +3181,9 @@ func TestSchemaMap_Input(t *testing.T) { }, }, - Input: map[string]string{ - "availability_zone": "foo", - }, - - Result: map[string]interface{}{ - "availability_zone": "foo", - }, - - Err: false, + Input: map[string]string{}, + Result: map[string]interface{}{}, + Err: false, }, "input ignored when config has a value": { @@ -3276,7 +3270,7 @@ func TestSchemaMap_Input(t *testing.T) { DefaultFunc: func() (interface{}, error) { return nil, nil }, - Optional: true, + Required: true, }, }, @@ -3290,6 +3284,22 @@ func TestSchemaMap_Input(t *testing.T) { Err: false, }, + + "input not used when optional default function returns nil": { + Schema: map[string]*Schema{ + "availability_zone": &Schema{ + Type: TypeString, + DefaultFunc: func() (interface{}, error) { + return nil, nil + }, + Optional: true, + }, + }, + + Input: map[string]string{}, + Result: map[string]interface{}{}, + Err: false, + }, } for i, tc := range cases { @@ -4774,7 +4784,7 @@ func TestSchemaMap_Validate(t *testing.T) { Err: false, }, - "special timeout field": { + "special timeouts field": { Schema: map[string]*Schema{ "availability_zone": &Schema{ Type: TypeString, @@ -4785,54 +4795,181 @@ func TestSchemaMap_Validate(t *testing.T) { }, Config: map[string]interface{}{ - "timeout": "bar", + TimeoutsConfigKey: "bar", }, Err: false, }, + + "invalid bool field": { + Schema: map[string]*Schema{ + "bool_field": { + Type: TypeBool, + Optional: true, + }, + }, + Config: map[string]interface{}{ + "bool_field": "abcdef", + }, + Err: true, + }, + "invalid integer field": { + Schema: map[string]*Schema{ + "integer_field": { + Type: TypeInt, + Optional: true, + }, + }, + Config: map[string]interface{}{ + "integer_field": "abcdef", + }, + Err: true, + }, + "invalid float field": { + Schema: map[string]*Schema{ + "float_field": { + Type: TypeFloat, + Optional: true, + }, + }, + Config: map[string]interface{}{ + "float_field": "abcdef", + }, + Err: true, + }, + + // Invalid map values + "invalid bool map value": { + Schema: map[string]*Schema{ + "boolMap": &Schema{ + Type: TypeMap, + Elem: TypeBool, + Optional: true, + }, + }, + Config: map[string]interface{}{ + "boolMap": map[string]interface{}{ + "boolField": "notbool", + }, + }, + Err: true, + }, + "invalid int map value": { + Schema: map[string]*Schema{ + "intMap": &Schema{ + Type: TypeMap, + Elem: TypeInt, + Optional: true, + }, + }, + Config: map[string]interface{}{ + "intMap": map[string]interface{}{ + "intField": "notInt", + }, + }, + Err: true, + }, + "invalid float map value": { + Schema: map[string]*Schema{ + "floatMap": &Schema{ + Type: TypeMap, + Elem: TypeFloat, + Optional: true, + }, + }, + Config: map[string]interface{}{ + "floatMap": map[string]interface{}{ + "floatField": "notFloat", + }, + }, + Err: true, + }, + + "map with positive validate function": { + Schema: map[string]*Schema{ + 
"floatInt": &Schema{ + Type: TypeMap, + Elem: TypeInt, + Optional: true, + ValidateFunc: func(v interface{}, k string) (ws []string, es []error) { + return + }, + }, + }, + Config: map[string]interface{}{ + "floatInt": map[string]interface{}{ + "rightAnswer": "42", + "tooMuch": "43", + }, + }, + Err: false, + }, + "map with negative validate function": { + Schema: map[string]*Schema{ + "floatInt": &Schema{ + Type: TypeMap, + Elem: TypeInt, + Optional: true, + ValidateFunc: func(v interface{}, k string) (ws []string, es []error) { + es = append(es, fmt.Errorf("this is not fine")) + return + }, + }, + }, + Config: map[string]interface{}{ + "floatInt": map[string]interface{}{ + "rightAnswer": "42", + "tooMuch": "43", + }, + }, + Err: true, + }, } for tn, tc := range cases { - c, err := config.NewRawConfig(tc.Config) - if err != nil { - t.Fatalf("err: %s", err) - } - if tc.Vars != nil { - vars := make(map[string]ast.Variable) - for k, v := range tc.Vars { - vars[k] = ast.Variable{Value: v, Type: ast.TypeString} - } - - if err := c.Interpolate(vars); err != nil { + t.Run(tn, func(t *testing.T) { + c, err := config.NewRawConfig(tc.Config) + if err != nil { t.Fatalf("err: %s", err) } - } + if tc.Vars != nil { + vars := make(map[string]ast.Variable) + for k, v := range tc.Vars { + vars[k] = ast.Variable{Value: v, Type: ast.TypeString} + } - ws, es := schemaMap(tc.Schema).Validate(terraform.NewResourceConfig(c)) - if len(es) > 0 != tc.Err { - if len(es) == 0 { - t.Errorf("%q: no errors", tn) + if err := c.Interpolate(vars); err != nil { + t.Fatalf("err: %s", err) + } } - for _, e := range es { - t.Errorf("%q: err: %s", tn, e) + ws, es := schemaMap(tc.Schema).Validate(terraform.NewResourceConfig(c)) + if len(es) > 0 != tc.Err { + if len(es) == 0 { + t.Errorf("%q: no errors", tn) + } + + for _, e := range es { + t.Errorf("%q: err: %s", tn, e) + } + + t.FailNow() } - t.FailNow() - } - - if !reflect.DeepEqual(ws, tc.Warnings) { - t.Fatalf("%q: warnings:\n\nexpected: %#v\ngot:%#v", tn, tc.Warnings, ws) - } - - if tc.Errors != nil { - sort.Sort(errorSort(es)) - sort.Sort(errorSort(tc.Errors)) - - if !reflect.DeepEqual(es, tc.Errors) { - t.Fatalf("%q: errors:\n\nexpected: %q\ngot: %q", tn, tc.Errors, es) + if !reflect.DeepEqual(ws, tc.Warnings) { + t.Fatalf("%q: warnings:\n\nexpected: %#v\ngot:%#v", tn, tc.Warnings, ws) } - } + + if tc.Errors != nil { + sort.Sort(errorSort(es)) + sort.Sort(errorSort(tc.Errors)) + + if !reflect.DeepEqual(es, tc.Errors) { + t.Fatalf("%q: errors:\n\nexpected: %q\ngot: %q", tn, tc.Errors, es) + } + } + }) + } } diff --git a/helper/validation/validation.go b/helper/validation/validation.go index 484f7d7dae..82a9dec729 100644 --- a/helper/validation/validation.go +++ b/helper/validation/validation.go @@ -2,6 +2,7 @@ package validation import ( "fmt" + "net" "strings" "github.com/hashicorp/terraform/helper/schema" @@ -47,3 +48,53 @@ func StringInSlice(valid []string, ignoreCase bool) schema.SchemaValidateFunc { return } } + +// StringLenBetween returns a SchemaValidateFunc which tests if the provided value +// is of type string and has length between min and max (inclusive) +func StringLenBetween(min, max int) schema.SchemaValidateFunc { + return func(i interface{}, k string) (s []string, es []error) { + v, ok := i.(string) + if !ok { + es = append(es, fmt.Errorf("expected type of %s to be string", k)) + return + } + if len(v) < min || len(v) > max { + es = append(es, fmt.Errorf("expected length of %s to be in the range (%d - %d), got %s", k, min, max, v)) + } + return + } +} + 
+// CIDRNetwork returns a SchemaValidateFunc which tests if the provided value +// is of type string, is in valid CIDR network notation, and has significant bits between min and max (inclusive) +func CIDRNetwork(min, max int) schema.SchemaValidateFunc { + return func(i interface{}, k string) (s []string, es []error) { + v, ok := i.(string) + if !ok { + es = append(es, fmt.Errorf("expected type of %s to be string", k)) + return + } + + _, ipnet, err := net.ParseCIDR(v) + if err != nil { + es = append(es, fmt.Errorf( + "expected %s to contain a valid CIDR, got: %s with err: %s", k, v, err)) + return + } + + if ipnet == nil || v != ipnet.String() { + es = append(es, fmt.Errorf( + "expected %s to contain a valid network CIDR, expected %s, got %s", + k, ipnet, v)) + } + + sigbits, _ := ipnet.Mask.Size() + if sigbits < min || sigbits > max { + es = append(es, fmt.Errorf( + "expected %q to contain a network CIDR with between %d and %d significant bits, got: %d", + k, min, max, sigbits)) + } + + return + } +} diff --git a/helper/variables/flag_any.go b/helper/variables/flag_any.go new file mode 100644 index 0000000000..650324e434 --- /dev/null +++ b/helper/variables/flag_any.go @@ -0,0 +1,25 @@ +package variables + +import ( + "strings" +) + +// FlagAny is a flag.Value for parsing user variables in the format of +// 'key=value' OR a file path. 'key=value' is assumed if '=' is in the value. +// You cannot use a file path that contains an '='. +type FlagAny map[string]interface{} + +func (v *FlagAny) String() string { + return "" +} + +func (v *FlagAny) Set(raw string) error { + idx := strings.Index(raw, "=") + if idx >= 0 { + flag := (*Flag)(v) + return flag.Set(raw) + } + + flag := (*FlagFile)(v) + return flag.Set(raw) +} diff --git a/helper/variables/flag_any_test.go b/helper/variables/flag_any_test.go new file mode 100644 index 0000000000..8cf72fcad6 --- /dev/null +++ b/helper/variables/flag_any_test.go @@ -0,0 +1,299 @@ +package variables + +import ( + "flag" + "fmt" + "io/ioutil" + "reflect" + "testing" + + "github.com/davecgh/go-spew/spew" +) + +func TestFlagAny_impl(t *testing.T) { + var _ flag.Value = new(FlagAny) +} + +func TestFlagAny(t *testing.T) { + cases := []struct { + Input interface{} + Output map[string]interface{} + Error bool + }{ + { + "=value", + nil, + true, + }, + + { + " =value", + nil, + true, + }, + + { + "key=value", + map[string]interface{}{"key": "value"}, + false, + }, + + { + "key=", + map[string]interface{}{"key": ""}, + false, + }, + + { + "key=foo=bar", + map[string]interface{}{"key": "foo=bar"}, + false, + }, + + { + "key=false", + map[string]interface{}{"key": "false"}, + false, + }, + + { + "key =value", + map[string]interface{}{"key": "value"}, + false, + }, + + { + "key = value", + map[string]interface{}{"key": " value"}, + false, + }, + + { + `key = "value"`, + map[string]interface{}{"key": "value"}, + false, + }, + + { + "map.key=foo", + map[string]interface{}{"map.key": "foo"}, + false, + }, + + { + "key", + nil, + true, + }, + + { + `key=["hello", "world"]`, + map[string]interface{}{"key": []interface{}{"hello", "world"}}, + false, + }, + + { + `key={"hello" = "world", "foo" = "bar"}`, + map[string]interface{}{ + "key": map[string]interface{}{ + "hello": "world", + "foo": "bar", + }, + }, + false, + }, + + { + `key={"hello" = "world", "foo" = "bar"}\nkey2="invalid"`, + nil, + true, + }, + + { + "key=/path", + map[string]interface{}{"key": "/path"}, + false, + }, + + { + "key=1234.dkr.ecr.us-east-1.amazonaws.com/proj:abcdef", + 
map[string]interface{}{"key": "1234.dkr.ecr.us-east-1.amazonaws.com/proj:abcdef"}, + false, + }, + + // simple values that can parse as numbers should remain strings + { + "key=1", + map[string]interface{}{ + "key": "1", + }, + false, + }, + { + "key=1.0", + map[string]interface{}{ + "key": "1.0", + }, + false, + }, + { + "key=0x10", + map[string]interface{}{ + "key": "0x10", + }, + false, + }, + + // Test setting multiple times + { + []string{ + "foo=bar", + "bar=baz", + }, + map[string]interface{}{ + "foo": "bar", + "bar": "baz", + }, + false, + }, + + // Test map merging + { + []string{ + `foo={ foo = "bar" }`, + `foo={ bar = "baz" }`, + }, + map[string]interface{}{ + "foo": map[string]interface{}{ + "foo": "bar", + "bar": "baz", + }, + }, + false, + }, + } + + for i, tc := range cases { + t.Run(fmt.Sprintf("%d-%s", i, tc.Input), func(t *testing.T) { + var input []string + switch v := tc.Input.(type) { + case string: + input = []string{v} + case []string: + input = v + default: + t.Fatalf("bad input type: %T", tc.Input) + } + + f := new(FlagAny) + for i, single := range input { + err := f.Set(single) + + // Only check for expected errors on the final input + expected := tc.Error && i == len(input)-1 + if err != nil != expected { + t.Fatalf("bad error. Input: %#v\n\nError: %s", single, err) + } + } + + actual := map[string]interface{}(*f) + if !reflect.DeepEqual(actual, tc.Output) { + t.Fatalf("bad:\nexpected: %s\n\n got: %s\n", spew.Sdump(tc.Output), spew.Sdump(actual)) + } + }) + } +} + +func TestFlagAny_file(t *testing.T) { + inputLibucl := ` +foo = "bar" +` + inputMap := ` +foo = { + k = "v" +}` + + inputJson := `{ + "foo": "bar"}` + + cases := []struct { + Input interface{} + Output map[string]interface{} + Error bool + }{ + { + inputLibucl, + map[string]interface{}{"foo": "bar"}, + false, + }, + + { + inputJson, + map[string]interface{}{"foo": "bar"}, + false, + }, + + { + `map.key = "foo"`, + map[string]interface{}{"map.key": "foo"}, + false, + }, + + { + inputMap, + map[string]interface{}{ + "foo": map[string]interface{}{ + "k": "v", + }, + }, + false, + }, + + { + []string{ + `foo = { "k" = "v"}`, + `foo = { "j" = "v" }`, + }, + map[string]interface{}{ + "foo": map[string]interface{}{ + "k": "v", + "j": "v", + }, + }, + false, + }, + } + + path := testTempFile(t) + + for i, tc := range cases { + t.Run(fmt.Sprintf("%d", i), func(t *testing.T) { + var input []string + switch i := tc.Input.(type) { + case string: + input = []string{i} + case []string: + input = i + default: + t.Fatalf("bad input type: %T", i) + } + + f := new(FlagAny) + for _, input := range input { + if err := ioutil.WriteFile(path, []byte(input), 0644); err != nil { + t.Fatalf("err: %s", err) + } + + err := f.Set(path) + if err != nil != tc.Error { + t.Fatalf("bad error. Input: %#v, err: %s", input, err) + } + } + + actual := map[string]interface{}(*f) + if !reflect.DeepEqual(actual, tc.Output) { + t.Fatalf("bad: %#v", actual) + } + }) + } +} diff --git a/state/lock.go b/state/lock.go new file mode 100644 index 0000000000..b3a03b3ef2 --- /dev/null +++ b/state/lock.go @@ -0,0 +1,38 @@ +package state + +import ( + "github.com/hashicorp/terraform/terraform" +) + +// LockDisabled implements State and Locker but disables state locking. +// If State doesn't support locking, this is a no-op. This is useful for +// easily disabling locking of an existing state or for tests. 
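+// For example, wrapping an existing state as &LockDisabled{Inner: s} yields a
+// state whose Lock and Unlock are no-ops while all other operations pass
+// through to s.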
+type LockDisabled struct { + // We can't embed State directly since Go dislikes that a field is + // State and State interface has a method State + Inner State +} + +func (s *LockDisabled) State() *terraform.State { + return s.Inner.State() +} + +func (s *LockDisabled) WriteState(v *terraform.State) error { + return s.Inner.WriteState(v) +} + +func (s *LockDisabled) RefreshState() error { + return s.Inner.RefreshState() +} + +func (s *LockDisabled) PersistState() error { + return s.Inner.PersistState() +} + +func (s *LockDisabled) Lock(info *LockInfo) (string, error) { + return "", nil +} + +func (s *LockDisabled) Unlock(id string) error { + return nil +} diff --git a/state/lock_test.go b/state/lock_test.go new file mode 100644 index 0000000000..d7246ac9d8 --- /dev/null +++ b/state/lock_test.go @@ -0,0 +1,10 @@ +package state + +import ( + "testing" +) + +func TestLockDisabled_impl(t *testing.T) { + var _ State = new(LockDisabled) + var _ Locker = new(LockDisabled) +} diff --git a/state/remote/remote.go b/state/remote/remote.go index 0b1ee5f7c7..b997032011 100644 --- a/state/remote/remote.go +++ b/state/remote/remote.go @@ -51,7 +51,6 @@ var BuiltinClients = map[string]Factory{ "gcs": gcsFactory, "http": httpFactory, "local": fileFactory, - "s3": s3Factory, "swift": swiftFactory, "manta": mantaFactory, } diff --git a/state/remote/s3_test.go b/state/remote/s3_test.go deleted file mode 100644 index 358c1a676c..0000000000 --- a/state/remote/s3_test.go +++ /dev/null @@ -1,238 +0,0 @@ -package remote - -import ( - "fmt" - "os" - "testing" - "time" - - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/dynamodb" - "github.com/aws/aws-sdk-go/service/s3" -) - -func TestS3Client_impl(t *testing.T) { - var _ Client = new(S3Client) - var _ ClientLocker = new(S3Client) -} - -func TestS3Factory(t *testing.T) { - // This test just instantiates the client. Shouldn't make any actual - // requests nor incur any costs. - - config := make(map[string]string) - - // Empty config is an error - _, err := s3Factory(config) - if err == nil { - t.Fatalf("Empty config should be error") - } - - config["region"] = "us-west-1" - config["bucket"] = "foo" - config["key"] = "bar" - config["encrypt"] = "1" - - // For this test we'll provide the credentials as config. The - // acceptance tests implicitly test passing credentials as - // environment variables. - config["access_key"] = "bazkey" - config["secret_key"] = "bazsecret" - - client, err := s3Factory(config) - if err != nil { - t.Fatalf("Error for valid config") - } - - s3Client := client.(*S3Client) - - if *s3Client.nativeClient.Config.Region != "us-west-1" { - t.Fatalf("Incorrect region was populated") - } - if s3Client.bucketName != "foo" { - t.Fatalf("Incorrect bucketName was populated") - } - if s3Client.keyName != "bar" { - t.Fatalf("Incorrect keyName was populated") - } - - credentials, err := s3Client.nativeClient.Config.Credentials.Get() - if err != nil { - t.Fatalf("Error when requesting credentials") - } - if credentials.AccessKeyID != "bazkey" { - t.Fatalf("Incorrect Access Key Id was populated") - } - if credentials.SecretAccessKey != "bazsecret" { - t.Fatalf("Incorrect Secret Access Key was populated") - } -} - -func TestS3Client(t *testing.T) { - // This test creates a bucket in S3 and populates it. - // It may incur costs, so it will only run if AWS credential environment - // variables are present. 
- - accessKeyId := os.Getenv("AWS_ACCESS_KEY_ID") - if accessKeyId == "" { - t.Skipf("skipping; AWS_ACCESS_KEY_ID must be set") - } - - regionName := os.Getenv("AWS_DEFAULT_REGION") - if regionName == "" { - regionName = "us-west-2" - } - - bucketName := fmt.Sprintf("terraform-remote-s3-test-%x", time.Now().Unix()) - keyName := "testState" - testData := []byte(`testing data`) - - config := make(map[string]string) - config["region"] = regionName - config["bucket"] = bucketName - config["key"] = keyName - config["encrypt"] = "1" - - client, err := s3Factory(config) - if err != nil { - t.Fatalf("Error for valid config") - } - - s3Client := client.(*S3Client) - nativeClient := s3Client.nativeClient - - createBucketReq := &s3.CreateBucketInput{ - Bucket: &bucketName, - } - - // Be clear about what we're doing in case the user needs to clean - // this up later. - t.Logf("Creating S3 bucket %s in %s", bucketName, regionName) - _, err = nativeClient.CreateBucket(createBucketReq) - if err != nil { - t.Skipf("Failed to create test S3 bucket, so skipping") - } - - // Ensure we can perform a PUT request with the encryption header - err = s3Client.Put(testData) - if err != nil { - t.Logf("WARNING: Failed to send test data to S3 bucket. (error was %s)", err) - } - - defer func() { - deleteBucketReq := &s3.DeleteBucketInput{ - Bucket: &bucketName, - } - - _, err := nativeClient.DeleteBucket(deleteBucketReq) - if err != nil { - t.Logf("WARNING: Failed to delete the test S3 bucket. It may have been left in your AWS account and may incur storage charges. (error was %s)", err) - } - }() - - testClient(t, client) -} - -func TestS3ClientLocks(t *testing.T) { - // This test creates a DynamoDB table. - // It may incur costs, so it will only run if AWS credential environment - // variables are present. - - accessKeyId := os.Getenv("AWS_ACCESS_KEY_ID") - if accessKeyId == "" { - t.Skipf("skipping; AWS_ACCESS_KEY_ID must be set") - } - - regionName := os.Getenv("AWS_DEFAULT_REGION") - if regionName == "" { - regionName = "us-west-2" - } - - bucketName := fmt.Sprintf("terraform-remote-s3-lock-%x", time.Now().Unix()) - keyName := "testState" - - config := make(map[string]string) - config["region"] = regionName - config["bucket"] = bucketName - config["key"] = keyName - config["encrypt"] = "1" - config["lock_table"] = bucketName - - client, err := s3Factory(config) - if err != nil { - t.Fatalf("Error for valid config") - } - - s3Client := client.(*S3Client) - - // set this up before we try to crate the table, in case we timeout creating it. - defer deleteDynaboDBTable(t, s3Client, bucketName) - - createDynamoDBTable(t, s3Client, bucketName) - - TestRemoteLocks(t, client, client) -} - -// create the dynamoDB table, and wait until we can query it. 
-func createDynamoDBTable(t *testing.T, c *S3Client, tableName string) { - createInput := &dynamodb.CreateTableInput{ - AttributeDefinitions: []*dynamodb.AttributeDefinition{ - { - AttributeName: aws.String("LockID"), - AttributeType: aws.String("S"), - }, - }, - KeySchema: []*dynamodb.KeySchemaElement{ - { - AttributeName: aws.String("LockID"), - KeyType: aws.String("HASH"), - }, - }, - ProvisionedThroughput: &dynamodb.ProvisionedThroughput{ - ReadCapacityUnits: aws.Int64(5), - WriteCapacityUnits: aws.Int64(5), - }, - TableName: aws.String(tableName), - } - - _, err := c.dynClient.CreateTable(createInput) - if err != nil { - t.Fatal(err) - } - - // now wait until it's ACTIVE - start := time.Now() - time.Sleep(time.Second) - - describeInput := &dynamodb.DescribeTableInput{ - TableName: aws.String(tableName), - } - - for { - resp, err := c.dynClient.DescribeTable(describeInput) - if err != nil { - t.Fatal(err) - } - - if *resp.Table.TableStatus == "ACTIVE" { - return - } - - if time.Since(start) > time.Minute { - t.Fatalf("timed out creating DynamoDB table %s", tableName) - } - - time.Sleep(3 * time.Second) - } - -} - -func deleteDynaboDBTable(t *testing.T, c *S3Client, tableName string) { - params := &dynamodb.DeleteTableInput{ - TableName: aws.String(tableName), - } - _, err := c.dynClient.DeleteTable(params) - if err != nil { - t.Logf("WARNING: Failed to delete the test DynamoDB table %q. It has been left in your AWS account and may incur charges. (error was %s)", tableName, err) - } -} diff --git a/terraform/context.go b/terraform/context.go index 3c4e4b62ed..15528beed8 100644 --- a/terraform/context.go +++ b/terraform/context.go @@ -49,6 +49,7 @@ var ( // ContextOpts are the user-configurable options to create a context with // NewContext. type ContextOpts struct { + Meta *ContextMeta Destroy bool Diff *Diff Hooks []Hook @@ -65,6 +66,14 @@ type ContextOpts struct { UIInput UIInput } +// ContextMeta is metadata about the running context. This is information +// that this package or structure cannot determine on its own but exposes +// into Terraform in various ways. This must be provided by the Context +// initializer. +type ContextMeta struct { + Env string // Env is the state environment +} + // Context represents all the context that Terraform needs in order to // perform operations on infrastructure. This structure is built using // NewContext. See the documentation for that. @@ -80,6 +89,7 @@ type Context struct { diff *Diff diffLock sync.RWMutex hooks []Hook + meta *ContextMeta module *module.Tree sh *stopHook shadow bool @@ -178,6 +188,7 @@ func NewContext(opts *ContextOpts) (*Context, error) { destroy: opts.Destroy, diff: diff, hooks: hooks, + meta: opts.Meta, module: opts.Module, shadow: opts.Shadow, state: state, @@ -313,6 +324,7 @@ func (c *Context) Interpolater() *Interpolater { var stateLock sync.RWMutex return &Interpolater{ Operation: walkApply, + Meta: c.meta, Module: c.module, State: c.state.DeepCopy(), StateLock: &stateLock, @@ -781,15 +793,14 @@ func (c *Context) walk( } // Watch for a stop so we can call the provider Stop() API. - doneCh := make(chan struct{}) - stopCh := c.runContext.Done() - go c.watchStop(walker, doneCh, stopCh) + watchStop, watchWait := c.watchStop(walker) // Walk the real graph, this will block until it completes realErr := graph.Walk(walker) - // Close the done channel so the watcher stops - close(doneCh) + // Close the channel so the watcher stops, and wait for it to return. 
+ close(watchStop) + <-watchWait // If we have a shadow graph and we interrupted the real graph, then // we just close the shadow and never verify it. It is non-trivial to @@ -878,52 +889,74 @@ func (c *Context) walk( return walker, realErr } -func (c *Context) watchStop(walker *ContextGraphWalker, doneCh, stopCh <-chan struct{}) { - // Wait for a stop or completion - select { - case <-stopCh: - // Stop was triggered. Fall out of the select - case <-doneCh: - // Done, just exit completely - return - } +// watchStop immediately returns a `stop` and a `wait` chan after dispatching +// the watchStop goroutine. This will watch the runContext for cancellation and +// stop the providers accordingly. When the watch is no longer needed, the +// `stop` chan should be closed before waiting on the `wait` chan. +// The `wait` chan is important, because without synchronizing with the end of +// the watchStop goroutine, the runContext may also be closed during the select +// incorrectly causing providers to be stopped. Even if the graph walk is done +// at that point, stopping a provider permanently cancels its StopContext which +// can cause later actions to fail. +func (c *Context) watchStop(walker *ContextGraphWalker) (chan struct{}, <-chan struct{}) { + stop := make(chan struct{}) + wait := make(chan struct{}) - // If we're here, we're stopped, trigger the call. + // get the runContext cancellation channel now, because releaseRun will + // write to the runContext field. + done := c.runContext.Done() - { - // Copy the providers so that a misbehaved blocking Stop doesn't - // completely hang Terraform. - walker.providerLock.Lock() - ps := make([]ResourceProvider, 0, len(walker.providerCache)) - for _, p := range walker.providerCache { - ps = append(ps, p) + go func() { + defer close(wait) + // Wait for a stop or completion + select { + case <-done: + // done means the context was canceled, so we need to try and stop + // providers. + case <-stop: + // our own stop channel was closed. + return } - defer walker.providerLock.Unlock() - for _, p := range ps { - // We ignore the error for now since there isn't any reasonable - // action to take if there is an error here, since the stop is still - // advisory: Terraform will exit once the graph node completes. - p.Stop() - } - } + // If we're here, we're stopped, trigger the call. - { - // Call stop on all the provisioners - walker.provisionerLock.Lock() - ps := make([]ResourceProvisioner, 0, len(walker.provisionerCache)) - for _, p := range walker.provisionerCache { - ps = append(ps, p) - } - defer walker.provisionerLock.Unlock() + { + // Copy the providers so that a misbehaved blocking Stop doesn't + // completely hang Terraform. + walker.providerLock.Lock() + ps := make([]ResourceProvider, 0, len(walker.providerCache)) + for _, p := range walker.providerCache { + ps = append(ps, p) + } + defer walker.providerLock.Unlock() - for _, p := range ps { - // We ignore the error for now since there isn't any reasonable - // action to take if there is an error here, since the stop is still - // advisory: Terraform will exit once the graph node completes. - p.Stop() + for _, p := range ps { + // We ignore the error for now since there isn't any reasonable + // action to take if there is an error here, since the stop is still + // advisory: Terraform will exit once the graph node completes. 
+				p.Stop()
+			}
+		}
+
+		{
+			// Call stop on all the provisioners
+			walker.provisionerLock.Lock()
+			ps := make([]ResourceProvisioner, 0, len(walker.provisionerCache))
+			for _, p := range walker.provisionerCache {
+				ps = append(ps, p)
+			}
+			defer walker.provisionerLock.Unlock()
+
+			for _, p := range ps {
+				// We ignore the error for now since there isn't any reasonable
+				// action to take if there is an error here, since the stop is still
+				// advisory: Terraform will exit once the graph node completes.
+				p.Stop()
+			}
+		}
+	}()
+
+	return stop, wait
 }
 
 // parseVariableAsHCL parses the value of a single variable as would have been specified
diff --git a/terraform/context_apply_test.go b/terraform/context_apply_test.go
index 04197b97dc..afe2f85a71 100644
--- a/terraform/context_apply_test.go
+++ b/terraform/context_apply_test.go
@@ -1740,6 +1740,7 @@ func TestContext2Apply_cancel(t *testing.T) {
 		if ctx.sh.Stopped() {
 			break
 		}
+		time.Sleep(10 * time.Millisecond)
 	}
 }
 
@@ -8069,3 +8070,33 @@ func TestContext2Apply_dataDependsOn(t *testing.T) {
 		t.Fatalf("bad:\n%s", strings.TrimSpace(state.String()))
 	}
 }
+
+func TestContext2Apply_terraformEnv(t *testing.T) {
+	m := testModule(t, "apply-terraform-env")
+	p := testProvider("aws")
+	p.ApplyFn = testApplyFn
+	p.DiffFn = testDiffFn
+
+	ctx := testContext2(t, &ContextOpts{
+		Meta:   &ContextMeta{Env: "foo"},
+		Module: m,
+		Providers: map[string]ResourceProviderFactory{
+			"aws": testProviderFuncFixed(p),
+		},
+	})
+
+	if _, err := ctx.Plan(); err != nil {
+		t.Fatalf("err: %s", err)
+	}
+
+	state, err := ctx.Apply()
+	if err != nil {
+		t.Fatalf("err: %s", err)
+	}
+
+	actual := state.RootModule().Outputs["output"]
+	expected := "foo"
+	if actual == nil || actual.Value != expected {
+		t.Fatalf("bad: \n%s", actual)
+	}
+}
diff --git a/terraform/context_input_test.go b/terraform/context_input_test.go
index f5e0f47a24..5e3434bd80 100644
--- a/terraform/context_input_test.go
+++ b/terraform/context_input_test.go
@@ -658,3 +658,30 @@ func TestContext2Input_hcl(t *testing.T) {
 		t.Fatalf("bad: \n%s", actualStr)
 	}
 }
+
+// a list interpolation in a submodule used to fail to interpolate the count variable
+func TestContext2Input_submoduleTriggersInvalidCount(t *testing.T) {
+	input := new(MockUIInput)
+	m := testModule(t, "input-submodule-count")
+	p := testProvider("aws")
+	p.ApplyFn = testApplyFn
+	p.DiffFn = testDiffFn
+	ctx := testContext2(t, &ContextOpts{
+		Module: m,
+		Providers: map[string]ResourceProviderFactory{
+			"aws": testProviderFuncFixed(p),
+		},
+		UIInput: input,
+	})
+
+	p.InputFn = func(i UIInput, c *ResourceConfig) (*ResourceConfig, error) {
+		return c, nil
+	}
+	p.ConfigureFn = func(c *ResourceConfig) error {
+		return nil
+	}
+
+	if err := ctx.Input(InputModeStd); err != nil {
+		t.Fatalf("err: %s", err)
+	}
+}
diff --git a/terraform/context_plan_test.go b/terraform/context_plan_test.go
index bf3ff4f415..7064f64655 100644
--- a/terraform/context_plan_test.go
+++ b/terraform/context_plan_test.go
@@ -3095,3 +3095,54 @@ func TestContext2Plan_listOrder(t *testing.T) {
 		t.Fatal("aws_instance.a and aws_instance.b diffs should match:\n", plan)
 	}
 }
+
+// Make sure ignore-changes doesn't interfere with set/list/map diffs.
+// If a resource was being replaced by a RequiresNew attribute that gets
+// ignored, we need to filter the diff properly so the resource is updated
+// rather than replaced.
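// Editorial aside (illustrative sketch, not part of this patch): the test
// below exercises exactly this filtering, and the key step is treating a
// flatmapped set, list or map as a single unit. A container is identified by
// its count key ("lst.#" for lists and sets, "tags.%" for maps), and every
// sibling key under that prefix is kept or dropped together -- the same
// two-pass grouping performed by groupContainers in eval_diff.go further
// down in this diff.
package main

import (
	"fmt"
	"regexp"
	"sort"
	"strings"
)

// multiVal mirrors the package-level regexp introduced in diff.go.
var multiVal = regexp.MustCompile(`\.(#|%)$`)

func main() {
	keys := []string{"lst.#", "lst.0", "lst.1", "set.#", "set.0.a", "user_data"}

	// Pass 1: find each container by its count key ("lst.#" -> prefix "lst.").
	containers := map[string][]string{}
	for _, k := range keys {
		if multiVal.MatchString(k) {
			containers[k[:len(k)-1]] = nil
		}
	}

	// Pass 2: collect every key sharing a container prefix, count key included.
	for prefix := range containers {
		for _, k := range keys {
			if strings.HasPrefix(k, prefix) {
				containers[prefix] = append(containers[prefix], k)
			}
		}
	}

	prefixes := make([]string, 0, len(containers))
	for p := range containers {
		prefixes = append(prefixes, p)
	}
	sort.Strings(prefixes)
	for _, p := range prefixes {
		fmt.Println(p, containers[p])
	}
	// Output:
	// lst. [lst.# lst.0 lst.1]
	// set. [set.# set.0.a]
}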
+func TestContext2Plan_ignoreChangesWithFlatmaps(t *testing.T) { + m := testModule(t, "plan-ignore-changes-with-flatmaps") + p := testProvider("aws") + p.DiffFn = testDiffFn + s := &State{ + Modules: []*ModuleState{ + &ModuleState{ + Path: rootModulePath, + Resources: map[string]*ResourceState{ + "aws_instance.foo": &ResourceState{ + Type: "aws_instance", + Primary: &InstanceState{ + ID: "bar", + Attributes: map[string]string{ + "user_data": "x", + "require_new": "", + "set.#": "1", + "set.0.a": "1", + "lst.#": "1", + "lst.0": "j", + }, + }, + }, + }, + }, + }, + } + ctx := testContext2(t, &ContextOpts{ + Module: m, + Providers: map[string]ResourceProviderFactory{ + "aws": testProviderFuncFixed(p), + }, + State: s, + }) + + plan, err := ctx.Plan() + if err != nil { + t.Fatalf("err: %s", err) + } + + actual := strings.TrimSpace(plan.Diff.String()) + expected := strings.TrimSpace(testTFPlanDiffIgnoreChangesWithFlatmaps) + if actual != expected { + t.Fatalf("bad:\n%s\n\nexpected\n\n%s", actual, expected) + } +} diff --git a/terraform/context_refresh_test.go b/terraform/context_refresh_test.go index 7c00cf4f45..b29e63679d 100644 --- a/terraform/context_refresh_test.go +++ b/terraform/context_refresh_test.go @@ -60,6 +60,32 @@ func TestContext2Refresh(t *testing.T) { } } +func TestContext2Refresh_dataComputedModuleVar(t *testing.T) { + p := testProvider("aws") + m := testModule(t, "refresh-data-module-var") + ctx := testContext2(t, &ContextOpts{ + Module: m, + Providers: map[string]ResourceProviderFactory{ + "aws": testProviderFuncFixed(p), + }, + }) + + p.RefreshFn = nil + p.RefreshReturn = &InstanceState{ + ID: "foo", + } + + s, err := ctx.Refresh() + if err != nil { + t.Fatalf("err: %s", err) + } + + checkStateString(t, s, ` + +module.child: + `) +} + func TestContext2Refresh_targeted(t *testing.T) { p := testProvider("aws") m := testModule(t, "refresh-targeted") diff --git a/terraform/context_test.go b/terraform/context_test.go index 91babd75fc..3534e9aa36 100644 --- a/terraform/context_test.go +++ b/terraform/context_test.go @@ -262,6 +262,11 @@ func testDiffFn( if _, ok := c.Raw["__"+k+"_requires_new"]; ok { attrDiff.RequiresNew = true } + + if attr, ok := s.Attributes[k]; ok { + attrDiff.Old = attr + } + diff.Attributes[k] = attrDiff } } diff --git a/terraform/diff.go b/terraform/diff.go index 5cf1b78ce1..a9fae6c2c8 100644 --- a/terraform/diff.go +++ b/terraform/diff.go @@ -25,6 +25,9 @@ const ( DiffDestroyCreate ) +// multiVal matches the index key to a flatmapped set, list or map +var multiVal = regexp.MustCompile(`\.(#|%)$`) + // Diff trackes the changes that are necessary to apply a configuration // to an existing infrastructure. type Diff struct { @@ -808,7 +811,6 @@ func (d *InstanceDiff) Same(d2 *InstanceDiff) (bool, string) { } // search for the suffix of the base of a [computed] map, list or set. 
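// Editorial aside (illustrative, not part of this patch): the package-level
// multiVal added above matches only the flatmap "count" keys -- ".#" for
// lists and sets, ".%" for maps -- whereas the inline regexp removed below
// also accepted ".~#", which older diffs used for a computed set whose size
// is not yet known. A quick check of what the new pattern matches:
package main

import (
	"fmt"
	"regexp"
)

func main() {
	multiVal := regexp.MustCompile(`\.(#|%)$`)
	for _, k := range []string{"lst.#", "tags.%", "set.~#", "lst.0"} {
		fmt.Printf("%-7s => %v\n", k, multiVal.MatchString(k))
	}
	// lst.#   => true
	// tags.%  => true
	// set.~#  => false
	// lst.0   => false
}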
-	multiVal := regexp.MustCompile(`\.(#|~#|%)$`)
 	match := multiVal.FindStringSubmatch(k)
 	if diffOld.NewComputed && len(match) == 2 {
diff --git a/terraform/eval_diff.go b/terraform/eval_diff.go
index 717d951053..6f09526a4c 100644
--- a/terraform/eval_diff.go
+++ b/terraform/eval_diff.go
@@ -152,6 +152,7 @@ func (n *EvalDiff) Eval(ctx EvalContext) (interface{}, error) {
 		})
 	}
 
+	// filter out ignored resources
 	if err := n.processIgnoreChanges(diff); err != nil {
 		return nil, err
 	}
@@ -190,72 +191,81 @@ func (n *EvalDiff) processIgnoreChanges(diff *InstanceDiff) error {
 		return nil
 	}
 
-	changeType := diff.ChangeType()
-
 	// If we're just creating the resource, we shouldn't alter the
 	// Diff at all
-	if changeType == DiffCreate {
+	if diff.ChangeType() == DiffCreate {
 		return nil
 	}
 
 	// If the resource has been tainted then we don't process ignore changes
 	// since we MUST recreate the entire resource.
-	if diff.DestroyTainted {
+	if diff.GetDestroyTainted() {
 		return nil
 	}
 
+	attrs := diff.CopyAttributes()
+
+	// get the complete set of keys we want to ignore
 	ignorableAttrKeys := make(map[string]bool)
 	for _, ignoredKey := range ignoreChanges {
-		for k := range diff.CopyAttributes() {
+		for k := range attrs {
 			if ignoredKey == "*" || strings.HasPrefix(k, ignoredKey) {
 				ignorableAttrKeys[k] = true
 			}
 		}
 	}
 
-	// If we are replacing the resource, then we expect there to be a bunch of
-	// extraneous attribute diffs we need to filter out for the other
-	// non-requires-new attributes going from "" -> "configval" or "" ->
-	// "<computed>". Filtering these out allows us to see if we might be able to
-	// skip this diff altogether.
-	if changeType == DiffDestroyCreate {
-		for k, v := range diff.CopyAttributes() {
+	// If the resource was being destroyed, check to see if we can ignore the
+	// reason for it being destroyed.
+	if diff.GetDestroy() {
+		for k, v := range attrs {
+			if k == "id" {
+				// id will always be changed if we intended to replace this instance
+				continue
+			}
 			if v.Empty() || v.NewComputed {
+				continue
+			}
+
+			// If any RequiresNew attribute isn't ignored, we need to keep the diff
+			// as-is to be able to replace the resource.
+			if v.RequiresNew && !ignorableAttrKeys[k] {
+				return nil
+			}
+		}
+
+		// Now that we know that we aren't replacing the instance, we can filter
+		// out all the empty and computed attributes. There may be a bunch of
+		// extraneous attribute diffs for the other non-requires-new attributes
+		// going from "" -> "configval" or "" -> "<computed>".
+		// We must make sure any flatmapped containers are filtered (or not) as a
+		// whole.
+		containers := groupContainers(diff)
+		keep := map[string]bool{}
+		for _, v := range containers {
+			if v.keepDiff() {
+				// At least one key has changes, so list all the sibling keys
+				// to keep in the diff.
+				for k := range v {
+					keep[k] = true
+				}
+			}
+		}
+
+		for k, v := range attrs {
+			if (v.Empty() || v.NewComputed) && !keep[k] {
 				ignorableAttrKeys[k] = true
 			}
 		}
-
-		// Here we emulate the implementation of diff.RequiresNew() with one small
-		// tweak, we ignore the "id" attribute diff that gets added by EvalDiff,
-		// since that was added in reaction to RequiresNew being true.
- requiresNewAfterIgnores := false - for k, v := range diff.CopyAttributes() { - if k == "id" { - continue - } - if _, ok := ignorableAttrKeys[k]; ok { - continue - } - if v.RequiresNew == true { - requiresNewAfterIgnores = true - } - } - - // If we still require resource replacement after ignores, we - // can't touch the diff, as all of the attributes will be - // required to process the replacement. - if requiresNewAfterIgnores { - return nil - } - - // Here we undo the two reactions to RequireNew in EvalDiff - the "id" - // attribute diff and the Destroy boolean field - log.Printf("[DEBUG] Removing 'id' diff and setting Destroy to false " + - "because after ignore_changes, this diff no longer requires replacement") - diff.DelAttribute("id") - diff.SetDestroy(false) } + // Here we undo the two reactions to RequireNew in EvalDiff - the "id" + // attribute diff and the Destroy boolean field + log.Printf("[DEBUG] Removing 'id' diff and setting Destroy to false " + + "because after ignore_changes, this diff no longer requires replacement") + diff.DelAttribute("id") + diff.SetDestroy(false) + // If we didn't hit any of our early exit conditions, we can filter the diff. for k := range ignorableAttrKeys { log.Printf("[DEBUG] [EvalIgnoreChanges] %s - Ignoring diff attribute: %s", @@ -266,6 +276,46 @@ func (n *EvalDiff) processIgnoreChanges(diff *InstanceDiff) error { return nil } +// a group of key-*ResourceAttrDiff pairs from the same flatmapped container +type flatAttrDiff map[string]*ResourceAttrDiff + +// we need to keep all keys if any of them have a diff +func (f flatAttrDiff) keepDiff() bool { + for _, v := range f { + if !v.Empty() && !v.NewComputed { + return true + } + } + return false +} + +// sets, lists and maps need to be compared for diff inclusion as a whole, so +// group the flatmapped keys together for easier comparison. +func groupContainers(d *InstanceDiff) map[string]flatAttrDiff { + isIndex := multiVal.MatchString + containers := map[string]flatAttrDiff{} + attrs := d.CopyAttributes() + // we need to loop once to find the index key + for k := range attrs { + if isIndex(k) { + // add the key, always including the final dot to fully qualify it + containers[k[:len(k)-1]] = flatAttrDiff{} + } + } + + // loop again to find all the sub keys + for prefix, values := range containers { + for k, attrDiff := range attrs { + // we include the index value as well, since it could be part of the diff + if strings.HasPrefix(k, prefix) { + values[k] = attrDiff + } + } + } + + return containers +} + // EvalDiffDestroy is an EvalNode implementation that returns a plain // destroy diff. type EvalDiffDestroy struct { diff --git a/terraform/graph_walk_context.go b/terraform/graph_walk_context.go index 19fd47ceb6..e63b460356 100644 --- a/terraform/graph_walk_context.go +++ b/terraform/graph_walk_context.go @@ -84,6 +84,7 @@ func (w *ContextGraphWalker) EnterPath(path []string) EvalContext { StateLock: &w.Context.stateLock, Interpolater: &Interpolater{ Operation: w.Operation, + Meta: w.Context.meta, Module: w.Context.module, State: w.Context.state, StateLock: &w.Context.stateLock, diff --git a/terraform/interpolate.go b/terraform/interpolate.go index 11d5a53dcf..0c5acaa354 100644 --- a/terraform/interpolate.go +++ b/terraform/interpolate.go @@ -25,6 +25,7 @@ const ( // for interpolations such as `aws_instance.foo.bar`. 
type Interpolater struct { Operation walkOperation + Meta *ContextMeta Module *module.Tree State *State StateLock *sync.RWMutex @@ -87,6 +88,8 @@ func (i *Interpolater) Values( err = i.valueSelfVar(scope, n, v, result) case *config.SimpleVariable: err = i.valueSimpleVar(scope, n, v, result) + case *config.TerraformVariable: + err = i.valueTerraformVar(scope, n, v, result) case *config.UserVariable: err = i.valueUserVar(scope, n, v, result) default: @@ -259,7 +262,7 @@ func (i *Interpolater) valueResourceVar( // If it truly is missing, we'll catch it on a later walk. // This applies only to graph nodes that interpolate during the // config walk, e.g. providers. - if i.Operation == walkInput { + if i.Operation == walkInput || i.Operation == walkRefresh { result[n] = unknownVariable() return nil } @@ -309,6 +312,25 @@ func (i *Interpolater) valueSimpleVar( n) } +func (i *Interpolater) valueTerraformVar( + scope *InterpolationScope, + n string, + v *config.TerraformVariable, + result map[string]ast.Variable) error { + if v.Field != "env" { + return fmt.Errorf( + "%s: only supported key for 'terraform.X' interpolations is 'env'", n) + } + + if i.Meta == nil { + return fmt.Errorf( + "%s: internal error: nil Meta. Please report a bug.", n) + } + + result[n] = ast.Variable{Type: ast.TypeString, Value: i.Meta.Env} + return nil +} + func (i *Interpolater) valueUserVar( scope *InterpolationScope, n string, @@ -518,6 +540,13 @@ func (i *Interpolater) computeResourceMultiVariable( unknownVariable := unknownVariable() + // If we're only looking for input, we don't need to expand a + // multi-variable. This prevents us from encountering things that should be + // known but aren't because the state has yet to be refreshed. + if i.Operation == walkInput { + return &unknownVariable, nil + } + // Get the information about this resource variable, and verify // that it exists and such. 
module, cr, err := i.resourceVariableInfo(scope, v) diff --git a/terraform/interpolate_test.go b/terraform/interpolate_test.go index bdadedc4fc..6f1d2c3448 100644 --- a/terraform/interpolate_test.go +++ b/terraform/interpolate_test.go @@ -893,6 +893,33 @@ func TestInterpolater_resourceUnknownVariableList(t *testing.T) { interfaceToVariableSwallowError([]interface{}{})) } +func TestInterpolater_terraformEnv(t *testing.T) { + i := &Interpolater{ + Meta: &ContextMeta{Env: "foo"}, + } + + scope := &InterpolationScope{ + Path: rootModulePath, + } + + testInterpolate(t, i, scope, "terraform.env", ast.Variable{ + Value: "foo", + Type: ast.TypeString, + }) +} + +func TestInterpolater_terraformInvalid(t *testing.T) { + i := &Interpolater{ + Meta: &ContextMeta{Env: "foo"}, + } + + scope := &InterpolationScope{ + Path: rootModulePath, + } + + testInterpolateErr(t, i, scope, "terraform.nope") +} + func testInterpolate( t *testing.T, i *Interpolater, scope *InterpolationScope, diff --git a/terraform/shadow_context.go b/terraform/shadow_context.go index 5f7914328e..5588af252c 100644 --- a/terraform/shadow_context.go +++ b/terraform/shadow_context.go @@ -46,6 +46,7 @@ func newShadowContext(c *Context) (*Context, *Context, Shadow) { destroy: c.destroy, diff: c.diff.DeepCopy(), hooks: nil, + meta: c.meta, module: c.module, state: c.state.DeepCopy(), targets: targetRaw.([]string), @@ -77,6 +78,7 @@ func newShadowContext(c *Context) (*Context, *Context, Shadow) { diff: c.diff, // diffLock - no copy hooks: c.hooks, + meta: c.meta, module: c.module, sh: c.sh, state: c.state, diff --git a/terraform/state.go b/terraform/state.go index 5fa74a79fb..4e5aa713f9 100644 --- a/terraform/state.go +++ b/terraform/state.go @@ -585,7 +585,7 @@ func (s *State) CompareAges(other *State) (StateAgeComparison, error) { } // SameLineage returns true only if the state given in argument belongs -// to the same "lineage" of states as the reciever. +// to the same "lineage" of states as the receiver. func (s *State) SameLineage(other *State) bool { s.Lock() defer s.Unlock() diff --git a/terraform/state_upgrade_v1_to_v2.go b/terraform/state_upgrade_v1_to_v2.go index 928cdba113..aa13cce803 100644 --- a/terraform/state_upgrade_v1_to_v2.go +++ b/terraform/state_upgrade_v1_to_v2.go @@ -64,10 +64,19 @@ func (old *moduleStateV1) upgradeToV2() (*ModuleState, error) { return nil, nil } - path, err := copystructure.Copy(old.Path) + pathRaw, err := copystructure.Copy(old.Path) if err != nil { return nil, fmt.Errorf("Error upgrading ModuleState V1: %v", err) } + path, ok := pathRaw.([]string) + if !ok { + return nil, fmt.Errorf("Error upgrading ModuleState V1: path is not a list of strings") + } + if len(path) == 0 { + // We found some V1 states with a nil path. Assume root and catch + // duplicate path errors later (as part of Validate). 
+ path = rootModulePath + } // Outputs needs upgrading to use the new structure outputs := make(map[string]*OutputState) @@ -94,7 +103,7 @@ func (old *moduleStateV1) upgradeToV2() (*ModuleState, error) { } return &ModuleState{ - Path: path.([]string), + Path: path, Outputs: outputs, Resources: resources, Dependencies: dependencies.([]string), diff --git a/terraform/state_upgrade_v1_to_v2_test.go b/terraform/state_upgrade_v1_to_v2_test.go new file mode 100644 index 0000000000..a660ae898e --- /dev/null +++ b/terraform/state_upgrade_v1_to_v2_test.go @@ -0,0 +1,22 @@ +package terraform + +import ( + "os" + "path/filepath" + "testing" +) + +func TestReadStateV1ToV2_noPath(t *testing.T) { + f, err := os.Open(filepath.Join(fixtureDir, "state-upgrade", "v1-to-v2-empty-path.tfstate")) + if err != nil { + t.Fatalf("err: %s", err) + } + defer f.Close() + + s, err := ReadState(f) + if err != nil { + t.Fatalf("err: %s", err) + } + + checkStateString(t, s, "") +} diff --git a/terraform/state_upgrade_v2_to_v3.go b/terraform/state_upgrade_v2_to_v3.go index 1fc458d150..e52d35fcd1 100644 --- a/terraform/state_upgrade_v2_to_v3.go +++ b/terraform/state_upgrade_v2_to_v3.go @@ -18,7 +18,7 @@ func upgradeStateV2ToV3(old *State) (*State, error) { // Ensure the copied version is v2 before attempting to upgrade if new.Version != 2 { - return nil, fmt.Errorf("Cannot appply v2->v3 state upgrade to " + + return nil, fmt.Errorf("Cannot apply v2->v3 state upgrade to " + "a state which is not version 2.") } diff --git a/terraform/terraform_test.go b/terraform/terraform_test.go index f1075b7f15..4a640cf1dd 100644 --- a/terraform/terraform_test.go +++ b/terraform/terraform_test.go @@ -1500,7 +1500,7 @@ DIFF: DESTROY/CREATE: aws_instance.foo type: "" => "aws_instance" - vars: "" => "foo" + vars: "foo" => "foo" STATE: @@ -1570,6 +1570,17 @@ aws_instance.foo: ami = ami-abcd1234 ` +const testTFPlanDiffIgnoreChangesWithFlatmaps = ` +UPDATE: aws_instance.foo + lst.#: "1" => "2" + lst.0: "j" => "j" + lst.1: "" => "k" + set.#: "1" => "1" + set.0.a: "1" => "1" + set.0.b: "" => "2" + type: "" => "aws_instance" +` + const testTerraformPlanIgnoreChangesWildcardStr = ` DIFF: diff --git a/terraform/test-fixtures/apply-terraform-env/main.tf b/terraform/test-fixtures/apply-terraform-env/main.tf new file mode 100644 index 0000000000..a5ab886177 --- /dev/null +++ b/terraform/test-fixtures/apply-terraform-env/main.tf @@ -0,0 +1,3 @@ +output "output" { + value = "${terraform.env}" +} diff --git a/terraform/test-fixtures/input-submodule-count/main.tf b/terraform/test-fixtures/input-submodule-count/main.tf new file mode 100644 index 0000000000..1cbfc3450f --- /dev/null +++ b/terraform/test-fixtures/input-submodule-count/main.tf @@ -0,0 +1,4 @@ +module "mod" { + source = "./mod" + count = 2 +} diff --git a/terraform/test-fixtures/input-submodule-count/mod/main.tf b/terraform/test-fixtures/input-submodule-count/mod/main.tf new file mode 100644 index 0000000000..995abe2564 --- /dev/null +++ b/terraform/test-fixtures/input-submodule-count/mod/main.tf @@ -0,0 +1,11 @@ +variable "count" { +} + +resource "aws_instance" "foo" { + count = "${var.count}" +} + +module "submod" { + source = "./submod" + list = ["${aws_instance.foo.*.id}"] +} diff --git a/terraform/test-fixtures/input-submodule-count/mod/submod/main.tf b/terraform/test-fixtures/input-submodule-count/mod/submod/main.tf new file mode 100644 index 0000000000..c0c8d15afa --- /dev/null +++ b/terraform/test-fixtures/input-submodule-count/mod/submod/main.tf @@ -0,0 +1,7 @@ +variable "list" { + 
type = "list" +} + +resource "aws_instance" "bar" { + count = "${var.list[0]}" +} diff --git a/terraform/test-fixtures/plan-ignore-changes-with-flatmaps/main.tf b/terraform/test-fixtures/plan-ignore-changes-with-flatmaps/main.tf new file mode 100644 index 0000000000..49885194ea --- /dev/null +++ b/terraform/test-fixtures/plan-ignore-changes-with-flatmaps/main.tf @@ -0,0 +1,16 @@ +resource "aws_instance" "foo" { + id = "bar" + user_data = "x" + require_new = "yes" + + set = { + a = "1" + b = "2" + } + + lst = ["j", "k"] + + lifecycle { + ignore_changes = ["require_new"] + } +} diff --git a/terraform/test-fixtures/refresh-data-module-var/child/main.tf b/terraform/test-fixtures/refresh-data-module-var/child/main.tf new file mode 100644 index 0000000000..64d21beda0 --- /dev/null +++ b/terraform/test-fixtures/refresh-data-module-var/child/main.tf @@ -0,0 +1,6 @@ +variable "key" {} + +data "aws_data_source" "foo" { + id = "${var.key}" +} + diff --git a/terraform/test-fixtures/refresh-data-module-var/main.tf b/terraform/test-fixtures/refresh-data-module-var/main.tf new file mode 100644 index 0000000000..06f18b1b58 --- /dev/null +++ b/terraform/test-fixtures/refresh-data-module-var/main.tf @@ -0,0 +1,8 @@ +resource "aws_instance" "A" { + foo = "bar" +} + +module "child" { + source = "child" + key = "${aws_instance.A.id}" +} diff --git a/terraform/test-fixtures/state-upgrade/v1-to-v2-empty-path.tfstate b/terraform/test-fixtures/state-upgrade/v1-to-v2-empty-path.tfstate new file mode 100644 index 0000000000..ee7c9d1873 --- /dev/null +++ b/terraform/test-fixtures/state-upgrade/v1-to-v2-empty-path.tfstate @@ -0,0 +1,38 @@ +{ + "version": 1, + "modules": [{ + "resources": { + "aws_instance.foo1": {"primary":{}}, + "cloudstack_instance.foo1": {"primary":{}}, + "cloudstack_instance.foo2": {"primary":{}}, + "digitalocean_droplet.foo1": {"primary":{}}, + "digitalocean_droplet.foo2": {"primary":{}}, + "digitalocean_droplet.foo3": {"primary":{}}, + "docker_container.foo1": {"primary":{}}, + "docker_container.foo2": {"primary":{}}, + "docker_container.foo3": {"primary":{}}, + "docker_container.foo4": {"primary":{}}, + "google_compute_instance.foo1": {"primary":{}}, + "google_compute_instance.foo2": {"primary":{}}, + "google_compute_instance.foo3": {"primary":{}}, + "google_compute_instance.foo4": {"primary":{}}, + "google_compute_instance.foo5": {"primary":{}}, + "heroku_app.foo1": {"primary":{}}, + "heroku_app.foo2": {"primary":{}}, + "heroku_app.foo3": {"primary":{}}, + "heroku_app.foo4": {"primary":{}}, + "heroku_app.foo5": {"primary":{}}, + "heroku_app.foo6": {"primary":{}}, + "openstack_compute_instance_v2.foo1": {"primary":{}}, + "openstack_compute_instance_v2.foo2": {"primary":{}}, + "openstack_compute_instance_v2.foo3": {"primary":{}}, + "openstack_compute_instance_v2.foo4": {"primary":{}}, + "openstack_compute_instance_v2.foo5": {"primary":{}}, + "openstack_compute_instance_v2.foo6": {"primary":{}}, + "openstack_compute_instance_v2.foo7": {"primary":{}}, + "bar": {"primary":{}}, + "baz": {"primary":{}}, + "zip": {"primary":{}} + } + }] +} diff --git a/terraform/version.go b/terraform/version.go index 9ade41ca07..193e93cd85 100644 --- a/terraform/version.go +++ b/terraform/version.go @@ -7,7 +7,7 @@ import ( ) // The main version number that is being run at the moment. -const Version = "0.9.0" +const Version = "0.9.2" // A pre-release marker for the version. If this is "" (empty string) // then it means that it is a final release. 
Otherwise, this is a pre-release diff --git a/vendor/github.com/PagerDuty/go-pagerduty/client.go b/vendor/github.com/PagerDuty/go-pagerduty/client.go index 3cc17e00d5..7613684ca0 100644 --- a/vendor/github.com/PagerDuty/go-pagerduty/client.go +++ b/vendor/github.com/PagerDuty/go-pagerduty/client.go @@ -37,9 +37,9 @@ type APIReference struct { } type errorObject struct { - Code int `json:"code,omitempty"` - Mesage string `json:"message,omitempty"` - Errors []string `json:"errors,omitempty"` + Code int `json:"code,omitempty"` + Message string `json:"message,omitempty"` + Errors interface{} `json:"errors,omitempty"` } // Client wraps http client diff --git a/vendor/github.com/PagerDuty/go-pagerduty/maintenance_window.go b/vendor/github.com/PagerDuty/go-pagerduty/maintenance_window.go index feb807788e..72c5379556 100644 --- a/vendor/github.com/PagerDuty/go-pagerduty/maintenance_window.go +++ b/vendor/github.com/PagerDuty/go-pagerduty/maintenance_window.go @@ -9,13 +9,13 @@ import ( // MaintenanceWindow is used to temporarily disable one or more services for a set period of time. type MaintenanceWindow struct { APIObject - SequenceNumber uint `json:"sequence_number,omitempty"` - StartTime string `json:"start_time"` - EndTime string `json:"end_time"` - Description string - Services []APIObject - Teams []APIListObject - CreatedBy APIListObject `json:"created_by"` + SequenceNumber uint `json:"sequence_number,omitempty"` + StartTime string `json:"start_time"` + EndTime string `json:"end_time"` + Description string `json:"description"` + Services []APIObject `json:"services"` + Teams []APIListObject `json:"teams"` + CreatedBy APIListObject `json:"created_by"` } // ListMaintenanceWindowsResponse is the data structur returned from calling the ListMaintenanceWindows API endpoint. diff --git a/vendor/github.com/armon/go-metrics/README.md b/vendor/github.com/armon/go-metrics/README.md new file mode 100644 index 0000000000..a7399cddff --- /dev/null +++ b/vendor/github.com/armon/go-metrics/README.md @@ -0,0 +1,74 @@ +go-metrics +========== + +This library provides a `metrics` package which can be used to instrument code, +expose application metrics, and profile runtime performance in a flexible manner. + +Current API: [![GoDoc](https://godoc.org/github.com/armon/go-metrics?status.svg)](https://godoc.org/github.com/armon/go-metrics) + +Sinks +===== + +The `metrics` package makes use of a `MetricSink` interface to support delivery +to any type of backend. Currently the following sinks are provided: + +* StatsiteSink : Sinks to a [statsite](https://github.com/armon/statsite/) instance (TCP) +* StatsdSink: Sinks to a [StatsD](https://github.com/etsy/statsd/) / statsite instance (UDP) +* PrometheusSink: Sinks to a [Prometheus](http://prometheus.io/) metrics endpoint (exposed via HTTP for scrapes) +* InmemSink : Provides in-memory aggregation, can be used to export stats +* FanoutSink : Sinks to multiple sinks. Enables writing to multiple statsite instances for example. +* BlackholeSink : Sinks to nowhere + +In addition to the sinks, the `InmemSignal` can be used to catch a signal, +and dump a formatted output of recent metrics. For example, when a process gets +a SIGUSR1, it can dump to stderr recent performance metrics for debugging. 
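(Editorial aside, an assumed usage sketch rather than part of the upstream
README: the `FanoutSink` listed above composes several sinks, so a single
configuration line can send every metric to both statsite and an in-memory
aggregator.)

```go
statsite, _ := metrics.NewStatsiteSink("statsite:8125")
inm := metrics.NewInmemSink(10*time.Second, time.Minute)
metrics.NewGlobal(metrics.DefaultConfig("service-name"), metrics.FanoutSink{statsite, inm})
```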
+ +Examples +======== + +Here is an example of using the package: + +```go +func SlowMethod() { + // Profiling the runtime of a method + defer metrics.MeasureSince([]string{"SlowMethod"}, time.Now()) +} + +// Configure a statsite sink as the global metrics sink +sink, _ := metrics.NewStatsiteSink("statsite:8125") +metrics.NewGlobal(metrics.DefaultConfig("service-name"), sink) + +// Emit a Key/Value pair +metrics.EmitKey([]string{"questions", "meaning of life"}, 42) +``` + +Here is an example of setting up a signal handler: + +```go +// Setup the inmem sink and signal handler +inm := metrics.NewInmemSink(10*time.Second, time.Minute) +sig := metrics.DefaultInmemSignal(inm) +metrics.NewGlobal(metrics.DefaultConfig("service-name"), inm) + +// Run some code +inm.SetGauge([]string{"foo"}, 42) +inm.EmitKey([]string{"bar"}, 30) + +inm.IncrCounter([]string{"baz"}, 42) +inm.IncrCounter([]string{"baz"}, 1) +inm.IncrCounter([]string{"baz"}, 80) + +inm.AddSample([]string{"method", "wow"}, 42) +inm.AddSample([]string{"method", "wow"}, 100) +inm.AddSample([]string{"method", "wow"}, 22) + +.... +``` + +When a signal comes in, output like the following will be dumped to stderr: + + [2014-01-28 14:57:33.04 -0800 PST][G] 'foo': 42.000 + [2014-01-28 14:57:33.04 -0800 PST][P] 'bar': 30.000 + [2014-01-28 14:57:33.04 -0800 PST][C] 'baz': Count: 3 Min: 1.000 Mean: 41.000 Max: 80.000 Stddev: 39.509 + [2014-01-28 14:57:33.04 -0800 PST][S] 'method.wow': Count: 3 Min: 22.000 Mean: 54.667 Max: 100.000 Stddev: 40.513 + diff --git a/vendor/github.com/armon/go-metrics/const_unix.go b/vendor/github.com/armon/go-metrics/const_unix.go new file mode 100644 index 0000000000..31098dd57e --- /dev/null +++ b/vendor/github.com/armon/go-metrics/const_unix.go @@ -0,0 +1,12 @@ +// +build !windows + +package metrics + +import ( + "syscall" +) + +const ( + // DefaultSignal is used with DefaultInmemSignal + DefaultSignal = syscall.SIGUSR1 +) diff --git a/vendor/github.com/armon/go-metrics/const_windows.go b/vendor/github.com/armon/go-metrics/const_windows.go new file mode 100644 index 0000000000..38136af3e4 --- /dev/null +++ b/vendor/github.com/armon/go-metrics/const_windows.go @@ -0,0 +1,13 @@ +// +build windows + +package metrics + +import ( + "syscall" +) + +const ( + // DefaultSignal is used with DefaultInmemSignal + // Windows has no SIGUSR1, use SIGBREAK + DefaultSignal = syscall.Signal(21) +) diff --git a/vendor/github.com/armon/go-metrics/inmem.go b/vendor/github.com/armon/go-metrics/inmem.go new file mode 100644 index 0000000000..83fb6bba09 --- /dev/null +++ b/vendor/github.com/armon/go-metrics/inmem.go @@ -0,0 +1,247 @@ +package metrics + +import ( + "fmt" + "math" + "strings" + "sync" + "time" +) + +// InmemSink provides a MetricSink that does in-memory aggregation +// without sending metrics over a network. It can be embedded within +// an application to provide profiling information. +type InmemSink struct { + // How long is each aggregation interval + interval time.Duration + + // Retain controls how many metrics interval we keep + retain time.Duration + + // maxIntervals is the maximum length of intervals. + // It is retain / interval. 
+ maxIntervals int + + // intervals is a slice of the retained intervals + intervals []*IntervalMetrics + intervalLock sync.RWMutex + + rateDenom float64 +} + +// IntervalMetrics stores the aggregated metrics +// for a specific interval +type IntervalMetrics struct { + sync.RWMutex + + // The start time of the interval + Interval time.Time + + // Gauges maps the key to the last set value + Gauges map[string]float32 + + // Points maps the string to the list of emitted values + // from EmitKey + Points map[string][]float32 + + // Counters maps the string key to a sum of the counter + // values + Counters map[string]*AggregateSample + + // Samples maps the key to an AggregateSample, + // which has the rolled up view of a sample + Samples map[string]*AggregateSample +} + +// NewIntervalMetrics creates a new IntervalMetrics for a given interval +func NewIntervalMetrics(intv time.Time) *IntervalMetrics { + return &IntervalMetrics{ + Interval: intv, + Gauges: make(map[string]float32), + Points: make(map[string][]float32), + Counters: make(map[string]*AggregateSample), + Samples: make(map[string]*AggregateSample), + } +} + +// AggregateSample is used to hold aggregate metrics +// about a sample +type AggregateSample struct { + Count int // The count of emitted pairs + Rate float64 // The count of emitted pairs per time unit (usually 1 second) + Sum float64 // The sum of values + SumSq float64 // The sum of squared values + Min float64 // Minimum value + Max float64 // Maximum value + LastUpdated time.Time // When value was last updated +} + +// Computes a Stddev of the values +func (a *AggregateSample) Stddev() float64 { + num := (float64(a.Count) * a.SumSq) - math.Pow(a.Sum, 2) + div := float64(a.Count * (a.Count - 1)) + if div == 0 { + return 0 + } + return math.Sqrt(num / div) +} + +// Computes a mean of the values +func (a *AggregateSample) Mean() float64 { + if a.Count == 0 { + return 0 + } + return a.Sum / float64(a.Count) +} + +// Ingest is used to update a sample +func (a *AggregateSample) Ingest(v float64, rateDenom float64) { + a.Count++ + a.Sum += v + a.SumSq += (v * v) + if v < a.Min || a.Count == 1 { + a.Min = v + } + if v > a.Max || a.Count == 1 { + a.Max = v + } + a.Rate = float64(a.Count)/rateDenom + a.LastUpdated = time.Now() +} + +func (a *AggregateSample) String() string { + if a.Count == 0 { + return "Count: 0" + } else if a.Stddev() == 0 { + return fmt.Sprintf("Count: %d Sum: %0.3f LastUpdated: %s", a.Count, a.Sum, a.LastUpdated) + } else { + return fmt.Sprintf("Count: %d Min: %0.3f Mean: %0.3f Max: %0.3f Stddev: %0.3f Sum: %0.3f LastUpdated: %s", + a.Count, a.Min, a.Mean(), a.Max, a.Stddev(), a.Sum, a.LastUpdated) + } +} + +// NewInmemSink is used to construct a new in-memory sink. +// Uses an aggregation interval and maximum retention period. 
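// Editorial aside (illustrative, not part of this patch): a quick check of
// the AggregateSample math above. Ingesting 1, 2 and 3 gives Count=3, Sum=6
// and SumSq=14, so Mean = 6/3 = 2 and
// Stddev = sqrt((3*14 - 6*6) / (3*2)) = sqrt(1) = 1.
package main

import (
	"fmt"
	"math"
)

func main() {
	var count, sum, sumSq float64
	for _, v := range []float64{1, 2, 3} {
		count++
		sum += v
		sumSq += v * v
	}
	mean := sum / count
	stddev := math.Sqrt((count*sumSq - sum*sum) / (count * (count - 1)))
	fmt.Println(mean, stddev) // 2 1
}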
+func NewInmemSink(interval, retain time.Duration) *InmemSink { + rateTimeUnit := time.Second + i := &InmemSink{ + interval: interval, + retain: retain, + maxIntervals: int(retain / interval), + rateDenom: float64(interval.Nanoseconds()) / float64(rateTimeUnit.Nanoseconds()), + } + i.intervals = make([]*IntervalMetrics, 0, i.maxIntervals) + return i +} + +func (i *InmemSink) SetGauge(key []string, val float32) { + k := i.flattenKey(key) + intv := i.getInterval() + + intv.Lock() + defer intv.Unlock() + intv.Gauges[k] = val +} + +func (i *InmemSink) EmitKey(key []string, val float32) { + k := i.flattenKey(key) + intv := i.getInterval() + + intv.Lock() + defer intv.Unlock() + vals := intv.Points[k] + intv.Points[k] = append(vals, val) +} + +func (i *InmemSink) IncrCounter(key []string, val float32) { + k := i.flattenKey(key) + intv := i.getInterval() + + intv.Lock() + defer intv.Unlock() + + agg := intv.Counters[k] + if agg == nil { + agg = &AggregateSample{} + intv.Counters[k] = agg + } + agg.Ingest(float64(val), i.rateDenom) +} + +func (i *InmemSink) AddSample(key []string, val float32) { + k := i.flattenKey(key) + intv := i.getInterval() + + intv.Lock() + defer intv.Unlock() + + agg := intv.Samples[k] + if agg == nil { + agg = &AggregateSample{} + intv.Samples[k] = agg + } + agg.Ingest(float64(val), i.rateDenom) +} + +// Data is used to retrieve all the aggregated metrics +// Intervals may be in use, and a read lock should be acquired +func (i *InmemSink) Data() []*IntervalMetrics { + // Get the current interval, forces creation + i.getInterval() + + i.intervalLock.RLock() + defer i.intervalLock.RUnlock() + + intervals := make([]*IntervalMetrics, len(i.intervals)) + copy(intervals, i.intervals) + return intervals +} + +func (i *InmemSink) getExistingInterval(intv time.Time) *IntervalMetrics { + i.intervalLock.RLock() + defer i.intervalLock.RUnlock() + + n := len(i.intervals) + if n > 0 && i.intervals[n-1].Interval == intv { + return i.intervals[n-1] + } + return nil +} + +func (i *InmemSink) createInterval(intv time.Time) *IntervalMetrics { + i.intervalLock.Lock() + defer i.intervalLock.Unlock() + + // Check for an existing interval + n := len(i.intervals) + if n > 0 && i.intervals[n-1].Interval == intv { + return i.intervals[n-1] + } + + // Add the current interval + current := NewIntervalMetrics(intv) + i.intervals = append(i.intervals, current) + n++ + + // Truncate the intervals if they are too long + if n >= i.maxIntervals { + copy(i.intervals[0:], i.intervals[n-i.maxIntervals:]) + i.intervals = i.intervals[:i.maxIntervals] + } + return current +} + +// getInterval returns the current interval to write to +func (i *InmemSink) getInterval() *IntervalMetrics { + intv := time.Now().Truncate(i.interval) + if m := i.getExistingInterval(intv); m != nil { + return m + } + return i.createInterval(intv) +} + +// Flattens the key for formatting, removes spaces +func (i *InmemSink) flattenKey(parts []string) string { + joined := strings.Join(parts, ".") + return strings.Replace(joined, " ", "_", -1) +} diff --git a/vendor/github.com/armon/go-metrics/inmem_signal.go b/vendor/github.com/armon/go-metrics/inmem_signal.go new file mode 100644 index 0000000000..95d08ee10f --- /dev/null +++ b/vendor/github.com/armon/go-metrics/inmem_signal.go @@ -0,0 +1,100 @@ +package metrics + +import ( + "bytes" + "fmt" + "io" + "os" + "os/signal" + "sync" + "syscall" +) + +// InmemSignal is used to listen for a given signal, and when received, +// to dump the current metrics from the InmemSink to an io.Writer +type 
InmemSignal struct { + signal syscall.Signal + inm *InmemSink + w io.Writer + sigCh chan os.Signal + + stop bool + stopCh chan struct{} + stopLock sync.Mutex +} + +// NewInmemSignal creates a new InmemSignal which listens for a given signal, +// and dumps the current metrics out to a writer +func NewInmemSignal(inmem *InmemSink, sig syscall.Signal, w io.Writer) *InmemSignal { + i := &InmemSignal{ + signal: sig, + inm: inmem, + w: w, + sigCh: make(chan os.Signal, 1), + stopCh: make(chan struct{}), + } + signal.Notify(i.sigCh, sig) + go i.run() + return i +} + +// DefaultInmemSignal returns a new InmemSignal that responds to SIGUSR1 +// and writes output to stderr. Windows uses SIGBREAK +func DefaultInmemSignal(inmem *InmemSink) *InmemSignal { + return NewInmemSignal(inmem, DefaultSignal, os.Stderr) +} + +// Stop is used to stop the InmemSignal from listening +func (i *InmemSignal) Stop() { + i.stopLock.Lock() + defer i.stopLock.Unlock() + + if i.stop { + return + } + i.stop = true + close(i.stopCh) + signal.Stop(i.sigCh) +} + +// run is a long running routine that handles signals +func (i *InmemSignal) run() { + for { + select { + case <-i.sigCh: + i.dumpStats() + case <-i.stopCh: + return + } + } +} + +// dumpStats is used to dump the data to output writer +func (i *InmemSignal) dumpStats() { + buf := bytes.NewBuffer(nil) + + data := i.inm.Data() + // Skip the last period which is still being aggregated + for i := 0; i < len(data)-1; i++ { + intv := data[i] + intv.RLock() + for name, val := range intv.Gauges { + fmt.Fprintf(buf, "[%v][G] '%s': %0.3f\n", intv.Interval, name, val) + } + for name, vals := range intv.Points { + for _, val := range vals { + fmt.Fprintf(buf, "[%v][P] '%s': %0.3f\n", intv.Interval, name, val) + } + } + for name, agg := range intv.Counters { + fmt.Fprintf(buf, "[%v][C] '%s': %s\n", intv.Interval, name, agg) + } + for name, agg := range intv.Samples { + fmt.Fprintf(buf, "[%v][S] '%s': %s\n", intv.Interval, name, agg) + } + intv.RUnlock() + } + + // Write out the bytes + i.w.Write(buf.Bytes()) +} diff --git a/vendor/github.com/armon/go-metrics/metrics.go b/vendor/github.com/armon/go-metrics/metrics.go new file mode 100755 index 0000000000..b818e4182c --- /dev/null +++ b/vendor/github.com/armon/go-metrics/metrics.go @@ -0,0 +1,115 @@ +package metrics + +import ( + "runtime" + "time" +) + +func (m *Metrics) SetGauge(key []string, val float32) { + if m.HostName != "" && m.EnableHostname { + key = insert(0, m.HostName, key) + } + if m.EnableTypePrefix { + key = insert(0, "gauge", key) + } + if m.ServiceName != "" { + key = insert(0, m.ServiceName, key) + } + m.sink.SetGauge(key, val) +} + +func (m *Metrics) EmitKey(key []string, val float32) { + if m.EnableTypePrefix { + key = insert(0, "kv", key) + } + if m.ServiceName != "" { + key = insert(0, m.ServiceName, key) + } + m.sink.EmitKey(key, val) +} + +func (m *Metrics) IncrCounter(key []string, val float32) { + if m.EnableTypePrefix { + key = insert(0, "counter", key) + } + if m.ServiceName != "" { + key = insert(0, m.ServiceName, key) + } + m.sink.IncrCounter(key, val) +} + +func (m *Metrics) AddSample(key []string, val float32) { + if m.EnableTypePrefix { + key = insert(0, "sample", key) + } + if m.ServiceName != "" { + key = insert(0, m.ServiceName, key) + } + m.sink.AddSample(key, val) +} + +func (m *Metrics) MeasureSince(key []string, start time.Time) { + if m.EnableTypePrefix { + key = insert(0, "timer", key) + } + if m.ServiceName != "" { + key = insert(0, m.ServiceName, key) + } + now := time.Now() + elapsed := 
now.Sub(start) + msec := float32(elapsed.Nanoseconds()) / float32(m.TimerGranularity) + m.sink.AddSample(key, msec) +} + +// Periodically collects runtime stats to publish +func (m *Metrics) collectStats() { + for { + time.Sleep(m.ProfileInterval) + m.emitRuntimeStats() + } +} + +// Emits various runtime statsitics +func (m *Metrics) emitRuntimeStats() { + // Export number of Goroutines + numRoutines := runtime.NumGoroutine() + m.SetGauge([]string{"runtime", "num_goroutines"}, float32(numRoutines)) + + // Export memory stats + var stats runtime.MemStats + runtime.ReadMemStats(&stats) + m.SetGauge([]string{"runtime", "alloc_bytes"}, float32(stats.Alloc)) + m.SetGauge([]string{"runtime", "sys_bytes"}, float32(stats.Sys)) + m.SetGauge([]string{"runtime", "malloc_count"}, float32(stats.Mallocs)) + m.SetGauge([]string{"runtime", "free_count"}, float32(stats.Frees)) + m.SetGauge([]string{"runtime", "heap_objects"}, float32(stats.HeapObjects)) + m.SetGauge([]string{"runtime", "total_gc_pause_ns"}, float32(stats.PauseTotalNs)) + m.SetGauge([]string{"runtime", "total_gc_runs"}, float32(stats.NumGC)) + + // Export info about the last few GC runs + num := stats.NumGC + + // Handle wrap around + if num < m.lastNumGC { + m.lastNumGC = 0 + } + + // Ensure we don't scan more than 256 + if num-m.lastNumGC >= 256 { + m.lastNumGC = num - 255 + } + + for i := m.lastNumGC; i < num; i++ { + pause := stats.PauseNs[i%256] + m.AddSample([]string{"runtime", "gc_pause_ns"}, float32(pause)) + } + m.lastNumGC = num +} + +// Inserts a string value at an index into the slice +func insert(i int, v string, s []string) []string { + s = append(s, "") + copy(s[i+1:], s[i:]) + s[i] = v + return s +} diff --git a/vendor/github.com/armon/go-metrics/sink.go b/vendor/github.com/armon/go-metrics/sink.go new file mode 100755 index 0000000000..0c240c2c47 --- /dev/null +++ b/vendor/github.com/armon/go-metrics/sink.go @@ -0,0 +1,52 @@ +package metrics + +// The MetricSink interface is used to transmit metrics information +// to an external system +type MetricSink interface { + // A Gauge should retain the last value it is set to + SetGauge(key []string, val float32) + + // Should emit a Key/Value pair for each call + EmitKey(key []string, val float32) + + // Counters should accumulate values + IncrCounter(key []string, val float32) + + // Samples are for timing information, where quantiles are used + AddSample(key []string, val float32) +} + +// BlackholeSink is used to just blackhole messages +type BlackholeSink struct{} + +func (*BlackholeSink) SetGauge(key []string, val float32) {} +func (*BlackholeSink) EmitKey(key []string, val float32) {} +func (*BlackholeSink) IncrCounter(key []string, val float32) {} +func (*BlackholeSink) AddSample(key []string, val float32) {} + +// FanoutSink is used to sink to fanout values to multiple sinks +type FanoutSink []MetricSink + +func (fh FanoutSink) SetGauge(key []string, val float32) { + for _, s := range fh { + s.SetGauge(key, val) + } +} + +func (fh FanoutSink) EmitKey(key []string, val float32) { + for _, s := range fh { + s.EmitKey(key, val) + } +} + +func (fh FanoutSink) IncrCounter(key []string, val float32) { + for _, s := range fh { + s.IncrCounter(key, val) + } +} + +func (fh FanoutSink) AddSample(key []string, val float32) { + for _, s := range fh { + s.AddSample(key, val) + } +} diff --git a/vendor/github.com/armon/go-metrics/start.go b/vendor/github.com/armon/go-metrics/start.go new file mode 100755 index 0000000000..44113f1004 --- /dev/null +++ 
b/vendor/github.com/armon/go-metrics/start.go @@ -0,0 +1,95 @@ +package metrics + +import ( + "os" + "time" +) + +// Config is used to configure metrics settings +type Config struct { + ServiceName string // Prefixed with keys to seperate services + HostName string // Hostname to use. If not provided and EnableHostname, it will be os.Hostname + EnableHostname bool // Enable prefixing gauge values with hostname + EnableRuntimeMetrics bool // Enables profiling of runtime metrics (GC, Goroutines, Memory) + EnableTypePrefix bool // Prefixes key with a type ("counter", "gauge", "timer") + TimerGranularity time.Duration // Granularity of timers. + ProfileInterval time.Duration // Interval to profile runtime metrics +} + +// Metrics represents an instance of a metrics sink that can +// be used to emit +type Metrics struct { + Config + lastNumGC uint32 + sink MetricSink +} + +// Shared global metrics instance +var globalMetrics *Metrics + +func init() { + // Initialize to a blackhole sink to avoid errors + globalMetrics = &Metrics{sink: &BlackholeSink{}} +} + +// DefaultConfig provides a sane default configuration +func DefaultConfig(serviceName string) *Config { + c := &Config{ + ServiceName: serviceName, // Use client provided service + HostName: "", + EnableHostname: true, // Enable hostname prefix + EnableRuntimeMetrics: true, // Enable runtime profiling + EnableTypePrefix: false, // Disable type prefix + TimerGranularity: time.Millisecond, // Timers are in milliseconds + ProfileInterval: time.Second, // Poll runtime every second + } + + // Try to get the hostname + name, _ := os.Hostname() + c.HostName = name + return c +} + +// New is used to create a new instance of Metrics +func New(conf *Config, sink MetricSink) (*Metrics, error) { + met := &Metrics{} + met.Config = *conf + met.sink = sink + + // Start the runtime collector + if conf.EnableRuntimeMetrics { + go met.collectStats() + } + return met, nil +} + +// NewGlobal is the same as New, but it assigns the metrics object to be +// used globally as well as returning it. +func NewGlobal(conf *Config, sink MetricSink) (*Metrics, error) { + metrics, err := New(conf, sink) + if err == nil { + globalMetrics = metrics + } + return metrics, err +} + +// Proxy all the methods to the globalMetrics instance +func SetGauge(key []string, val float32) { + globalMetrics.SetGauge(key, val) +} + +func EmitKey(key []string, val float32) { + globalMetrics.EmitKey(key, val) +} + +func IncrCounter(key []string, val float32) { + globalMetrics.IncrCounter(key, val) +} + +func AddSample(key []string, val float32) { + globalMetrics.AddSample(key, val) +} + +func MeasureSince(key []string, start time.Time) { + globalMetrics.MeasureSince(key, start) +} diff --git a/vendor/github.com/armon/go-metrics/statsd.go b/vendor/github.com/armon/go-metrics/statsd.go new file mode 100644 index 0000000000..65a5021a05 --- /dev/null +++ b/vendor/github.com/armon/go-metrics/statsd.go @@ -0,0 +1,154 @@ +package metrics + +import ( + "bytes" + "fmt" + "log" + "net" + "strings" + "time" +) + +const ( + // statsdMaxLen is the maximum size of a packet + // to send to statsd + statsdMaxLen = 1400 +) + +// StatsdSink provides a MetricSink that can be used +// with a statsite or statsd metrics server. It uses +// only UDP packets, while StatsiteSink uses TCP. 
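// Editorial aside (illustrative, not part of this patch): the sink methods
// that follow all serialize a metric into the plain-text statsd line
// protocol, "key:value|type\n", before queueing it for the UDP flusher.
// With the default %f formatting a gauge is rendered like this:
package main

import "fmt"

func main() {
	flatKey := "foo.bar" // flattenKey joins key parts with "." and strips spaces
	fmt.Printf("%s:%f|g\n", flatKey, float32(42)) // prints "foo.bar:42.000000|g"
}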
+type StatsdSink struct {
+	addr        string
+	metricQueue chan string
+}
+
+// NewStatsdSink is used to create a new StatsdSink
+func NewStatsdSink(addr string) (*StatsdSink, error) {
+	s := &StatsdSink{
+		addr:        addr,
+		metricQueue: make(chan string, 4096),
+	}
+	go s.flushMetrics()
+	return s, nil
+}
+
+// Shutdown is used to stop flushing to statsd
+func (s *StatsdSink) Shutdown() {
+	close(s.metricQueue)
+}
+
+func (s *StatsdSink) SetGauge(key []string, val float32) {
+	flatKey := s.flattenKey(key)
+	s.pushMetric(fmt.Sprintf("%s:%f|g\n", flatKey, val))
+}
+
+func (s *StatsdSink) EmitKey(key []string, val float32) {
+	flatKey := s.flattenKey(key)
+	s.pushMetric(fmt.Sprintf("%s:%f|kv\n", flatKey, val))
+}
+
+func (s *StatsdSink) IncrCounter(key []string, val float32) {
+	flatKey := s.flattenKey(key)
+	s.pushMetric(fmt.Sprintf("%s:%f|c\n", flatKey, val))
+}
+
+func (s *StatsdSink) AddSample(key []string, val float32) {
+	flatKey := s.flattenKey(key)
+	s.pushMetric(fmt.Sprintf("%s:%f|ms\n", flatKey, val))
+}
+
+// Flattens the key for formatting, removes spaces
+func (s *StatsdSink) flattenKey(parts []string) string {
+	joined := strings.Join(parts, ".")
+	return strings.Map(func(r rune) rune {
+		switch r {
+		case ':':
+			fallthrough
+		case ' ':
+			return '_'
+		default:
+			return r
+		}
+	}, joined)
+}
+
+// Does a non-blocking push to the metrics queue
+func (s *StatsdSink) pushMetric(m string) {
+	select {
+	case s.metricQueue <- m:
+	default:
+	}
+}
+
+// Flushes metrics
+func (s *StatsdSink) flushMetrics() {
+	var sock net.Conn
+	var err error
+	var wait <-chan time.Time
+	ticker := time.NewTicker(flushInterval)
+	defer ticker.Stop()
+
+CONNECT:
+	// Create a buffer
+	buf := bytes.NewBuffer(nil)
+
+	// Attempt to connect
+	sock, err = net.Dial("udp", s.addr)
+	if err != nil {
+		log.Printf("[ERR] Error connecting to statsd! Err: %s", err)
+		goto WAIT
+	}
+
+	for {
+		select {
+		case metric, ok := <-s.metricQueue:
+			// Get a metric from the queue
+			if !ok {
+				goto QUIT
+			}
+
+			// Check if this would overflow the packet size
+			if len(metric)+buf.Len() > statsdMaxLen {
+				_, err := sock.Write(buf.Bytes())
+				buf.Reset()
+				if err != nil {
+					log.Printf("[ERR] Error writing to statsd! Err: %s", err)
+					goto WAIT
+				}
+			}
+
+			// Append to the buffer
+			buf.WriteString(metric)
+
+		case <-ticker.C:
+			if buf.Len() == 0 {
+				continue
+			}
+
+			_, err := sock.Write(buf.Bytes())
+			buf.Reset()
+			if err != nil {
+				log.Printf("[ERR] Error flushing to statsd! Err: %s", err)
+				goto WAIT
+			}
+		}
+	}
+
+WAIT:
+	// Wait for a while
+	wait = time.After(time.Duration(5) * time.Second)
+	for {
+		select {
+		// Dequeue the messages to avoid backlog
+		case _, ok := <-s.metricQueue:
+			if !ok {
+				goto QUIT
+			}
+		case <-wait:
+			goto CONNECT
+		}
+	}
+QUIT:
+	s.metricQueue = nil
+}
diff --git a/vendor/github.com/armon/go-metrics/statsite.go b/vendor/github.com/armon/go-metrics/statsite.go
new file mode 100755
index 0000000000..68730139a7
--- /dev/null
+++ b/vendor/github.com/armon/go-metrics/statsite.go
@@ -0,0 +1,142 @@
+package metrics
+
+import (
+	"bufio"
+	"fmt"
+	"log"
+	"net"
+	"strings"
+	"time"
+)
+
+const (
+	// We force flush the statsite metrics after this period of
+	// inactivity. Prevents stats from getting stuck in a buffer
+	// forever.
+	flushInterval = 100 * time.Millisecond
+)
+
+// StatsiteSink provides a MetricSink that can be used with a
+// statsite metrics server
+type StatsiteSink struct {
+	addr        string
+	metricQueue chan string
+}
+
+// NewStatsiteSink is used to create a new StatsiteSink
+func NewStatsiteSink(addr string) (*StatsiteSink, error) {
+	s := &StatsiteSink{
+		addr:        addr,
+		metricQueue: make(chan string, 4096),
+	}
+	go s.flushMetrics()
+	return s, nil
+}
+
+// Shutdown is used to stop flushing to statsite
+func (s *StatsiteSink) Shutdown() {
+	close(s.metricQueue)
+}
+
+func (s *StatsiteSink) SetGauge(key []string, val float32) {
+	flatKey := s.flattenKey(key)
+	s.pushMetric(fmt.Sprintf("%s:%f|g\n", flatKey, val))
+}
+
+func (s *StatsiteSink) EmitKey(key []string, val float32) {
+	flatKey := s.flattenKey(key)
+	s.pushMetric(fmt.Sprintf("%s:%f|kv\n", flatKey, val))
+}
+
+func (s *StatsiteSink) IncrCounter(key []string, val float32) {
+	flatKey := s.flattenKey(key)
+	s.pushMetric(fmt.Sprintf("%s:%f|c\n", flatKey, val))
+}
+
+func (s *StatsiteSink) AddSample(key []string, val float32) {
+	flatKey := s.flattenKey(key)
+	s.pushMetric(fmt.Sprintf("%s:%f|ms\n", flatKey, val))
+}
+
+// Flattens the key for formatting, removes spaces
+func (s *StatsiteSink) flattenKey(parts []string) string {
+	joined := strings.Join(parts, ".")
+	return strings.Map(func(r rune) rune {
+		switch r {
+		case ':':
+			fallthrough
+		case ' ':
+			return '_'
+		default:
+			return r
+		}
+	}, joined)
+}
+
+// Does a non-blocking push to the metrics queue
+func (s *StatsiteSink) pushMetric(m string) {
+	select {
+	case s.metricQueue <- m:
+	default:
+	}
+}
+
+// Flushes metrics
+func (s *StatsiteSink) flushMetrics() {
+	var sock net.Conn
+	var err error
+	var wait <-chan time.Time
+	var buffered *bufio.Writer
+	ticker := time.NewTicker(flushInterval)
+	defer ticker.Stop()
+
+CONNECT:
+	// Attempt to connect
+	sock, err = net.Dial("tcp", s.addr)
+	if err != nil {
+		log.Printf("[ERR] Error connecting to statsite! Err: %s", err)
+		goto WAIT
+	}
+
+	// Create a buffered writer
+	buffered = bufio.NewWriter(sock)
+
+	for {
+		select {
+		case metric, ok := <-s.metricQueue:
+			// Get a metric from the queue
+			if !ok {
+				goto QUIT
+			}
+
+			// Try to send to statsite
+			_, err := buffered.Write([]byte(metric))
+			if err != nil {
+				log.Printf("[ERR] Error writing to statsite! Err: %s", err)
+				goto WAIT
+			}
+		case <-ticker.C:
+			if err := buffered.Flush(); err != nil {
+				log.Printf("[ERR] Error flushing to statsite! Err: %s", err)
+				goto WAIT
+			}
+		}
+	}
+
+WAIT:
+	// Wait for a while
+	wait = time.After(time.Duration(5) * time.Second)
+	for {
+		select {
+		// Dequeue the messages to avoid backlog
+		case _, ok := <-s.metricQueue:
+			if !ok {
+				goto QUIT
+			}
+		case <-wait:
+			goto CONNECT
+		}
+	}
+QUIT:
+	s.metricQueue = nil
+}
diff --git a/vendor/github.com/aws/aws-sdk-go/CHANGELOG.md b/vendor/github.com/aws/aws-sdk-go/CHANGELOG.md
index e4e72c3685..ebacd5ba40 100644
--- a/vendor/github.com/aws/aws-sdk-go/CHANGELOG.md
+++ b/vendor/github.com/aws/aws-sdk-go/CHANGELOG.md
@@ -1,3 +1,55 @@
+Release v1.7.9 (2017-03-13)
+===
+
+Service Client Updates
+---
+* `service/devicefarm`: Updates service API, documentation, paginators, and examples
+  * Network shaping allows users to simulate network connections and conditions while testing their Android, iOS, and web apps with AWS Device Farm.
+* `service/cloudwatchevents`: Updates service API, documentation, and examples
+
+SDK Enhancement
+===
+* `aws/session`: Add support for side-loaded CA bundles (#1117)
+  * Adds support for side-loading Certificate Authority bundle files to the SDK using the AWS_CA_BUNDLE environment variable or CustomCABundle session option.
+* `service/s3/s3crypto`: Add support for AES/CBC/PKCS5Padding (#1124)
+
+SDK Bug
+===
+* `service/rds`: Fixes an issue when not providing `SourceRegion` on cross
+region operations (#1127)
+* `service/rds`: Enables cross region for `CopyDBClusterSnapshot` and
+`CreateDBCluster` (#1128)
+
+Release v1.7.8 (2017-03-10)
+===
+
+Service Client Updates
+---
+* `service/codedeploy`: Updates service paginators
+  * Add paginators for Codedeploy
+* `service/emr`: Updates service API, documentation, and paginators
+  * This release includes support for instance fleets in Amazon EMR.
+
+Release v1.7.7 (2017-03-09)
+===
+
+Service Client Updates
+---
+* `service/apigateway`: Updates service API, documentation, and paginators
+  * API Gateway has added support for ACM certificates on custom domain names. Both Amazon-issued certificates and uploaded third-party certificates are supported.
+* `service/clouddirectory`: Updates service API, documentation, and paginators
+  * Introduces a new Cloud Directory API that enables you to retrieve all available parent paths for any type of object (a node, leaf node, policy node, and index node) in a hierarchy.
+
+Release v1.7.6 (2017-03-09)
+===
+
+Service Client Updates
+---
+* `service/organizations`: Updates service documentation and examples
+  * Doc-only Update for Organizations: Add SDK Code Snippets
+* `service/workdocs`: Adds new service
+  * The Administrative SDKs for Amazon WorkDocs provide full administrator-level access to WorkDocs site resources, allowing developers to integrate their applications to manage WorkDocs users, content, and permissions programmatically
+
 Release v1.7.5 (2017-03-08)
 ===
diff --git a/vendor/github.com/aws/aws-sdk-go/aws/endpoints/defaults.go b/vendor/github.com/aws/aws-sdk-go/aws/endpoints/defaults.go
index 4ba5d051df..5f0f6dd37a 100644
--- a/vendor/github.com/aws/aws-sdk-go/aws/endpoints/defaults.go
+++ b/vendor/github.com/aws/aws-sdk-go/aws/endpoints/defaults.go
@@ -133,6 +133,7 @@ const (
 	SwfServiceID = "swf" // Swf.
 	WafServiceID = "waf" // Waf.
 	WafRegionalServiceID = "waf-regional" // WafRegional.
+	WorkdocsServiceID = "workdocs" // Workdocs.
 	WorkspacesServiceID = "workspaces" // Workspaces.
 	XrayServiceID = "xray" // Xray.
 )
@@ -490,6 +491,7 @@ var awsPartition = partition{
 			"ap-southeast-2": endpoint{},
 			"eu-central-1":   endpoint{},
 			"eu-west-1":      endpoint{},
+			"eu-west-2":      endpoint{},
 			"us-east-1":      endpoint{},
 			"us-east-2":      endpoint{},
 			"us-west-2":      endpoint{},
@@ -503,6 +505,7 @@ var awsPartition = partition{
 			"ap-southeast-2": endpoint{},
 			"eu-central-1":   endpoint{},
 			"eu-west-1":      endpoint{},
+			"eu-west-2":      endpoint{},
 			"us-east-1":      endpoint{},
 			"us-east-2":      endpoint{},
 			"us-west-2":      endpoint{},
@@ -516,6 +519,7 @@ var awsPartition = partition{
 			"ap-southeast-2": endpoint{},
 			"eu-central-1":   endpoint{},
 			"eu-west-1":      endpoint{},
+			"eu-west-2":      endpoint{},
 			"us-east-1":      endpoint{},
 			"us-east-2":      endpoint{},
 			"us-west-2":      endpoint{},
@@ -850,6 +854,7 @@ var awsPartition = partition{
 			"ap-south-1":     endpoint{},
 			"ap-southeast-1": endpoint{},
 			"ap-southeast-2": endpoint{},
+			"ca-central-1":   endpoint{},
 			"eu-central-1":   endpoint{},
 			"eu-west-1":      endpoint{},
 			"eu-west-2":      endpoint{},
@@ -1450,6 +1455,7 @@ var awsPartition = partition{
 		Endpoints: endpoints{
 			"ap-northeast-1": endpoint{},
+			"eu-central-1":   endpoint{},
 			"eu-west-1":      endpoint{},
 			"us-east-1":      endpoint{},
 			"us-east-2":      endpoint{},
@@ -1583,6 +1589,17 @@ var awsPartition = partition{
 			"us-west-2": endpoint{},
 		},
 	},
+	"workdocs": service{
+
+		Endpoints: endpoints{
+			"ap-northeast-1": endpoint{},
+			"ap-southeast-1": endpoint{},
+			"ap-southeast-2": endpoint{},
+			"eu-west-1":      endpoint{},
+			"us-east-1":      endpoint{},
+			"us-west-2":      endpoint{},
+		},
+	},
 	"workspaces": service{
 
 		Endpoints: endpoints{
diff --git a/vendor/github.com/aws/aws-sdk-go/aws/session/doc.go b/vendor/github.com/aws/aws-sdk-go/aws/session/doc.go
index 9975e320ce..660d9bef98 100644
--- a/vendor/github.com/aws/aws-sdk-go/aws/session/doc.go
+++ b/vendor/github.com/aws/aws-sdk-go/aws/session/doc.go
@@ -96,7 +96,7 @@ handler logs every request and its payload made by a service client:
     // Create a session, and add additional handlers for all service
     // clients created with the Session to inherit. Adds logging handler.
     sess := session.Must(session.NewSession())
-
+
     sess.Handlers.Send.PushFront(func(r *request.Request) {
         // Log every request made and its payload
         logger.Println("Request: %s/%s, Payload: %s",
@@ -169,8 +169,8 @@ session option must be set to SharedConfigEnable, or AWS_SDK_LOAD_CONFIG
 environment variable set.
 
 The shared configuration instructs the SDK to assume an IAM role with MFA
-when the mfa_serial configuration field is set in the shared config
-(~/.aws/config) or shared credentials (~/.aws/credentials) file.
+when the mfa_serial configuration field is set in the shared config
+(~/.aws/config) or shared credentials (~/.aws/credentials) file.
 
 If mfa_serial is set in the configuration, the SDK will assume the role, and
 if the AssumeRoleTokenProvider session option is not set an error will
@@ -251,6 +251,24 @@ $HOME/.aws/config on Linux/Unix based systems, and
 
     AWS_CONFIG_FILE=$HOME/my_shared_config
 
+Path to a custom Certificate Authority (CA) bundle PEM file that the SDK
+will use instead of the default system's root CA bundle. Use this only
+if you want to replace the CA bundle the SDK uses for TLS requests.
+
+    AWS_CA_BUNDLE=$HOME/my_custom_ca_bundle
+
+Enabling this option will attempt to merge the Transport into the SDK's HTTP
+client. If the client's Transport is not a http.Transport an error will be
+returned. If the Transport's TLS config is set this option will cause the SDK
+to overwrite the Transport's TLS config's RootCAs value.
If the CA bundle file
+contains multiple certificates, all of them will be loaded.
+
+The Session option CustomCABundle is also available when creating sessions
+to enable this feature. The CustomCABundle session option field has priority
+over the AWS_CA_BUNDLE environment variable, and will be used if both are set.
+
+Setting a custom HTTPClient in the aws.Config options will override this setting.
+To use this option and a custom HTTP client, the HTTP client needs to be provided
+when creating the session, not the service client.
 */
 package session
diff --git a/vendor/github.com/aws/aws-sdk-go/aws/session/env_config.go b/vendor/github.com/aws/aws-sdk-go/aws/session/env_config.go
index d2f0c84481..e6278a782c 100644
--- a/vendor/github.com/aws/aws-sdk-go/aws/session/env_config.go
+++ b/vendor/github.com/aws/aws-sdk-go/aws/session/env_config.go
@@ -75,6 +75,24 @@ type envConfig struct {
 	//
 	//	AWS_CONFIG_FILE=$HOME/my_shared_config
 	SharedConfigFile string
+
+	// Sets the path to a custom Certificate Authority (CA) bundle PEM file
+	// that the SDK will use instead of the system's root CA bundle.
+	// Only use this if you want to configure the SDK to use a custom set
+	// of CAs.
+	//
+	// Enabling this option will attempt to merge the Transport
+	// into the SDK's HTTP client. If the client's Transport is
+	// not a http.Transport an error will be returned. If the
+	// Transport's TLS config is set this option will cause the
+	// SDK to overwrite the Transport's TLS config's RootCAs value.
+	//
+	// Setting a custom HTTPClient in the aws.Config options will override this setting.
+	// To use this option and a custom HTTP client, the HTTP client needs to be provided
+	// when creating the session, not the service client.
+	//
+	//	AWS_CA_BUNDLE=$HOME/my_custom_ca_bundle
+	CustomCABundle string
 }
 
 var (
@@ -150,6 +168,8 @@ func envConfigLoad(enableSharedConfig bool) envConfig {
 	cfg.SharedCredentialsFile = sharedCredentialsFilename()
 	cfg.SharedConfigFile = sharedConfigFilename()
 
+	cfg.CustomCABundle = os.Getenv("AWS_CA_BUNDLE")
+
 	return cfg
 }
diff --git a/vendor/github.com/aws/aws-sdk-go/aws/session/session.go b/vendor/github.com/aws/aws-sdk-go/aws/session/session.go
index 42ab3632e2..96c740d00f 100644
--- a/vendor/github.com/aws/aws-sdk-go/aws/session/session.go
+++ b/vendor/github.com/aws/aws-sdk-go/aws/session/session.go
@@ -1,7 +1,13 @@
 package session
 
 import (
+	"crypto/tls"
+	"crypto/x509"
 	"fmt"
+	"io"
+	"io/ioutil"
+	"net/http"
+	"os"
 
 	"github.com/aws/aws-sdk-go/aws"
 	"github.com/aws/aws-sdk-go/aws/awserr"
@@ -92,9 +98,10 @@ func New(cfgs ...*aws.Config) *Session {
 // control through code how the Session will be created. Such as specifying the
 // config profile, and controlling if shared config is enabled or not.
 func NewSession(cfgs ...*aws.Config) (*Session, error) {
-	envCfg := loadEnvConfig()
+	opts := Options{}
+	opts.Config.MergeIn(cfgs...)
 
-	return newSession(Options{}, envCfg, cfgs...)
+	return NewSessionWithOptions(opts)
 }
 
 // SharedConfigState provides the ability to optionally override the state
@@ -167,6 +174,21 @@ type Options struct {
 	// This field is only used if the shared configuration is enabled, and
 	// the config enables assume role with MFA via the mfa_serial field.
 	AssumeRoleTokenProvider func() (string, error)
+
+	// Reader for a custom Certificate Authority (CA) bundle in PEM format that
+	// the SDK will use instead of the default system's root CA bundle. Use this
+	// only if you want to replace the CA bundle the SDK uses for TLS requests.
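// A short sketch of the custom CA bundle feature documented above, using only
// the NewSessionWithOptions and Options.CustomCABundle APIs added in this diff;
// the bundle path is an illustrative assumption.
//
//	f, err := os.Open("/etc/ssl/my-custom-bundle.pem") // hypothetical path
//	if err != nil {
//		return err
//	}
//	defer f.Close()
//
//	// The session option takes priority over AWS_CA_BUNDLE when both are set.
//	sess, err := session.NewSessionWithOptions(session.Options{
//		CustomCABundle: f,
//	})
//
// The bundle is read once while the session is created, so closing the file
// afterwards is safe.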
+	//
+	// Enabling this option will attempt to merge the Transport into the SDK's HTTP
+	// client. If the client's Transport is not a http.Transport an error will be
+	// returned. If the Transport's TLS config is set this option will cause the SDK
+	// to overwrite the Transport's TLS config's RootCAs value. If the CA
+	// bundle reader contains multiple certificates, all of them will be loaded.
+	//
+	// The Session option CustomCABundle is also available when creating sessions
+	// to enable this feature. The CustomCABundle session option field has priority
+	// over the AWS_CA_BUNDLE environment variable, and will be used if both are set.
+	CustomCABundle io.Reader
 }
 
 // NewSessionWithOptions returns a new Session created from SDK defaults, config files,
@@ -217,6 +239,17 @@ func NewSessionWithOptions(opts Options) (*Session, error) {
 		envCfg.EnableSharedConfig = true
 	}
 
+	// Only use AWS_CA_BUNDLE if session option is not provided.
+	if len(envCfg.CustomCABundle) != 0 && opts.CustomCABundle == nil {
+		f, err := os.Open(envCfg.CustomCABundle)
+		if err != nil {
+			return nil, awserr.New("LoadCustomCABundleError",
+				"failed to open custom CA bundle PEM file", err)
+		}
+		defer f.Close()
+		opts.CustomCABundle = f
+	}
+
 	return newSession(opts, envCfg, &opts.Config)
 }
 
@@ -297,9 +330,61 @@ func newSession(opts Options, envCfg envConfig, cfgs ...*aws.Config) (*Session,
 	initHandlers(s)
 
+	// Setup HTTP client with custom cert bundle if enabled
+	if opts.CustomCABundle != nil {
+		if err := loadCustomCABundle(s, opts.CustomCABundle); err != nil {
+			return nil, err
+		}
+	}
+
 	return s, nil
 }
 
+func loadCustomCABundle(s *Session, bundle io.Reader) error {
+	var t *http.Transport
+	switch v := s.Config.HTTPClient.Transport.(type) {
+	case *http.Transport:
+		t = v
+	default:
+		if s.Config.HTTPClient.Transport != nil {
+			return awserr.New("LoadCustomCABundleError",
+				"unable to load custom CA bundle, HTTPClient's transport unsupported type", nil)
+		}
+	}
+	if t == nil {
+		t = &http.Transport{}
+	}
+
+	p, err := loadCertPool(bundle)
+	if err != nil {
+		return err
+	}
+	if t.TLSClientConfig == nil {
+		t.TLSClientConfig = &tls.Config{}
+	}
+	t.TLSClientConfig.RootCAs = p
+
+	s.Config.HTTPClient.Transport = t
+
+	return nil
+}
+
+func loadCertPool(r io.Reader) (*x509.CertPool, error) {
+	b, err := ioutil.ReadAll(r)
+	if err != nil {
+		return nil, awserr.New("LoadCustomCABundleError",
+			"failed to read custom CA bundle PEM file", err)
+	}
+
+	p := x509.NewCertPool()
+	if !p.AppendCertsFromPEM(b) {
+		return nil, awserr.New("LoadCustomCABundleError",
+			"failed to load custom CA bundle PEM file", err)
+	}
+
+	return p, nil
+}
+
 func mergeConfigSrcs(cfg, userCfg *aws.Config, envCfg envConfig, sharedCfg sharedConfig, handlers request.Handlers, sessOpts Options) error {
 	// Merge in user provided configuration
 	cfg.MergeIn(userCfg)
diff --git a/vendor/github.com/aws/aws-sdk-go/aws/version.go b/vendor/github.com/aws/aws-sdk-go/aws/version.go
index 5bb1a8e556..438506bf48 100644
--- a/vendor/github.com/aws/aws-sdk-go/aws/version.go
+++ b/vendor/github.com/aws/aws-sdk-go/aws/version.go
@@ -5,4 +5,4 @@ package aws
 const SDKName = "aws-sdk-go"
 
 // SDKVersion is the version of this SDK
-const SDKVersion = "1.7.5"
+const SDKVersion = "1.7.9"
diff --git a/vendor/github.com/aws/aws-sdk-go/service/apigateway/api.go b/vendor/github.com/aws/aws-sdk-go/service/apigateway/api.go
index 12775be03b..e0af3861b3 100644
--- a/vendor/github.com/aws/aws-sdk-go/service/apigateway/api.go
+++
b/vendor/github.com/aws/aws-sdk-go/service/apigateway/api.go @@ -7388,7 +7388,7 @@ func (s *Account) SetThrottleSettings(v *ThrottleSettings) *Account { type ApiKey struct { _ struct{} `type:"structure"` - // The date when the API Key was created, in ISO 8601 format (http://www.iso.org/iso/home/standards/iso8601.htm). + // The timestamp when the API Key was created. CreatedDate *time.Time `locationName:"createdDate" type:"timestamp" timestampFormat:"unix"` // An AWS Marketplace customer identifier , when integrating with the AWS SaaS @@ -7404,7 +7404,7 @@ type ApiKey struct { // The identifier of the API Key. Id *string `locationName:"id" type:"string"` - // When the API Key was last updated, in ISO 8601 format. + // The timestamp when the API Key was last updated. LastUpdatedDate *time.Time `locationName:"lastUpdatedDate" type:"timestamp" timestampFormat:"unix"` // The name of the API Key. @@ -7706,13 +7706,13 @@ type ClientCertificate struct { // The identifier of the client certificate. ClientCertificateId *string `locationName:"clientCertificateId" type:"string"` - // The date when the client certificate was created, in ISO 8601 format (http://www.iso.org/iso/home/standards/iso8601.htm). + // The timestamp when the client certificate was created. CreatedDate *time.Time `locationName:"createdDate" type:"timestamp" timestampFormat:"unix"` // The description of the client certificate. Description *string `locationName:"description" type:"string"` - // The date when the client certificate will expire, in ISO 8601 format (http://www.iso.org/iso/home/standards/iso8601.htm). + // The timestamp when the client certificate will expire. ExpirationDate *time.Time `locationName:"expirationDate" type:"timestamp" timestampFormat:"unix"` // The PEM-encoded public key of the client certificate, which can be used to @@ -8299,32 +8299,29 @@ func (s *CreateDocumentationVersionInput) SetStageName(v string) *CreateDocument type CreateDomainNameInput struct { _ struct{} `type:"structure"` - // The body of the server certificate provided by your certificate authority. - // - // CertificateBody is a required field - CertificateBody *string `locationName:"certificateBody" type:"string" required:"true"` + // The reference to an AWS-managed certificate. AWS Certificate Manager is the + // only supported source. + CertificateArn *string `locationName:"certificateArn" type:"string"` - // The intermediate certificates and optionally the root certificate, one after - // the other without any blank lines. If you include the root certificate, your - // certificate chain must start with intermediate certificates and end with - // the root certificate. Use the intermediate certificates that were provided + // [Deprecated] The body of the server certificate provided by your certificate + // authority. + CertificateBody *string `locationName:"certificateBody" type:"string"` + + // [Deprecated] The intermediate certificates and optionally the root certificate, + // one after the other without any blank lines. If you include the root certificate, + // your certificate chain must start with intermediate certificates and end + // with the root certificate. Use the intermediate certificates that were provided // by your certificate authority. Do not include any intermediaries that are // not in the chain of trust path. 
-	//
-	// CertificateChain is a required field
-	CertificateChain *string `locationName:"certificateChain" type:"string" required:"true"`
+	CertificateChain *string `locationName:"certificateChain" type:"string"`
 
-	// The name of the certificate.
-	//
-	// CertificateName is a required field
-	CertificateName *string `locationName:"certificateName" type:"string" required:"true"`
+	// The user-friendly name of the certificate.
+	CertificateName *string `locationName:"certificateName" type:"string"`
 
-	// Your certificate's private key.
-	//
-	// CertificatePrivateKey is a required field
-	CertificatePrivateKey *string `locationName:"certificatePrivateKey" type:"string" required:"true"`
+	// [Deprecated] Your certificate's private key.
+	CertificatePrivateKey *string `locationName:"certificatePrivateKey" type:"string"`
 
-	// The name of the DomainName resource.
+	// (Required) The name of the DomainName resource.
 	//
 	// DomainName is a required field
 	DomainName *string `locationName:"domainName" type:"string" required:"true"`
@@ -8343,18 +8340,6 @@ func (s CreateDomainNameInput) GoString() string {
 	return s.String()
 }
 
 // Validate inspects the fields of the type to determine if they are valid.
 func (s *CreateDomainNameInput) Validate() error {
 	invalidParams := request.ErrInvalidParams{Context: "CreateDomainNameInput"}
-	if s.CertificateBody == nil {
-		invalidParams.Add(request.NewErrParamRequired("CertificateBody"))
-	}
-	if s.CertificateChain == nil {
-		invalidParams.Add(request.NewErrParamRequired("CertificateChain"))
-	}
-	if s.CertificateName == nil {
-		invalidParams.Add(request.NewErrParamRequired("CertificateName"))
-	}
-	if s.CertificatePrivateKey == nil {
-		invalidParams.Add(request.NewErrParamRequired("CertificatePrivateKey"))
-	}
 	if s.DomainName == nil {
 		invalidParams.Add(request.NewErrParamRequired("DomainName"))
 	}
@@ -8365,6 +8350,12 @@ func (s *CreateDomainNameInput) Validate() error {
 	return nil
 }
 
+// SetCertificateArn sets the CertificateArn field's value.
+func (s *CreateDomainNameInput) SetCertificateArn(v string) *CreateDomainNameInput {
+	s.CertificateArn = &v
+	return s
+}
+
 // SetCertificateBody sets the CertificateBody field's value.
 func (s *CreateDomainNameInput) SetCertificateBody(v string) *CreateDomainNameInput {
 	s.CertificateBody = &v
@@ -10275,7 +10266,8 @@ type DocumentationPartLocation struct {
 	// a valid and required field for API entity types of API, AUTHORIZER, MODEL,
 	// RESOURCE, METHOD, PATH_PARAMETER, QUERY_PARAMETER, REQUEST_HEADER, REQUEST_BODY,
 	// RESPONSE, RESPONSE_HEADER, and RESPONSE_BODY. Content inheritance does not
-	// apply to any entity of the API, AUTHROZER, MODEL, or RESOURCE type.
+	// apply to any entity of the API, AUTHORIZER, METHOD, MODEL, REQUEST_BODY, or
+	// RESOURCE type.
 	//
 	// Type is a required field
 	Type *string `locationName:"type" type:"string" required:"true" enum:"DocumentationPartType"`
@@ -10390,10 +10382,14 @@ func (s *DocumentationVersion) SetVersion(v string) *DocumentationVersion {
 type DomainName struct {
 	_ struct{} `type:"structure"`
 
+	// The reference to an AWS-managed certificate. AWS Certificate Manager is the
+	// only supported source.
+	CertificateArn *string `locationName:"certificateArn" type:"string"`
+
 	// The name of the certificate.
 	CertificateName *string `locationName:"certificateName" type:"string"`
 
-	// The date when the certificate was uploaded, in ISO 8601 format (http://www.iso.org/iso/home/standards/iso8601.htm).
+	// The timestamp when the certificate was uploaded.
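// A hedged sketch of the new ACM-based form of CreateDomainName enabled by the
// CertificateArn field above; the domain, the certificate ARN, and the svc and
// sess variables are illustrative assumptions.
//
//	svc := apigateway.New(sess) // sess is an existing *session.Session
//	out, err := svc.CreateDomainName(&apigateway.CreateDomainNameInput{
//		// DomainName is still required; the certificate body fields are deprecated.
//		DomainName:     aws.String("api.example.com"),
//		CertificateArn: aws.String("arn:aws:acm:us-east-1:123456789012:certificate/example"),
//	})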
 	CertificateUploadDate *time.Time `locationName:"certificateUploadDate" type:"timestamp" timestampFormat:"unix"`
 
 	// The domain name of the Amazon CloudFront distribution. For more information,
@@ -10414,6 +10410,12 @@ func (s DomainName) GoString() string {
 	return s.String()
 }
 
+// SetCertificateArn sets the CertificateArn field's value.
+func (s *DomainName) SetCertificateArn(v string) *DomainName {
+	s.CertificateArn = &v
+	return s
+}
+
 // SetCertificateName sets the CertificateName field's value.
 func (s *DomainName) SetCertificateName(v string) *DomainName {
 	s.CertificateName = &v
@@ -15441,7 +15443,7 @@ type RestApi struct {
 	// RestApi supports only UTF-8-encoded text payloads.
 	BinaryMediaTypes []*string `locationName:"binaryMediaTypes" type:"list"`
 
-	// The date when the API was created, in ISO 8601 format (http://www.iso.org/iso/home/standards/iso8601.htm).
+	// The timestamp when the API was created.
 	CreatedDate *time.Time `locationName:"createdDate" type:"timestamp" timestampFormat:"unix"`
 
 	// The API's description.
@@ -15645,7 +15647,7 @@ type Stage struct {
 	// The identifier of a client certificate for an API stage.
 	ClientCertificateId *string `locationName:"clientCertificateId" type:"string"`
 
-	// The date and time that the stage was created, in ISO 8601 format (http://www.iso.org/iso/home/standards/iso8601.htm).
+	// The timestamp when the stage was created.
 	CreatedDate *time.Time `locationName:"createdDate" type:"timestamp" timestampFormat:"unix"`
 
 	// The identifier of the Deployment that the stage points to.
@@ -15657,8 +15659,7 @@ type Stage struct {
 	// The version of the associated API documentation.
 	DocumentationVersion *string `locationName:"documentationVersion" type:"string"`
 
-	// The date and time that information about the stage was last updated, in ISO
-	// 8601 format (http://www.iso.org/iso/home/standards/iso8601.htm).
+	// The timestamp when the stage was last updated.
 	LastUpdatedDate *time.Time `locationName:"lastUpdatedDate" type:"timestamp" timestampFormat:"unix"`
 
 	// A map that defines the method settings for a Stage resource. Keys (designated
diff --git a/vendor/github.com/aws/aws-sdk-go/service/cloudwatchevents/api.go b/vendor/github.com/aws/aws-sdk-go/service/cloudwatchevents/api.go
index 74f38b0939..d626b354fe 100644
--- a/vendor/github.com/aws/aws-sdk-go/service/cloudwatchevents/api.go
+++ b/vendor/github.com/aws/aws-sdk-go/service/cloudwatchevents/api.go
@@ -60,12 +60,13 @@ func (c *CloudWatchEvents) DeleteRuleRequest(input *DeleteRuleInput) (req *reque
 // DeleteRule API operation for Amazon CloudWatch Events.
 //
-// Deletes a rule. You must remove all targets from a rule using RemoveTargets
-// before you can delete the rule.
+// Deletes the specified rule.
 //
-// Note: When you delete a rule, incoming events might still continue to match
-// to the deleted rule. Please allow a short period of time for changes to take
-// effect.
+// You must remove all targets from a rule using RemoveTargets before you can
+// delete the rule.
+//
+// When you delete a rule, incoming events might continue to match to the deleted
+// rule. Please allow a short period of time for changes to take effect.
 //
 // Returns awserr.Error for service API and SDK errors.
Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -76,7 +77,7 @@ func (c *CloudWatchEvents) DeleteRuleRequest(input *DeleteRuleInput) (req *reque // // Returned Error Codes: // * ErrCodeConcurrentModificationException "ConcurrentModificationException" -// This exception occurs if there is concurrent modification on rule or target. +// There is concurrent modification on a rule or target. // // * ErrCodeInternalException "InternalException" // This exception occurs due to unexpected causes. @@ -133,7 +134,7 @@ func (c *CloudWatchEvents) DescribeRuleRequest(input *DescribeRuleInput) (req *r // DescribeRule API operation for Amazon CloudWatch Events. // -// Describes the details of the specified rule. +// Describes the specified rule. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -203,12 +204,11 @@ func (c *CloudWatchEvents) DisableRuleRequest(input *DisableRuleInput) (req *req // DisableRule API operation for Amazon CloudWatch Events. // -// Disables a rule. A disabled rule won't match any events, and won't self-trigger -// if it has a schedule expression. +// Disables the specified rule. A disabled rule won't match any events, and +// won't self-trigger if it has a schedule expression. // -// Note: When you disable a rule, incoming events might still continue to match -// to the disabled rule. Please allow a short period of time for changes to -// take effect. +// When you disable a rule, incoming events might continue to match to the disabled +// rule. Please allow a short period of time for changes to take effect. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -222,7 +222,7 @@ func (c *CloudWatchEvents) DisableRuleRequest(input *DisableRuleInput) (req *req // The rule does not exist. // // * ErrCodeConcurrentModificationException "ConcurrentModificationException" -// This exception occurs if there is concurrent modification on rule or target. +// There is concurrent modification on a rule or target. // // * ErrCodeInternalException "InternalException" // This exception occurs due to unexpected causes. @@ -281,11 +281,11 @@ func (c *CloudWatchEvents) EnableRuleRequest(input *EnableRuleInput) (req *reque // EnableRule API operation for Amazon CloudWatch Events. // -// Enables a rule. If the rule does not exist, the operation fails. +// Enables the specified rule. If the rule does not exist, the operation fails. // -// Note: When you enable a rule, incoming events might not immediately start -// matching to a newly enabled rule. Please allow a short period of time for -// changes to take effect. +// When you enable a rule, incoming events might not immediately start matching +// to a newly enabled rule. Please allow a short period of time for changes +// to take effect. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -299,7 +299,7 @@ func (c *CloudWatchEvents) EnableRuleRequest(input *EnableRuleInput) (req *reque // The rule does not exist. // // * ErrCodeConcurrentModificationException "ConcurrentModificationException" -// This exception occurs if there is concurrent modification on rule or target. +// There is concurrent modification on a rule or target. 
// // * ErrCodeInternalException "InternalException" // This exception occurs due to unexpected causes. @@ -356,12 +356,8 @@ func (c *CloudWatchEvents) ListRuleNamesByTargetRequest(input *ListRuleNamesByTa // ListRuleNamesByTarget API operation for Amazon CloudWatch Events. // -// Lists the names of the rules that the given target is put to. You can see -// which of the rules in Amazon CloudWatch Events can invoke a specific target -// in your account. If you have more rules in your account than the given limit, -// the results will be paginated. In that case, use the next token returned -// in the response and repeat ListRulesByTarget until the NextToken in the response -// is returned as null. +// Lists the rules for the specified target. You can see which of the rules +// in Amazon CloudWatch Events can invoke a specific target in your account. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -426,11 +422,8 @@ func (c *CloudWatchEvents) ListRulesRequest(input *ListRulesInput) (req *request // ListRules API operation for Amazon CloudWatch Events. // -// Lists the Amazon CloudWatch Events rules in your account. You can either -// list all the rules or you can provide a prefix to match to the rule names. -// If you have more rules in your account than the given limit, the results -// will be paginated. In that case, use the next token returned in the response -// and repeat ListRules until the NextToken in the response is returned as null. +// Lists your Amazon CloudWatch Events rules. You can either list all the rules +// or you can provide a prefix to match to the rule names. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -495,7 +488,7 @@ func (c *CloudWatchEvents) ListTargetsByRuleRequest(input *ListTargetsByRuleInpu // ListTargetsByRule API operation for Amazon CloudWatch Events. // -// Lists of targets assigned to the rule. +// Lists the targets assigned to the specified rule. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -629,20 +622,20 @@ func (c *CloudWatchEvents) PutRuleRequest(input *PutRuleInput) (req *request.Req // PutRule API operation for Amazon CloudWatch Events. // -// Creates or updates a rule. Rules are enabled by default, or based on value -// of the State parameter. You can disable a rule using DisableRule. +// Creates or updates the specified rule. Rules are enabled by default, or based +// on value of the state. You can disable a rule using DisableRule. // -// Note: When you create or update a rule, incoming events might not immediately -// start matching to new or updated rules. Please allow a short period of time -// for changes to take effect. +// When you create or update a rule, incoming events might not immediately start +// matching to new or updated rules. Please allow a short period of time for +// changes to take effect. // // A rule must contain at least an EventPattern or ScheduleExpression. Rules // with EventPatterns are triggered when a matching event is observed. Rules // with ScheduleExpressions self-trigger based on the given schedule. 
A rule // can have both an EventPattern and a ScheduleExpression, in which case the -// rule will trigger on matching events as well as on a schedule. +// rule triggers on matching events as well as on a schedule. // -// Note: Most services in AWS treat : or / as the same character in Amazon Resource +// Most services in AWS treat : or / as the same character in Amazon Resource // Names (ARNs). However, CloudWatch Events uses an exact match in event patterns // and rules. Be sure to use the correct ARN characters when creating event // patterns so that they match the ARN syntax in the event you want to match. @@ -656,14 +649,13 @@ func (c *CloudWatchEvents) PutRuleRequest(input *PutRuleInput) (req *request.Req // // Returned Error Codes: // * ErrCodeInvalidEventPatternException "InvalidEventPatternException" -// The event pattern is invalid. +// The event pattern is not valid. // // * ErrCodeLimitExceededException "LimitExceededException" -// This exception occurs if you try to create more rules or add more targets -// to a rule than allowed by default. +// You tried to create more rules or add more targets to a rule than is allowed. // // * ErrCodeConcurrentModificationException "ConcurrentModificationException" -// This exception occurs if there is concurrent modification on rule or target. +// There is concurrent modification on a rule or target. // // * ErrCodeInternalException "InternalException" // This exception occurs due to unexpected causes. @@ -720,30 +712,49 @@ func (c *CloudWatchEvents) PutTargetsRequest(input *PutTargetsInput) (req *reque // PutTargets API operation for Amazon CloudWatch Events. // -// Adds target(s) to a rule. Targets are the resources that can be invoked when -// a rule is triggered. For example, AWS Lambda functions, Amazon Kinesis streams, -// and built-in targets. Updates the target(s) if they are already associated -// with the role. In other words, if there is already a target with the given -// target ID, then the target associated with that ID is updated. +// Adds the specified targets to the specified rule, or updates the targets +// if they are already associated with the rule. // -// In order to be able to make API calls against the resources you own, Amazon -// CloudWatch Events needs the appropriate permissions. For AWS Lambda and Amazon -// SNS resources, CloudWatch Events relies on resource-based policies. For Amazon -// Kinesis streams, CloudWatch Events relies on IAM roles. For more information, -// see Permissions for Sending Events to Targets (http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/EventsTargetPermissions.html) -// in the Amazon CloudWatch Developer Guide. +// Targets are the resources that are invoked when a rule is triggered. Example +// targets include EC2 instances, AWS Lambda functions, Amazon Kinesis streams, +// Amazon ECS tasks, AWS Step Functions state machines, and built-in targets. +// Note that creating rules with built-in targets is supported only in the AWS +// Management Console. // -// Input and InputPath are mutually-exclusive and optional parameters of a target. -// When a rule is triggered due to a matched event, if for a target: +// For some target types, PutTargets provides target-specific parameters. If +// the target is an Amazon Kinesis stream, you can optionally specify which +// shard the event goes to by using the KinesisParameters argument. To invoke +// a command on multiple EC2 instances with one rule, you can use the RunCommandParameters +// field. 
 //
-//    * Neither Input nor InputPath is specified, then the entire event is passed
-//    to the target in JSON form.
-//    * InputPath is specified in the form of JSONPath (e.g. $.detail), then
-//    only the part of the event specified in the path is passed to the target
-//    (e.g. only the detail part of the event is passed).
-//    * Input is specified in the form of a valid JSON, then the matched event
+// To be able to make API calls against the resources that you own, Amazon CloudWatch
+// Events needs the appropriate permissions. For AWS Lambda and Amazon SNS resources,
+// CloudWatch Events relies on resource-based policies. For EC2 instances, Amazon
+// Kinesis streams, and AWS Step Functions state machines, CloudWatch Events
+// relies on IAM roles that you specify in the RoleARN argument in PutTarget.
+// For more information, see Authentication and Access Control (http://docs.aws.amazon.com/AmazonCloudWatch/latest/events/auth-and-access-control-cwe.html)
+// in the Amazon CloudWatch Events User Guide.
+//
+// Input, InputPath and InputTransformer are mutually exclusive and optional
+// parameters of a target. When a rule is triggered due to a matched event:
+//
+//    * If none of the following arguments are specified for a target, then
+//    the entire event is passed to the target in JSON form (unless the target
+//    is Amazon EC2 Run Command or Amazon ECS task, in which case nothing from
+//    the event is passed to the target).
+//
+//    * If Input is specified in the form of valid JSON, then the matched event
 //    is overridden with this constant.
-// Note: When you add targets to a rule, when the associated rule triggers,
+//
+//    * If InputPath is specified in the form of JSONPath (for example, $.detail),
+//    then only the part of the event specified in the path is passed to the
+//    target (for example, only the detail part of the event is passed).
+//
+//    * If InputTransformer is specified, then one or more specified JSONPaths
+//    are extracted from the event and used as values in a template that you
+//    specify as the input to the target.
+//
+// When you add targets to a rule and the associated rule triggers soon after,
 // new or updated targets might not be immediately invoked. Please allow a short
 // period of time for changes to take effect.
 //
@@ -759,11 +770,10 @@ func (c *CloudWatchEvents) PutTargetsRequest(input *PutTargetsInput) (req *reque
 // The rule does not exist.
 //
 // * ErrCodeConcurrentModificationException "ConcurrentModificationException"
-// This exception occurs if there is concurrent modification on rule or target.
+// There is concurrent modification on a rule or target.
 //
 // * ErrCodeLimitExceededException "LimitExceededException"
-// This exception occurs if you try to create more rules or add more targets
-// to a rule than allowed by default.
+// You tried to create more rules or add more targets to a rule than is allowed.
 //
 // * ErrCodeInternalException "InternalException"
 // This exception occurs due to unexpected causes.
@@ -820,12 +830,12 @@ func (c *CloudWatchEvents) RemoveTargetsRequest(input *RemoveTargetsInput) (req
 // RemoveTargets API operation for Amazon CloudWatch Events.
 //
-// Removes target(s) from a rule so that when the rule is triggered, those targets
-// will no longer be invoked.
+// Removes the specified targets from the specified rule. When the rule is triggered,
+// those targets are no longer invoked.
+
-// Note: When you remove a target, when the associated rule triggers, removed
-// targets might still continue to be invoked.
Please allow a short period of
-// time for changes to take effect.
+// When you remove a target and the associated rule triggers, removed targets
+// might continue to be invoked. Please allow a short period of time for changes
+// to take effect.
 //
 // Returns awserr.Error for service API and SDK errors. Use runtime type assertions
 // with awserr.Error's Code and Message methods to get detailed information about
@@ -839,7 +849,7 @@ func (c *CloudWatchEvents) RemoveTargetsRequest(input *RemoveTargetsInput) (req
 // The rule does not exist.
 //
 // * ErrCodeConcurrentModificationException "ConcurrentModificationException"
-// This exception occurs if there is concurrent modification on rule or target.
+// There is concurrent modification on a rule or target.
 //
 // * ErrCodeInternalException "InternalException"
 // This exception occurs due to unexpected causes.
@@ -896,9 +906,9 @@ func (c *CloudWatchEvents) TestEventPatternRequest(input *TestEventPatternInput)
 // TestEventPattern API operation for Amazon CloudWatch Events.
 //
-// Tests whether an event pattern matches the provided event.
+// Tests whether the specified event pattern matches the provided event.
 //
-// Note: Most services in AWS treat : or / as the same character in Amazon Resource
+// Most services in AWS treat : or / as the same character in Amazon Resource
 // Names (ARNs). However, CloudWatch Events uses an exact match in event patterns
 // and rules. Be sure to use the correct ARN characters when creating event
 // patterns so that they match the ARN syntax in the event you want to match.
@@ -912,7 +922,7 @@ func (c *CloudWatchEvents) TestEventPatternRequest(input *TestEventPatternInput)
 //
 // Returned Error Codes:
 // * ErrCodeInvalidEventPatternException "InvalidEventPatternException"
-// The event pattern is invalid.
+// The event pattern is not valid.
 //
 // * ErrCodeInternalException "InternalException"
 // This exception occurs due to unexpected causes.
@@ -924,12 +934,11 @@ func (c *CloudWatchEvents) TestEventPattern(input *TestEventPatternInput) (*Test
 	return out, err
 }
 
-// Container for the parameters to the DeleteRule operation.
 // Please also see https://docs.aws.amazon.com/goto/WebAPI/events-2015-10-07/DeleteRuleRequest
 type DeleteRuleInput struct {
 	_ struct{} `type:"structure"`
 
-	// The name of the rule to be deleted.
+	// The name of the rule.
 	//
 	// Name is a required field
 	Name *string `min:"1" type:"string" required:"true"`
@@ -982,12 +991,11 @@ func (s DeleteRuleOutput) GoString() string {
 	return s.String()
 }
 
-// Container for the parameters to the DescribeRule operation.
 // Please also see https://docs.aws.amazon.com/goto/WebAPI/events-2015-10-07/DescribeRuleRequest
 type DescribeRuleInput struct {
 	_ struct{} `type:"structure"`
 
-	// The name of the rule you want to describe details for.
+	// The name of the rule.
 	//
 	// Name is a required field
 	Name *string `min:"1" type:"string" required:"true"`
@@ -1025,21 +1033,20 @@ func (s *DescribeRuleInput) SetName(v string) *DescribeRuleInput {
 	return s
 }
 
-// The result of the DescribeRule operation.
 // Please also see https://docs.aws.amazon.com/goto/WebAPI/events-2015-10-07/DescribeRuleResponse
 type DescribeRuleOutput struct {
 	_ struct{} `type:"structure"`
 
-	// The Amazon Resource Name (ARN) associated with the rule.
+	// The Amazon Resource Name (ARN) of the rule.
 	Arn *string `min:"1" type:"string"`
 
-	// The rule's description.
+	// The description of the rule.
 	Description *string `type:"string"`
 
 	// The event pattern.
EventPattern *string `type:"string"` - // The rule's name. + // The name of the rule. Name *string `min:"1" type:"string"` // The Amazon Resource Name (ARN) of the IAM role associated with the rule. @@ -1104,12 +1111,11 @@ func (s *DescribeRuleOutput) SetState(v string) *DescribeRuleOutput { return s } -// Container for the parameters to the DisableRule operation. // Please also see https://docs.aws.amazon.com/goto/WebAPI/events-2015-10-07/DisableRuleRequest type DisableRuleInput struct { _ struct{} `type:"structure"` - // The name of the rule you want to disable. + // The name of the rule. // // Name is a required field Name *string `min:"1" type:"string" required:"true"` @@ -1162,12 +1168,68 @@ func (s DisableRuleOutput) GoString() string { return s.String() } -// Container for the parameters to the EnableRule operation. +// The custom parameters to be used when the target is an Amazon ECS cluster. +// Please also see https://docs.aws.amazon.com/goto/WebAPI/events-2015-10-07/EcsParameters +type EcsParameters struct { + _ struct{} `type:"structure"` + + // The number of tasks to create based on the TaskDefinition. The default is + // one. + TaskCount *int64 `min:"1" type:"integer"` + + // The ARN of the task definition to use if the event target is an Amazon ECS + // cluster. + // + // TaskDefinitionArn is a required field + TaskDefinitionArn *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s EcsParameters) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s EcsParameters) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *EcsParameters) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "EcsParameters"} + if s.TaskCount != nil && *s.TaskCount < 1 { + invalidParams.Add(request.NewErrParamMinValue("TaskCount", 1)) + } + if s.TaskDefinitionArn == nil { + invalidParams.Add(request.NewErrParamRequired("TaskDefinitionArn")) + } + if s.TaskDefinitionArn != nil && len(*s.TaskDefinitionArn) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TaskDefinitionArn", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetTaskCount sets the TaskCount field's value. +func (s *EcsParameters) SetTaskCount(v int64) *EcsParameters { + s.TaskCount = &v + return s +} + +// SetTaskDefinitionArn sets the TaskDefinitionArn field's value. +func (s *EcsParameters) SetTaskDefinitionArn(v string) *EcsParameters { + s.TaskDefinitionArn = &v + return s +} + // Please also see https://docs.aws.amazon.com/goto/WebAPI/events-2015-10-07/EnableRuleRequest type EnableRuleInput struct { _ struct{} `type:"structure"` - // The name of the rule that you want to enable. + // The name of the rule. // // Name is a required field Name *string `min:"1" type:"string" required:"true"` @@ -1220,7 +1282,106 @@ func (s EnableRuleOutput) GoString() string { return s.String() } -// Container for the parameters to the ListRuleNamesByTarget operation. +// Contains the parameters needed for you to provide custom input to a target +// based on one or more pieces of data extracted from the event. +// Please also see https://docs.aws.amazon.com/goto/WebAPI/events-2015-10-07/InputTransformer +type InputTransformer struct { + _ struct{} `type:"structure"` + + // Map of JSON paths to be extracted from the event. These are key-value pairs, + // where each value is a JSON path. 
+ InputPathsMap map[string]*string `type:"map"` + + // Input template where you can use the values of the keys from InputPathsMap + // to customize the data sent to the target. + // + // InputTemplate is a required field + InputTemplate *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s InputTransformer) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InputTransformer) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *InputTransformer) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "InputTransformer"} + if s.InputTemplate == nil { + invalidParams.Add(request.NewErrParamRequired("InputTemplate")) + } + if s.InputTemplate != nil && len(*s.InputTemplate) < 1 { + invalidParams.Add(request.NewErrParamMinLen("InputTemplate", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetInputPathsMap sets the InputPathsMap field's value. +func (s *InputTransformer) SetInputPathsMap(v map[string]*string) *InputTransformer { + s.InputPathsMap = v + return s +} + +// SetInputTemplate sets the InputTemplate field's value. +func (s *InputTransformer) SetInputTemplate(v string) *InputTransformer { + s.InputTemplate = &v + return s +} + +// This object enables you to specify a JSON path to extract from the event +// and use as the partition key for the Amazon Kinesis stream, so that you can +// control the shard to which the event goes. If you do not include this parameter, +// the default is to use the eventId as the partition key. +// Please also see https://docs.aws.amazon.com/goto/WebAPI/events-2015-10-07/KinesisParameters +type KinesisParameters struct { + _ struct{} `type:"structure"` + + // The JSON path to be extracted from the event and used as the partition key. + // For more information, see Amazon Kinesis Streams Key Concepts (http://docs.aws.amazon.com/streams/latest/dev/key-concepts.html#partition-key) + // in the Amazon Kinesis Streams Developer Guide. + // + // PartitionKeyPath is a required field + PartitionKeyPath *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s KinesisParameters) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s KinesisParameters) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *KinesisParameters) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "KinesisParameters"} + if s.PartitionKeyPath == nil { + invalidParams.Add(request.NewErrParamRequired("PartitionKeyPath")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPartitionKeyPath sets the PartitionKeyPath field's value. +func (s *KinesisParameters) SetPartitionKeyPath(v string) *KinesisParameters { + s.PartitionKeyPath = &v + return s +} + // Please also see https://docs.aws.amazon.com/goto/WebAPI/events-2015-10-07/ListRuleNamesByTargetRequest type ListRuleNamesByTargetInput struct { _ struct{} `type:"structure"` @@ -1228,12 +1389,10 @@ type ListRuleNamesByTargetInput struct { // The maximum number of results to return. Limit *int64 `min:"1" type:"integer"` - // The token returned by a previous call to indicate that there is more data - // available. 
+ // The token returned by a previous call to retrieve the next set of results. NextToken *string `min:"1" type:"string"` - // The Amazon Resource Name (ARN) of the target resource that you want to list - // the rules for. + // The Amazon Resource Name (ARN) of the target resource. // // TargetArn is a required field TargetArn *string `min:"1" type:"string" required:"true"` @@ -1289,15 +1448,15 @@ func (s *ListRuleNamesByTargetInput) SetTargetArn(v string) *ListRuleNamesByTarg return s } -// The result of the ListRuleNamesByTarget operation. // Please also see https://docs.aws.amazon.com/goto/WebAPI/events-2015-10-07/ListRuleNamesByTargetResponse type ListRuleNamesByTargetOutput struct { _ struct{} `type:"structure"` - // Indicates that there are additional results to retrieve. + // Indicates whether there are additional results to retrieve. If there are + // no more results, the value is null. NextToken *string `min:"1" type:"string"` - // List of rules names that can invoke the given target. + // The names of the rules that can invoke the given target. RuleNames []*string `type:"list"` } @@ -1323,7 +1482,6 @@ func (s *ListRuleNamesByTargetOutput) SetRuleNames(v []*string) *ListRuleNamesBy return s } -// Container for the parameters to the ListRules operation. // Please also see https://docs.aws.amazon.com/goto/WebAPI/events-2015-10-07/ListRulesRequest type ListRulesInput struct { _ struct{} `type:"structure"` @@ -1334,8 +1492,7 @@ type ListRulesInput struct { // The prefix matching the rule name. NamePrefix *string `min:"1" type:"string"` - // The token returned by a previous call to indicate that there is more data - // available. + // The token returned by a previous call to retrieve the next set of results. NextToken *string `min:"1" type:"string"` } @@ -1386,15 +1543,15 @@ func (s *ListRulesInput) SetNextToken(v string) *ListRulesInput { return s } -// The result of the ListRules operation. // Please also see https://docs.aws.amazon.com/goto/WebAPI/events-2015-10-07/ListRulesResponse type ListRulesOutput struct { _ struct{} `type:"structure"` - // Indicates that there are additional results to retrieve. + // Indicates whether there are additional results to retrieve. If there are + // no more results, the value is null. NextToken *string `min:"1" type:"string"` - // List of rules matching the specified criteria. + // The rules that match the specified criteria. Rules []*Rule `type:"list"` } @@ -1420,7 +1577,6 @@ func (s *ListRulesOutput) SetRules(v []*Rule) *ListRulesOutput { return s } -// Container for the parameters to the ListTargetsByRule operation. // Please also see https://docs.aws.amazon.com/goto/WebAPI/events-2015-10-07/ListTargetsByRuleRequest type ListTargetsByRuleInput struct { _ struct{} `type:"structure"` @@ -1428,11 +1584,10 @@ type ListTargetsByRuleInput struct { // The maximum number of results to return. Limit *int64 `min:"1" type:"integer"` - // The token returned by a previous call to indicate that there is more data - // available. + // The token returned by a previous call to retrieve the next set of results. NextToken *string `min:"1" type:"string"` - // The name of the rule whose targets you want to list. + // The name of the rule. // // Rule is a required field Rule *string `min:"1" type:"string" required:"true"` @@ -1488,16 +1643,16 @@ func (s *ListTargetsByRuleInput) SetRule(v string) *ListTargetsByRuleInput { return s } -// The result of the ListTargetsByRule operation. 
// Please also see https://docs.aws.amazon.com/goto/WebAPI/events-2015-10-07/ListTargetsByRuleResponse type ListTargetsByRuleOutput struct { _ struct{} `type:"structure"` - // Indicates that there are additional results to retrieve. + // Indicates whether there are additional results to retrieve. If there are + // no more results, the value is null. NextToken *string `min:"1" type:"string"` - // Lists the targets assigned to the rule. - Targets []*Target `type:"list"` + // The targets assigned to the rule. + Targets []*Target `min:"1" type:"list"` } // String returns the string representation @@ -1522,7 +1677,6 @@ func (s *ListTargetsByRuleOutput) SetTargets(v []*Target) *ListTargetsByRuleOutp return s } -// Container for the parameters to the PutEvents operation. // Please also see https://docs.aws.amazon.com/goto/WebAPI/events-2015-10-07/PutEventsRequest type PutEventsInput struct { _ struct{} `type:"structure"` @@ -1567,15 +1721,13 @@ func (s *PutEventsInput) SetEntries(v []*PutEventsRequestEntry) *PutEventsInput return s } -// The result of the PutEvents operation. // Please also see https://docs.aws.amazon.com/goto/WebAPI/events-2015-10-07/PutEventsResponse type PutEventsOutput struct { _ struct{} `type:"structure"` - // A list of successfully and unsuccessfully ingested events results. If the - // ingestion was successful, the entry will have the event ID in it. If not, - // then the ErrorCode and ErrorMessage can be used to identify the problem with - // the entry. + // The successfully and unsuccessfully ingested events results. If the ingestion + // was successful, the entry has the event ID in it. Otherwise, you can use + // the error code and error message to identify the problem with the entry. Entries []*PutEventsResultEntry `type:"list"` // The number of failed entries. @@ -1604,13 +1756,13 @@ func (s *PutEventsOutput) SetFailedEntryCount(v int64) *PutEventsOutput { return s } -// Contains information about the event to be used in PutEvents. +// Represents an event to be submitted. // Please also see https://docs.aws.amazon.com/goto/WebAPI/events-2015-10-07/PutEventsRequestEntry type PutEventsRequestEntry struct { _ struct{} `type:"structure"` // In the JSON sense, an object containing fields, which may also contain nested - // sub-objects. No constraints are imposed on its contents. + // subobjects. No constraints are imposed on its contents. Detail *string `type:"string"` // Free-form string used to decide what fields to expect in the event detail. @@ -1623,9 +1775,8 @@ type PutEventsRequestEntry struct { // The source of the event. Source *string `type:"string"` - // Timestamp of event, per RFC3339 (https://www.rfc-editor.org/rfc/rfc3339.txt). - // If no timestamp is provided, the timestamp of the PutEvents call will be - // used. + // The timestamp of the event, per RFC3339 (https://www.rfc-editor.org/rfc/rfc3339.txt). + // If no timestamp is provided, the timestamp of the PutEvents call is used. Time *time.Time `type:"timestamp" timestampFormat:"unix"` } @@ -1669,18 +1820,18 @@ func (s *PutEventsRequestEntry) SetTime(v time.Time) *PutEventsRequestEntry { return s } -// A PutEventsResult contains a list of PutEventsResultEntry. +// Represents an event that failed to be submitted. // Please also see https://docs.aws.amazon.com/goto/WebAPI/events-2015-10-07/PutEventsResultEntry type PutEventsResultEntry struct { _ struct{} `type:"structure"` - // The error code representing why the event submission failed on this entry. 
+ // The error code that indicates why the event submission failed. ErrorCode *string `type:"string"` - // The error message explaining why the event submission failed on this entry. + // The error message that explains why the event submission failed. ErrorMessage *string `type:"string"` - // The ID of the event submitted to Amazon CloudWatch Events. + // The ID of the event. EventId *string `type:"string"` } @@ -1712,7 +1863,6 @@ func (s *PutEventsResultEntry) SetEventId(v string) *PutEventsResultEntry { return s } -// Container for the parameters to the PutRule operation. // Please also see https://docs.aws.amazon.com/goto/WebAPI/events-2015-10-07/PutRuleRequest type PutRuleInput struct { _ struct{} `type:"structure"` @@ -1803,12 +1953,11 @@ func (s *PutRuleInput) SetState(v string) *PutRuleInput { return s } -// The result of the PutRule operation. // Please also see https://docs.aws.amazon.com/goto/WebAPI/events-2015-10-07/PutRuleResponse type PutRuleOutput struct { _ struct{} `type:"structure"` - // The Amazon Resource Name (ARN) that identifies the rule. + // The Amazon Resource Name (ARN) of the rule. RuleArn *string `min:"1" type:"string"` } @@ -1828,20 +1977,19 @@ func (s *PutRuleOutput) SetRuleArn(v string) *PutRuleOutput { return s } -// Container for the parameters to the PutTargets operation. // Please also see https://docs.aws.amazon.com/goto/WebAPI/events-2015-10-07/PutTargetsRequest type PutTargetsInput struct { _ struct{} `type:"structure"` - // The name of the rule you want to add targets to. + // The name of the rule. // // Rule is a required field Rule *string `min:"1" type:"string" required:"true"` - // List of targets you want to update or add to the rule. + // The targets to update or add to the rule. // // Targets is a required field - Targets []*Target `type:"list" required:"true"` + Targets []*Target `min:"1" type:"list" required:"true"` } // String returns the string representation @@ -1866,6 +2014,9 @@ func (s *PutTargetsInput) Validate() error { if s.Targets == nil { invalidParams.Add(request.NewErrParamRequired("Targets")) } + if s.Targets != nil && len(s.Targets) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Targets", 1)) + } if s.Targets != nil { for i, v := range s.Targets { if v == nil { @@ -1895,12 +2046,11 @@ func (s *PutTargetsInput) SetTargets(v []*Target) *PutTargetsInput { return s } -// The result of the PutTargets operation. // Please also see https://docs.aws.amazon.com/goto/WebAPI/events-2015-10-07/PutTargetsResponse type PutTargetsOutput struct { _ struct{} `type:"structure"` - // An array of failed target entries. + // The failed target entries. FailedEntries []*PutTargetsResultEntry `type:"list"` // The number of failed entries. @@ -1929,18 +2079,18 @@ func (s *PutTargetsOutput) SetFailedEntryCount(v int64) *PutTargetsOutput { return s } -// A PutTargetsResult contains a list of PutTargetsResultEntry. +// Represents a target that failed to be added to a rule. // Please also see https://docs.aws.amazon.com/goto/WebAPI/events-2015-10-07/PutTargetsResultEntry type PutTargetsResultEntry struct { _ struct{} `type:"structure"` - // The error code representing why the target submission failed on this entry. + // The error code that indicates why the target addition failed. ErrorCode *string `type:"string"` - // The error message explaining why the target submission failed on this entry. + // The error message that explains why the target addition failed. 
ErrorMessage *string `type:"string"` - // The ID of the target submitted to Amazon CloudWatch Events. + // The ID of the target. TargetId *string `min:"1" type:"string"` } @@ -1972,17 +2122,16 @@ func (s *PutTargetsResultEntry) SetTargetId(v string) *PutTargetsResultEntry { return s } -// Container for the parameters to the RemoveTargets operation. // Please also see https://docs.aws.amazon.com/goto/WebAPI/events-2015-10-07/RemoveTargetsRequest type RemoveTargetsInput struct { _ struct{} `type:"structure"` - // The list of target IDs to remove from the rule. + // The IDs of the targets to remove from the rule. // // Ids is a required field Ids []*string `min:"1" type:"list" required:"true"` - // The name of the rule you want to remove targets from. + // The name of the rule. // // Rule is a required field Rule *string `min:"1" type:"string" required:"true"` @@ -2032,12 +2181,11 @@ func (s *RemoveTargetsInput) SetRule(v string) *RemoveTargetsInput { return s } -// The result of the RemoveTargets operation. // Please also see https://docs.aws.amazon.com/goto/WebAPI/events-2015-10-07/RemoveTargetsResponse type RemoveTargetsOutput struct { _ struct{} `type:"structure"` - // An array of failed target entries. + // The failed target entries. FailedEntries []*RemoveTargetsResultEntry `type:"list"` // The number of failed entries. @@ -2066,19 +2214,18 @@ func (s *RemoveTargetsOutput) SetFailedEntryCount(v int64) *RemoveTargetsOutput return s } -// The ID of the target requested to be removed from the rule by Amazon CloudWatch -// Events. +// Represents a target that failed to be removed from a rule. // Please also see https://docs.aws.amazon.com/goto/WebAPI/events-2015-10-07/RemoveTargetsResultEntry type RemoveTargetsResultEntry struct { _ struct{} `type:"structure"` - // The error code representing why the target removal failed on this entry. + // The error code that indicates why the target removal failed. ErrorCode *string `type:"string"` - // The error message explaining why the target removal failed on this entry. + // The error message that explains why the target removal failed. ErrorMessage *string `type:"string"` - // The ID of the target requested to be removed by Amazon CloudWatch Events. + // The ID of the target. TargetId *string `min:"1" type:"string"` } @@ -2110,8 +2257,7 @@ func (s *RemoveTargetsResultEntry) SetTargetId(v string) *RemoveTargetsResultEnt return s } -// Contains information about a rule in Amazon CloudWatch Events. A ListRulesResult -// contains a list of Rules. +// Contains information about a rule in Amazon CloudWatch Events. // Please also see https://docs.aws.amazon.com/goto/WebAPI/events-2015-10-07/Rule type Rule struct { _ struct{} `type:"structure"` @@ -2125,17 +2271,16 @@ type Rule struct { // The event pattern of the rule. EventPattern *string `type:"string"` - // The rule's name. + // The name of the rule. Name *string `min:"1" type:"string"` - // The Amazon Resource Name (ARN) associated with the role that is used for - // target invocation. + // The Amazon Resource Name (ARN) of the role that is used for target invocation. RoleArn *string `min:"1" type:"string"` // The scheduling expression. For example, "cron(0 20 * * ? *)", "rate(5 minutes)". ScheduleExpression *string `type:"string"` - // The rule's state. + // The state of the rule. State *string `type:"string" enum:"RuleState"` } @@ -2191,41 +2336,175 @@ func (s *Rule) SetState(v string) *Rule { return s } -// Targets are the resources that can be invoked when a rule is triggered. 
For -// example, AWS Lambda functions, Amazon Kinesis streams, and built-in targets. -// -// Input and InputPath are mutually-exclusive and optional parameters of a target. -// When a rule is triggered due to a matched event, if for a target: -// -// * Neither Input nor InputPath is specified, then the entire event is passed -// to the target in JSON form. -// * InputPath is specified in the form of JSONPath (e.g. $.detail), then -// only the part of the event specified in the path is passed to the target -// (e.g. only the detail part of the event is passed). -// * Input is specified in the form of a valid JSON, then the matched event -// is overridden with this constant. +// This parameter contains the criteria (either InstanceIds or a tag) used to +// specify which EC2 instances are to be sent the command. +// Please also see https://docs.aws.amazon.com/goto/WebAPI/events-2015-10-07/RunCommandParameters +type RunCommandParameters struct { + _ struct{} `type:"structure"` + + // Currently, we support including only one RunCommandTarget block, which specifies + // either an array of InstanceIds or a tag. + // + // RunCommandTargets is a required field + RunCommandTargets []*RunCommandTarget `min:"1" type:"list" required:"true"` +} + +// String returns the string representation +func (s RunCommandParameters) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RunCommandParameters) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *RunCommandParameters) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RunCommandParameters"} + if s.RunCommandTargets == nil { + invalidParams.Add(request.NewErrParamRequired("RunCommandTargets")) + } + if s.RunCommandTargets != nil && len(s.RunCommandTargets) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RunCommandTargets", 1)) + } + if s.RunCommandTargets != nil { + for i, v := range s.RunCommandTargets { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "RunCommandTargets", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetRunCommandTargets sets the RunCommandTargets field's value. +func (s *RunCommandParameters) SetRunCommandTargets(v []*RunCommandTarget) *RunCommandParameters { + s.RunCommandTargets = v + return s +} + +// Information about the EC2 instances that are to be sent the command, specified +// as key-value pairs. Each RunCommandTarget block can include only one key, +// but this key may specify multiple values. +// Please also see https://docs.aws.amazon.com/goto/WebAPI/events-2015-10-07/RunCommandTarget +type RunCommandTarget struct { + _ struct{} `type:"structure"` + + // Can be either tag:tag-key or InstanceIds. + // + // Key is a required field + Key *string `min:"1" type:"string" required:"true"` + + // If Key is tag:tag-key, Values is a list of tag values. If Key is InstanceIds, + // Values is a list of Amazon EC2 instance IDs. 
+ // + // Values is a required field + Values []*string `min:"1" type:"list" required:"true"` +} + +// String returns the string representation +func (s RunCommandTarget) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RunCommandTarget) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *RunCommandTarget) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RunCommandTarget"} + if s.Key == nil { + invalidParams.Add(request.NewErrParamRequired("Key")) + } + if s.Key != nil && len(*s.Key) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Key", 1)) + } + if s.Values == nil { + invalidParams.Add(request.NewErrParamRequired("Values")) + } + if s.Values != nil && len(s.Values) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Values", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetKey sets the Key field's value. +func (s *RunCommandTarget) SetKey(v string) *RunCommandTarget { + s.Key = &v + return s +} + +// SetValues sets the Values field's value. +func (s *RunCommandTarget) SetValues(v []*string) *RunCommandTarget { + s.Values = v + return s +} + +// Targets are the resources to be invoked when a rule is triggered. Target +// types include EC2 instances, AWS Lambda functions, Amazon Kinesis streams, +// Amazon ECS tasks, AWS Step Functions state machines, Run Command, and built-in +// targets. // Please also see https://docs.aws.amazon.com/goto/WebAPI/events-2015-10-07/Target type Target struct { _ struct{} `type:"structure"` - // The Amazon Resource Name (ARN) associated of the target. + // The Amazon Resource Name (ARN) of the target. // // Arn is a required field Arn *string `min:"1" type:"string" required:"true"` - // The unique target assignment ID. + // Contains the Amazon ECS task definition and task count to be used, if the + // event target is an Amazon ECS task. For more information about Amazon ECS + // tasks, see Task Definitions (http://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_defintions.html) + // in the Amazon EC2 Container Service Developer Guide. + EcsParameters *EcsParameters `type:"structure"` + + // The ID of the target. // // Id is a required field Id *string `min:"1" type:"string" required:"true"` - // Valid JSON text passed to the target. For more information about JSON text, - // see The JavaScript Object Notation (JSON) Data Interchange Format (http://www.rfc-editor.org/rfc/rfc7159.txt). + // Valid JSON text passed to the target. In this case, nothing from the event + // itself is passed to the target. For more information, see The JavaScript + // Object Notation (JSON) Data Interchange Format (http://www.rfc-editor.org/rfc/rfc7159.txt). Input *string `type:"string"` // The value of the JSONPath that is used for extracting part of the matched // event when passing it to the target. For more information about JSON paths, // see JSONPath (http://goessner.net/articles/JsonPath/). InputPath *string `type:"string"` + + // Settings to enable you to provide custom input to a target based on certain + // event data. You can extract one or more key-value pairs from the event and + // then use that data to send customized input to the target. + InputTransformer *InputTransformer `type:"structure"` + + // The custom parameter you can use to control shard assignment, when the target + // is an Amazon Kinesis stream. 
If you do not include this parameter, the default + // is to use the eventId as the partition key. + KinesisParameters *KinesisParameters `type:"structure"` + + // The Amazon Resource Name (ARN) of the IAM role to be used for this target + // when the rule is triggered. If one rule triggers multiple targets, you can + // use a different IAM role for each target. + RoleArn *string `min:"1" type:"string"` + + // Parameters used when you are using the rule to invoke Amazon EC2 Run Command. + RunCommandParameters *RunCommandParameters `type:"structure"` } // String returns the string representation @@ -2253,6 +2532,29 @@ func (s *Target) Validate() error { if s.Id != nil && len(*s.Id) < 1 { invalidParams.Add(request.NewErrParamMinLen("Id", 1)) } + if s.RoleArn != nil && len(*s.RoleArn) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RoleArn", 1)) + } + if s.EcsParameters != nil { + if err := s.EcsParameters.Validate(); err != nil { + invalidParams.AddNested("EcsParameters", err.(request.ErrInvalidParams)) + } + } + if s.InputTransformer != nil { + if err := s.InputTransformer.Validate(); err != nil { + invalidParams.AddNested("InputTransformer", err.(request.ErrInvalidParams)) + } + } + if s.KinesisParameters != nil { + if err := s.KinesisParameters.Validate(); err != nil { + invalidParams.AddNested("KinesisParameters", err.(request.ErrInvalidParams)) + } + } + if s.RunCommandParameters != nil { + if err := s.RunCommandParameters.Validate(); err != nil { + invalidParams.AddNested("RunCommandParameters", err.(request.ErrInvalidParams)) + } + } if invalidParams.Len() > 0 { return invalidParams @@ -2266,6 +2568,12 @@ func (s *Target) SetArn(v string) *Target { return s } +// SetEcsParameters sets the EcsParameters field's value. +func (s *Target) SetEcsParameters(v *EcsParameters) *Target { + s.EcsParameters = v + return s +} + // SetId sets the Id field's value. func (s *Target) SetId(v string) *Target { s.Id = &v @@ -2284,17 +2592,40 @@ func (s *Target) SetInputPath(v string) *Target { return s } -// Container for the parameters to the TestEventPattern operation. +// SetInputTransformer sets the InputTransformer field's value. +func (s *Target) SetInputTransformer(v *InputTransformer) *Target { + s.InputTransformer = v + return s +} + +// SetKinesisParameters sets the KinesisParameters field's value. +func (s *Target) SetKinesisParameters(v *KinesisParameters) *Target { + s.KinesisParameters = v + return s +} + +// SetRoleArn sets the RoleArn field's value. +func (s *Target) SetRoleArn(v string) *Target { + s.RoleArn = &v + return s +} + +// SetRunCommandParameters sets the RunCommandParameters field's value. +func (s *Target) SetRunCommandParameters(v *RunCommandParameters) *Target { + s.RunCommandParameters = v + return s +} + // Please also see https://docs.aws.amazon.com/goto/WebAPI/events-2015-10-07/TestEventPatternRequest type TestEventPatternInput struct { _ struct{} `type:"structure"` - // The event in the JSON format to test against the event pattern. + // The event, in JSON format, to test against the event pattern. // // Event is a required field Event *string `type:"string" required:"true"` - // The event pattern you want to test. + // The event pattern. // // EventPattern is a required field EventPattern *string `type:"string" required:"true"` @@ -2338,7 +2669,6 @@ func (s *TestEventPatternInput) SetEventPattern(v string) *TestEventPatternInput return s } -// The result of the TestEventPattern operation. 
// Please also see https://docs.aws.amazon.com/goto/WebAPI/events-2015-10-07/TestEventPatternResponse type TestEventPatternOutput struct { _ struct{} `type:"structure"` diff --git a/vendor/github.com/aws/aws-sdk-go/service/cloudwatchevents/errors.go b/vendor/github.com/aws/aws-sdk-go/service/cloudwatchevents/errors.go index f0d17b88d5..fe9ecb8f8c 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/cloudwatchevents/errors.go +++ b/vendor/github.com/aws/aws-sdk-go/service/cloudwatchevents/errors.go @@ -7,7 +7,7 @@ const ( // ErrCodeConcurrentModificationException for service response error code // "ConcurrentModificationException". // - // This exception occurs if there is concurrent modification on rule or target. + // There is concurrent modification on a rule or target. ErrCodeConcurrentModificationException = "ConcurrentModificationException" // ErrCodeInternalException for service response error code @@ -19,14 +19,13 @@ const ( // ErrCodeInvalidEventPatternException for service response error code // "InvalidEventPatternException". // - // The event pattern is invalid. + // The event pattern is not valid. ErrCodeInvalidEventPatternException = "InvalidEventPatternException" // ErrCodeLimitExceededException for service response error code // "LimitExceededException". // - // This exception occurs if you try to create more rules or add more targets - // to a rule than allowed by default. + // You tried to create more rules or add more targets to a rule than is allowed. ErrCodeLimitExceededException = "LimitExceededException" // ErrCodeResourceNotFoundException for service response error code diff --git a/vendor/github.com/aws/aws-sdk-go/service/cloudwatchevents/service.go b/vendor/github.com/aws/aws-sdk-go/service/cloudwatchevents/service.go index 1e814137b0..569d91ed8a 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/cloudwatchevents/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/cloudwatchevents/service.go @@ -12,7 +12,7 @@ import ( ) // Amazon CloudWatch Events helps you to respond to state changes in your AWS -// resources. When your resources change state they automatically send events +// resources. When your resources change state, they automatically send events // into an event stream. You can create rules that match selected events in // the stream and route them to targets to take action. You can also use rules // to take action on a pre-determined schedule. For example, you can configure @@ -23,10 +23,12 @@ import ( // // * Direct specific API records from CloudTrail to an Amazon Kinesis stream // for detailed analysis of potential security or availability risks. +// // * Periodically invoke a built-in target to create a snapshot of an Amazon // EBS volume. -// For more information about Amazon CloudWatch Events features, see the Amazon -// CloudWatch Developer Guide (http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide). +// +// For more information about the features of Amazon CloudWatch Events, see +// the Amazon CloudWatch Events User Guide (http://docs.aws.amazon.com/AmazonCloudWatch/latest/events). // The service client's operations are safe to be used concurrently. // It is not safe to mutate any of the client's properties though. 
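// A minimal usage sketch, offered for orientation only (it is not part of
// the generated documentation): assuming default credentials, a session, and
// imports of the aws, session, fmt, and cloudwatchevents packages, a client
// can submit a custom event with PutEvents. The source, detail type, and
// JSON payload below are hypothetical.
//
//    sess := session.Must(session.NewSession())
//    svc := cloudwatchevents.New(sess)
//    out, err := svc.PutEvents(&cloudwatchevents.PutEventsInput{
//        Entries: []*cloudwatchevents.PutEventsRequestEntry{{
//            Source:     aws.String("com.example.myapp"), // hypothetical source
//            DetailType: aws.String("exampleDetailType"),
//            Detail:     aws.String(`{"state":"running"}`),
//        }},
//    })
//    if err == nil {
//        // Entries with a non-nil ErrorCode failed to be ingested.
//        fmt.Println("failed entries:", aws.Int64Value(out.FailedEntryCount))
//    }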
// Please also see https://docs.aws.amazon.com/goto/WebAPI/events-2015-10-07 diff --git a/vendor/github.com/aws/aws-sdk-go/service/codedeploy/api.go b/vendor/github.com/aws/aws-sdk-go/service/codedeploy/api.go index 8b4986235d..1004323da9 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/codedeploy/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/codedeploy/api.go @@ -1921,6 +1921,12 @@ func (c *CodeDeploy) ListApplicationRevisionsRequest(input *ListApplicationRevis Name: opListApplicationRevisions, HTTPMethod: "POST", HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"nextToken"}, + OutputTokens: []string{"nextToken"}, + LimitToken: "", + TruncationToken: "", + }, } if input == nil { @@ -1982,6 +1988,31 @@ func (c *CodeDeploy) ListApplicationRevisions(input *ListApplicationRevisionsInp return out, err } +// ListApplicationRevisionsPages iterates over the pages of a ListApplicationRevisions operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListApplicationRevisions method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListApplicationRevisions operation. +// pageNum := 0 +// err := client.ListApplicationRevisionsPages(params, +// func(page *ListApplicationRevisionsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *CodeDeploy) ListApplicationRevisionsPages(input *ListApplicationRevisionsInput, fn func(p *ListApplicationRevisionsOutput, lastPage bool) (shouldContinue bool)) error { + page, _ := c.ListApplicationRevisionsRequest(input) + page.Handlers.Build.PushBack(request.MakeAddToUserAgentFreeFormHandler("Paginator")) + return page.EachPage(func(p interface{}, lastPage bool) bool { + return fn(p.(*ListApplicationRevisionsOutput), lastPage) + }) +} + const opListApplications = "ListApplications" // ListApplicationsRequest generates a "aws/request.Request" representing the @@ -2014,6 +2045,12 @@ func (c *CodeDeploy) ListApplicationsRequest(input *ListApplicationsInput) (req Name: opListApplications, HTTPMethod: "POST", HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"nextToken"}, + OutputTokens: []string{"nextToken"}, + LimitToken: "", + TruncationToken: "", + }, } if input == nil { @@ -2047,6 +2084,31 @@ func (c *CodeDeploy) ListApplications(input *ListApplicationsInput) (*ListApplic return out, err } +// ListApplicationsPages iterates over the pages of a ListApplications operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListApplications method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListApplications operation. 
+// pageNum := 0 +// err := client.ListApplicationsPages(params, +// func(page *ListApplicationsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *CodeDeploy) ListApplicationsPages(input *ListApplicationsInput, fn func(p *ListApplicationsOutput, lastPage bool) (shouldContinue bool)) error { + page, _ := c.ListApplicationsRequest(input) + page.Handlers.Build.PushBack(request.MakeAddToUserAgentFreeFormHandler("Paginator")) + return page.EachPage(func(p interface{}, lastPage bool) bool { + return fn(p.(*ListApplicationsOutput), lastPage) + }) +} + const opListDeploymentConfigs = "ListDeploymentConfigs" // ListDeploymentConfigsRequest generates a "aws/request.Request" representing the @@ -2079,6 +2141,12 @@ func (c *CodeDeploy) ListDeploymentConfigsRequest(input *ListDeploymentConfigsIn Name: opListDeploymentConfigs, HTTPMethod: "POST", HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"nextToken"}, + OutputTokens: []string{"nextToken"}, + LimitToken: "", + TruncationToken: "", + }, } if input == nil { @@ -2112,6 +2180,31 @@ func (c *CodeDeploy) ListDeploymentConfigs(input *ListDeploymentConfigsInput) (* return out, err } +// ListDeploymentConfigsPages iterates over the pages of a ListDeploymentConfigs operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListDeploymentConfigs method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListDeploymentConfigs operation. +// pageNum := 0 +// err := client.ListDeploymentConfigsPages(params, +// func(page *ListDeploymentConfigsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *CodeDeploy) ListDeploymentConfigsPages(input *ListDeploymentConfigsInput, fn func(p *ListDeploymentConfigsOutput, lastPage bool) (shouldContinue bool)) error { + page, _ := c.ListDeploymentConfigsRequest(input) + page.Handlers.Build.PushBack(request.MakeAddToUserAgentFreeFormHandler("Paginator")) + return page.EachPage(func(p interface{}, lastPage bool) bool { + return fn(p.(*ListDeploymentConfigsOutput), lastPage) + }) +} + const opListDeploymentGroups = "ListDeploymentGroups" // ListDeploymentGroupsRequest generates a "aws/request.Request" representing the @@ -2144,6 +2237,12 @@ func (c *CodeDeploy) ListDeploymentGroupsRequest(input *ListDeploymentGroupsInpu Name: opListDeploymentGroups, HTTPMethod: "POST", HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"nextToken"}, + OutputTokens: []string{"nextToken"}, + LimitToken: "", + TruncationToken: "", + }, } if input == nil { @@ -2187,6 +2286,31 @@ func (c *CodeDeploy) ListDeploymentGroups(input *ListDeploymentGroupsInput) (*Li return out, err } +// ListDeploymentGroupsPages iterates over the pages of a ListDeploymentGroups operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListDeploymentGroups method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListDeploymentGroups operation. 
+// pageNum := 0 +// err := client.ListDeploymentGroupsPages(params, +// func(page *ListDeploymentGroupsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *CodeDeploy) ListDeploymentGroupsPages(input *ListDeploymentGroupsInput, fn func(p *ListDeploymentGroupsOutput, lastPage bool) (shouldContinue bool)) error { + page, _ := c.ListDeploymentGroupsRequest(input) + page.Handlers.Build.PushBack(request.MakeAddToUserAgentFreeFormHandler("Paginator")) + return page.EachPage(func(p interface{}, lastPage bool) bool { + return fn(p.(*ListDeploymentGroupsOutput), lastPage) + }) +} + const opListDeploymentInstances = "ListDeploymentInstances" // ListDeploymentInstancesRequest generates a "aws/request.Request" representing the @@ -2219,6 +2343,12 @@ func (c *CodeDeploy) ListDeploymentInstancesRequest(input *ListDeploymentInstanc Name: opListDeploymentInstances, HTTPMethod: "POST", HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"nextToken"}, + OutputTokens: []string{"nextToken"}, + LimitToken: "", + TruncationToken: "", + }, } if input == nil { @@ -2273,6 +2403,31 @@ func (c *CodeDeploy) ListDeploymentInstances(input *ListDeploymentInstancesInput return out, err } +// ListDeploymentInstancesPages iterates over the pages of a ListDeploymentInstances operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListDeploymentInstances method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListDeploymentInstances operation. +// pageNum := 0 +// err := client.ListDeploymentInstancesPages(params, +// func(page *ListDeploymentInstancesOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *CodeDeploy) ListDeploymentInstancesPages(input *ListDeploymentInstancesInput, fn func(p *ListDeploymentInstancesOutput, lastPage bool) (shouldContinue bool)) error { + page, _ := c.ListDeploymentInstancesRequest(input) + page.Handlers.Build.PushBack(request.MakeAddToUserAgentFreeFormHandler("Paginator")) + return page.EachPage(func(p interface{}, lastPage bool) bool { + return fn(p.(*ListDeploymentInstancesOutput), lastPage) + }) +} + const opListDeployments = "ListDeployments" // ListDeploymentsRequest generates a "aws/request.Request" representing the @@ -2305,6 +2460,12 @@ func (c *CodeDeploy) ListDeploymentsRequest(input *ListDeploymentsInput) (req *r Name: opListDeployments, HTTPMethod: "POST", HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"nextToken"}, + OutputTokens: []string{"nextToken"}, + LimitToken: "", + TruncationToken: "", + }, } if input == nil { @@ -2364,6 +2525,31 @@ func (c *CodeDeploy) ListDeployments(input *ListDeploymentsInput) (*ListDeployme return out, err } +// ListDeploymentsPages iterates over the pages of a ListDeployments operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListDeployments method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListDeployments operation. 
+// pageNum := 0 +// err := client.ListDeploymentsPages(params, +// func(page *ListDeploymentsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *CodeDeploy) ListDeploymentsPages(input *ListDeploymentsInput, fn func(p *ListDeploymentsOutput, lastPage bool) (shouldContinue bool)) error { + page, _ := c.ListDeploymentsRequest(input) + page.Handlers.Build.PushBack(request.MakeAddToUserAgentFreeFormHandler("Paginator")) + return page.EachPage(func(p interface{}, lastPage bool) bool { + return fn(p.(*ListDeploymentsOutput), lastPage) + }) +} + const opListOnPremisesInstances = "ListOnPremisesInstances" // ListOnPremisesInstancesRequest generates a "aws/request.Request" representing the diff --git a/vendor/github.com/aws/aws-sdk-go/service/emr/api.go b/vendor/github.com/aws/aws-sdk-go/service/emr/api.go index f036a451a7..169ba4e3a4 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/emr/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/emr/api.go @@ -13,6 +13,77 @@ import ( "github.com/aws/aws-sdk-go/private/protocol/jsonrpc" ) +const opAddInstanceFleet = "AddInstanceFleet" + +// AddInstanceFleetRequest generates a "aws/request.Request" representing the +// client's request for the AddInstanceFleet operation. The "output" return +// value can be used to capture response data after the request's "Send" method +// is called. +// +// See AddInstanceFleet for usage and error information. +// +// Creating a request object using this method should be used when you want to inject +// custom logic into the request's lifecycle using a custom handler, or if you want to +// access properties on the request object before or after sending the request. If +// you just want the service response, call the AddInstanceFleet method directly +// instead. +// +// Note: You must call the "Send" method on the returned request object in order +// to execute the request. +// +// // Example sending a request using the AddInstanceFleetRequest method. +// req, resp := client.AddInstanceFleetRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// Please also see https://docs.aws.amazon.com/goto/WebAPI/elasticmapreduce-2009-03-31/AddInstanceFleet +func (c *EMR) AddInstanceFleetRequest(input *AddInstanceFleetInput) (req *request.Request, output *AddInstanceFleetOutput) { + op := &request.Operation{ + Name: opAddInstanceFleet, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &AddInstanceFleetInput{} + } + + output = &AddInstanceFleetOutput{} + req = c.newRequest(op, input, output) + return +} + +// AddInstanceFleet API operation for Amazon Elastic MapReduce. +// +// Adds an instance fleet to a running cluster. +// +// The instance fleet configuration is available only in Amazon EMR versions +// 4.8.0 and later, excluding 5.0.x. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elastic MapReduce's +// API operation AddInstanceFleet for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerException "InternalServerException" +// This exception occurs when there is an internal failure in the EMR service. +// +// * ErrCodeInvalidRequestException "InvalidRequestException" +// This exception occurs when there is something wrong with user input. 
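+// A hedged usage sketch (not generated documentation): the cluster ID and
+// fleet settings below are placeholders, and the example assumes an *EMR
+// client "svc" plus imports of the aws and fmt packages.
+//
+//    out, err := svc.AddInstanceFleet(&emr.AddInstanceFleetInput{
+//        ClusterId: aws.String("j-EXAMPLEID"), // placeholder cluster ID
+//        InstanceFleet: &emr.InstanceFleetConfig{
+//            InstanceFleetType:      aws.String("TASK"),
+//            TargetOnDemandCapacity: aws.Int64(2),
+//        },
+//    })
+//    if err == nil {
+//        fmt.Println(aws.StringValue(out.InstanceFleetId))
+//    }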
+// +// Please also see https://docs.aws.amazon.com/goto/WebAPI/elasticmapreduce-2009-03-31/AddInstanceFleet +func (c *EMR) AddInstanceFleet(input *AddInstanceFleetInput) (*AddInstanceFleetOutput, error) { + req, out := c.AddInstanceFleetRequest(input) + err := req.Send() + return out, err +} + const opAddInstanceGroups = "AddInstanceGroups" // AddInstanceGroupsRequest generates a "aws/request.Request" representing the @@ -124,19 +195,19 @@ func (c *EMR) AddJobFlowStepsRequest(input *AddJobFlowStepsInput) (req *request. // AddJobFlowSteps API operation for Amazon Elastic MapReduce. // -// AddJobFlowSteps adds new steps to a running job flow. A maximum of 256 steps +// AddJobFlowSteps adds new steps to a running cluster. A maximum of 256 steps // are allowed in each job flow. // -// If your job flow is long-running (such as a Hive data warehouse) or complex, +// If your cluster is long-running (such as a Hive data warehouse) or complex, // you may require more than 256 steps to process your data. You can bypass -// the 256-step limitation in various ways, including using the SSH shell to -// connect to the master node and submitting queries directly to the software -// running on the master node, such as Hive and Hadoop. For more information -// on how to do this, see Add More than 256 Steps to a Job Flow (http://docs.aws.amazon.com/ElasticMapReduce/latest/DeveloperGuide/AddMoreThan256Steps.html) -// in the Amazon EMR Developer's Guide. +// the 256-step limitation in various ways, including using SSH to connect to +// the master node and submitting queries directly to the software running on +// the master node, such as Hive and Hadoop. For more information on how to +// do this, see Add More than 256 Steps to a Cluster (http://docs.aws.amazon.com/ElasticMapReduce/latest/ManagementGuide/AddMoreThan256Steps.html) +// in the Amazon EMR Management Guide. // // A step specifies the location of a JAR file stored either on the master node -// of the job flow or in Amazon S3. Each step is performed by the main function +// of the cluster or in Amazon S3. Each step is performed by the main function // of the main class of the JAR file. The main class can be specified either // in the manifest of the JAR or by using the MainFunction parameter of the // step. @@ -145,7 +216,7 @@ func (c *EMR) AddJobFlowStepsRequest(input *AddJobFlowStepsInput) (req *request. // complete, the main function must exit with a zero exit code and all Hadoop // jobs started while the step was running must have completed and run successfully. // -// You can only add steps to a job flow that is in one of the following states: +// You can only add steps to a cluster that is in one of the following states: // STARTING, BOOTSTRAPPING, RUNNING, or WAITING. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -944,6 +1015,108 @@ func (c *EMR) ListClustersPages(input *ListClustersInput, fn func(p *ListCluster }) } +const opListInstanceFleets = "ListInstanceFleets" + +// ListInstanceFleetsRequest generates a "aws/request.Request" representing the +// client's request for the ListInstanceFleets operation. The "output" return +// value can be used to capture response data after the request's "Send" method +// is called. +// +// See ListInstanceFleets for usage and error information. 
+// +// Creating a request object using this method should be used when you want to inject +// custom logic into the request's lifecycle using a custom handler, or if you want to +// access properties on the request object before or after sending the request. If +// you just want the service response, call the ListInstanceFleets method directly +// instead. +// +// Note: You must call the "Send" method on the returned request object in order +// to execute the request. +// +// // Example sending a request using the ListInstanceFleetsRequest method. +// req, resp := client.ListInstanceFleetsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// Please also see https://docs.aws.amazon.com/goto/WebAPI/elasticmapreduce-2009-03-31/ListInstanceFleets +func (c *EMR) ListInstanceFleetsRequest(input *ListInstanceFleetsInput) (req *request.Request, output *ListInstanceFleetsOutput) { + op := &request.Operation{ + Name: opListInstanceFleets, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"Marker"}, + OutputTokens: []string{"Marker"}, + LimitToken: "", + TruncationToken: "", + }, + } + + if input == nil { + input = &ListInstanceFleetsInput{} + } + + output = &ListInstanceFleetsOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListInstanceFleets API operation for Amazon Elastic MapReduce. +// +// Lists all available details about the instance fleets in a cluster. +// +// The instance fleet configuration is available only in Amazon EMR versions +// 4.8.0 and later, excluding 5.0.x versions. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elastic MapReduce's +// API operation ListInstanceFleets for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerException "InternalServerException" +// This exception occurs when there is an internal failure in the EMR service. +// +// * ErrCodeInvalidRequestException "InvalidRequestException" +// This exception occurs when there is something wrong with user input. +// +// Please also see https://docs.aws.amazon.com/goto/WebAPI/elasticmapreduce-2009-03-31/ListInstanceFleets +func (c *EMR) ListInstanceFleets(input *ListInstanceFleetsInput) (*ListInstanceFleetsOutput, error) { + req, out := c.ListInstanceFleetsRequest(input) + err := req.Send() + return out, err +} + +// ListInstanceFleetsPages iterates over the pages of a ListInstanceFleets operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListInstanceFleets method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListInstanceFleets operation. 
+// pageNum := 0 +// err := client.ListInstanceFleetsPages(params, +// func(page *ListInstanceFleetsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *EMR) ListInstanceFleetsPages(input *ListInstanceFleetsInput, fn func(p *ListInstanceFleetsOutput, lastPage bool) (shouldContinue bool)) error { + page, _ := c.ListInstanceFleetsRequest(input) + page.Handlers.Build.PushBack(request.MakeAddToUserAgentFreeFormHandler("Paginator")) + return page.EachPage(func(p interface{}, lastPage bool) bool { + return fn(p.(*ListInstanceFleetsOutput), lastPage) + }) +} + const opListInstanceGroups = "ListInstanceGroups" // ListInstanceGroupsRequest generates a "aws/request.Request" representing the @@ -1317,6 +1490,81 @@ func (c *EMR) ListStepsPages(input *ListStepsInput, fn func(p *ListStepsOutput, }) } +const opModifyInstanceFleet = "ModifyInstanceFleet" + +// ModifyInstanceFleetRequest generates a "aws/request.Request" representing the +// client's request for the ModifyInstanceFleet operation. The "output" return +// value can be used to capture response data after the request's "Send" method +// is called. +// +// See ModifyInstanceFleet for usage and error information. +// +// Creating a request object using this method should be used when you want to inject +// custom logic into the request's lifecycle using a custom handler, or if you want to +// access properties on the request object before or after sending the request. If +// you just want the service response, call the ModifyInstanceFleet method directly +// instead. +// +// Note: You must call the "Send" method on the returned request object in order +// to execute the request. +// +// // Example sending a request using the ModifyInstanceFleetRequest method. +// req, resp := client.ModifyInstanceFleetRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// Please also see https://docs.aws.amazon.com/goto/WebAPI/elasticmapreduce-2009-03-31/ModifyInstanceFleet +func (c *EMR) ModifyInstanceFleetRequest(input *ModifyInstanceFleetInput) (req *request.Request, output *ModifyInstanceFleetOutput) { + op := &request.Operation{ + Name: opModifyInstanceFleet, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ModifyInstanceFleetInput{} + } + + output = &ModifyInstanceFleetOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// ModifyInstanceFleet API operation for Amazon Elastic MapReduce. +// +// Modifies the target On-Demand and target Spot capacities for the instance +// fleet with the specified InstanceFleetID within the cluster specified using +// ClusterID. The call either succeeds or fails atomically. +// +// The instance fleet configuration is available only in Amazon EMR versions +// 4.8.0 and later, excluding 5.0.x versions. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elastic MapReduce's +// API operation ModifyInstanceFleet for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerException "InternalServerException" +// This exception occurs when there is an internal failure in the EMR service. 
+// +// * ErrCodeInvalidRequestException "InvalidRequestException" +// This exception occurs when there is something wrong with user input. +// +// Please also see https://docs.aws.amazon.com/goto/WebAPI/elasticmapreduce-2009-03-31/ModifyInstanceFleet +func (c *EMR) ModifyInstanceFleet(input *ModifyInstanceFleetInput) (*ModifyInstanceFleetOutput, error) { + req, out := c.ModifyInstanceFleetRequest(input) + err := req.Send() + return out, err +} + const opModifyInstanceGroups = "ModifyInstanceGroups" // ModifyInstanceGroupsRequest generates a "aws/request.Request" representing the @@ -1630,30 +1878,34 @@ func (c *EMR) RunJobFlowRequest(input *RunJobFlowInput) (req *request.Request, o // RunJobFlow API operation for Amazon Elastic MapReduce. // -// RunJobFlow creates and starts running a new job flow. The job flow will run -// the steps specified. After the job flow completes, the cluster is stopped -// and the HDFS partition is lost. To prevent loss of data, configure the last -// step of the job flow to store results in Amazon S3. If the JobFlowInstancesConfigKeepJobFlowAliveWhenNoSteps -// parameter is set to TRUE, the job flow will transition to the WAITING state -// rather than shutting down after the steps have completed. +// RunJobFlow creates and starts running a new cluster (job flow). The cluster +// runs the steps specified. After the steps complete, the cluster stops and +// the HDFS partition is lost. To prevent loss of data, configure the last step +// of the job flow to store results in Amazon S3. If the JobFlowInstancesConfigKeepJobFlowAliveWhenNoSteps +// parameter is set to TRUE, the cluster transitions to the WAITING state rather +// than shutting down after the steps have completed. // // For additional protection, you can set the JobFlowInstancesConfigTerminationProtected -// parameter to TRUE to lock the job flow and prevent it from being terminated +// parameter to TRUE to lock the cluster and prevent it from being terminated // by API call, user intervention, or in the event of a job flow error. // // A maximum of 256 steps are allowed in each job flow. // -// If your job flow is long-running (such as a Hive data warehouse) or complex, +// If your cluster is long-running (such as a Hive data warehouse) or complex, // you may require more than 256 steps to process your data. You can bypass // the 256-step limitation in various ways, including using the SSH shell to // connect to the master node and submitting queries directly to the software // running on the master node, such as Hive and Hadoop. For more information -// on how to do this, see Add More than 256 Steps to a Job Flow (http://docs.aws.amazon.com/ElasticMapReduce/latest/Management/Guide/AddMoreThan256Steps.html) +// on how to do this, see Add More than 256 Steps to a Cluster (http://docs.aws.amazon.com/ElasticMapReduce/latest/Management/Guide/AddMoreThan256Steps.html) // in the Amazon EMR Management Guide. // -// For long running job flows, we recommend that you periodically store your +// For long running clusters, we recommend that you periodically store your // results. // +// The instance fleets configuration is available only in Amazon EMR versions +// 4.8.0 and later, excluding 5.0.x versions. The RunJobFlow request can contain +// InstanceFleets parameters or InstanceGroups parameters, but not both. +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. 
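The note just added to the RunJobFlow documentation says a request may carry InstanceFleets parameters or InstanceGroups parameters, but not both. A minimal sketch of the fleets form follows; it assumes the instance-fleet fields added elsewhere in this vendored SDK, plus imports of the aws, session, fmt, and emr packages, and the cluster name, roles, and release label are placeholders:

    svc := emr.New(session.Must(session.NewSession()))
    resp, err := svc.RunJobFlow(&emr.RunJobFlowInput{
        Name:         aws.String("example-fleet-cluster"), // placeholder name
        ReleaseLabel: aws.String("emr-5.4.0"),             // fleets require 4.8.0+, excluding 5.0.x
        ServiceRole:  aws.String("EMR_DefaultRole"),
        JobFlowRole:  aws.String("EMR_EC2_DefaultRole"),
        Instances: &emr.JobFlowInstancesConfig{
            // InstanceFleets and InstanceGroups are mutually exclusive here.
            InstanceFleets: []*emr.InstanceFleetConfig{{
                InstanceFleetType:      aws.String("MASTER"),
                TargetOnDemandCapacity: aws.Int64(1),
            }},
            KeepJobFlowAliveWhenNoSteps: aws.Bool(true),
        },
    })
    if err == nil {
        fmt.Println(aws.StringValue(resp.JobFlowId))
    }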
@@ -1720,23 +1972,23 @@ func (c *EMR) SetTerminationProtectionRequest(input *SetTerminationProtectionInp
 // SetTerminationProtection API operation for Amazon Elastic MapReduce.
 //
-// SetTerminationProtection locks a job flow so the EC2 instances in the cluster
-// cannot be terminated by user intervention, an API call, or in the event of
-// a job-flow error. The cluster still terminates upon successful completion
-// of the job flow. Calling SetTerminationProtection on a job flow is analogous
-// to calling the Amazon EC2 DisableAPITermination API on all of the EC2 instances
-// in a cluster.
+// SetTerminationProtection locks a cluster (job flow) so the EC2 instances
+// in the cluster cannot be terminated by user intervention, an API call, or
+// in the event of a job-flow error. The cluster still terminates upon successful
+// completion of the job flow. Calling SetTerminationProtection on a cluster
+// is similar to calling the Amazon EC2 DisableAPITermination API on all EC2
+// instances in a cluster.
 //
-// SetTerminationProtection is used to prevent accidental termination of a job
-// flow and to ensure that in the event of an error, the instances will persist
-// so you can recover any data stored in their ephemeral instance storage.
+// SetTerminationProtection is used to prevent accidental termination of a cluster
+// and to ensure that in the event of an error, the instances persist so that
+// you can recover any data stored in their ephemeral instance storage.
 //
-// To terminate a job flow that has been locked by setting SetTerminationProtection
+// To terminate a cluster that has been locked by setting SetTerminationProtection
 // to true, you must first unlock the job flow by a subsequent call to SetTerminationProtection
 // in which you set the value to false.
 //
-// For more information, seeProtecting a Job Flow from Termination (http://docs.aws.amazon.com/ElasticMapReduce/latest/DeveloperGuide/UsingEMR_TerminationProtection.html)
-// in the Amazon EMR Guide.
+// For more information, see Managing Cluster Termination (http://docs.aws.amazon.com/emr/latest/ManagementGuide/UsingEMR_TerminationProtection.html)
+// in the Amazon EMR Management Guide.
 //
 // Returns awserr.Error for service API and SDK errors. Use runtime type assertions
 // with awserr.Error's Code and Message methods to get detailed information about
@@ -1805,11 +2057,11 @@ func (c *EMR) SetVisibleToAllUsersRequest(input *SetVisibleToAllUsersInput) (req
 // SetVisibleToAllUsers API operation for Amazon Elastic MapReduce.
 //
 // Sets whether all AWS Identity and Access Management (IAM) users under your
-// account can access the specified job flows. This action works on running
-// job flows. You can also set the visibility of a job flow when you launch
-// it using the VisibleToAllUsers parameter of RunJobFlow. The SetVisibleToAllUsers
-// action can be called only by an IAM user who created the job flow or the
-// AWS account that owns the job flow.
+// account can access the specified clusters (job flows). This action works
+// on running clusters. You can also set the visibility of a cluster when you
+// launch it using the VisibleToAllUsers parameter of RunJobFlow. The SetVisibleToAllUsers
+// action can be called only by an IAM user who created the cluster or the AWS
+// account that owns the cluster.
 //
 // Returns awserr.Error for service API and SDK errors.
Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -1877,14 +2129,15 @@ func (c *EMR) TerminateJobFlowsRequest(input *TerminateJobFlowsInput) (req *requ // TerminateJobFlows API operation for Amazon Elastic MapReduce. // -// TerminateJobFlows shuts a list of job flows down. When a job flow is shut -// down, any step not yet completed is canceled and the EC2 instances on which -// the job flow is running are stopped. Any log files not already saved are -// uploaded to Amazon S3 if a LogUri was specified when the job flow was created. +// TerminateJobFlows shuts a list of clusters (job flows) down. When a job flow +// is shut down, any step not yet completed is canceled and the EC2 instances +// on which the cluster is running are stopped. Any log files not already saved +// are uploaded to Amazon S3 if a LogUri was specified when the cluster was +// created. // -// The maximum number of JobFlows allowed is 10. The call to TerminateJobFlows -// is asynchronous. Depending on the configuration of the job flow, it may take -// up to 1-5 minutes for the job flow to completely terminate and release allocated +// The maximum number of clusters allowed is 10. The call to TerminateJobFlows +// is asynchronous. Depending on the configuration of the cluster, it may take +// up to 1-5 minutes for the cluster to completely terminate and release allocated // resources, such as Amazon EC2 instances. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -1906,6 +2159,97 @@ func (c *EMR) TerminateJobFlows(input *TerminateJobFlowsInput) (*TerminateJobFlo return out, err } +// Please also see https://docs.aws.amazon.com/goto/WebAPI/elasticmapreduce-2009-03-31/AddInstanceFleetInput +type AddInstanceFleetInput struct { + _ struct{} `type:"structure"` + + // The unique identifier of the cluster. + // + // ClusterId is a required field + ClusterId *string `type:"string" required:"true"` + + // Specifies the configuration of the instance fleet. + // + // InstanceFleet is a required field + InstanceFleet *InstanceFleetConfig `type:"structure" required:"true"` +} + +// String returns the string representation +func (s AddInstanceFleetInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AddInstanceFleetInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AddInstanceFleetInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AddInstanceFleetInput"} + if s.ClusterId == nil { + invalidParams.Add(request.NewErrParamRequired("ClusterId")) + } + if s.InstanceFleet == nil { + invalidParams.Add(request.NewErrParamRequired("InstanceFleet")) + } + if s.InstanceFleet != nil { + if err := s.InstanceFleet.Validate(); err != nil { + invalidParams.AddNested("InstanceFleet", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetClusterId sets the ClusterId field's value. +func (s *AddInstanceFleetInput) SetClusterId(v string) *AddInstanceFleetInput { + s.ClusterId = &v + return s +} + +// SetInstanceFleet sets the InstanceFleet field's value. 
+func (s *AddInstanceFleetInput) SetInstanceFleet(v *InstanceFleetConfig) *AddInstanceFleetInput { + s.InstanceFleet = v + return s +} + +// Please also see https://docs.aws.amazon.com/goto/WebAPI/elasticmapreduce-2009-03-31/AddInstanceFleetOutput +type AddInstanceFleetOutput struct { + _ struct{} `type:"structure"` + + // The unique identifier of the cluster. + ClusterId *string `type:"string"` + + // The unique identifier of the instance fleet. + InstanceFleetId *string `type:"string"` +} + +// String returns the string representation +func (s AddInstanceFleetOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AddInstanceFleetOutput) GoString() string { + return s.String() +} + +// SetClusterId sets the ClusterId field's value. +func (s *AddInstanceFleetOutput) SetClusterId(v string) *AddInstanceFleetOutput { + s.ClusterId = &v + return s +} + +// SetInstanceFleetId sets the InstanceFleetId field's value. +func (s *AddInstanceFleetOutput) SetInstanceFleetId(v string) *AddInstanceFleetOutput { + s.InstanceFleetId = &v + return s +} + // Input to an AddInstanceGroups call. // Please also see https://docs.aws.amazon.com/goto/WebAPI/elasticmapreduce-2009-03-31/AddInstanceGroupsInput type AddInstanceGroupsInput struct { @@ -2172,16 +2516,16 @@ func (s AddTagsOutput) GoString() string { // the cluster. This structure contains a list of strings that indicates the // software to use with the cluster and accepts a user argument list. Amazon // EMR accepts and forwards the argument list to the corresponding installation -// script as bootstrap action argument. For more information, see Launch a Job -// Flow on the MapR Distribution for Hadoop (http://docs.aws.amazon.com/ElasticMapReduce/latest/DeveloperGuide/emr-mapr.html). +// script as bootstrap action argument. For more information, see Using the +// MapR Distribution for Hadoop (http://docs.aws.amazon.com/ElasticMapReduce/latest/ManagementGuide/emr-mapr.html). // Currently supported values are: // -// * "mapr-m3" - launch the job flow using MapR M3 Edition. +// * "mapr-m3" - launch the cluster using MapR M3 Edition. // -// * "mapr-m5" - launch the job flow using MapR M5 Edition. +// * "mapr-m5" - launch the cluster using MapR M5 Edition. // // * "mapr" with the user arguments specifying "--edition,m3" or "--edition,m5" -// - launch the job flow using MapR M3 or M5 Edition, respectively. +// - launch the cluster using MapR M3 or M5 Edition, respectively. // // In Amazon EMR releases 4.0 and greater, the only accepted parameter is the // application name. To pass arguments to applications, you supply a configuration @@ -2368,7 +2712,7 @@ type AutoScalingPolicyStateChangeReason struct { // The code indicating the reason for the change in status.USER_REQUEST indicates // that the scaling policy status was changed by a user. PROVISION_FAILURE indicates // that the status change was because the policy failed to provision. CLEANUP_FAILURE - // indicates something unclean happened.--> + // indicates an error. Code *string `type:"string" enum:"AutoScalingPolicyStateChangeReasonCode"` // A friendly, more verbose message that accompanies an automatic scaling policy @@ -2403,6 +2747,7 @@ func (s *AutoScalingPolicyStateChangeReason) SetMessage(v string) *AutoScalingPo type AutoScalingPolicyStatus struct { _ struct{} `type:"structure"` + // Indicates the status of the automatic scaling policy. State *string `type:"string" enum:"AutoScalingPolicyState"` // The reason for a change in status. 
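
A sketch of how the AddInstanceFleetInput/AddInstanceFleetOutput pair above composes in practice, assuming the generated AddInstanceFleet client method that accompanies these types, default credentials, and placeholder IDs; the SpotProvisioningSpecification field names are taken from the type added elsewhere in this file and should be treated as assumptions here:

package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/emr"
)

func main() {
	svc := emr.New(session.Must(session.NewSession()))

	// A task fleet targeting 4 Spot units; after 60 minutes without Spot
	// capacity, provisioning switches to On-Demand.
	fleet := &emr.InstanceFleetConfig{
		Name:               aws.String("task-fleet"),
		InstanceFleetType:  aws.String("TASK"),
		TargetSpotCapacity: aws.Int64(4),
		InstanceTypeConfigs: []*emr.InstanceTypeConfig{
			{
				InstanceType:     aws.String("m3.xlarge"),
				WeightedCapacity: aws.Int64(2),
			},
		},
		LaunchSpecifications: &emr.InstanceFleetProvisioningSpecifications{
			SpotSpecification: &emr.SpotProvisioningSpecification{
				TimeoutAction:          aws.String("SWITCH_TO_ON_DEMAND"),
				TimeoutDurationMinutes: aws.Int64(60),
			},
		},
	}

	out, err := svc.AddInstanceFleet(&emr.AddInstanceFleetInput{
		ClusterId:     aws.String("j-XXXXXXXXXXXXX"), // hypothetical cluster ID
		InstanceFleet: fleet,
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("created instance fleet", aws.StringValue(out.InstanceFleetId))
}
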
@@ -2490,7 +2835,7 @@ func (s *BootstrapActionConfig) SetScriptBootstrapAction(v *ScriptBootstrapActio return s } -// Reports the configuration of a bootstrap action in a job flow. +// Reports the configuration of a bootstrap action in a cluster (job flow). // Please also see https://docs.aws.amazon.com/goto/WebAPI/elasticmapreduce-2009-03-31/BootstrapActionDetail type BootstrapActionDetail struct { _ struct{} `type:"structure"` @@ -2515,14 +2860,19 @@ func (s *BootstrapActionDetail) SetBootstrapActionConfig(v *BootstrapActionConfi return s } +// Specification of the status of a CancelSteps request. Available only in Amazon +// EMR version 4.8.0 and later, excluding version 5.0.0. // Please also see https://docs.aws.amazon.com/goto/WebAPI/elasticmapreduce-2009-03-31/CancelStepsInfo type CancelStepsInfo struct { _ struct{} `type:"structure"` + // The reason for the failure if the CancelSteps request fails. Reason *string `type:"string"` + // The status of a CancelSteps Request. The value may be SUBMITTED or FAILED. Status *string `type:"string" enum:"CancelStepsRequestStatus"` + // The encrypted StepId of a step. StepId *string `type:"string"` } @@ -2781,6 +3131,14 @@ type Cluster struct { // The unique identifier for the cluster. Id *string `type:"string"` + // The instance fleet configuration is available only in Amazon EMR versions + // 4.8.0 and later, excluding 5.0.x versions. + // + // The instance group configuration of the cluster. A value of INSTANCE_GROUP + // indicates a uniform instance group configuration. A value of INSTANCE_FLEET + // indicates an instance fleets configuration. + InstanceCollectionType *string `type:"string" enum:"InstanceCollectionType"` + // The path to the Amazon S3 location where logs for this cluster are stored. LogUri *string `type:"string"` @@ -2790,7 +3148,7 @@ type Cluster struct { // The name of the cluster. Name *string `type:"string"` - // An approximation of the cost of the job flow, represented in m1.small/hours. + // An approximation of the cost of the cluster, represented in m1.small/hours. // This value is incremented one time for every hour an m1.small instance runs. // Larger instances are weighted more, so an EC2 instance that is roughly four // times more expensive would result in the normalized instance hours being @@ -2840,9 +3198,9 @@ type Cluster struct { // of a cluster error. TerminationProtected *bool `type:"boolean"` - // Indicates whether the job flow is visible to all IAM users of the AWS account - // associated with the job flow. If this value is set to true, all IAM users - // of that AWS account can view and manage the job flow if they have the proper + // Indicates whether the cluster is visible to all IAM users of the AWS account + // associated with the cluster. If this value is set to true, all IAM users + // of that AWS account can view and manage the cluster if they have the proper // policy permissions set. If this value is false, only the IAM user that created // the cluster can view and manage it. This value can be changed using the SetVisibleToAllUsers // action. @@ -2895,6 +3253,12 @@ func (s *Cluster) SetId(v string) *Cluster { return s } +// SetInstanceCollectionType sets the InstanceCollectionType field's value. +func (s *Cluster) SetInstanceCollectionType(v string) *Cluster { + s.InstanceCollectionType = &v + return s +} + // SetLogUri sets the LogUri field's value. func (s *Cluster) SetLogUri(v string) *Cluster { s.LogUri = &v @@ -3068,7 +3432,7 @@ type ClusterSummary struct { // The name of the cluster. 
Name *string `type:"string"` - // An approximation of the cost of the job flow, represented in m1.small/hours. + // An approximation of the cost of the cluster, represented in m1.small/hours. // This value is incremented one time for every hour an m1.small instance runs. // Larger instances are weighted more, so an EC2 instance that is roughly four // times more expensive would result in the normalized instance hours being @@ -3202,23 +3566,23 @@ func (s *Command) SetScriptPath(v string) *Command { // Amazon EMR releases 4.x or later. // -// Specifies a hardware and software configuration of the EMR cluster. This -// includes configurations for applications and software bundled with Amazon -// EMR. The Configuration object is a JSON object which is defined by a classification -// and a set of properties. Configurations can be nested, so a configuration -// may have its own Configuration objects listed. +// An optional configuration specification to be used when provisioning cluster +// instances, which can include configurations for applications and software +// bundled with Amazon EMR. A configuration consists of a classification, properties, +// and optional nested configurations. A classification refers to an application-specific +// configuration file. Properties are the settings you want to change in that +// file. For more information, see Configuring Applications (http://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-configure-apps.html). // Please also see https://docs.aws.amazon.com/goto/WebAPI/elasticmapreduce-2009-03-31/Configuration type Configuration struct { _ struct{} `type:"structure"` - // The classification of a configuration. For more information see, Amazon EMR - // Configurations (http://docs.aws.amazon.com/ElasticMapReduce/latest/API/EmrConfigurations.html). + // The classification within a configuration. Classification *string `type:"string"` - // A list of configurations you apply to this configuration object. + // A list of additional configurations to apply within a configuration object. Configurations []*Configuration `type:"list"` - // A set of properties supplied to the Configuration object. + // A set of properties specified within a configuration classification. Properties map[string]*string `type:"map"` } @@ -3896,14 +4260,14 @@ type Ec2InstanceAttributes struct { // the master node as a user named "hadoop". Ec2KeyName *string `type:"string"` - // To launch the job flow in Amazon VPC, set this parameter to the identifier - // of the Amazon VPC subnet where you want the job flow to launch. If you do - // not specify this value, the job flow is launched in the normal AWS cloud, + // To launch the cluster in Amazon VPC, set this parameter to the identifier + // of the Amazon VPC subnet where you want the cluster to launch. If you do + // not specify this value, the cluster is launched in the normal AWS cloud, // outside of a VPC. // // Amazon VPC currently does not support cluster compute quadruple extra large // (cc1.4xlarge) instances. Thus, you cannot specify the cc1.4xlarge instance - // type for nodes of a job flow launched in a VPC. + // type for nodes of a cluster launched in a VPC. Ec2SubnetId *string `type:"string"` // The identifier of the Amazon EC2 security group for the master node. @@ -3912,10 +4276,26 @@ type Ec2InstanceAttributes struct { // The identifier of the Amazon EC2 security group for the slave nodes. EmrManagedSlaveSecurityGroup *string `type:"string"` - // The IAM role that was specified when the job flow was launched. 
The EC2 instances
-	// of the job flow assume this role.
+	// The IAM role that was specified when the cluster was launched. The EC2 instances
+	// of the cluster assume this role.
	IamInstanceProfile *string `type:"string"`

+	// Applies to clusters configured with the instance fleets option. The list
+	// of availability zones to choose from. The service will choose the availability
+	// zone with the best mix of available capacity and lowest cost to launch the
+	// cluster. If you do not specify this value, the cluster is launched in any
+	// availability zone that the customer account has access to.
+	RequestedEc2AvailabilityZones []*string `type:"list"`
+
+	// Applies to clusters configured with the instance fleets option. Specifies
+	// the unique identifier of one or more Amazon EC2 subnets in which to launch
+	// EC2 cluster instances. Amazon EMR chooses the EC2 subnet with the best performance
+	// and cost characteristics from among the list of RequestedEc2SubnetIds and
+	// launches all cluster instances within that subnet. If this value is not specified,
+	// and the account supports EC2-Classic networks, the cluster launches instances
+	// in the EC2-Classic network and uses RequestedEc2AvailabilityZones instead
+	// of this setting.
+	RequestedEc2SubnetIds []*string `type:"list"`
+
	// The identifier of the Amazon EC2 security group for the Amazon EMR service
	// to access clusters in VPC private subnets.
	ServiceAccessSecurityGroup *string `type:"string"`
@@ -3979,6 +4359,18 @@ func (s *Ec2InstanceAttributes) SetIamInstanceProfile(v string) *Ec2InstanceAttr
	return s
}

+// SetRequestedEc2AvailabilityZones sets the RequestedEc2AvailabilityZones field's value.
+func (s *Ec2InstanceAttributes) SetRequestedEc2AvailabilityZones(v []*string) *Ec2InstanceAttributes {
+	s.RequestedEc2AvailabilityZones = v
+	return s
+}
+
+// SetRequestedEc2SubnetIds sets the RequestedEc2SubnetIds field's value.
+func (s *Ec2InstanceAttributes) SetRequestedEc2SubnetIds(v []*string) *Ec2InstanceAttributes {
+	s.RequestedEc2SubnetIds = v
+	return s
+}
+
// SetServiceAccessSecurityGroup sets the ServiceAccessSecurityGroup field's value.
func (s *Ec2InstanceAttributes) SetServiceAccessSecurityGroup(v string) *Ec2InstanceAttributes {
	s.ServiceAccessSecurityGroup = &v
@@ -4177,9 +4569,18 @@ type Instance struct {
	// The unique identifier for the instance in Amazon EMR.
	Id *string `type:"string"`

+	// The unique identifier of the instance fleet to which an EC2 instance belongs.
+	InstanceFleetId *string `type:"string"`
+
	// The identifier of the instance group to which this instance belongs.
	InstanceGroupId *string `type:"string"`

+	// The EC2 instance type, for example m3.xlarge.
+	InstanceType *string `min:"1" type:"string"`
+
+	// The instance purchasing option. Valid values are ON_DEMAND or SPOT.
+	Market *string `type:"string" enum:"MarketType"`
+
	// The private DNS name of the instance.
	PrivateDnsName *string `type:"string"`

@@ -4224,12 +4625,30 @@ func (s *Instance) SetId(v string) *Instance {
	return s
}

+// SetInstanceFleetId sets the InstanceFleetId field's value.
+func (s *Instance) SetInstanceFleetId(v string) *Instance {
+	s.InstanceFleetId = &v
+	return s
+}
+
// SetInstanceGroupId sets the InstanceGroupId field's value.
func (s *Instance) SetInstanceGroupId(v string) *Instance {
	s.InstanceGroupId = &v
	return s
}

+// SetInstanceType sets the InstanceType field's value.
+func (s *Instance) SetInstanceType(v string) *Instance {
+	s.InstanceType = &v
+	return s
+}
+
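
The two Requested* attributes above are read-only outputs; a minimal sketch of reading them back through the existing DescribeCluster operation, with a placeholder cluster ID:

package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/emr"
)

func main() {
	svc := emr.New(session.Must(session.NewSession()))

	out, err := svc.DescribeCluster(&emr.DescribeClusterInput{
		ClusterId: aws.String("j-XXXXXXXXXXXXX"), // hypothetical cluster ID
	})
	if err != nil {
		log.Fatal(err)
	}

	// For instance-fleet clusters these are candidate lists; EMR picks one
	// subnet or zone from them at launch time.
	attrs := out.Cluster.Ec2InstanceAttributes
	fmt.Println("requested subnets:", aws.StringValueSlice(attrs.RequestedEc2SubnetIds))
	fmt.Println("requested zones:", aws.StringValueSlice(attrs.RequestedEc2AvailabilityZones))
}
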
+// SetMarket sets the Market field's value.
+func (s *Instance) SetMarket(v string) *Instance {
+	s.Market = &v
+	return s
+}
+
// SetPrivateDnsName sets the PrivateDnsName field's value.
func (s *Instance) SetPrivateDnsName(v string) *Instance {
	s.PrivateDnsName = &v
@@ -4260,6 +4679,536 @@ func (s *Instance) SetStatus(v *InstanceStatus) *Instance {
	return s
}

+// Describes an instance fleet, which is a group of EC2 instances that host
+// a particular node type (master, core, or task) in an Amazon EMR cluster.
+// Instance fleets can consist of a mix of instance types and On-Demand and
+// Spot instances, which are provisioned to meet a defined target capacity.
+//
+// The instance fleet configuration is available only in Amazon EMR versions
+// 4.8.0 and later, excluding 5.0.x versions.
+// Please also see https://docs.aws.amazon.com/goto/WebAPI/elasticmapreduce-2009-03-31/InstanceFleet
+type InstanceFleet struct {
+	_ struct{} `type:"structure"`
+
+	// The unique identifier of the instance fleet.
+	Id *string `type:"string"`
+
+	// The node type that the instance fleet hosts. Valid values are MASTER, CORE,
+	// or TASK.
+	InstanceFleetType *string `type:"string" enum:"InstanceFleetType"`
+
+	// The specification for the instance types that comprise an instance fleet.
+	// Up to five unique instance specifications may be defined for each instance
+	// fleet.
+	InstanceTypeSpecifications []*InstanceTypeSpecification `type:"list"`
+
+	// Describes the launch specification for an instance fleet.
+	LaunchSpecifications *InstanceFleetProvisioningSpecifications `type:"structure"`
+
+	// A friendly name for the instance fleet.
+	Name *string `type:"string"`
+
+	// The number of On-Demand units that have been provisioned for the instance
+	// fleet to fulfill TargetOnDemandCapacity. This provisioned capacity might
+	// be less than or greater than TargetOnDemandCapacity.
+	ProvisionedOnDemandCapacity *int64 `type:"integer"`
+
+	// The number of Spot units that have been provisioned for this instance fleet
+	// to fulfill TargetSpotCapacity. This provisioned capacity might be less than
+	// or greater than TargetSpotCapacity.
+	ProvisionedSpotCapacity *int64 `type:"integer"`
+
+	// The current status of the instance fleet.
+	Status *InstanceFleetStatus `type:"structure"`
+
+	// The target capacity of On-Demand units for the instance fleet, which determines
+	// how many On-Demand instances to provision. When the instance fleet launches,
+	// Amazon EMR tries to provision On-Demand instances as specified by InstanceTypeConfig.
+	// Each instance configuration has a specified WeightedCapacity. When an On-Demand
+	// instance is provisioned, the WeightedCapacity units count toward the target
+	// capacity. Amazon EMR provisions instances until the target capacity is totally
+	// fulfilled, even if this results in an overage. For example, if there are
+	// 2 units remaining to fulfill capacity, and Amazon EMR can only provision
+	// an instance with a WeightedCapacity of 5 units, the instance is provisioned,
+	// and the target capacity is exceeded by 3 units. You can use InstanceFleet$ProvisionedOnDemandCapacity
+	// to determine the On-Demand capacity units that have been provisioned for
+	// the instance fleet.
+	//
+	// If not specified or set to 0, only Spot instances are provisioned for the
+	// instance fleet using TargetSpotCapacity. At least one of TargetSpotCapacity
+	// and TargetOnDemandCapacity should be greater than 0.
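
The fill rule in the comment above (which continues below) is easy to misread, so here is a standalone sketch reproducing the documentation's own numbers: with 2 units left to fill and instances weighted at 5 units each, one more instance is provisioned and the target is exceeded by 3.

package main

import "fmt"

// unitsProvisioned mimics the fill rule: EMR keeps adding instances of the
// given positive weight until the target is met, even if the last instance
// overshoots it.
func unitsProvisioned(target, weight int64) (instances, provisioned int64) {
	for provisioned < target {
		instances++
		provisioned += weight
	}
	return
}

func main() {
	// Target of 12 units, instances weighted at 5 units each: 10 units after
	// two instances leaves 2 units remaining, so a third instance brings the
	// total to 15, an overage of 3.
	n, got := unitsProvisioned(12, 5)
	fmt.Printf("%d instances, %d units provisioned (overage %d)\n", n, got, got-12)
}
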
For a master instance + // fleet, only one of TargetSpotCapacity and TargetOnDemandCapacity can be specified, + // and its value must be 1. + TargetOnDemandCapacity *int64 `type:"integer"` + + // The target capacity of Spot units for the instance fleet, which determines + // how many Spot instances to provision. When the instance fleet launches, Amazon + // EMR tries to provision Spot instances as specified by InstanceTypeConfig. + // Each instance configuration has a specified WeightedCapacity. When a Spot + // instance is provisioned, the WeightedCapacity units count toward the target + // capacity. Amazon EMR provisions instances until the target capacity is totally + // fulfilled, even if this results in an overage. For example, if there are + // 2 units remaining to fulfill capacity, and Amazon EMR can only provision + // an instance with a WeightedCapacity of 5 units, the instance is provisioned, + // and the target capacity is exceeded by 3 units. You can use InstanceFleet$ProvisionedSpotCapacity + // to determine the Spot capacity units that have been provisioned for the instance + // fleet. + // + // If not specified or set to 0, only On-Demand instances are provisioned for + // the instance fleet. At least one of TargetSpotCapacity and TargetOnDemandCapacity + // should be greater than 0. For a master instance fleet, only one of TargetSpotCapacity + // and TargetOnDemandCapacity can be specified, and its value must be 1. + TargetSpotCapacity *int64 `type:"integer"` +} + +// String returns the string representation +func (s InstanceFleet) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InstanceFleet) GoString() string { + return s.String() +} + +// SetId sets the Id field's value. +func (s *InstanceFleet) SetId(v string) *InstanceFleet { + s.Id = &v + return s +} + +// SetInstanceFleetType sets the InstanceFleetType field's value. +func (s *InstanceFleet) SetInstanceFleetType(v string) *InstanceFleet { + s.InstanceFleetType = &v + return s +} + +// SetInstanceTypeSpecifications sets the InstanceTypeSpecifications field's value. +func (s *InstanceFleet) SetInstanceTypeSpecifications(v []*InstanceTypeSpecification) *InstanceFleet { + s.InstanceTypeSpecifications = v + return s +} + +// SetLaunchSpecifications sets the LaunchSpecifications field's value. +func (s *InstanceFleet) SetLaunchSpecifications(v *InstanceFleetProvisioningSpecifications) *InstanceFleet { + s.LaunchSpecifications = v + return s +} + +// SetName sets the Name field's value. +func (s *InstanceFleet) SetName(v string) *InstanceFleet { + s.Name = &v + return s +} + +// SetProvisionedOnDemandCapacity sets the ProvisionedOnDemandCapacity field's value. +func (s *InstanceFleet) SetProvisionedOnDemandCapacity(v int64) *InstanceFleet { + s.ProvisionedOnDemandCapacity = &v + return s +} + +// SetProvisionedSpotCapacity sets the ProvisionedSpotCapacity field's value. +func (s *InstanceFleet) SetProvisionedSpotCapacity(v int64) *InstanceFleet { + s.ProvisionedSpotCapacity = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *InstanceFleet) SetStatus(v *InstanceFleetStatus) *InstanceFleet { + s.Status = v + return s +} + +// SetTargetOnDemandCapacity sets the TargetOnDemandCapacity field's value. +func (s *InstanceFleet) SetTargetOnDemandCapacity(v int64) *InstanceFleet { + s.TargetOnDemandCapacity = &v + return s +} + +// SetTargetSpotCapacity sets the TargetSpotCapacity field's value. 
+func (s *InstanceFleet) SetTargetSpotCapacity(v int64) *InstanceFleet {
+	s.TargetSpotCapacity = &v
+	return s
+}
+
+// The configuration that defines an instance fleet.
+//
+// The instance fleet configuration is available only in Amazon EMR versions
+// 4.8.0 and later, excluding 5.0.x versions.
+// Please also see https://docs.aws.amazon.com/goto/WebAPI/elasticmapreduce-2009-03-31/InstanceFleetConfig
+type InstanceFleetConfig struct {
+	_ struct{} `type:"structure"`
+
+	// The node type that the instance fleet hosts. Valid values are MASTER, CORE,
+	// and TASK.
+	//
+	// InstanceFleetType is a required field
+	InstanceFleetType *string `type:"string" required:"true" enum:"InstanceFleetType"`
+
+	// The instance type configurations that define the EC2 instances in the instance
+	// fleet.
+	InstanceTypeConfigs []*InstanceTypeConfig `type:"list"`
+
+	// The launch specification for the instance fleet.
+	LaunchSpecifications *InstanceFleetProvisioningSpecifications `type:"structure"`
+
+	// The friendly name of the instance fleet.
+	Name *string `type:"string"`
+
+	// The target capacity of On-Demand units for the instance fleet, which determines
+	// how many On-Demand instances to provision. When the instance fleet launches,
+	// Amazon EMR tries to provision On-Demand instances as specified by InstanceTypeConfig.
+	// Each instance configuration has a specified WeightedCapacity. When an On-Demand
+	// instance is provisioned, the WeightedCapacity units count toward the target
+	// capacity. Amazon EMR provisions instances until the target capacity is totally
+	// fulfilled, even if this results in an overage. For example, if there are
+	// 2 units remaining to fulfill capacity, and Amazon EMR can only provision
+	// an instance with a WeightedCapacity of 5 units, the instance is provisioned,
+	// and the target capacity is exceeded by 3 units.
+	//
+	// If not specified or set to 0, only Spot instances are provisioned for the
+	// instance fleet using TargetSpotCapacity. At least one of TargetSpotCapacity
+	// and TargetOnDemandCapacity should be greater than 0. For a master instance
+	// fleet, only one of TargetSpotCapacity and TargetOnDemandCapacity can be specified,
+	// and its value must be 1.
+	TargetOnDemandCapacity *int64 `type:"integer"`
+
+	// The target capacity of Spot units for the instance fleet, which determines
+	// how many Spot instances to provision. When the instance fleet launches, Amazon
+	// EMR tries to provision Spot instances as specified by InstanceTypeConfig.
+	// Each instance configuration has a specified WeightedCapacity. When a Spot
+	// instance is provisioned, the WeightedCapacity units count toward the target
+	// capacity. Amazon EMR provisions instances until the target capacity is totally
+	// fulfilled, even if this results in an overage. For example, if there are
+	// 2 units remaining to fulfill capacity, and Amazon EMR can only provision
+	// an instance with a WeightedCapacity of 5 units, the instance is provisioned,
+	// and the target capacity is exceeded by 3 units.
+	//
+	// If not specified or set to 0, only On-Demand instances are provisioned for
+	// the instance fleet. At least one of TargetSpotCapacity and TargetOnDemandCapacity
+	// should be greater than 0. For a master instance fleet, only one of TargetSpotCapacity
+	// and TargetOnDemandCapacity can be specified, and its value must be 1.
+ TargetSpotCapacity *int64 `type:"integer"` +} + +// String returns the string representation +func (s InstanceFleetConfig) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InstanceFleetConfig) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *InstanceFleetConfig) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "InstanceFleetConfig"} + if s.InstanceFleetType == nil { + invalidParams.Add(request.NewErrParamRequired("InstanceFleetType")) + } + if s.InstanceTypeConfigs != nil { + for i, v := range s.InstanceTypeConfigs { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "InstanceTypeConfigs", i), err.(request.ErrInvalidParams)) + } + } + } + if s.LaunchSpecifications != nil { + if err := s.LaunchSpecifications.Validate(); err != nil { + invalidParams.AddNested("LaunchSpecifications", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetInstanceFleetType sets the InstanceFleetType field's value. +func (s *InstanceFleetConfig) SetInstanceFleetType(v string) *InstanceFleetConfig { + s.InstanceFleetType = &v + return s +} + +// SetInstanceTypeConfigs sets the InstanceTypeConfigs field's value. +func (s *InstanceFleetConfig) SetInstanceTypeConfigs(v []*InstanceTypeConfig) *InstanceFleetConfig { + s.InstanceTypeConfigs = v + return s +} + +// SetLaunchSpecifications sets the LaunchSpecifications field's value. +func (s *InstanceFleetConfig) SetLaunchSpecifications(v *InstanceFleetProvisioningSpecifications) *InstanceFleetConfig { + s.LaunchSpecifications = v + return s +} + +// SetName sets the Name field's value. +func (s *InstanceFleetConfig) SetName(v string) *InstanceFleetConfig { + s.Name = &v + return s +} + +// SetTargetOnDemandCapacity sets the TargetOnDemandCapacity field's value. +func (s *InstanceFleetConfig) SetTargetOnDemandCapacity(v int64) *InstanceFleetConfig { + s.TargetOnDemandCapacity = &v + return s +} + +// SetTargetSpotCapacity sets the TargetSpotCapacity field's value. +func (s *InstanceFleetConfig) SetTargetSpotCapacity(v int64) *InstanceFleetConfig { + s.TargetSpotCapacity = &v + return s +} + +// Configuration parameters for an instance fleet modification request. +// +// The instance fleet configuration is available only in Amazon EMR versions +// 4.8.0 and later, excluding 5.0.x versions. +// Please also see https://docs.aws.amazon.com/goto/WebAPI/elasticmapreduce-2009-03-31/InstanceFleetModifyConfig +type InstanceFleetModifyConfig struct { + _ struct{} `type:"structure"` + + // A unique identifier for the instance fleet. + // + // InstanceFleetId is a required field + InstanceFleetId *string `type:"string" required:"true"` + + // The target capacity of On-Demand units for the instance fleet. For more information + // see InstanceFleetConfig$TargetOnDemandCapacity. + TargetOnDemandCapacity *int64 `type:"integer"` + + // The target capacity of Spot units for the instance fleet. For more information, + // see InstanceFleetConfig$TargetSpotCapacity. 
+ TargetSpotCapacity *int64 `type:"integer"` +} + +// String returns the string representation +func (s InstanceFleetModifyConfig) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InstanceFleetModifyConfig) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *InstanceFleetModifyConfig) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "InstanceFleetModifyConfig"} + if s.InstanceFleetId == nil { + invalidParams.Add(request.NewErrParamRequired("InstanceFleetId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetInstanceFleetId sets the InstanceFleetId field's value. +func (s *InstanceFleetModifyConfig) SetInstanceFleetId(v string) *InstanceFleetModifyConfig { + s.InstanceFleetId = &v + return s +} + +// SetTargetOnDemandCapacity sets the TargetOnDemandCapacity field's value. +func (s *InstanceFleetModifyConfig) SetTargetOnDemandCapacity(v int64) *InstanceFleetModifyConfig { + s.TargetOnDemandCapacity = &v + return s +} + +// SetTargetSpotCapacity sets the TargetSpotCapacity field's value. +func (s *InstanceFleetModifyConfig) SetTargetSpotCapacity(v int64) *InstanceFleetModifyConfig { + s.TargetSpotCapacity = &v + return s +} + +// The launch specification for Spot instances in the fleet, which determines +// the defined duration and provisioning timeout behavior. +// +// The instance fleet configuration is available only in Amazon EMR versions +// 4.8.0 and later, excluding 5.0.x versions. +// Please also see https://docs.aws.amazon.com/goto/WebAPI/elasticmapreduce-2009-03-31/InstanceFleetProvisioningSpecifications +type InstanceFleetProvisioningSpecifications struct { + _ struct{} `type:"structure"` + + // The launch specification for Spot instances in the fleet, which determines + // the defined duration and provisioning timeout behavior. + // + // SpotSpecification is a required field + SpotSpecification *SpotProvisioningSpecification `type:"structure" required:"true"` +} + +// String returns the string representation +func (s InstanceFleetProvisioningSpecifications) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InstanceFleetProvisioningSpecifications) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *InstanceFleetProvisioningSpecifications) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "InstanceFleetProvisioningSpecifications"} + if s.SpotSpecification == nil { + invalidParams.Add(request.NewErrParamRequired("SpotSpecification")) + } + if s.SpotSpecification != nil { + if err := s.SpotSpecification.Validate(); err != nil { + invalidParams.AddNested("SpotSpecification", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetSpotSpecification sets the SpotSpecification field's value. +func (s *InstanceFleetProvisioningSpecifications) SetSpotSpecification(v *SpotProvisioningSpecification) *InstanceFleetProvisioningSpecifications { + s.SpotSpecification = v + return s +} + +// Provides status change reason details for the instance fleet. +// +// The instance fleet configuration is available only in Amazon EMR versions +// 4.8.0 and later, excluding 5.0.x versions. 
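
Since InstanceFleetModifyConfig carries only the fleet ID and the two target capacities, a resize is a single call. A sketch using the ModifyInstanceFleet operation added later in this file, with placeholder IDs and default credentials:

package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/emr"
)

func main() {
	svc := emr.New(session.Must(session.NewSession()))

	// Grow the Spot side of an existing fleet to 8 units. Only the target
	// capacities can be changed after the fleet is created.
	_, err := svc.ModifyInstanceFleet(&emr.ModifyInstanceFleetInput{
		ClusterId: aws.String("j-XXXXXXXXXXXXX"), // hypothetical cluster ID
		InstanceFleet: &emr.InstanceFleetModifyConfig{
			InstanceFleetId:    aws.String("if-XXXXXXXXXXXX"), // hypothetical fleet ID
			TargetSpotCapacity: aws.Int64(8),
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
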
+// Please also see https://docs.aws.amazon.com/goto/WebAPI/elasticmapreduce-2009-03-31/InstanceFleetStateChangeReason +type InstanceFleetStateChangeReason struct { + _ struct{} `type:"structure"` + + // A code corresponding to the reason the state change occurred. + Code *string `type:"string" enum:"InstanceFleetStateChangeReasonCode"` + + // An explanatory message. + Message *string `type:"string"` +} + +// String returns the string representation +func (s InstanceFleetStateChangeReason) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InstanceFleetStateChangeReason) GoString() string { + return s.String() +} + +// SetCode sets the Code field's value. +func (s *InstanceFleetStateChangeReason) SetCode(v string) *InstanceFleetStateChangeReason { + s.Code = &v + return s +} + +// SetMessage sets the Message field's value. +func (s *InstanceFleetStateChangeReason) SetMessage(v string) *InstanceFleetStateChangeReason { + s.Message = &v + return s +} + +// The status of the instance fleet. +// +// The instance fleet configuration is available only in Amazon EMR versions +// 4.8.0 and later, excluding 5.0.x versions. +// Please also see https://docs.aws.amazon.com/goto/WebAPI/elasticmapreduce-2009-03-31/InstanceFleetStatus +type InstanceFleetStatus struct { + _ struct{} `type:"structure"` + + // A code representing the instance fleet status. + State *string `type:"string" enum:"InstanceFleetState"` + + // Provides status change reason details for the instance fleet. + StateChangeReason *InstanceFleetStateChangeReason `type:"structure"` + + // Provides historical timestamps for the instance fleet, including the time + // of creation, the time it became ready to run jobs, and the time of termination. + Timeline *InstanceFleetTimeline `type:"structure"` +} + +// String returns the string representation +func (s InstanceFleetStatus) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InstanceFleetStatus) GoString() string { + return s.String() +} + +// SetState sets the State field's value. +func (s *InstanceFleetStatus) SetState(v string) *InstanceFleetStatus { + s.State = &v + return s +} + +// SetStateChangeReason sets the StateChangeReason field's value. +func (s *InstanceFleetStatus) SetStateChangeReason(v *InstanceFleetStateChangeReason) *InstanceFleetStatus { + s.StateChangeReason = v + return s +} + +// SetTimeline sets the Timeline field's value. +func (s *InstanceFleetStatus) SetTimeline(v *InstanceFleetTimeline) *InstanceFleetStatus { + s.Timeline = v + return s +} + +// Provides historical timestamps for the instance fleet, including the time +// of creation, the time it became ready to run jobs, and the time of termination. +// +// The instance fleet configuration is available only in Amazon EMR versions +// 4.8.0 and later, excluding 5.0.x versions. +// Please also see https://docs.aws.amazon.com/goto/WebAPI/elasticmapreduce-2009-03-31/InstanceFleetTimeline +type InstanceFleetTimeline struct { + _ struct{} `type:"structure"` + + // The time and date the instance fleet was created. + CreationDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + + // The time and date the instance fleet terminated. + EndDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + + // The time and date the instance fleet was ready to run jobs. 
+ ReadyDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` +} + +// String returns the string representation +func (s InstanceFleetTimeline) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InstanceFleetTimeline) GoString() string { + return s.String() +} + +// SetCreationDateTime sets the CreationDateTime field's value. +func (s *InstanceFleetTimeline) SetCreationDateTime(v time.Time) *InstanceFleetTimeline { + s.CreationDateTime = &v + return s +} + +// SetEndDateTime sets the EndDateTime field's value. +func (s *InstanceFleetTimeline) SetEndDateTime(v time.Time) *InstanceFleetTimeline { + s.EndDateTime = &v + return s +} + +// SetReadyDateTime sets the ReadyDateTime field's value. +func (s *InstanceFleetTimeline) SetReadyDateTime(v time.Time) *InstanceFleetTimeline { + s.ReadyDateTime = &v + return s +} + // This entity represents an instance group, which is a group of instances that // have common purpose. For example, CORE instance group is used for HDFS. // Please also see https://docs.aws.amazon.com/goto/WebAPI/elasticmapreduce-2009-03-31/InstanceGroup @@ -5069,7 +6018,208 @@ func (s *InstanceTimeline) SetReadyDateTime(v time.Time) *InstanceTimeline { return s } -// A description of a job flow. +// An instance type configuration for each instance type in an instance fleet, +// which determines the EC2 instances Amazon EMR attempts to provision to fulfill +// On-Demand and Spot target capacities. There can be a maximum of 5 instance +// type configurations in a fleet. +// +// The instance fleet configuration is available only in Amazon EMR versions +// 4.8.0 and later, excluding 5.0.x versions. +// Please also see https://docs.aws.amazon.com/goto/WebAPI/elasticmapreduce-2009-03-31/InstanceTypeConfig +type InstanceTypeConfig struct { + _ struct{} `type:"structure"` + + // The bid price for each EC2 Spot instance type as defined by InstanceType. + // Expressed in USD. If neither BidPrice nor BidPriceAsPercentageOfOnDemandPrice + // is provided, BidPriceAsPercentageOfOnDemandPrice defaults to 100%. + BidPrice *string `type:"string"` + + // The bid price, as a percentage of On-Demand price, for each EC2 Spot instance + // as defined by InstanceType. Expressed as a number between 0 and 1000 (for + // example, 20 specifies 20%). If neither BidPrice nor BidPriceAsPercentageOfOnDemandPrice + // is provided, BidPriceAsPercentageOfOnDemandPrice defaults to 100%. + BidPriceAsPercentageOfOnDemandPrice *float64 `type:"double"` + + // A configuration classification that applies when provisioning cluster instances, + // which can include configurations for applications and software that run on + // the cluster. + Configurations []*Configuration `type:"list"` + + // The configuration of Amazon Elastic Block Storage (EBS) attached to each + // instance as defined by InstanceType. + EbsConfiguration *EbsConfiguration `type:"structure"` + + // An EC2 instance type, such as m3.xlarge. + // + // InstanceType is a required field + InstanceType *string `min:"1" type:"string" required:"true"` + + // The number of units that a provisioned instance of this type provides toward + // fulfilling the target capacities defined in InstanceFleetConfig. This value + // is 1 for a master instance fleet, and must be greater than 0 for core and + // task instance fleets. 
+ WeightedCapacity *int64 `type:"integer"` +} + +// String returns the string representation +func (s InstanceTypeConfig) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InstanceTypeConfig) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *InstanceTypeConfig) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "InstanceTypeConfig"} + if s.InstanceType == nil { + invalidParams.Add(request.NewErrParamRequired("InstanceType")) + } + if s.InstanceType != nil && len(*s.InstanceType) < 1 { + invalidParams.Add(request.NewErrParamMinLen("InstanceType", 1)) + } + if s.EbsConfiguration != nil { + if err := s.EbsConfiguration.Validate(); err != nil { + invalidParams.AddNested("EbsConfiguration", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBidPrice sets the BidPrice field's value. +func (s *InstanceTypeConfig) SetBidPrice(v string) *InstanceTypeConfig { + s.BidPrice = &v + return s +} + +// SetBidPriceAsPercentageOfOnDemandPrice sets the BidPriceAsPercentageOfOnDemandPrice field's value. +func (s *InstanceTypeConfig) SetBidPriceAsPercentageOfOnDemandPrice(v float64) *InstanceTypeConfig { + s.BidPriceAsPercentageOfOnDemandPrice = &v + return s +} + +// SetConfigurations sets the Configurations field's value. +func (s *InstanceTypeConfig) SetConfigurations(v []*Configuration) *InstanceTypeConfig { + s.Configurations = v + return s +} + +// SetEbsConfiguration sets the EbsConfiguration field's value. +func (s *InstanceTypeConfig) SetEbsConfiguration(v *EbsConfiguration) *InstanceTypeConfig { + s.EbsConfiguration = v + return s +} + +// SetInstanceType sets the InstanceType field's value. +func (s *InstanceTypeConfig) SetInstanceType(v string) *InstanceTypeConfig { + s.InstanceType = &v + return s +} + +// SetWeightedCapacity sets the WeightedCapacity field's value. +func (s *InstanceTypeConfig) SetWeightedCapacity(v int64) *InstanceTypeConfig { + s.WeightedCapacity = &v + return s +} + +// The configuration specification for each instance type in an instance fleet. +// +// The instance fleet configuration is available only in Amazon EMR versions +// 4.8.0 and later, excluding 5.0.x versions. +// Please also see https://docs.aws.amazon.com/goto/WebAPI/elasticmapreduce-2009-03-31/InstanceTypeSpecification +type InstanceTypeSpecification struct { + _ struct{} `type:"structure"` + + // The bid price for each EC2 Spot instance type as defined by InstanceType. + // Expressed in USD. + BidPrice *string `type:"string"` + + // The bid price, as a percentage of On-Demand price, for each EC2 Spot instance + // as defined by InstanceType. Expressed as a number (for example, 20 specifies + // 20%). + BidPriceAsPercentageOfOnDemandPrice *float64 `type:"double"` + + // A configuration classification that applies when provisioning cluster instances, + // which can include configurations for applications and software bundled with + // Amazon EMR. + Configurations []*Configuration `type:"list"` + + // The configuration of Amazon Elastic Block Storage (EBS) attached to each + // instance as defined by InstanceType. + EbsBlockDevices []*EbsBlockDevice `type:"list"` + + // Evaluates to TRUE when the specified InstanceType is EBS-optimized. + EbsOptimized *bool `type:"boolean"` + + // The EC2 instance type, for example m3.xlarge. 
+ InstanceType *string `min:"1" type:"string"` + + // The number of units that a provisioned instance of this type provides toward + // fulfilling the target capacities defined in InstanceFleetConfig. Capacity + // values represent performance characteristics such as vCPUs, memory, or I/O. + // If not specified, the default value is 1. + WeightedCapacity *int64 `type:"integer"` +} + +// String returns the string representation +func (s InstanceTypeSpecification) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InstanceTypeSpecification) GoString() string { + return s.String() +} + +// SetBidPrice sets the BidPrice field's value. +func (s *InstanceTypeSpecification) SetBidPrice(v string) *InstanceTypeSpecification { + s.BidPrice = &v + return s +} + +// SetBidPriceAsPercentageOfOnDemandPrice sets the BidPriceAsPercentageOfOnDemandPrice field's value. +func (s *InstanceTypeSpecification) SetBidPriceAsPercentageOfOnDemandPrice(v float64) *InstanceTypeSpecification { + s.BidPriceAsPercentageOfOnDemandPrice = &v + return s +} + +// SetConfigurations sets the Configurations field's value. +func (s *InstanceTypeSpecification) SetConfigurations(v []*Configuration) *InstanceTypeSpecification { + s.Configurations = v + return s +} + +// SetEbsBlockDevices sets the EbsBlockDevices field's value. +func (s *InstanceTypeSpecification) SetEbsBlockDevices(v []*EbsBlockDevice) *InstanceTypeSpecification { + s.EbsBlockDevices = v + return s +} + +// SetEbsOptimized sets the EbsOptimized field's value. +func (s *InstanceTypeSpecification) SetEbsOptimized(v bool) *InstanceTypeSpecification { + s.EbsOptimized = &v + return s +} + +// SetInstanceType sets the InstanceType field's value. +func (s *InstanceTypeSpecification) SetInstanceType(v string) *InstanceTypeSpecification { + s.InstanceType = &v + return s +} + +// SetWeightedCapacity sets the WeightedCapacity field's value. +func (s *InstanceTypeSpecification) SetWeightedCapacity(v int64) *InstanceTypeSpecification { + s.WeightedCapacity = &v + return s +} + +// A description of a cluster (job flow). // Please also see https://docs.aws.amazon.com/goto/WebAPI/elasticmapreduce-2009-03-31/JobFlowDetail type JobFlowDetail struct { _ struct{} `type:"structure"` @@ -5142,12 +6292,12 @@ type JobFlowDetail struct { // is empty. SupportedProducts []*string `type:"list"` - // Specifies whether the job flow is visible to all IAM users of the AWS account - // associated with the job flow. If this value is set to true, all IAM users + // Specifies whether the cluster is visible to all IAM users of the AWS account + // associated with the cluster. If this value is set to true, all IAM users // of that AWS account can view and (if they have the proper policy permissions - // set) manage the job flow. If it is set to false, only the IAM user that created - // the job flow can view and manage it. This value can be changed using the - // SetVisibleToAllUsers action. + // set) manage the cluster. If it is set to false, only the IAM user that created + // the cluster can view and manage it. This value can be changed using the SetVisibleToAllUsers + // action. VisibleToAllUsers *bool `type:"boolean"` } @@ -5245,7 +6395,7 @@ func (s *JobFlowDetail) SetVisibleToAllUsers(v bool) *JobFlowDetail { return s } -// Describes the status of the job flow. +// Describes the status of the cluster (job flow). 
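
A sketch of walking the read-side InstanceTypeSpecification values via the ListInstanceFleets operation added later in this file, again with a placeholder cluster ID:

package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/emr"
)

func main() {
	svc := emr.New(session.Must(session.NewSession()))

	out, err := svc.ListInstanceFleets(&emr.ListInstanceFleetsInput{
		ClusterId: aws.String("j-XXXXXXXXXXXXX"), // hypothetical cluster ID
	})
	if err != nil {
		log.Fatal(err)
	}

	// Print each fleet's instance type mix as provisioned by Amazon EMR.
	for _, fleet := range out.InstanceFleets {
		for _, spec := range fleet.InstanceTypeSpecifications {
			fmt.Printf("%s: type=%s weight=%d ebsOptimized=%t\n",
				aws.StringValue(fleet.Name),
				aws.StringValue(spec.InstanceType),
				aws.Int64Value(spec.WeightedCapacity),
				aws.BoolValue(spec.EbsOptimized))
		}
	}
}
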
// Please also see https://docs.aws.amazon.com/goto/WebAPI/elasticmapreduce-2009-03-31/JobFlowExecutionStatusDetail type JobFlowExecutionStatusDetail struct { _ struct{} `type:"structure"` @@ -5320,10 +6470,11 @@ func (s *JobFlowExecutionStatusDetail) SetState(v string) *JobFlowExecutionStatu return s } -// A description of the Amazon EC2 instance running the job flow. A valid JobFlowInstancesConfig -// must contain at least InstanceGroups, which is the recommended configuration. -// However, a valid alternative is to have MasterInstanceType, SlaveInstanceType, -// and InstanceCount (all three must be present). +// A description of the Amazon EC2 instance on which the cluster (job flow) +// runs. A valid JobFlowInstancesConfig must contain either InstanceGroups or +// InstanceFleets, which is the recommended configuration. They cannot be used +// together. You may also have MasterInstanceType, SlaveInstanceType, and InstanceCount +// (all three must be present), but we don't recommend this configuration. // Please also see https://docs.aws.amazon.com/goto/WebAPI/elasticmapreduce-2009-03-31/JobFlowInstancesConfig type JobFlowInstancesConfig struct { _ struct{} `type:"structure"` @@ -5338,43 +6489,61 @@ type JobFlowInstancesConfig struct { // the user called "hadoop." Ec2KeyName *string `type:"string"` - // To launch the job flow in Amazon Virtual Private Cloud (Amazon VPC), set - // this parameter to the identifier of the Amazon VPC subnet where you want - // the job flow to launch. If you do not specify this value, the job flow is - // launched in the normal Amazon Web Services cloud, outside of an Amazon VPC. + // Applies to clusters that use the uniform instance group configuration. To + // launch the cluster in Amazon Virtual Private Cloud (Amazon VPC), set this + // parameter to the identifier of the Amazon VPC subnet where you want the cluster + // to launch. If you do not specify this value, the cluster launches in the + // normal Amazon Web Services cloud, outside of an Amazon VPC, if the account + // launching the cluster supports EC2 Classic networks in the region where the + // cluster launches. // // Amazon VPC currently does not support cluster compute quadruple extra large // (cc1.4xlarge) instances. Thus you cannot specify the cc1.4xlarge instance - // type for nodes of a job flow launched in a Amazon VPC. + // type for clusters launched in an Amazon VPC. Ec2SubnetId *string `type:"string"` + // Applies to clusters that use the instance fleet configuration. When multiple + // EC2 subnet IDs are specified, Amazon EMR evaluates them and launches instances + // in the optimal subnet. + // + // The instance fleet configuration is available only in Amazon EMR versions + // 4.8.0 and later, excluding 5.0.x versions. + Ec2SubnetIds []*string `type:"list"` + // The identifier of the Amazon EC2 security group for the master node. EmrManagedMasterSecurityGroup *string `type:"string"` // The identifier of the Amazon EC2 security group for the slave nodes. EmrManagedSlaveSecurityGroup *string `type:"string"` - // The Hadoop version for the job flow. Valid inputs are "0.18" (deprecated), + // The Hadoop version for the cluster. Valid inputs are "0.18" (deprecated), // "0.20" (deprecated), "0.20.205" (deprecated), "1.0.3", "2.2.0", or "2.4.0". // If you do not set this value, the default of 0.18 is used, unless the AmiVersion // parameter is set in the RunJobFlow call, in which case the default version // of Hadoop for that AMI version is used. 
HadoopVersion *string `type:"string"` - // The number of EC2 instances used to execute the job flow. + // The number of EC2 instances in the cluster. InstanceCount *int64 `type:"integer"` - // Configuration for the job flow's instance groups. + // The instance fleet configuration is available only in Amazon EMR versions + // 4.8.0 and later, excluding 5.0.x versions. + // + // Describes the EC2 instances and instance configurations for clusters that + // use the instance fleet configuration. + InstanceFleets []*InstanceFleetConfig `type:"list"` + + // Configuration for the instance groups in a cluster. InstanceGroups []*InstanceGroupConfig `type:"list"` - // Specifies whether the job flow should be kept alive after completing all + // Specifies whether the cluster should remain available after completing all // steps. KeepJobFlowAliveWhenNoSteps *bool `type:"boolean"` // The EC2 instance type of the master node. MasterInstanceType *string `min:"1" type:"string"` - // The Availability Zone the job flow will run in. + // The Availability Zone in which the cluster runs. Placement *PlacementType `type:"structure"` // The identifier of the Amazon EC2 security group for the Amazon EMR service @@ -5384,9 +6553,9 @@ type JobFlowInstancesConfig struct { // The EC2 instance type of the slave nodes. SlaveInstanceType *string `min:"1" type:"string"` - // Specifies whether to lock the job flow to prevent the Amazon EC2 instances + // Specifies whether to lock the cluster to prevent the Amazon EC2 instances // from being terminated by API call, user intervention, or in the event of - // a job flow error. + // a job-flow error. TerminationProtected *bool `type:"boolean"` } @@ -5409,6 +6578,16 @@ func (s *JobFlowInstancesConfig) Validate() error { if s.SlaveInstanceType != nil && len(*s.SlaveInstanceType) < 1 { invalidParams.Add(request.NewErrParamMinLen("SlaveInstanceType", 1)) } + if s.InstanceFleets != nil { + for i, v := range s.InstanceFleets { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "InstanceFleets", i), err.(request.ErrInvalidParams)) + } + } + } if s.InstanceGroups != nil { for i, v := range s.InstanceGroups { if v == nil { @@ -5419,11 +6598,6 @@ func (s *JobFlowInstancesConfig) Validate() error { } } } - if s.Placement != nil { - if err := s.Placement.Validate(); err != nil { - invalidParams.AddNested("Placement", err.(request.ErrInvalidParams)) - } - } if invalidParams.Len() > 0 { return invalidParams @@ -5455,6 +6629,12 @@ func (s *JobFlowInstancesConfig) SetEc2SubnetId(v string) *JobFlowInstancesConfi return s } +// SetEc2SubnetIds sets the Ec2SubnetIds field's value. +func (s *JobFlowInstancesConfig) SetEc2SubnetIds(v []*string) *JobFlowInstancesConfig { + s.Ec2SubnetIds = v + return s +} + // SetEmrManagedMasterSecurityGroup sets the EmrManagedMasterSecurityGroup field's value. func (s *JobFlowInstancesConfig) SetEmrManagedMasterSecurityGroup(v string) *JobFlowInstancesConfig { s.EmrManagedMasterSecurityGroup = &v @@ -5479,6 +6659,12 @@ func (s *JobFlowInstancesConfig) SetInstanceCount(v int64) *JobFlowInstancesConf return s } +// SetInstanceFleets sets the InstanceFleets field's value. +func (s *JobFlowInstancesConfig) SetInstanceFleets(v []*InstanceFleetConfig) *JobFlowInstancesConfig { + s.InstanceFleets = v + return s +} + // SetInstanceGroups sets the InstanceGroups field's value. 
func (s *JobFlowInstancesConfig) SetInstanceGroups(v []*InstanceGroupConfig) *JobFlowInstancesConfig { s.InstanceGroups = v @@ -5521,20 +6707,21 @@ func (s *JobFlowInstancesConfig) SetTerminationProtected(v bool) *JobFlowInstanc return s } -// Specify the type of Amazon EC2 instances to run the job flow on. +// Specify the type of Amazon EC2 instances that the cluster (job flow) runs +// on. // Please also see https://docs.aws.amazon.com/goto/WebAPI/elasticmapreduce-2009-03-31/JobFlowInstancesDetail type JobFlowInstancesDetail struct { _ struct{} `type:"structure"` // The name of an Amazon EC2 key pair that can be used to ssh to the master - // node of job flow. + // node. Ec2KeyName *string `type:"string"` - // For job flows launched within Amazon Virtual Private Cloud, this value specifies - // the identifier of the subnet where the job flow was launched. + // For clusters launched within Amazon Virtual Private Cloud, this is the identifier + // of the subnet where the cluster was launched. Ec2SubnetId *string `type:"string"` - // The Hadoop version for the job flow. + // The Hadoop version for the cluster. HadoopVersion *string `type:"string"` // The number of Amazon EC2 instances in the cluster. If the value is 1, the @@ -5544,10 +6731,11 @@ type JobFlowInstancesDetail struct { // InstanceCount is a required field InstanceCount *int64 `type:"integer" required:"true"` - // Details about the job flow's instance groups. + // Details about the instance groups in a cluster. InstanceGroups []*InstanceGroupDetail `type:"list"` - // Specifies whether the job flow should terminate after completing all steps. + // Specifies whether the cluster should remain available after completing all + // steps. KeepJobFlowAliveWhenNoSteps *bool `type:"boolean"` // The Amazon EC2 instance identifier of the master node. @@ -5561,7 +6749,7 @@ type JobFlowInstancesDetail struct { // The DNS name of the master node. MasterPublicDnsName *string `type:"string"` - // An approximation of the cost of the job flow, represented in m1.small/hours. + // An approximation of the cost of the cluster, represented in m1.small/hours. // This value is incremented one time for every hour that an m1.small runs. // Larger instances are weighted more, so an Amazon EC2 instance that is roughly // four times more expensive would result in the normalized instance hours being @@ -5569,7 +6757,7 @@ type JobFlowInstancesDetail struct { // the actual billing rate. NormalizedInstanceHours *int64 `type:"integer"` - // The Amazon EC2 Availability Zone for the job flow. + // The Amazon EC2 Availability Zone for the cluster. Placement *PlacementType `type:"structure"` // The Amazon EC2 slave node instance type. @@ -5578,7 +6766,7 @@ type JobFlowInstancesDetail struct { SlaveInstanceType *string `min:"1" type:"string" required:"true"` // Specifies whether the Amazon EC2 instances in the cluster are protected from - // termination by API calls, user intervention, or in the event of a job flow + // termination by API calls, user intervention, or in the event of a job-flow // error. TerminationProtected *bool `type:"boolean"` } @@ -5876,6 +7064,87 @@ func (s *ListClustersOutput) SetMarker(v string) *ListClustersOutput { return s } +// Please also see https://docs.aws.amazon.com/goto/WebAPI/elasticmapreduce-2009-03-31/ListInstanceFleetsInput +type ListInstanceFleetsInput struct { + _ struct{} `type:"structure"` + + // The unique identifier of the cluster. 
+ // + // ClusterId is a required field + ClusterId *string `type:"string" required:"true"` + + // The pagination token that indicates the next set of results to retrieve. + Marker *string `type:"string"` +} + +// String returns the string representation +func (s ListInstanceFleetsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListInstanceFleetsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListInstanceFleetsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListInstanceFleetsInput"} + if s.ClusterId == nil { + invalidParams.Add(request.NewErrParamRequired("ClusterId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetClusterId sets the ClusterId field's value. +func (s *ListInstanceFleetsInput) SetClusterId(v string) *ListInstanceFleetsInput { + s.ClusterId = &v + return s +} + +// SetMarker sets the Marker field's value. +func (s *ListInstanceFleetsInput) SetMarker(v string) *ListInstanceFleetsInput { + s.Marker = &v + return s +} + +// Please also see https://docs.aws.amazon.com/goto/WebAPI/elasticmapreduce-2009-03-31/ListInstanceFleetsOutput +type ListInstanceFleetsOutput struct { + _ struct{} `type:"structure"` + + // The list of instance fleets for the cluster and given filters. + InstanceFleets []*InstanceFleet `type:"list"` + + // The pagination token that indicates the next set of results to retrieve. + Marker *string `type:"string"` +} + +// String returns the string representation +func (s ListInstanceFleetsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListInstanceFleetsOutput) GoString() string { + return s.String() +} + +// SetInstanceFleets sets the InstanceFleets field's value. +func (s *ListInstanceFleetsOutput) SetInstanceFleets(v []*InstanceFleet) *ListInstanceFleetsOutput { + s.InstanceFleets = v + return s +} + +// SetMarker sets the Marker field's value. +func (s *ListInstanceFleetsOutput) SetMarker(v string) *ListInstanceFleetsOutput { + s.Marker = &v + return s +} + // This input determines which instance groups to retrieve. // Please also see https://docs.aws.amazon.com/goto/WebAPI/elasticmapreduce-2009-03-31/ListInstanceGroupsInput type ListInstanceGroupsInput struct { @@ -5969,6 +7238,12 @@ type ListInstancesInput struct { // ClusterId is a required field ClusterId *string `type:"string" required:"true"` + // The unique identifier of the instance fleet. + InstanceFleetId *string `type:"string"` + + // The node type of the instance fleet. For example MASTER, CORE, or TASK. + InstanceFleetType *string `type:"string" enum:"InstanceFleetType"` + // The identifier of the instance group for which to list the instances. InstanceGroupId *string `type:"string"` @@ -6012,6 +7287,18 @@ func (s *ListInstancesInput) SetClusterId(v string) *ListInstancesInput { return s } +// SetInstanceFleetId sets the InstanceFleetId field's value. +func (s *ListInstancesInput) SetInstanceFleetId(v string) *ListInstancesInput { + s.InstanceFleetId = &v + return s +} + +// SetInstanceFleetType sets the InstanceFleetType field's value. +func (s *ListInstancesInput) SetInstanceFleetType(v string) *ListInstancesInput { + s.InstanceFleetType = &v + return s +} + // SetInstanceGroupId sets the InstanceGroupId field's value. 
func (s *ListInstancesInput) SetInstanceGroupId(v string) *ListInstancesInput { s.InstanceGroupId = &v @@ -6234,9 +7521,8 @@ func (s *ListStepsOutput) SetSteps(v []*StepSummary) *ListStepsOutput { // A CloudWatch dimension, which is specified using a Key (known as a Name in // CloudWatch), Value pair. By default, Amazon EMR uses one dimension whose // Key is JobFlowID and Value is a variable representing the cluster ID, which -// is ${emr:cluster_id}. This enables the rule to bootstrap when the cluster -// ID becomes available, and also enables a single automatic scaling policy -// to be reused for multiple clusters and instance groups. +// is ${emr.clusterId}. This enables the rule to bootstrap when the cluster +// ID becomes available. // Please also see https://docs.aws.amazon.com/goto/WebAPI/elasticmapreduce-2009-03-31/MetricDimension type MetricDimension struct { _ struct{} `type:"structure"` @@ -6270,6 +7556,79 @@ func (s *MetricDimension) SetValue(v string) *MetricDimension { return s } +// Please also see https://docs.aws.amazon.com/goto/WebAPI/elasticmapreduce-2009-03-31/ModifyInstanceFleetInput +type ModifyInstanceFleetInput struct { + _ struct{} `type:"structure"` + + // The unique identifier of the cluster. + // + // ClusterId is a required field + ClusterId *string `type:"string" required:"true"` + + // The unique identifier of the instance fleet. + // + // InstanceFleet is a required field + InstanceFleet *InstanceFleetModifyConfig `type:"structure" required:"true"` +} + +// String returns the string representation +func (s ModifyInstanceFleetInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyInstanceFleetInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ModifyInstanceFleetInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ModifyInstanceFleetInput"} + if s.ClusterId == nil { + invalidParams.Add(request.NewErrParamRequired("ClusterId")) + } + if s.InstanceFleet == nil { + invalidParams.Add(request.NewErrParamRequired("InstanceFleet")) + } + if s.InstanceFleet != nil { + if err := s.InstanceFleet.Validate(); err != nil { + invalidParams.AddNested("InstanceFleet", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetClusterId sets the ClusterId field's value. +func (s *ModifyInstanceFleetInput) SetClusterId(v string) *ModifyInstanceFleetInput { + s.ClusterId = &v + return s +} + +// SetInstanceFleet sets the InstanceFleet field's value. +func (s *ModifyInstanceFleetInput) SetInstanceFleet(v *InstanceFleetModifyConfig) *ModifyInstanceFleetInput { + s.InstanceFleet = v + return s +} + +// Please also see https://docs.aws.amazon.com/goto/WebAPI/elasticmapreduce-2009-03-31/ModifyInstanceFleetOutput +type ModifyInstanceFleetOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s ModifyInstanceFleetOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyInstanceFleetOutput) GoString() string { + return s.String() +} + // Change the size of some instance groups. 
// Please also see https://docs.aws.amazon.com/goto/WebAPI/elasticmapreduce-2009-03-31/ModifyInstanceGroupsInput type ModifyInstanceGroupsInput struct { @@ -6339,15 +7698,24 @@ func (s ModifyInstanceGroupsOutput) GoString() string { return s.String() } -// The Amazon EC2 location for the job flow. +// The Amazon EC2 Availability Zone configuration of the cluster (job flow). // Please also see https://docs.aws.amazon.com/goto/WebAPI/elasticmapreduce-2009-03-31/PlacementType type PlacementType struct { _ struct{} `type:"structure"` - // The Amazon EC2 Availability Zone for the job flow. + // The Amazon EC2 Availability Zone for the cluster. AvailabilityZone is used + // for uniform instance groups, while AvailabilityZones (plural) is used for + // instance fleets. + AvailabilityZone *string `type:"string"` + + // When multiple Availability Zones are specified, Amazon EMR evaluates them + // and launches instances in the optimal Availability Zone. AvailabilityZones + // is used for instance fleets, while AvailabilityZone (singular) is used for + // uniform instance groups. // - // AvailabilityZone is a required field - AvailabilityZone *string `type:"string" required:"true"` + // The instance fleet configuration is available only in Amazon EMR versions + // 4.8.0 and later, excluding 5.0.x versions. + AvailabilityZones []*string `type:"list"` } // String returns the string representation @@ -6360,25 +7728,18 @@ func (s PlacementType) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *PlacementType) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "PlacementType"} - if s.AvailabilityZone == nil { - invalidParams.Add(request.NewErrParamRequired("AvailabilityZone")) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil -} - // SetAvailabilityZone sets the AvailabilityZone field's value. func (s *PlacementType) SetAvailabilityZone(v string) *PlacementType { s.AvailabilityZone = &v return s } +// SetAvailabilityZones sets the AvailabilityZones field's value. +func (s *PlacementType) SetAvailabilityZones(v []*string) *PlacementType { + s.AvailabilityZones = v + return s +} + // Please also see https://docs.aws.amazon.com/goto/WebAPI/elasticmapreduce-2009-03-31/PutAutoScalingPolicyInput type PutAutoScalingPolicyInput struct { _ struct{} `type:"structure"` @@ -6678,8 +8039,7 @@ type RunJobFlowInput struct { // to launch and terminate EC2 instances in an instance group. AutoScalingRole *string `type:"string"` - // A list of bootstrap actions that will be run before Hadoop is started on - // the cluster nodes. + // A list of bootstrap actions to run before Hadoop starts on the cluster nodes. BootstrapActions []*BootstrapActionConfig `type:"list"` // Amazon EMR releases 4.x or later. @@ -6687,8 +8047,7 @@ type RunJobFlowInput struct { // The list of configurations supplied for the EMR cluster you are creating. Configurations []*Configuration `type:"list"` - // A specification of the number and type of Amazon EC2 instances on which to - // run the job flow. + // A specification of the number and type of Amazon EC2 instances. // // Instances is a required field Instances *JobFlowInstancesConfig `type:"structure" required:"true"` @@ -6714,9 +8073,9 @@ type RunJobFlowInput struct { // A list of strings that indicates third-party software to use with the job // flow that accepts a user argument list. 
EMR accepts and forwards the argument // list to the corresponding installation script as bootstrap action arguments. - // For more information, see Launch a Job Flow on the MapR Distribution for - // Hadoop (http://docs.aws.amazon.com/ElasticMapReduce/latest/DeveloperGuide/emr-mapr.html). - // Currently supported values are: + // For more information, see "Launch a Job Flow on the MapR Distribution for + // Hadoop" in the Amazon EMR Developer Guide (http://docs.aws.amazon.com/http:/docs.aws.amazon.com/emr/latest/DeveloperGuide/emr-dg.pdf). + // Supported values are: // // * "mapr-m3" - launch the cluster using MapR M3 Edition. // @@ -6763,15 +8122,14 @@ type RunJobFlowInput struct { // resources on your behalf. ServiceRole *string `type:"string"` - // A list of steps to be executed by the job flow. + // A list of steps to run. Steps []*StepConfig `type:"list"` // For Amazon EMR releases 3.x and 2.x. For Amazon EMR releases 4.x and greater, // use Applications. // - // A list of strings that indicates third-party software to use with the job - // flow. For more information, see Use Third Party Applications with Amazon - // EMR (http://docs.aws.amazon.com/ElasticMapReduce/latest/DeveloperGuide/emr-supported-products.html). + // A list of strings that indicates third-party software to use. For more information, + // see Use Third Party Applications with Amazon EMR (http://docs.aws.amazon.com/ElasticMapReduce/latest/DeveloperGuide/emr-supported-products.html). // Currently supported values are: // // * "mapr-m3" - launch the job flow using MapR M3 Edition. @@ -6782,11 +8140,11 @@ type RunJobFlowInput struct { // A list of tags to associate with a cluster and propagate to Amazon EC2 instances. Tags []*Tag `type:"list"` - // Whether the job flow is visible to all IAM users of the AWS account associated - // with the job flow. If this value is set to true, all IAM users of that AWS + // Whether the cluster is visible to all IAM users of the AWS account associated + // with the cluster. If this value is set to true, all IAM users of that AWS // account can view and (if they have the proper policy permissions set) manage - // the job flow. If it is set to false, only the IAM user that created the job - // flow can view and manage it. + // the cluster. If it is set to false, only the IAM user that created the cluster + // can view and manage it. VisibleToAllUsers *bool `type:"boolean"` } @@ -7324,16 +8682,16 @@ func (s *SecurityConfigurationSummary) SetName(v string) *SecurityConfigurationS type SetTerminationProtectionInput struct { _ struct{} `type:"structure"` - // A list of strings that uniquely identify the job flows to protect. This identifier + // A list of strings that uniquely identify the clusters to protect. This identifier // is returned by RunJobFlow and can also be obtained from DescribeJobFlows // . // // JobFlowIds is a required field JobFlowIds []*string `type:"list" required:"true"` - // A Boolean that indicates whether to protect the job flow and prevent the - // Amazon EC2 instances in the cluster from shutting down due to API calls, - // user intervention, or job-flow error. + // A Boolean that indicates whether to protect the cluster and prevent the Amazon + // EC2 instances in the cluster from shutting down due to API calls, user intervention, + // or job-flow error. 
// // TerminationProtected is a required field TerminationProtected *bool `type:"boolean" required:"true"` @@ -7402,11 +8760,11 @@ type SetVisibleToAllUsersInput struct { // JobFlowIds is a required field JobFlowIds []*string `type:"list" required:"true"` - // Whether the specified job flows are visible to all IAM users of the AWS account - // associated with the job flow. If this value is set to True, all IAM users + // Whether the specified clusters are visible to all IAM users of the AWS account + // associated with the cluster. If this value is set to True, all IAM users // of that AWS account can view and, if they have the proper IAM policy permissions - // set, manage the job flows. If it is set to False, only the IAM user that - // created a job flow can view and manage it. + // set, manage the clusters. If it is set to False, only the IAM user that created + // a cluster can view and manage it. // // VisibleToAllUsers is a required field VisibleToAllUsers *bool `type:"boolean" required:"true"` @@ -7515,7 +8873,7 @@ type SimpleScalingPolicyConfiguration struct { // indicates that the EC2 instance count increments or decrements by ScalingAdjustment, // which should be expressed as an integer. PERCENT_CHANGE_IN_CAPACITY indicates // the instance count increments or decrements by the percentage specified by - // ScalingAdjustment, which should be expressed as a decimal, for example, 0.20 + // ScalingAdjustment, which should be expressed as a decimal. For example, 0.20 // indicates an increase in 20% increments of cluster capacity. EXACT_CAPACITY // indicates the scaling activity results in an instance group with the number // of EC2 instances specified by ScalingAdjustment, which should be expressed @@ -7580,6 +8938,86 @@ func (s *SimpleScalingPolicyConfiguration) SetScalingAdjustment(v int64) *Simple return s } +// The launch specification for Spot instances in the instance fleet, which +// determines the defined duration and provisioning timeout behavior. +// +// The instance fleet configuration is available only in Amazon EMR versions +// 4.8.0 and later, excluding 5.0.x versions. +// Please also see https://docs.aws.amazon.com/goto/WebAPI/elasticmapreduce-2009-03-31/SpotProvisioningSpecification +type SpotProvisioningSpecification struct { + _ struct{} `type:"structure"` + + // The defined duration for Spot instances (also known as Spot blocks) in minutes. + // When specified, the Spot instance does not terminate before the defined duration + // expires, and defined duration pricing for Spot instances applies. Valid values + // are 60, 120, 180, 240, 300, or 360. The duration period starts as soon as + // a Spot instance receives its instance ID. At the end of the duration, Amazon + // EC2 marks the Spot instance for termination and provides a Spot instance + // termination notice, which gives the instance a two-minute warning before + // it terminates. + BlockDurationMinutes *int64 `type:"integer"` + + // The action to take when TargetSpotCapacity has not been fulfilled when the + // TimeoutDurationMinutes has expired. Spot instances are not uprovisioned within + // the Spot provisioining timeout. Valid values are TERMINATE_CLUSTER and SWITCH_TO_ON_DEMAND + // to fulfill the remaining capacity. + // + // TimeoutAction is a required field + TimeoutAction *string `type:"string" required:"true" enum:"SpotProvisioningTimeoutAction"` + + // The spot provisioning timeout period in minutes. If Spot instances are not + // provisioned within this time period, the TimeOutAction is taken. 
Minimum + // value is 5 and maximum value is 1440. The timeout applies only during initial + // provisioning, when the cluster is first created. + // + // TimeoutDurationMinutes is a required field + TimeoutDurationMinutes *int64 `type:"integer" required:"true"` +} + +// String returns the string representation +func (s SpotProvisioningSpecification) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SpotProvisioningSpecification) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *SpotProvisioningSpecification) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "SpotProvisioningSpecification"} + if s.TimeoutAction == nil { + invalidParams.Add(request.NewErrParamRequired("TimeoutAction")) + } + if s.TimeoutDurationMinutes == nil { + invalidParams.Add(request.NewErrParamRequired("TimeoutDurationMinutes")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBlockDurationMinutes sets the BlockDurationMinutes field's value. +func (s *SpotProvisioningSpecification) SetBlockDurationMinutes(v int64) *SpotProvisioningSpecification { + s.BlockDurationMinutes = &v + return s +} + +// SetTimeoutAction sets the TimeoutAction field's value. +func (s *SpotProvisioningSpecification) SetTimeoutAction(v string) *SpotProvisioningSpecification { + s.TimeoutAction = &v + return s +} + +// SetTimeoutDurationMinutes sets the TimeoutDurationMinutes field's value. +func (s *SpotProvisioningSpecification) SetTimeoutDurationMinutes(v int64) *SpotProvisioningSpecification { + s.TimeoutDurationMinutes = &v + return s +} + // This represents a step in a cluster. // Please also see https://docs.aws.amazon.com/goto/WebAPI/elasticmapreduce-2009-03-31/Step type Step struct { @@ -7642,20 +9080,20 @@ func (s *Step) SetStatus(v *StepStatus) *Step { return s } -// Specification of a job flow step. +// Specification of a cluster (job flow) step. // Please also see https://docs.aws.amazon.com/goto/WebAPI/elasticmapreduce-2009-03-31/StepConfig type StepConfig struct { _ struct{} `type:"structure"` - // The action to take if the job flow step fails. + // The action to take if the step fails. ActionOnFailure *string `type:"string" enum:"ActionOnFailure"` - // The JAR file used for the job flow step. + // The JAR file used for the step. // // HadoopJarStep is a required field HadoopJarStep *HadoopJarStepConfig `type:"structure" required:"true"` - // The name of the job flow step. + // The name of the step. // // Name is a required field Name *string `type:"string" required:"true"` @@ -7767,7 +9205,7 @@ type StepExecutionStatusDetail struct { // The start date and time of the step. StartDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` - // The state of the job flow step. + // The state of the step. 
// // State is a required field State *string `type:"string" required:"true" enum:"StepExecutionState"` @@ -8326,6 +9764,62 @@ const ( ComparisonOperatorLessThanOrEqual = "LESS_THAN_OR_EQUAL" ) +const ( + // InstanceCollectionTypeInstanceFleet is a InstanceCollectionType enum value + InstanceCollectionTypeInstanceFleet = "INSTANCE_FLEET" + + // InstanceCollectionTypeInstanceGroup is a InstanceCollectionType enum value + InstanceCollectionTypeInstanceGroup = "INSTANCE_GROUP" +) + +const ( + // InstanceFleetStateProvisioning is a InstanceFleetState enum value + InstanceFleetStateProvisioning = "PROVISIONING" + + // InstanceFleetStateBootstrapping is a InstanceFleetState enum value + InstanceFleetStateBootstrapping = "BOOTSTRAPPING" + + // InstanceFleetStateRunning is a InstanceFleetState enum value + InstanceFleetStateRunning = "RUNNING" + + // InstanceFleetStateResizing is a InstanceFleetState enum value + InstanceFleetStateResizing = "RESIZING" + + // InstanceFleetStateSuspended is a InstanceFleetState enum value + InstanceFleetStateSuspended = "SUSPENDED" + + // InstanceFleetStateTerminating is a InstanceFleetState enum value + InstanceFleetStateTerminating = "TERMINATING" + + // InstanceFleetStateTerminated is a InstanceFleetState enum value + InstanceFleetStateTerminated = "TERMINATED" +) + +const ( + // InstanceFleetStateChangeReasonCodeInternalError is a InstanceFleetStateChangeReasonCode enum value + InstanceFleetStateChangeReasonCodeInternalError = "INTERNAL_ERROR" + + // InstanceFleetStateChangeReasonCodeValidationError is a InstanceFleetStateChangeReasonCode enum value + InstanceFleetStateChangeReasonCodeValidationError = "VALIDATION_ERROR" + + // InstanceFleetStateChangeReasonCodeInstanceFailure is a InstanceFleetStateChangeReasonCode enum value + InstanceFleetStateChangeReasonCodeInstanceFailure = "INSTANCE_FAILURE" + + // InstanceFleetStateChangeReasonCodeClusterTerminated is a InstanceFleetStateChangeReasonCode enum value + InstanceFleetStateChangeReasonCodeClusterTerminated = "CLUSTER_TERMINATED" +) + +const ( + // InstanceFleetTypeMaster is a InstanceFleetType enum value + InstanceFleetTypeMaster = "MASTER" + + // InstanceFleetTypeCore is a InstanceFleetType enum value + InstanceFleetTypeCore = "CORE" + + // InstanceFleetTypeTask is a InstanceFleetType enum value + InstanceFleetTypeTask = "TASK" +) + const ( // InstanceGroupStateProvisioning is a InstanceGroupState enum value InstanceGroupStateProvisioning = "PROVISIONING" @@ -8471,6 +9965,14 @@ const ( ScaleDownBehaviorTerminateAtTaskCompletion = "TERMINATE_AT_TASK_COMPLETION" ) +const ( + // SpotProvisioningTimeoutActionSwitchToOnDemand is a SpotProvisioningTimeoutAction enum value + SpotProvisioningTimeoutActionSwitchToOnDemand = "SWITCH_TO_ON_DEMAND" + + // SpotProvisioningTimeoutActionTerminateCluster is a SpotProvisioningTimeoutAction enum value + SpotProvisioningTimeoutActionTerminateCluster = "TERMINATE_CLUSTER" +) + const ( // StatisticSampleCount is a Statistic enum value StatisticSampleCount = "SAMPLE_COUNT" diff --git a/vendor/github.com/aws/aws-sdk-go/service/emr/waiters.go b/vendor/github.com/aws/aws-sdk-go/service/emr/waiters.go index ccf17eb963..443240d3da 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/emr/waiters.go +++ b/vendor/github.com/aws/aws-sdk-go/service/emr/waiters.go @@ -57,6 +57,39 @@ func (c *EMR) WaitUntilClusterRunning(input *DescribeClusterInput) error { return w.Wait() } +// WaitUntilClusterTerminated uses the Amazon EMR API operation +// DescribeCluster to wait for a condition to be 
met before returning. +// If the condition is not meet within the max attempt window an error will +// be returned. +func (c *EMR) WaitUntilClusterTerminated(input *DescribeClusterInput) error { + waiterCfg := waiter.Config{ + Operation: "DescribeCluster", + Delay: 30, + MaxAttempts: 60, + Acceptors: []waiter.WaitAcceptor{ + { + State: "success", + Matcher: "path", + Argument: "Cluster.Status.State", + Expected: "TERMINATED", + }, + { + State: "failure", + Matcher: "path", + Argument: "Cluster.Status.State", + Expected: "TERMINATED_WITH_ERRORS", + }, + }, + } + + w := waiter.Waiter{ + Client: c, + Input: input, + Config: waiterCfg, + } + return w.Wait() +} + // WaitUntilStepComplete uses the Amazon EMR API operation // DescribeStep to wait for a condition to be met before returning. // If the condition is not meet within the max attempt window an error will diff --git a/vendor/github.com/aws/aws-sdk-go/service/rds/customizations.go b/vendor/github.com/aws/aws-sdk-go/service/rds/customizations.go index d3023d1f76..d412fb282b 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/rds/customizations.go +++ b/vendor/github.com/aws/aws-sdk-go/service/rds/customizations.go @@ -29,6 +29,8 @@ func fillPresignedURL(r *request.Request) { fns := map[string]func(r *request.Request){ opCopyDBSnapshot: copyDBSnapshotPresign, opCreateDBInstanceReadReplica: createDBInstanceReadReplicaPresign, + opCopyDBClusterSnapshot: copyDBClusterSnapshotPresign, + opCreateDBCluster: createDBClusterPresign, } if !r.ParamsFilled() { return @@ -41,7 +43,7 @@ func fillPresignedURL(r *request.Request) { func copyDBSnapshotPresign(r *request.Request) { originParams := r.Params.(*CopyDBSnapshotInput) - if originParams.PreSignedUrl != nil || originParams.DestinationRegion != nil { + if originParams.SourceRegion == nil || originParams.PreSignedUrl != nil || originParams.DestinationRegion != nil { return } @@ -53,7 +55,7 @@ func copyDBSnapshotPresign(r *request.Request) { func createDBInstanceReadReplicaPresign(r *request.Request) { originParams := r.Params.(*CreateDBInstanceReadReplicaInput) - if originParams.PreSignedUrl != nil || originParams.DestinationRegion != nil { + if originParams.SourceRegion == nil || originParams.PreSignedUrl != nil || originParams.DestinationRegion != nil { return } @@ -62,6 +64,30 @@ func createDBInstanceReadReplicaPresign(r *request.Request) { originParams.PreSignedUrl = presignURL(r, originParams.SourceRegion, newParams) } +func copyDBClusterSnapshotPresign(r *request.Request) { + originParams := r.Params.(*CopyDBClusterSnapshotInput) + + if originParams.SourceRegion == nil || originParams.PreSignedUrl != nil || originParams.DestinationRegion != nil { + return + } + + originParams.DestinationRegion = r.Config.Region + newParams := awsutil.CopyOf(r.Params).(*CopyDBClusterSnapshotInput) + originParams.PreSignedUrl = presignURL(r, originParams.SourceRegion, newParams) +} + +func createDBClusterPresign(r *request.Request) { + originParams := r.Params.(*CreateDBClusterInput) + + if originParams.SourceRegion == nil || originParams.PreSignedUrl != nil || originParams.DestinationRegion != nil { + return + } + + originParams.DestinationRegion = r.Config.Region + newParams := awsutil.CopyOf(r.Params).(*CreateDBClusterInput) + originParams.PreSignedUrl = presignURL(r, originParams.SourceRegion, newParams) +} + // presignURL will presign the request by using SoureRegion to sign with. SourceRegion is not // sent to the service, and is only used to not have the SDKs parsing ARNs. 
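
With the two presign hooks registered above, setting `SourceRegion` on a `CopyDBClusterSnapshotInput` or `CreateDBClusterInput` is now sufficient: `fillPresignedURL` populates `PreSignedUrl` before the request is sent, mirroring the existing behavior for `CopyDBSnapshot` and `CreateDBInstanceReadReplica`. A minimal sketch with placeholder identifiers and regions:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/rds"
)

func main() {
	// The client runs in the destination region; SourceRegion on the input
	// triggers the presign handler added in customizations.go.
	client := rds.New(session.New(aws.NewConfig().WithRegion("us-east-1")))

	_, err := client.CopyDBClusterSnapshot(&rds.CopyDBClusterSnapshotInput{
		SourceDBClusterSnapshotIdentifier: aws.String("arn:aws:rds:us-west-2:123456789012:cluster-snapshot:src"), // placeholder
		TargetDBClusterSnapshotIdentifier: aws.String("copied-snapshot"),                                         // placeholder
		SourceRegion:                      aws.String("us-west-2"),
		// PreSignedUrl is intentionally left unset; the presign hook fills it.
	})
	if err != nil {
		log.Fatal(err)
	}
}
```
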
func presignURL(r *request.Request, sourceRegion *string, newParams interface{}) *string { diff --git a/vendor/github.com/circonus-labs/circonus-gometrics/LICENSE b/vendor/github.com/circonus-labs/circonus-gometrics/LICENSE new file mode 100644 index 0000000000..761798c3b3 --- /dev/null +++ b/vendor/github.com/circonus-labs/circonus-gometrics/LICENSE @@ -0,0 +1,28 @@ +Copyright (c) 2016, Circonus, Inc. All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + + * Redistributions of source code must retain the above copyright + notice, this list of conditions and the following disclaimer. + * Redistributions in binary form must reproduce the above + copyright notice, this list of conditions and the following + disclaimer in the documentation and/or other materials provided + with the distribution. + * Neither the name Circonus, Inc. nor the names + of its contributors may be used to endorse or promote products + derived from this software without specific prior written + permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/vendor/github.com/circonus-labs/circonus-gometrics/api/README.md b/vendor/github.com/circonus-labs/circonus-gometrics/api/README.md new file mode 100644 index 0000000000..8f286b79f7 --- /dev/null +++ b/vendor/github.com/circonus-labs/circonus-gometrics/api/README.md @@ -0,0 +1,163 @@ +## Circonus API package + +Full api documentation (for using *this* package) is available at [godoc.org](https://godoc.org/github.com/circonus-labs/circonus-gometrics/api). Links in the lists below go directly to the generic Circonus API documentation for the endpoint. + +### Straight [raw] API access + +* Get +* Post (for creates) +* Put (for updates) +* Delete + +### Helpers for currently supported API endpoints + +> Note, these interfaces are still being actively developed. For example, many of the `New*` methods only return an empty struct; sensible defaults will be added going forward. Other, common helper methods for the various endpoints may be added as use cases emerge. The organization +of the API may change if common use contexts would benefit significantly. 
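
Before any of the endpoint helpers below can be used, a client has to be constructed with `api.New` (defined in `api.go` in this package). A minimal sketch; the token value is a placeholder, and `URL`/`TokenApp` fall back to the package defaults when left empty:

```go
package main

import (
	"log"

	"github.com/circonus-labs/circonus-gometrics/api"
)

func main() {
	client, err := api.New(&api.Config{
		TokenKey: "...", // placeholder: a valid Circonus API token is required
	})
	if err != nil {
		log.Fatal(err)
	}

	brokers, err := client.FetchBrokers()
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("found %d brokers", len(*brokers))
}
```
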
+ +* [Account](https://login.circonus.com/resources/api/calls/account) + * FetchAccount + * FetchAccounts + * UpdateAccount + * SearchAccounts +* [Acknowledgement](https://login.circonus.com/resources/api/calls/acknowledgement) + * NewAcknowledgement + * FetchAcknowledgement + * FetchAcknowledgements + * UpdateAcknowledgement + * CreateAcknowledgement + * DeleteAcknowledgement + * DeleteAcknowledgementByCID + * SearchAcknowledgements +* [Alert](https://login.circonus.com/resources/api/calls/alert) + * FetchAlert + * FetchAlerts + * SearchAlerts +* [Annotation](https://login.circonus.com/resources/api/calls/annotation) + * NewAnnotation + * FetchAnnotation + * FetchAnnotations + * UpdateAnnotation + * CreateAnnotation + * DeleteAnnotation + * DeleteAnnotationByCID + * SearchAnnotations +* [Broker](https://login.circonus.com/resources/api/calls/broker) + * FetchBroker + * FetchBrokers + * SearchBrokers +* [Check Bundle](https://login.circonus.com/resources/api/calls/check_bundle) + * NewCheckBundle + * FetchCheckBundle + * FetchCheckBundles + * UpdateCheckBundle + * CreateCheckBundle + * DeleteCheckBundle + * DeleteCheckBundleByCID + * SearchCheckBundles +* [Check Bundle Metrics](https://login.circonus.com/resources/api/calls/check_bundle_metrics) + * FetchCheckBundleMetrics + * UpdateCheckBundleMetrics +* [Check](https://login.circonus.com/resources/api/calls/check) + * FetchCheck + * FetchChecks + * SearchChecks +* [Contact Group](https://login.circonus.com/resources/api/calls/contact_group) + * NewContactGroup + * FetchContactGroup + * FetchContactGroups + * UpdateContactGroup + * CreateContactGroup + * DeleteContactGroup + * DeleteContactGroupByCID + * SearchContactGroups +* [Dashboard](https://login.circonus.com/resources/api/calls/dashboard) -- note, this is a work in progress, the methods/types may still change + * NewDashboard + * FetchDashboard + * FetchDashboards + * UpdateDashboard + * CreateDashboard + * DeleteDashboard + * DeleteDashboardByCID + * SearchDashboards +* [Graph](https://login.circonus.com/resources/api/calls/graph) + * NewGraph + * FetchGraph + * FetchGraphs + * UpdateGraph + * CreateGraph + * DeleteGraph + * DeleteGraphByCID + * SearchGraphs +* [Metric Cluster](https://login.circonus.com/resources/api/calls/metric_cluster) + * NewMetricCluster + * FetchMetricCluster + * FetchMetricClusters + * UpdateMetricCluster + * CreateMetricCluster + * DeleteMetricCluster + * DeleteMetricClusterByCID + * SearchMetricClusters +* [Metric](https://login.circonus.com/resources/api/calls/metric) + * FetchMetric + * FetchMetrics + * UpdateMetric + * SearchMetrics +* [Maintenance window](https://login.circonus.com/resources/api/calls/maintenance) + * NewMaintenanceWindow + * FetchMaintenanceWindow + * FetchMaintenanceWindows + * UpdateMaintenanceWindow + * CreateMaintenanceWindow + * DeleteMaintenanceWindow + * DeleteMaintenanceWindowByCID + * SearchMaintenanceWindows +* [Outlier Report](https://login.circonus.com/resources/api/calls/outlier_report) + * NewOutlierReport + * FetchOutlierReport + * FetchOutlierReports + * UpdateOutlierReport + * CreateOutlierReport + * DeleteOutlierReport + * DeleteOutlierReportByCID + * SearchOutlierReports +* [Provision Broker](https://login.circonus.com/resources/api/calls/provision_broker) + * NewProvisionBroker + * FetchProvisionBroker + * UpdateProvisionBroker + * CreateProvisionBroker +* [Rule Set](https://login.circonus.com/resources/api/calls/rule_set) + * NewRuleset + * FetchRuleset + * FetchRulesets + * UpdateRuleset + * CreateRuleset + * 
DeleteRuleset + * DeleteRulesetByCID + * SearchRulesets +* [Rule Set Group](https://login.circonus.com/resources/api/calls/rule_set_group) + * NewRulesetGroup + * FetchRulesetGroup + * FetchRulesetGroups + * UpdateRulesetGroup + * CreateRulesetGroup + * DeleteRulesetGroup + * DeleteRulesetGroupByCID + * SearchRulesetGroups +* [User](https://login.circonus.com/resources/api/calls/user) + * FetchUser + * FetchUsers + * UpdateUser + * SearchUsers +* [Worksheet](https://login.circonus.com/resources/api/calls/worksheet) + * NewWorksheet + * FetchWorksheet + * FetchWorksheets + * UpdateWorksheet + * CreateWorksheet + * DeleteWorksheet + * DeleteWorksheetByCID + * SearchWorksheets + +--- + +Unless otherwise noted, the source files are distributed under the BSD-style license found in the LICENSE file. diff --git a/vendor/github.com/circonus-labs/circonus-gometrics/api/account.go b/vendor/github.com/circonus-labs/circonus-gometrics/api/account.go new file mode 100644 index 0000000000..dd8ff577d1 --- /dev/null +++ b/vendor/github.com/circonus-labs/circonus-gometrics/api/account.go @@ -0,0 +1,181 @@ +// Copyright 2016 Circonus, Inc. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Account API support - Fetch and Update +// See: https://login.circonus.com/resources/api/calls/account +// Note: Create and Delete are not supported for Accounts via the API + +package api + +import ( + "encoding/json" + "fmt" + "net/url" + "regexp" + + "github.com/circonus-labs/circonus-gometrics/api/config" +) + +// AccountLimit defines a usage limit imposed on account +type AccountLimit struct { + Limit uint `json:"_limit,omitempty"` // uint >=0 + Type string `json:"_type,omitempty"` // string + Used uint `json:"_used,omitempty"` // uint >=0 +} + +// AccountInvite defines outstanding invites +type AccountInvite struct { + Email string `json:"email"` // string + Role string `json:"role"` // string +} + +// AccountUser defines current users +type AccountUser struct { + Role string `json:"role"` // string + UserCID string `json:"user"` // string +} + +// Account defines an account. See https://login.circonus.com/resources/api/calls/account for more information. +type Account struct { + Address1 *string `json:"address1,omitempty"` // string or null + Address2 *string `json:"address2,omitempty"` // string or null + CCEmail *string `json:"cc_email,omitempty"` // string or null + CID string `json:"_cid,omitempty"` // string + City *string `json:"city,omitempty"` // string or null + ContactGroups []string `json:"_contact_groups,omitempty"` // [] len >= 0 + Country string `json:"country_code,omitempty"` // string + Description *string `json:"description,omitempty"` // string or null + Invites []AccountInvite `json:"invites,omitempty"` // [] len >= 0 + Name string `json:"name,omitempty"` // string + OwnerCID string `json:"_owner,omitempty"` // string + StateProv *string `json:"state_prov,omitempty"` // string or null + Timezone string `json:"timezone,omitempty"` // string + UIBaseURL string `json:"_ui_base_url,omitempty"` // string + Usage []AccountLimit `json:"_usage,omitempty"` // [] len >= 0 + Users []AccountUser `json:"users,omitempty"` // [] len >= 0 +} + +// FetchAccount retrieves account with passed cid. Pass nil for '/account/current'. 
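
As the comment above notes, a nil CID resolves to the special `/account/current` endpoint. A short sketch of both forms, reusing a client built with `api.New` as in the README sketch (the explicit CID is a placeholder):

```go
package main

import (
	"log"

	"github.com/circonus-labs/circonus-gometrics/api"
)

func main() {
	client, err := api.New(&api.Config{TokenKey: "..."}) // placeholder token
	if err != nil {
		log.Fatal(err)
	}

	// nil fetches the account that owns the API token.
	current, err := client.FetchAccount(nil)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("current account: %s", current.Name)

	// A specific account is addressed by its CID (placeholder value).
	cid := "/account/1234"
	if _, err := client.FetchAccount(api.CIDType(&cid)); err != nil {
		log.Fatal(err)
	}
}
```
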
+func (a *API) FetchAccount(cid CIDType) (*Account, error) { + var accountCID string + + if cid == nil || *cid == "" { + accountCID = config.AccountPrefix + "/current" + } else { + accountCID = string(*cid) + } + + matched, err := regexp.MatchString(config.AccountCIDRegex, accountCID) + if err != nil { + return nil, err + } + if !matched { + return nil, fmt.Errorf("Invalid account CID [%s]", accountCID) + } + + result, err := a.Get(accountCID) + if err != nil { + return nil, err + } + + if a.Debug { + a.Log.Printf("[DEBUG] account fetch, received JSON: %s", string(result)) + } + + account := new(Account) + if err := json.Unmarshal(result, account); err != nil { + return nil, err + } + + return account, nil +} + +// FetchAccounts retrieves all accounts available to the API Token. +func (a *API) FetchAccounts() (*[]Account, error) { + result, err := a.Get(config.AccountPrefix) + if err != nil { + return nil, err + } + + var accounts []Account + if err := json.Unmarshal(result, &accounts); err != nil { + return nil, err + } + + return &accounts, nil +} + +// UpdateAccount updates passed account. +func (a *API) UpdateAccount(cfg *Account) (*Account, error) { + if cfg == nil { + return nil, fmt.Errorf("Invalid account config [nil]") + } + + accountCID := string(cfg.CID) + + matched, err := regexp.MatchString(config.AccountCIDRegex, accountCID) + if err != nil { + return nil, err + } + if !matched { + return nil, fmt.Errorf("Invalid account CID [%s]", accountCID) + } + + jsonCfg, err := json.Marshal(cfg) + if err != nil { + return nil, err + } + + if a.Debug { + a.Log.Printf("[DEBUG] account update, sending JSON: %s", string(jsonCfg)) + } + + result, err := a.Put(accountCID, jsonCfg) + if err != nil { + return nil, err + } + + account := &Account{} + if err := json.Unmarshal(result, account); err != nil { + return nil, err + } + + return account, nil +} + +// SearchAccounts returns accounts matching a filter (search queries are not +// suppoted by the account endpoint). Pass nil as filter for all accounts the +// API Token can access. +func (a *API) SearchAccounts(filterCriteria *SearchFilterType) (*[]Account, error) { + q := url.Values{} + + if filterCriteria != nil && len(*filterCriteria) > 0 { + for filter, criteria := range *filterCriteria { + for _, val := range criteria { + q.Add(filter, val) + } + } + } + + if q.Encode() == "" { + return a.FetchAccounts() + } + + reqURL := url.URL{ + Path: config.AccountPrefix, + RawQuery: q.Encode(), + } + + result, err := a.Get(reqURL.String()) + if err != nil { + return nil, fmt.Errorf("[ERROR] API call error %+v", err) + } + + var accounts []Account + if err := json.Unmarshal(result, &accounts); err != nil { + return nil, err + } + + return &accounts, nil +} diff --git a/vendor/github.com/circonus-labs/circonus-gometrics/api/acknowledgement.go b/vendor/github.com/circonus-labs/circonus-gometrics/api/acknowledgement.go new file mode 100644 index 0000000000..f6da51d4d4 --- /dev/null +++ b/vendor/github.com/circonus-labs/circonus-gometrics/api/acknowledgement.go @@ -0,0 +1,190 @@ +// Copyright 2016 Circonus, Inc. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. 
+ +// Acknowledgement API support - Fetch, Create, Update, Delete*, and Search +// See: https://login.circonus.com/resources/api/calls/acknowledgement +// * : delete (cancel) by updating with AcknowledgedUntil set to 0 + +package api + +import ( + "encoding/json" + "fmt" + "net/url" + "regexp" + + "github.com/circonus-labs/circonus-gometrics/api/config" +) + +// Acknowledgement defines a acknowledgement. See https://login.circonus.com/resources/api/calls/acknowledgement for more information. +type Acknowledgement struct { + AcknowledgedBy string `json:"_acknowledged_by,omitempty"` // string + AcknowledgedOn uint `json:"_acknowledged_on,omitempty"` // uint + AcknowledgedUntil interface{} `json:"acknowledged_until,omitempty"` // NOTE received as uint; can be set using string or uint + Active bool `json:"_active,omitempty"` // bool + AlertCID string `json:"alert,omitempty"` // string + CID string `json:"_cid,omitempty"` // string + LastModified uint `json:"_last_modified,omitempty"` // uint + LastModifiedBy string `json:"_last_modified_by,omitempty"` // string + Notes string `json:"notes,omitempty"` // string +} + +// NewAcknowledgement returns new Acknowledgement (with defaults, if applicable). +func NewAcknowledgement() *Acknowledgement { + return &Acknowledgement{} +} + +// FetchAcknowledgement retrieves acknowledgement with passed cid. +func (a *API) FetchAcknowledgement(cid CIDType) (*Acknowledgement, error) { + if cid == nil || *cid == "" { + return nil, fmt.Errorf("Invalid acknowledgement CID [none]") + } + + acknowledgementCID := string(*cid) + + matched, err := regexp.MatchString(config.AcknowledgementCIDRegex, acknowledgementCID) + if err != nil { + return nil, err + } + if !matched { + return nil, fmt.Errorf("Invalid acknowledgement CID [%s]", acknowledgementCID) + } + + result, err := a.Get(acknowledgementCID) + if err != nil { + return nil, err + } + + if a.Debug { + a.Log.Printf("[DEBUG] acknowledgement fetch, received JSON: %s", string(result)) + } + + acknowledgement := &Acknowledgement{} + if err := json.Unmarshal(result, acknowledgement); err != nil { + return nil, err + } + + return acknowledgement, nil +} + +// FetchAcknowledgements retrieves all acknowledgements available to the API Token. +func (a *API) FetchAcknowledgements() (*[]Acknowledgement, error) { + result, err := a.Get(config.AcknowledgementPrefix) + if err != nil { + return nil, err + } + + var acknowledgements []Acknowledgement + if err := json.Unmarshal(result, &acknowledgements); err != nil { + return nil, err + } + + return &acknowledgements, nil +} + +// UpdateAcknowledgement updates passed acknowledgement. 
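
Per the file header, an acknowledgement is cancelled by updating it with `acknowledged_until` set to 0 rather than by a DELETE. A sketch of that pattern; the CID and token are placeholders, and the `uint(0)` form relies on the field note above (settable as string or uint):

```go
package main

import (
	"log"

	"github.com/circonus-labs/circonus-gometrics/api"
)

// cancelAck cancels an active acknowledgement by zeroing AcknowledgedUntil.
func cancelAck(client *api.API, cid string) error {
	ack, err := client.FetchAcknowledgement(api.CIDType(&cid))
	if err != nil {
		return err
	}
	ack.AcknowledgedUntil = uint(0) // the API treats this update as a cancel
	_, err = client.UpdateAcknowledgement(ack)
	return err
}

func main() {
	client, err := api.New(&api.Config{TokenKey: "..."}) // placeholder token
	if err != nil {
		log.Fatal(err)
	}
	if err := cancelAck(client, "/acknowledgement/1234"); err != nil { // placeholder CID
		log.Fatal(err)
	}
}
```
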
+func (a *API) UpdateAcknowledgement(cfg *Acknowledgement) (*Acknowledgement, error) { + if cfg == nil { + return nil, fmt.Errorf("Invalid acknowledgement config [nil]") + } + + acknowledgementCID := string(cfg.CID) + + matched, err := regexp.MatchString(config.AcknowledgementCIDRegex, acknowledgementCID) + if err != nil { + return nil, err + } + if !matched { + return nil, fmt.Errorf("Invalid acknowledgement CID [%s]", acknowledgementCID) + } + + jsonCfg, err := json.Marshal(cfg) + if err != nil { + return nil, err + } + + if a.Debug { + a.Log.Printf("[DEBUG] acknowledgement update, sending JSON: %s", string(jsonCfg)) + } + + result, err := a.Put(acknowledgementCID, jsonCfg) + if err != nil { + return nil, err + } + + acknowledgement := &Acknowledgement{} + if err := json.Unmarshal(result, acknowledgement); err != nil { + return nil, err + } + + return acknowledgement, nil +} + +// CreateAcknowledgement creates a new acknowledgement. +func (a *API) CreateAcknowledgement(cfg *Acknowledgement) (*Acknowledgement, error) { + if cfg == nil { + return nil, fmt.Errorf("Invalid acknowledgement config [nil]") + } + + jsonCfg, err := json.Marshal(cfg) + if err != nil { + return nil, err + } + + result, err := a.Post(config.AcknowledgementPrefix, jsonCfg) + if err != nil { + return nil, err + } + + if a.Debug { + a.Log.Printf("[DEBUG] acknowledgement create, sending JSON: %s", string(jsonCfg)) + } + + acknowledgement := &Acknowledgement{} + if err := json.Unmarshal(result, acknowledgement); err != nil { + return nil, err + } + + return acknowledgement, nil +} + +// SearchAcknowledgements returns acknowledgements matching +// the specified search query and/or filter. If nil is passed for +// both parameters all acknowledgements will be returned. +func (a *API) SearchAcknowledgements(searchCriteria *SearchQueryType, filterCriteria *SearchFilterType) (*[]Acknowledgement, error) { + q := url.Values{} + + if searchCriteria != nil && *searchCriteria != "" { + q.Set("search", string(*searchCriteria)) + } + + if filterCriteria != nil && len(*filterCriteria) > 0 { + for filter, criteria := range *filterCriteria { + for _, val := range criteria { + q.Add(filter, val) + } + } + } + + if q.Encode() == "" { + return a.FetchAcknowledgements() + } + + reqURL := url.URL{ + Path: config.AcknowledgementPrefix, + RawQuery: q.Encode(), + } + + result, err := a.Get(reqURL.String()) + if err != nil { + return nil, fmt.Errorf("[ERROR] API call error %+v", err) + } + + var acknowledgements []Acknowledgement + if err := json.Unmarshal(result, &acknowledgements); err != nil { + return nil, err + } + + return &acknowledgements, nil +} diff --git a/vendor/github.com/circonus-labs/circonus-gometrics/api/alert.go b/vendor/github.com/circonus-labs/circonus-gometrics/api/alert.go new file mode 100644 index 0000000000..a242d3d858 --- /dev/null +++ b/vendor/github.com/circonus-labs/circonus-gometrics/api/alert.go @@ -0,0 +1,131 @@ +// Copyright 2016 Circonus, Inc. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Alert API support - Fetch and Search +// See: https://login.circonus.com/resources/api/calls/alert + +package api + +import ( + "encoding/json" + "fmt" + "net/url" + "regexp" + + "github.com/circonus-labs/circonus-gometrics/api/config" +) + +// Alert defines a alert. See https://login.circonus.com/resources/api/calls/alert for more information. 
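
Alerts are read-only through this package (fetch and search only, per the header above). A quick sketch of fetching one by CID; the CID and token values are placeholders:

```go
package main

import (
	"log"

	"github.com/circonus-labs/circonus-gometrics/api"
)

func main() {
	client, err := api.New(&api.Config{TokenKey: "..."}) // placeholder token
	if err != nil {
		log.Fatal(err)
	}

	cid := "/alert/1234" // placeholder CID
	alert, err := client.FetchAlert(api.CIDType(&cid))
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("severity %d alert on metric %s", alert.Severity, alert.MetricName)
}
```
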
+type Alert struct { + AcknowledgementCID *string `json:"_acknowledgement,omitempty"` // string or null + AlertURL string `json:"_alert_url,omitempty"` // string + BrokerCID string `json:"_broker,omitempty"` // string + CheckCID string `json:"_check,omitempty"` // string + CheckName string `json:"_check_name,omitempty"` // string + CID string `json:"_cid,omitempty"` // string + ClearedOn *uint `json:"_cleared_on,omitempty"` // uint or null + ClearedValue *string `json:"_cleared_value,omitempty"` // string or null + Maintenance []string `json:"_maintenance,omitempty"` // [] len >= 0 + MetricLinkURL *string `json:"_metric_link,omitempty"` // string or null + MetricName string `json:"_metric_name,omitempty"` // string + MetricNotes *string `json:"_metric_notes,omitempty"` // string or null + OccurredOn uint `json:"_occurred_on,omitempty"` // uint + RuleSetCID string `json:"_rule_set,omitempty"` // string + Severity uint `json:"_severity,omitempty"` // uint + Tags []string `json:"_tags,omitempty"` // [] len >= 0 + Value string `json:"_value,omitempty"` // string +} + +// NewAlert returns a new alert (with defaults, if applicable) +func NewAlert() *Alert { + return &Alert{} +} + +// FetchAlert retrieves alert with passed cid. +func (a *API) FetchAlert(cid CIDType) (*Alert, error) { + if cid == nil || *cid == "" { + return nil, fmt.Errorf("Invalid alert CID [none]") + } + + alertCID := string(*cid) + + matched, err := regexp.MatchString(config.AlertCIDRegex, alertCID) + if err != nil { + return nil, err + } + if !matched { + return nil, fmt.Errorf("Invalid alert CID [%s]", alertCID) + } + + result, err := a.Get(alertCID) + if err != nil { + return nil, err + } + + if a.Debug { + a.Log.Printf("[DEBUG] fetch alert, received JSON: %s", string(result)) + } + + alert := &Alert{} + if err := json.Unmarshal(result, alert); err != nil { + return nil, err + } + + return alert, nil +} + +// FetchAlerts retrieves all alerts available to the API Token. +func (a *API) FetchAlerts() (*[]Alert, error) { + result, err := a.Get(config.AlertPrefix) + if err != nil { + return nil, err + } + + var alerts []Alert + if err := json.Unmarshal(result, &alerts); err != nil { + return nil, err + } + + return &alerts, nil +} + +// SearchAlerts returns alerts matching the specified search query +// and/or filter. If nil is passed for both parameters all alerts +// will be returned. +func (a *API) SearchAlerts(searchCriteria *SearchQueryType, filterCriteria *SearchFilterType) (*[]Alert, error) { + q := url.Values{} + + if searchCriteria != nil && *searchCriteria != "" { + q.Set("search", string(*searchCriteria)) + } + + if filterCriteria != nil && len(*filterCriteria) > 0 { + for filter, criteria := range *filterCriteria { + for _, val := range criteria { + q.Add(filter, val) + } + } + } + + if q.Encode() == "" { + return a.FetchAlerts() + } + + reqURL := url.URL{ + Path: config.AlertPrefix, + RawQuery: q.Encode(), + } + + result, err := a.Get(reqURL.String()) + if err != nil { + return nil, fmt.Errorf("[ERROR] API call error %+v", err) + } + + var alerts []Alert + if err := json.Unmarshal(result, &alerts); err != nil { + return nil, err + } + + return &alerts, nil +} diff --git a/vendor/github.com/circonus-labs/circonus-gometrics/api/annotation.go b/vendor/github.com/circonus-labs/circonus-gometrics/api/annotation.go new file mode 100644 index 0000000000..589ec6da90 --- /dev/null +++ b/vendor/github.com/circonus-labs/circonus-gometrics/api/annotation.go @@ -0,0 +1,223 @@ +// Copyright 2016 Circonus, Inc. 
All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Annotation API support - Fetch, Create, Update, Delete, and Search +// See: https://login.circonus.com/resources/api/calls/annotation + +package api + +import ( + "encoding/json" + "fmt" + "net/url" + "regexp" + + "github.com/circonus-labs/circonus-gometrics/api/config" +) + +// Annotation defines a annotation. See https://login.circonus.com/resources/api/calls/annotation for more information. +type Annotation struct { + Category string `json:"category"` // string + CID string `json:"_cid,omitempty"` // string + Created uint `json:"_created,omitempty"` // uint + Description string `json:"description"` // string + LastModified uint `json:"_last_modified,omitempty"` // uint + LastModifiedBy string `json:"_last_modified_by,omitempty"` // string + RelatedMetrics []string `json:"rel_metrics"` // [] len >= 0 + Start uint `json:"start"` // uint + Stop uint `json:"stop"` // uint + Title string `json:"title"` // string +} + +// NewAnnotation returns a new Annotation (with defaults, if applicable) +func NewAnnotation() *Annotation { + return &Annotation{} +} + +// FetchAnnotation retrieves annotation with passed cid. +func (a *API) FetchAnnotation(cid CIDType) (*Annotation, error) { + if cid == nil || *cid == "" { + return nil, fmt.Errorf("Invalid annotation CID [none]") + } + + annotationCID := string(*cid) + + matched, err := regexp.MatchString(config.AnnotationCIDRegex, annotationCID) + if err != nil { + return nil, err + } + if !matched { + return nil, fmt.Errorf("Invalid annotation CID [%s]", annotationCID) + } + + result, err := a.Get(annotationCID) + if err != nil { + return nil, err + } + + if a.Debug { + a.Log.Printf("[DEBUG] fetch annotation, received JSON: %s", string(result)) + } + + annotation := &Annotation{} + if err := json.Unmarshal(result, annotation); err != nil { + return nil, err + } + + return annotation, nil +} + +// FetchAnnotations retrieves all annotations available to the API Token. +func (a *API) FetchAnnotations() (*[]Annotation, error) { + result, err := a.Get(config.AnnotationPrefix) + if err != nil { + return nil, err + } + + var annotations []Annotation + if err := json.Unmarshal(result, &annotations); err != nil { + return nil, err + } + + return &annotations, nil +} + +// UpdateAnnotation updates passed annotation. +func (a *API) UpdateAnnotation(cfg *Annotation) (*Annotation, error) { + if cfg == nil { + return nil, fmt.Errorf("Invalid annotation config [nil]") + } + + annotationCID := string(cfg.CID) + + matched, err := regexp.MatchString(config.AnnotationCIDRegex, annotationCID) + if err != nil { + return nil, err + } + if !matched { + return nil, fmt.Errorf("Invalid annotation CID [%s]", annotationCID) + } + + jsonCfg, err := json.Marshal(cfg) + if err != nil { + return nil, err + } + + if a.Debug { + a.Log.Printf("[DEBUG] update annotation, sending JSON: %s", string(jsonCfg)) + } + + result, err := a.Put(annotationCID, jsonCfg) + if err != nil { + return nil, err + } + + annotation := &Annotation{} + if err := json.Unmarshal(result, annotation); err != nil { + return nil, err + } + + return annotation, nil +} + +// CreateAnnotation creates a new annotation. 
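
In the `Annotation` struct above, the fields without `omitempty` (category, description, rel_metrics, start, stop, title) are always serialized, so a create call normally populates them all. A sketch with placeholder values; treating `Start`/`Stop` as epoch seconds is an assumption based on their `uint` type:

```go
package main

import (
	"log"
	"time"

	"github.com/circonus-labs/circonus-gometrics/api"
)

func main() {
	client, err := api.New(&api.Config{TokenKey: "..."}) // placeholder token
	if err != nil {
		log.Fatal(err)
	}

	now := uint(time.Now().Unix()) // assumed: epoch seconds
	annotation, err := client.CreateAnnotation(&api.Annotation{
		Category:       "deploys", // placeholder values throughout
		Title:          "release 1.2.3",
		Description:    "deployed via CI",
		Start:          now,
		Stop:           now,
		RelatedMetrics: []string{}, // required key; may be empty
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("created %s", annotation.CID)
}
```
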
+func (a *API) CreateAnnotation(cfg *Annotation) (*Annotation, error) { + if cfg == nil { + return nil, fmt.Errorf("Invalid annotation config [nil]") + } + + jsonCfg, err := json.Marshal(cfg) + if err != nil { + return nil, err + } + + if a.Debug { + a.Log.Printf("[DEBUG] create annotation, sending JSON: %s", string(jsonCfg)) + } + + result, err := a.Post(config.AnnotationPrefix, jsonCfg) + if err != nil { + return nil, err + } + + annotation := &Annotation{} + if err := json.Unmarshal(result, annotation); err != nil { + return nil, err + } + + return annotation, nil +} + +// DeleteAnnotation deletes passed annotation. +func (a *API) DeleteAnnotation(cfg *Annotation) (bool, error) { + if cfg == nil { + return false, fmt.Errorf("Invalid annotation config [nil]") + } + + return a.DeleteAnnotationByCID(CIDType(&cfg.CID)) +} + +// DeleteAnnotationByCID deletes annotation with passed cid. +func (a *API) DeleteAnnotationByCID(cid CIDType) (bool, error) { + if cid == nil || *cid == "" { + return false, fmt.Errorf("Invalid annotation CID [none]") + } + + annotationCID := string(*cid) + + matched, err := regexp.MatchString(config.AnnotationCIDRegex, annotationCID) + if err != nil { + return false, err + } + if !matched { + return false, fmt.Errorf("Invalid annotation CID [%s]", annotationCID) + } + + _, err = a.Delete(annotationCID) + if err != nil { + return false, err + } + + return true, nil +} + +// SearchAnnotations returns annotations matching the specified +// search query and/or filter. If nil is passed for both parameters +// all annotations will be returned. +func (a *API) SearchAnnotations(searchCriteria *SearchQueryType, filterCriteria *SearchFilterType) (*[]Annotation, error) { + q := url.Values{} + + if searchCriteria != nil && *searchCriteria != "" { + q.Set("search", string(*searchCriteria)) + } + + if filterCriteria != nil && len(*filterCriteria) > 0 { + for filter, criteria := range *filterCriteria { + for _, val := range criteria { + q.Add(filter, val) + } + } + } + + if q.Encode() == "" { + return a.FetchAnnotations() + } + + reqURL := url.URL{ + Path: config.AnnotationPrefix, + RawQuery: q.Encode(), + } + + result, err := a.Get(reqURL.String()) + if err != nil { + return nil, fmt.Errorf("[ERROR] API call error %+v", err) + } + + var annotations []Annotation + if err := json.Unmarshal(result, &annotations); err != nil { + return nil, err + } + + return &annotations, nil +} diff --git a/vendor/github.com/circonus-labs/circonus-gometrics/api/api.go b/vendor/github.com/circonus-labs/circonus-gometrics/api/api.go new file mode 100644 index 0000000000..b9265aa7e6 --- /dev/null +++ b/vendor/github.com/circonus-labs/circonus-gometrics/api/api.go @@ -0,0 +1,371 @@ +// Copyright 2016 Circonus, Inc. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. 
+ +package api + +import ( + "bytes" + crand "crypto/rand" + "crypto/tls" + "crypto/x509" + "errors" + "fmt" + "io/ioutil" + "log" + "math" + "math/big" + "math/rand" + "net" + "net/http" + "net/url" + "os" + "regexp" + "strings" + "sync" + "time" + + "github.com/hashicorp/go-retryablehttp" +) + +func init() { + n, err := crand.Int(crand.Reader, big.NewInt(math.MaxInt64)) + if err != nil { + rand.Seed(time.Now().UTC().UnixNano()) + return + } + rand.Seed(n.Int64()) +} + +const ( + // a few sensible defaults + defaultAPIURL = "https://api.circonus.com/v2" + defaultAPIApp = "circonus-gometrics" + minRetryWait = 1 * time.Second + maxRetryWait = 15 * time.Second + maxRetries = 4 // equating to 1 + maxRetries total attempts +) + +// TokenKeyType - Circonus API Token key +type TokenKeyType string + +// TokenAppType - Circonus API Token app name +type TokenAppType string + +// CIDType Circonus object cid +type CIDType *string + +// IDType Circonus object id +type IDType int + +// URLType submission url type +type URLType string + +// SearchQueryType search query (see: https://login.circonus.com/resources/api#searching) +type SearchQueryType string + +// SearchFilterType search filter (see: https://login.circonus.com/resources/api#filtering) +type SearchFilterType map[string][]string + +// TagType search/select/custom tag(s) type +type TagType []string + +// Config options for Circonus API +type Config struct { + URL string + TokenKey string + TokenApp string + CACert *x509.CertPool + Log *log.Logger + Debug bool +} + +// API Circonus API +type API struct { + apiURL *url.URL + key TokenKeyType + app TokenAppType + caCert *x509.CertPool + Debug bool + Log *log.Logger + useExponentialBackoff bool + useExponentialBackoffmu sync.Mutex +} + +// NewClient returns a new Circonus API (alias for New) +func NewClient(ac *Config) (*API, error) { + return New(ac) +} + +// NewAPI returns a new Circonus API (alias for New) +func NewAPI(ac *Config) (*API, error) { + return New(ac) +} + +// New returns a new Circonus API +func New(ac *Config) (*API, error) { + + if ac == nil { + return nil, errors.New("Invalid API configuration (nil)") + } + + key := TokenKeyType(ac.TokenKey) + if key == "" { + return nil, errors.New("API Token is required") + } + + app := TokenAppType(ac.TokenApp) + if app == "" { + app = defaultAPIApp + } + + au := string(ac.URL) + if au == "" { + au = defaultAPIURL + } + if !strings.Contains(au, "/") { + // if just a hostname is passed, ASSume "https" and a path prefix of "/v2" + au = fmt.Sprintf("https://%s/v2", ac.URL) + } + if last := len(au) - 1; last >= 0 && au[last] == '/' { + // strip off trailing '/' + au = au[:last] + } + apiURL, err := url.Parse(au) + if err != nil { + return nil, err + } + + a := &API{ + apiURL: apiURL, + key: key, + app: app, + caCert: ac.CACert, + Debug: ac.Debug, + Log: ac.Log, + useExponentialBackoff: false, + } + + a.Debug = ac.Debug + a.Log = ac.Log + if a.Debug && a.Log == nil { + a.Log = log.New(os.Stderr, "", log.LstdFlags) + } + if a.Log == nil { + a.Log = log.New(ioutil.Discard, "", log.LstdFlags) + } + + return a, nil +} + +// EnableExponentialBackoff enables use of exponential backoff for next API call(s) +// and use exponential backoff for all API calls until exponential backoff is disabled. +func (a *API) EnableExponentialBackoff() { + a.useExponentialBackoffmu.Lock() + a.useExponentialBackoff = true + a.useExponentialBackoffmu.Unlock() +} + +// DisableExponentialBackoff disables use of exponential backoff. 
If a request using +// exponential backoff is currently running, it will stop using exponential backoff +// on its next iteration (if needed). +func (a *API) DisableExponentialBackoff() { + a.useExponentialBackoffmu.Lock() + a.useExponentialBackoff = false + a.useExponentialBackoffmu.Unlock() +} + +// Get API request +func (a *API) Get(reqPath string) ([]byte, error) { + return a.apiRequest("GET", reqPath, nil) +} + +// Delete API request +func (a *API) Delete(reqPath string) ([]byte, error) { + return a.apiRequest("DELETE", reqPath, nil) +} + +// Post API request +func (a *API) Post(reqPath string, data []byte) ([]byte, error) { + return a.apiRequest("POST", reqPath, data) +} + +// Put API request +func (a *API) Put(reqPath string, data []byte) ([]byte, error) { + return a.apiRequest("PUT", reqPath, data) +} + +func backoff(interval uint) float64 { + return math.Floor(((float64(interval) * (1 + rand.Float64())) / 2) + .5) +} + +// apiRequest manages retry strategy for exponential backoffs +func (a *API) apiRequest(reqMethod string, reqPath string, data []byte) ([]byte, error) { + backoffs := []uint{2, 4, 8, 16, 32} + attempts := 0 + success := false + + var result []byte + var err error + + for !success { + result, err = a.apiCall(reqMethod, reqPath, data) + if err == nil { + success = true + } + + // break and return error if not using exponential backoff + if err != nil { + if !a.useExponentialBackoff { + break + } + if matched, _ := regexp.MatchString("code 403", err.Error()); matched { + break + } + } + + if !success { + var wait float64 + if attempts >= len(backoffs) { + wait = backoff(backoffs[len(backoffs)-1]) + } else { + wait = backoff(backoffs[attempts]) + } + attempts++ + a.Log.Printf("[WARN] API call failed %s, retrying in %d seconds.\n", err.Error(), uint(wait)) + time.Sleep(time.Duration(wait) * time.Second) + } + } + + return result, err +} + +// apiCall call Circonus API +func (a *API) apiCall(reqMethod string, reqPath string, data []byte) ([]byte, error) { + reqURL := a.apiURL.String() + + if reqPath == "" { + return nil, errors.New("Invalid URL path") + } + if reqPath[:1] != "/" { + reqURL += "/" + } + if len(reqPath) >= 3 && reqPath[:3] == "/v2" { + reqURL += reqPath[3:] + } else { + reqURL += reqPath + } + + // keep last HTTP error in the event of retry failure + var lastHTTPError error + retryPolicy := func(resp *http.Response, err error) (bool, error) { + if err != nil { + lastHTTPError = err + return true, err + } + // Check the response code. We retry on 500-range responses to allow + // the server time to recover, as 500's are typically not permanent + // errors and may relate to outages on the server side. This will catch + // invalid response codes as well, like 0 and 999. + // Retry on 429 (rate limit) as well. + if resp.StatusCode == 0 || // wtf?! 
+
+// apiCall calls the Circonus API
+func (a *API) apiCall(reqMethod string, reqPath string, data []byte) ([]byte, error) {
+    reqURL := a.apiURL.String()
+
+    if reqPath == "" {
+        return nil, errors.New("Invalid URL path")
+    }
+    if reqPath[:1] != "/" {
+        reqURL += "/"
+    }
+    if len(reqPath) >= 3 && reqPath[:3] == "/v2" {
+        reqURL += reqPath[3:]
+    } else {
+        reqURL += reqPath
+    }
+
+    // keep the last HTTP error in the event of retry failure
+    var lastHTTPError error
+    retryPolicy := func(resp *http.Response, err error) (bool, error) {
+        if err != nil {
+            lastHTTPError = err
+            return true, err
+        }
+        // Check the response code. We retry on 500-range responses to allow
+        // the server time to recover, as 500s are typically not permanent
+        // errors and may relate to outages on the server side. This will also
+        // catch invalid response codes, like 0 and 999.
+        // Retry on 429 (rate limit) as well.
+        if resp.StatusCode == 0 || // invalid response code
+            resp.StatusCode >= 500 || // server-side error
+            resp.StatusCode == 429 { // rate limit
+            body, readErr := ioutil.ReadAll(resp.Body)
+            if readErr != nil {
+                lastHTTPError = fmt.Errorf("- response: %d %s", resp.StatusCode, readErr.Error())
+            } else {
+                lastHTTPError = fmt.Errorf("- response: %d %s", resp.StatusCode, strings.TrimSpace(string(body)))
+            }
+            return true, nil
+        }
+        return false, nil
+    }
+
+    dataReader := bytes.NewReader(data)
+
+    req, err := retryablehttp.NewRequest(reqMethod, reqURL, dataReader)
+    if err != nil {
+        return nil, fmt.Errorf("[ERROR] creating API request: %s %+v", reqURL, err)
+    }
+    req.Header.Add("Accept", "application/json")
+    req.Header.Add("X-Circonus-Auth-Token", string(a.key))
+    req.Header.Add("X-Circonus-App-Name", string(a.app))
+
+    client := retryablehttp.NewClient()
+    if a.apiURL.Scheme == "https" && a.caCert != nil {
+        client.HTTPClient.Transport = &http.Transport{
+            Proxy: http.ProxyFromEnvironment,
+            Dial: (&net.Dialer{
+                Timeout:   30 * time.Second,
+                KeepAlive: 30 * time.Second,
+            }).Dial,
+            TLSHandshakeTimeout: 10 * time.Second,
+            TLSClientConfig:     &tls.Config{RootCAs: a.caCert},
+            DisableKeepAlives:   true,
+            MaxIdleConnsPerHost: -1,
+            DisableCompression:  true,
+        }
+    } else {
+        client.HTTPClient.Transport = &http.Transport{
+            Proxy: http.ProxyFromEnvironment,
+            Dial: (&net.Dialer{
+                Timeout:   30 * time.Second,
+                KeepAlive: 30 * time.Second,
+            }).Dial,
+            TLSHandshakeTimeout: 10 * time.Second,
+            DisableKeepAlives:   true,
+            MaxIdleConnsPerHost: -1,
+            DisableCompression:  true,
+        }
+    }
+
+    a.useExponentialBackoffmu.Lock()
+    eb := a.useExponentialBackoff
+    a.useExponentialBackoffmu.Unlock()
+
+    if eb {
+        // limit to one request if using exponential backoff
+        client.RetryWaitMin = 1
+        client.RetryWaitMax = 2
+        client.RetryMax = 0
+    } else {
+        client.RetryWaitMin = minRetryWait
+        client.RetryWaitMax = maxRetryWait
+        client.RetryMax = maxRetries
+    }
+
+    // retryablehttp only understands a logger or no logger
+    if a.Debug {
+        client.Logger = a.Log
+    } else {
+        client.Logger = log.New(ioutil.Discard, "", log.LstdFlags)
+    }
+
+    client.CheckRetry = retryPolicy
+
+    resp, err := client.Do(req)
+    if err != nil {
+        if lastHTTPError != nil {
+            return nil, lastHTTPError
+        }
+        return nil, fmt.Errorf("[ERROR] %s: %+v", reqURL, err)
+    }
+
+    defer resp.Body.Close()
+    body, err := ioutil.ReadAll(resp.Body)
+    if err != nil {
+        return nil, fmt.Errorf("[ERROR] reading response %+v", err)
+    }
+
+    if resp.StatusCode < 200 || resp.StatusCode >= 300 {
+        msg := fmt.Sprintf("API response code %d: %s", resp.StatusCode, string(body))
+        if a.Debug {
+            a.Log.Printf("[DEBUG] %s\n", msg)
+        }
+
+        return nil, fmt.Errorf("[ERROR] %s", msg)
+    }
+
+    return body, nil
+}
diff --git a/vendor/github.com/circonus-labs/circonus-gometrics/api/broker.go b/vendor/github.com/circonus-labs/circonus-gometrics/api/broker.go
new file mode 100644
index 0000000000..459fda6df8
--- /dev/null
+++ b/vendor/github.com/circonus-labs/circonus-gometrics/api/broker.go
@@ -0,0 +1,131 @@
+// Copyright 2016 Circonus, Inc. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Broker API support - Fetch and Search
+// See: https://login.circonus.com/resources/api/calls/broker
+
+package api
+
+import (
+    "encoding/json"
+    "fmt"
+    "net/url"
+    "regexp"
+
+    "github.com/circonus-labs/circonus-gometrics/api/config"
+)
+
+// BrokerDetail defines instance attributes
+type BrokerDetail struct {
+    CN           string   `json:"cn"`                       // string
+    ExternalHost *string  `json:"external_host"`            // string or null
+    ExternalPort uint16   `json:"external_port"`            // uint16
+    IP           *string  `json:"ipaddress"`                // string or null
+    MinVer       uint     `json:"minimum_version_required"` // uint
+    Modules      []string `json:"modules"`                  // [] len >= 0
+    Port         *uint16  `json:"port"`                     // uint16 or null
+    Skew         *string  `json:"skew"`                     // BUG doc: floating point number, api object: string or null
+    Status       string   `json:"status"`                   // string
+    Version      *uint    `json:"version"`                  // uint or null
+}
+
+// Broker defines a broker. See https://login.circonus.com/resources/api/calls/broker for more information.
+type Broker struct {
+    CID       string         `json:"_cid"`       // string
+    Details   []BrokerDetail `json:"_details"`   // [] len >= 1
+    Latitude  *string        `json:"_latitude"`  // string or null
+    Longitude *string        `json:"_longitude"` // string or null
+    Name      string         `json:"_name"`      // string
+    Tags      []string       `json:"_tags"`      // [] len >= 0
+    Type      string         `json:"_type"`      // string
+}
+
+// FetchBroker retrieves broker with passed cid.
+func (a *API) FetchBroker(cid CIDType) (*Broker, error) {
+    if cid == nil || *cid == "" {
+        return nil, fmt.Errorf("Invalid broker CID [none]")
+    }
+
+    brokerCID := string(*cid)
+
+    matched, err := regexp.MatchString(config.BrokerCIDRegex, brokerCID)
+    if err != nil {
+        return nil, err
+    }
+    if !matched {
+        return nil, fmt.Errorf("Invalid broker CID [%s]", brokerCID)
+    }
+
+    result, err := a.Get(brokerCID)
+    if err != nil {
+        return nil, err
+    }
+
+    if a.Debug {
+        a.Log.Printf("[DEBUG] fetch broker, received JSON: %s", string(result))
+    }
+
+    response := new(Broker)
+    if err := json.Unmarshal(result, &response); err != nil {
+        return nil, err
+    }
+
+    return response, nil
+}
+
+// FetchBrokers returns all brokers available to the API Token.
+func (a *API) FetchBrokers() (*[]Broker, error) {
+    result, err := a.Get(config.BrokerPrefix)
+    if err != nil {
+        return nil, err
+    }
+
+    var response []Broker
+    if err := json.Unmarshal(result, &response); err != nil {
+        return nil, err
+    }
+
+    return &response, nil
+}
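+
+// Example (editor's sketch): searching brokers with a filter; the filter key
+// and tag value are illustrative.
+//
+//    filter := SearchFilterType{"f__tags_has": {"datacenter:east"}}
+//    brokers, err := client.SearchBrokers(nil, &filter)
+
+// SearchBrokers returns brokers matching the specified search
+// query and/or filter. If nil is passed for both parameters
+// all brokers will be returned.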
+func (a *API) SearchBrokers(searchCriteria *SearchQueryType, filterCriteria *SearchFilterType) (*[]Broker, error) { + q := url.Values{} + + if searchCriteria != nil && *searchCriteria != "" { + q.Set("search", string(*searchCriteria)) + } + + if filterCriteria != nil && len(*filterCriteria) > 0 { + for filter, criteria := range *filterCriteria { + for _, val := range criteria { + q.Add(filter, val) + } + } + } + + if q.Encode() == "" { + return a.FetchBrokers() + } + + reqURL := url.URL{ + Path: config.BrokerPrefix, + RawQuery: q.Encode(), + } + + result, err := a.Get(reqURL.String()) + if err != nil { + return nil, fmt.Errorf("[ERROR] API call error %+v", err) + } + + var brokers []Broker + if err := json.Unmarshal(result, &brokers); err != nil { + return nil, err + } + + return &brokers, nil +} diff --git a/vendor/github.com/circonus-labs/circonus-gometrics/api/check.go b/vendor/github.com/circonus-labs/circonus-gometrics/api/check.go new file mode 100644 index 0000000000..047d719355 --- /dev/null +++ b/vendor/github.com/circonus-labs/circonus-gometrics/api/check.go @@ -0,0 +1,119 @@ +// Copyright 2016 Circonus, Inc. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Check API support - Fetch and Search +// See: https://login.circonus.com/resources/api/calls/check +// Notes: checks do not directly support create, update, and delete - see check bundle. + +package api + +import ( + "encoding/json" + "fmt" + "net/url" + "regexp" + + "github.com/circonus-labs/circonus-gometrics/api/config" +) + +// CheckDetails contains [undocumented] check type specific information +type CheckDetails map[config.Key]string + +// Check defines a check. See https://login.circonus.com/resources/api/calls/check for more information. +type Check struct { + Active bool `json:"_active"` // bool + BrokerCID string `json:"_broker"` // string + CheckBundleCID string `json:"_check_bundle"` // string + CheckUUID string `json:"_check_uuid"` // string + CID string `json:"_cid"` // string + Details CheckDetails `json:"_details"` // NOTE contents of details are check type specific, map len >= 0 +} + +// FetchCheck retrieves check with passed cid. +func (a *API) FetchCheck(cid CIDType) (*Check, error) { + if cid == nil || *cid == "" { + return nil, fmt.Errorf("Invalid check CID [none]") + } + + checkCID := string(*cid) + + matched, err := regexp.MatchString(config.CheckCIDRegex, checkCID) + if err != nil { + return nil, err + } + if !matched { + return nil, fmt.Errorf("Invalid check CID [%s]", checkCID) + } + + result, err := a.Get(checkCID) + if err != nil { + return nil, err + } + + if a.Debug { + a.Log.Printf("[DEBUG] fetch check, received JSON: %s", string(result)) + } + + check := new(Check) + if err := json.Unmarshal(result, check); err != nil { + return nil, err + } + + return check, nil +} + +// FetchChecks retrieves all checks available to the API Token. +func (a *API) FetchChecks() (*[]Check, error) { + result, err := a.Get(config.CheckPrefix) + if err != nil { + return nil, err + } + + var checks []Check + if err := json.Unmarshal(result, &checks); err != nil { + return nil, err + } + + return &checks, nil +} + +// SearchChecks returns checks matching the specified search query +// and/or filter. If nil is passed for both parameters all checks +// will be returned. 
+func (a *API) SearchChecks(searchCriteria *SearchQueryType, filterCriteria *SearchFilterType) (*[]Check, error) {
+    q := url.Values{}
+
+    if searchCriteria != nil && *searchCriteria != "" {
+        q.Set("search", string(*searchCriteria))
+    }
+
+    if filterCriteria != nil && len(*filterCriteria) > 0 {
+        for filter, criteria := range *filterCriteria {
+            for _, val := range criteria {
+                q.Add(filter, val)
+            }
+        }
+    }
+
+    if q.Encode() == "" {
+        return a.FetchChecks()
+    }
+
+    reqURL := url.URL{
+        Path:     config.CheckPrefix,
+        RawQuery: q.Encode(),
+    }
+
+    result, err := a.Get(reqURL.String())
+    if err != nil {
+        return nil, err
+    }
+
+    var checks []Check
+    if err := json.Unmarshal(result, &checks); err != nil {
+        return nil, err
+    }
+
+    return &checks, nil
+}
diff --git a/vendor/github.com/circonus-labs/circonus-gometrics/api/check_bundle.go b/vendor/github.com/circonus-labs/circonus-gometrics/api/check_bundle.go
new file mode 100644
index 0000000000..c202853c2e
--- /dev/null
+++ b/vendor/github.com/circonus-labs/circonus-gometrics/api/check_bundle.go
@@ -0,0 +1,255 @@
+// Copyright 2016 Circonus, Inc. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Check bundle API support - Fetch, Create, Update, Delete, and Search
+// See: https://login.circonus.com/resources/api/calls/check_bundle
+
+package api
+
+import (
+    "encoding/json"
+    "fmt"
+    "net/url"
+    "regexp"
+
+    "github.com/circonus-labs/circonus-gometrics/api/config"
+)
+
+// CheckBundleMetric individual metric configuration
+type CheckBundleMetric struct {
+    Name   string   `json:"name"`             // string
+    Result *string  `json:"result,omitempty"` // string or null, NOTE not settable - return/information value only
+    Status string   `json:"status,omitempty"` // string
+    Tags   []string `json:"tags"`             // [] len >= 0
+    Type   string   `json:"type"`             // string
+    Units  *string  `json:"units,omitempty"`  // string or null
+}
+
+// CheckBundleConfig contains the check type specific configuration settings
+// as k/v pairs (see https://login.circonus.com/resources/api/calls/check_bundle
+// for the specific settings available for each distinct check type)
+type CheckBundleConfig map[config.Key]string
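+
+// Example (editor's sketch): creating a minimal HTTP check bundle with
+// NewCheckBundle (defined below); the target, broker, and metric values are
+// illustrative.
+//
+//    cfg := NewCheckBundle()
+//    cfg.DisplayName = "www health"
+//    cfg.Type = "http"
+//    cfg.Target = "www.example.com"
+//    cfg.Brokers = []string{"/broker/1"}
+//    cfg.Metrics = []CheckBundleMetric{{Name: "code", Type: "text", Status: "active", Tags: []string{}}}
+//    cfg.Config = CheckBundleConfig{config.URL: "https://www.example.com/"}
+//    bundle, err := client.CreateCheckBundle(cfg)

+// CheckBundle defines a check bundle. See https://login.circonus.com/resources/api/calls/check_bundle for more information.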
+type CheckBundle struct { + Brokers []string `json:"brokers"` // [] len >= 0 + Checks []string `json:"_checks,omitempty"` // [] len >= 0 + CheckUUIDs []string `json:"_check_uuids,omitempty"` // [] len >= 0 + CID string `json:"_cid,omitempty"` // string + Config CheckBundleConfig `json:"config"` // NOTE contents of config are check type specific, map len >= 0 + Created uint `json:"_created,omitempty"` // uint + DisplayName string `json:"display_name"` // string + LastModifedBy string `json:"_last_modifed_by,omitempty"` // string + LastModified uint `json:"_last_modified,omitempty"` // uint + MetricLimit int `json:"metric_limit,omitempty"` // int + Metrics []CheckBundleMetric `json:"metrics"` // [] >= 0 + Notes *string `json:"notes,omitempty"` // string or null + Period uint `json:"period,omitempty"` // uint + ReverseConnectURLs []string `json:"_reverse_connection_urls,omitempty"` // [] len >= 0 + Status string `json:"status,omitempty"` // string + Tags []string `json:"tags,omitempty"` // [] len >= 0 + Target string `json:"target"` // string + Timeout float32 `json:"timeout,omitempty"` // float32 + Type string `json:"type"` // string +} + +// NewCheckBundle returns new CheckBundle (with defaults, if applicable) +func NewCheckBundle() *CheckBundle { + return &CheckBundle{ + Config: make(CheckBundleConfig, config.DefaultConfigOptionsSize), + MetricLimit: config.DefaultCheckBundleMetricLimit, + Period: config.DefaultCheckBundlePeriod, + Timeout: config.DefaultCheckBundleTimeout, + Status: config.DefaultCheckBundleStatus, + } +} + +// FetchCheckBundle retrieves check bundle with passed cid. +func (a *API) FetchCheckBundle(cid CIDType) (*CheckBundle, error) { + if cid == nil || *cid == "" { + return nil, fmt.Errorf("Invalid check bundle CID [none]") + } + + bundleCID := string(*cid) + + matched, err := regexp.MatchString(config.CheckBundleCIDRegex, bundleCID) + if err != nil { + return nil, err + } + if !matched { + return nil, fmt.Errorf("Invalid check bundle CID [%v]", bundleCID) + } + + result, err := a.Get(bundleCID) + if err != nil { + return nil, err + } + + if a.Debug { + a.Log.Printf("[DEBUG] fetch check bundle, received JSON: %s", string(result)) + } + + checkBundle := &CheckBundle{} + if err := json.Unmarshal(result, checkBundle); err != nil { + return nil, err + } + + return checkBundle, nil +} + +// FetchCheckBundles retrieves all check bundles available to the API Token. +func (a *API) FetchCheckBundles() (*[]CheckBundle, error) { + result, err := a.Get(config.CheckBundlePrefix) + if err != nil { + return nil, err + } + + var checkBundles []CheckBundle + if err := json.Unmarshal(result, &checkBundles); err != nil { + return nil, err + } + + return &checkBundles, nil +} + +// UpdateCheckBundle updates passed check bundle. 
+func (a *API) UpdateCheckBundle(cfg *CheckBundle) (*CheckBundle, error) { + if cfg == nil { + return nil, fmt.Errorf("Invalid check bundle config [nil]") + } + + bundleCID := string(cfg.CID) + + matched, err := regexp.MatchString(config.CheckBundleCIDRegex, bundleCID) + if err != nil { + return nil, err + } + if !matched { + return nil, fmt.Errorf("Invalid check bundle CID [%s]", bundleCID) + } + + jsonCfg, err := json.Marshal(cfg) + if err != nil { + return nil, err + } + + if a.Debug { + a.Log.Printf("[DEBUG] update check bundle, sending JSON: %s", string(jsonCfg)) + } + + result, err := a.Put(bundleCID, jsonCfg) + if err != nil { + return nil, err + } + + checkBundle := &CheckBundle{} + if err := json.Unmarshal(result, checkBundle); err != nil { + return nil, err + } + + return checkBundle, nil +} + +// CreateCheckBundle creates a new check bundle (check). +func (a *API) CreateCheckBundle(cfg *CheckBundle) (*CheckBundle, error) { + if cfg == nil { + return nil, fmt.Errorf("Invalid check bundle config [nil]") + } + + jsonCfg, err := json.Marshal(cfg) + if err != nil { + return nil, err + } + + if a.Debug { + a.Log.Printf("[DEBUG] create check bundle, sending JSON: %s", string(jsonCfg)) + } + + result, err := a.Post(config.CheckBundlePrefix, jsonCfg) + if err != nil { + return nil, err + } + + checkBundle := &CheckBundle{} + if err := json.Unmarshal(result, checkBundle); err != nil { + return nil, err + } + + return checkBundle, nil +} + +// DeleteCheckBundle deletes passed check bundle. +func (a *API) DeleteCheckBundle(cfg *CheckBundle) (bool, error) { + if cfg == nil { + return false, fmt.Errorf("Invalid check bundle config [nil]") + } + return a.DeleteCheckBundleByCID(CIDType(&cfg.CID)) +} + +// DeleteCheckBundleByCID deletes check bundle with passed cid. +func (a *API) DeleteCheckBundleByCID(cid CIDType) (bool, error) { + + if cid == nil || *cid == "" { + return false, fmt.Errorf("Invalid check bundle CID [none]") + } + + bundleCID := string(*cid) + + matched, err := regexp.MatchString(config.CheckBundleCIDRegex, bundleCID) + if err != nil { + return false, err + } + if !matched { + return false, fmt.Errorf("Invalid check bundle CID [%v]", bundleCID) + } + + _, err = a.Delete(bundleCID) + if err != nil { + return false, err + } + + return true, nil +} + +// SearchCheckBundles returns check bundles matching the specified +// search query and/or filter. If nil is passed for both parameters +// all check bundles will be returned. 
+func (a *API) SearchCheckBundles(searchCriteria *SearchQueryType, filterCriteria *SearchFilterType) (*[]CheckBundle, error) {
+    q := url.Values{}
+
+    if searchCriteria != nil && *searchCriteria != "" {
+        q.Set("search", string(*searchCriteria))
+    }
+
+    if filterCriteria != nil && len(*filterCriteria) > 0 {
+        for filter, criteria := range *filterCriteria {
+            for _, val := range criteria {
+                q.Add(filter, val)
+            }
+        }
+    }
+
+    if q.Encode() == "" {
+        return a.FetchCheckBundles()
+    }
+
+    reqURL := url.URL{
+        Path:     config.CheckBundlePrefix,
+        RawQuery: q.Encode(),
+    }
+
+    resp, err := a.Get(reqURL.String())
+    if err != nil {
+        return nil, fmt.Errorf("[ERROR] API call error %+v", err)
+    }
+
+    var results []CheckBundle
+    if err := json.Unmarshal(resp, &results); err != nil {
+        return nil, err
+    }
+
+    return &results, nil
+}
diff --git a/vendor/github.com/circonus-labs/circonus-gometrics/api/check_bundle_metrics.go b/vendor/github.com/circonus-labs/circonus-gometrics/api/check_bundle_metrics.go
new file mode 100644
index 0000000000..817c7b8910
--- /dev/null
+++ b/vendor/github.com/circonus-labs/circonus-gometrics/api/check_bundle_metrics.go
@@ -0,0 +1,95 @@
+// Copyright 2016 Circonus, Inc. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// CheckBundleMetrics API support - Fetch, Create*, Update, and Delete**
+// See: https://login.circonus.com/resources/api/calls/check_bundle_metrics
+// *  : create metrics by adding to the array with a status of 'active'
+// ** : delete (disable collection of) metrics by changing status from 'active' to 'available'
+
+package api
+
+import (
+    "encoding/json"
+    "fmt"
+    "regexp"
+
+    "github.com/circonus-labs/circonus-gometrics/api/config"
+)
+
+// CheckBundleMetrics defines metrics for a specific check bundle. See https://login.circonus.com/resources/api/calls/check_bundle_metrics for more information.
+type CheckBundleMetrics struct {
+    CID     string              `json:"_cid,omitempty"` // string
+    Metrics []CheckBundleMetric `json:"metrics"`        // See check_bundle.go for CheckBundleMetric definition
+}
+
+// FetchCheckBundleMetrics retrieves metrics for the check bundle with passed cid.
+func (a *API) FetchCheckBundleMetrics(cid CIDType) (*CheckBundleMetrics, error) {
+    if cid == nil || *cid == "" {
+        return nil, fmt.Errorf("Invalid check bundle metrics CID [none]")
+    }
+
+    metricsCID := string(*cid)
+
+    matched, err := regexp.MatchString(config.CheckBundleMetricsCIDRegex, metricsCID)
+    if err != nil {
+        return nil, err
+    }
+    if !matched {
+        return nil, fmt.Errorf("Invalid check bundle metrics CID [%s]", metricsCID)
+    }
+
+    result, err := a.Get(metricsCID)
+    if err != nil {
+        return nil, err
+    }
+
+    if a.Debug {
+        a.Log.Printf("[DEBUG] fetch check bundle metrics, received JSON: %s", string(result))
+    }
+
+    metrics := &CheckBundleMetrics{}
+    if err := json.Unmarshal(result, metrics); err != nil {
+        return nil, err
+    }
+
+    return metrics, nil
+}
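+
+// Example (editor's sketch): enabling collection of one metric via the status
+// flip described in the create*/delete** notes above; the CID and metric name
+// are illustrative.
+//
+//    cid := "/check_bundle_metrics/1234"
+//    metrics, err := client.FetchCheckBundleMetrics(CIDType(&cid))
+//    if err != nil {
+//        log.Fatal(err)
+//    }
+//    for i := range metrics.Metrics {
+//        if metrics.Metrics[i].Name == "duration" {
+//            metrics.Metrics[i].Status = "active"
+//        }
+//    }
+//    metrics, err = client.UpdateCheckBundleMetrics(metrics)

+// UpdateCheckBundleMetrics updates passed metrics.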
+func (a *API) UpdateCheckBundleMetrics(cfg *CheckBundleMetrics) (*CheckBundleMetrics, error) { + if cfg == nil { + return nil, fmt.Errorf("Invalid check bundle metrics config [nil]") + } + + metricsCID := string(cfg.CID) + + matched, err := regexp.MatchString(config.CheckBundleMetricsCIDRegex, metricsCID) + if err != nil { + return nil, err + } + if !matched { + return nil, fmt.Errorf("Invalid check bundle metrics CID [%s]", metricsCID) + } + + jsonCfg, err := json.Marshal(cfg) + if err != nil { + return nil, err + } + + if a.Debug { + a.Log.Printf("[DEBUG] update check bundle metrics, sending JSON: %s", string(jsonCfg)) + } + + result, err := a.Put(metricsCID, jsonCfg) + if err != nil { + return nil, err + } + + metrics := &CheckBundleMetrics{} + if err := json.Unmarshal(result, metrics); err != nil { + return nil, err + } + + return metrics, nil +} diff --git a/vendor/github.com/circonus-labs/circonus-gometrics/api/config/consts.go b/vendor/github.com/circonus-labs/circonus-gometrics/api/config/consts.go new file mode 100644 index 0000000000..bbca43d036 --- /dev/null +++ b/vendor/github.com/circonus-labs/circonus-gometrics/api/config/consts.go @@ -0,0 +1,538 @@ +package config + +// Key for CheckBundleConfig options and CheckDetails info +type Key string + +// Constants per type as defined in +// https://login.circonus.com/resources/api/calls/check_bundle +const ( + // + // default settings for api.NewCheckBundle() + // + DefaultCheckBundleMetricLimit = -1 // unlimited + DefaultCheckBundleStatus = "active" + DefaultCheckBundlePeriod = 60 + DefaultCheckBundleTimeout = 10 + DefaultConfigOptionsSize = 20 + + // + // common (apply to more than one check type) + // + AsyncMetrics = Key("asynch_metrics") + AuthMethod = Key("auth_method") + AuthPassword = Key("auth_password") + AuthUser = Key("auth_user") + BaseURL = Key("base_url") + CAChain = Key("ca_chain") + CertFile = Key("certificate_file") + Ciphers = Key("ciphers") + Command = Key("command") + DSN = Key("dsn") + HeaderPrefix = Key("header_") + HTTPVersion = Key("http_version") + KeyFile = Key("key_file") + Method = Key("method") + Password = Key("password") + Payload = Key("payload") + Port = Key("port") + Query = Key("query") + ReadLimit = Key("read_limit") + Secret = Key("secret") + SQL = Key("sql") + URI = Key("uri") + URL = Key("url") + Username = Key("username") + UseSSL = Key("use_ssl") + User = Key("user") + SASLAuthentication = Key("sasl_authentication") + SASLUser = Key("sasl_user") + SecurityLevel = Key("security_level") + Version = Key("version") + AppendColumnName = Key("append_column_name") + Database = Key("database") + JDBCPrefix = Key("jdbc_") + + // + // CAQL check + // + // Common items: + // Query + + // + // Circonus Windows Agent + // + // Common items: + // AuthPassword + // AuthUser + // Port + // URL + Calculated = Key("calculated") + Category = Key("category") + + // + // Cloudwatch + // + // Notes: + // DimPrefix is special because the actual key is dynamic and matches: `dim_(.+)` + // Common items: + // URL + // Version + APIKey = Key("api_key") + APISecret = Key("api_secret") + CloudwatchMetrics = Key("cloudwatch_metrics") + DimPrefix = Key("dim_") + Granularity = Key("granularity") + Namespace = Key("namespace") + Statistics = Key("statistics") + + // + // Collectd + // + // Common items: + // AsyncMetrics + // Username + // Secret + // SecurityLevel + + // + // Composite + // + CompositeMetricName = Key("composite_metric_name") + Formula = Key("formula") + + // + // DHCP + // + HardwareAddress = 
Key("hardware_addr") + HostIP = Key("host_ip") + RequestType = Key("request_type") + SendPort = Key("send_port") + + // + // DNS + // + // Common items: + // Query + CType = Key("ctype") + Nameserver = Key("nameserver") + RType = Key("rtype") + + // + // EC Console + // + // Common items: + // Command + // Port + // SASLAuthentication + // SASLUser + Objects = Key("objects") + XPath = Key("xpath") + + // + // Elastic Search + // + // Common items: + // Port + // URL + + // + // Ganglia + // + // Common items: + // AsyncMetrics + + // + // Google Analytics + // + // Common items: + // Password + // Username + OAuthToken = Key("oauth_token") + OAuthTokenSecret = Key("oauth_token_secret") + OAuthVersion = Key("oauth_version") + TableID = Key("table_id") + UseOAuth = Key("use_oauth") + + // + // HA Proxy + // + // Common items: + // AuthPassword + // AuthUser + // Port + // UseSSL + Host = Key("host") + Select = Key("select") + + // + // HTTP + // + // Notes: + // HeaderPrefix is special because the actual key is dynamic and matches: `header_(\S+)` + // Common items: + // AuthMethod + // AuthPassword + // AuthUser + // CAChain + // CertFile + // Ciphers + // KeyFile + // URL + // HeaderPrefix + // HTTPVersion + // Method + // Payload + // ReadLimit + Body = Key("body") + Code = Key("code") + Extract = Key("extract") + Redirects = Key("redirects") + + // + // HTTPTRAP + // + // Common items: + // AsyncMetrics + // Secret + + // + // IMAP + // + // Common items: + // AuthPassword + // AuthUser + // CAChain + // CertFile + // Ciphers + // KeyFile + // Port + // UseSSL + Fetch = Key("fetch") + Folder = Key("folder") + HeaderHost = Key("header_Host") + Search = Key("search") + + // + // JMX + // + // Common items: + // Password + // Port + // URI + // Username + MbeanDomains = Key("mbean_domains") + + // + // JSON + // + // Common items: + // AuthMethod + // AuthPassword + // AuthUser + // CAChain + // CertFile + // Ciphers + // HeaderPrefix + // HTTPVersion + // KeyFile + // Method + // Payload + // Port + // ReadLimit + // URL + + // + // Keynote + // + // Notes: + // SlotAliasPrefix is special because the actual key is dynamic and matches: `slot_alias_(\d+)` + // Common items: + // APIKey + // BaseURL + PageComponent = Key("pagecomponent") + SlotAliasPrefix = Key("slot_alias_") + SlotIDList = Key("slot_id_list") + TransPageList = Key("transpagelist") + + // + // Keynote Pulse + // + // Common items: + // BaseURL + // Password + // User + AgreementID = Key("agreement_id") + + // + // LDAP + // + // Common items: + // Password + // Port + AuthType = Key("authtype") + DN = Key("dn") + SecurityPrincipal = Key("security_principal") + + // + // Memcached + // + // Common items: + // Port + + // + // MongoDB + // + // Common items: + // Command + // Password + // Port + // Username + DBName = Key("dbname") + + // + // Munin + // + // Note: no configuration options + + // + // MySQL + // + // Common items: + // DSN + // SQL + + // + // Newrelic rpm + // + // Common items: + // APIKey + AccountID = Key("acct_id") + ApplicationID = Key("application_id") + LicenseKey = Key("license_key") + + // + // Nginx + // + // Common items: + // CAChain + // CertFile + // Ciphers + // KeyFile + // URL + + // + // NRPE + // + // Common items: + // Command + // Port + // UseSSL + AppendUnits = Key("append_uom") + + // + // NTP + // + // Common items: + // Port + Control = Key("control") + + // + // Oracle + // + // Notes: + // JDBCPrefix is special because the actual key is dynamic and matches: `jdbc_(\S+)` + // Common 
items: + // AppendColumnName + // Database + // JDBCPrefix + // Password + // Port + // SQL + // User + + // + // Ping ICMP + // + AvailNeeded = Key("avail_needed") + Count = Key("count") + Interval = Key("interval") + + // + // PostgreSQL + // + // Common items: + // DSN + // SQL + + // + // Redis + // + // Common items: + // Command + // Password + // Port + DBIndex = Key("dbindex") + + // + // Resmon + // + // Notes: + // HeaderPrefix is special because the actual key is dynamic and matches: `header_(\S+)` + // Common items: + // AuthMethod + // AuthPassword + // AuthUser + // CAChain + // CertFile + // Ciphers + // HeaderPrefix + // HTTPVersion + // KeyFile + // Method + // Payload + // Port + // ReadLimit + // URL + + // + // SMTP + // + // Common items: + // Payload + // Port + // SASLAuthentication + // SASLUser + EHLO = Key("ehlo") + From = Key("from") + SASLAuthID = Key("sasl_auth_id") + SASLPassword = Key("sasl_password") + StartTLS = Key("starttls") + To = Key("to") + + // + // SNMP + // + // Notes: + // OIDPrefix is special because the actual key is dynamic and matches: `oid_(.+)` + // TypePrefix is special because the actual key is dynamic and matches: `type_(.+)` + // Common items: + // Port + // SecurityLevel + // Version + AuthPassphrase = Key("auth_passphrase") + AuthProtocol = Key("auth_protocol") + Community = Key("community") + ContextEngine = Key("context_engine") + ContextName = Key("context_name") + OIDPrefix = Key("oid_") + PrivacyPassphrase = Key("privacy_passphrase") + PrivacyProtocol = Key("privacy_protocol") + SecurityEngine = Key("security_engine") + SecurityName = Key("security_name") + SeparateQueries = Key("separate_queries") + TypePrefix = Key("type_") + + // + // SQLServer + // + // Notes: + // JDBCPrefix is special because the actual key is dynamic and matches: `jdbc_(\S+)` + // Common items: + // AppendColumnName + // Database + // JDBCPrefix + // Password + // Port + // SQL + // User + + // + // SSH v2 + // + // Common items: + // Port + MethodCompCS = Key("method_comp_cs") + MethodCompSC = Key("method_comp_sc") + MethodCryptCS = Key("method_crypt_cs") + MethodCryptSC = Key("method_crypt_sc") + MethodHostKey = Key("method_hostkey") + MethodKeyExchange = Key("method_kex") + MethodMacCS = Key("method_mac_cs") + MethodMacSC = Key("method_mac_sc") + + // + // StatsD + // + // Note: no configuration options + + // + // TCP + // + // Common items: + // CAChain + // CertFile + // Ciphers + // KeyFile + // Port + // UseSSL + BannerMatch = Key("banner_match") + + // + // Varnish + // + // Note: no configuration options + + // + // reserved - config option(s) can't actually be set - here for r/o access + // + ReverseSecretKey = Key("reverse:secret_key") + SubmissionURL = Key("submission_url") + + // + // Endpoint prefix & cid regex + // + DefaultCIDRegex = "[0-9]+" + DefaultUUIDRegex = "[[:xdigit:]]{8}-[[:xdigit:]]{4}-[[:xdigit:]]{4}-[[:xdigit:]]{4}-[[:xdigit:]]{12}" + AccountPrefix = "/account" + AccountCIDRegex = "^(" + AccountPrefix + "/(" + DefaultCIDRegex + "|current))$" + AcknowledgementPrefix = "/acknowledgement" + AcknowledgementCIDRegex = "^(" + AcknowledgementPrefix + "/(" + DefaultCIDRegex + "))$" + AlertPrefix = "/alert" + AlertCIDRegex = "^(" + AlertPrefix + "/(" + DefaultCIDRegex + "))$" + AnnotationPrefix = "/annotation" + AnnotationCIDRegex = "^(" + AnnotationPrefix + "/(" + DefaultCIDRegex + "))$" + BrokerPrefix = "/broker" + BrokerCIDRegex = "^(" + BrokerPrefix + "/(" + DefaultCIDRegex + "))$" + CheckBundleMetricsPrefix = "/check_bundle_metrics" 
+    CheckBundleMetricsCIDRegex = "^(" + CheckBundleMetricsPrefix + "/(" + DefaultCIDRegex + "))$"
+    CheckBundlePrefix          = "/check_bundle"
+    CheckBundleCIDRegex        = "^(" + CheckBundlePrefix + "/(" + DefaultCIDRegex + "))$"
+    CheckPrefix                = "/check"
+    CheckCIDRegex              = "^(" + CheckPrefix + "/(" + DefaultCIDRegex + "))$"
+    ContactGroupPrefix         = "/contact_group"
+    ContactGroupCIDRegex       = "^(" + ContactGroupPrefix + "/(" + DefaultCIDRegex + "))$"
+    DashboardPrefix            = "/dashboard"
+    DashboardCIDRegex          = "^(" + DashboardPrefix + "/(" + DefaultCIDRegex + "))$"
+    GraphPrefix                = "/graph"
+    GraphCIDRegex              = "^(" + GraphPrefix + "/(" + DefaultUUIDRegex + "))$"
+    MaintenancePrefix          = "/maintenance"
+    MaintenanceCIDRegex        = "^(" + MaintenancePrefix + "/(" + DefaultCIDRegex + "))$"
+    MetricClusterPrefix        = "/metric_cluster"
+    MetricClusterCIDRegex      = "^(" + MetricClusterPrefix + "/(" + DefaultCIDRegex + "))$"
+    MetricPrefix               = "/metric"
+    MetricCIDRegex             = "^(" + MetricPrefix + "/((" + DefaultCIDRegex + ")_([^[:space:]]+)))$"
+    OutlierReportPrefix        = "/outlier_report"
+    OutlierReportCIDRegex      = "^(" + OutlierReportPrefix + "/(" + DefaultCIDRegex + "))$"
+    ProvisionBrokerPrefix      = "/provision_broker"
+    ProvisionBrokerCIDRegex    = "^(" + ProvisionBrokerPrefix + "/([a-z0-9]+-[a-z0-9]+))$"
+    RuleSetGroupPrefix         = "/rule_set_group"
+    RuleSetGroupCIDRegex       = "^(" + RuleSetGroupPrefix + "/(" + DefaultCIDRegex + "))$"
+    RuleSetPrefix              = "/rule_set"
+    RuleSetCIDRegex            = "^(" + RuleSetPrefix + "/((" + DefaultCIDRegex + ")_([^[:space:]]+)))$"
+    UserPrefix                 = "/user"
+    UserCIDRegex               = "^(" + UserPrefix + "/(" + DefaultCIDRegex + "|current))$"
+    WorksheetPrefix            = "/worksheet"
+    WorksheetCIDRegex          = "^(" + WorksheetPrefix + "/(" + DefaultUUIDRegex + "))$"
+
+    // contact group severity levels
+    NumSeverityLevels = 5
+)
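+
+// Example (editor's sketch): the CID regexes above are used with
+// regexp.MatchString to validate caller-supplied CIDs, e.g.:
+//
+//    ok, err := regexp.MatchString(BrokerCIDRegex, "/broker/1234")
+//    // ok == true, err == nil for a well-formed broker CID
diff --git a/vendor/github.com/circonus-labs/circonus-gometrics/api/contact_group.go b/vendor/github.com/circonus-labs/circonus-gometrics/api/contact_group.go
new file mode 100644
index 0000000000..578a2e8988
--- /dev/null
+++ b/vendor/github.com/circonus-labs/circonus-gometrics/api/contact_group.go
@@ -0,0 +1,263 @@
+// Copyright 2016 Circonus, Inc. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.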
+ +// Contact Group API support - Fetch, Create, Update, Delete, and Search +// See: https://login.circonus.com/resources/api/calls/contact_group + +package api + +import ( + "encoding/json" + "fmt" + "net/url" + "regexp" + + "github.com/circonus-labs/circonus-gometrics/api/config" +) + +// ContactGroupAlertFormats define alert formats +type ContactGroupAlertFormats struct { + LongMessage *string `json:"long_message"` // string or null + LongSubject *string `json:"long_subject"` // string or null + LongSummary *string `json:"long_summary"` // string or null + ShortMessage *string `json:"short_message"` // string or null + ShortSummary *string `json:"short_summary"` // string or null +} + +// ContactGroupContactsExternal external contacts +type ContactGroupContactsExternal struct { + Info string `json:"contact_info"` // string + Method string `json:"method"` // string +} + +// ContactGroupContactsUser user contacts +type ContactGroupContactsUser struct { + Info string `json:"_contact_info,omitempty"` // string + Method string `json:"method"` // string + UserCID string `json:"user"` // string +} + +// ContactGroupContacts list of contacts +type ContactGroupContacts struct { + External []ContactGroupContactsExternal `json:"external"` // [] len >= 0 + Users []ContactGroupContactsUser `json:"users"` // [] len >= 0 +} + +// ContactGroupEscalation defines escalations for severity levels +type ContactGroupEscalation struct { + After uint `json:"after"` // uint + ContactGroupCID string `json:"contact_group"` // string +} + +// ContactGroup defines a contact group. See https://login.circonus.com/resources/api/calls/contact_group for more information. +type ContactGroup struct { + AggregationWindow uint `json:"aggregation_window,omitempty"` // uint + AlertFormats ContactGroupAlertFormats `json:"alert_formats,omitempty"` // ContactGroupAlertFormats + CID string `json:"_cid,omitempty"` // string + Contacts ContactGroupContacts `json:"contacts,omitempty"` // ContactGroupContacts + Escalations []*ContactGroupEscalation `json:"escalations,omitempty"` // [] len == 5, elements: ContactGroupEscalation or null + LastModified uint `json:"_last_modified,omitempty"` // uint + LastModifiedBy string `json:"_last_modified_by,omitempty"` // string + Name string `json:"name,omitempty"` // string + Reminders []uint `json:"reminders,omitempty"` // [] len == 5 + Tags []string `json:"tags,omitempty"` // [] len >= 0 +} + +// NewContactGroup returns a ContactGroup (with defaults, if applicable) +func NewContactGroup() *ContactGroup { + return &ContactGroup{ + Escalations: make([]*ContactGroupEscalation, config.NumSeverityLevels), + Reminders: make([]uint, config.NumSeverityLevels), + Contacts: ContactGroupContacts{ + External: []ContactGroupContactsExternal{}, + Users: []ContactGroupContactsUser{}, + }, + } +} + +// FetchContactGroup retrieves contact group with passed cid. 
+func (a *API) FetchContactGroup(cid CIDType) (*ContactGroup, error) { + if cid == nil || *cid == "" { + return nil, fmt.Errorf("Invalid contact group CID [none]") + } + + groupCID := string(*cid) + + matched, err := regexp.MatchString(config.ContactGroupCIDRegex, groupCID) + if err != nil { + return nil, err + } + if !matched { + return nil, fmt.Errorf("Invalid contact group CID [%s]", groupCID) + } + + result, err := a.Get(groupCID) + if err != nil { + return nil, err + } + + if a.Debug { + a.Log.Printf("[DEBUG] fetch contact group, received JSON: %s", string(result)) + } + + group := new(ContactGroup) + if err := json.Unmarshal(result, group); err != nil { + return nil, err + } + + return group, nil +} + +// FetchContactGroups retrieves all contact groups available to the API Token. +func (a *API) FetchContactGroups() (*[]ContactGroup, error) { + result, err := a.Get(config.ContactGroupPrefix) + if err != nil { + return nil, err + } + + var groups []ContactGroup + if err := json.Unmarshal(result, &groups); err != nil { + return nil, err + } + + return &groups, nil +} + +// UpdateContactGroup updates passed contact group. +func (a *API) UpdateContactGroup(cfg *ContactGroup) (*ContactGroup, error) { + if cfg == nil { + return nil, fmt.Errorf("Invalid contact group config [nil]") + } + + groupCID := string(cfg.CID) + + matched, err := regexp.MatchString(config.ContactGroupCIDRegex, groupCID) + if err != nil { + return nil, err + } + if !matched { + return nil, fmt.Errorf("Invalid contact group CID [%s]", groupCID) + } + + jsonCfg, err := json.Marshal(cfg) + if err != nil { + return nil, err + } + + if a.Debug { + a.Log.Printf("[DEBUG] update contact group, sending JSON: %s", string(jsonCfg)) + } + + result, err := a.Put(groupCID, jsonCfg) + if err != nil { + return nil, err + } + + group := &ContactGroup{} + if err := json.Unmarshal(result, group); err != nil { + return nil, err + } + + return group, nil +} + +// CreateContactGroup creates a new contact group. +func (a *API) CreateContactGroup(cfg *ContactGroup) (*ContactGroup, error) { + if cfg == nil { + return nil, fmt.Errorf("Invalid contact group config [nil]") + } + + jsonCfg, err := json.Marshal(cfg) + if err != nil { + return nil, err + } + + if a.Debug { + a.Log.Printf("[DEBUG] create contact group, sending JSON: %s", string(jsonCfg)) + } + + result, err := a.Post(config.ContactGroupPrefix, jsonCfg) + if err != nil { + return nil, err + } + + group := &ContactGroup{} + if err := json.Unmarshal(result, group); err != nil { + return nil, err + } + + return group, nil +} + +// DeleteContactGroup deletes passed contact group. +func (a *API) DeleteContactGroup(cfg *ContactGroup) (bool, error) { + if cfg == nil { + return false, fmt.Errorf("Invalid contact group config [nil]") + } + return a.DeleteContactGroupByCID(CIDType(&cfg.CID)) +} + +// DeleteContactGroupByCID deletes contact group with passed cid. +func (a *API) DeleteContactGroupByCID(cid CIDType) (bool, error) { + if cid == nil || *cid == "" { + return false, fmt.Errorf("Invalid contact group CID [none]") + } + + groupCID := string(*cid) + + matched, err := regexp.MatchString(config.ContactGroupCIDRegex, groupCID) + if err != nil { + return false, err + } + if !matched { + return false, fmt.Errorf("Invalid contact group CID [%s]", groupCID) + } + + _, err = a.Delete(groupCID) + if err != nil { + return false, err + } + + return true, nil +} + +// SearchContactGroups returns contact groups matching the specified +// search query and/or filter. 
If nil is passed for both parameters +// all contact groups will be returned. +func (a *API) SearchContactGroups(searchCriteria *SearchQueryType, filterCriteria *SearchFilterType) (*[]ContactGroup, error) { + q := url.Values{} + + if searchCriteria != nil && *searchCriteria != "" { + q.Set("search", string(*searchCriteria)) + } + + if filterCriteria != nil && len(*filterCriteria) > 0 { + for filter, criteria := range *filterCriteria { + for _, val := range criteria { + q.Add(filter, val) + } + } + } + + if q.Encode() == "" { + return a.FetchContactGroups() + } + + reqURL := url.URL{ + Path: config.ContactGroupPrefix, + RawQuery: q.Encode(), + } + + result, err := a.Get(reqURL.String()) + if err != nil { + return nil, fmt.Errorf("[ERROR] API call error %+v", err) + } + + var groups []ContactGroup + if err := json.Unmarshal(result, &groups); err != nil { + return nil, err + } + + return &groups, nil +} diff --git a/vendor/github.com/circonus-labs/circonus-gometrics/api/dashboard.go b/vendor/github.com/circonus-labs/circonus-gometrics/api/dashboard.go new file mode 100644 index 0000000000..b219873874 --- /dev/null +++ b/vendor/github.com/circonus-labs/circonus-gometrics/api/dashboard.go @@ -0,0 +1,399 @@ +// Copyright 2016 Circonus, Inc. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Dashboard API support - Fetch, Create, Update, Delete, and Search +// See: https://login.circonus.com/resources/api/calls/dashboard + +package api + +import ( + "encoding/json" + "fmt" + "net/url" + "regexp" + + "github.com/circonus-labs/circonus-gometrics/api/config" +) + +// DashboardGridLayout defines layout +type DashboardGridLayout struct { + Height uint `json:"height"` + Width uint `json:"width"` +} + +// DashboardAccessConfig defines access config +type DashboardAccessConfig struct { + BlackDash bool `json:"black_dash,omitempty"` + Enabled bool `json:"enabled,omitempty"` + Fullscreen bool `json:"fullscreen,omitempty"` + FullscreenHideTitle bool `json:"fullscreen_hide_title,omitempty"` + Nickname string `json:"nickname,omitempty"` + ScaleText bool `json:"scale_text,omitempty"` + SharedID string `json:"shared_id,omitempty"` + TextSize uint `json:"text_size,omitempty"` +} + +// DashboardOptions defines options +type DashboardOptions struct { + AccessConfigs []DashboardAccessConfig `json:"access_configs,omitempty"` + FullscreenHideTitle bool `json:"fullscreen_hide_title,omitempty"` + HideGrid bool `json:"hide_grid,omitempty"` + Linkages [][]string `json:"linkages,omitempty"` + ScaleText bool `json:"scale_text,omitempty"` + TextSize uint `json:"text_size,omitempty"` +} + +// ChartTextWidgetDatapoint defines datapoints for charts +type ChartTextWidgetDatapoint struct { + AccountID string `json:"account_id,omitempty"` // metric cluster, metric + CheckID uint `json:"_check_id,omitempty"` // metric + ClusterID uint `json:"cluster_id,omitempty"` // metric cluster + ClusterTitle string `json:"_cluster_title,omitempty"` // metric cluster + Label string `json:"label,omitempty"` // metric + Label2 string `json:"_label,omitempty"` // metric cluster + Metric string `json:"metric,omitempty"` // metric + MetricType string `json:"_metric_type,omitempty"` // metric + NumericOnly bool `json:"numeric_only,omitempty"` // metric cluster +} + +// ChartWidgetDefinitionLegend defines chart widget definition legend +type ChartWidgetDefinitionLegend struct { + Show bool `json:"show,omitempty"` + Type string `json:"type,omitempty"` +} + +// 
ChartWidgetWedgeLabels defines chart widget wedge labels +type ChartWidgetWedgeLabels struct { + OnChart bool `json:"on_chart,omitempty"` + ToolTips bool `json:"tooltips,omitempty"` +} + +// ChartWidgetWedgeValues defines chart widget wedge values +type ChartWidgetWedgeValues struct { + Angle string `json:"angle,omitempty"` + Color string `json:"color,omitempty"` + Show bool `json:"show,omitempty"` +} + +// ChartWidgtDefinition defines chart widget definition +type ChartWidgtDefinition struct { + Datasource string `json:"datasource,omitempty"` + Derive string `json:"derive,omitempty"` + DisableAutoformat bool `json:"disable_autoformat,omitempty"` + Formula string `json:"formula,omitempty"` + Legend ChartWidgetDefinitionLegend `json:"legend,omitempty"` + Period uint `json:"period,omitempty"` + PopOnHover bool `json:"pop_onhover,omitempty"` + WedgeLabels ChartWidgetWedgeLabels `json:"wedge_labels,omitempty"` + WedgeValues ChartWidgetWedgeValues `json:"wedge_values,omitempty"` +} + +// ForecastGaugeWidgetThresholds defines forecast widget thresholds +type ForecastGaugeWidgetThresholds struct { + Colors []string `json:"colors,omitempty"` // forecasts, gauges + Flip bool `json:"flip,omitempty"` // gauges + Values []string `json:"values,omitempty"` // forecasts, gauges +} + +// StatusWidgetAgentStatusSettings defines agent status settings +type StatusWidgetAgentStatusSettings struct { + Search string `json:"search,omitempty"` + ShowAgentTypes string `json:"show_agent_types,omitempty"` + ShowContact bool `json:"show_contact,omitempty"` + ShowFeeds bool `json:"show_feeds,omitempty"` + ShowSetup bool `json:"show_setup,omitempty"` + ShowSkew bool `json:"show_skew,omitempty"` + ShowUpdates bool `json:"show_updates,omitempty"` +} + +// StatusWidgetHostStatusSettings defines host status settings +type StatusWidgetHostStatusSettings struct { + LayoutStyle string `json:"layout_style,omitempty"` + Search string `json:"search,omitempty"` + SortBy string `json:"sort_by,omitempty"` + TagFilterSet []string `json:"tag_filter_set,omitempty"` +} + +// DashboardWidgetSettings defines settings specific to widget +type DashboardWidgetSettings struct { + AccountID string `json:"account_id,omitempty"` // alerts, clusters, gauges, graphs, lists, status + Acknowledged string `json:"acknowledged,omitempty"` // alerts + AgentStatusSettings StatusWidgetAgentStatusSettings `json:"agent_status_settings,omitempty"` // status + Algorithm string `json:"algorithm,omitempty"` // clusters + Autoformat bool `json:"autoformat,omitempty"` // text + BodyFormat string `json:"body_format,omitempty"` // text + ChartType string `json:"chart_type,omitempty"` // charts + CheckUUID string `json:"check_uuid,omitempty"` // gauges + Cleared string `json:"cleared,omitempty"` // alerts + ClusterID uint `json:"cluster_id,omitempty"` // clusters + ClusterName string `json:"cluster_name,omitempty"` // clusters + ContactGroups []uint `json:"contact_groups,omitempty"` // alerts + ContentType string `json:"content_type,omitempty"` // status + Datapoints []ChartTextWidgetDatapoint `json:"datapoints,omitempty"` // charts, text + DateWindow string `json:"date_window,omitempty"` // graphs + Definition ChartWidgtDefinition `json:"definition,omitempty"` // charts + Dependents string `json:"dependents,omitempty"` // alerts + DisableAutoformat bool `json:"disable_autoformat,omitempty"` // gauges + Display string `json:"display,omitempty"` // alerts + Format string `json:"format,omitempty"` // forecasts + Formula string `json:"formula,omitempty"` // gauges + 
GraphUUID string `json:"graph_id,omitempty"` // graphs + HideXAxis bool `json:"hide_xaxis,omitempty"` // graphs + HideYAxis bool `json:"hide_yaxis,omitempty"` // graphs + HostStatusSettings StatusWidgetHostStatusSettings `json:"host_status_settings,omitempty"` // status + KeyInline bool `json:"key_inline,omitempty"` // graphs + KeyLoc string `json:"key_loc,omitempty"` // graphs + KeySize uint `json:"key_size,omitempty"` // graphs + KeyWrap bool `json:"key_wrap,omitempty"` // graphs + Label string `json:"label,omitempty"` // graphs + Layout string `json:"layout,omitempty"` // clusters + Limit uint `json:"limit,omitempty"` // lists + Maintenance string `json:"maintenance,omitempty"` // alerts + Markup string `json:"markup,omitempty"` // html + MetricDisplayName string `json:"metric_display_name,omitempty"` // gauges + MetricName string `json:"metric_name,omitempty"` // gauges + MinAge string `json:"min_age,omitempty"` // alerts + OffHours []uint `json:"off_hours,omitempty"` // alerts + OverlaySetID string `json:"overlay_set_id,omitempty"` // graphs + Period uint `json:"period,omitempty"` // gauges, text, graphs + RangeHigh int `json:"range_high,omitempty"` // gauges + RangeLow int `json:"range_low,omitempty"` // gauges + Realtime bool `json:"realtime,omitempty"` // graphs + ResourceLimit string `json:"resource_limit,omitempty"` // forecasts + ResourceUsage string `json:"resource_usage,omitempty"` // forecasts + Search string `json:"search,omitempty"` // alerts, lists + Severity string `json:"severity,omitempty"` // alerts + ShowFlags bool `json:"show_flags,omitempty"` // graphs + Size string `json:"size,omitempty"` // clusters + TagFilterSet []string `json:"tag_filter_set,omitempty"` // alerts + Threshold float32 `json:"threshold,omitempty"` // clusters + Thresholds ForecastGaugeWidgetThresholds `json:"thresholds,omitempty"` // forecasts, gauges + TimeWindow string `json:"time_window,omitempty"` // alerts + Title string `json:"title,omitempty"` // alerts, charts, forecasts, gauges, html + TitleFormat string `json:"title_format,omitempty"` // text + Trend string `json:"trend,omitempty"` // forecasts + Type string `json:"type,omitempty"` // gauges, lists + UseDefault bool `json:"use_default,omitempty"` // text + ValueType string `json:"value_type,omitempty"` // gauges, text + WeekDays []string `json:"weekdays,omitempty"` // alerts +} + +// DashboardWidget defines widget +type DashboardWidget struct { + Active bool `json:"active"` + Height uint `json:"height"` + Name string `json:"name"` + Origin string `json:"origin"` + Settings DashboardWidgetSettings `json:"settings"` + Type string `json:"type"` + WidgetID string `json:"widget_id"` + Width uint `json:"width"` +} + +// Dashboard defines a dashboard. See https://login.circonus.com/resources/api/calls/dashboard for more information. 
+type Dashboard struct { + AccountDefault bool `json:"account_default"` + Active bool `json:"_active,omitempty"` + CID string `json:"_cid,omitempty"` + Created uint `json:"_created,omitempty"` + CreatedBy string `json:"_created_by,omitempty"` + GridLayout DashboardGridLayout `json:"grid_layout"` + LastModified uint `json:"_last_modified,omitempty"` + Options DashboardOptions `json:"options"` + Shared bool `json:"shared"` + Title string `json:"title"` + UUID string `json:"_dashboard_uuid,omitempty"` + Widgets []DashboardWidget `json:"widgets"` +} + +// NewDashboard returns a new Dashboard (with defaults, if applicable) +func NewDashboard() *Dashboard { + return &Dashboard{} +} + +// FetchDashboard retrieves dashboard with passed cid. +func (a *API) FetchDashboard(cid CIDType) (*Dashboard, error) { + if cid == nil || *cid == "" { + return nil, fmt.Errorf("Invalid dashboard CID [none]") + } + + dashboardCID := string(*cid) + + matched, err := regexp.MatchString(config.DashboardCIDRegex, dashboardCID) + if err != nil { + return nil, err + } + if !matched { + return nil, fmt.Errorf("Invalid dashboard CID [%s]", dashboardCID) + } + + result, err := a.Get(string(*cid)) + if err != nil { + return nil, err + } + + if a.Debug { + a.Log.Printf("[DEBUG] fetch dashboard, received JSON: %s", string(result)) + } + + dashboard := new(Dashboard) + if err := json.Unmarshal(result, dashboard); err != nil { + return nil, err + } + + return dashboard, nil +} + +// FetchDashboards retrieves all dashboards available to the API Token. +func (a *API) FetchDashboards() (*[]Dashboard, error) { + result, err := a.Get(config.DashboardPrefix) + if err != nil { + return nil, err + } + + var dashboards []Dashboard + if err := json.Unmarshal(result, &dashboards); err != nil { + return nil, err + } + + return &dashboards, nil +} + +// UpdateDashboard updates passed dashboard. +func (a *API) UpdateDashboard(cfg *Dashboard) (*Dashboard, error) { + if cfg == nil { + return nil, fmt.Errorf("Invalid dashboard config [nil]") + } + + dashboardCID := string(cfg.CID) + + matched, err := regexp.MatchString(config.DashboardCIDRegex, dashboardCID) + if err != nil { + return nil, err + } + if !matched { + return nil, fmt.Errorf("Invalid dashboard CID [%s]", dashboardCID) + } + + jsonCfg, err := json.Marshal(cfg) + if err != nil { + return nil, err + } + + if a.Debug { + a.Log.Printf("[DEBUG] update dashboard, sending JSON: %s", string(jsonCfg)) + } + + result, err := a.Put(dashboardCID, jsonCfg) + if err != nil { + return nil, err + } + + dashboard := &Dashboard{} + if err := json.Unmarshal(result, dashboard); err != nil { + return nil, err + } + + return dashboard, nil +} + +// CreateDashboard creates a new dashboard. +func (a *API) CreateDashboard(cfg *Dashboard) (*Dashboard, error) { + if cfg == nil { + return nil, fmt.Errorf("Invalid dashboard config [nil]") + } + + jsonCfg, err := json.Marshal(cfg) + if err != nil { + return nil, err + } + + if a.Debug { + a.Log.Printf("[DEBUG] create dashboard, sending JSON: %s", string(jsonCfg)) + } + + result, err := a.Post(config.DashboardPrefix, jsonCfg) + if err != nil { + return nil, err + } + + dashboard := &Dashboard{} + if err := json.Unmarshal(result, dashboard); err != nil { + return nil, err + } + + return dashboard, nil +} + +// DeleteDashboard deletes passed dashboard. 
+func (a *API) DeleteDashboard(cfg *Dashboard) (bool, error) { + if cfg == nil { + return false, fmt.Errorf("Invalid dashboard config [nil]") + } + return a.DeleteDashboardByCID(CIDType(&cfg.CID)) +} + +// DeleteDashboardByCID deletes dashboard with passed cid. +func (a *API) DeleteDashboardByCID(cid CIDType) (bool, error) { + if cid == nil || *cid == "" { + return false, fmt.Errorf("Invalid dashboard CID [none]") + } + + dashboardCID := string(*cid) + + matched, err := regexp.MatchString(config.DashboardCIDRegex, dashboardCID) + if err != nil { + return false, err + } + if !matched { + return false, fmt.Errorf("Invalid dashboard CID [%s]", dashboardCID) + } + + _, err = a.Delete(dashboardCID) + if err != nil { + return false, err + } + + return true, nil +} + +// SearchDashboards returns dashboards matching the specified +// search query and/or filter. If nil is passed for both parameters +// all dashboards will be returned. +func (a *API) SearchDashboards(searchCriteria *SearchQueryType, filterCriteria *SearchFilterType) (*[]Dashboard, error) { + q := url.Values{} + + if searchCriteria != nil && *searchCriteria != "" { + q.Set("search", string(*searchCriteria)) + } + + if filterCriteria != nil && len(*filterCriteria) > 0 { + for filter, criteria := range *filterCriteria { + for _, val := range criteria { + q.Add(filter, val) + } + } + } + + if q.Encode() == "" { + return a.FetchDashboards() + } + + reqURL := url.URL{ + Path: config.DashboardPrefix, + RawQuery: q.Encode(), + } + + result, err := a.Get(reqURL.String()) + if err != nil { + return nil, fmt.Errorf("[ERROR] API call error %+v", err) + } + + var dashboards []Dashboard + if err := json.Unmarshal(result, &dashboards); err != nil { + return nil, err + } + + return &dashboards, nil +} diff --git a/vendor/github.com/circonus-labs/circonus-gometrics/api/doc.go b/vendor/github.com/circonus-labs/circonus-gometrics/api/doc.go new file mode 100644 index 0000000000..63904d7844 --- /dev/null +++ b/vendor/github.com/circonus-labs/circonus-gometrics/api/doc.go @@ -0,0 +1,63 @@ +// Copyright 2016 Circonus, Inc. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +/* +Package api provides methods for interacting with the Circonus API. See the full Circonus API +Documentation at https://login.circonus.com/resources/api for more information. 
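+
+A minimal usage sketch (editor's addition, not upstream documentation; the
+token value and request path are illustrative):
+
+    client, err := api.New(&api.Config{TokenKey: "..."})
+    if err != nil {
+        panic(err)
+    }
+    data, err := client.Get("/check_bundle")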
+
+Raw REST methods
+
+    Get    - retrieve existing item(s)
+    Put    - update an existing item
+    Post   - create a new item
+    Delete - remove an existing item
+
+Endpoints (supported)
+
+    Account              https://login.circonus.com/resources/api/calls/account
+    Acknowledgement      https://login.circonus.com/resources/api/calls/acknowledgement
+    Alert                https://login.circonus.com/resources/api/calls/alert
+    Annotation           https://login.circonus.com/resources/api/calls/annotation
+    Broker               https://login.circonus.com/resources/api/calls/broker
+    Check                https://login.circonus.com/resources/api/calls/check
+    Check Bundle         https://login.circonus.com/resources/api/calls/check_bundle
+    Check Bundle Metrics https://login.circonus.com/resources/api/calls/check_bundle_metrics
+    Contact Group        https://login.circonus.com/resources/api/calls/contact_group
+    Dashboard            https://login.circonus.com/resources/api/calls/dashboard
+    Graph                https://login.circonus.com/resources/api/calls/graph
+    Maintenance [window] https://login.circonus.com/resources/api/calls/maintenance
+    Metric               https://login.circonus.com/resources/api/calls/metric
+    Metric Cluster       https://login.circonus.com/resources/api/calls/metric_cluster
+    Outlier Report       https://login.circonus.com/resources/api/calls/outlier_report
+    Provision Broker     https://login.circonus.com/resources/api/calls/provision_broker
+    Rule Set             https://login.circonus.com/resources/api/calls/rule_set
+    Rule Set Group       https://login.circonus.com/resources/api/calls/rule_set_group
+    User                 https://login.circonus.com/resources/api/calls/user
+    Worksheet            https://login.circonus.com/resources/api/calls/worksheet
+
+Endpoints (not supported)
+
+    Support may be added for these endpoints in the future. These endpoints may currently be used
+    directly with the Raw REST methods above.
+
+    CAQL       https://login.circonus.com/resources/api/calls/caql
+    Check Move https://login.circonus.com/resources/api/calls/check_move
+    Data       https://login.circonus.com/resources/api/calls/data
+    Snapshot   https://login.circonus.com/resources/api/calls/snapshot
+    Tag        https://login.circonus.com/resources/api/calls/tag
+    Template   https://login.circonus.com/resources/api/calls/template
+
+Verbs
+
+    Fetch  singular/plural item(s) - e.g. FetchAnnotation, FetchAnnotations
+    Create create new item         - e.g. CreateAnnotation
+    Update update an item          - e.g. UpdateAnnotation
+    Delete remove an item          - e.g. DeleteAnnotation, DeleteAnnotationByCID
+    Search search for item(s)      - e.g. SearchAnnotations
+    New    new item config         - e.g. NewAnnotation (returns an empty item,
+           any applicable defaults defined)
+
+    Not all endpoints support all verbs.
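+
+Verb lifecycle sketch (editor's addition; the names and values are
+illustrative):
+
+    cg := api.NewContactGroup()
+    cg.Name = "ops-pager"
+    created, err := client.CreateContactGroup(cg)
+    if err != nil {
+        panic(err)
+    }
+    created.Name = "ops-pager-primary"
+    updated, err := client.UpdateContactGroup(created)
+    if err != nil {
+        panic(err)
+    }
+    ok, err := client.DeleteContactGroup(updated)
+*/
+package api
diff --git a/vendor/github.com/circonus-labs/circonus-gometrics/api/graph.go b/vendor/github.com/circonus-labs/circonus-gometrics/api/graph.go
new file mode 100644
index 0000000000..2d28653271
--- /dev/null
+++ b/vendor/github.com/circonus-labs/circonus-gometrics/api/graph.go
@@ -0,0 +1,349 @@
+// Copyright 2016 Circonus, Inc. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.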
+ +// Graph API support - Fetch, Create, Update, Delete, and Search +// See: https://login.circonus.com/resources/api/calls/graph + +package api + +import ( + "encoding/json" + "fmt" + "net/url" + "regexp" + + "github.com/circonus-labs/circonus-gometrics/api/config" +) + +// GraphAccessKey defines an access key for a graph +type GraphAccessKey struct { + Active bool `json:"active,omitempty"` // boolean + Height uint `json:"height,omitempty"` // uint + Key string `json:"key,omitempty"` // string + Legend bool `json:"legend,omitempty"` // boolean + LockDate bool `json:"lock_date,omitempty"` // boolean + LockMode string `json:"lock_mode,omitempty"` // string + LockRangeEnd uint `json:"lock_range_end,omitempty"` // uint + LockRangeStart uint `json:"lock_range_start,omitempty"` // uint + LockShowTimes bool `json:"lock_show_times,omitempty"` // boolean + LockZoom string `json:"lock_zoom,omitempty"` // string + Nickname string `json:"nickname,omitempty"` // string + Title bool `json:"title,omitempty"` // boolean + Width uint `json:"width,omitempty"` // uint + XLabels bool `json:"x_labels,omitempty"` // boolean + YLabels bool `json:"y_labels,omitempty"` // boolean +} + +// GraphComposite defines a composite +type GraphComposite struct { + Axis string `json:"axis,omitempty"` // string + Color string `json:"color,omitempty"` // string + DataFormula *string `json:"data_formula,omitempty"` // string or null + Hidden bool `json:"hidden,omitempty"` // boolean + LegendFormula *string `json:"legend_formula,omitempty"` // string or null + Name string `json:"name,omitempty"` // string + Stack *uint `json:"stack,omitempty"` // uint or null +} + +// GraphDatapoint defines a datapoint +type GraphDatapoint struct { + Alpha *float64 `json:"alpha,string,omitempty"` // float64 + Axis string `json:"axis,omitempty"` // string + CAQL *string `json:"caql,omitempty"` // string or null + CheckID uint `json:"check_id,omitempty"` // uint + Color *string `json:"color,omitempty"` // string + DataFormula *string `json:"data_formula"` // string or null + Derive interface{} `json:"derive,omitempty"` // BUG doc: string, api: string or boolean(for caql statements) + Hidden bool `json:"hidden"` // boolean + LegendFormula *string `json:"legend_formula"` // string or null + MetricName string `json:"metric_name,omitempty"` // string + MetricType string `json:"metric_type,omitempty"` // string + Name string `json:"name"` // string + Stack *uint `json:"stack"` // uint or null +} + +// GraphGuide defines a guide +type GraphGuide struct { + Color string `json:"color,omitempty"` // string + DataFormula *string `json:"data_formula,omitempty"` // string or null + Hidden bool `json:"hidden,omitempty"` // boolean + LegendFormula *string `json:"legend_formula,omitempty"` // string or null + Name string `json:"name,omitempty"` // string +} + +// GraphMetricCluster defines a metric cluster +type GraphMetricCluster struct { + AggregateFunc string `json:"aggregate_function,omitempty"` // string + Axis string `json:"axis,omitempty"` // string + Color *string `json:"color,omitempty"` // string + DataFormula *string `json:"data_formula"` // string or null + Hidden bool `json:"hidden"` // boolean + LegendFormula *string `json:"legend_formula"` // string or null + MetricCluster string `json:"metric_cluster,omitempty"` // string + Name string `json:"name,omitempty"` // string + Stack *uint `json:"stack"` // uint or null +} + +// OverlayDataOptions defines overlay options for data. Note, each overlay type requires +// a _subset_ of the options. 
See Graph API documentation (URL above) for details. +type OverlayDataOptions struct { + Alerts *int `json:"alerts,string,omitempty"` // int encoded as string BUG doc: numeric, api: string + ArrayOutput *int `json:"array_output,string,omitempty"` // int encoded as string BUG doc: numeric, api: string + BasePeriod *int `json:"base_period,string,omitempty"` // int encoded as string BUG doc: numeric, api: string + Delay *int `json:"delay,string,omitempty"` // int encoded as string BUG doc: numeric, api: string + Extension string `json:"extension,omitempty"` // string + GraphTitle string `json:"graph_title,omitempty"` // string + GraphUUID string `json:"graph_id,omitempty"` // string + InPercent *bool `json:"in_percent,string,omitempty"` // boolean encoded as string BUG doc: boolean, api: string + Inverse *int `json:"inverse,string,omitempty"` // int encoded as string BUG doc: numeric, api: string + Method string `json:"method,omitempty"` // string + Model string `json:"model,omitempty"` // string + ModelEnd string `json:"model_end,omitempty"` // string + ModelPeriod string `json:"model_period,omitempty"` // string + ModelRelative *int `json:"model_relative,string,omitempty"` // int encoded as string BUG doc: numeric, api: string + Out string `json:"out,omitempty"` // string + Prequel string `json:"prequel,omitempty"` // string + Presets string `json:"presets,omitempty"` // string + Quantiles string `json:"quantiles,omitempty"` // string + SeasonLength *int `json:"season_length,string,omitempty"` // int encoded as string BUG doc: numeric, api: string + Sensitivity *int `json:"sensitivity,string,omitempty"` // int encoded as string BUG doc: numeric, api: string + SingleValue *int `json:"single_value,string,omitempty"` // int encoded as string BUG doc: numeric, api: string + TargetPeriod string `json:"target_period,omitempty"` // string + TimeOffset string `json:"time_offset,omitempty"` // string + TimeShift *int `json:"time_shift,string,omitempty"` // int encoded as string BUG doc: numeric, api: string + Transform string `json:"transform,omitempty"` // string + Version *int `json:"version,string,omitempty"` // int encoded as string BUG doc: numeric, api: string + Window *int `json:"window,string,omitempty"` // int encoded as string BUG doc: numeric, api: string + XShift string `json:"x_shift,omitempty"` // string +} + +// OverlayUISpecs defines UI specs for overlay +type OverlayUISpecs struct { + Decouple bool `json:"decouple,omitempty"` // boolean + ID string `json:"id,omitempty"` // string + Label string `json:"label,omitempty"` // string + Type string `json:"type,omitempty"` // string + Z *int `json:"z,string,omitempty"` // int encoded as string BUG doc: numeric, api: string +} + +// GraphOverlaySet defines overlays for graph +type GraphOverlaySet struct { + DataOpts OverlayDataOptions `json:"data_opts,omitempty"` // OverlayDataOptions + ID string `json:"id,omitempty"` // string + Title string `json:"title,omitempty"` // string + UISpecs OverlayUISpecs `json:"ui_specs,omitempty"` // OverlayUISpecs +} + +// Graph defines a graph. See https://login.circonus.com/resources/api/calls/graph for more information. 
+type Graph struct {
+    AccessKeys     []GraphAccessKey            `json:"access_keys,omitempty"`     // [] len >= 0
+    CID            string                      `json:"_cid,omitempty"`            // string
+    Composites     []GraphComposite            `json:"composites,omitempty"`      // [] len >= 0
+    Datapoints     []GraphDatapoint            `json:"datapoints,omitempty"`      // [] len >= 0
+    Description    string                      `json:"description,omitempty"`     // string
+    Guides         []GraphGuide                `json:"guides,omitempty"`          // [] len >= 0
+    LineStyle      *string                     `json:"line_style"`                // string or null
+    LogLeftY       *int                        `json:"logarithmic_left_y,string,omitempty"`  // int encoded as string or null BUG doc: number (not string)
+    LogRightY      *int                        `json:"logarithmic_right_y,string,omitempty"` // int encoded as string or null BUG doc: number (not string)
+    MaxLeftY       *float64                    `json:"max_left_y,string,omitempty"`  // float64 encoded as string or null BUG doc: number (not string)
+    MaxRightY      *float64                    `json:"max_right_y,string,omitempty"` // float64 encoded as string or null BUG doc: number (not string)
+    MetricClusters []GraphMetricCluster        `json:"metric_clusters,omitempty"` // [] len >= 0
+    MinLeftY       *float64                    `json:"min_left_y,string,omitempty"`  // float64 encoded as string or null BUG doc: number (not string)
+    MinRightY      *float64                    `json:"min_right_y,string,omitempty"` // float64 encoded as string or null BUG doc: number (not string)
+    Notes          *string                     `json:"notes,omitempty"`           // string or null
+    OverlaySets    *map[string]GraphOverlaySet `json:"overlay_sets,omitempty"`    // map of GraphOverlaySet or null
+    Style          *string                     `json:"style"`                     // string or null
+    Tags           []string                    `json:"tags,omitempty"`            // [] len >= 0
+    Title          string                      `json:"title,omitempty"`           // string
+}
+
+// NewGraph returns a Graph (with defaults, if applicable)
+func NewGraph() *Graph {
+    return &Graph{}
+}
+
+// FetchGraph retrieves graph with passed cid.
+func (a *API) FetchGraph(cid CIDType) (*Graph, error) {
+    if cid == nil || *cid == "" {
+        return nil, fmt.Errorf("Invalid graph CID [none]")
+    }
+
+    graphCID := string(*cid)
+
+    matched, err := regexp.MatchString(config.GraphCIDRegex, graphCID)
+    if err != nil {
+        return nil, err
+    }
+    if !matched {
+        return nil, fmt.Errorf("Invalid graph CID [%s]", graphCID)
+    }
+
+    result, err := a.Get(graphCID)
+    if err != nil {
+        return nil, err
+    }
+    if a.Debug {
+        a.Log.Printf("[DEBUG] fetch graph, received JSON: %s", string(result))
+    }
+
+    graph := new(Graph)
+    if err := json.Unmarshal(result, graph); err != nil {
+        return nil, err
+    }
+
+    return graph, nil
+}
+
+// FetchGraphs retrieves all graphs available to the API Token.
+func (a *API) FetchGraphs() (*[]Graph, error) {
+    result, err := a.Get(config.GraphPrefix)
+    if err != nil {
+        return nil, err
+    }
+
+    var graphs []Graph
+    if err := json.Unmarshal(result, &graphs); err != nil {
+        return nil, err
+    }
+
+    return &graphs, nil
+}
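Aside: a minimal sketch of assembling a graph from the types above and creating it, not part of the vendored file. The API handle apih, the check ID, and the metric name are placeholders; only the types and CreateGraph come from the code in this diff.

    g := api.NewGraph()
    g.Title = "cpu idle"
    g.Datapoints = []api.GraphDatapoint{
        {
            Axis:       "l",        // left y-axis
            CheckID:    1234,       // hypothetical check ID
            MetricName: "cpu`idle", // hypothetical metric name
            MetricType: "numeric",
            Name:       "idle",
        },
    }

    created, err := apih.CreateGraph(g) // POST to /graph
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(created.CID)

+// UpdateGraph updates passed graph.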
+func (a *API) UpdateGraph(cfg *Graph) (*Graph, error) {
+    if cfg == nil {
+        return nil, fmt.Errorf("Invalid graph config [nil]")
+    }
+
+    graphCID := string(cfg.CID)
+
+    matched, err := regexp.MatchString(config.GraphCIDRegex, graphCID)
+    if err != nil {
+        return nil, err
+    }
+    if !matched {
+        return nil, fmt.Errorf("Invalid graph CID [%s]", graphCID)
+    }
+
+    jsonCfg, err := json.Marshal(cfg)
+    if err != nil {
+        return nil, err
+    }
+
+    if a.Debug {
+        a.Log.Printf("[DEBUG] update graph, sending JSON: %s", string(jsonCfg))
+    }
+
+    result, err := a.Put(graphCID, jsonCfg)
+    if err != nil {
+        return nil, err
+    }
+
+    graph := &Graph{}
+    if err := json.Unmarshal(result, graph); err != nil {
+        return nil, err
+    }
+
+    return graph, nil
+}
+
+// CreateGraph creates a new graph.
+func (a *API) CreateGraph(cfg *Graph) (*Graph, error) {
+    if cfg == nil {
+        return nil, fmt.Errorf("Invalid graph config [nil]")
+    }
+
+    jsonCfg, err := json.Marshal(cfg)
+    if err != nil {
+        return nil, err
+    }
+
+    if a.Debug {
+        a.Log.Printf("[DEBUG] create graph, sending JSON: %s", string(jsonCfg))
+    }
+
+    result, err := a.Post(config.GraphPrefix, jsonCfg)
+    if err != nil {
+        return nil, err
+    }
+
+    graph := &Graph{}
+    if err := json.Unmarshal(result, graph); err != nil {
+        return nil, err
+    }
+
+    return graph, nil
+}
+
+// DeleteGraph deletes passed graph.
+func (a *API) DeleteGraph(cfg *Graph) (bool, error) {
+    if cfg == nil {
+        return false, fmt.Errorf("Invalid graph config [nil]")
+    }
+    return a.DeleteGraphByCID(CIDType(&cfg.CID))
+}
+
+// DeleteGraphByCID deletes graph with passed cid.
+func (a *API) DeleteGraphByCID(cid CIDType) (bool, error) {
+    if cid == nil || *cid == "" {
+        return false, fmt.Errorf("Invalid graph CID [none]")
+    }
+
+    graphCID := string(*cid)
+
+    matched, err := regexp.MatchString(config.GraphCIDRegex, graphCID)
+    if err != nil {
+        return false, err
+    }
+    if !matched {
+        return false, fmt.Errorf("Invalid graph CID [%s]", graphCID)
+    }
+
+    _, err = a.Delete(graphCID)
+    if err != nil {
+        return false, err
+    }
+
+    return true, nil
+}
+
+// SearchGraphs returns graphs matching the specified search query
+// and/or filter. If nil is passed for both parameters all graphs
+// will be returned.
+func (a *API) SearchGraphs(searchCriteria *SearchQueryType, filterCriteria *SearchFilterType) (*[]Graph, error) {
+    q := url.Values{}
+
+    if searchCriteria != nil && *searchCriteria != "" {
+        q.Set("search", string(*searchCriteria))
+    }
+
+    if filterCriteria != nil && len(*filterCriteria) > 0 {
+        for filter, criteria := range *filterCriteria {
+            for _, val := range criteria {
+                q.Add(filter, val)
+            }
+        }
+    }
+
+    if q.Encode() == "" {
+        return a.FetchGraphs()
+    }
+
+    reqURL := url.URL{
+        Path:     config.GraphPrefix,
+        RawQuery: q.Encode(),
+    }
+
+    result, err := a.Get(reqURL.String())
+    if err != nil {
+        return nil, fmt.Errorf("[ERROR] API call error %+v", err)
+    }
+
+    var graphs []Graph
+    if err := json.Unmarshal(result, &graphs); err != nil {
+        return nil, err
+    }
+
+    return &graphs, nil
+}
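Aside: how the search and filter arguments shared by every Search verb in this package are meant to be used, sketched against SearchGraphs above. The query syntax and filter key are assumptions drawn from the Circonus search documentation, not from this diff; passing nil for both arguments is the documented fall-through to the plural Fetch verb.

    search := api.SearchQueryType(`(title="cpu idle")`) // hypothetical query
    filter := api.SearchFilterType{
        "f_tags_has": {"datacenter:east"}, // hypothetical filter key/value
    }
    graphs, err := apih.SearchGraphs(&search, &filter)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("matched %d graphs\n", len(*graphs))

    // nil/nil falls back to FetchGraphs, i.e. all graphs for the token.
    all, err := apih.SearchGraphs(nil, nil)

diff --git a/vendor/github.com/circonus-labs/circonus-gometrics/api/maintenance.go b/vendor/github.com/circonus-labs/circonus-gometrics/api/maintenance.go
new file mode 100644
index 0000000000..0e5e047297
--- /dev/null
+++ b/vendor/github.com/circonus-labs/circonus-gometrics/api/maintenance.go
@@ -0,0 +1,220 @@
+// Copyright 2016 Circonus, Inc. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.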
+ +// Maintenance window API support - Fetch, Create, Update, Delete, and Search +// See: https://login.circonus.com/resources/api/calls/maintenance + +package api + +import ( + "encoding/json" + "fmt" + "net/url" + "regexp" + + "github.com/circonus-labs/circonus-gometrics/api/config" +) + +// Maintenance defines a maintenance window. See https://login.circonus.com/resources/api/calls/maintenance for more information. +type Maintenance struct { + CID string `json:"_cid,omitempty"` // string + Item string `json:"item,omitempty"` // string + Notes string `json:"notes,omitempty"` // string + Severities interface{} `json:"severities,omitempty"` // []string NOTE can be set with CSV string or []string + Start uint `json:"start,omitempty"` // uint + Stop uint `json:"stop,omitempty"` // uint + Tags []string `json:"tags,omitempty"` // [] len >= 0 + Type string `json:"type,omitempty"` // string +} + +// NewMaintenanceWindow returns a new Maintenance window (with defaults, if applicable) +func NewMaintenanceWindow() *Maintenance { + return &Maintenance{} +} + +// FetchMaintenanceWindow retrieves maintenance [window] with passed cid. +func (a *API) FetchMaintenanceWindow(cid CIDType) (*Maintenance, error) { + if cid == nil || *cid == "" { + return nil, fmt.Errorf("Invalid maintenance window CID [none]") + } + + maintenanceCID := string(*cid) + + matched, err := regexp.MatchString(config.MaintenanceCIDRegex, maintenanceCID) + if err != nil { + return nil, err + } + if !matched { + return nil, fmt.Errorf("Invalid maintenance window CID [%s]", maintenanceCID) + } + + result, err := a.Get(maintenanceCID) + if err != nil { + return nil, err + } + + if a.Debug { + a.Log.Printf("[DEBUG] fetch maintenance window, received JSON: %s", string(result)) + } + + window := &Maintenance{} + if err := json.Unmarshal(result, window); err != nil { + return nil, err + } + + return window, nil +} + +// FetchMaintenanceWindows retrieves all maintenance [windows] available to API Token. +func (a *API) FetchMaintenanceWindows() (*[]Maintenance, error) { + result, err := a.Get(config.MaintenancePrefix) + if err != nil { + return nil, err + } + + var windows []Maintenance + if err := json.Unmarshal(result, &windows); err != nil { + return nil, err + } + + return &windows, nil +} + +// UpdateMaintenanceWindow updates passed maintenance [window]. +func (a *API) UpdateMaintenanceWindow(cfg *Maintenance) (*Maintenance, error) { + if cfg == nil { + return nil, fmt.Errorf("Invalid maintenance window config [nil]") + } + + maintenanceCID := string(cfg.CID) + + matched, err := regexp.MatchString(config.MaintenanceCIDRegex, maintenanceCID) + if err != nil { + return nil, err + } + if !matched { + return nil, fmt.Errorf("Invalid maintenance window CID [%s]", maintenanceCID) + } + + jsonCfg, err := json.Marshal(cfg) + if err != nil { + return nil, err + } + + if a.Debug { + a.Log.Printf("[DEBUG] update maintenance window, sending JSON: %s", string(jsonCfg)) + } + + result, err := a.Put(maintenanceCID, jsonCfg) + if err != nil { + return nil, err + } + + window := &Maintenance{} + if err := json.Unmarshal(result, window); err != nil { + return nil, err + } + + return window, nil +} + +// CreateMaintenanceWindow creates a new maintenance [window]. 
+func (a *API) CreateMaintenanceWindow(cfg *Maintenance) (*Maintenance, error) { + if cfg == nil { + return nil, fmt.Errorf("Invalid maintenance window config [nil]") + } + + jsonCfg, err := json.Marshal(cfg) + if err != nil { + return nil, err + } + + if a.Debug { + a.Log.Printf("[DEBUG] create maintenance window, sending JSON: %s", string(jsonCfg)) + } + + result, err := a.Post(config.MaintenancePrefix, jsonCfg) + if err != nil { + return nil, err + } + + window := &Maintenance{} + if err := json.Unmarshal(result, window); err != nil { + return nil, err + } + + return window, nil +} + +// DeleteMaintenanceWindow deletes passed maintenance [window]. +func (a *API) DeleteMaintenanceWindow(cfg *Maintenance) (bool, error) { + if cfg == nil { + return false, fmt.Errorf("Invalid maintenance window config [nil]") + } + return a.DeleteMaintenanceWindowByCID(CIDType(&cfg.CID)) +} + +// DeleteMaintenanceWindowByCID deletes maintenance [window] with passed cid. +func (a *API) DeleteMaintenanceWindowByCID(cid CIDType) (bool, error) { + if cid == nil || *cid == "" { + return false, fmt.Errorf("Invalid maintenance window CID [none]") + } + + maintenanceCID := string(*cid) + + matched, err := regexp.MatchString(config.MaintenanceCIDRegex, maintenanceCID) + if err != nil { + return false, err + } + if !matched { + return false, fmt.Errorf("Invalid maintenance window CID [%s]", maintenanceCID) + } + + _, err = a.Delete(maintenanceCID) + if err != nil { + return false, err + } + + return true, nil +} + +// SearchMaintenanceWindows returns maintenance [windows] matching +// the specified search query and/or filter. If nil is passed for +// both parameters all maintenance [windows] will be returned. +func (a *API) SearchMaintenanceWindows(searchCriteria *SearchQueryType, filterCriteria *SearchFilterType) (*[]Maintenance, error) { + q := url.Values{} + + if searchCriteria != nil && *searchCriteria != "" { + q.Set("search", string(*searchCriteria)) + } + + if filterCriteria != nil && len(*filterCriteria) > 0 { + for filter, criteria := range *filterCriteria { + for _, val := range criteria { + q.Add(filter, val) + } + } + } + + if q.Encode() == "" { + return a.FetchMaintenanceWindows() + } + + reqURL := url.URL{ + Path: config.MaintenancePrefix, + RawQuery: q.Encode(), + } + + result, err := a.Get(reqURL.String()) + if err != nil { + return nil, fmt.Errorf("[ERROR] API call error %+v", err) + } + + var windows []Maintenance + if err := json.Unmarshal(result, &windows); err != nil { + return nil, err + } + + return &windows, nil +} diff --git a/vendor/github.com/circonus-labs/circonus-gometrics/api/metric.go b/vendor/github.com/circonus-labs/circonus-gometrics/api/metric.go new file mode 100644 index 0000000000..3608b06ff9 --- /dev/null +++ b/vendor/github.com/circonus-labs/circonus-gometrics/api/metric.go @@ -0,0 +1,162 @@ +// Copyright 2016 Circonus, Inc. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Metric API support - Fetch, Create*, Update, Delete*, and Search +// See: https://login.circonus.com/resources/api/calls/metric +// * : create and delete are handled via check_bundle or check_bundle_metrics + +package api + +import ( + "encoding/json" + "fmt" + "net/url" + "regexp" + + "github.com/circonus-labs/circonus-gometrics/api/config" +) + +// Metric defines a metric. See https://login.circonus.com/resources/api/calls/metric for more information. 
+type Metric struct { + Active bool `json:"_active,omitempty"` // boolean + CheckActive bool `json:"_check_active,omitempty"` // boolean + CheckBundleCID string `json:"_check_bundle,omitempty"` // string + CheckCID string `json:"_check,omitempty"` // string + CheckTags []string `json:"_check_tags,omitempty"` // [] len >= 0 + CheckUUID string `json:"_check_uuid,omitempty"` // string + CID string `json:"_cid,omitempty"` // string + Histogram string `json:"_histogram,omitempty"` // string + Link *string `json:"link,omitempty"` // string or null + MetricName string `json:"_metric_name,omitempty"` // string + MetricType string `json:"_metric_type,omitempty"` // string + Notes *string `json:"notes,omitempty"` // string or null + Tags []string `json:"tags,omitempty"` // [] len >= 0 + Units *string `json:"units,omitempty"` // string or null +} + +// FetchMetric retrieves metric with passed cid. +func (a *API) FetchMetric(cid CIDType) (*Metric, error) { + if cid == nil || *cid == "" { + return nil, fmt.Errorf("Invalid metric CID [none]") + } + + metricCID := string(*cid) + + matched, err := regexp.MatchString(config.MetricCIDRegex, metricCID) + if err != nil { + return nil, err + } + if !matched { + return nil, fmt.Errorf("Invalid metric CID [%s]", metricCID) + } + + result, err := a.Get(metricCID) + if err != nil { + return nil, err + } + + if a.Debug { + a.Log.Printf("[DEBUG] fetch metric, received JSON: %s", string(result)) + } + + metric := &Metric{} + if err := json.Unmarshal(result, metric); err != nil { + return nil, err + } + + return metric, nil +} + +// FetchMetrics retrieves all metrics available to API Token. +func (a *API) FetchMetrics() (*[]Metric, error) { + result, err := a.Get(config.MetricPrefix) + if err != nil { + return nil, err + } + + var metrics []Metric + if err := json.Unmarshal(result, &metrics); err != nil { + return nil, err + } + + return &metrics, nil +} + +// UpdateMetric updates passed metric. +func (a *API) UpdateMetric(cfg *Metric) (*Metric, error) { + if cfg == nil { + return nil, fmt.Errorf("Invalid metric config [nil]") + } + + metricCID := string(cfg.CID) + + matched, err := regexp.MatchString(config.MetricCIDRegex, metricCID) + if err != nil { + return nil, err + } + if !matched { + return nil, fmt.Errorf("Invalid metric CID [%s]", metricCID) + } + + jsonCfg, err := json.Marshal(cfg) + if err != nil { + return nil, err + } + + if a.Debug { + a.Log.Printf("[DEBUG] update metric, sending JSON: %s", string(jsonCfg)) + } + + result, err := a.Put(metricCID, jsonCfg) + if err != nil { + return nil, err + } + + metric := &Metric{} + if err := json.Unmarshal(result, metric); err != nil { + return nil, err + } + + return metric, nil +} + +// SearchMetrics returns metrics matching the specified search query +// and/or filter. If nil is passed for both parameters all metrics +// will be returned. 
+func (a *API) SearchMetrics(searchCriteria *SearchQueryType, filterCriteria *SearchFilterType) (*[]Metric, error) { + q := url.Values{} + + if searchCriteria != nil && *searchCriteria != "" { + q.Set("search", string(*searchCriteria)) + } + + if filterCriteria != nil && len(*filterCriteria) > 0 { + for filter, criteria := range *filterCriteria { + for _, val := range criteria { + q.Add(filter, val) + } + } + } + + if q.Encode() == "" { + return a.FetchMetrics() + } + + reqURL := url.URL{ + Path: config.MetricPrefix, + RawQuery: q.Encode(), + } + + result, err := a.Get(reqURL.String()) + if err != nil { + return nil, fmt.Errorf("[ERROR] API call error %+v", err) + } + + var metrics []Metric + if err := json.Unmarshal(result, &metrics); err != nil { + return nil, err + } + + return &metrics, nil +} diff --git a/vendor/github.com/circonus-labs/circonus-gometrics/api/metric_cluster.go b/vendor/github.com/circonus-labs/circonus-gometrics/api/metric_cluster.go new file mode 100644 index 0000000000..d29c5a674f --- /dev/null +++ b/vendor/github.com/circonus-labs/circonus-gometrics/api/metric_cluster.go @@ -0,0 +1,261 @@ +// Copyright 2016 Circonus, Inc. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Metric Cluster API support - Fetch, Create, Update, Delete, and Search +// See: https://login.circonus.com/resources/api/calls/metric_cluster + +package api + +import ( + "encoding/json" + "fmt" + "net/url" + "regexp" + + "github.com/circonus-labs/circonus-gometrics/api/config" +) + +// MetricQuery object +type MetricQuery struct { + Query string `json:"query"` + Type string `json:"type"` +} + +// MetricCluster defines a metric cluster. See https://login.circonus.com/resources/api/calls/metric_cluster for more information. +type MetricCluster struct { + CID string `json:"_cid,omitempty"` // string + Description string `json:"description"` // string + MatchingMetrics []string `json:"_matching_metrics,omitempty"` // [] len >= 1 (result info only, if query has extras - cannot be set) + MatchingUUIDMetrics map[string][]string `json:"_matching_uuid_metrics,omitempty"` // [] len >= 1 (result info only, if query has extras - cannot be set) + Name string `json:"name"` // string + Queries []MetricQuery `json:"queries"` // [] len >= 1 + Tags []string `json:"tags"` // [] len >= 0 +} + +// NewMetricCluster returns a new MetricCluster (with defaults, if applicable) +func NewMetricCluster() *MetricCluster { + return &MetricCluster{} +} + +// FetchMetricCluster retrieves metric cluster with passed cid. 
+func (a *API) FetchMetricCluster(cid CIDType, extras string) (*MetricCluster, error) { + if cid == nil || *cid == "" { + return nil, fmt.Errorf("Invalid metric cluster CID [none]") + } + + clusterCID := string(*cid) + + matched, err := regexp.MatchString(config.MetricClusterCIDRegex, clusterCID) + if err != nil { + return nil, err + } + if !matched { + return nil, fmt.Errorf("Invalid metric cluster CID [%s]", clusterCID) + } + + reqURL := url.URL{ + Path: clusterCID, + } + + extra := "" + switch extras { + case "metrics": + extra = "_matching_metrics" + case "uuids": + extra = "_matching_uuid_metrics" + } + + if extra != "" { + q := url.Values{} + q.Set("extra", extra) + reqURL.RawQuery = q.Encode() + } + + result, err := a.Get(reqURL.String()) + if err != nil { + return nil, err + } + + if a.Debug { + a.Log.Printf("[DEBUG] fetch metric cluster, received JSON: %s", string(result)) + } + + cluster := &MetricCluster{} + if err := json.Unmarshal(result, cluster); err != nil { + return nil, err + } + + return cluster, nil +} + +// FetchMetricClusters retrieves all metric clusters available to API Token. +func (a *API) FetchMetricClusters(extras string) (*[]MetricCluster, error) { + reqURL := url.URL{ + Path: config.MetricClusterPrefix, + } + + extra := "" + switch extras { + case "metrics": + extra = "_matching_metrics" + case "uuids": + extra = "_matching_uuid_metrics" + } + + if extra != "" { + q := url.Values{} + q.Set("extra", extra) + reqURL.RawQuery = q.Encode() + } + + result, err := a.Get(reqURL.String()) + if err != nil { + return nil, err + } + + var clusters []MetricCluster + if err := json.Unmarshal(result, &clusters); err != nil { + return nil, err + } + + return &clusters, nil +} + +// UpdateMetricCluster updates passed metric cluster. +func (a *API) UpdateMetricCluster(cfg *MetricCluster) (*MetricCluster, error) { + if cfg == nil { + return nil, fmt.Errorf("Invalid metric cluster config [nil]") + } + + clusterCID := string(cfg.CID) + + matched, err := regexp.MatchString(config.MetricClusterCIDRegex, clusterCID) + if err != nil { + return nil, err + } + if !matched { + return nil, fmt.Errorf("Invalid metric cluster CID [%s]", clusterCID) + } + + jsonCfg, err := json.Marshal(cfg) + if err != nil { + return nil, err + } + + if a.Debug { + a.Log.Printf("[DEBUG] update metric cluster, sending JSON: %s", string(jsonCfg)) + } + + result, err := a.Put(clusterCID, jsonCfg) + if err != nil { + return nil, err + } + + cluster := &MetricCluster{} + if err := json.Unmarshal(result, cluster); err != nil { + return nil, err + } + + return cluster, nil +} + +// CreateMetricCluster creates a new metric cluster. +func (a *API) CreateMetricCluster(cfg *MetricCluster) (*MetricCluster, error) { + if cfg == nil { + return nil, fmt.Errorf("Invalid metric cluster config [nil]") + } + + jsonCfg, err := json.Marshal(cfg) + if err != nil { + return nil, err + } + + if a.Debug { + a.Log.Printf("[DEBUG] create metric cluster, sending JSON: %s", string(jsonCfg)) + } + + result, err := a.Post(config.MetricClusterPrefix, jsonCfg) + if err != nil { + return nil, err + } + + cluster := &MetricCluster{} + if err := json.Unmarshal(result, cluster); err != nil { + return nil, err + } + + return cluster, nil +} + +// DeleteMetricCluster deletes passed metric cluster. 
+func (a *API) DeleteMetricCluster(cfg *MetricCluster) (bool, error) {
+    if cfg == nil {
+        return false, fmt.Errorf("Invalid metric cluster config [nil]")
+    }
+    return a.DeleteMetricClusterByCID(CIDType(&cfg.CID))
+}
+
+// DeleteMetricClusterByCID deletes metric cluster with passed cid.
+func (a *API) DeleteMetricClusterByCID(cid CIDType) (bool, error) {
+    if cid == nil || *cid == "" {
+        return false, fmt.Errorf("Invalid metric cluster CID [none]")
+    }
+
+    clusterCID := string(*cid)
+
+    matched, err := regexp.MatchString(config.MetricClusterCIDRegex, clusterCID)
+    if err != nil {
+        return false, err
+    }
+    if !matched {
+        return false, fmt.Errorf("Invalid metric cluster CID [%s]", clusterCID)
+    }
+
+    _, err = a.Delete(clusterCID)
+    if err != nil {
+        return false, err
+    }
+
+    return true, nil
+}
+
+// SearchMetricClusters returns metric clusters matching the specified
+// search query and/or filter. If nil is passed for both parameters
+// all metric clusters will be returned.
+func (a *API) SearchMetricClusters(searchCriteria *SearchQueryType, filterCriteria *SearchFilterType) (*[]MetricCluster, error) {
+    q := url.Values{}
+
+    if searchCriteria != nil && *searchCriteria != "" {
+        q.Set("search", string(*searchCriteria))
+    }
+
+    if filterCriteria != nil && len(*filterCriteria) > 0 {
+        for filter, criteria := range *filterCriteria {
+            for _, val := range criteria {
+                q.Add(filter, val)
+            }
+        }
+    }
+
+    if q.Encode() == "" {
+        return a.FetchMetricClusters("")
+    }
+
+    reqURL := url.URL{
+        Path:     config.MetricClusterPrefix,
+        RawQuery: q.Encode(),
+    }
+
+    result, err := a.Get(reqURL.String())
+    if err != nil {
+        return nil, fmt.Errorf("[ERROR] API call error %+v", err)
+    }
+
+    var clusters []MetricCluster
+    if err := json.Unmarshal(result, &clusters); err != nil {
+        return nil, err
+    }
+
+    return &clusters, nil
+}
diff --git a/vendor/github.com/circonus-labs/circonus-gometrics/api/outlier_report.go b/vendor/github.com/circonus-labs/circonus-gometrics/api/outlier_report.go
new file mode 100644
index 0000000000..bc1a4d2b3b
--- /dev/null
+++ b/vendor/github.com/circonus-labs/circonus-gometrics/api/outlier_report.go
@@ -0,0 +1,221 @@
+// Copyright 2016 Circonus, Inc. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// OutlierReport API support - Fetch, Create, Update, Delete, and Search
+// See: https://login.circonus.com/resources/api/calls/report
+
+package api
+
+import (
+    "encoding/json"
+    "fmt"
+    "net/url"
+    "regexp"
+
+    "github.com/circonus-labs/circonus-gometrics/api/config"
+)
+
+// OutlierReport defines an outlier report. See https://login.circonus.com/resources/api/calls/report for more information.
+type OutlierReport struct {
+    CID              string   `json:"_cid,omitempty"`              // string
+    Config           string   `json:"config,omitempty"`            // string
+    Created          uint     `json:"_created,omitempty"`          // uint
+    CreatedBy        string   `json:"_created_by,omitempty"`       // string
+    LastModified     uint     `json:"_last_modified,omitempty"`    // uint
+    LastModifiedBy   string   `json:"_last_modified_by,omitempty"` // string
+    MetricClusterCID string   `json:"metric_cluster,omitempty"`    // string
+    Tags             []string `json:"tags,omitempty"`              // [] len >= 0
+    Title            string   `json:"title,omitempty"`             // string
+}
+
+// NewOutlierReport returns a new OutlierReport (with defaults, if applicable)
+func NewOutlierReport() *OutlierReport {
+    return &OutlierReport{}
+}
+
+// FetchOutlierReport retrieves outlier report with passed cid.
+func (a *API) FetchOutlierReport(cid CIDType) (*OutlierReport, error) { + if cid == nil || *cid == "" { + return nil, fmt.Errorf("Invalid outlier report CID [none]") + } + + reportCID := string(*cid) + + matched, err := regexp.MatchString(config.OutlierReportCIDRegex, reportCID) + if err != nil { + return nil, err + } + if !matched { + return nil, fmt.Errorf("Invalid outlier report CID [%s]", reportCID) + } + + result, err := a.Get(reportCID) + if err != nil { + return nil, err + } + + if a.Debug { + a.Log.Printf("[DEBUG] fetch outlier report, received JSON: %s", string(result)) + } + + report := &OutlierReport{} + if err := json.Unmarshal(result, report); err != nil { + return nil, err + } + + return report, nil +} + +// FetchOutlierReports retrieves all outlier reports available to API Token. +func (a *API) FetchOutlierReports() (*[]OutlierReport, error) { + result, err := a.Get(config.OutlierReportPrefix) + if err != nil { + return nil, err + } + + var reports []OutlierReport + if err := json.Unmarshal(result, &reports); err != nil { + return nil, err + } + + return &reports, nil +} + +// UpdateOutlierReport updates passed outlier report. +func (a *API) UpdateOutlierReport(cfg *OutlierReport) (*OutlierReport, error) { + if cfg == nil { + return nil, fmt.Errorf("Invalid outlier report config [nil]") + } + + reportCID := string(cfg.CID) + + matched, err := regexp.MatchString(config.OutlierReportCIDRegex, reportCID) + if err != nil { + return nil, err + } + if !matched { + return nil, fmt.Errorf("Invalid outlier report CID [%s]", reportCID) + } + + jsonCfg, err := json.Marshal(cfg) + if err != nil { + return nil, err + } + + if a.Debug { + a.Log.Printf("[DEBUG] update outlier report, sending JSON: %s", string(jsonCfg)) + } + + result, err := a.Put(reportCID, jsonCfg) + if err != nil { + return nil, err + } + + report := &OutlierReport{} + if err := json.Unmarshal(result, report); err != nil { + return nil, err + } + + return report, nil +} + +// CreateOutlierReport creates a new outlier report. +func (a *API) CreateOutlierReport(cfg *OutlierReport) (*OutlierReport, error) { + if cfg == nil { + return nil, fmt.Errorf("Invalid outlier report config [nil]") + } + + jsonCfg, err := json.Marshal(cfg) + if err != nil { + return nil, err + } + + if a.Debug { + a.Log.Printf("[DEBUG] create outlier report, sending JSON: %s", string(jsonCfg)) + } + + result, err := a.Post(config.OutlierReportPrefix, jsonCfg) + if err != nil { + return nil, err + } + + report := &OutlierReport{} + if err := json.Unmarshal(result, report); err != nil { + return nil, err + } + + return report, nil +} + +// DeleteOutlierReport deletes passed outlier report. +func (a *API) DeleteOutlierReport(cfg *OutlierReport) (bool, error) { + if cfg == nil { + return false, fmt.Errorf("Invalid outlier report config [nil]") + } + return a.DeleteOutlierReportByCID(CIDType(&cfg.CID)) +} + +// DeleteOutlierReportByCID deletes outlier report with passed cid. 
+func (a *API) DeleteOutlierReportByCID(cid CIDType) (bool, error) {
+    if cid == nil || *cid == "" {
+        return false, fmt.Errorf("Invalid outlier report CID [none]")
+    }
+
+    reportCID := string(*cid)
+
+    matched, err := regexp.MatchString(config.OutlierReportCIDRegex, reportCID)
+    if err != nil {
+        return false, err
+    }
+    if !matched {
+        return false, fmt.Errorf("Invalid outlier report CID [%s]", reportCID)
+    }
+
+    _, err = a.Delete(reportCID)
+    if err != nil {
+        return false, err
+    }
+
+    return true, nil
+}
+
+// SearchOutlierReports returns outlier reports matching the
+// specified search query and/or filter. If nil is passed for
+// both parameters all outlier reports will be returned.
+func (a *API) SearchOutlierReports(searchCriteria *SearchQueryType, filterCriteria *SearchFilterType) (*[]OutlierReport, error) {
+    q := url.Values{}
+
+    if searchCriteria != nil && *searchCriteria != "" {
+        q.Set("search", string(*searchCriteria))
+    }
+
+    if filterCriteria != nil && len(*filterCriteria) > 0 {
+        for filter, criteria := range *filterCriteria {
+            for _, val := range criteria {
+                q.Add(filter, val)
+            }
+        }
+    }
+
+    if q.Encode() == "" {
+        return a.FetchOutlierReports()
+    }
+
+    reqURL := url.URL{
+        Path:     config.OutlierReportPrefix,
+        RawQuery: q.Encode(),
+    }
+
+    result, err := a.Get(reqURL.String())
+    if err != nil {
+        return nil, fmt.Errorf("[ERROR] API call error %+v", err)
+    }
+
+    var reports []OutlierReport
+    if err := json.Unmarshal(result, &reports); err != nil {
+        return nil, err
+    }
+
+    return &reports, nil
+}
diff --git a/vendor/github.com/circonus-labs/circonus-gometrics/api/provision_broker.go b/vendor/github.com/circonus-labs/circonus-gometrics/api/provision_broker.go
new file mode 100644
index 0000000000..5b432a2363
--- /dev/null
+++ b/vendor/github.com/circonus-labs/circonus-gometrics/api/provision_broker.go
@@ -0,0 +1,151 @@
+// Copyright 2016 Circonus, Inc. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// ProvisionBroker API support - Fetch, Create, and Update
+// See: https://login.circonus.com/resources/api/calls/provision_broker
+// Note that the provision_broker endpoint does not return the standard cid
+// format of '/object/item' (e.g. /provision_broker/abc-123); it just returns 'item'.
+
+package api
+
+import (
+    "encoding/json"
+    "fmt"
+    "regexp"
+
+    "github.com/circonus-labs/circonus-gometrics/api/config"
+)
+
+// BrokerStratcon defines stratcons for broker
+type BrokerStratcon struct {
+    CN   string `json:"cn,omitempty"`   // string
+    Host string `json:"host,omitempty"` // string
+    Port string `json:"port,omitempty"` // string
+}
+
+// ProvisionBroker defines a provision broker [request]. See https://login.circonus.com/resources/api/calls/provision_broker for more details.
+type ProvisionBroker struct {
+    Cert                    string           `json:"_cert,omitempty"`                     // string
+    CID                     string           `json:"_cid,omitempty"`                      // string
+    CSR                     string           `json:"_csr,omitempty"`                      // string
+    ExternalHost            string           `json:"external_host,omitempty"`             // string
+    ExternalPort            string           `json:"external_port,omitempty"`             // string
+    IPAddress               string           `json:"ipaddress,omitempty"`                 // string
+    Latitude                string           `json:"latitude,omitempty"`                  // string
+    Longitude               string           `json:"longitude,omitempty"`                 // string
+    Name                    string           `json:"noit_name,omitempty"`                 // string
+    Port                    string           `json:"port,omitempty"`                      // string
+    PreferReverseConnection bool             `json:"prefer_reverse_connection,omitempty"` // boolean
+    Rebuild                 bool             `json:"rebuild,omitempty"`                   // boolean
+    Stratcons               []BrokerStratcon `json:"_stratcons,omitempty"`                // [] len >= 1
+    Tags                    []string         `json:"tags,omitempty"`                      // [] len >= 0
+}
+
+// NewProvisionBroker returns a new ProvisionBroker (with defaults, if applicable)
+func NewProvisionBroker() *ProvisionBroker {
+    return &ProvisionBroker{}
+}
+
+// FetchProvisionBroker retrieves provision broker [request] with passed cid.
+func (a *API) FetchProvisionBroker(cid CIDType) (*ProvisionBroker, error) {
+    if cid == nil || *cid == "" {
+        return nil, fmt.Errorf("Invalid provision broker request CID [none]")
+    }
+
+    brokerCID := string(*cid)
+
+    matched, err := regexp.MatchString(config.ProvisionBrokerCIDRegex, brokerCID)
+    if err != nil {
+        return nil, err
+    }
+    if !matched {
+        return nil, fmt.Errorf("Invalid provision broker request CID [%s]", brokerCID)
+    }
+
+    result, err := a.Get(brokerCID)
+    if err != nil {
+        return nil, err
+    }
+
+    if a.Debug {
+        a.Log.Printf("[DEBUG] fetch broker provision request, received JSON: %s", string(result))
+    }
+
+    broker := &ProvisionBroker{}
+    if err := json.Unmarshal(result, broker); err != nil {
+        return nil, err
+    }
+
+    return broker, nil
+}
+
+// UpdateProvisionBroker updates a broker definition [request].
+func (a *API) UpdateProvisionBroker(cid CIDType, cfg *ProvisionBroker) (*ProvisionBroker, error) {
+    if cfg == nil {
+        return nil, fmt.Errorf("Invalid provision broker request config [nil]")
+    }
+
+    if cid == nil || *cid == "" {
+        return nil, fmt.Errorf("Invalid provision broker request CID [none]")
+    }
+
+    brokerCID := string(*cid)
+
+    matched, err := regexp.MatchString(config.ProvisionBrokerCIDRegex, brokerCID)
+    if err != nil {
+        return nil, err
+    }
+    if !matched {
+        return nil, fmt.Errorf("Invalid provision broker request CID [%s]", brokerCID)
+    }
+
+    jsonCfg, err := json.Marshal(cfg)
+    if err != nil {
+        return nil, err
+    }
+
+    if a.Debug {
+        a.Log.Printf("[DEBUG] update broker provision request, sending JSON: %s", string(jsonCfg))
+    }
+
+    result, err := a.Put(brokerCID, jsonCfg)
+    if err != nil {
+        return nil, err
+    }
+
+    broker := &ProvisionBroker{}
+    if err := json.Unmarshal(result, broker); err != nil {
+        return nil, err
+    }
+
+    return broker, nil
+}
+
+// CreateProvisionBroker creates a new provision broker [request].
+func (a *API) CreateProvisionBroker(cfg *ProvisionBroker) (*ProvisionBroker, error) {
+    if cfg == nil {
+        return nil, fmt.Errorf("Invalid provision broker request config [nil]")
+    }
+
+    jsonCfg, err := json.Marshal(cfg)
+    if err != nil {
+        return nil, err
+    }
+
+    if a.Debug {
+        a.Log.Printf("[DEBUG] create broker provision request, sending JSON: %s", string(jsonCfg))
+    }
+
+    result, err := a.Post(config.ProvisionBrokerPrefix, jsonCfg)
+    if err != nil {
+        return nil, err
+    }
+
+    broker := &ProvisionBroker{}
+    if err := json.Unmarshal(result, broker); err != nil {
+        return nil, err
+    }
+
+    return broker, nil
+}
diff --git a/vendor/github.com/circonus-labs/circonus-gometrics/api/rule_set.go b/vendor/github.com/circonus-labs/circonus-gometrics/api/rule_set.go
new file mode 100644
index 0000000000..3da0907f75
--- /dev/null
+++ b/vendor/github.com/circonus-labs/circonus-gometrics/api/rule_set.go
@@ -0,0 +1,234 @@
+// Copyright 2016 Circonus, Inc. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Rule Set API support - Fetch, Create, Update, Delete, and Search
+// See: https://login.circonus.com/resources/api/calls/rule_set
+
+package api
+
+import (
+    "encoding/json"
+    "fmt"
+    "net/url"
+    "regexp"
+
+    "github.com/circonus-labs/circonus-gometrics/api/config"
+)
+
+// RuleSetRule defines a ruleset rule
+type RuleSetRule struct {
+    Criteria          string      `json:"criteria"`                     // string
+    Severity          uint        `json:"severity"`                     // uint
+    Value             interface{} `json:"value"`                        // BUG doc: string, api: actual type returned switches based on Criteria
+    Wait              uint        `json:"wait"`                         // uint
+    WindowingDuration uint        `json:"windowing_duration,omitempty"` // uint
+    WindowingFunction *string     `json:"windowing_function,omitempty"` // string or null
+}
+
+// RuleSet defines a ruleset. See https://login.circonus.com/resources/api/calls/rule_set for more information.
+type RuleSet struct {
+    CheckCID      string             `json:"check"`            // string
+    CID           string             `json:"_cid,omitempty"`   // string
+    ContactGroups map[uint8][]string `json:"contact_groups"`   // [] len 5
+    Derive        *string            `json:"derive,omitempty"` // string or null
+    Link          *string            `json:"link"`             // string or null
+    MetricName    string             `json:"metric_name"`      // string
+    MetricTags    []string           `json:"metric_tags"`      // [] len >= 0
+    MetricType    string             `json:"metric_type"`      // string
+    Notes         *string            `json:"notes"`            // string or null
+    Parent        *string            `json:"parent,omitempty"` // string or null
+    Rules         []RuleSetRule      `json:"rules"`            // [] len >= 1
+    Tags          []string           `json:"tags"`             // [] len >= 0
+}
+
+// NewRuleSet returns a new RuleSet (with defaults, if applicable)
+func NewRuleSet() *RuleSet {
+    return &RuleSet{}
+}
+
+// FetchRuleSet retrieves rule set with passed cid.
+func (a *API) FetchRuleSet(cid CIDType) (*RuleSet, error) {
+    if cid == nil || *cid == "" {
+        return nil, fmt.Errorf("Invalid rule set CID [none]")
+    }
+
+    rulesetCID := string(*cid)
+
+    matched, err := regexp.MatchString(config.RuleSetCIDRegex, rulesetCID)
+    if err != nil {
+        return nil, err
+    }
+    if !matched {
+        return nil, fmt.Errorf("Invalid rule set CID [%s]", rulesetCID)
+    }
+
+    result, err := a.Get(rulesetCID)
+    if err != nil {
+        return nil, err
+    }
+
+    if a.Debug {
+        a.Log.Printf("[DEBUG] fetch rule set, received JSON: %s", string(result))
+    }
+
+    ruleset := &RuleSet{}
+    if err := json.Unmarshal(result, ruleset); err != nil {
+        return nil, err
+    }
+
+    return ruleset, nil
+}
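Aside: a sketch of assembling a rule set from the types above, not part of the vendored file. The check CID, contact group CID, and the "max value" criteria string are placeholders for illustration; ContactGroups carries one entry per severity level 1-5 per the "[] len 5" note, and the nullable pointer fields are left null for brevity.

    rs := api.NewRuleSet()
    rs.CheckCID = "/check/1234" // hypothetical check CID
    rs.MetricName = "duration"
    rs.MetricType = "numeric"
    rs.MetricTags = []string{} // no omitempty on these tags fields,
    rs.Tags = []string{}       // so marshal empty slices, not nil
    rs.ContactGroups = map[uint8][]string{
        1: {"/contact_group/999"}, // hypothetical contact group for severity 1
        2: {}, 3: {}, 4: {}, 5: {},
    }
    rs.Rules = []api.RuleSetRule{
        {
            Criteria: "max value", // hypothetical criteria string
            Severity: 1,
            Value:    "95",
            Wait:     5,
        },
    }

    created, err := apih.CreateRuleSet(rs)
    if err != nil {
        log.Fatal(err)
    }

+// FetchRuleSets retrieves all rule sets available to API Token.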
+func (a *API) FetchRuleSets() (*[]RuleSet, error) { + result, err := a.Get(config.RuleSetPrefix) + if err != nil { + return nil, err + } + + var rulesets []RuleSet + if err := json.Unmarshal(result, &rulesets); err != nil { + return nil, err + } + + return &rulesets, nil +} + +// UpdateRuleSet updates passed rule set. +func (a *API) UpdateRuleSet(cfg *RuleSet) (*RuleSet, error) { + if cfg == nil { + return nil, fmt.Errorf("Invalid rule set config [nil]") + } + + rulesetCID := string(cfg.CID) + + matched, err := regexp.MatchString(config.RuleSetCIDRegex, rulesetCID) + if err != nil { + return nil, err + } + if !matched { + return nil, fmt.Errorf("Invalid rule set CID [%s]", rulesetCID) + } + + jsonCfg, err := json.Marshal(cfg) + if err != nil { + return nil, err + } + + if a.Debug { + a.Log.Printf("[DEBUG] update rule set, sending JSON: %s", string(jsonCfg)) + } + + result, err := a.Put(rulesetCID, jsonCfg) + if err != nil { + return nil, err + } + + ruleset := &RuleSet{} + if err := json.Unmarshal(result, ruleset); err != nil { + return nil, err + } + + return ruleset, nil +} + +// CreateRuleSet creates a new rule set. +func (a *API) CreateRuleSet(cfg *RuleSet) (*RuleSet, error) { + if cfg == nil { + return nil, fmt.Errorf("Invalid rule set config [nil]") + } + + jsonCfg, err := json.Marshal(cfg) + if err != nil { + return nil, err + } + + if a.Debug { + a.Log.Printf("[DEBUG] create rule set, sending JSON: %s", string(jsonCfg)) + } + + resp, err := a.Post(config.RuleSetPrefix, jsonCfg) + if err != nil { + return nil, err + } + + ruleset := &RuleSet{} + if err := json.Unmarshal(resp, ruleset); err != nil { + return nil, err + } + + return ruleset, nil +} + +// DeleteRuleSet deletes passed rule set. +func (a *API) DeleteRuleSet(cfg *RuleSet) (bool, error) { + if cfg == nil { + return false, fmt.Errorf("Invalid rule set config [nil]") + } + return a.DeleteRuleSetByCID(CIDType(&cfg.CID)) +} + +// DeleteRuleSetByCID deletes rule set with passed cid. +func (a *API) DeleteRuleSetByCID(cid CIDType) (bool, error) { + if cid == nil || *cid == "" { + return false, fmt.Errorf("Invalid rule set CID [none]") + } + + rulesetCID := string(*cid) + + matched, err := regexp.MatchString(config.RuleSetCIDRegex, rulesetCID) + if err != nil { + return false, err + } + if !matched { + return false, fmt.Errorf("Invalid rule set CID [%s]", rulesetCID) + } + + _, err = a.Delete(rulesetCID) + if err != nil { + return false, err + } + + return true, nil +} + +// SearchRuleSets returns rule sets matching the specified search +// query and/or filter. If nil is passed for both parameters all +// rule sets will be returned. 
+func (a *API) SearchRuleSets(searchCriteria *SearchQueryType, filterCriteria *SearchFilterType) (*[]RuleSet, error) { + q := url.Values{} + + if searchCriteria != nil && *searchCriteria != "" { + q.Set("search", string(*searchCriteria)) + } + + if filterCriteria != nil && len(*filterCriteria) > 0 { + for filter, criteria := range *filterCriteria { + for _, val := range criteria { + q.Add(filter, val) + } + } + } + + if q.Encode() == "" { + return a.FetchRuleSets() + } + + reqURL := url.URL{ + Path: config.RuleSetPrefix, + RawQuery: q.Encode(), + } + + result, err := a.Get(reqURL.String()) + if err != nil { + return nil, fmt.Errorf("[ERROR] API call error %+v", err) + } + + var rulesets []RuleSet + if err := json.Unmarshal(result, &rulesets); err != nil { + return nil, err + } + + return &rulesets, nil +} diff --git a/vendor/github.com/circonus-labs/circonus-gometrics/api/rule_set_group.go b/vendor/github.com/circonus-labs/circonus-gometrics/api/rule_set_group.go new file mode 100644 index 0000000000..a157430617 --- /dev/null +++ b/vendor/github.com/circonus-labs/circonus-gometrics/api/rule_set_group.go @@ -0,0 +1,231 @@ +// Copyright 2016 Circonus, Inc. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// RuleSetGroup API support - Fetch, Create, Update, Delete, and Search +// See: https://login.circonus.com/resources/api/calls/rule_set_group + +package api + +import ( + "encoding/json" + "fmt" + "net/url" + "regexp" + + "github.com/circonus-labs/circonus-gometrics/api/config" +) + +// RuleSetGroupFormula defines a formula for raising alerts +type RuleSetGroupFormula struct { + Expression interface{} `json:"expression"` // string or uint BUG doc: string, api: string or numeric + RaiseSeverity uint `json:"raise_severity"` // uint + Wait uint `json:"wait"` // uint +} + +// RuleSetGroupCondition defines conditions for raising alerts +type RuleSetGroupCondition struct { + MatchingSeverities []string `json:"matching_serverities"` // [] len >= 1 + RuleSetCID string `json:"rule_set"` // string +} + +// RuleSetGroup defines a ruleset group. See https://login.circonus.com/resources/api/calls/rule_set_group for more information. +type RuleSetGroup struct { + CID string `json:"_cid,omitempty"` // string + ContactGroups map[uint8][]string `json:"contact_groups"` // [] len == 5 + Formulas []RuleSetGroupFormula `json:"formulas"` // [] len >= 0 + Name string `json:"name"` // string + RuleSetConditions []RuleSetGroupCondition `json:"rule_set_conditions"` // [] len >= 1 + Tags []string `json:"tags"` // [] len >= 0 +} + +// NewRuleSetGroup returns a new RuleSetGroup (with defaults, if applicable) +func NewRuleSetGroup() *RuleSetGroup { + return &RuleSetGroup{} +} + +// FetchRuleSetGroup retrieves rule set group with passed cid. 
+func (a *API) FetchRuleSetGroup(cid CIDType) (*RuleSetGroup, error) {
+    if cid == nil || *cid == "" {
+        return nil, fmt.Errorf("Invalid rule set group CID [none]")
+    }
+
+    groupCID := string(*cid)
+
+    matched, err := regexp.MatchString(config.RuleSetGroupCIDRegex, groupCID)
+    if err != nil {
+        return nil, err
+    }
+    if !matched {
+        return nil, fmt.Errorf("Invalid rule set group CID [%s]", groupCID)
+    }
+
+    result, err := a.Get(groupCID)
+    if err != nil {
+        return nil, err
+    }
+
+    if a.Debug {
+        a.Log.Printf("[DEBUG] fetch rule set group, received JSON: %s", string(result))
+    }
+
+    rulesetGroup := &RuleSetGroup{}
+    if err := json.Unmarshal(result, rulesetGroup); err != nil {
+        return nil, err
+    }
+
+    return rulesetGroup, nil
+}
+
+// FetchRuleSetGroups retrieves all rule set groups available to API Token.
+func (a *API) FetchRuleSetGroups() (*[]RuleSetGroup, error) {
+    result, err := a.Get(config.RuleSetGroupPrefix)
+    if err != nil {
+        return nil, err
+    }
+
+    var rulesetGroups []RuleSetGroup
+    if err := json.Unmarshal(result, &rulesetGroups); err != nil {
+        return nil, err
+    }
+
+    return &rulesetGroups, nil
+}
+
+// UpdateRuleSetGroup updates passed rule set group.
+func (a *API) UpdateRuleSetGroup(cfg *RuleSetGroup) (*RuleSetGroup, error) {
+    if cfg == nil {
+        return nil, fmt.Errorf("Invalid rule set group config [nil]")
+    }
+
+    groupCID := string(cfg.CID)
+
+    matched, err := regexp.MatchString(config.RuleSetGroupCIDRegex, groupCID)
+    if err != nil {
+        return nil, err
+    }
+    if !matched {
+        return nil, fmt.Errorf("Invalid rule set group CID [%s]", groupCID)
+    }
+
+    jsonCfg, err := json.Marshal(cfg)
+    if err != nil {
+        return nil, err
+    }
+
+    if a.Debug {
+        a.Log.Printf("[DEBUG] update rule set group, sending JSON: %s", string(jsonCfg))
+    }
+
+    result, err := a.Put(groupCID, jsonCfg)
+    if err != nil {
+        return nil, err
+    }
+
+    group := &RuleSetGroup{}
+    if err := json.Unmarshal(result, group); err != nil {
+        return nil, err
+    }
+
+    return group, nil
+}
+
+// CreateRuleSetGroup creates a new rule set group.
+func (a *API) CreateRuleSetGroup(cfg *RuleSetGroup) (*RuleSetGroup, error) {
+    if cfg == nil {
+        return nil, fmt.Errorf("Invalid rule set group config [nil]")
+    }
+
+    jsonCfg, err := json.Marshal(cfg)
+    if err != nil {
+        return nil, err
+    }
+
+    if a.Debug {
+        a.Log.Printf("[DEBUG] create rule set group, sending JSON: %s", string(jsonCfg))
+    }
+
+    result, err := a.Post(config.RuleSetGroupPrefix, jsonCfg)
+    if err != nil {
+        return nil, err
+    }
+
+    group := &RuleSetGroup{}
+    if err := json.Unmarshal(result, group); err != nil {
+        return nil, err
+    }
+
+    return group, nil
+}
+
+// DeleteRuleSetGroup deletes passed rule set group.
+func (a *API) DeleteRuleSetGroup(cfg *RuleSetGroup) (bool, error) {
+    if cfg == nil {
+        return false, fmt.Errorf("Invalid rule set group config [nil]")
+    }
+    return a.DeleteRuleSetGroupByCID(CIDType(&cfg.CID))
+}
+
+// DeleteRuleSetGroupByCID deletes rule set group with passed cid.
+func (a *API) DeleteRuleSetGroupByCID(cid CIDType) (bool, error) { + if cid == nil || *cid == "" { + return false, fmt.Errorf("Invalid rule set group CID [none]") + } + + groupCID := string(*cid) + + matched, err := regexp.MatchString(config.RuleSetGroupCIDRegex, groupCID) + if err != nil { + return false, err + } + if !matched { + return false, fmt.Errorf("Invalid rule set group CID [%s]", groupCID) + } + + _, err = a.Delete(groupCID) + if err != nil { + return false, err + } + + return true, nil +} + +// SearchRuleSetGroups returns rule set groups matching the +// specified search query and/or filter. If nil is passed for +// both parameters all rule set groups will be returned. +func (a *API) SearchRuleSetGroups(searchCriteria *SearchQueryType, filterCriteria *SearchFilterType) (*[]RuleSetGroup, error) { + q := url.Values{} + + if searchCriteria != nil && *searchCriteria != "" { + q.Set("search", string(*searchCriteria)) + } + + if filterCriteria != nil && len(*filterCriteria) > 0 { + for filter, criteria := range *filterCriteria { + for _, val := range criteria { + q.Add(filter, val) + } + } + } + + if q.Encode() == "" { + return a.FetchRuleSetGroups() + } + + reqURL := url.URL{ + Path: config.RuleSetGroupPrefix, + RawQuery: q.Encode(), + } + + result, err := a.Get(reqURL.String()) + if err != nil { + return nil, fmt.Errorf("[ERROR] API call error %+v", err) + } + + var groups []RuleSetGroup + if err := json.Unmarshal(result, &groups); err != nil { + return nil, err + } + + return &groups, nil +} diff --git a/vendor/github.com/circonus-labs/circonus-gometrics/api/user.go b/vendor/github.com/circonus-labs/circonus-gometrics/api/user.go new file mode 100644 index 0000000000..7771991d3e --- /dev/null +++ b/vendor/github.com/circonus-labs/circonus-gometrics/api/user.go @@ -0,0 +1,159 @@ +// Copyright 2016 Circonus, Inc. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// User API support - Fetch, Update, and Search +// See: https://login.circonus.com/resources/api/calls/user +// Note: Create and Delete are not supported directly via the User API +// endpoint. See the Account endpoint for inviting and removing users +// from specific accounts. + +package api + +import ( + "encoding/json" + "fmt" + "net/url" + "regexp" + + "github.com/circonus-labs/circonus-gometrics/api/config" +) + +// UserContactInfo defines known contact details +type UserContactInfo struct { + SMS string `json:"sms,omitempty"` // string + XMPP string `json:"xmpp,omitempty"` // string +} + +// User defines a user. See https://login.circonus.com/resources/api/calls/user for more information. +type User struct { + CID string `json:"_cid,omitempty"` // string + ContactInfo UserContactInfo `json:"contact_info,omitempty"` // UserContactInfo + Email string `json:"email"` // string + Firstname string `json:"firstname"` // string + Lastname string `json:"lastname"` // string +} + +// FetchUser retrieves user with passed cid. Pass nil for '/user/current'. 
+func (a *API) FetchUser(cid CIDType) (*User, error) {
+    var userCID string
+
+    if cid == nil || *cid == "" {
+        userCID = config.UserPrefix + "/current"
+    } else {
+        userCID = string(*cid)
+    }
+
+    matched, err := regexp.MatchString(config.UserCIDRegex, userCID)
+    if err != nil {
+        return nil, err
+    }
+    if !matched {
+        return nil, fmt.Errorf("Invalid user CID [%s]", userCID)
+    }
+
+    result, err := a.Get(userCID)
+    if err != nil {
+        return nil, err
+    }
+
+    if a.Debug {
+        a.Log.Printf("[DEBUG] fetch user, received JSON: %s", string(result))
+    }
+
+    user := new(User)
+    if err := json.Unmarshal(result, user); err != nil {
+        return nil, err
+    }
+
+    return user, nil
+}
+
+// FetchUsers retrieves all users available to API Token.
+func (a *API) FetchUsers() (*[]User, error) {
+    result, err := a.Get(config.UserPrefix)
+    if err != nil {
+        return nil, err
+    }
+
+    var users []User
+    if err := json.Unmarshal(result, &users); err != nil {
+        return nil, err
+    }
+
+    return &users, nil
+}
+
+// UpdateUser updates passed user.
+func (a *API) UpdateUser(cfg *User) (*User, error) {
+    if cfg == nil {
+        return nil, fmt.Errorf("Invalid user config [nil]")
+    }
+
+    userCID := string(cfg.CID)
+
+    matched, err := regexp.MatchString(config.UserCIDRegex, userCID)
+    if err != nil {
+        return nil, err
+    }
+    if !matched {
+        return nil, fmt.Errorf("Invalid user CID [%s]", userCID)
+    }
+
+    jsonCfg, err := json.Marshal(cfg)
+    if err != nil {
+        return nil, err
+    }
+
+    if a.Debug {
+        a.Log.Printf("[DEBUG] update user, sending JSON: %s", string(jsonCfg))
+    }
+
+    result, err := a.Put(userCID, jsonCfg)
+    if err != nil {
+        return nil, err
+    }
+
+    user := &User{}
+    if err := json.Unmarshal(result, user); err != nil {
+        return nil, err
+    }
+
+    return user, nil
+}
+
+// SearchUsers returns users matching a filter (search queries
+// are not supported by the user endpoint). Pass nil as filter for all
+// users available to the API Token.
+func (a *API) SearchUsers(filterCriteria *SearchFilterType) (*[]User, error) {
+    q := url.Values{}
+
+    if filterCriteria != nil && len(*filterCriteria) > 0 {
+        for filter, criteria := range *filterCriteria {
+            for _, val := range criteria {
+                q.Add(filter, val)
+            }
+        }
+    }
+
+    if q.Encode() == "" {
+        return a.FetchUsers()
+    }
+
+    reqURL := url.URL{
+        Path:     config.UserPrefix,
+        RawQuery: q.Encode(),
+    }
+
+    result, err := a.Get(reqURL.String())
+    if err != nil {
+        return nil, fmt.Errorf("[ERROR] API call error %+v", err)
+    }
+
+    var users []User
+    if err := json.Unmarshal(result, &users); err != nil {
+        return nil, err
+    }
+
+    return &users, nil
+}
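Aside: the two access patterns user.go supports, sketched for illustration. Passing nil to FetchUser resolves to /user/current per the code above; SearchUsers accepts only filters, and the filter key below is an assumption, not taken from this diff.

    me, err := apih.FetchUser(nil) // nil CID => /user/current
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(me.Email)

    filter := api.SearchFilterType{
        "f_firstname": {"Ada"}, // hypothetical filter key/value
    }
    users, err := apih.SearchUsers(&filter)
    if err != nil {
        log.Fatal(err)
    }

diff --git a/vendor/github.com/circonus-labs/circonus-gometrics/api/worksheet.go b/vendor/github.com/circonus-labs/circonus-gometrics/api/worksheet.go
new file mode 100644
index 0000000000..0dd5e93734
--- /dev/null
+++ b/vendor/github.com/circonus-labs/circonus-gometrics/api/worksheet.go
@@ -0,0 +1,232 @@
+// Copyright 2016 Circonus, Inc. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.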
diff --git a/vendor/github.com/circonus-labs/circonus-gometrics/api/worksheet.go b/vendor/github.com/circonus-labs/circonus-gometrics/api/worksheet.go
new file mode 100644
index 0000000000..0dd5e93734
--- /dev/null
+++ b/vendor/github.com/circonus-labs/circonus-gometrics/api/worksheet.go
@@ -0,0 +1,232 @@
+// Copyright 2016 Circonus, Inc. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Worksheet API support - Fetch, Create, Update, Delete, and Search
+// See: https://login.circonus.com/resources/api/calls/worksheet
+
+package api
+
+import (
+ "encoding/json"
+ "fmt"
+ "net/url"
+ "regexp"
+
+ "github.com/circonus-labs/circonus-gometrics/api/config"
+)
+
+// WorksheetGraph defines a graph cid to be included in the worksheet
+type WorksheetGraph struct {
+ GraphCID string `json:"graph"` // string
+}
+
+// WorksheetSmartQuery defines a query to dynamically include graphs in the worksheet
+type WorksheetSmartQuery struct {
+ Name string `json:"name"`
+ Order []string `json:"order"`
+ Query string `json:"query"`
+}
+
+// Worksheet defines a worksheet. See https://login.circonus.com/resources/api/calls/worksheet for more information.
+type Worksheet struct {
+ CID string `json:"_cid,omitempty"` // string
+ Description *string `json:"description"` // string or null
+ Favorite bool `json:"favorite"` // boolean
+ Graphs []WorksheetGraph `json:"worksheets,omitempty"` // [] len >= 0
+ Notes *string `json:"notes"` // string or null
+ SmartQueries []WorksheetSmartQuery `json:"smart_queries,omitempty"` // [] len >= 0
+ Tags []string `json:"tags"` // [] len >= 0
+ Title string `json:"title"` // string
+}
+
+// NewWorksheet returns a new Worksheet (with defaults, if applicable)
+func NewWorksheet() *Worksheet {
+ return &Worksheet{}
+}
+
+// FetchWorksheet retrieves worksheet with passed cid.
+func (a *API) FetchWorksheet(cid CIDType) (*Worksheet, error) {
+ if cid == nil || *cid == "" {
+ return nil, fmt.Errorf("Invalid worksheet CID [none]")
+ }
+
+ worksheetCID := string(*cid)
+
+ matched, err := regexp.MatchString(config.WorksheetCIDRegex, worksheetCID)
+ if err != nil {
+ return nil, err
+ }
+ if !matched {
+ return nil, fmt.Errorf("Invalid worksheet CID [%s]", worksheetCID)
+ }
+
+ result, err := a.Get(worksheetCID)
+ if err != nil {
+ return nil, err
+ }
+
+ if a.Debug {
+ a.Log.Printf("[DEBUG] fetch worksheet, received JSON: %s", string(result))
+ }
+
+ worksheet := new(Worksheet)
+ if err := json.Unmarshal(result, worksheet); err != nil {
+ return nil, err
+ }
+
+ return worksheet, nil
+}
+
+// FetchWorksheets retrieves all worksheets available to the API Token.
+func (a *API) FetchWorksheets() (*[]Worksheet, error) {
+ result, err := a.Get(config.WorksheetPrefix)
+ if err != nil {
+ return nil, err
+ }
+
+ var worksheets []Worksheet
+ if err := json.Unmarshal(result, &worksheets); err != nil {
+ return nil, err
+ }
+
+ return &worksheets, nil
+}
+
+// UpdateWorksheet updates the passed worksheet.
+func (a *API) UpdateWorksheet(cfg *Worksheet) (*Worksheet, error) {
+ if cfg == nil {
+ return nil, fmt.Errorf("Invalid worksheet config [nil]")
+ }
+
+ worksheetCID := string(cfg.CID)
+
+ matched, err := regexp.MatchString(config.WorksheetCIDRegex, worksheetCID)
+ if err != nil {
+ return nil, err
+ }
+ if !matched {
+ return nil, fmt.Errorf("Invalid worksheet CID [%s]", worksheetCID)
+ }
+
+ jsonCfg, err := json.Marshal(cfg)
+ if err != nil {
+ return nil, err
+ }
+
+ if a.Debug {
+ a.Log.Printf("[DEBUG] update worksheet, sending JSON: %s", string(jsonCfg))
+ }
+
+ result, err := a.Put(worksheetCID, jsonCfg)
+ if err != nil {
+ return nil, err
+ }
+
+ worksheet := &Worksheet{}
+ if err := json.Unmarshal(result, worksheet); err != nil {
+ return nil, err
+ }
+
+ return worksheet, nil
+}
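// Editorial usage sketch, not part of the vendored patch: creating and then
// favoriting a worksheet with NewWorksheet, CreateWorksheet (defined just
// below), and UpdateWorksheet. The title and graph CID are hypothetical.
func ExampleWorksheetCreate(client *API) (*Worksheet, error) {
	ws := NewWorksheet()
	ws.Title = "example ops worksheet"
	ws.Graphs = []WorksheetGraph{
		{GraphCID: "/graph/example-cid"}, // hypothetical graph CID
	}

	created, err := client.CreateWorksheet(ws)
	if err != nil {
		return nil, err
	}

	created.Favorite = true
	return client.UpdateWorksheet(created)
}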
+
+// CreateWorksheet creates a new worksheet.
+func (a *API) CreateWorksheet(cfg *Worksheet) (*Worksheet, error) {
+ if cfg == nil {
+ return nil, fmt.Errorf("Invalid worksheet config [nil]")
+ }
+
+ jsonCfg, err := json.Marshal(cfg)
+ if err != nil {
+ return nil, err
+ }
+
+ if a.Debug {
+ a.Log.Printf("[DEBUG] create worksheet, sending JSON: %s", string(jsonCfg))
+ }
+
+ result, err := a.Post(config.WorksheetPrefix, jsonCfg)
+ if err != nil {
+ return nil, err
+ }
+
+ worksheet := &Worksheet{}
+ if err := json.Unmarshal(result, worksheet); err != nil {
+ return nil, err
+ }
+
+ return worksheet, nil
+}
+
+// DeleteWorksheet deletes the passed worksheet.
+func (a *API) DeleteWorksheet(cfg *Worksheet) (bool, error) {
+ if cfg == nil {
+ return false, fmt.Errorf("Invalid worksheet config [nil]")
+ }
+ return a.DeleteWorksheetByCID(CIDType(&cfg.CID))
+}
+
+// DeleteWorksheetByCID deletes worksheet with passed cid.
+func (a *API) DeleteWorksheetByCID(cid CIDType) (bool, error) {
+ if cid == nil || *cid == "" {
+ return false, fmt.Errorf("Invalid worksheet CID [none]")
+ }
+
+ worksheetCID := string(*cid)
+
+ matched, err := regexp.MatchString(config.WorksheetCIDRegex, worksheetCID)
+ if err != nil {
+ return false, err
+ }
+ if !matched {
+ return false, fmt.Errorf("Invalid worksheet CID [%s]", worksheetCID)
+ }
+
+ _, err = a.Delete(worksheetCID)
+ if err != nil {
+ return false, err
+ }
+
+ return true, nil
+}
+
+// SearchWorksheets returns worksheets matching the specified search
+// query and/or filter. If nil is passed for both parameters all
+// worksheets will be returned.
+func (a *API) SearchWorksheets(searchCriteria *SearchQueryType, filterCriteria *SearchFilterType) (*[]Worksheet, error) {
+ q := url.Values{}
+
+ if searchCriteria != nil && *searchCriteria != "" {
+ q.Set("search", string(*searchCriteria))
+ }
+
+ if filterCriteria != nil && len(*filterCriteria) > 0 {
+ for filter, criteria := range *filterCriteria {
+ for _, val := range criteria {
+ q.Add(filter, val)
+ }
+ }
+ }
+
+ if q.Encode() == "" {
+ return a.FetchWorksheets()
+ }
+
+ reqURL := url.URL{
+ Path: config.WorksheetPrefix,
+ RawQuery: q.Encode(),
+ }
+
+ result, err := a.Get(reqURL.String())
+ if err != nil {
+ return nil, fmt.Errorf("[ERROR] API call error %+v", err)
+ }
+
+ var worksheets []Worksheet
+ if err := json.Unmarshal(result, &worksheets); err != nil {
+ return nil, err
+ }
+
+ return &worksheets, nil
+}
diff --git a/vendor/github.com/cyberdelia/heroku-go/v3/heroku.go b/vendor/github.com/cyberdelia/heroku-go/v3/heroku.go
index dbfcb955b9..2ad04dd335 100644
--- a/vendor/github.com/cyberdelia/heroku-go/v3/heroku.go
+++ b/vendor/github.com/cyberdelia/heroku-go/v3/heroku.go
@@ -12,6 +12,7 @@ package heroku
 import (
 "bytes"
+ "context"
 "encoding/json"
 "fmt"
 "io"
@@ -19,17 +20,20 @@ import (
 "reflect"
 "runtime"
 "time"
+
+ "github.com/google/go-querystring/query"
 )
 
 const (
 Version = "v3"
- DefaultAPIURL = "https://api.heroku.com"
 DefaultUserAgent = "heroku/" + Version + " (" + runtime.GOOS + "; " + runtime.GOARCH + ")"
+ DefaultURL = "https://api.heroku.com"
 )
 
 // Service represents your API.
 type Service struct {
 client *http.Client
+ URL string
 }
 
// NewService creates a Service using the given, if none is provided
@@ -40,11 +44,12 @@ func NewService(c *http.Client) *Service {
 }
 return &Service{
 client: c,
+ URL: DefaultURL,
 }
}
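// Editorial usage sketch, not part of the vendored patch: the reworked client
// threads a context.Context through every call and exposes the base URL on
// the Service, so it can be repointed (for example at a test server). The
// timeout value here is an arbitrary illustration; the caller owns cancel.
func ExampleServiceSetup() (*Service, context.Context, context.CancelFunc) {
	s := NewService(nil) // nil falls back to http.DefaultClient
	s.URL = DefaultURL   // override to target a mock server instead

	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	return s, ctx, cancel
}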
// NewRequest generates an HTTP request, but does not perform the request.
-func (s *Service) NewRequest(method, path string, body interface{}) (*http.Request, error) {
+func (s *Service) NewRequest(ctx context.Context, method, path string, body interface{}, q interface{}) (*http.Request, error) {
 var ctype string
 var rbody io.Reader
 switch t := body.(type) {
@@ -71,10 +76,22 @@ func (s *Service) NewRequest(method, path string, body interface{}) (*http.Reque
 rbody = bytes.NewReader(j)
 ctype = "application/json"
 }
- req, err := http.NewRequest(method, DefaultAPIURL+path, rbody)
+ req, err := http.NewRequest(method, s.URL+path, rbody)
 if err != nil {
 return nil, err
 }
+ req = req.WithContext(ctx)
+ if q != nil {
+ v, err := query.Values(q)
+ if err != nil {
+ return nil, err
+ }
+ query := v.Encode()
+ if req.URL.RawQuery != "" && query != "" {
+ req.URL.RawQuery += "&"
+ }
+ req.URL.RawQuery += query
+ }
 req.Header.Set("Accept", "application/json")
 req.Header.Set("User-Agent", DefaultUserAgent)
 if ctype != "" {
@@ -84,8 +101,8 @@ func (s *Service) NewRequest(method, path string, body interface{}) (*http.Reque
 }
 
 // Do sends a request and decodes the response into v.
-func (s *Service) Do(v interface{}, method, path string, body interface{}, lr *ListRange) error {
- req, err := s.NewRequest(method, path, body)
+func (s *Service) Do(ctx context.Context, v interface{}, method, path string, body interface{}, q interface{}, lr *ListRange) error {
+ req, err := s.NewRequest(ctx, method, path, body, q)
 if err != nil {
 return err
 }
@@ -108,28 +125,28 @@ func (s *Service) Do(v interface{}, method, path string, body interface{}, lr *L
 }
 
 // Get sends a GET request and decodes the response into v.
-func (s *Service) Get(v interface{}, path string, lr *ListRange) error {
- return s.Do(v, "GET", path, nil, lr)
+func (s *Service) Get(ctx context.Context, v interface{}, path string, query interface{}, lr *ListRange) error {
+ return s.Do(ctx, v, "GET", path, nil, query, lr)
 }
 
 // Patch sends a PATCH request and decodes the response into v.
-func (s *Service) Patch(v interface{}, path string, body interface{}) error {
- return s.Do(v, "PATCH", path, body, nil)
+func (s *Service) Patch(ctx context.Context, v interface{}, path string, body interface{}) error {
+ return s.Do(ctx, v, "PATCH", path, body, nil, nil)
 }
 
 // Post sends a POST request and decodes the response into v.
-func (s *Service) Post(v interface{}, path string, body interface{}) error {
- return s.Do(v, "POST", path, body, nil)
+func (s *Service) Post(ctx context.Context, v interface{}, path string, body interface{}) error {
+ return s.Do(ctx, v, "POST", path, body, nil, nil)
 }
 
 // Put sends a PUT request and decodes the response into v.
-func (s *Service) Put(v interface{}, path string, body interface{}) error {
- return s.Do(v, "PUT", path, body, nil)
+func (s *Service) Put(ctx context.Context, v interface{}, path string, body interface{}) error {
+ return s.Do(ctx, v, "PUT", path, body, nil, nil)
 }
 
 // Delete sends a DELETE request.
-func (s *Service) Delete(path string) error {
- return s.Do(nil, "DELETE", path, nil, nil)
+func (s *Service) Delete(ctx context.Context, v interface{}, path string) error {
+ return s.Do(ctx, v, "DELETE", path, nil, nil, nil)
 }
 
 // ListRange describes a range.
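// Editorial usage sketch, not part of the vendored patch: with the Range
// header fix in the next hunk, a descending bounded range renders as a single
// semicolon-separated header, e.g. "Range: ...; max=10; order=desc". Only the
// Max and Descending fields used by SetHeader above are assumed here.
func ExampleRangedList(ctx context.Context, s *Service) (AccountFeatureListResult, error) {
	lr := &ListRange{Max: 10, Descending: true}
	return s.AccountFeatureList(ctx, lr) // defined later in this file
}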
@@ -151,11 +168,11 @@ func (lr *ListRange) SetHeader(req *http.Request) { if lr.Max != 0 { hdrval += fmt.Sprintf("; max=%d", lr.Max) if lr.Descending { - hdrval += ", " + hdrval += "; " } } if lr.Descending { - hdrval += ", order=desc" + hdrval += "order=desc" } req.Header.Set("Range", hdrval) return @@ -192,299 +209,1352 @@ func String(v string) *string { // An account represents an individual signed up to use the Heroku // platform. type Account struct { - AllowTracking bool `json:"allow_tracking"` // whether to allow third party web activity tracking - Beta bool `json:"beta"` // whether allowed to utilize beta Heroku features - CreatedAt time.Time `json:"created_at"` // when account was created - Email string `json:"email"` // unique email address of account - ID string `json:"id"` // unique identifier of an account - LastLogin time.Time `json:"last_login"` // when account last authorized with Heroku - Name *string `json:"name"` // full name of the account owner - UpdatedAt time.Time `json:"updated_at"` // when account was updated - Verified bool `json:"verified"` // whether account has been verified with billing information + AllowTracking bool `json:"allow_tracking" url:"allow_tracking,key"` // whether to allow third party web activity tracking + Beta bool `json:"beta" url:"beta,key"` // whether allowed to utilize beta Heroku features + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when account was created + DefaultOrganization *struct { + ID string `json:"id" url:"id,key"` // unique identifier of organization + Name string `json:"name" url:"name,key"` // unique name of organization + } `json:"default_organization" url:"default_organization,key"` // organization selected by default + DelinquentAt *time.Time `json:"delinquent_at" url:"delinquent_at,key"` // when account became delinquent + Email string `json:"email" url:"email,key"` // unique email address of account + Federated bool `json:"federated" url:"federated,key"` // whether the user is federated and belongs to an Identity Provider + ID string `json:"id" url:"id,key"` // unique identifier of an account + IdentityProvider *struct { + ID string `json:"id" url:"id,key"` // unique identifier of this identity provider + Organization struct { + Name string `json:"name" url:"name,key"` // unique name of organization + } `json:"organization" url:"organization,key"` + } `json:"identity_provider" url:"identity_provider,key"` // Identity Provider details for federated users. 
+ LastLogin *time.Time `json:"last_login" url:"last_login,key"` // when account last authorized with Heroku + Name *string `json:"name" url:"name,key"` // full name of the account owner + SmsNumber *string `json:"sms_number" url:"sms_number,key"` // SMS number of account + SuspendedAt *time.Time `json:"suspended_at" url:"suspended_at,key"` // when account was suspended + TwoFactorAuthentication bool `json:"two_factor_authentication" url:"two_factor_authentication,key"` // whether two-factor auth is enabled on the account + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when account was updated + Verified bool `json:"verified" url:"verified,key"` // whether account has been verified with billing information +} +type AccountInfoResult struct { + AllowTracking bool `json:"allow_tracking" url:"allow_tracking,key"` // whether to allow third party web activity tracking + Beta bool `json:"beta" url:"beta,key"` // whether allowed to utilize beta Heroku features + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when account was created + DefaultOrganization *struct { + ID string `json:"id" url:"id,key"` // unique identifier of organization + Name string `json:"name" url:"name,key"` // unique name of organization + } `json:"default_organization" url:"default_organization,key"` // organization selected by default + DelinquentAt *time.Time `json:"delinquent_at" url:"delinquent_at,key"` // when account became delinquent + Email string `json:"email" url:"email,key"` // unique email address of account + Federated bool `json:"federated" url:"federated,key"` // whether the user is federated and belongs to an Identity Provider + ID string `json:"id" url:"id,key"` // unique identifier of an account + IdentityProvider *struct { + ID string `json:"id" url:"id,key"` // unique identifier of this identity provider + Organization struct { + Name string `json:"name" url:"name,key"` // unique name of organization + } `json:"organization" url:"organization,key"` + } `json:"identity_provider" url:"identity_provider,key"` // Identity Provider details for federated users. + LastLogin *time.Time `json:"last_login" url:"last_login,key"` // when account last authorized with Heroku + Name *string `json:"name" url:"name,key"` // full name of the account owner + SmsNumber *string `json:"sms_number" url:"sms_number,key"` // SMS number of account + SuspendedAt *time.Time `json:"suspended_at" url:"suspended_at,key"` // when account was suspended + TwoFactorAuthentication bool `json:"two_factor_authentication" url:"two_factor_authentication,key"` // whether two-factor auth is enabled on the account + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when account was updated + Verified bool `json:"verified" url:"verified,key"` // whether account has been verified with billing information } // Info for account. 
-func (s *Service) AccountInfo() (*Account, error) { - var account Account - return &account, s.Get(&account, fmt.Sprintf("/account"), nil) +func (s *Service) AccountInfo(ctx context.Context) (*AccountInfoResult, error) { + var account AccountInfoResult + return &account, s.Get(ctx, &account, fmt.Sprintf("/account"), nil, nil) } type AccountUpdateOpts struct { - AllowTracking *bool `json:"allow_tracking,omitempty"` // whether to allow third party web activity tracking - Beta *bool `json:"beta,omitempty"` // whether allowed to utilize beta Heroku features - Name *string `json:"name,omitempty"` // full name of the account owner - Password string `json:"password"` // current password on the account + AllowTracking *bool `json:"allow_tracking,omitempty" url:"allow_tracking,omitempty,key"` // whether to allow third party web activity tracking + Beta *bool `json:"beta,omitempty" url:"beta,omitempty,key"` // whether allowed to utilize beta Heroku features + Name *string `json:"name,omitempty" url:"name,omitempty,key"` // full name of the account owner +} +type AccountUpdateResult struct { + AllowTracking bool `json:"allow_tracking" url:"allow_tracking,key"` // whether to allow third party web activity tracking + Beta bool `json:"beta" url:"beta,key"` // whether allowed to utilize beta Heroku features + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when account was created + DefaultOrganization *struct { + ID string `json:"id" url:"id,key"` // unique identifier of organization + Name string `json:"name" url:"name,key"` // unique name of organization + } `json:"default_organization" url:"default_organization,key"` // organization selected by default + DelinquentAt *time.Time `json:"delinquent_at" url:"delinquent_at,key"` // when account became delinquent + Email string `json:"email" url:"email,key"` // unique email address of account + Federated bool `json:"federated" url:"federated,key"` // whether the user is federated and belongs to an Identity Provider + ID string `json:"id" url:"id,key"` // unique identifier of an account + IdentityProvider *struct { + ID string `json:"id" url:"id,key"` // unique identifier of this identity provider + Organization struct { + Name string `json:"name" url:"name,key"` // unique name of organization + } `json:"organization" url:"organization,key"` + } `json:"identity_provider" url:"identity_provider,key"` // Identity Provider details for federated users. + LastLogin *time.Time `json:"last_login" url:"last_login,key"` // when account last authorized with Heroku + Name *string `json:"name" url:"name,key"` // full name of the account owner + SmsNumber *string `json:"sms_number" url:"sms_number,key"` // SMS number of account + SuspendedAt *time.Time `json:"suspended_at" url:"suspended_at,key"` // when account was suspended + TwoFactorAuthentication bool `json:"two_factor_authentication" url:"two_factor_authentication,key"` // whether two-factor auth is enabled on the account + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when account was updated + Verified bool `json:"verified" url:"verified,key"` // whether account has been verified with billing information } // Update account. 
-func (s *Service) AccountUpdate(o struct { - AllowTracking *bool `json:"allow_tracking,omitempty"` // whether to allow third party web activity tracking - Beta *bool `json:"beta,omitempty"` // whether allowed to utilize beta Heroku features - Name *string `json:"name,omitempty"` // full name of the account owner - Password string `json:"password"` // current password on the account -}) (*Account, error) { - var account Account - return &account, s.Patch(&account, fmt.Sprintf("/account"), o) +func (s *Service) AccountUpdate(ctx context.Context, o AccountUpdateOpts) (*AccountUpdateResult, error) { + var account AccountUpdateResult + return &account, s.Patch(ctx, &account, fmt.Sprintf("/account"), o) } -type AccountChangeEmailOpts struct { - Email string `json:"email"` // unique email address of account - Password string `json:"password"` // current password on the account +type AccountDeleteResult struct { + AllowTracking bool `json:"allow_tracking" url:"allow_tracking,key"` // whether to allow third party web activity tracking + Beta bool `json:"beta" url:"beta,key"` // whether allowed to utilize beta Heroku features + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when account was created + DefaultOrganization *struct { + ID string `json:"id" url:"id,key"` // unique identifier of organization + Name string `json:"name" url:"name,key"` // unique name of organization + } `json:"default_organization" url:"default_organization,key"` // organization selected by default + DelinquentAt *time.Time `json:"delinquent_at" url:"delinquent_at,key"` // when account became delinquent + Email string `json:"email" url:"email,key"` // unique email address of account + Federated bool `json:"federated" url:"federated,key"` // whether the user is federated and belongs to an Identity Provider + ID string `json:"id" url:"id,key"` // unique identifier of an account + IdentityProvider *struct { + ID string `json:"id" url:"id,key"` // unique identifier of this identity provider + Organization struct { + Name string `json:"name" url:"name,key"` // unique name of organization + } `json:"organization" url:"organization,key"` + } `json:"identity_provider" url:"identity_provider,key"` // Identity Provider details for federated users. + LastLogin *time.Time `json:"last_login" url:"last_login,key"` // when account last authorized with Heroku + Name *string `json:"name" url:"name,key"` // full name of the account owner + SmsNumber *string `json:"sms_number" url:"sms_number,key"` // SMS number of account + SuspendedAt *time.Time `json:"suspended_at" url:"suspended_at,key"` // when account was suspended + TwoFactorAuthentication bool `json:"two_factor_authentication" url:"two_factor_authentication,key"` // whether two-factor auth is enabled on the account + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when account was updated + Verified bool `json:"verified" url:"verified,key"` // whether account has been verified with billing information } -// Change Email for account. 
-func (s *Service) AccountChangeEmail(o struct { - Email string `json:"email"` // unique email address of account - Password string `json:"password"` // current password on the account -}) (*Account, error) { - var account Account - return &account, s.Patch(&account, fmt.Sprintf("/account"), o) -} - -type AccountChangePasswordOpts struct { - NewPassword string `json:"new_password"` // the new password for the account when changing the password - Password string `json:"password"` // current password on the account -} - -// Change Password for account. -func (s *Service) AccountChangePassword(o struct { - NewPassword string `json:"new_password"` // the new password for the account when changing the password - Password string `json:"password"` // current password on the account -}) (*Account, error) { - var account Account - return &account, s.Patch(&account, fmt.Sprintf("/account"), o) +// Delete account. Note that this action cannot be undone. +func (s *Service) AccountDelete(ctx context.Context) (*AccountDeleteResult, error) { + var account AccountDeleteResult + return &account, s.Delete(ctx, &account, fmt.Sprintf("/account")) } // An account feature represents a Heroku labs capability that can be // enabled or disabled for an account on Heroku. type AccountFeature struct { - CreatedAt time.Time `json:"created_at"` // when account feature was created - Description string `json:"description"` // description of account feature - DocURL string `json:"doc_url"` // documentation URL of account feature - Enabled bool `json:"enabled"` // whether or not account feature has been enabled - ID string `json:"id"` // unique identifier of account feature - Name string `json:"name"` // unique name of account feature - State string `json:"state"` // state of account feature - UpdatedAt time.Time `json:"updated_at"` // when account feature was updated + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when account feature was created + Description string `json:"description" url:"description,key"` // description of account feature + DocURL string `json:"doc_url" url:"doc_url,key"` // documentation URL of account feature + Enabled bool `json:"enabled" url:"enabled,key"` // whether or not account feature has been enabled + ID string `json:"id" url:"id,key"` // unique identifier of account feature + Name string `json:"name" url:"name,key"` // unique name of account feature + State string `json:"state" url:"state,key"` // state of account feature + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when account feature was updated +} +type AccountFeatureInfoResult struct { + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when account feature was created + Description string `json:"description" url:"description,key"` // description of account feature + DocURL string `json:"doc_url" url:"doc_url,key"` // documentation URL of account feature + Enabled bool `json:"enabled" url:"enabled,key"` // whether or not account feature has been enabled + ID string `json:"id" url:"id,key"` // unique identifier of account feature + Name string `json:"name" url:"name,key"` // unique name of account feature + State string `json:"state" url:"state,key"` // state of account feature + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when account feature was updated } // Info for an existing account feature. 
-func (s *Service) AccountFeatureInfo(accountFeatureIdentity string) (*AccountFeature, error) { - var accountFeature AccountFeature - return &accountFeature, s.Get(&accountFeature, fmt.Sprintf("/account/features/%v", accountFeatureIdentity), nil) +func (s *Service) AccountFeatureInfo(ctx context.Context, accountFeatureIdentity string) (*AccountFeatureInfoResult, error) { + var accountFeature AccountFeatureInfoResult + return &accountFeature, s.Get(ctx, &accountFeature, fmt.Sprintf("/account/features/%v", accountFeatureIdentity), nil, nil) +} + +type AccountFeatureListResult []struct { + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when account feature was created + Description string `json:"description" url:"description,key"` // description of account feature + DocURL string `json:"doc_url" url:"doc_url,key"` // documentation URL of account feature + Enabled bool `json:"enabled" url:"enabled,key"` // whether or not account feature has been enabled + ID string `json:"id" url:"id,key"` // unique identifier of account feature + Name string `json:"name" url:"name,key"` // unique name of account feature + State string `json:"state" url:"state,key"` // state of account feature + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when account feature was updated } // List existing account features. -func (s *Service) AccountFeatureList(lr *ListRange) ([]*AccountFeature, error) { - var accountFeatureList []*AccountFeature - return accountFeatureList, s.Get(&accountFeatureList, fmt.Sprintf("/account/features"), lr) +func (s *Service) AccountFeatureList(ctx context.Context, lr *ListRange) (AccountFeatureListResult, error) { + var accountFeature AccountFeatureListResult + return accountFeature, s.Get(ctx, &accountFeature, fmt.Sprintf("/account/features"), nil, lr) } type AccountFeatureUpdateOpts struct { - Enabled bool `json:"enabled"` // whether or not account feature has been enabled + Enabled bool `json:"enabled" url:"enabled,key"` // whether or not account feature has been enabled +} +type AccountFeatureUpdateResult struct { + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when account feature was created + Description string `json:"description" url:"description,key"` // description of account feature + DocURL string `json:"doc_url" url:"doc_url,key"` // documentation URL of account feature + Enabled bool `json:"enabled" url:"enabled,key"` // whether or not account feature has been enabled + ID string `json:"id" url:"id,key"` // unique identifier of account feature + Name string `json:"name" url:"name,key"` // unique name of account feature + State string `json:"state" url:"state,key"` // state of account feature + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when account feature was updated } // Update an existing account feature. 
-func (s *Service) AccountFeatureUpdate(accountFeatureIdentity string, o struct {
- Enabled bool `json:"enabled"` // whether or not account feature has been enabled
-}) (*AccountFeature, error) {
- var accountFeature AccountFeature
- return &accountFeature, s.Patch(&accountFeature, fmt.Sprintf("/account/features/%v", accountFeatureIdentity), o)
+func (s *Service) AccountFeatureUpdate(ctx context.Context, accountFeatureIdentity string, o AccountFeatureUpdateOpts) (*AccountFeatureUpdateResult, error) {
+ var accountFeature AccountFeatureUpdateResult
+ return &accountFeature, s.Patch(ctx, &accountFeature, fmt.Sprintf("/account/features/%v", accountFeatureIdentity), o)
 }
 
-// Add-ons represent add-ons that have been provisioned for an app.
-type Addon struct {
+// Add-ons represent add-ons that have been provisioned and attached to
+// one or more apps.
+type AddOn struct {
+ Actions []struct{} `json:"actions" url:"actions,key"` // provider actions for this specific add-on
 AddonService struct {
- ID string `json:"id"` // unique identifier of this addon-service
- Name string `json:"name"` // unique name of this addon-service
- } `json:"addon_service"` // identity of add-on service
- ConfigVars []string `json:"config_vars"` // config vars associated with this application
- CreatedAt time.Time `json:"created_at"` // when add-on was updated
- ID string `json:"id"` // unique identifier of add-on
- Name string `json:"name"` // name of the add-on unique within its app
+ ID string `json:"id" url:"id,key"` // unique identifier of this add-on-service
+ Name string `json:"name" url:"name,key"` // unique name of this add-on-service
+ } `json:"addon_service" url:"addon_service,key"` // identity of add-on service
+ App struct {
+ ID string `json:"id" url:"id,key"` // unique identifier of app
+ Name string `json:"name" url:"name,key"` // unique name of app
+ } `json:"app" url:"app,key"` // billing application associated with this add-on
+ ConfigVars []string `json:"config_vars" url:"config_vars,key"` // config vars exposed to the owning app by this add-on
+ CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when add-on was created
+ ID string `json:"id" url:"id,key"` // unique identifier of add-on
+ Name string `json:"name" url:"name,key"` // globally unique name of the add-on
 Plan struct {
- ID string `json:"id"` // unique identifier of this plan
- Name string `json:"name"` // unique name of this plan
- } `json:"plan"` // identity of add-on plan
- ProviderID string `json:"provider_id"` // id of this add-on with its provider
- UpdatedAt time.Time `json:"updated_at"` // when add-on was updated
+ ID string `json:"id" url:"id,key"` // unique identifier of this plan
+ Name string `json:"name" url:"name,key"` // unique name of this plan
+ } `json:"plan" url:"plan,key"` // identity of add-on plan
+ ProviderID string `json:"provider_id" url:"provider_id,key"` // id of this add-on with its provider
+ State string `json:"state" url:"state,key"` // state in the add-on's lifecycle
+ UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when add-on was updated
+ WebURL *string `json:"web_url" url:"web_url,key"` // URL for logging into web interface of add-on (e.g. a dashboard)
 }
-type AddonCreateOpts struct {
- Config *map[string]string `json:"config,omitempty"` // custom add-on provisioning options
- Plan string `json:"plan"` // unique identifier of this plan
+type AddOnCreateOpts struct {
+ Attachment *struct{} `json:"attachment,omitempty" url:"attachment,omitempty,key"` // name for add-on's initial attachment
+ Config *map[string]string `json:"config,omitempty" url:"config,omitempty,key"` // custom add-on provisioning options
+ Plan string `json:"plan" url:"plan,key"` // unique identifier of this plan
+}
+type AddOnCreateResult struct {
+ Actions []struct{} `json:"actions" url:"actions,key"` // provider actions for this specific add-on
+ AddonService struct {
+ ID string `json:"id" url:"id,key"` // unique identifier of this add-on-service
+ Name string `json:"name" url:"name,key"` // unique name of this add-on-service
+ } `json:"addon_service" url:"addon_service,key"` // identity of add-on service
+ App struct {
+ ID string `json:"id" url:"id,key"` // unique identifier of app
+ Name string `json:"name" url:"name,key"` // unique name of app
+ } `json:"app" url:"app,key"` // billing application associated with this add-on
+ ConfigVars []string `json:"config_vars" url:"config_vars,key"` // config vars exposed to the owning app by this add-on
+ CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when add-on was created
+ ID string `json:"id" url:"id,key"` // unique identifier of add-on
+ Name string `json:"name" url:"name,key"` // globally unique name of the add-on
+ Plan struct {
+ ID string `json:"id" url:"id,key"` // unique identifier of this plan
+ Name string `json:"name" url:"name,key"` // unique name of this plan
+ } `json:"plan" url:"plan,key"` // identity of add-on plan
+ ProviderID string `json:"provider_id" url:"provider_id,key"` // id of this add-on with its provider
+ State string `json:"state" url:"state,key"` // state in the add-on's lifecycle
+ UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when add-on was updated
+ WebURL *string `json:"web_url" url:"web_url,key"` // URL for logging into web interface of add-on (e.g. a dashboard)
 }
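// Editorial usage sketch, not part of the vendored patch: provisioning an
// add-on through AddOnCreateOpts and AddOnCreate (defined just below). The
// app name, plan, and config key are hypothetical.
func ExampleAddOnProvision(ctx context.Context, s *Service) (*AddOnCreateResult, error) {
	cfg := map[string]string{"region": "us"} // illustrative provisioning option
	opts := AddOnCreateOpts{
		Config: &cfg,
		Plan:   "heroku-postgresql:hobby-dev",
	}
	return s.AddOnCreate(ctx, "example-app", opts)
}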
// Create a new add-on.
-func (s *Service) AddonCreate(appIdentity string, o struct {
- Config *map[string]string `json:"config,omitempty"` // custom add-on provisioning options
- Plan string `json:"plan"` // unique identifier of this plan
-}) (*Addon, error) {
- var addon Addon
- return &addon, s.Post(&addon, fmt.Sprintf("/apps/%v/addons", appIdentity), o)
+func (s *Service) AddOnCreate(ctx context.Context, appIdentity string, o AddOnCreateOpts) (*AddOnCreateResult, error) {
+ var addOn AddOnCreateResult
+ return &addOn, s.Post(ctx, &addOn, fmt.Sprintf("/apps/%v/addons", appIdentity), o)
+}
+
+type AddOnDeleteResult struct {
+ Actions []struct{} `json:"actions" url:"actions,key"` // provider actions for this specific add-on
+ AddonService struct {
+ ID string `json:"id" url:"id,key"` // unique identifier of this add-on-service
+ Name string `json:"name" url:"name,key"` // unique name of this add-on-service
+ } `json:"addon_service" url:"addon_service,key"` // identity of add-on service
+ App struct {
+ ID string `json:"id" url:"id,key"` // unique identifier of app
+ Name string `json:"name" url:"name,key"` // unique name of app
+ } `json:"app" url:"app,key"` // billing application associated with this add-on
+ ConfigVars []string `json:"config_vars" url:"config_vars,key"` // config vars exposed to the owning app by this add-on
+ CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when add-on was created
+ ID string `json:"id" url:"id,key"` // unique identifier of add-on
+ Name string `json:"name" url:"name,key"` // globally unique name of the add-on
+ Plan struct {
+ ID string `json:"id" url:"id,key"` // unique identifier of this plan
+ Name string `json:"name" url:"name,key"` // unique name of this plan
+ } `json:"plan" url:"plan,key"` // identity of add-on plan
+ ProviderID string `json:"provider_id" url:"provider_id,key"` // id of this add-on with its provider
+ State string `json:"state" url:"state,key"` // state in the add-on's lifecycle
+ UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when add-on was updated
+ WebURL *string `json:"web_url" url:"web_url,key"` // URL for logging into web interface of add-on (e.g. a dashboard)
 }
 
 // Delete an existing add-on.
-func (s *Service) AddonDelete(appIdentity string, addonIdentity string) error { - return s.Delete(fmt.Sprintf("/apps/%v/addons/%v", appIdentity, addonIdentity)) +func (s *Service) AddOnDelete(ctx context.Context, appIdentity string, addOnIdentity string) (*AddOnDeleteResult, error) { + var addOn AddOnDeleteResult + return &addOn, s.Delete(ctx, &addOn, fmt.Sprintf("/apps/%v/addons/%v", appIdentity, addOnIdentity)) +} + +type AddOnInfoResult struct { + Actions []struct{} `json:"actions" url:"actions,key"` // provider actions for this specific add-on + AddonService struct { + ID string `json:"id" url:"id,key"` // unique identifier of this add-on-service + Name string `json:"name" url:"name,key"` // unique name of this add-on-service + } `json:"addon_service" url:"addon_service,key"` // identity of add-on service + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + Name string `json:"name" url:"name,key"` // unique name of app + } `json:"app" url:"app,key"` // billing application associated with this add-on + ConfigVars []string `json:"config_vars" url:"config_vars,key"` // config vars exposed to the owning app by this add-on + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when add-on was created + ID string `json:"id" url:"id,key"` // unique identifier of add-on + Name string `json:"name" url:"name,key"` // globally unique name of the add-on + Plan struct { + ID string `json:"id" url:"id,key"` // unique identifier of this plan + Name string `json:"name" url:"name,key"` // unique name of this plan + } `json:"plan" url:"plan,key"` // identity of add-on plan + ProviderID string `json:"provider_id" url:"provider_id,key"` // id of this add-on with its provider + State string `json:"state" url:"state,key"` // state in the add-on's lifecycle + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when add-on was updated + WebURL *string `json:"web_url" url:"web_url,key"` // URL for logging into web interface of add-on (e.g. a dashboard) } // Info for an existing add-on. -func (s *Service) AddonInfo(appIdentity string, addonIdentity string) (*Addon, error) { - var addon Addon - return &addon, s.Get(&addon, fmt.Sprintf("/apps/%v/addons/%v", appIdentity, addonIdentity), nil) +func (s *Service) AddOnInfo(ctx context.Context, appIdentity string, addOnIdentity string) (*AddOnInfoResult, error) { + var addOn AddOnInfoResult + return &addOn, s.Get(ctx, &addOn, fmt.Sprintf("/apps/%v/addons/%v", appIdentity, addOnIdentity), nil, nil) } -// List existing add-ons. 
-func (s *Service) AddonList(appIdentity string, lr *ListRange) ([]*Addon, error) { - var addonList []*Addon - return addonList, s.Get(&addonList, fmt.Sprintf("/apps/%v/addons", appIdentity), lr) +type AddOnListResult []struct { + Actions []struct{} `json:"actions" url:"actions,key"` // provider actions for this specific add-on + AddonService struct { + ID string `json:"id" url:"id,key"` // unique identifier of this add-on-service + Name string `json:"name" url:"name,key"` // unique name of this add-on-service + } `json:"addon_service" url:"addon_service,key"` // identity of add-on service + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + Name string `json:"name" url:"name,key"` // unique name of app + } `json:"app" url:"app,key"` // billing application associated with this add-on + ConfigVars []string `json:"config_vars" url:"config_vars,key"` // config vars exposed to the owning app by this add-on + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when add-on was created + ID string `json:"id" url:"id,key"` // unique identifier of add-on + Name string `json:"name" url:"name,key"` // globally unique name of the add-on + Plan struct { + ID string `json:"id" url:"id,key"` // unique identifier of this plan + Name string `json:"name" url:"name,key"` // unique name of this plan + } `json:"plan" url:"plan,key"` // identity of add-on plan + ProviderID string `json:"provider_id" url:"provider_id,key"` // id of this add-on with its provider + State string `json:"state" url:"state,key"` // state in the add-on's lifecycle + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when add-on was updated + WebURL *string `json:"web_url" url:"web_url,key"` // URL for logging into web interface of add-on (e.g. a dashboard) } -type AddonUpdateOpts struct { - Plan string `json:"plan"` // unique identifier of this plan +// List all existing add-ons. 
+func (s *Service) AddOnList(ctx context.Context, lr *ListRange) (AddOnListResult, error) {
+ var addOn AddOnListResult
+ return addOn, s.Get(ctx, &addOn, fmt.Sprintf("/addons"), nil, lr)
+}
+
+type AddOnListByUserResult []struct {
+ Actions []struct{} `json:"actions" url:"actions,key"` // provider actions for this specific add-on
+ AddonService struct {
+ ID string `json:"id" url:"id,key"` // unique identifier of this add-on-service
+ Name string `json:"name" url:"name,key"` // unique name of this add-on-service
+ } `json:"addon_service" url:"addon_service,key"` // identity of add-on service
+ App struct {
+ ID string `json:"id" url:"id,key"` // unique identifier of app
+ Name string `json:"name" url:"name,key"` // unique name of app
+ } `json:"app" url:"app,key"` // billing application associated with this add-on
+ ConfigVars []string `json:"config_vars" url:"config_vars,key"` // config vars exposed to the owning app by this add-on
+ CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when add-on was created
+ ID string `json:"id" url:"id,key"` // unique identifier of add-on
+ Name string `json:"name" url:"name,key"` // globally unique name of the add-on
+ Plan struct {
+ ID string `json:"id" url:"id,key"` // unique identifier of this plan
+ Name string `json:"name" url:"name,key"` // unique name of this plan
+ } `json:"plan" url:"plan,key"` // identity of add-on plan
+ ProviderID string `json:"provider_id" url:"provider_id,key"` // id of this add-on with its provider
+ State string `json:"state" url:"state,key"` // state in the add-on's lifecycle
+ UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when add-on was updated
+ WebURL *string `json:"web_url" url:"web_url,key"` // URL for logging into web interface of add-on (e.g. a dashboard)
+}
+
+// List all existing add-ons a user has access to.
+func (s *Service) AddOnListByUser(ctx context.Context, accountIdentity string, lr *ListRange) (AddOnListByUserResult, error) {
+ var addOn AddOnListByUserResult
+ return addOn, s.Get(ctx, &addOn, fmt.Sprintf("/users/%v/addons", accountIdentity), nil, lr)
+}
+
+type AddOnListByAppResult []struct {
+ Actions []struct{} `json:"actions" url:"actions,key"` // provider actions for this specific add-on
+ AddonService struct {
+ ID string `json:"id" url:"id,key"` // unique identifier of this add-on-service
+ Name string `json:"name" url:"name,key"` // unique name of this add-on-service
+ } `json:"addon_service" url:"addon_service,key"` // identity of add-on service
+ App struct {
+ ID string `json:"id" url:"id,key"` // unique identifier of app
+ Name string `json:"name" url:"name,key"` // unique name of app
+ } `json:"app" url:"app,key"` // billing application associated with this add-on
+ ConfigVars []string `json:"config_vars" url:"config_vars,key"` // config vars exposed to the owning app by this add-on
+ CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when add-on was created
+ ID string `json:"id" url:"id,key"` // unique identifier of add-on
+ Name string `json:"name" url:"name,key"` // globally unique name of the add-on
+ Plan struct {
+ ID string `json:"id" url:"id,key"` // unique identifier of this plan
+ Name string `json:"name" url:"name,key"` // unique name of this plan
+ } `json:"plan" url:"plan,key"` // identity of add-on plan
+ ProviderID string `json:"provider_id" url:"provider_id,key"` // id of this add-on with its provider
+ State string `json:"state" url:"state,key"` // state in the add-on's lifecycle
+ UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when add-on was updated
+ WebURL *string `json:"web_url" url:"web_url,key"` // URL for logging into web interface of add-on (e.g. a dashboard)
+}
+
+// List existing add-ons for an app.
+func (s *Service) AddOnListByApp(ctx context.Context, appIdentity string, lr *ListRange) (AddOnListByAppResult, error) {
+ var addOn AddOnListByAppResult
+ return addOn, s.Get(ctx, &addOn, fmt.Sprintf("/apps/%v/addons", appIdentity), nil, lr)
+}
+
+type AddOnUpdateOpts struct {
+ Plan string `json:"plan" url:"plan,key"` // unique identifier of this plan
 }
 
 // Change add-on plan. Some add-ons may not support changing plans. In
 // that case, an error will be returned.
-func (s *Service) AddonUpdate(appIdentity string, addonIdentity string, o struct {
- Plan string `json:"plan"` // unique identifier of this plan
-}) (*Addon, error) {
- var addon Addon
- return &addon, s.Patch(&addon, fmt.Sprintf("/apps/%v/addons/%v", appIdentity, addonIdentity), o)
+func (s *Service) AddOnUpdate(ctx context.Context, appIdentity string, addOnIdentity string, o AddOnUpdateOpts) (*AddOn, error) {
+ var addOn AddOn
+ return &addOn, s.Patch(ctx, &addOn, fmt.Sprintf("/apps/%v/addons/%v", appIdentity, addOnIdentity), o)
+}
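// Editorial usage sketch, not part of the vendored patch: changing an
// add-on's plan via the renamed AddOnUpdate call; the identifiers and the
// target plan are hypothetical.
func ExampleAddOnPlanChange(ctx context.Context, s *Service) (*AddOn, error) {
	opts := AddOnUpdateOpts{Plan: "heroku-postgresql:standard-0"}
	return s.AddOnUpdate(ctx, "example-app", "example-addon", opts)
}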
+
+// Add-on Actions are lifecycle operations for add-on provisioning and
+// deprovisioning. They allow whitelisted add-on providers to
+// (de)provision add-ons in the background and then report back when
+// (de)provisioning is complete.
+type AddOnAction struct{}
+type AddOnActionCreateProvisionResult struct {
+ Actions []struct{} `json:"actions" url:"actions,key"` // provider actions for this specific add-on
+ AddonService struct {
+ ID string `json:"id" url:"id,key"` // unique identifier of this add-on-service
+ Name string `json:"name" url:"name,key"` // unique name of this add-on-service
+ } `json:"addon_service" url:"addon_service,key"` // identity of add-on service
+ App struct {
+ ID string `json:"id" url:"id,key"` // unique identifier of app
+ Name string `json:"name" url:"name,key"` // unique name of app
+ } `json:"app" url:"app,key"` // billing application associated with this add-on
+ ConfigVars []string `json:"config_vars" url:"config_vars,key"` // config vars exposed to the owning app by this add-on
+ CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when add-on was created
+ ID string `json:"id" url:"id,key"` // unique identifier of add-on
+ Name string `json:"name" url:"name,key"` // globally unique name of the add-on
+ Plan struct {
+ ID string `json:"id" url:"id,key"` // unique identifier of this plan
+ Name string `json:"name" url:"name,key"` // unique name of this plan
+ } `json:"plan" url:"plan,key"` // identity of add-on plan
+ ProviderID string `json:"provider_id" url:"provider_id,key"` // id of this add-on with its provider
+ State string `json:"state" url:"state,key"` // state in the add-on's lifecycle
+ UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when add-on was updated
+ WebURL *string `json:"web_url" url:"web_url,key"` // URL for logging into web interface of add-on (e.g. a dashboard)
+}
+
+// Mark an add-on as provisioned for use.
+func (s *Service) AddOnActionCreateProvision(ctx context.Context, addOnIdentity string) (*AddOnActionCreateProvisionResult, error) {
+ var addOnAction AddOnActionCreateProvisionResult
+ return &addOnAction, s.Post(ctx, &addOnAction, fmt.Sprintf("/addons/%v/actions/provision", addOnIdentity), nil)
+}
+
+type AddOnActionCreateDeprovisionResult struct {
+ Actions []struct{} `json:"actions" url:"actions,key"` // provider actions for this specific add-on
+ AddonService struct {
+ ID string `json:"id" url:"id,key"` // unique identifier of this add-on-service
+ Name string `json:"name" url:"name,key"` // unique name of this add-on-service
+ } `json:"addon_service" url:"addon_service,key"` // identity of add-on service
+ App struct {
+ ID string `json:"id" url:"id,key"` // unique identifier of app
+ Name string `json:"name" url:"name,key"` // unique name of app
+ } `json:"app" url:"app,key"` // billing application associated with this add-on
+ ConfigVars []string `json:"config_vars" url:"config_vars,key"` // config vars exposed to the owning app by this add-on
+ CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when add-on was created
+ ID string `json:"id" url:"id,key"` // unique identifier of add-on
+ Name string `json:"name" url:"name,key"` // globally unique name of the add-on
+ Plan struct {
+ ID string `json:"id" url:"id,key"` // unique identifier of this plan
+ Name string `json:"name" url:"name,key"` // unique name of this plan
+ } `json:"plan" url:"plan,key"` // identity of add-on plan
+ ProviderID string `json:"provider_id" url:"provider_id,key"` // id of this add-on with its provider
+ State string `json:"state" url:"state,key"` // state in the add-on's lifecycle
+ UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when add-on was updated
+ WebURL *string `json:"web_url" url:"web_url,key"` // URL for logging into web interface of add-on (e.g. a dashboard)
+}
+
+// Mark an add-on as deprovisioned.
+func (s *Service) AddOnActionCreateDeprovision(ctx context.Context, addOnIdentity string) (*AddOnActionCreateDeprovisionResult, error) {
+ var addOnAction AddOnActionCreateDeprovisionResult
+ return &addOnAction, s.Post(ctx, &addOnAction, fmt.Sprintf("/addons/%v/actions/deprovision", addOnIdentity), nil)
+}
+
+// An add-on attachment represents a connection between an app and an
+// add-on that it has been given access to.
+type AddOnAttachment struct {
+ Addon struct {
+ App struct {
+ ID string `json:"id" url:"id,key"` // unique identifier of app
+ Name string `json:"name" url:"name,key"` // unique name of app
+ } `json:"app" url:"app,key"` // billing application associated with this add-on
+ ID string `json:"id" url:"id,key"` // unique identifier of add-on
+ Name string `json:"name" url:"name,key"` // globally unique name of the add-on
+ Plan struct {
+ ID string `json:"id" url:"id,key"` // unique identifier of this plan
+ Name string `json:"name" url:"name,key"` // unique name of this plan
+ } `json:"plan" url:"plan,key"` // identity of add-on plan
+ } `json:"addon" url:"addon,key"` // identity of add-on
+ App struct {
+ ID string `json:"id" url:"id,key"` // unique identifier of app
+ Name string `json:"name" url:"name,key"` // unique name of app
+ } `json:"app" url:"app,key"` // application that is attached to add-on
+ CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when add-on attachment was created
+ ID string `json:"id" url:"id,key"` // unique identifier of this add-on attachment
+ Name string `json:"name" url:"name,key"` // unique name for this add-on attachment to this app
+ UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when add-on attachment was updated
+ WebURL *string `json:"web_url" url:"web_url,key"` // URL for logging into web interface of add-on in attached app context
+}
+type AddOnAttachmentCreateOpts struct {
+ Addon string `json:"addon" url:"addon,key"` // unique identifier of add-on
+ App string `json:"app" url:"app,key"` // unique identifier of app
+ Force *bool `json:"force,omitempty" url:"force,omitempty,key"` // whether or not to allow existing attachment with same name to be
+ // replaced
+ Name *string `json:"name,omitempty" url:"name,omitempty,key"` // unique name for this add-on attachment to this app
+}
+type AddOnAttachmentCreateResult struct {
+ Addon struct {
+ App struct {
+ ID string `json:"id" url:"id,key"` // unique identifier of app
+ Name string `json:"name" url:"name,key"` // unique name of app
+ } `json:"app" url:"app,key"` // billing application associated with this add-on
+ ID string `json:"id" url:"id,key"` // unique identifier of add-on
+ Name string `json:"name" url:"name,key"` // globally unique name of the add-on
+ Plan struct {
+ ID string `json:"id" url:"id,key"` // unique identifier of this plan
+ Name string `json:"name" url:"name,key"` // unique name of this plan
+ } `json:"plan" url:"plan,key"` // identity of add-on plan
+ } `json:"addon" url:"addon,key"` // identity of add-on
+ App struct {
+ ID string `json:"id" url:"id,key"` // unique identifier of app
+ Name string `json:"name" url:"name,key"` // unique name of app
+ } `json:"app" url:"app,key"` // application that is attached to add-on
+ CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when add-on attachment was created
+ ID string `json:"id" url:"id,key"` // unique identifier of this add-on attachment
+ Name string `json:"name" url:"name,key"` // unique name for this add-on attachment to this app
+ UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when add-on attachment was updated
+ WebURL *string `json:"web_url" url:"web_url,key"` // URL for logging into web interface of add-on in attached app context
+}
+
+// Create a new add-on attachment.
+func (s *Service) AddOnAttachmentCreate(ctx context.Context, o AddOnAttachmentCreateOpts) (*AddOnAttachmentCreateResult, error) {
+ var addOnAttachment AddOnAttachmentCreateResult
+ return &addOnAttachment, s.Post(ctx, &addOnAttachment, fmt.Sprintf("/addon-attachments"), o)
+}
+
+type AddOnAttachmentDeleteResult struct {
+ Addon struct {
+ App struct {
+ ID string `json:"id" url:"id,key"` // unique identifier of app
+ Name string `json:"name" url:"name,key"` // unique name of app
+ } `json:"app" url:"app,key"` // billing application associated with this add-on
+ ID string `json:"id" url:"id,key"` // unique identifier of add-on
+ Name string `json:"name" url:"name,key"` // globally unique name of the add-on
+ Plan struct {
+ ID string `json:"id" url:"id,key"` // unique identifier of this plan
+ Name string `json:"name" url:"name,key"` // unique name of this plan
+ } `json:"plan" url:"plan,key"` // identity of add-on plan
+ } `json:"addon" url:"addon,key"` // identity of add-on
+ App struct {
+ ID string `json:"id" url:"id,key"` // unique identifier of app
+ Name string `json:"name" url:"name,key"` // unique name of app
+ } `json:"app" url:"app,key"` // application that is attached to add-on
+ CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when add-on attachment was created
+ ID string `json:"id" url:"id,key"` // unique identifier of this add-on attachment
+ Name string `json:"name" url:"name,key"` // unique name for this add-on attachment to this app
+ UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when add-on attachment was updated
+ WebURL *string `json:"web_url" url:"web_url,key"` // URL for logging into web interface of add-on in attached app context
+}
+
+// Delete an existing add-on attachment.
+func (s *Service) AddOnAttachmentDelete(ctx context.Context, addOnAttachmentIdentity string) (*AddOnAttachmentDeleteResult, error) { + var addOnAttachment AddOnAttachmentDeleteResult + return &addOnAttachment, s.Delete(ctx, &addOnAttachment, fmt.Sprintf("/addon-attachments/%v", addOnAttachmentIdentity)) +} + +type AddOnAttachmentInfoResult struct { + Addon struct { + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + Name string `json:"name" url:"name,key"` // unique name of app + } `json:"app" url:"app,key"` // billing application associated with this add-on + ID string `json:"id" url:"id,key"` // unique identifier of add-on + Name string `json:"name" url:"name,key"` // globally unique name of the add-on + Plan struct { + ID string `json:"id" url:"id,key"` // unique identifier of this plan + Name string `json:"name" url:"name,key"` // unique name of this plan + } `json:"plan" url:"plan,key"` // identity of add-on plan + } `json:"addon" url:"addon,key"` // identity of add-on + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + Name string `json:"name" url:"name,key"` // unique name of app + } `json:"app" url:"app,key"` // application that is attached to add-on + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when add-on attachment was created + ID string `json:"id" url:"id,key"` // unique identifier of this add-on attachment + Name string `json:"name" url:"name,key"` // unique name for this add-on attachment to this app + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when add-on attachment was updated + WebURL *string `json:"web_url" url:"web_url,key"` // URL for logging into web interface of add-on in attached app context +} + +// Info for existing add-on attachment. 
+func (s *Service) AddOnAttachmentInfo(ctx context.Context, addOnAttachmentIdentity string) (*AddOnAttachmentInfoResult, error) { + var addOnAttachment AddOnAttachmentInfoResult + return &addOnAttachment, s.Get(ctx, &addOnAttachment, fmt.Sprintf("/addon-attachments/%v", addOnAttachmentIdentity), nil, nil) +} + +type AddOnAttachmentListResult []struct { + Addon struct { + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + Name string `json:"name" url:"name,key"` // unique name of app + } `json:"app" url:"app,key"` // billing application associated with this add-on + ID string `json:"id" url:"id,key"` // unique identifier of add-on + Name string `json:"name" url:"name,key"` // globally unique name of the add-on + Plan struct { + ID string `json:"id" url:"id,key"` // unique identifier of this plan + Name string `json:"name" url:"name,key"` // unique name of this plan + } `json:"plan" url:"plan,key"` // identity of add-on plan + } `json:"addon" url:"addon,key"` // identity of add-on + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + Name string `json:"name" url:"name,key"` // unique name of app + } `json:"app" url:"app,key"` // application that is attached to add-on + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when add-on attachment was created + ID string `json:"id" url:"id,key"` // unique identifier of this add-on attachment + Name string `json:"name" url:"name,key"` // unique name for this add-on attachment to this app + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when add-on attachment was updated + WebURL *string `json:"web_url" url:"web_url,key"` // URL for logging into web interface of add-on in attached app context +} + +// List existing add-on attachments. +func (s *Service) AddOnAttachmentList(ctx context.Context, lr *ListRange) (AddOnAttachmentListResult, error) { + var addOnAttachment AddOnAttachmentListResult + return addOnAttachment, s.Get(ctx, &addOnAttachment, fmt.Sprintf("/addon-attachments"), nil, lr) +} + +type AddOnAttachmentListByAddOnResult []struct { + Addon struct { + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + Name string `json:"name" url:"name,key"` // unique name of app + } `json:"app" url:"app,key"` // billing application associated with this add-on + ID string `json:"id" url:"id,key"` // unique identifier of add-on + Name string `json:"name" url:"name,key"` // globally unique name of the add-on + Plan struct { + ID string `json:"id" url:"id,key"` // unique identifier of this plan + Name string `json:"name" url:"name,key"` // unique name of this plan + } `json:"plan" url:"plan,key"` // identity of add-on plan + } `json:"addon" url:"addon,key"` // identity of add-on + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + Name string `json:"name" url:"name,key"` // unique name of app + } `json:"app" url:"app,key"` // application that is attached to add-on + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when add-on attachment was created + ID string `json:"id" url:"id,key"` // unique identifier of this add-on attachment + Name string `json:"name" url:"name,key"` // unique name for this add-on attachment to this app + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when add-on attachment was updated + WebURL *string `json:"web_url" url:"web_url,key"` // URL for logging into web interface of add-on in attached app context +} + +// List existing add-on attachments for an add-on. 
+func (s *Service) AddOnAttachmentListByAddOn(ctx context.Context, addOnIdentity string, lr *ListRange) (AddOnAttachmentListByAddOnResult, error) { + var addOnAttachment AddOnAttachmentListByAddOnResult + return addOnAttachment, s.Get(ctx, &addOnAttachment, fmt.Sprintf("/addons/%v/addon-attachments", addOnIdentity), nil, lr) +} + +type AddOnAttachmentListByAppResult []struct { + Addon struct { + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + Name string `json:"name" url:"name,key"` // unique name of app + } `json:"app" url:"app,key"` // billing application associated with this add-on + ID string `json:"id" url:"id,key"` // unique identifier of add-on + Name string `json:"name" url:"name,key"` // globally unique name of the add-on + Plan struct { + ID string `json:"id" url:"id,key"` // unique identifier of this plan + Name string `json:"name" url:"name,key"` // unique name of this plan + } `json:"plan" url:"plan,key"` // identity of add-on plan + } `json:"addon" url:"addon,key"` // identity of add-on + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + Name string `json:"name" url:"name,key"` // unique name of app + } `json:"app" url:"app,key"` // application that is attached to add-on + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when add-on attachment was created + ID string `json:"id" url:"id,key"` // unique identifier of this add-on attachment + Name string `json:"name" url:"name,key"` // unique name for this add-on attachment to this app + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when add-on attachment was updated + WebURL *string `json:"web_url" url:"web_url,key"` // URL for logging into web interface of add-on in attached app context +} + +// List existing add-on attachments for an app. 
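All of these list calls page through the same *ListRange parameter. A hedged sketch of a capped, ordered query — the Field and Max members are assumed from upstream heroku-go and should be checked against this vendored copy; the add-on identity is a placeholder:

    // fetch at most 10 attachments for one add-on, ordered by id
    lr := &ListRange{Field: "id", Max: 10} // Field/Max assumed from upstream heroku-go
    attachments, err := svc.AddOnAttachmentListByAddOn(context.Background(), "example-addon", lr)
    if err != nil {
    	// handle the error
    }
    _ = attachments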
+func (s *Service) AddOnAttachmentListByApp(ctx context.Context, appIdentity string, lr *ListRange) (AddOnAttachmentListByAppResult, error) { + var addOnAttachment AddOnAttachmentListByAppResult + return addOnAttachment, s.Get(ctx, &addOnAttachment, fmt.Sprintf("/apps/%v/addon-attachments", appIdentity), nil, lr) +} + +type AddOnAttachmentInfoByAppResult struct { + Addon struct { + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + Name string `json:"name" url:"name,key"` // unique name of app + } `json:"app" url:"app,key"` // billing application associated with this add-on + ID string `json:"id" url:"id,key"` // unique identifier of add-on + Name string `json:"name" url:"name,key"` // globally unique name of the add-on + Plan struct { + ID string `json:"id" url:"id,key"` // unique identifier of this plan + Name string `json:"name" url:"name,key"` // unique name of this plan + } `json:"plan" url:"plan,key"` // identity of add-on plan + } `json:"addon" url:"addon,key"` // identity of add-on + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + Name string `json:"name" url:"name,key"` // unique name of app + } `json:"app" url:"app,key"` // application that is attached to add-on + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when add-on attachment was created + ID string `json:"id" url:"id,key"` // unique identifier of this add-on attachment + Name string `json:"name" url:"name,key"` // unique name for this add-on attachment to this app + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when add-on attachment was updated + WebURL *string `json:"web_url" url:"web_url,key"` // URL for logging into web interface of add-on in attached app context +} + +// Info for existing add-on attachment for an app. +func (s *Service) AddOnAttachmentInfoByApp(ctx context.Context, appIdentity string, addOnAttachmentScopedIdentity string) (*AddOnAttachmentInfoByAppResult, error) { + var addOnAttachment AddOnAttachmentInfoByAppResult + return &addOnAttachment, s.Get(ctx, &addOnAttachment, fmt.Sprintf("/apps/%v/addon-attachments/%v", appIdentity, addOnAttachmentScopedIdentity), nil, nil) +} + +// Configuration of an Add-on +type AddOnConfig struct { + Name string `json:"name" url:"name,key"` // unique name of the config + Value *string `json:"value" url:"value,key"` // value of the config +} +type AddOnConfigListResult []struct { + Name string `json:"name" url:"name,key"` // unique name of the config + Value *string `json:"value" url:"value,key"` // value of the config +} + +// Get an add-on's config. Accessible by customers with access and by +// the add-on partner providing this add-on. +func (s *Service) AddOnConfigList(ctx context.Context, addOnIdentity string, lr *ListRange) (AddOnConfigListResult, error) { + var addOnConfig AddOnConfigListResult + return addOnConfig, s.Get(ctx, &addOnConfig, fmt.Sprintf("/addons/%v/config", addOnIdentity), nil, lr) +} + +type AddOnConfigUpdateOpts struct { + Config *[]*struct { + Name *string `json:"name,omitempty" url:"name,omitempty,key"` // unique name of the config + Value *string `json:"value,omitempty" url:"value,omitempty,key"` // value of the config + } `json:"config,omitempty" url:"config,omitempty,key"` +} +type AddOnConfigUpdateResult []struct { + Name string `json:"name" url:"name,key"` // unique name of the config + Value *string `json:"value" url:"value,key"` // value of the config +} + +// Update an add-on's config. 
Can only be accessed by the add-on partner +// providing this add-on. +func (s *Service) AddOnConfigUpdate(ctx context.Context, addOnIdentity string, o AddOnConfigUpdateOpts) (AddOnConfigUpdateResult, error) { + var addOnConfig AddOnConfigUpdateResult + return addOnConfig, s.Patch(ctx, &addOnConfig, fmt.Sprintf("/addons/%v/config", addOnIdentity), o) +} + +// Add-on Plan Actions are Provider functionality for specific add-on +// installations +type AddOnPlanAction struct { + Action string `json:"action" url:"action,key"` // identifier of the action to take that is sent via SSO + ID string `json:"id" url:"id,key"` // a unique identifier + Label string `json:"label" url:"label,key"` // the display text shown in Dashboard + RequiresOwner bool `json:"requires_owner" url:"requires_owner,key"` // if the action requires the user to own the app + URL string `json:"url" url:"url,key"` // absolute URL to use instead of an action +} + +// Add-on region capabilities represent the relationship between an +// Add-on Service and a specific Region. Only Beta and GA add-ons are +// returned by these endpoints. +type AddOnRegionCapability struct { + AddonService struct { + CliPluginName *string `json:"cli_plugin_name" url:"cli_plugin_name,key"` // npm package name of the add-on service's Heroku CLI plugin + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when add-on-service was created + HumanName string `json:"human_name" url:"human_name,key"` // human-readable name of the add-on service provider + ID string `json:"id" url:"id,key"` // unique identifier of this add-on-service + Name string `json:"name" url:"name,key"` // unique name of this add-on-service + State string `json:"state" url:"state,key"` // release status for add-on service + SupportsMultipleInstallations bool `json:"supports_multiple_installations" url:"supports_multiple_installations,key"` // whether or not apps can have access to more than one instance of this + // add-on at the same time + SupportsSharing bool `json:"supports_sharing" url:"supports_sharing,key"` // whether or not apps can have access to add-ons billed to a different + // app + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when add-on-service was updated + } `json:"addon_service" url:"addon_service,key"` // Add-on services represent add-ons that may be provisioned for apps. + // Endpoints under add-on services can be accessed without + // authentication. 
+ ID string `json:"id" url:"id,key"` // unique identifier of this add-on-region-capability + Region struct { + Country string `json:"country" url:"country,key"` // country where the region exists + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when region was created + Description string `json:"description" url:"description,key"` // description of region + ID string `json:"id" url:"id,key"` // unique identifier of region + Locale string `json:"locale" url:"locale,key"` // area in the country where the region exists + Name string `json:"name" url:"name,key"` // unique name of region + PrivateCapable bool `json:"private_capable" url:"private_capable,key"` // whether or not region is available for creating a Private Space + Provider struct { + Name string `json:"name" url:"name,key"` // name of provider + Region string `json:"region" url:"region,key"` // region name used by provider + } `json:"provider" url:"provider,key"` // provider of underlying substrate + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when region was updated + } `json:"region" url:"region,key"` // A region represents a geographic location in which your application + // may run. + SupportsPrivateNetworking bool `json:"supports_private_networking" url:"supports_private_networking,key"` // whether the add-on can be installed to a Space +} +type AddOnRegionCapabilityListResult []struct { + AddonService struct { + CliPluginName *string `json:"cli_plugin_name" url:"cli_plugin_name,key"` // npm package name of the add-on service's Heroku CLI plugin + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when add-on-service was created + HumanName string `json:"human_name" url:"human_name,key"` // human-readable name of the add-on service provider + ID string `json:"id" url:"id,key"` // unique identifier of this add-on-service + Name string `json:"name" url:"name,key"` // unique name of this add-on-service + State string `json:"state" url:"state,key"` // release status for add-on service + SupportsMultipleInstallations bool `json:"supports_multiple_installations" url:"supports_multiple_installations,key"` // whether or not apps can have access to more than one instance of this + // add-on at the same time + SupportsSharing bool `json:"supports_sharing" url:"supports_sharing,key"` // whether or not apps can have access to add-ons billed to a different + // app + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when add-on-service was updated + } `json:"addon_service" url:"addon_service,key"` // Add-on services represent add-ons that may be provisioned for apps. + // Endpoints under add-on services can be accessed without + // authentication. 
+ ID string `json:"id" url:"id,key"` // unique identifier of this add-on-region-capability + Region struct { + Country string `json:"country" url:"country,key"` // country where the region exists + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when region was created + Description string `json:"description" url:"description,key"` // description of region + ID string `json:"id" url:"id,key"` // unique identifier of region + Locale string `json:"locale" url:"locale,key"` // area in the country where the region exists + Name string `json:"name" url:"name,key"` // unique name of region + PrivateCapable bool `json:"private_capable" url:"private_capable,key"` // whether or not region is available for creating a Private Space + Provider struct { + Name string `json:"name" url:"name,key"` // name of provider + Region string `json:"region" url:"region,key"` // region name used by provider + } `json:"provider" url:"provider,key"` // provider of underlying substrate + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when region was updated + } `json:"region" url:"region,key"` // A region represents a geographic location in which your application + // may run. + SupportsPrivateNetworking bool `json:"supports_private_networking" url:"supports_private_networking,key"` // whether the add-on can be installed to a Space +} + +// List all existing add-on region capabilities. +func (s *Service) AddOnRegionCapabilityList(ctx context.Context, lr *ListRange) (AddOnRegionCapabilityListResult, error) { + var addOnRegionCapability AddOnRegionCapabilityListResult + return addOnRegionCapability, s.Get(ctx, &addOnRegionCapability, fmt.Sprintf("/addon-region-capabilities"), nil, lr) +} + +type AddOnRegionCapabilityListByAddOnServiceResult []struct { + AddonService struct { + CliPluginName *string `json:"cli_plugin_name" url:"cli_plugin_name,key"` // npm package name of the add-on service's Heroku CLI plugin + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when add-on-service was created + HumanName string `json:"human_name" url:"human_name,key"` // human-readable name of the add-on service provider + ID string `json:"id" url:"id,key"` // unique identifier of this add-on-service + Name string `json:"name" url:"name,key"` // unique name of this add-on-service + State string `json:"state" url:"state,key"` // release status for add-on service + SupportsMultipleInstallations bool `json:"supports_multiple_installations" url:"supports_multiple_installations,key"` // whether or not apps can have access to more than one instance of this + // add-on at the same time + SupportsSharing bool `json:"supports_sharing" url:"supports_sharing,key"` // whether or not apps can have access to add-ons billed to a different + // app + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when add-on-service was updated + } `json:"addon_service" url:"addon_service,key"` // Add-on services represent add-ons that may be provisioned for apps. + // Endpoints under add-on services can be accessed without + // authentication. 
+ ID string `json:"id" url:"id,key"` // unique identifier of this add-on-region-capability + Region struct { + Country string `json:"country" url:"country,key"` // country where the region exists + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when region was created + Description string `json:"description" url:"description,key"` // description of region + ID string `json:"id" url:"id,key"` // unique identifier of region + Locale string `json:"locale" url:"locale,key"` // area in the country where the region exists + Name string `json:"name" url:"name,key"` // unique name of region + PrivateCapable bool `json:"private_capable" url:"private_capable,key"` // whether or not region is available for creating a Private Space + Provider struct { + Name string `json:"name" url:"name,key"` // name of provider + Region string `json:"region" url:"region,key"` // region name used by provider + } `json:"provider" url:"provider,key"` // provider of underlying substrate + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when region was updated + } `json:"region" url:"region,key"` // A region represents a geographic location in which your application + // may run. + SupportsPrivateNetworking bool `json:"supports_private_networking" url:"supports_private_networking,key"` // whether the add-on can be installed to a Space +} + +// List existing add-on region capabilities for an add-on-service +func (s *Service) AddOnRegionCapabilityListByAddOnService(ctx context.Context, addOnServiceIdentity string, lr *ListRange) (AddOnRegionCapabilityListByAddOnServiceResult, error) { + var addOnRegionCapability AddOnRegionCapabilityListByAddOnServiceResult + return addOnRegionCapability, s.Get(ctx, &addOnRegionCapability, fmt.Sprintf("/addon-services/%v/region-capabilities", addOnServiceIdentity), nil, lr) } // Add-on services represent add-ons that may be provisioned for apps. // Endpoints under add-on services can be accessed without // authentication. 
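The region-capability endpoint above is what a caller would use to check, for example, Private Space support before provisioning an add-on. A small sketch under the same same-package assumption, with a placeholder add-on-service name:

    func supportsPrivateSpaces(svc *Service, addOnService string) (bool, error) {
    	caps, err := svc.AddOnRegionCapabilityListByAddOnService(context.Background(), addOnService, nil)
    	if err != nil {
    		return false, err
    	}
    	for _, c := range caps {
    		// true if any region offering this service can install into a Space
    		if c.SupportsPrivateNetworking {
    			return true, nil
    		}
    	}
    	return false, nil
    }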
-type AddonService struct { - CreatedAt time.Time `json:"created_at"` // when addon-service was created - ID string `json:"id"` // unique identifier of this addon-service - Name string `json:"name"` // unique name of this addon-service - UpdatedAt time.Time `json:"updated_at"` // when addon-service was updated +type AddOnService struct { + CliPluginName *string `json:"cli_plugin_name" url:"cli_plugin_name,key"` // npm package name of the add-on service's Heroku CLI plugin + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when add-on-service was created + HumanName string `json:"human_name" url:"human_name,key"` // human-readable name of the add-on service provider + ID string `json:"id" url:"id,key"` // unique identifier of this add-on-service + Name string `json:"name" url:"name,key"` // unique name of this add-on-service + State string `json:"state" url:"state,key"` // release status for add-on service + SupportsMultipleInstallations bool `json:"supports_multiple_installations" url:"supports_multiple_installations,key"` // whether or not apps can have access to more than one instance of this + // add-on at the same time + SupportsSharing bool `json:"supports_sharing" url:"supports_sharing,key"` // whether or not apps can have access to add-ons billed to a different + // app + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when add-on-service was updated +} +type AddOnServiceInfoResult struct { + CliPluginName *string `json:"cli_plugin_name" url:"cli_plugin_name,key"` // npm package name of the add-on service's Heroku CLI plugin + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when add-on-service was created + HumanName string `json:"human_name" url:"human_name,key"` // human-readable name of the add-on service provider + ID string `json:"id" url:"id,key"` // unique identifier of this add-on-service + Name string `json:"name" url:"name,key"` // unique name of this add-on-service + State string `json:"state" url:"state,key"` // release status for add-on service + SupportsMultipleInstallations bool `json:"supports_multiple_installations" url:"supports_multiple_installations,key"` // whether or not apps can have access to more than one instance of this + // add-on at the same time + SupportsSharing bool `json:"supports_sharing" url:"supports_sharing,key"` // whether or not apps can have access to add-ons billed to a different + // app + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when add-on-service was updated } -// Info for existing addon-service. -func (s *Service) AddonServiceInfo(addonServiceIdentity string) (*AddonService, error) { - var addonService AddonService - return &addonService, s.Get(&addonService, fmt.Sprintf("/addon-services/%v", addonServiceIdentity), nil) +// Info for existing add-on-service. +func (s *Service) AddOnServiceInfo(ctx context.Context, addOnServiceIdentity string) (*AddOnServiceInfoResult, error) { + var addOnService AddOnServiceInfoResult + return &addOnService, s.Get(ctx, &addOnService, fmt.Sprintf("/addon-services/%v", addOnServiceIdentity), nil, nil) } -// List existing addon-services. 
-func (s *Service) AddonServiceList(lr *ListRange) ([]*AddonService, error) { - var addonServiceList []*AddonService - return addonServiceList, s.Get(&addonServiceList, fmt.Sprintf("/addon-services"), lr) +type AddOnServiceListResult []struct { + CliPluginName *string `json:"cli_plugin_name" url:"cli_plugin_name,key"` // npm package name of the add-on service's Heroku CLI plugin + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when add-on-service was created + HumanName string `json:"human_name" url:"human_name,key"` // human-readable name of the add-on service provider + ID string `json:"id" url:"id,key"` // unique identifier of this add-on-service + Name string `json:"name" url:"name,key"` // unique name of this add-on-service + State string `json:"state" url:"state,key"` // release status for add-on service + SupportsMultipleInstallations bool `json:"supports_multiple_installations" url:"supports_multiple_installations,key"` // whether or not apps can have access to more than one instance of this + // add-on at the same time + SupportsSharing bool `json:"supports_sharing" url:"supports_sharing,key"` // whether or not apps can have access to add-ons billed to a different + // app + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when add-on-service was updated +} + +// List existing add-on-services. +func (s *Service) AddOnServiceList(ctx context.Context, lr *ListRange) (AddOnServiceListResult, error) { + var addOnService AddOnServiceListResult + return addOnService, s.Get(ctx, &addOnService, fmt.Sprintf("/addon-services"), nil, lr) } // An app represents the program that you would like to deploy and run // on Heroku. type App struct { - ArchivedAt *time.Time `json:"archived_at"` // when app was archived - BuildpackProvidedDescription *string `json:"buildpack_provided_description"` // description from buildpack of app - CreatedAt time.Time `json:"created_at"` // when app was created - GitURL string `json:"git_url"` // git repo URL of app - ID string `json:"id"` // unique identifier of app - Maintenance bool `json:"maintenance"` // maintenance status of app - Name string `json:"name"` // unique name of app - Owner struct { - Email string `json:"email"` // unique email address of account - ID string `json:"id"` // unique identifier of an account - } `json:"owner"` // identity of app owner + ArchivedAt *time.Time `json:"archived_at" url:"archived_at,key"` // when app was archived + BuildStack struct { + ID string `json:"id" url:"id,key"` // unique identifier of stack + Name string `json:"name" url:"name,key"` // unique name of stack + } `json:"build_stack" url:"build_stack,key"` // identity of the stack that will be used for new builds + BuildpackProvidedDescription *string `json:"buildpack_provided_description" url:"buildpack_provided_description,key"` // description from buildpack of app + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when app was created + GitURL string `json:"git_url" url:"git_url,key"` // git repo URL of app + ID string `json:"id" url:"id,key"` // unique identifier of app + Maintenance bool `json:"maintenance" url:"maintenance,key"` // maintenance status of app + Name string `json:"name" url:"name,key"` // unique name of app + Organization *struct { + ID string `json:"id" url:"id,key"` // unique identifier of organization + Name string `json:"name" url:"name,key"` // unique name of organization + } `json:"organization" url:"organization,key"` // identity of organization + Owner struct { + Email string `json:"email" 
url:"email,key"` // unique email address of account + ID string `json:"id" url:"id,key"` // unique identifier of an account + } `json:"owner" url:"owner,key"` // identity of app owner Region struct { - ID string `json:"id"` // unique identifier of region - Name string `json:"name"` // unique name of region - } `json:"region"` // identity of app region - ReleasedAt *time.Time `json:"released_at"` // when app was released - RepoSize *int `json:"repo_size"` // git repo size in bytes of app - SlugSize *int `json:"slug_size"` // slug size in bytes of app - Stack struct { - ID string `json:"id"` // unique identifier of stack - Name string `json:"name"` // unique name of stack - } `json:"stack"` // identity of app stack - UpdatedAt time.Time `json:"updated_at"` // when app was updated - WebURL string `json:"web_url"` // web URL of app + ID string `json:"id" url:"id,key"` // unique identifier of region + Name string `json:"name" url:"name,key"` // unique name of region + } `json:"region" url:"region,key"` // identity of app region + ReleasedAt *time.Time `json:"released_at" url:"released_at,key"` // when app was released + RepoSize *int `json:"repo_size" url:"repo_size,key"` // git repo size in bytes of app + SlugSize *int `json:"slug_size" url:"slug_size,key"` // slug size in bytes of app + Space *struct { + ID string `json:"id" url:"id,key"` // unique identifier of space + Name string `json:"name" url:"name,key"` // unique name of space + Shield bool `json:"shield" url:"shield,key"` // true if this space has shield enabled + } `json:"space" url:"space,key"` // identity of space + Stack struct { + ID string `json:"id" url:"id,key"` // unique identifier of stack + Name string `json:"name" url:"name,key"` // unique name of stack + } `json:"stack" url:"stack,key"` // identity of app stack + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when app was updated + WebURL string `json:"web_url" url:"web_url,key"` // web URL of app } type AppCreateOpts struct { - Name *string `json:"name,omitempty"` // unique name of app - Region *string `json:"region,omitempty"` // unique identifier of region - Stack *string `json:"stack,omitempty"` // unique name of stack + Name *string `json:"name,omitempty" url:"name,omitempty,key"` // unique name of app + Region *string `json:"region,omitempty" url:"region,omitempty,key"` // unique identifier of region + Stack *string `json:"stack,omitempty" url:"stack,omitempty,key"` // unique name of stack +} +type AppCreateResult struct { + ArchivedAt *time.Time `json:"archived_at" url:"archived_at,key"` // when app was archived + BuildStack struct { + ID string `json:"id" url:"id,key"` // unique identifier of stack + Name string `json:"name" url:"name,key"` // unique name of stack + } `json:"build_stack" url:"build_stack,key"` // identity of the stack that will be used for new builds + BuildpackProvidedDescription *string `json:"buildpack_provided_description" url:"buildpack_provided_description,key"` // description from buildpack of app + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when app was created + GitURL string `json:"git_url" url:"git_url,key"` // git repo URL of app + ID string `json:"id" url:"id,key"` // unique identifier of app + Maintenance bool `json:"maintenance" url:"maintenance,key"` // maintenance status of app + Name string `json:"name" url:"name,key"` // unique name of app + Organization *struct { + ID string `json:"id" url:"id,key"` // unique identifier of organization + Name string `json:"name" url:"name,key"` // unique 
name of organization + } `json:"organization" url:"organization,key"` // identity of organization + Owner struct { + Email string `json:"email" url:"email,key"` // unique email address of account + ID string `json:"id" url:"id,key"` // unique identifier of an account + } `json:"owner" url:"owner,key"` // identity of app owner + Region struct { + ID string `json:"id" url:"id,key"` // unique identifier of region + Name string `json:"name" url:"name,key"` // unique name of region + } `json:"region" url:"region,key"` // identity of app region + ReleasedAt *time.Time `json:"released_at" url:"released_at,key"` // when app was released + RepoSize *int `json:"repo_size" url:"repo_size,key"` // git repo size in bytes of app + SlugSize *int `json:"slug_size" url:"slug_size,key"` // slug size in bytes of app + Space *struct { + ID string `json:"id" url:"id,key"` // unique identifier of space + Name string `json:"name" url:"name,key"` // unique name of space + Shield bool `json:"shield" url:"shield,key"` // true if this space has shield enabled + } `json:"space" url:"space,key"` // identity of space + Stack struct { + ID string `json:"id" url:"id,key"` // unique identifier of stack + Name string `json:"name" url:"name,key"` // unique name of stack + } `json:"stack" url:"stack,key"` // identity of app stack + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when app was updated + WebURL string `json:"web_url" url:"web_url,key"` // web URL of app } // Create a new app. -func (s *Service) AppCreate(o struct { - Name *string `json:"name,omitempty"` // unique name of app - Region *string `json:"region,omitempty"` // unique identifier of region - Stack *string `json:"stack,omitempty"` // unique name of stack -}) (*App, error) { - var app App - return &app, s.Post(&app, fmt.Sprintf("/apps"), o) +func (s *Service) AppCreate(ctx context.Context, o AppCreateOpts) (*AppCreateResult, error) { + var app AppCreateResult + return &app, s.Post(ctx, &app, fmt.Sprintf("/apps"), o) +} + +type AppDeleteResult struct { + ArchivedAt *time.Time `json:"archived_at" url:"archived_at,key"` // when app was archived + BuildStack struct { + ID string `json:"id" url:"id,key"` // unique identifier of stack + Name string `json:"name" url:"name,key"` // unique name of stack + } `json:"build_stack" url:"build_stack,key"` // identity of the stack that will be used for new builds + BuildpackProvidedDescription *string `json:"buildpack_provided_description" url:"buildpack_provided_description,key"` // description from buildpack of app + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when app was created + GitURL string `json:"git_url" url:"git_url,key"` // git repo URL of app + ID string `json:"id" url:"id,key"` // unique identifier of app + Maintenance bool `json:"maintenance" url:"maintenance,key"` // maintenance status of app + Name string `json:"name" url:"name,key"` // unique name of app + Organization *struct { + ID string `json:"id" url:"id,key"` // unique identifier of organization + Name string `json:"name" url:"name,key"` // unique name of organization + } `json:"organization" url:"organization,key"` // identity of organization + Owner struct { + Email string `json:"email" url:"email,key"` // unique email address of account + ID string `json:"id" url:"id,key"` // unique identifier of an account + } `json:"owner" url:"owner,key"` // identity of app owner + Region struct { + ID string `json:"id" url:"id,key"` // unique identifier of region + Name string `json:"name" url:"name,key"` // unique name 
of region + } `json:"region" url:"region,key"` // identity of app region + ReleasedAt *time.Time `json:"released_at" url:"released_at,key"` // when app was released + RepoSize *int `json:"repo_size" url:"repo_size,key"` // git repo size in bytes of app + SlugSize *int `json:"slug_size" url:"slug_size,key"` // slug size in bytes of app + Space *struct { + ID string `json:"id" url:"id,key"` // unique identifier of space + Name string `json:"name" url:"name,key"` // unique name of space + Shield bool `json:"shield" url:"shield,key"` // true if this space has shield enabled + } `json:"space" url:"space,key"` // identity of space + Stack struct { + ID string `json:"id" url:"id,key"` // unique identifier of stack + Name string `json:"name" url:"name,key"` // unique name of stack + } `json:"stack" url:"stack,key"` // identity of app stack + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when app was updated + WebURL string `json:"web_url" url:"web_url,key"` // web URL of app } // Delete an existing app. -func (s *Service) AppDelete(appIdentity string) error { - return s.Delete(fmt.Sprintf("/apps/%v", appIdentity)) +func (s *Service) AppDelete(ctx context.Context, appIdentity string) (*AppDeleteResult, error) { + var app AppDeleteResult + return &app, s.Delete(ctx, &app, fmt.Sprintf("/apps/%v", appIdentity)) +} + +type AppInfoResult struct { + ArchivedAt *time.Time `json:"archived_at" url:"archived_at,key"` // when app was archived + BuildStack struct { + ID string `json:"id" url:"id,key"` // unique identifier of stack + Name string `json:"name" url:"name,key"` // unique name of stack + } `json:"build_stack" url:"build_stack,key"` // identity of the stack that will be used for new builds + BuildpackProvidedDescription *string `json:"buildpack_provided_description" url:"buildpack_provided_description,key"` // description from buildpack of app + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when app was created + GitURL string `json:"git_url" url:"git_url,key"` // git repo URL of app + ID string `json:"id" url:"id,key"` // unique identifier of app + Maintenance bool `json:"maintenance" url:"maintenance,key"` // maintenance status of app + Name string `json:"name" url:"name,key"` // unique name of app + Organization *struct { + ID string `json:"id" url:"id,key"` // unique identifier of organization + Name string `json:"name" url:"name,key"` // unique name of organization + } `json:"organization" url:"organization,key"` // identity of organization + Owner struct { + Email string `json:"email" url:"email,key"` // unique email address of account + ID string `json:"id" url:"id,key"` // unique identifier of an account + } `json:"owner" url:"owner,key"` // identity of app owner + Region struct { + ID string `json:"id" url:"id,key"` // unique identifier of region + Name string `json:"name" url:"name,key"` // unique name of region + } `json:"region" url:"region,key"` // identity of app region + ReleasedAt *time.Time `json:"released_at" url:"released_at,key"` // when app was released + RepoSize *int `json:"repo_size" url:"repo_size,key"` // git repo size in bytes of app + SlugSize *int `json:"slug_size" url:"slug_size,key"` // slug size in bytes of app + Space *struct { + ID string `json:"id" url:"id,key"` // unique identifier of space + Name string `json:"name" url:"name,key"` // unique name of space + Shield bool `json:"shield" url:"shield,key"` // true if this space has shield enabled + } `json:"space" url:"space,key"` // identity of space + Stack struct { + ID string 
`json:"id" url:"id,key"` // unique identifier of stack + Name string `json:"name" url:"name,key"` // unique name of stack + } `json:"stack" url:"stack,key"` // identity of app stack + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when app was updated + WebURL string `json:"web_url" url:"web_url,key"` // web URL of app } // Info for existing app. -func (s *Service) AppInfo(appIdentity string) (*App, error) { - var app App - return &app, s.Get(&app, fmt.Sprintf("/apps/%v", appIdentity), nil) +func (s *Service) AppInfo(ctx context.Context, appIdentity string) (*AppInfoResult, error) { + var app AppInfoResult + return &app, s.Get(ctx, &app, fmt.Sprintf("/apps/%v", appIdentity), nil, nil) +} + +type AppListResult []struct { + ArchivedAt *time.Time `json:"archived_at" url:"archived_at,key"` // when app was archived + BuildStack struct { + ID string `json:"id" url:"id,key"` // unique identifier of stack + Name string `json:"name" url:"name,key"` // unique name of stack + } `json:"build_stack" url:"build_stack,key"` // identity of the stack that will be used for new builds + BuildpackProvidedDescription *string `json:"buildpack_provided_description" url:"buildpack_provided_description,key"` // description from buildpack of app + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when app was created + GitURL string `json:"git_url" url:"git_url,key"` // git repo URL of app + ID string `json:"id" url:"id,key"` // unique identifier of app + Maintenance bool `json:"maintenance" url:"maintenance,key"` // maintenance status of app + Name string `json:"name" url:"name,key"` // unique name of app + Organization *struct { + ID string `json:"id" url:"id,key"` // unique identifier of organization + Name string `json:"name" url:"name,key"` // unique name of organization + } `json:"organization" url:"organization,key"` // identity of organization + Owner struct { + Email string `json:"email" url:"email,key"` // unique email address of account + ID string `json:"id" url:"id,key"` // unique identifier of an account + } `json:"owner" url:"owner,key"` // identity of app owner + Region struct { + ID string `json:"id" url:"id,key"` // unique identifier of region + Name string `json:"name" url:"name,key"` // unique name of region + } `json:"region" url:"region,key"` // identity of app region + ReleasedAt *time.Time `json:"released_at" url:"released_at,key"` // when app was released + RepoSize *int `json:"repo_size" url:"repo_size,key"` // git repo size in bytes of app + SlugSize *int `json:"slug_size" url:"slug_size,key"` // slug size in bytes of app + Space *struct { + ID string `json:"id" url:"id,key"` // unique identifier of space + Name string `json:"name" url:"name,key"` // unique name of space + Shield bool `json:"shield" url:"shield,key"` // true if this space has shield enabled + } `json:"space" url:"space,key"` // identity of space + Stack struct { + ID string `json:"id" url:"id,key"` // unique identifier of stack + Name string `json:"name" url:"name,key"` // unique name of stack + } `json:"stack" url:"stack,key"` // identity of app stack + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when app was updated + WebURL string `json:"web_url" url:"web_url,key"` // web URL of app } // List existing apps. 
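Because AppCreateOpts uses pointer fields to distinguish "unset" from zero values, callers pass addresses of locals. A hypothetical create-then-fetch sketch with placeholder values:

    func createAndFetch(svc *Service) error {
    	ctx := context.Background()
    	name, region := "example-app", "us" // placeholders
    	created, err := svc.AppCreate(ctx, AppCreateOpts{Name: &name, Region: &region})
    	if err != nil {
    		return err
    	}
    	info, err := svc.AppInfo(ctx, created.ID)
    	if err != nil {
    		return err
    	}
    	fmt.Println(info.Name, info.WebURL)
    	return nil
    }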
-func (s *Service) AppList(lr *ListRange) ([]*App, error) { - var appList []*App - return appList, s.Get(&appList, fmt.Sprintf("/apps"), lr) +func (s *Service) AppList(ctx context.Context, lr *ListRange) (AppListResult, error) { + var app AppListResult + return app, s.Get(ctx, &app, fmt.Sprintf("/apps"), nil, lr) +} + +type AppListOwnedAndCollaboratedResult []struct { + ArchivedAt *time.Time `json:"archived_at" url:"archived_at,key"` // when app was archived + BuildStack struct { + ID string `json:"id" url:"id,key"` // unique identifier of stack + Name string `json:"name" url:"name,key"` // unique name of stack + } `json:"build_stack" url:"build_stack,key"` // identity of the stack that will be used for new builds + BuildpackProvidedDescription *string `json:"buildpack_provided_description" url:"buildpack_provided_description,key"` // description from buildpack of app + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when app was created + GitURL string `json:"git_url" url:"git_url,key"` // git repo URL of app + ID string `json:"id" url:"id,key"` // unique identifier of app + Maintenance bool `json:"maintenance" url:"maintenance,key"` // maintenance status of app + Name string `json:"name" url:"name,key"` // unique name of app + Organization *struct { + ID string `json:"id" url:"id,key"` // unique identifier of organization + Name string `json:"name" url:"name,key"` // unique name of organization + } `json:"organization" url:"organization,key"` // identity of organization + Owner struct { + Email string `json:"email" url:"email,key"` // unique email address of account + ID string `json:"id" url:"id,key"` // unique identifier of an account + } `json:"owner" url:"owner,key"` // identity of app owner + Region struct { + ID string `json:"id" url:"id,key"` // unique identifier of region + Name string `json:"name" url:"name,key"` // unique name of region + } `json:"region" url:"region,key"` // identity of app region + ReleasedAt *time.Time `json:"released_at" url:"released_at,key"` // when app was released + RepoSize *int `json:"repo_size" url:"repo_size,key"` // git repo size in bytes of app + SlugSize *int `json:"slug_size" url:"slug_size,key"` // slug size in bytes of app + Space *struct { + ID string `json:"id" url:"id,key"` // unique identifier of space + Name string `json:"name" url:"name,key"` // unique name of space + Shield bool `json:"shield" url:"shield,key"` // true if this space has shield enabled + } `json:"space" url:"space,key"` // identity of space + Stack struct { + ID string `json:"id" url:"id,key"` // unique identifier of stack + Name string `json:"name" url:"name,key"` // unique name of stack + } `json:"stack" url:"stack,key"` // identity of app stack + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when app was updated + WebURL string `json:"web_url" url:"web_url,key"` // web URL of app +} + +// List owned and collaborated apps (excludes organization apps). 
+func (s *Service) AppListOwnedAndCollaborated(ctx context.Context, accountIdentity string, lr *ListRange) (AppListOwnedAndCollaboratedResult, error) { + var app AppListOwnedAndCollaboratedResult + return app, s.Get(ctx, &app, fmt.Sprintf("/users/%v/apps", accountIdentity), nil, lr) } type AppUpdateOpts struct { - Maintenance *bool `json:"maintenance,omitempty"` // maintenance status of app - Name *string `json:"name,omitempty"` // unique name of app + BuildStack *string `json:"build_stack,omitempty" url:"build_stack,omitempty,key"` // unique name of stack + Maintenance *bool `json:"maintenance,omitempty" url:"maintenance,omitempty,key"` // maintenance status of app + Name *string `json:"name,omitempty" url:"name,omitempty,key"` // unique name of app +} +type AppUpdateResult struct { + ArchivedAt *time.Time `json:"archived_at" url:"archived_at,key"` // when app was archived + BuildStack struct { + ID string `json:"id" url:"id,key"` // unique identifier of stack + Name string `json:"name" url:"name,key"` // unique name of stack + } `json:"build_stack" url:"build_stack,key"` // identity of the stack that will be used for new builds + BuildpackProvidedDescription *string `json:"buildpack_provided_description" url:"buildpack_provided_description,key"` // description from buildpack of app + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when app was created + GitURL string `json:"git_url" url:"git_url,key"` // git repo URL of app + ID string `json:"id" url:"id,key"` // unique identifier of app + Maintenance bool `json:"maintenance" url:"maintenance,key"` // maintenance status of app + Name string `json:"name" url:"name,key"` // unique name of app + Organization *struct { + ID string `json:"id" url:"id,key"` // unique identifier of organization + Name string `json:"name" url:"name,key"` // unique name of organization + } `json:"organization" url:"organization,key"` // identity of organization + Owner struct { + Email string `json:"email" url:"email,key"` // unique email address of account + ID string `json:"id" url:"id,key"` // unique identifier of an account + } `json:"owner" url:"owner,key"` // identity of app owner + Region struct { + ID string `json:"id" url:"id,key"` // unique identifier of region + Name string `json:"name" url:"name,key"` // unique name of region + } `json:"region" url:"region,key"` // identity of app region + ReleasedAt *time.Time `json:"released_at" url:"released_at,key"` // when app was released + RepoSize *int `json:"repo_size" url:"repo_size,key"` // git repo size in bytes of app + SlugSize *int `json:"slug_size" url:"slug_size,key"` // slug size in bytes of app + Space *struct { + ID string `json:"id" url:"id,key"` // unique identifier of space + Name string `json:"name" url:"name,key"` // unique name of space + Shield bool `json:"shield" url:"shield,key"` // true if this space has shield enabled + } `json:"space" url:"space,key"` // identity of space + Stack struct { + ID string `json:"id" url:"id,key"` // unique identifier of stack + Name string `json:"name" url:"name,key"` // unique name of stack + } `json:"stack" url:"stack,key"` // identity of app stack + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when app was updated + WebURL string `json:"web_url" url:"web_url,key"` // web URL of app } // Update an existing app. 
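The update call defined next takes the same pointer-style opts, so partial updates are expressed by leaving fields nil. A sketch of flipping maintenance mode on a placeholder app:

    maintenance := true
    app, err := svc.AppUpdate(context.Background(), "example-app",
    	AppUpdateOpts{Maintenance: &maintenance}) // Name and BuildStack left nil: unchanged
    if err != nil {
    	// handle the error
    }
    fmt.Println(app.Maintenance)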
-func (s *Service) AppUpdate(appIdentity string, o struct { - Maintenance *bool `json:"maintenance,omitempty"` // maintenance status of app - Name *string `json:"name,omitempty"` // unique name of app -}) (*App, error) { - var app App - return &app, s.Patch(&app, fmt.Sprintf("/apps/%v", appIdentity), o) +func (s *Service) AppUpdate(ctx context.Context, appIdentity string, o AppUpdateOpts) (*AppUpdateResult, error) { + var app AppUpdateResult + return &app, s.Patch(ctx, &app, fmt.Sprintf("/apps/%v", appIdentity), o) } // An app feature represents a Heroku labs capability that can be // enabled or disabled for an app on Heroku. type AppFeature struct { - CreatedAt time.Time `json:"created_at"` // when app feature was created - Description string `json:"description"` // description of app feature - DocURL string `json:"doc_url"` // documentation URL of app feature - Enabled bool `json:"enabled"` // whether or not app feature has been enabled - ID string `json:"id"` // unique identifier of app feature - Name string `json:"name"` // unique name of app feature - State string `json:"state"` // state of app feature - UpdatedAt time.Time `json:"updated_at"` // when app feature was updated + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when app feature was created + Description string `json:"description" url:"description,key"` // description of app feature + DocURL string `json:"doc_url" url:"doc_url,key"` // documentation URL of app feature + Enabled bool `json:"enabled" url:"enabled,key"` // whether or not app feature has been enabled + ID string `json:"id" url:"id,key"` // unique identifier of app feature + Name string `json:"name" url:"name,key"` // unique name of app feature + State string `json:"state" url:"state,key"` // state of app feature + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when app feature was updated +} +type AppFeatureInfoResult struct { + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when app feature was created + Description string `json:"description" url:"description,key"` // description of app feature + DocURL string `json:"doc_url" url:"doc_url,key"` // documentation URL of app feature + Enabled bool `json:"enabled" url:"enabled,key"` // whether or not app feature has been enabled + ID string `json:"id" url:"id,key"` // unique identifier of app feature + Name string `json:"name" url:"name,key"` // unique name of app feature + State string `json:"state" url:"state,key"` // state of app feature + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when app feature was updated } // Info for an existing app feature. 
-func (s *Service) AppFeatureInfo(appIdentity string, appFeatureIdentity string) (*AppFeature, error) { - var appFeature AppFeature - return &appFeature, s.Get(&appFeature, fmt.Sprintf("/apps/%v/features/%v", appIdentity, appFeatureIdentity), nil) +func (s *Service) AppFeatureInfo(ctx context.Context, appIdentity string, appFeatureIdentity string) (*AppFeatureInfoResult, error) { + var appFeature AppFeatureInfoResult + return &appFeature, s.Get(ctx, &appFeature, fmt.Sprintf("/apps/%v/features/%v", appIdentity, appFeatureIdentity), nil, nil) +} + +type AppFeatureListResult []struct { + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when app feature was created + Description string `json:"description" url:"description,key"` // description of app feature + DocURL string `json:"doc_url" url:"doc_url,key"` // documentation URL of app feature + Enabled bool `json:"enabled" url:"enabled,key"` // whether or not app feature has been enabled + ID string `json:"id" url:"id,key"` // unique identifier of app feature + Name string `json:"name" url:"name,key"` // unique name of app feature + State string `json:"state" url:"state,key"` // state of app feature + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when app feature was updated } // List existing app features. -func (s *Service) AppFeatureList(appIdentity string, lr *ListRange) ([]*AppFeature, error) { - var appFeatureList []*AppFeature - return appFeatureList, s.Get(&appFeatureList, fmt.Sprintf("/apps/%v/features", appIdentity), lr) +func (s *Service) AppFeatureList(ctx context.Context, appIdentity string, lr *ListRange) (AppFeatureListResult, error) { + var appFeature AppFeatureListResult + return appFeature, s.Get(ctx, &appFeature, fmt.Sprintf("/apps/%v/features", appIdentity), nil, lr) } type AppFeatureUpdateOpts struct { - Enabled bool `json:"enabled"` // whether or not app feature has been enabled + Enabled bool `json:"enabled" url:"enabled,key"` // whether or not app feature has been enabled +} +type AppFeatureUpdateResult struct { + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when app feature was created + Description string `json:"description" url:"description,key"` // description of app feature + DocURL string `json:"doc_url" url:"doc_url,key"` // documentation URL of app feature + Enabled bool `json:"enabled" url:"enabled,key"` // whether or not app feature has been enabled + ID string `json:"id" url:"id,key"` // unique identifier of app feature + Name string `json:"name" url:"name,key"` // unique name of app feature + State string `json:"state" url:"state,key"` // state of app feature + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when app feature was updated } // Update an existing app feature. 
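Unlike the app-level opts, AppFeatureUpdateOpts carries a plain bool, so enabling a feature is a single call; the app and feature names below are placeholders:

    feature, err := svc.AppFeatureUpdate(context.Background(), "example-app",
    	"example-feature", AppFeatureUpdateOpts{Enabled: true})
    if err != nil {
    	// handle the error
    }
    fmt.Println(feature.Enabled) // reflects the new state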
-func (s *Service) AppFeatureUpdate(appIdentity string, appFeatureIdentity string, o struct {
- Enabled bool `json:"enabled"` // whether or not app feature has been enabled
-}) (*AppFeature, error) {
- var appFeature AppFeature
- return &appFeature, s.Patch(&appFeature, fmt.Sprintf("/apps/%v/features/%v", appIdentity, appFeatureIdentity), o)
+func (s *Service) AppFeatureUpdate(ctx context.Context, appIdentity string, appFeatureIdentity string, o AppFeatureUpdateOpts) (*AppFeatureUpdateResult, error) {
+ var appFeature AppFeatureUpdateResult
+ return &appFeature, s.Patch(ctx, &appFeature, fmt.Sprintf("/apps/%v/features/%v", appIdentity, appFeatureIdentity), o)
+}
+
+// App formation set describes the combination of process types with
+// their quantities and sizes as well as application process tier
+type AppFormationSet struct {
+ App struct {
+ ID string `json:"id" url:"id,key"` // unique identifier of app
+ Name string `json:"name" url:"name,key"` // unique name of app
+ } `json:"app" url:"app,key"` // app being described by the formation-set
+ Description string `json:"description" url:"description,key"` // a string representation of the formation set
+ ProcessTier string `json:"process_tier" url:"process_tier,key"` // application process tier
+ UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // last time formation-set was updated
 }
 
 // An app setup represents an app on Heroku that is setup using an
@@ -492,385 +1562,1161 @@ func (s *Service) AppFeatureUpdate(appIdentity string, appFeatureIdentity string
 // file.
 type AppSetup struct {
 App struct {
- ID string `json:"id"` // unique identifier of app
- Name string `json:"name"` // unique name of app
- } `json:"app"` // identity of app
- Build struct {
- ID string `json:"id"` // unique identifier of build
- Status string `json:"status"` // status of build
- } `json:"build"` // identity and status of build
- CreatedAt time.Time `json:"created_at"` // when app setup was created
- FailureMessage *string `json:"failure_message"` // reason that app setup has failed
- ID string `json:"id"` // unique identifier of app setup
- ManifestErrors []string `json:"manifest_errors"` // errors associated with invalid app.json manifest file
+ ID string `json:"id" url:"id,key"` // unique identifier of app
+ Name string `json:"name" url:"name,key"` // unique name of app
+ } `json:"app" url:"app,key"` // identity of app
+ Build *struct {
+ ID string `json:"id" url:"id,key"` // unique identifier of build
+ OutputStreamURL string `json:"output_stream_url" url:"output_stream_url,key"` // Build process output will be available from this URL as a stream. The
+ // stream is available as either `text/plain` or `text/event-stream`.
+ // Clients should be prepared to handle disconnects and can resume the
+ // stream by sending a `Range` header (for `text/plain`) or a
+ // `Last-Event-Id` header (for `text/event-stream`).
+ Status string `json:"status" url:"status,key"` // status of build + } `json:"build" url:"build,key"` // identity and status of build + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when app setup was created + FailureMessage *string `json:"failure_message" url:"failure_message,key"` // reason that app setup has failed + ID string `json:"id" url:"id,key"` // unique identifier of app setup + ManifestErrors []string `json:"manifest_errors" url:"manifest_errors,key"` // errors associated with invalid app.json manifest file Postdeploy *struct { - ExitCode int `json:"exit_code"` // The exit code of the postdeploy script - Output string `json:"output"` // output of the postdeploy script - } `json:"postdeploy"` // result of postdeploy script - ResolvedSuccessURL *string `json:"resolved_success_url"` // fully qualified success url - Status string `json:"status"` // the overall status of app setup - UpdatedAt time.Time `json:"updated_at"` // when app setup was updated + ExitCode int `json:"exit_code" url:"exit_code,key"` // The exit code of the postdeploy script + Output string `json:"output" url:"output,key"` // output of the postdeploy script + } `json:"postdeploy" url:"postdeploy,key"` // result of postdeploy script + ResolvedSuccessURL *string `json:"resolved_success_url" url:"resolved_success_url,key"` // fully qualified success url + Status string `json:"status" url:"status,key"` // the overall status of app setup + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when app setup was updated } type AppSetupCreateOpts struct { App *struct { - Locked *bool `json:"locked,omitempty"` // are other organization members forbidden from joining this app. - Name *string `json:"name,omitempty"` // unique name of app - Organization *string `json:"organization,omitempty"` // unique name of organization - Personal *bool `json:"personal,omitempty"` // force creation of the app in the user account even if a default org + Locked *bool `json:"locked,omitempty" url:"locked,omitempty,key"` // are other organization members forbidden from joining this app. + Name *string `json:"name,omitempty" url:"name,omitempty,key"` // unique name of app + Organization *string `json:"organization,omitempty" url:"organization,omitempty,key"` // unique name of organization + Personal *bool `json:"personal,omitempty" url:"personal,omitempty,key"` // force creation of the app in the user account even if a default org // is set. 
- Region *string `json:"region,omitempty"` // unique name of region - Stack *string `json:"stack,omitempty"` // unique name of stack - } `json:"app,omitempty"` // optional parameters for created app + Region *string `json:"region,omitempty" url:"region,omitempty,key"` // unique name of region + Space *string `json:"space,omitempty" url:"space,omitempty,key"` // unique name of space + Stack *string `json:"stack,omitempty" url:"stack,omitempty,key"` // unique name of stack + } `json:"app,omitempty" url:"app,omitempty,key"` // optional parameters for created app Overrides *struct { - Env *map[string]string `json:"env,omitempty"` // overrides of the env specified in the app.json manifest file - } `json:"overrides,omitempty"` // overrides of keys in the app.json manifest file + Buildpacks *[]*struct { + URL *string `json:"url,omitempty" url:"url,omitempty,key"` // location of the buildpack + } `json:"buildpacks,omitempty" url:"buildpacks,omitempty,key"` // overrides the buildpacks specified in the app.json manifest file + Env *map[string]string `json:"env,omitempty" url:"env,omitempty,key"` // overrides of the env specified in the app.json manifest file + } `json:"overrides,omitempty" url:"overrides,omitempty,key"` // overrides of keys in the app.json manifest file SourceBlob struct { - URL *string `json:"url,omitempty"` // URL of gzipped tarball of source code containing app.json manifest + Checksum *string `json:"checksum,omitempty" url:"checksum,omitempty,key"` // an optional checksum of the gzipped tarball for verifying its + // integrity + URL *string `json:"url,omitempty" url:"url,omitempty,key"` // URL of gzipped tarball of source code containing app.json manifest // file - } `json:"source_blob"` // gzipped tarball of source code containing app.json manifest file + Version *string `json:"version,omitempty" url:"version,omitempty,key"` // Version of the gzipped tarball. + } `json:"source_blob" url:"source_blob,key"` // gzipped tarball of source code containing app.json manifest file +} +type AppSetupCreateResult struct { + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + Name string `json:"name" url:"name,key"` // unique name of app + } `json:"app" url:"app,key"` // identity of app + Build *struct { + ID string `json:"id" url:"id,key"` // unique identifier of build + OutputStreamURL string `json:"output_stream_url" url:"output_stream_url,key"` // Build process output will be available from this URL as a stream. The + // stream is available as either `text/plain` or `text/event-stream`. + // Clients should be prepared to handle disconnects and can resume the + // stream by sending a `Range` header (for `text/plain`) or a + // `Last-Event-Id` header (for `text/event-stream`). 
+ Status string `json:"status" url:"status,key"` // status of build + } `json:"build" url:"build,key"` // identity and status of build + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when app setup was created + FailureMessage *string `json:"failure_message" url:"failure_message,key"` // reason that app setup has failed + ID string `json:"id" url:"id,key"` // unique identifier of app setup + ManifestErrors []string `json:"manifest_errors" url:"manifest_errors,key"` // errors associated with invalid app.json manifest file + Postdeploy *struct { + ExitCode int `json:"exit_code" url:"exit_code,key"` // The exit code of the postdeploy script + Output string `json:"output" url:"output,key"` // output of the postdeploy script + } `json:"postdeploy" url:"postdeploy,key"` // result of postdeploy script + ResolvedSuccessURL *string `json:"resolved_success_url" url:"resolved_success_url,key"` // fully qualified success url + Status string `json:"status" url:"status,key"` // the overall status of app setup + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when app setup was updated } // Create a new app setup from a gzipped tar archive containing an // app.json manifest file. -func (s *Service) AppSetupCreate(o struct { - App *struct { - Locked *bool `json:"locked,omitempty"` // are other organization members forbidden from joining this app. - Name *string `json:"name,omitempty"` // unique name of app - Organization *string `json:"organization,omitempty"` // unique name of organization - Personal *bool `json:"personal,omitempty"` // force creation of the app in the user account even if a default org - // is set. - Region *string `json:"region,omitempty"` // unique name of region - Stack *string `json:"stack,omitempty"` // unique name of stack - } `json:"app,omitempty"` // optional parameters for created app - Overrides *struct { - Env *map[string]string `json:"env,omitempty"` // overrides of the env specified in the app.json manifest file - } `json:"overrides,omitempty"` // overrides of keys in the app.json manifest file - SourceBlob struct { - URL *string `json:"url,omitempty"` // URL of gzipped tarball of source code containing app.json manifest - // file - } `json:"source_blob"` // gzipped tarball of source code containing app.json manifest file -}) (*AppSetup, error) { - var appSetup AppSetup - return &appSetup, s.Post(&appSetup, fmt.Sprintf("/app-setups"), o) +func (s *Service) AppSetupCreate(ctx context.Context, o AppSetupCreateOpts) (*AppSetupCreateResult, error) { + var appSetup AppSetupCreateResult + return &appSetup, s.Post(ctx, &appSetup, fmt.Sprintf("/app-setups"), o) +} + +type AppSetupInfoResult struct { + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + Name string `json:"name" url:"name,key"` // unique name of app + } `json:"app" url:"app,key"` // identity of app + Build *struct { + ID string `json:"id" url:"id,key"` // unique identifier of build + OutputStreamURL string `json:"output_stream_url" url:"output_stream_url,key"` // Build process output will be available from this URL as a stream. The + // stream is available as either `text/plain` or `text/event-stream`. + // Clients should be prepared to handle disconnects and can resume the + // stream by sending a `Range` header (for `text/plain`) or a + // `Last-Event-Id` header (for `text/event-stream`). 
+ Status string `json:"status" url:"status,key"` // status of build + } `json:"build" url:"build,key"` // identity and status of build + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when app setup was created + FailureMessage *string `json:"failure_message" url:"failure_message,key"` // reason that app setup has failed + ID string `json:"id" url:"id,key"` // unique identifier of app setup + ManifestErrors []string `json:"manifest_errors" url:"manifest_errors,key"` // errors associated with invalid app.json manifest file + Postdeploy *struct { + ExitCode int `json:"exit_code" url:"exit_code,key"` // The exit code of the postdeploy script + Output string `json:"output" url:"output,key"` // output of the postdeploy script + } `json:"postdeploy" url:"postdeploy,key"` // result of postdeploy script + ResolvedSuccessURL *string `json:"resolved_success_url" url:"resolved_success_url,key"` // fully qualified success url + Status string `json:"status" url:"status,key"` // the overall status of app setup + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when app setup was updated } // Get the status of an app setup. -func (s *Service) AppSetupInfo(appSetupIdentity string) (*AppSetup, error) { - var appSetup AppSetup - return &appSetup, s.Get(&appSetup, fmt.Sprintf("/app-setups/%v", appSetupIdentity), nil) +func (s *Service) AppSetupInfo(ctx context.Context, appSetupIdentity string) (*AppSetupInfoResult, error) { + var appSetup AppSetupInfoResult + return &appSetup, s.Get(ctx, &appSetup, fmt.Sprintf("/app-setups/%v", appSetupIdentity), nil, nil) } // An app transfer represents a two party interaction for transferring // ownership of an app. type AppTransfer struct { App struct { - ID string `json:"id"` // unique identifier of app - Name string `json:"name"` // unique name of app - } `json:"app"` // app involved in the transfer - CreatedAt time.Time `json:"created_at"` // when app transfer was created - ID string `json:"id"` // unique identifier of app transfer + ID string `json:"id" url:"id,key"` // unique identifier of app + Name string `json:"name" url:"name,key"` // unique name of app + } `json:"app" url:"app,key"` // app involved in the transfer + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when app transfer was created + ID string `json:"id" url:"id,key"` // unique identifier of app transfer Owner struct { - Email string `json:"email"` // unique email address of account - ID string `json:"id"` // unique identifier of an account - } `json:"owner"` // identity of the owner of the transfer + Email string `json:"email" url:"email,key"` // unique email address of account + ID string `json:"id" url:"id,key"` // unique identifier of an account + } `json:"owner" url:"owner,key"` // identity of the owner of the transfer Recipient struct { - Email string `json:"email"` // unique email address of account - ID string `json:"id"` // unique identifier of an account - } `json:"recipient"` // identity of the recipient of the transfer - State string `json:"state"` // the current state of an app transfer - UpdatedAt time.Time `json:"updated_at"` // when app transfer was updated + Email string `json:"email" url:"email,key"` // unique email address of account + ID string `json:"id" url:"id,key"` // unique identifier of an account + } `json:"recipient" url:"recipient,key"` // identity of the recipient of the transfer + State string `json:"state" url:"state,key"` // the current state of an app transfer + UpdatedAt time.Time `json:"updated_at" 
url:"updated_at,key"` // when app transfer was updated } type AppTransferCreateOpts struct { - App string `json:"app"` // unique identifier of app - Recipient string `json:"recipient"` // unique email address of account + App string `json:"app" url:"app,key"` // unique identifier of app + Recipient string `json:"recipient" url:"recipient,key"` // unique email address of account + Silent *bool `json:"silent,omitempty" url:"silent,omitempty,key"` // whether to suppress email notification when transferring apps +} +type AppTransferCreateResult struct { + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + Name string `json:"name" url:"name,key"` // unique name of app + } `json:"app" url:"app,key"` // app involved in the transfer + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when app transfer was created + ID string `json:"id" url:"id,key"` // unique identifier of app transfer + Owner struct { + Email string `json:"email" url:"email,key"` // unique email address of account + ID string `json:"id" url:"id,key"` // unique identifier of an account + } `json:"owner" url:"owner,key"` // identity of the owner of the transfer + Recipient struct { + Email string `json:"email" url:"email,key"` // unique email address of account + ID string `json:"id" url:"id,key"` // unique identifier of an account + } `json:"recipient" url:"recipient,key"` // identity of the recipient of the transfer + State string `json:"state" url:"state,key"` // the current state of an app transfer + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when app transfer was updated } // Create a new app transfer. -func (s *Service) AppTransferCreate(o struct { - App string `json:"app"` // unique identifier of app - Recipient string `json:"recipient"` // unique email address of account -}) (*AppTransfer, error) { - var appTransfer AppTransfer - return &appTransfer, s.Post(&appTransfer, fmt.Sprintf("/account/app-transfers"), o) +func (s *Service) AppTransferCreate(ctx context.Context, o AppTransferCreateOpts) (*AppTransferCreateResult, error) { + var appTransfer AppTransferCreateResult + return &appTransfer, s.Post(ctx, &appTransfer, fmt.Sprintf("/account/app-transfers"), o) +} + +type AppTransferDeleteResult struct { + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + Name string `json:"name" url:"name,key"` // unique name of app + } `json:"app" url:"app,key"` // app involved in the transfer + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when app transfer was created + ID string `json:"id" url:"id,key"` // unique identifier of app transfer + Owner struct { + Email string `json:"email" url:"email,key"` // unique email address of account + ID string `json:"id" url:"id,key"` // unique identifier of an account + } `json:"owner" url:"owner,key"` // identity of the owner of the transfer + Recipient struct { + Email string `json:"email" url:"email,key"` // unique email address of account + ID string `json:"id" url:"id,key"` // unique identifier of an account + } `json:"recipient" url:"recipient,key"` // identity of the recipient of the transfer + State string `json:"state" url:"state,key"` // the current state of an app transfer + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when app transfer was updated } // Delete an existing app transfer -func (s *Service) AppTransferDelete(appTransferIdentity string) error { - return s.Delete(fmt.Sprintf("/account/app-transfers/%v", appTransferIdentity)) +func (s *Service) 
AppTransferDelete(ctx context.Context, appTransferIdentity string) (*AppTransferDeleteResult, error) { + var appTransfer AppTransferDeleteResult + return &appTransfer, s.Delete(ctx, &appTransfer, fmt.Sprintf("/account/app-transfers/%v", appTransferIdentity)) +} + +type AppTransferInfoResult struct { + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + Name string `json:"name" url:"name,key"` // unique name of app + } `json:"app" url:"app,key"` // app involved in the transfer + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when app transfer was created + ID string `json:"id" url:"id,key"` // unique identifier of app transfer + Owner struct { + Email string `json:"email" url:"email,key"` // unique email address of account + ID string `json:"id" url:"id,key"` // unique identifier of an account + } `json:"owner" url:"owner,key"` // identity of the owner of the transfer + Recipient struct { + Email string `json:"email" url:"email,key"` // unique email address of account + ID string `json:"id" url:"id,key"` // unique identifier of an account + } `json:"recipient" url:"recipient,key"` // identity of the recipient of the transfer + State string `json:"state" url:"state,key"` // the current state of an app transfer + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when app transfer was updated } // Info for existing app transfer. -func (s *Service) AppTransferInfo(appTransferIdentity string) (*AppTransfer, error) { - var appTransfer AppTransfer - return &appTransfer, s.Get(&appTransfer, fmt.Sprintf("/account/app-transfers/%v", appTransferIdentity), nil) +func (s *Service) AppTransferInfo(ctx context.Context, appTransferIdentity string) (*AppTransferInfoResult, error) { + var appTransfer AppTransferInfoResult + return &appTransfer, s.Get(ctx, &appTransfer, fmt.Sprintf("/account/app-transfers/%v", appTransferIdentity), nil, nil) +} + +type AppTransferListResult []struct { + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + Name string `json:"name" url:"name,key"` // unique name of app + } `json:"app" url:"app,key"` // app involved in the transfer + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when app transfer was created + ID string `json:"id" url:"id,key"` // unique identifier of app transfer + Owner struct { + Email string `json:"email" url:"email,key"` // unique email address of account + ID string `json:"id" url:"id,key"` // unique identifier of an account + } `json:"owner" url:"owner,key"` // identity of the owner of the transfer + Recipient struct { + Email string `json:"email" url:"email,key"` // unique email address of account + ID string `json:"id" url:"id,key"` // unique identifier of an account + } `json:"recipient" url:"recipient,key"` // identity of the recipient of the transfer + State string `json:"state" url:"state,key"` // the current state of an app transfer + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when app transfer was updated } // List existing apps transfers. 
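A minimal sketch of driving the new context-aware app-setup calls above. The wrapper function, app name, and tarball URL are illustrative; the vendored import path and the "pending" status value are assumptions, not taken from this diff.

package examples

import (
	"context"
	"fmt"
	"time"

	heroku "github.com/cyberdelia/heroku-go/v3" // assumed vendored import path
)

// createAppSetup posts an app.json tarball and polls until setup completes.
func createAppSetup(ctx context.Context, h *heroku.Service) error {
	var o heroku.AppSetupCreateOpts
	url := "https://example.com/source.tar.gz" // hypothetical source tarball
	o.SourceBlob.URL = &url

	setup, err := h.AppSetupCreate(ctx, o)
	if err != nil {
		return err
	}
	status := setup.Status
	for status == "pending" { // assumed status value; failed/succeeded end the loop
		time.Sleep(2 * time.Second)
		info, err := h.AppSetupInfo(ctx, setup.ID)
		if err != nil {
			return err
		}
		status = info.Status
	}
	fmt.Printf("app %s: setup %s\n", setup.App.Name, status)
	return nil
}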
-func (s *Service) AppTransferList(lr *ListRange) ([]*AppTransfer, error) {
- var appTransferList []*AppTransfer
- return appTransferList, s.Get(&appTransferList, fmt.Sprintf("/account/app-transfers"), lr)
+func (s *Service) AppTransferList(ctx context.Context, lr *ListRange) (AppTransferListResult, error) {
+ var appTransfer AppTransferListResult
+ return appTransfer, s.Get(ctx, &appTransfer, fmt.Sprintf("/account/app-transfers"), nil, lr)
}
type AppTransferUpdateOpts struct {
- State string `json:"state"` // the current state of an app transfer
+ State string `json:"state" url:"state,key"` // the current state of an app transfer
+}
+type AppTransferUpdateResult struct {
+ App struct {
+ ID string `json:"id" url:"id,key"` // unique identifier of app
+ Name string `json:"name" url:"name,key"` // unique name of app
+ } `json:"app" url:"app,key"` // app involved in the transfer
+ CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when app transfer was created
+ ID string `json:"id" url:"id,key"` // unique identifier of app transfer
+ Owner struct {
+ Email string `json:"email" url:"email,key"` // unique email address of account
+ ID string `json:"id" url:"id,key"` // unique identifier of an account
+ } `json:"owner" url:"owner,key"` // identity of the owner of the transfer
+ Recipient struct {
+ Email string `json:"email" url:"email,key"` // unique email address of account
+ ID string `json:"id" url:"id,key"` // unique identifier of an account
+ } `json:"recipient" url:"recipient,key"` // identity of the recipient of the transfer
+ State string `json:"state" url:"state,key"` // the current state of an app transfer
+ UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when app transfer was updated
}

// Update an existing app transfer.
-func (s *Service) AppTransferUpdate(appTransferIdentity string, o struct {
- State string `json:"state"` // the current state of an app transfer
-}) (*AppTransfer, error) {
- var appTransfer AppTransfer
- return &appTransfer, s.Patch(&appTransfer, fmt.Sprintf("/account/app-transfers/%v", appTransferIdentity), o)
+func (s *Service) AppTransferUpdate(ctx context.Context, appTransferIdentity string, o AppTransferUpdateOpts) (*AppTransferUpdateResult, error) {
+ var appTransfer AppTransferUpdateResult
+ return &appTransfer, s.Patch(ctx, &appTransfer, fmt.Sprintf("/account/app-transfers/%v", appTransferIdentity), o)
}

// A build represents the process of transforming a code tarball into a
// slug
type Build struct {
- CreatedAt time.Time `json:"created_at"` // when build was created
- ID string `json:"id"` // unique identifier of build
- Slug *struct {
- ID string `json:"id"` // unique identifier of slug
- } `json:"slug"` // slug created by this build
+ App struct {
+ ID string `json:"id" url:"id,key"` // unique identifier of app
+ } `json:"app" url:"app,key"` // app that the build belongs to
+ Buildpacks *[]struct {
+ URL string `json:"url" url:"url,key"` // location of the buildpack for the app. Either a url (unofficial
+ // buildpacks) or an internal urn (heroku official buildpacks).
+ } `json:"buildpacks" url:"buildpacks,key"` // buildpacks executed for this build, in order
+ CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when build was created
+ ID string `json:"id" url:"id,key"` // unique identifier of build
+ OutputStreamURL string `json:"output_stream_url" url:"output_stream_url,key"` // Build process output will be available from this URL as a stream. The
+ // stream is available as either `text/plain` or `text/event-stream`.
+ // Clients should be prepared to handle disconnects and can resume the
+ // stream by sending a `Range` header (for `text/plain`) or a
+ // `Last-Event-Id` header (for `text/event-stream`).
+ Release *struct {
+ ID string `json:"id" url:"id,key"` // unique identifier of release
+ } `json:"release" url:"release,key"` // release resulting from the build
+ Slug *struct {
+ ID string `json:"id" url:"id,key"` // unique identifier of slug
+ } `json:"slug" url:"slug,key"` // slug created by this build
SourceBlob struct {
- URL string `json:"url"` // URL where gzipped tar archive of source code for build was
+ Checksum *string `json:"checksum" url:"checksum,key"` // an optional checksum of the gzipped tarball for verifying its
+ // integrity
+ URL string `json:"url" url:"url,key"` // URL where gzipped tar archive of source code for build was
// downloaded.
- Version *string `json:"version"` // Version of the gzipped tarball.
- } `json:"source_blob"` // location of gzipped tarball of source code used to create build
- Status string `json:"status"` // status of build
- UpdatedAt time.Time `json:"updated_at"` // when build was updated
+ Version *string `json:"version" url:"version,key"` // Version of the gzipped tarball.
+ } `json:"source_blob" url:"source_blob,key"` // location of gzipped tarball of source code used to create build
+ Status string `json:"status" url:"status,key"` // status of build
+ UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when build was updated
User struct {
- Email string `json:"email"` // unique email address of account
- ID string `json:"id"` // unique identifier of an account
- } `json:"user"` // user that started the build
+ Email string `json:"email" url:"email,key"` // unique email address of account
+ ID string `json:"id" url:"id,key"` // unique identifier of an account
+ } `json:"user" url:"user,key"` // user that started the build
}
type BuildCreateOpts struct {
+ Buildpacks *[]*struct {
+ URL *string `json:"url,omitempty" url:"url,omitempty,key"` // location of the buildpack for the app. Either a url (unofficial
+ // buildpacks) or an internal urn (heroku official buildpacks).
+ } `json:"buildpacks,omitempty" url:"buildpacks,omitempty,key"` // buildpacks executed for this build, in order
SourceBlob struct {
- URL *string `json:"url,omitempty"` // URL where gzipped tar archive of source code for build was
+ Checksum *string `json:"checksum,omitempty" url:"checksum,omitempty,key"` // an optional checksum of the gzipped tarball for verifying its
+ // integrity
+ URL *string `json:"url,omitempty" url:"url,omitempty,key"` // URL where gzipped tar archive of source code for build was
// downloaded.
- Version *string `json:"version,omitempty"` // Version of the gzipped tarball.
- } `json:"source_blob"` // location of gzipped tarball of source code used to create build
+ Version *string `json:"version,omitempty" url:"version,omitempty,key"` // Version of the gzipped tarball.
+ } `json:"source_blob" url:"source_blob,key"` // location of gzipped tarball of source code used to create build
+}
+type BuildCreateResult struct {
+ App struct {
+ ID string `json:"id" url:"id,key"` // unique identifier of app
+ } `json:"app" url:"app,key"` // app that the build belongs to
+ Buildpacks *[]struct {
+ URL string `json:"url" url:"url,key"` // location of the buildpack for the app. Either a url (unofficial
+ // buildpacks) or an internal urn (heroku official buildpacks).
+ } `json:"buildpacks" url:"buildpacks,key"` // buildpacks executed for this build, in order + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when build was created + ID string `json:"id" url:"id,key"` // unique identifier of build + OutputStreamURL string `json:"output_stream_url" url:"output_stream_url,key"` // Build process output will be available from this URL as a stream. The + // stream is available as either `text/plain` or `text/event-stream`. + // Clients should be prepared to handle disconnects and can resume the + // stream by sending a `Range` header (for `text/plain`) or a + // `Last-Event-Id` header (for `text/event-stream`). + Release *struct { + ID string `json:"id" url:"id,key"` // unique identifier of release + } `json:"release" url:"release,key"` // release resulting from the build + Slug *struct { + ID string `json:"id" url:"id,key"` // unique identifier of slug + } `json:"slug" url:"slug,key"` // slug created by this build + SourceBlob struct { + Checksum *string `json:"checksum" url:"checksum,key"` // an optional checksum of the gzipped tarball for verifying its + // integrity + URL string `json:"url" url:"url,key"` // URL where gzipped tar archive of source code for build was + // downloaded. + Version *string `json:"version" url:"version,key"` // Version of the gzipped tarball. + } `json:"source_blob" url:"source_blob,key"` // location of gzipped tarball of source code used to create build + Status string `json:"status" url:"status,key"` // status of build + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when build was updated + User struct { + Email string `json:"email" url:"email,key"` // unique email address of account + ID string `json:"id" url:"id,key"` // unique identifier of an account + } `json:"user" url:"user,key"` // user that started the build } // Create a new build. -func (s *Service) BuildCreate(appIdentity string, o struct { +func (s *Service) BuildCreate(ctx context.Context, appIdentity string, o BuildCreateOpts) (*BuildCreateResult, error) { + var build BuildCreateResult + return &build, s.Post(ctx, &build, fmt.Sprintf("/apps/%v/builds", appIdentity), o) +} + +type BuildInfoResult struct { + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + } `json:"app" url:"app,key"` // app that the build belongs to + Buildpacks *[]struct { + URL string `json:"url" url:"url,key"` // location of the buildpack for the app. Either a url (unofficial + // buildpacks) or an internal urn (heroku official buildpacks). + } `json:"buildpacks" url:"buildpacks,key"` // buildpacks executed for this build, in order + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when build was created + ID string `json:"id" url:"id,key"` // unique identifier of build + OutputStreamURL string `json:"output_stream_url" url:"output_stream_url,key"` // Build process output will be available from this URL as a stream. The + // stream is available as either `text/plain` or `text/event-stream`. + // Clients should be prepared to handle disconnects and can resume the + // stream by sending a `Range` header (for `text/plain`) or a + // `Last-Event-Id` header (for `text/event-stream`). 
+ Release *struct { + ID string `json:"id" url:"id,key"` // unique identifier of release + } `json:"release" url:"release,key"` // release resulting from the build + Slug *struct { + ID string `json:"id" url:"id,key"` // unique identifier of slug + } `json:"slug" url:"slug,key"` // slug created by this build SourceBlob struct { - URL *string `json:"url,omitempty"` // URL where gzipped tar archive of source code for build was + Checksum *string `json:"checksum" url:"checksum,key"` // an optional checksum of the gzipped tarball for verifying its + // integrity + URL string `json:"url" url:"url,key"` // URL where gzipped tar archive of source code for build was // downloaded. - Version *string `json:"version,omitempty"` // Version of the gzipped tarball. - } `json:"source_blob"` // location of gzipped tarball of source code used to create build -}) (*Build, error) { - var build Build - return &build, s.Post(&build, fmt.Sprintf("/apps/%v/builds", appIdentity), o) + Version *string `json:"version" url:"version,key"` // Version of the gzipped tarball. + } `json:"source_blob" url:"source_blob,key"` // location of gzipped tarball of source code used to create build + Status string `json:"status" url:"status,key"` // status of build + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when build was updated + User struct { + Email string `json:"email" url:"email,key"` // unique email address of account + ID string `json:"id" url:"id,key"` // unique identifier of an account + } `json:"user" url:"user,key"` // user that started the build } // Info for existing build. -func (s *Service) BuildInfo(appIdentity string, buildIdentity string) (*Build, error) { - var build Build - return &build, s.Get(&build, fmt.Sprintf("/apps/%v/builds/%v", appIdentity, buildIdentity), nil) +func (s *Service) BuildInfo(ctx context.Context, appIdentity string, buildIdentity string) (*BuildInfoResult, error) { + var build BuildInfoResult + return &build, s.Get(ctx, &build, fmt.Sprintf("/apps/%v/builds/%v", appIdentity, buildIdentity), nil, nil) +} + +type BuildListResult []struct { + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + } `json:"app" url:"app,key"` // app that the build belongs to + Buildpacks *[]struct { + URL string `json:"url" url:"url,key"` // location of the buildpack for the app. Either a url (unofficial + // buildpacks) or an internal urn (heroku official buildpacks). + } `json:"buildpacks" url:"buildpacks,key"` // buildpacks executed for this build, in order + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when build was created + ID string `json:"id" url:"id,key"` // unique identifier of build + OutputStreamURL string `json:"output_stream_url" url:"output_stream_url,key"` // Build process output will be available from this URL as a stream. The + // stream is available as either `text/plain` or `text/event-stream`. + // Clients should be prepared to handle disconnects and can resume the + // stream by sending a `Range` header (for `text/plain`) or a + // `Last-Event-Id` header (for `text/event-stream`). 
+ Release *struct { + ID string `json:"id" url:"id,key"` // unique identifier of release + } `json:"release" url:"release,key"` // release resulting from the build + Slug *struct { + ID string `json:"id" url:"id,key"` // unique identifier of slug + } `json:"slug" url:"slug,key"` // slug created by this build + SourceBlob struct { + Checksum *string `json:"checksum" url:"checksum,key"` // an optional checksum of the gzipped tarball for verifying its + // integrity + URL string `json:"url" url:"url,key"` // URL where gzipped tar archive of source code for build was + // downloaded. + Version *string `json:"version" url:"version,key"` // Version of the gzipped tarball. + } `json:"source_blob" url:"source_blob,key"` // location of gzipped tarball of source code used to create build + Status string `json:"status" url:"status,key"` // status of build + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when build was updated + User struct { + Email string `json:"email" url:"email,key"` // unique email address of account + ID string `json:"id" url:"id,key"` // unique identifier of an account + } `json:"user" url:"user,key"` // user that started the build } // List existing build. -func (s *Service) BuildList(appIdentity string, lr *ListRange) ([]*Build, error) { - var buildList []*Build - return buildList, s.Get(&buildList, fmt.Sprintf("/apps/%v/builds", appIdentity), lr) +func (s *Service) BuildList(ctx context.Context, appIdentity string, lr *ListRange) (BuildListResult, error) { + var build BuildListResult + return build, s.Get(ctx, &build, fmt.Sprintf("/apps/%v/builds", appIdentity), nil, lr) } // A build result contains the output from a build. type BuildResult struct { Build struct { - ID string `json:"id"` // unique identifier of build - Status string `json:"status"` // status of build - } `json:"build"` // identity of build - ExitCode float64 `json:"exit_code"` // status from the build + ID string `json:"id" url:"id,key"` // unique identifier of build + OutputStreamURL string `json:"output_stream_url" url:"output_stream_url,key"` // Build process output will be available from this URL as a stream. The + // stream is available as either `text/plain` or `text/event-stream`. + // Clients should be prepared to handle disconnects and can resume the + // stream by sending a `Range` header (for `text/plain`) or a + // `Last-Event-Id` header (for `text/event-stream`). + Status string `json:"status" url:"status,key"` // status of build + } `json:"build" url:"build,key"` // identity of build + ExitCode float64 `json:"exit_code" url:"exit_code,key"` // status from the build Lines []struct { - Line string `json:"line"` // A line of output from the build. - Stream string `json:"stream"` // The output stream where the line was sent. - } `json:"lines"` // A list of all the lines of a build's output. + Line string `json:"line" url:"line,key"` // A line of output from the build. + Stream string `json:"stream" url:"stream,key"` // The output stream where the line was sent. + } `json:"lines" url:"lines,key"` // A list of all the lines of a build's output. This has been replaced + // by the `output_stream_url` attribute on the build resource. +} +type BuildResultInfoResult struct { + Build struct { + ID string `json:"id" url:"id,key"` // unique identifier of build + OutputStreamURL string `json:"output_stream_url" url:"output_stream_url,key"` // Build process output will be available from this URL as a stream. The + // stream is available as either `text/plain` or `text/event-stream`. 
+ // Clients should be prepared to handle disconnects and can resume the + // stream by sending a `Range` header (for `text/plain`) or a + // `Last-Event-Id` header (for `text/event-stream`). + Status string `json:"status" url:"status,key"` // status of build + } `json:"build" url:"build,key"` // identity of build + ExitCode float64 `json:"exit_code" url:"exit_code,key"` // status from the build + Lines []struct { + Line string `json:"line" url:"line,key"` // A line of output from the build. + Stream string `json:"stream" url:"stream,key"` // The output stream where the line was sent. + } `json:"lines" url:"lines,key"` // A list of all the lines of a build's output. This has been replaced + // by the `output_stream_url` attribute on the build resource. } // Info for existing result. -func (s *Service) BuildResultInfo(appIdentity string, buildIdentity string) (*BuildResult, error) { - var buildResult BuildResult - return &buildResult, s.Get(&buildResult, fmt.Sprintf("/apps/%v/builds/%v/result", appIdentity, buildIdentity), nil) +func (s *Service) BuildResultInfo(ctx context.Context, appIdentity string, buildIdentity string) (*BuildResultInfoResult, error) { + var buildResult BuildResultInfoResult + return &buildResult, s.Get(ctx, &buildResult, fmt.Sprintf("/apps/%v/builds/%v/result", appIdentity, buildIdentity), nil, nil) +} + +// A buildpack installation represents a buildpack that will be run +// against an app. +type BuildpackInstallation struct { + Buildpack struct { + Name string `json:"name" url:"name,key"` // either the shorthand name (heroku official buildpacks) or url + // (unofficial buildpacks) of the buildpack for the app + URL string `json:"url" url:"url,key"` // location of the buildpack for the app. Either a url (unofficial + // buildpacks) or an internal urn (heroku official buildpacks). + } `json:"buildpack" url:"buildpack,key"` // buildpack + Ordinal int `json:"ordinal" url:"ordinal,key"` // determines the order in which the buildpacks will execute +} +type BuildpackInstallationUpdateOpts struct { + Updates []struct { + Buildpack string `json:"buildpack" url:"buildpack,key"` // location of the buildpack for the app. Either a url (unofficial + // buildpacks) or an internal urn (heroku official buildpacks). + } `json:"updates" url:"updates,key"` // The buildpack attribute can accept a name, a url, or a urn. +} +type BuildpackInstallationUpdateResult []struct { + Buildpack struct { + Name string `json:"name" url:"name,key"` // either the shorthand name (heroku official buildpacks) or url + // (unofficial buildpacks) of the buildpack for the app + URL string `json:"url" url:"url,key"` // location of the buildpack for the app. Either a url (unofficial + // buildpacks) or an internal urn (heroku official buildpacks). + } `json:"buildpack" url:"buildpack,key"` // buildpack + Ordinal int `json:"ordinal" url:"ordinal,key"` // determines the order in which the buildpacks will execute +} + +// Update an app's buildpack installations. 
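The `output_stream_url` contract described in the comments above (and now preferred over the deprecated `lines` attribute) can be consumed with a plain HTTP GET. A sketch, assuming an existing build ID; the Range bookkeeping needed to resume after a disconnect is deliberately elided.

package examples

import (
	"context"
	"io"
	"net/http"
	"os"

	heroku "github.com/cyberdelia/heroku-go/v3" // assumed vendored import path
)

// tailBuildOutput copies a build's output stream to stdout.
func tailBuildOutput(ctx context.Context, h *heroku.Service, app, buildID string) error {
	b, err := h.BuildInfo(ctx, app, buildID)
	if err != nil {
		return err
	}
	req, err := http.NewRequest("GET", b.OutputStreamURL, nil)
	if err != nil {
		return err
	}
	req = req.WithContext(ctx)
	req.Header.Set("Accept", "text/plain")
	// Per the comments above, a reconnect can resume via a Range header
	// (text/plain) or a Last-Event-Id header (text/event-stream); omitted here.
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	_, err = io.Copy(os.Stdout, resp.Body)
	return err
}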
+func (s *Service) BuildpackInstallationUpdate(ctx context.Context, appIdentity string, o BuildpackInstallationUpdateOpts) (BuildpackInstallationUpdateResult, error) { + var buildpackInstallation BuildpackInstallationUpdateResult + return buildpackInstallation, s.Put(ctx, &buildpackInstallation, fmt.Sprintf("/apps/%v/buildpack-installations", appIdentity), o) +} + +type BuildpackInstallationListResult []struct { + Buildpack struct { + Name string `json:"name" url:"name,key"` // either the shorthand name (heroku official buildpacks) or url + // (unofficial buildpacks) of the buildpack for the app + URL string `json:"url" url:"url,key"` // location of the buildpack for the app. Either a url (unofficial + // buildpacks) or an internal urn (heroku official buildpacks). + } `json:"buildpack" url:"buildpack,key"` // buildpack + Ordinal int `json:"ordinal" url:"ordinal,key"` // determines the order in which the buildpacks will execute +} + +// List an app's existing buildpack installations. +func (s *Service) BuildpackInstallationList(ctx context.Context, appIdentity string, lr *ListRange) (BuildpackInstallationListResult, error) { + var buildpackInstallation BuildpackInstallationListResult + return buildpackInstallation, s.Get(ctx, &buildpackInstallation, fmt.Sprintf("/apps/%v/buildpack-installations", appIdentity), nil, lr) } // A collaborator represents an account that has been given access to an // app on Heroku. type Collaborator struct { - CreatedAt time.Time `json:"created_at"` // when collaborator was created - ID string `json:"id"` // unique identifier of collaborator - UpdatedAt time.Time `json:"updated_at"` // when collaborator was updated + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + Name string `json:"name" url:"name,key"` // unique name of app + } `json:"app" url:"app,key"` // app collaborator belongs to + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when collaborator was created + ID string `json:"id" url:"id,key"` // unique identifier of collaborator + Permissions []struct { + Description string `json:"description" url:"description,key"` // A description of what the app permission allows. + Name string `json:"name" url:"name,key"` // The name of the app permission. 
+ } `json:"permissions" url:"permissions,key"` + Role *string `json:"role" url:"role,key"` // role in the organization + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when collaborator was updated User struct { - Email string `json:"email"` // unique email address of account - ID string `json:"id"` // unique identifier of an account - } `json:"user"` // identity of collaborated account + Email string `json:"email" url:"email,key"` // unique email address of account + Federated bool `json:"federated" url:"federated,key"` // whether the user is federated and belongs to an Identity Provider + ID string `json:"id" url:"id,key"` // unique identifier of an account + } `json:"user" url:"user,key"` // identity of collaborated account } type CollaboratorCreateOpts struct { - Silent *bool `json:"silent,omitempty"` // whether to suppress email invitation when creating collaborator - User string `json:"user"` // unique email address of account + Silent *bool `json:"silent,omitempty" url:"silent,omitempty,key"` // whether to suppress email invitation when creating collaborator + User string `json:"user" url:"user,key"` // unique email address of account +} +type CollaboratorCreateResult struct { + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + Name string `json:"name" url:"name,key"` // unique name of app + } `json:"app" url:"app,key"` // app collaborator belongs to + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when collaborator was created + ID string `json:"id" url:"id,key"` // unique identifier of collaborator + Permissions []struct { + Description string `json:"description" url:"description,key"` // A description of what the app permission allows. + Name string `json:"name" url:"name,key"` // The name of the app permission. + } `json:"permissions" url:"permissions,key"` + Role *string `json:"role" url:"role,key"` // role in the organization + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when collaborator was updated + User struct { + Email string `json:"email" url:"email,key"` // unique email address of account + Federated bool `json:"federated" url:"federated,key"` // whether the user is federated and belongs to an Identity Provider + ID string `json:"id" url:"id,key"` // unique identifier of an account + } `json:"user" url:"user,key"` // identity of collaborated account } // Create a new collaborator. 
-func (s *Service) CollaboratorCreate(appIdentity string, o struct { - Silent *bool `json:"silent,omitempty"` // whether to suppress email invitation when creating collaborator - User string `json:"user"` // unique email address of account -}) (*Collaborator, error) { - var collaborator Collaborator - return &collaborator, s.Post(&collaborator, fmt.Sprintf("/apps/%v/collaborators", appIdentity), o) +func (s *Service) CollaboratorCreate(ctx context.Context, appIdentity string, o CollaboratorCreateOpts) (*CollaboratorCreateResult, error) { + var collaborator CollaboratorCreateResult + return &collaborator, s.Post(ctx, &collaborator, fmt.Sprintf("/apps/%v/collaborators", appIdentity), o) +} + +type CollaboratorDeleteResult struct { + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + Name string `json:"name" url:"name,key"` // unique name of app + } `json:"app" url:"app,key"` // app collaborator belongs to + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when collaborator was created + ID string `json:"id" url:"id,key"` // unique identifier of collaborator + Permissions []struct { + Description string `json:"description" url:"description,key"` // A description of what the app permission allows. + Name string `json:"name" url:"name,key"` // The name of the app permission. + } `json:"permissions" url:"permissions,key"` + Role *string `json:"role" url:"role,key"` // role in the organization + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when collaborator was updated + User struct { + Email string `json:"email" url:"email,key"` // unique email address of account + Federated bool `json:"federated" url:"federated,key"` // whether the user is federated and belongs to an Identity Provider + ID string `json:"id" url:"id,key"` // unique identifier of an account + } `json:"user" url:"user,key"` // identity of collaborated account } // Delete an existing collaborator. -func (s *Service) CollaboratorDelete(appIdentity string, collaboratorIdentity string) error { - return s.Delete(fmt.Sprintf("/apps/%v/collaborators/%v", appIdentity, collaboratorIdentity)) +func (s *Service) CollaboratorDelete(ctx context.Context, appIdentity string, collaboratorIdentity string) (*CollaboratorDeleteResult, error) { + var collaborator CollaboratorDeleteResult + return &collaborator, s.Delete(ctx, &collaborator, fmt.Sprintf("/apps/%v/collaborators/%v", appIdentity, collaboratorIdentity)) +} + +type CollaboratorInfoResult struct { + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + Name string `json:"name" url:"name,key"` // unique name of app + } `json:"app" url:"app,key"` // app collaborator belongs to + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when collaborator was created + ID string `json:"id" url:"id,key"` // unique identifier of collaborator + Permissions []struct { + Description string `json:"description" url:"description,key"` // A description of what the app permission allows. + Name string `json:"name" url:"name,key"` // The name of the app permission. 
+ } `json:"permissions" url:"permissions,key"` + Role *string `json:"role" url:"role,key"` // role in the organization + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when collaborator was updated + User struct { + Email string `json:"email" url:"email,key"` // unique email address of account + Federated bool `json:"federated" url:"federated,key"` // whether the user is federated and belongs to an Identity Provider + ID string `json:"id" url:"id,key"` // unique identifier of an account + } `json:"user" url:"user,key"` // identity of collaborated account } // Info for existing collaborator. -func (s *Service) CollaboratorInfo(appIdentity string, collaboratorIdentity string) (*Collaborator, error) { - var collaborator Collaborator - return &collaborator, s.Get(&collaborator, fmt.Sprintf("/apps/%v/collaborators/%v", appIdentity, collaboratorIdentity), nil) +func (s *Service) CollaboratorInfo(ctx context.Context, appIdentity string, collaboratorIdentity string) (*CollaboratorInfoResult, error) { + var collaborator CollaboratorInfoResult + return &collaborator, s.Get(ctx, &collaborator, fmt.Sprintf("/apps/%v/collaborators/%v", appIdentity, collaboratorIdentity), nil, nil) +} + +type CollaboratorListResult []struct { + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + Name string `json:"name" url:"name,key"` // unique name of app + } `json:"app" url:"app,key"` // app collaborator belongs to + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when collaborator was created + ID string `json:"id" url:"id,key"` // unique identifier of collaborator + Permissions []struct { + Description string `json:"description" url:"description,key"` // A description of what the app permission allows. + Name string `json:"name" url:"name,key"` // The name of the app permission. + } `json:"permissions" url:"permissions,key"` + Role *string `json:"role" url:"role,key"` // role in the organization + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when collaborator was updated + User struct { + Email string `json:"email" url:"email,key"` // unique email address of account + Federated bool `json:"federated" url:"federated,key"` // whether the user is federated and belongs to an Identity Provider + ID string `json:"id" url:"id,key"` // unique identifier of an account + } `json:"user" url:"user,key"` // identity of collaborated account } // List existing collaborators. -func (s *Service) CollaboratorList(appIdentity string, lr *ListRange) ([]*Collaborator, error) { - var collaboratorList []*Collaborator - return collaboratorList, s.Get(&collaboratorList, fmt.Sprintf("/apps/%v/collaborators", appIdentity), lr) +func (s *Service) CollaboratorList(ctx context.Context, appIdentity string, lr *ListRange) (CollaboratorListResult, error) { + var collaborator CollaboratorListResult + return collaborator, s.Get(ctx, &collaborator, fmt.Sprintf("/apps/%v/collaborators", appIdentity), nil, lr) } // Config Vars allow you to manage the configuration information // provided to an app on Heroku. type ConfigVar map[string]string +type ConfigVarInfoForAppResult map[string]*string // Get config-vars for app. 
-func (s *Service) ConfigVarInfo(appIdentity string) (map[string]string, error) { - var configVar ConfigVar - return configVar, s.Get(&configVar, fmt.Sprintf("/apps/%v/config-vars", appIdentity), nil) +func (s *Service) ConfigVarInfoForApp(ctx context.Context, appIdentity string) (ConfigVarInfoForAppResult, error) { + var configVar ConfigVarInfoForAppResult + return configVar, s.Get(ctx, &configVar, fmt.Sprintf("/apps/%v/config-vars", appIdentity), nil, nil) } -type ConfigVarUpdateOpts map[string]*string +type ConfigVarInfoForAppReleaseResult map[string]*string + +// Get config-vars for a release. +func (s *Service) ConfigVarInfoForAppRelease(ctx context.Context, appIdentity string, releaseIdentity string) (ConfigVarInfoForAppReleaseResult, error) { + var configVar ConfigVarInfoForAppReleaseResult + return configVar, s.Get(ctx, &configVar, fmt.Sprintf("/apps/%v/releases/%v/config-vars", appIdentity, releaseIdentity), nil, nil) +} + +type ConfigVarUpdateResult map[string]*string // Update config-vars for app. You can update existing config-vars by -// setting them again, and remove by setting it to `NULL`. -func (s *Service) ConfigVarUpdate(appIdentity string, o map[string]*string) (map[string]string, error) { - var configVar ConfigVar - return configVar, s.Patch(&configVar, fmt.Sprintf("/apps/%v/config-vars", appIdentity), o) +// setting them again, and remove by setting it to `null`. +func (s *Service) ConfigVarUpdate(ctx context.Context, appIdentity string, o map[string]*string) (ConfigVarUpdateResult, error) { + var configVar ConfigVarUpdateResult + return configVar, s.Patch(ctx, &configVar, fmt.Sprintf("/apps/%v/config-vars", appIdentity), o) } // A credit represents value that will be used up before further charges // are assigned to an account. 
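The `map[string]*string` shape of the config-var calls above encodes both operations in one request: a non-nil value sets a var, a nil pointer marshals to `null` and unsets it. A brief sketch; the variable names are examples.

package examples

import (
	"context"
	"fmt"

	heroku "github.com/cyberdelia/heroku-go/v3" // assumed vendored import path
)

// tuneConfig sets one config var and removes another in a single PATCH.
func tuneConfig(ctx context.Context, h *heroku.Service, app string) error {
	level := "debug"
	vars, err := h.ConfigVarUpdate(ctx, app, map[string]*string{
		"LOG_LEVEL":  &level, // set or overwrite
		"UNUSED_KEY": nil,    // nil marshals to null, which unsets the var
	})
	if err != nil {
		return err
	}
	for k, v := range vars {
		if v != nil {
			fmt.Printf("%s=%s\n", k, *v)
		}
	}
	return nil
}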
type Credit struct { - Amount float64 `json:"amount"` // total value of credit in cents - Balance float64 `json:"balance"` // remaining value of credit in cents - CreatedAt time.Time `json:"created_at"` // when credit was created - ExpiresAt time.Time `json:"expires_at"` // when credit will expire - ID string `json:"id"` // unique identifier of credit - Title string `json:"title"` // a name for credit - UpdatedAt time.Time `json:"updated_at"` // when credit was updated + Amount float64 `json:"amount" url:"amount,key"` // total value of credit in cents + Balance float64 `json:"balance" url:"balance,key"` // remaining value of credit in cents + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when credit was created + ExpiresAt time.Time `json:"expires_at" url:"expires_at,key"` // when credit will expire + ID string `json:"id" url:"id,key"` // unique identifier of credit + Title string `json:"title" url:"title,key"` // a name for credit + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when credit was updated +} +type CreditCreateOpts struct { + Code1 *string `json:"code1,omitempty" url:"code1,omitempty,key"` // first code from a discount card + Code2 *string `json:"code2,omitempty" url:"code2,omitempty,key"` // second code from a discount card +} +type CreditCreateResult struct { + Amount float64 `json:"amount" url:"amount,key"` // total value of credit in cents + Balance float64 `json:"balance" url:"balance,key"` // remaining value of credit in cents + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when credit was created + ExpiresAt time.Time `json:"expires_at" url:"expires_at,key"` // when credit will expire + ID string `json:"id" url:"id,key"` // unique identifier of credit + Title string `json:"title" url:"title,key"` // a name for credit + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when credit was updated +} + +// Create a new credit. +func (s *Service) CreditCreate(ctx context.Context, o CreditCreateOpts) (*CreditCreateResult, error) { + var credit CreditCreateResult + return &credit, s.Post(ctx, &credit, fmt.Sprintf("/account/credits"), o) +} + +type CreditInfoResult struct { + Amount float64 `json:"amount" url:"amount,key"` // total value of credit in cents + Balance float64 `json:"balance" url:"balance,key"` // remaining value of credit in cents + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when credit was created + ExpiresAt time.Time `json:"expires_at" url:"expires_at,key"` // when credit will expire + ID string `json:"id" url:"id,key"` // unique identifier of credit + Title string `json:"title" url:"title,key"` // a name for credit + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when credit was updated } // Info for existing credit. 
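A small sketch of the new `CreditCreate` call above, which redeems the two discount-card codes; the code values are hypothetical.

package examples

import (
	"context"
	"fmt"

	heroku "github.com/cyberdelia/heroku-go/v3" // assumed vendored import path
)

// redeemCredit turns a pair of discount-card codes into an account credit.
func redeemCredit(ctx context.Context, h *heroku.Service, code1, code2 string) error {
	credit, err := h.CreditCreate(ctx, heroku.CreditCreateOpts{
		Code1: &code1,
		Code2: &code2,
	})
	if err != nil {
		return err
	}
	fmt.Printf("%s: %.0f of %.0f cents left, expires %s\n",
		credit.Title, credit.Balance, credit.Amount, credit.ExpiresAt)
	return nil
}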
-func (s *Service) CreditInfo(creditIdentity string) (*Credit, error) {
- var credit Credit
- return &credit, s.Get(&credit, fmt.Sprintf("/account/credits/%v", creditIdentity), nil)
+func (s *Service) CreditInfo(ctx context.Context, creditIdentity string) (*CreditInfoResult, error) {
+ var credit CreditInfoResult
+ return &credit, s.Get(ctx, &credit, fmt.Sprintf("/account/credits/%v", creditIdentity), nil, nil)
+}
+
+type CreditListResult []struct {
+ Amount float64 `json:"amount" url:"amount,key"` // total value of credit in cents
+ Balance float64 `json:"balance" url:"balance,key"` // remaining value of credit in cents
+ CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when credit was created
+ ExpiresAt time.Time `json:"expires_at" url:"expires_at,key"` // when credit will expire
+ ID string `json:"id" url:"id,key"` // unique identifier of credit
+ Title string `json:"title" url:"title,key"` // a name for credit
+ UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when credit was updated
}

// List existing credits.
-func (s *Service) CreditList(lr *ListRange) ([]*Credit, error) {
- var creditList []*Credit
- return creditList, s.Get(&creditList, fmt.Sprintf("/account/credits"), lr)
+func (s *Service) CreditList(ctx context.Context, lr *ListRange) (CreditListResult, error) {
+ var credit CreditListResult
+ return credit, s.Get(ctx, &credit, fmt.Sprintf("/account/credits"), nil, lr)
}

// Domains define what web routes should be routed to an app on Heroku.
type Domain struct {
- CreatedAt time.Time `json:"created_at"` // when domain was created
- Hostname string `json:"hostname"` // full hostname
- ID string `json:"id"` // unique identifier of this domain
- UpdatedAt time.Time `json:"updated_at"` // when domain was updated
+ App struct {
+ ID string `json:"id" url:"id,key"` // unique identifier of app
+ Name string `json:"name" url:"name,key"` // unique name of app
+ } `json:"app" url:"app,key"` // app that owns the domain
+ CName *string `json:"cname" url:"cname,key"` // canonical name record, the address to point a domain at
+ CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when domain was created
+ Hostname string `json:"hostname" url:"hostname,key"` // full hostname
+ ID string `json:"id" url:"id,key"` // unique identifier of this domain
+ Kind string `json:"kind" url:"kind,key"` // type of domain name
+ Status string `json:"status" url:"status,key"` // status of this record's cname
+ UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when domain was updated
}
type DomainCreateOpts struct {
- Hostname string `json:"hostname"` // full hostname
+ Hostname string `json:"hostname" url:"hostname,key"` // full hostname
+}
+type DomainCreateResult struct {
+ App struct {
+ ID string `json:"id" url:"id,key"` // unique identifier of app
+ Name string `json:"name" url:"name,key"` // unique name of app
+ } `json:"app" url:"app,key"` // app that owns the domain
+ CName *string `json:"cname" url:"cname,key"` // canonical name record, the address to point a domain at
+ CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when domain was created
+ Hostname string `json:"hostname" url:"hostname,key"` // full hostname
+ ID string `json:"id" url:"id,key"` // unique identifier of this domain
+ Kind string `json:"kind" url:"kind,key"` // type of domain name
+ Status string `json:"status" url:"status,key"` // status of this record's cname
+ UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when domain was updated
}

// Create a new domain.
-func (s *Service) DomainCreate(appIdentity string, o struct {
- Hostname string `json:"hostname"` // full hostname
-}) (*Domain, error) {
- var domain Domain
- return &domain, s.Post(&domain, fmt.Sprintf("/apps/%v/domains", appIdentity), o)
+func (s *Service) DomainCreate(ctx context.Context, appIdentity string, o DomainCreateOpts) (*DomainCreateResult, error) {
+ var domain DomainCreateResult
+ return &domain, s.Post(ctx, &domain, fmt.Sprintf("/apps/%v/domains", appIdentity), o)
+}
+
+type DomainDeleteResult struct {
+ App struct {
+ ID string `json:"id" url:"id,key"` // unique identifier of app
+ Name string `json:"name" url:"name,key"` // unique name of app
+ } `json:"app" url:"app,key"` // app that owns the domain
+ CName *string `json:"cname" url:"cname,key"` // canonical name record, the address to point a domain at
+ CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when domain was created
+ Hostname string `json:"hostname" url:"hostname,key"` // full hostname
+ ID string `json:"id" url:"id,key"` // unique identifier of this domain
+ Kind string `json:"kind" url:"kind,key"` // type of domain name
+ Status string `json:"status" url:"status,key"` // status of this record's cname
+ UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when domain was updated
}

// Delete an existing domain
-func (s *Service) DomainDelete(appIdentity string, domainIdentity string) error {
- return s.Delete(fmt.Sprintf("/apps/%v/domains/%v", appIdentity, domainIdentity))
+func (s *Service) DomainDelete(ctx context.Context, appIdentity string, domainIdentity string) (*DomainDeleteResult, error) {
+ var domain DomainDeleteResult
+ return &domain, s.Delete(ctx, &domain, fmt.Sprintf("/apps/%v/domains/%v", appIdentity, domainIdentity))
+}
+
+type DomainInfoResult struct {
+ App struct {
+ ID string `json:"id" url:"id,key"` // unique identifier of app
+ Name string `json:"name" url:"name,key"` // unique name of app
+ } `json:"app" url:"app,key"` // app that owns the domain
+ CName *string `json:"cname" url:"cname,key"` // canonical name record, the address to point a domain at
+ CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when domain was created
+ Hostname string `json:"hostname" url:"hostname,key"` // full hostname
+ ID string `json:"id" url:"id,key"` // unique identifier of this domain
+ Kind string `json:"kind" url:"kind,key"` // type of domain name
+ Status string `json:"status" url:"status,key"` // status of this record's cname
+ UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when domain was updated
}

// Info for existing domain.
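A sketch of adding a custom domain with the new typed `DomainCreateOpts` and reading back the `cname` and `status` fields the diff introduces; the hostname is an example.

package examples

import (
	"context"
	"fmt"

	heroku "github.com/cyberdelia/heroku-go/v3" // assumed vendored import path
)

// addDomain routes a hostname to an app and reports where to point DNS.
func addDomain(ctx context.Context, h *heroku.Service, app, hostname string) error {
	d, err := h.DomainCreate(ctx, app, heroku.DomainCreateOpts{Hostname: hostname})
	if err != nil {
		return err
	}
	target := "(pending)"
	if d.CName != nil {
		target = *d.CName
	}
	fmt.Printf("%s (%s): point DNS at %s, status %s\n", d.Hostname, d.Kind, target, d.Status)
	return nil
}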
-func (s *Service) DomainInfo(appIdentity string, domainIdentity string) (*Domain, error) { - var domain Domain - return &domain, s.Get(&domain, fmt.Sprintf("/apps/%v/domains/%v", appIdentity, domainIdentity), nil) +func (s *Service) DomainInfo(ctx context.Context, appIdentity string, domainIdentity string) (*DomainInfoResult, error) { + var domain DomainInfoResult + return &domain, s.Get(ctx, &domain, fmt.Sprintf("/apps/%v/domains/%v", appIdentity, domainIdentity), nil, nil) +} + +type DomainListResult []struct { + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + Name string `json:"name" url:"name,key"` // unique name of app + } `json:"app" url:"app,key"` // app that owns the domain + CName *string `json:"cname" url:"cname,key"` // canonical name record, the address to point a domain at + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when domain was created + Hostname string `json:"hostname" url:"hostname,key"` // full hostname + ID string `json:"id" url:"id,key"` // unique identifier of this domain + Kind string `json:"kind" url:"kind,key"` // type of domain name + Status string `json:"status" url:"status,key"` // status of this record's cname + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when domain was updated } // List existing domains. -func (s *Service) DomainList(appIdentity string, lr *ListRange) ([]*Domain, error) { - var domainList []*Domain - return domainList, s.Get(&domainList, fmt.Sprintf("/apps/%v/domains", appIdentity), lr) +func (s *Service) DomainList(ctx context.Context, appIdentity string, lr *ListRange) (DomainListResult, error) { + var domain DomainListResult + return domain, s.Get(ctx, &domain, fmt.Sprintf("/apps/%v/domains", appIdentity), nil, lr) } -// Dynos encapsulate running processes of an app on Heroku. +// Dynos encapsulate running processes of an app on Heroku. Detailed +// information about dyno sizes can be found at: +// [https://devcenter.heroku.com/articles/dyno-types](https://devcenter.h +// eroku.com/articles/dyno-types). 
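The list calls above all take a `*ListRange` (defined elsewhere in this file); passing nil requests the server's default page, as in this brief sketch.

package examples

import (
	"context"
	"fmt"

	heroku "github.com/cyberdelia/heroku-go/v3" // assumed vendored import path
)

// printDomains lists every domain routed to the app.
func printDomains(ctx context.Context, h *heroku.Service, app string) error {
	domains, err := h.DomainList(ctx, app, nil) // nil *ListRange uses default paging
	if err != nil {
		return err
	}
	for _, d := range domains {
		fmt.Println(d.Hostname)
	}
	return nil
}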
type Dyno struct { - AttachURL *string `json:"attach_url"` // a URL to stream output from for attached processes or null for + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + Name string `json:"name" url:"name,key"` // unique name of app + } `json:"app" url:"app,key"` // app formation belongs to + AttachURL *string `json:"attach_url" url:"attach_url,key"` // a URL to stream output from for attached processes or null for // non-attached processes - Command string `json:"command"` // command used to start this process - CreatedAt time.Time `json:"created_at"` // when dyno was created - ID string `json:"id"` // unique identifier of this dyno - Name string `json:"name"` // the name of this process on this dyno + Command string `json:"command" url:"command,key"` // command used to start this process + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when dyno was created + ID string `json:"id" url:"id,key"` // unique identifier of this dyno + Name string `json:"name" url:"name,key"` // the name of this process on this dyno Release struct { - ID string `json:"id"` // unique identifier of release - Version int `json:"version"` // unique version assigned to the release - } `json:"release"` // app release of the dyno - Size string `json:"size"` // dyno size (default: "1X") - State string `json:"state"` // current status of process (either: crashed, down, idle, starting, or + ID string `json:"id" url:"id,key"` // unique identifier of release + Version int `json:"version" url:"version,key"` // unique version assigned to the release + } `json:"release" url:"release,key"` // app release of the dyno + Size string `json:"size" url:"size,key"` // dyno size (default: "standard-1X") + State string `json:"state" url:"state,key"` // current status of process (either: crashed, down, idle, starting, or // up) - Type string `json:"type"` // type of process - UpdatedAt time.Time `json:"updated_at"` // when process last changed state + Type string `json:"type" url:"type,key"` // type of process + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when process last changed state } type DynoCreateOpts struct { - Attach *bool `json:"attach,omitempty"` // whether to stream output or not - Command string `json:"command"` // command used to start this process - Env *map[string]string `json:"env,omitempty"` // custom environment to add to the dyno config vars - Size *string `json:"size,omitempty"` // dyno size (default: "1X") + Attach *bool `json:"attach,omitempty" url:"attach,omitempty,key"` // whether to stream output or not + Command string `json:"command" url:"command,key"` // command used to start this process + Env *map[string]string `json:"env,omitempty" url:"env,omitempty,key"` // custom environment to add to the dyno config vars + ForceNoTty *bool `json:"force_no_tty,omitempty" url:"force_no_tty,omitempty,key"` // force an attached one-off dyno to not run in a tty + Size *string `json:"size,omitempty" url:"size,omitempty,key"` // dyno size (default: "standard-1X") + TimeToLive *int `json:"time_to_live,omitempty" url:"time_to_live,omitempty,key"` // seconds until dyno expires, after which it will soon be killed + Type *string `json:"type,omitempty" url:"type,omitempty,key"` // type of process +} +type DynoCreateResult struct { + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + Name string `json:"name" url:"name,key"` // unique name of app + } `json:"app" url:"app,key"` // app formation belongs to + AttachURL *string 
`json:"attach_url" url:"attach_url,key"` // a URL to stream output from for attached processes or null for + // non-attached processes + Command string `json:"command" url:"command,key"` // command used to start this process + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when dyno was created + ID string `json:"id" url:"id,key"` // unique identifier of this dyno + Name string `json:"name" url:"name,key"` // the name of this process on this dyno + Release struct { + ID string `json:"id" url:"id,key"` // unique identifier of release + Version int `json:"version" url:"version,key"` // unique version assigned to the release + } `json:"release" url:"release,key"` // app release of the dyno + Size string `json:"size" url:"size,key"` // dyno size (default: "standard-1X") + State string `json:"state" url:"state,key"` // current status of process (either: crashed, down, idle, starting, or + // up) + Type string `json:"type" url:"type,key"` // type of process + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when process last changed state } // Create a new dyno. -func (s *Service) DynoCreate(appIdentity string, o struct { - Attach *bool `json:"attach,omitempty"` // whether to stream output or not - Command string `json:"command"` // command used to start this process - Env *map[string]string `json:"env,omitempty"` // custom environment to add to the dyno config vars - Size *string `json:"size,omitempty"` // dyno size (default: "1X") -}) (*Dyno, error) { - var dyno Dyno - return &dyno, s.Post(&dyno, fmt.Sprintf("/apps/%v/dynos", appIdentity), o) +func (s *Service) DynoCreate(ctx context.Context, appIdentity string, o DynoCreateOpts) (*DynoCreateResult, error) { + var dyno DynoCreateResult + return &dyno, s.Post(ctx, &dyno, fmt.Sprintf("/apps/%v/dynos", appIdentity), o) } +type DynoRestartResult struct{} + // Restart dyno. -func (s *Service) DynoRestart(appIdentity string, dynoIdentity string) error { - return s.Delete(fmt.Sprintf("/apps/%v/dynos/%v", appIdentity, dynoIdentity)) +func (s *Service) DynoRestart(ctx context.Context, appIdentity string, dynoIdentity string) (DynoRestartResult, error) { + var dyno DynoRestartResult + return dyno, s.Delete(ctx, &dyno, fmt.Sprintf("/apps/%v/dynos/%v", appIdentity, dynoIdentity)) } -// Restart all dynos -func (s *Service) DynoRestartAll(appIdentity string) error { - return s.Delete(fmt.Sprintf("/apps/%v/dynos", appIdentity)) +type DynoRestartAllResult struct{} + +// Restart all dynos. +func (s *Service) DynoRestartAll(ctx context.Context, appIdentity string) (DynoRestartAllResult, error) { + var dyno DynoRestartAllResult + return dyno, s.Delete(ctx, &dyno, fmt.Sprintf("/apps/%v/dynos", appIdentity)) +} + +type DynoStopResult struct{} + +// Stop dyno. 
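Editor's illustration, not part of the vendored patch: every call in this revision now takes a context.Context first, and each endpoint returns its own *Result type rather than a shared struct. Below is a hedged sketch of running a one-off dyno through the new DynoCreate signature. The import path, credentials, and app name are placeholders; Transport and NewService are the client's existing helpers, assumed unchanged by this patch. The stop endpoint's implementation follows the sketch.

package main

import (
	"context"
	"fmt"
	"log"
	"net/http"

	heroku "github.com/cyberdelia/heroku-go/v3" // assumed vendored import path
)

func main() {
	// Placeholder credentials; basic auth is carried by the client's
	// existing Transport type.
	client := &http.Client{Transport: &heroku.Transport{
		Username: "user@example.com",
		Password: "heroku-api-token",
	}}
	svc := heroku.NewService(client)

	attach := false
	size := "standard-1X"
	dyno, err := svc.DynoCreate(context.Background(), "my-app", heroku.DynoCreateOpts{
		Command: "rake db:migrate",
		Attach:  &attach,
		Size:    &size,
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(dyno.Name, dyno.State)
}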
+func (s *Service) DynoStop(ctx context.Context, appIdentity string, dynoIdentity string) (DynoStopResult, error) { + var dyno DynoStopResult + return dyno, s.Post(ctx, &dyno, fmt.Sprintf("/apps/%v/dynos/%v/actions/stop", appIdentity, dynoIdentity), nil) +} + +type DynoInfoResult struct { + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + Name string `json:"name" url:"name,key"` // unique name of app + } `json:"app" url:"app,key"` // app formation belongs to + AttachURL *string `json:"attach_url" url:"attach_url,key"` // a URL to stream output from for attached processes or null for + // non-attached processes + Command string `json:"command" url:"command,key"` // command used to start this process + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when dyno was created + ID string `json:"id" url:"id,key"` // unique identifier of this dyno + Name string `json:"name" url:"name,key"` // the name of this process on this dyno + Release struct { + ID string `json:"id" url:"id,key"` // unique identifier of release + Version int `json:"version" url:"version,key"` // unique version assigned to the release + } `json:"release" url:"release,key"` // app release of the dyno + Size string `json:"size" url:"size,key"` // dyno size (default: "standard-1X") + State string `json:"state" url:"state,key"` // current status of process (either: crashed, down, idle, starting, or + // up) + Type string `json:"type" url:"type,key"` // type of process + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when process last changed state } // Info for existing dyno. -func (s *Service) DynoInfo(appIdentity string, dynoIdentity string) (*Dyno, error) { - var dyno Dyno - return &dyno, s.Get(&dyno, fmt.Sprintf("/apps/%v/dynos/%v", appIdentity, dynoIdentity), nil) +func (s *Service) DynoInfo(ctx context.Context, appIdentity string, dynoIdentity string) (*DynoInfoResult, error) { + var dyno DynoInfoResult + return &dyno, s.Get(ctx, &dyno, fmt.Sprintf("/apps/%v/dynos/%v", appIdentity, dynoIdentity), nil, nil) +} + +type DynoListResult []struct { + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + Name string `json:"name" url:"name,key"` // unique name of app + } `json:"app" url:"app,key"` // app formation belongs to + AttachURL *string `json:"attach_url" url:"attach_url,key"` // a URL to stream output from for attached processes or null for + // non-attached processes + Command string `json:"command" url:"command,key"` // command used to start this process + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when dyno was created + ID string `json:"id" url:"id,key"` // unique identifier of this dyno + Name string `json:"name" url:"name,key"` // the name of this process on this dyno + Release struct { + ID string `json:"id" url:"id,key"` // unique identifier of release + Version int `json:"version" url:"version,key"` // unique version assigned to the release + } `json:"release" url:"release,key"` // app release of the dyno + Size string `json:"size" url:"size,key"` // dyno size (default: "standard-1X") + State string `json:"state" url:"state,key"` // current status of process (either: crashed, down, idle, starting, or + // up) + Type string `json:"type" url:"type,key"` // type of process + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when process last changed state } // List existing dynos. 
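A second editor's sketch, building on the setup above: the list endpoint defined next accepts an optional *ListRange, whose Field and Max fields (declared elsewhere in this client and untouched by the patch) are translated into the API's Range header. Passing nil keeps the server's paging defaults.

// firstDynos fetches at most 50 dynos for an app, ordered by id.
func firstDynos(ctx context.Context, svc *heroku.Service, app string) error {
	lr := &heroku.ListRange{Field: "id", Max: 50}
	dynos, err := svc.DynoList(ctx, app, lr)
	if err != nil {
		return err
	}
	for _, d := range dynos {
		fmt.Printf("%s\t%s\t%s\n", d.Name, d.Size, d.State)
	}
	return nil
}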
-func (s *Service) DynoList(appIdentity string, lr *ListRange) ([]*Dyno, error) {
-	var dynoList []*Dyno
-	return dynoList, s.Get(&dynoList, fmt.Sprintf("/apps/%v/dynos", appIdentity), lr)
+func (s *Service) DynoList(ctx context.Context, appIdentity string, lr *ListRange) (DynoListResult, error) {
+	var dyno DynoListResult
+	return dyno, s.Get(ctx, &dyno, fmt.Sprintf("/apps/%v/dynos", appIdentity), nil, lr)
+}
+
+// Dyno sizes are the values and details of sizes that can be assigned
+// to dynos. This information can also be found at:
+// https://devcenter.heroku.com/articles/dyno-types
+type DynoSize struct {
+	Compute          int       `json:"compute" url:"compute,key"`                       // minimum vCPUs, non-dedicated may get more depending on load
+	Cost             *struct{} `json:"cost" url:"cost,key"`                             // price information for this dyno size
+	Dedicated        bool      `json:"dedicated" url:"dedicated,key"`                   // whether this dyno will be dedicated to one user
+	DynoUnits        int       `json:"dyno_units" url:"dyno_units,key"`                 // unit of consumption for Heroku Enterprise customers
+	ID               string    `json:"id" url:"id,key"`                                 // unique identifier of this dyno size
+	Memory           float64   `json:"memory" url:"memory,key"`                         // amount of RAM in GB
+	Name             string    `json:"name" url:"name,key"`                             // the name of this dyno-size
+	PrivateSpaceOnly bool      `json:"private_space_only" url:"private_space_only,key"` // whether this dyno can only be provisioned in a private space
+}
+type DynoSizeInfoResult struct {
+	Compute          int       `json:"compute" url:"compute,key"`                       // minimum vCPUs, non-dedicated may get more depending on load
+	Cost             *struct{} `json:"cost" url:"cost,key"`                             // price information for this dyno size
+	Dedicated        bool      `json:"dedicated" url:"dedicated,key"`                   // whether this dyno will be dedicated to one user
+	DynoUnits        int       `json:"dyno_units" url:"dyno_units,key"`                 // unit of consumption for Heroku Enterprise customers
+	ID               string    `json:"id" url:"id,key"`                                 // unique identifier of this dyno size
+	Memory           float64   `json:"memory" url:"memory,key"`                         // amount of RAM in GB
+	Name             string    `json:"name" url:"name,key"`                             // the name of this dyno-size
+	PrivateSpaceOnly bool      `json:"private_space_only" url:"private_space_only,key"` // whether this dyno can only be provisioned in a private space
+}
+
+// Info for existing dyno size.
+func (s *Service) DynoSizeInfo(ctx context.Context, dynoSizeIdentity string) (*DynoSizeInfoResult, error) {
+	var dynoSize DynoSizeInfoResult
+	return &dynoSize, s.Get(ctx, &dynoSize, fmt.Sprintf("/dyno-sizes/%v", dynoSizeIdentity), nil, nil)
+}
+
+type DynoSizeListResult []struct {
+	Compute          int       `json:"compute" url:"compute,key"`                       // minimum vCPUs, non-dedicated may get more depending on load
+	Cost             *struct{} `json:"cost" url:"cost,key"`                             // price information for this dyno size
+	Dedicated        bool      `json:"dedicated" url:"dedicated,key"`                   // whether this dyno will be dedicated to one user
+	DynoUnits        int       `json:"dyno_units" url:"dyno_units,key"`                 // unit of consumption for Heroku Enterprise customers
+	ID               string    `json:"id" url:"id,key"`                                 // unique identifier of this dyno size
+	Memory           float64   `json:"memory" url:"memory,key"`                         // amount of RAM in GB
+	Name             string    `json:"name" url:"name,key"`                             // the name of this dyno-size
+	PrivateSpaceOnly bool      `json:"private_space_only" url:"private_space_only,key"` // whether this dyno can only be provisioned in a private space
+}
+
+// List existing dyno sizes.
+func (s *Service) DynoSizeList(ctx context.Context, lr *ListRange) (DynoSizeListResult, error) { + var dynoSize DynoSizeListResult + return dynoSize, s.Get(ctx, &dynoSize, fmt.Sprintf("/dyno-sizes"), nil, lr) +} + +// An event represents an action performed on another API resource. +type Event struct { + Action string `json:"action" url:"action,key"` // the operation performed on the resource + Actor struct { + Email string `json:"email" url:"email,key"` // unique email address of account + ID string `json:"id" url:"id,key"` // unique identifier of an account + } `json:"actor" url:"actor,key"` // user that performed the operation + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when the event was created + Data struct { + AllowTracking bool `json:"allow_tracking" url:"allow_tracking,key"` // whether to allow third party web activity tracking + Beta bool `json:"beta" url:"beta,key"` // whether allowed to utilize beta Heroku features + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when account was created + DefaultOrganization *struct { + ID string `json:"id" url:"id,key"` // unique identifier of organization + Name string `json:"name" url:"name,key"` // unique name of organization + } `json:"default_organization" url:"default_organization,key"` // organization selected by default + DelinquentAt *time.Time `json:"delinquent_at" url:"delinquent_at,key"` // when account became delinquent + Email string `json:"email" url:"email,key"` // unique email address of account + Federated bool `json:"federated" url:"federated,key"` // whether the user is federated and belongs to an Identity Provider + ID string `json:"id" url:"id,key"` // unique identifier of an account + IdentityProvider *struct { + ID string `json:"id" url:"id,key"` // unique identifier of this identity provider + Organization struct { + Name string `json:"name" url:"name,key"` // unique name of organization + } `json:"organization" url:"organization,key"` + } `json:"identity_provider" url:"identity_provider,key"` // Identity Provider details for federated users. + LastLogin *time.Time `json:"last_login" url:"last_login,key"` // when account last authorized with Heroku + Name *string `json:"name" url:"name,key"` // full name of the account owner + SmsNumber *string `json:"sms_number" url:"sms_number,key"` // SMS number of account + SuspendedAt *time.Time `json:"suspended_at" url:"suspended_at,key"` // when account was suspended + TwoFactorAuthentication bool `json:"two_factor_authentication" url:"two_factor_authentication,key"` // whether two-factor auth is enabled on the account + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when account was updated + Verified bool `json:"verified" url:"verified,key"` // whether account has been verified with billing information + } `json:"data" url:"data,key"` // An account represents an individual signed up to use the Heroku + // platform. 
+	ID           string     `json:"id" url:"id,key"`                       // unique identifier of an event
+	PreviousData struct{}   `json:"previous_data" url:"previous_data,key"` // data fields that were changed during update with previous values
+	PublishedAt  *time.Time `json:"published_at" url:"published_at,key"`   // when the event was published
+	Resource     string     `json:"resource" url:"resource,key"`           // the type of resource affected
+	Sequence     *string    `json:"sequence" url:"sequence,key"`           // a numeric string representing the event's sequence
+	UpdatedAt    time.Time  `json:"updated_at" url:"updated_at,key"`       // when the event was updated (same as created)
+	Version      string     `json:"version" url:"version,key"`             // the event's API version string
+}
+
+// A failed event represents a failure of an action performed on another
+// API resource.
+type FailedEvent struct {
+	Action   string  `json:"action" url:"action,key"`     // The attempted operation performed on the resource.
+	Code     *int    `json:"code" url:"code,key"`         // An HTTP status code.
+	ErrorID  *string `json:"error_id" url:"error_id,key"` // ID of error raised.
+	Message  string  `json:"message" url:"message,key"`   // A detailed error message.
+	Method   string  `json:"method" url:"method,key"`     // The HTTP method type of the failed action.
+	Path     string  `json:"path" url:"path,key"`         // The path of the attempted operation.
+	Resource *struct {
+		ID   string `json:"id" url:"id,key"`     // Unique identifier of a resource.
+		Name string `json:"name" url:"name,key"` // the type of resource affected
+	} `json:"resource" url:"resource,key"` // The related resource of the failed action.
+}
+
+// Filters are special endpoints to allow for API consumers to specify a
+// subset of resources to consume in order to reduce the number of
+// requests that are performed. Each filter endpoint is responsible for
+// determining its supported request format. The endpoints are over POST
+// in order to handle large request bodies without hitting request URI
+// query length limitations, but the requests themselves are idempotent
+// and will not have side effects.
+type FilterApps struct{}
+type FilterAppsAppsOpts struct {
+	In *struct {
+		ID *[]*string `json:"id,omitempty" url:"id,omitempty,key"`
+	} `json:"in,omitempty" url:"in,omitempty,key"`
+}
+type FilterAppsAppsResult []struct {
+	ArchivedAt                   *time.Time `json:"archived_at" url:"archived_at,key"`                                       // when app was archived
+	BuildpackProvidedDescription *string    `json:"buildpack_provided_description" url:"buildpack_provided_description,key"` // description from buildpack of app
+	CreatedAt                    time.Time  `json:"created_at" url:"created_at,key"`                                         // when app was created
+	GitURL                       string     `json:"git_url" url:"git_url,key"`                                               // git repo URL of app
+	ID                           string     `json:"id" url:"id,key"`                                                         // unique identifier of app
+	Joined                       bool       `json:"joined" url:"joined,key"`                                                 // is the current member a collaborator on this app.
+	Locked                       bool       `json:"locked" url:"locked,key"`                                                 // are other organization members forbidden from joining this app.
+ Maintenance bool `json:"maintenance" url:"maintenance,key"` // maintenance status of app + Name string `json:"name" url:"name,key"` // unique name of app + Organization *struct { + Name string `json:"name" url:"name,key"` // unique name of organization + } `json:"organization" url:"organization,key"` // organization that owns this app + Owner *struct { + Email string `json:"email" url:"email,key"` // unique email address of account + ID string `json:"id" url:"id,key"` // unique identifier of an account + } `json:"owner" url:"owner,key"` // identity of app owner + Region struct { + ID string `json:"id" url:"id,key"` // unique identifier of region + Name string `json:"name" url:"name,key"` // unique name of region + } `json:"region" url:"region,key"` // identity of app region + ReleasedAt *time.Time `json:"released_at" url:"released_at,key"` // when app was released + RepoSize *int `json:"repo_size" url:"repo_size,key"` // git repo size in bytes of app + SlugSize *int `json:"slug_size" url:"slug_size,key"` // slug size in bytes of app + Space *struct { + ID string `json:"id" url:"id,key"` // unique identifier of space + Name string `json:"name" url:"name,key"` // unique name of space + } `json:"space" url:"space,key"` // identity of space + Stack struct { + ID string `json:"id" url:"id,key"` // unique identifier of stack + Name string `json:"name" url:"name,key"` // unique name of stack + } `json:"stack" url:"stack,key"` // identity of app stack + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when app was updated + WebURL string `json:"web_url" url:"web_url,key"` // web URL of app +} + +// Request an apps list filtered by app id. +func (s *Service) FilterAppsApps(ctx context.Context, o FilterAppsAppsOpts) (FilterAppsAppsResult, error) { + var filterApps FilterAppsAppsResult + return filterApps, s.Post(ctx, &filterApps, fmt.Sprintf("/filters/apps"), o) } // The formation of processes that should be maintained for an app. @@ -879,176 +2725,580 @@ func (s *Service) DynoList(appIdentity string, lr *ListRange) ([]*Dyno, error) { // `process_types` attribute for the [slug](#slug) currently released on // an app. 
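A hedged editor's sketch of calling the filter endpoint above (the formation section of the patch continues right after it). Because FilterAppsAppsOpts declares In as an anonymous struct, the composite literal must repeat the field tags verbatim for the types to match; the helper name and setup from the earlier sketch are assumed.

// appsByID asks /filters/apps for just the given app IDs.
func appsByID(ctx context.Context, svc *heroku.Service, ids ...string) (heroku.FilterAppsAppsResult, error) {
	ptrs := make([]*string, len(ids))
	for i := range ids {
		ptrs[i] = &ids[i]
	}
	opts := heroku.FilterAppsAppsOpts{
		In: &struct {
			ID *[]*string `json:"id,omitempty" url:"id,omitempty,key"`
		}{ID: &ptrs},
	}
	return svc.FilterAppsApps(ctx, opts)
}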
type Formation struct { - Command string `json:"command"` // command to use to launch this process - CreatedAt time.Time `json:"created_at"` // when process type was created - ID string `json:"id"` // unique identifier of this process type - Quantity int `json:"quantity"` // number of processes to maintain - Size string `json:"size"` // dyno size (default: "1X") - Type string `json:"type"` // type of process to maintain - UpdatedAt time.Time `json:"updated_at"` // when dyno type was updated + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + Name string `json:"name" url:"name,key"` // unique name of app + } `json:"app" url:"app,key"` // app formation belongs to + Command string `json:"command" url:"command,key"` // command to use to launch this process + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when process type was created + ID string `json:"id" url:"id,key"` // unique identifier of this process type + Quantity int `json:"quantity" url:"quantity,key"` // number of processes to maintain + Size string `json:"size" url:"size,key"` // dyno size (default: "standard-1X") + Type string `json:"type" url:"type,key"` // type of process to maintain + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when dyno type was updated +} +type FormationInfoResult struct { + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + Name string `json:"name" url:"name,key"` // unique name of app + } `json:"app" url:"app,key"` // app formation belongs to + Command string `json:"command" url:"command,key"` // command to use to launch this process + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when process type was created + ID string `json:"id" url:"id,key"` // unique identifier of this process type + Quantity int `json:"quantity" url:"quantity,key"` // number of processes to maintain + Size string `json:"size" url:"size,key"` // dyno size (default: "standard-1X") + Type string `json:"type" url:"type,key"` // type of process to maintain + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when dyno type was updated } // Info for a process type -func (s *Service) FormationInfo(appIdentity string, formationIdentity string) (*Formation, error) { - var formation Formation - return &formation, s.Get(&formation, fmt.Sprintf("/apps/%v/formation/%v", appIdentity, formationIdentity), nil) +func (s *Service) FormationInfo(ctx context.Context, appIdentity string, formationIdentity string) (*FormationInfoResult, error) { + var formation FormationInfoResult + return &formation, s.Get(ctx, &formation, fmt.Sprintf("/apps/%v/formation/%v", appIdentity, formationIdentity), nil, nil) +} + +type FormationListResult []struct { + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + Name string `json:"name" url:"name,key"` // unique name of app + } `json:"app" url:"app,key"` // app formation belongs to + Command string `json:"command" url:"command,key"` // command to use to launch this process + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when process type was created + ID string `json:"id" url:"id,key"` // unique identifier of this process type + Quantity int `json:"quantity" url:"quantity,key"` // number of processes to maintain + Size string `json:"size" url:"size,key"` // dyno size (default: "standard-1X") + Type string `json:"type" url:"type,key"` // type of process to maintain + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when dyno type was 
updated } // List process type formation -func (s *Service) FormationList(appIdentity string, lr *ListRange) ([]*Formation, error) { - var formationList []*Formation - return formationList, s.Get(&formationList, fmt.Sprintf("/apps/%v/formation", appIdentity), lr) +func (s *Service) FormationList(ctx context.Context, appIdentity string, lr *ListRange) (FormationListResult, error) { + var formation FormationListResult + return formation, s.Get(ctx, &formation, fmt.Sprintf("/apps/%v/formation", appIdentity), nil, lr) } type FormationBatchUpdateOpts struct { Updates []struct { - Process string `json:"process"` // unique identifier of this process type - Quantity *int `json:"quantity,omitempty"` // number of processes to maintain - Size *string `json:"size,omitempty"` // dyno size (default: "1X") - } `json:"updates"` // Array with formation updates. Each element must have "process", the - // id or name of the process type to be updated, and can optionally - // update its "quantity" or "size". + Quantity *int `json:"quantity,omitempty" url:"quantity,omitempty,key"` // number of processes to maintain + Size *string `json:"size,omitempty" url:"size,omitempty,key"` // dyno size (default: "standard-1X") + Type string `json:"type" url:"type,key"` // type of process to maintain + } `json:"updates" url:"updates,key"` // Array with formation updates. Each element must have "type", the id + // or name of the process type to be updated, and can optionally update + // its "quantity" or "size". +} +type FormationBatchUpdateResult []struct { + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + Name string `json:"name" url:"name,key"` // unique name of app + } `json:"app" url:"app,key"` // app formation belongs to + Command string `json:"command" url:"command,key"` // command to use to launch this process + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when process type was created + ID string `json:"id" url:"id,key"` // unique identifier of this process type + Quantity int `json:"quantity" url:"quantity,key"` // number of processes to maintain + Size string `json:"size" url:"size,key"` // dyno size (default: "standard-1X") + Type string `json:"type" url:"type,key"` // type of process to maintain + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when dyno type was updated } // Batch update process types -func (s *Service) FormationBatchUpdate(appIdentity string, o struct { - Updates []struct { - Process string `json:"process"` // unique identifier of this process type - Quantity *int `json:"quantity,omitempty"` // number of processes to maintain - Size *string `json:"size,omitempty"` // dyno size (default: "1X") - } `json:"updates"` // Array with formation updates. Each element must have "process", the - // id or name of the process type to be updated, and can optionally - // update its "quantity" or "size". 
-}) (*Formation, error) { - var formation Formation - return &formation, s.Patch(&formation, fmt.Sprintf("/apps/%v/formation", appIdentity), o) +func (s *Service) FormationBatchUpdate(ctx context.Context, appIdentity string, o FormationBatchUpdateOpts) (FormationBatchUpdateResult, error) { + var formation FormationBatchUpdateResult + return formation, s.Patch(ctx, &formation, fmt.Sprintf("/apps/%v/formation", appIdentity), o) } type FormationUpdateOpts struct { - Quantity *int `json:"quantity,omitempty"` // number of processes to maintain - Size *string `json:"size,omitempty"` // dyno size (default: "1X") + Quantity *int `json:"quantity,omitempty" url:"quantity,omitempty,key"` // number of processes to maintain + Size *string `json:"size,omitempty" url:"size,omitempty,key"` // dyno size (default: "standard-1X") +} +type FormationUpdateResult struct { + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + Name string `json:"name" url:"name,key"` // unique name of app + } `json:"app" url:"app,key"` // app formation belongs to + Command string `json:"command" url:"command,key"` // command to use to launch this process + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when process type was created + ID string `json:"id" url:"id,key"` // unique identifier of this process type + Quantity int `json:"quantity" url:"quantity,key"` // number of processes to maintain + Size string `json:"size" url:"size,key"` // dyno size (default: "standard-1X") + Type string `json:"type" url:"type,key"` // type of process to maintain + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when dyno type was updated } // Update process type -func (s *Service) FormationUpdate(appIdentity string, formationIdentity string, o struct { - Quantity *int `json:"quantity,omitempty"` // number of processes to maintain - Size *string `json:"size,omitempty"` // dyno size (default: "1X") -}) (*Formation, error) { - var formation Formation - return &formation, s.Patch(&formation, fmt.Sprintf("/apps/%v/formation/%v", appIdentity, formationIdentity), o) +func (s *Service) FormationUpdate(ctx context.Context, appIdentity string, formationIdentity string, o FormationUpdateOpts) (*FormationUpdateResult, error) { + var formation FormationUpdateResult + return &formation, s.Patch(ctx, &formation, fmt.Sprintf("/apps/%v/formation/%v", appIdentity, formationIdentity), o) +} + +// Identity Providers represent the SAML configuration of an +// Organization. 
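Worth noting from the hunk above: the batch-update element key changed in this revision, from "process" in the old payload to "type" in the new one. A hedged editor's sketch of scaling with the new shape, again repeating the anonymous struct's tags so the literal type-checks (the identity-provider section of the patch follows):

// scaleWeb sets the web process type to n standard-1X dynos in one PATCH.
func scaleWeb(ctx context.Context, svc *heroku.Service, app string, n int) error {
	size := "standard-1X"
	_, err := svc.FormationBatchUpdate(ctx, app, heroku.FormationBatchUpdateOpts{
		Updates: []struct {
			Quantity *int    `json:"quantity,omitempty" url:"quantity,omitempty,key"`
			Size     *string `json:"size,omitempty" url:"size,omitempty,key"`
			Type     string  `json:"type" url:"type,key"`
		}{
			{Quantity: &n, Size: &size, Type: "web"},
		},
	})
	return err
}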
+type IdentityProvider struct { + Certificate string `json:"certificate" url:"certificate,key"` // raw contents of the public certificate (eg: .crt or .pem file) + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when provider record was created + EntityID string `json:"entity_id" url:"entity_id,key"` // URL identifier provided by the identity provider + ID string `json:"id" url:"id,key"` // unique identifier of this identity provider + Organization *struct { + Name string `json:"name" url:"name,key"` // unique name of organization + } `json:"organization" url:"organization,key"` // organization associated with this identity provider + SloTargetURL string `json:"slo_target_url" url:"slo_target_url,key"` // single log out URL for this identity provider + SsoTargetURL string `json:"sso_target_url" url:"sso_target_url,key"` // single sign on URL for this identity provider + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when the identity provider record was updated +} +type IdentityProviderListResult []struct { + Certificate string `json:"certificate" url:"certificate,key"` // raw contents of the public certificate (eg: .crt or .pem file) + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when provider record was created + EntityID string `json:"entity_id" url:"entity_id,key"` // URL identifier provided by the identity provider + ID string `json:"id" url:"id,key"` // unique identifier of this identity provider + Organization *struct { + Name string `json:"name" url:"name,key"` // unique name of organization + } `json:"organization" url:"organization,key"` // organization associated with this identity provider + SloTargetURL string `json:"slo_target_url" url:"slo_target_url,key"` // single log out URL for this identity provider + SsoTargetURL string `json:"sso_target_url" url:"sso_target_url,key"` // single sign on URL for this identity provider + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when the identity provider record was updated +} + +// Get a list of an organization's Identity Providers +func (s *Service) IdentityProviderList(ctx context.Context, organizationName string, lr *ListRange) (IdentityProviderListResult, error) { + var identityProvider IdentityProviderListResult + return identityProvider, s.Get(ctx, &identityProvider, fmt.Sprintf("/organizations/%v/identity-providers", organizationName), nil, lr) +} + +type IdentityProviderCreateOpts struct { + Certificate string `json:"certificate" url:"certificate,key"` // raw contents of the public certificate (eg: .crt or .pem file) + EntityID string `json:"entity_id" url:"entity_id,key"` // URL identifier provided by the identity provider + SloTargetURL *string `json:"slo_target_url,omitempty" url:"slo_target_url,omitempty,key"` // single log out URL for this identity provider + SsoTargetURL string `json:"sso_target_url" url:"sso_target_url,key"` // single sign on URL for this identity provider +} +type IdentityProviderCreateResult struct { + Certificate string `json:"certificate" url:"certificate,key"` // raw contents of the public certificate (eg: .crt or .pem file) + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when provider record was created + EntityID string `json:"entity_id" url:"entity_id,key"` // URL identifier provided by the identity provider + ID string `json:"id" url:"id,key"` // unique identifier of this identity provider + Organization *struct { + Name string `json:"name" url:"name,key"` // unique name of organization + } 
`json:"organization" url:"organization,key"` // organization associated with this identity provider + SloTargetURL string `json:"slo_target_url" url:"slo_target_url,key"` // single log out URL for this identity provider + SsoTargetURL string `json:"sso_target_url" url:"sso_target_url,key"` // single sign on URL for this identity provider + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when the identity provider record was updated +} + +// Create an Identity Provider for an organization +func (s *Service) IdentityProviderCreate(ctx context.Context, organizationName string, o IdentityProviderCreateOpts) (*IdentityProviderCreateResult, error) { + var identityProvider IdentityProviderCreateResult + return &identityProvider, s.Post(ctx, &identityProvider, fmt.Sprintf("/organizations/%v/identity-providers", organizationName), o) +} + +type IdentityProviderUpdateOpts struct { + Certificate *string `json:"certificate,omitempty" url:"certificate,omitempty,key"` // raw contents of the public certificate (eg: .crt or .pem file) + EntityID *string `json:"entity_id,omitempty" url:"entity_id,omitempty,key"` // URL identifier provided by the identity provider + SloTargetURL *string `json:"slo_target_url,omitempty" url:"slo_target_url,omitempty,key"` // single log out URL for this identity provider + SsoTargetURL *string `json:"sso_target_url,omitempty" url:"sso_target_url,omitempty,key"` // single sign on URL for this identity provider +} +type IdentityProviderUpdateResult struct { + Certificate string `json:"certificate" url:"certificate,key"` // raw contents of the public certificate (eg: .crt or .pem file) + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when provider record was created + EntityID string `json:"entity_id" url:"entity_id,key"` // URL identifier provided by the identity provider + ID string `json:"id" url:"id,key"` // unique identifier of this identity provider + Organization *struct { + Name string `json:"name" url:"name,key"` // unique name of organization + } `json:"organization" url:"organization,key"` // organization associated with this identity provider + SloTargetURL string `json:"slo_target_url" url:"slo_target_url,key"` // single log out URL for this identity provider + SsoTargetURL string `json:"sso_target_url" url:"sso_target_url,key"` // single sign on URL for this identity provider + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when the identity provider record was updated +} + +// Update an organization's Identity Provider +func (s *Service) IdentityProviderUpdate(ctx context.Context, organizationName string, identityProviderID string, o IdentityProviderUpdateOpts) (*IdentityProviderUpdateResult, error) { + var identityProvider IdentityProviderUpdateResult + return &identityProvider, s.Patch(ctx, &identityProvider, fmt.Sprintf("/organizations/%v/identity-providers/%v", organizationName, identityProviderID), o) +} + +type IdentityProviderDeleteResult struct { + Certificate string `json:"certificate" url:"certificate,key"` // raw contents of the public certificate (eg: .crt or .pem file) + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when provider record was created + EntityID string `json:"entity_id" url:"entity_id,key"` // URL identifier provided by the identity provider + ID string `json:"id" url:"id,key"` // unique identifier of this identity provider + Organization *struct { + Name string `json:"name" url:"name,key"` // unique name of organization + } `json:"organization" url:"organization,key"` 
// organization associated with this identity provider + SloTargetURL string `json:"slo_target_url" url:"slo_target_url,key"` // single log out URL for this identity provider + SsoTargetURL string `json:"sso_target_url" url:"sso_target_url,key"` // single sign on URL for this identity provider + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when the identity provider record was updated +} + +// Delete an organization's Identity Provider +func (s *Service) IdentityProviderDelete(ctx context.Context, organizationName string, identityProviderID string) (*IdentityProviderDeleteResult, error) { + var identityProvider IdentityProviderDeleteResult + return &identityProvider, s.Delete(ctx, &identityProvider, fmt.Sprintf("/organizations/%v/identity-providers/%v", organizationName, identityProviderID)) +} + +// An inbound-ruleset is a collection of rules that specify what hosts +// can or cannot connect to an application. +type InboundRuleset struct { + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when inbound-ruleset was created + CreatedBy string `json:"created_by" url:"created_by,key"` // unique email address of account + ID string `json:"id" url:"id,key"` // unique identifier of an inbound-ruleset + Rules []struct { + Action string `json:"action" url:"action,key"` // states whether the connection is allowed or denied + Source string `json:"source" url:"source,key"` // is the request’s source in CIDR notation + } `json:"rules" url:"rules,key"` +} +type InboundRulesetInfoResult struct { + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when inbound-ruleset was created + CreatedBy string `json:"created_by" url:"created_by,key"` // unique email address of account + ID string `json:"id" url:"id,key"` // unique identifier of an inbound-ruleset + Rules []struct { + Action string `json:"action" url:"action,key"` // states whether the connection is allowed or denied + Source string `json:"source" url:"source,key"` // is the request’s source in CIDR notation + } `json:"rules" url:"rules,key"` +} + +// Current inbound ruleset for a space +func (s *Service) InboundRulesetInfo(ctx context.Context, spaceIdentity string) (*InboundRulesetInfoResult, error) { + var inboundRuleset InboundRulesetInfoResult + return &inboundRuleset, s.Get(ctx, &inboundRuleset, fmt.Sprintf("/spaces/%v/inbound-ruleset", spaceIdentity), nil, nil) +} + +type InboundRulesetListResult []struct { + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when inbound-ruleset was created + CreatedBy string `json:"created_by" url:"created_by,key"` // unique email address of account + ID string `json:"id" url:"id,key"` // unique identifier of an inbound-ruleset + Rules []struct { + Action string `json:"action" url:"action,key"` // states whether the connection is allowed or denied + Source string `json:"source" url:"source,key"` // is the request’s source in CIDR notation + } `json:"rules" url:"rules,key"` +} + +// List all inbound rulesets for a space +func (s *Service) InboundRulesetList(ctx context.Context, spaceIdentity string, lr *ListRange) (InboundRulesetListResult, error) { + var inboundRuleset InboundRulesetListResult + return inboundRuleset, s.Get(ctx, &inboundRuleset, fmt.Sprintf("/spaces/%v/inbound-rulesets", spaceIdentity), nil, lr) +} + +type InboundRulesetCreateOpts struct { + Rules *[]*struct { + Action string `json:"action" url:"action,key"` // states whether the connection is allowed or denied + Source string `json:"source" url:"source,key"` // is the request’s 
source in CIDR notation
+	} `json:"rules,omitempty" url:"rules,omitempty,key"`
+}
+
+// Create a new inbound ruleset
+func (s *Service) InboundRulesetCreate(ctx context.Context, spaceIdentity string, o InboundRulesetCreateOpts) (*InboundRuleset, error) {
+	var inboundRuleset InboundRuleset
+	return &inboundRuleset, s.Put(ctx, &inboundRuleset, fmt.Sprintf("/spaces/%v/inbound-ruleset", spaceIdentity), o)
+}
+
+// An invitation represents an invite sent to a user to use the Heroku
+// platform.
+type Invitation struct {
+	CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when invitation was created
+	User      struct {
+		Email string `json:"email" url:"email,key"` // unique email address of account
+		ID    string `json:"id" url:"id,key"`       // unique identifier of an account
+	} `json:"user" url:"user,key"`
+	VerificationRequired bool `json:"verification_required" url:"verification_required,key"` // if the invitation requires verification
+}
+
+// Info for invitation.
+func (s *Service) InvitationInfo(ctx context.Context, invitationIdentity string) (*Invitation, error) {
+	var invitation Invitation
+	return &invitation, s.Get(ctx, &invitation, fmt.Sprintf("/invitations/%v", invitationIdentity), nil, nil)
+}
+
+type InvitationCreateOpts struct {
+	Email string  `json:"email" url:"email,key"` // unique email address of account
+	Name  *string `json:"name" url:"name,key"`   // full name of the account owner
+}
+
+// Invite a user.
+func (s *Service) InvitationCreate(ctx context.Context, o InvitationCreateOpts) (*Invitation, error) {
+	var invitation Invitation
+	return &invitation, s.Post(ctx, &invitation, fmt.Sprintf("/invitations"), o)
+}
+
+type InvitationSendVerificationCodeOpts struct {
+	Method      *string `json:"method,omitempty" url:"method,omitempty,key"` // Transport used to send verification code
+	PhoneNumber string  `json:"phone_number" url:"phone_number,key"`         // Phone number to send verification code
+}
+
+// Send a verification code for an invitation via SMS/phone call.
+func (s *Service) InvitationSendVerificationCode(ctx context.Context, invitationIdentity string, o InvitationSendVerificationCodeOpts) (*Invitation, error) {
+	var invitation Invitation
+	return &invitation, s.Post(ctx, &invitation, fmt.Sprintf("/invitations/%v/actions/send-verification", invitationIdentity), o)
+}
+
+type InvitationVerifyOpts struct {
+	VerificationCode string `json:"verification_code" url:"verification_code,key"` // Value used to verify invitation
+}
+
+// Verify an invitation using a verification code.
+func (s *Service) InvitationVerify(ctx context.Context, invitationIdentity string, o InvitationVerifyOpts) (*Invitation, error) {
+	var invitation Invitation
+	return &invitation, s.Post(ctx, &invitation, fmt.Sprintf("/invitations/%v/actions/verify", invitationIdentity), o)
+}
+
+type InvitationFinalizeOpts struct {
+	Password             string `json:"password" url:"password,key"`                                          // current password on the account
+	PasswordConfirmation string `json:"password_confirmation" url:"password_confirmation,key"`                // confirmation of the password
+	ReceiveNewsletter    *bool  `json:"receive_newsletter,omitempty" url:"receive_newsletter,omitempty,key"` // whether this user should receive a newsletter or not
+}
+
+// Finalize Invitation and Create Account.
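InboundRulesetCreate above is worth a sketch because it issues a PUT, replacing the space's whole ruleset rather than appending to it, and because Rules is a pointer to a slice of anonymous structs. An editor's illustration under those assumptions (192.0.2.0/24 is the RFC 5737 documentation range, a placeholder); the invitation-finalize endpoint follows.

// replaceRuleset swaps in a ruleset that only admits one CIDR block.
func replaceRuleset(ctx context.Context, svc *heroku.Service, space string) error {
	rules := []*struct {
		Action string `json:"action" url:"action,key"`
		Source string `json:"source" url:"source,key"`
	}{
		{Action: "allow", Source: "192.0.2.0/24"},
	}
	_, err := svc.InboundRulesetCreate(ctx, space, heroku.InboundRulesetCreateOpts{
		Rules: &rules,
	})
	return err
}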
+func (s *Service) InvitationFinalize(ctx context.Context, invitationIdentity string, o InvitationFinalizeOpts) (*Invitation, error) { + var invitation Invitation + return &invitation, s.Patch(ctx, &invitation, fmt.Sprintf("/invitations/%v", invitationIdentity), o) +} + +// An invoice is an itemized bill of goods for an account which includes +// pricing and charges. +type Invoice struct { + ChargesTotal float64 `json:"charges_total" url:"charges_total,key"` // total charges on this invoice + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when invoice was created + CreditsTotal float64 `json:"credits_total" url:"credits_total,key"` // total credits on this invoice + ID string `json:"id" url:"id,key"` // unique identifier of this invoice + Number int `json:"number" url:"number,key"` // human readable invoice number + PeriodEnd string `json:"period_end" url:"period_end,key"` // the ending date that the invoice covers + PeriodStart string `json:"period_start" url:"period_start,key"` // the starting date that this invoice covers + State int `json:"state" url:"state,key"` // payment status for this invoice (pending, successful, failed) + Total float64 `json:"total" url:"total,key"` // combined total of charges and credits on this invoice + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when invoice was updated +} +type InvoiceInfoResult struct { + ChargesTotal float64 `json:"charges_total" url:"charges_total,key"` // total charges on this invoice + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when invoice was created + CreditsTotal float64 `json:"credits_total" url:"credits_total,key"` // total credits on this invoice + ID string `json:"id" url:"id,key"` // unique identifier of this invoice + Number int `json:"number" url:"number,key"` // human readable invoice number + PeriodEnd string `json:"period_end" url:"period_end,key"` // the ending date that the invoice covers + PeriodStart string `json:"period_start" url:"period_start,key"` // the starting date that this invoice covers + State int `json:"state" url:"state,key"` // payment status for this invoice (pending, successful, failed) + Total float64 `json:"total" url:"total,key"` // combined total of charges and credits on this invoice + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when invoice was updated +} + +// Info for existing invoice. 
+func (s *Service) InvoiceInfo(ctx context.Context, invoiceIdentity int) (*InvoiceInfoResult, error) { + var invoice InvoiceInfoResult + return &invoice, s.Get(ctx, &invoice, fmt.Sprintf("/account/invoices/%v", invoiceIdentity), nil, nil) +} + +type InvoiceListResult []struct { + ChargesTotal float64 `json:"charges_total" url:"charges_total,key"` // total charges on this invoice + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when invoice was created + CreditsTotal float64 `json:"credits_total" url:"credits_total,key"` // total credits on this invoice + ID string `json:"id" url:"id,key"` // unique identifier of this invoice + Number int `json:"number" url:"number,key"` // human readable invoice number + PeriodEnd string `json:"period_end" url:"period_end,key"` // the ending date that the invoice covers + PeriodStart string `json:"period_start" url:"period_start,key"` // the starting date that this invoice covers + State int `json:"state" url:"state,key"` // payment status for this invoice (pending, successful, failed) + Total float64 `json:"total" url:"total,key"` // combined total of charges and credits on this invoice + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when invoice was updated +} + +// List existing invoices. +func (s *Service) InvoiceList(ctx context.Context, lr *ListRange) (InvoiceListResult, error) { + var invoice InvoiceListResult + return invoice, s.Get(ctx, &invoice, fmt.Sprintf("/account/invoices"), nil, lr) +} + +// An invoice address represents the address that should be listed on an +// invoice. +type InvoiceAddress struct { + Address1 string `json:"address_1" url:"address_1,key"` // invoice street address line 1 + Address2 string `json:"address_2" url:"address_2,key"` // invoice street address line 2 + City string `json:"city" url:"city,key"` // invoice city + Country string `json:"country" url:"country,key"` // country + HerokuID string `json:"heroku_id" url:"heroku_id,key"` // heroku_id identifier reference + Other string `json:"other" url:"other,key"` // metadata / additional information to go on invoice + PostalCode string `json:"postal_code" url:"postal_code,key"` // invoice zip code + State string `json:"state" url:"state,key"` // invoice state + UseInvoiceAddress bool `json:"use_invoice_address" url:"use_invoice_address,key"` // flag to use the invoice address for an account or not +} + +// Retrieve existing invoice address. 
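A small editor's sketch of the invoice listing above, reusing the setup from the earlier sketches; passing a nil ListRange leaves paging to the server's defaults. The invoice-address endpoints follow below.

// printInvoices dumps number, covered period, and total for each invoice.
func printInvoices(ctx context.Context, svc *heroku.Service) error {
	invoices, err := svc.InvoiceList(ctx, nil)
	if err != nil {
		return err
	}
	for _, inv := range invoices {
		fmt.Printf("#%d %s..%s total=%.2f\n", inv.Number, inv.PeriodStart, inv.PeriodEnd, inv.Total)
	}
	return nil
}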
+func (s *Service) InvoiceAddressInfo(ctx context.Context) (*InvoiceAddress, error) { + var invoiceAddress InvoiceAddress + return &invoiceAddress, s.Get(ctx, &invoiceAddress, fmt.Sprintf("/account/invoice-address"), nil, nil) +} + +type InvoiceAddressUpdateOpts struct { + Address1 *string `json:"address_1,omitempty" url:"address_1,omitempty,key"` // invoice street address line 1 + Address2 *string `json:"address_2,omitempty" url:"address_2,omitempty,key"` // invoice street address line 2 + City *string `json:"city,omitempty" url:"city,omitempty,key"` // invoice city + Country *string `json:"country,omitempty" url:"country,omitempty,key"` // country + Other *string `json:"other,omitempty" url:"other,omitempty,key"` // metadata / additional information to go on invoice + PostalCode *string `json:"postal_code,omitempty" url:"postal_code,omitempty,key"` // invoice zip code + State *string `json:"state,omitempty" url:"state,omitempty,key"` // invoice state + UseInvoiceAddress *bool `json:"use_invoice_address,omitempty" url:"use_invoice_address,omitempty,key"` // flag to use the invoice address for an account or not +} + +// Update invoice address for an account. +func (s *Service) InvoiceAddressUpdate(ctx context.Context, o InvoiceAddressUpdateOpts) (*InvoiceAddress, error) { + var invoiceAddress InvoiceAddress + return &invoiceAddress, s.Put(ctx, &invoiceAddress, fmt.Sprintf("/account/invoice-address"), o) } // Keys represent public SSH keys associated with an account and are // used to authorize accounts as they are performing git operations. type Key struct { - Comment string `json:"comment"` // comment on the key - CreatedAt time.Time `json:"created_at"` // when key was created - Email string `json:"email"` // deprecated. Please refer to 'comment' instead - Fingerprint string `json:"fingerprint"` // a unique identifying string based on contents - ID string `json:"id"` // unique identifier of this key - PublicKey string `json:"public_key"` // full public_key as uploaded - UpdatedAt time.Time `json:"updated_at"` // when key was updated + Comment string `json:"comment" url:"comment,key"` // comment on the key + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when key was created + Email string `json:"email" url:"email,key"` // deprecated. Please refer to 'comment' instead + Fingerprint string `json:"fingerprint" url:"fingerprint,key"` // a unique identifying string based on contents + ID string `json:"id" url:"id,key"` // unique identifier of this key + PublicKey string `json:"public_key" url:"public_key,key"` // full public_key as uploaded + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when key was updated } -type KeyCreateOpts struct { - PublicKey string `json:"public_key"` // full public_key as uploaded -} - -// Create a new key. -func (s *Service) KeyCreate(o struct { - PublicKey string `json:"public_key"` // full public_key as uploaded -}) (*Key, error) { - var key Key - return &key, s.Post(&key, fmt.Sprintf("/account/keys"), o) -} - -// Delete an existing key -func (s *Service) KeyDelete(keyIdentity string) error { - return s.Delete(fmt.Sprintf("/account/keys/%v", keyIdentity)) +type KeyInfoResult struct { + Comment string `json:"comment" url:"comment,key"` // comment on the key + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when key was created + Email string `json:"email" url:"email,key"` // deprecated. 
Please refer to 'comment' instead + Fingerprint string `json:"fingerprint" url:"fingerprint,key"` // a unique identifying string based on contents + ID string `json:"id" url:"id,key"` // unique identifier of this key + PublicKey string `json:"public_key" url:"public_key,key"` // full public_key as uploaded + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when key was updated } // Info for existing key. -func (s *Service) KeyInfo(keyIdentity string) (*Key, error) { - var key Key - return &key, s.Get(&key, fmt.Sprintf("/account/keys/%v", keyIdentity), nil) +func (s *Service) KeyInfo(ctx context.Context, keyIdentity string) (*KeyInfoResult, error) { + var key KeyInfoResult + return &key, s.Get(ctx, &key, fmt.Sprintf("/account/keys/%v", keyIdentity), nil, nil) +} + +type KeyListResult []struct { + Comment string `json:"comment" url:"comment,key"` // comment on the key + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when key was created + Email string `json:"email" url:"email,key"` // deprecated. Please refer to 'comment' instead + Fingerprint string `json:"fingerprint" url:"fingerprint,key"` // a unique identifying string based on contents + ID string `json:"id" url:"id,key"` // unique identifier of this key + PublicKey string `json:"public_key" url:"public_key,key"` // full public_key as uploaded + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when key was updated } // List existing keys. -func (s *Service) KeyList(lr *ListRange) ([]*Key, error) { - var keyList []*Key - return keyList, s.Get(&keyList, fmt.Sprintf("/account/keys"), lr) +func (s *Service) KeyList(ctx context.Context, lr *ListRange) (KeyListResult, error) { + var key KeyListResult + return key, s.Get(ctx, &key, fmt.Sprintf("/account/keys"), nil, lr) } -// [Log -// drains](https://devcenter.heroku.com/articles/logging#syslog-drains) +// [Log drains](https://devcenter.heroku.com/articles/log-drains) // provide a way to forward your Heroku logs to an external syslog // server for long-term archiving. This external service must be // configured to receive syslog packets from Heroku, whereupon its URL -// can be added to an app using this API. Some addons will add a log +// can be added to an app using this API. Some add-ons will add a log // drain when they are provisioned to an app. These drains can only be // removed by removing the add-on. 
type LogDrain struct { Addon *struct { - ID string `json:"id"` // unique identifier of add-on - } `json:"addon"` // addon that created the drain - CreatedAt time.Time `json:"created_at"` // when log drain was created - ID string `json:"id"` // unique identifier of this log drain - Token string `json:"token"` // token associated with the log drain - UpdatedAt time.Time `json:"updated_at"` // when log drain was updated - URL string `json:"url"` // url associated with the log drain + ID string `json:"id" url:"id,key"` // unique identifier of add-on + Name string `json:"name" url:"name,key"` // globally unique name of the add-on + } `json:"addon" url:"addon,key"` // add-on that created the drain + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when log drain was created + ID string `json:"id" url:"id,key"` // unique identifier of this log drain + Token string `json:"token" url:"token,key"` // token associated with the log drain + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when log drain was updated + URL string `json:"url" url:"url,key"` // url associated with the log drain } type LogDrainCreateOpts struct { - URL string `json:"url"` // url associated with the log drain + URL string `json:"url" url:"url,key"` // url associated with the log drain +} +type LogDrainCreateResult struct { + Addon *struct { + ID string `json:"id" url:"id,key"` // unique identifier of add-on + Name string `json:"name" url:"name,key"` // globally unique name of the add-on + } `json:"addon" url:"addon,key"` // add-on that created the drain + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when log drain was created + ID string `json:"id" url:"id,key"` // unique identifier of this log drain + Token string `json:"token" url:"token,key"` // token associated with the log drain + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when log drain was updated + URL string `json:"url" url:"url,key"` // url associated with the log drain } // Create a new log drain. -func (s *Service) LogDrainCreate(appIdentity string, o struct { - URL string `json:"url"` // url associated with the log drain -}) (*LogDrain, error) { - var logDrain LogDrain - return &logDrain, s.Post(&logDrain, fmt.Sprintf("/apps/%v/log-drains", appIdentity), o) +func (s *Service) LogDrainCreate(ctx context.Context, appIdentity string, o LogDrainCreateOpts) (*LogDrainCreateResult, error) { + var logDrain LogDrainCreateResult + return &logDrain, s.Post(ctx, &logDrain, fmt.Sprintf("/apps/%v/log-drains", appIdentity), o) +} + +type LogDrainDeleteResult struct { + Addon *struct { + ID string `json:"id" url:"id,key"` // unique identifier of add-on + Name string `json:"name" url:"name,key"` // globally unique name of the add-on + } `json:"addon" url:"addon,key"` // add-on that created the drain + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when log drain was created + ID string `json:"id" url:"id,key"` // unique identifier of this log drain + Token string `json:"token" url:"token,key"` // token associated with the log drain + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when log drain was updated + URL string `json:"url" url:"url,key"` // url associated with the log drain } // Delete an existing log drain. Log drains added by add-ons can only be // removed by removing the add-on. 
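An editor's sketch of the drain-create endpoint above; the syslog URL is a placeholder (for example "syslog+tls://logs.example.com:6514"). The delete implementation, which per the comment above cannot remove add-on-provisioned drains, follows.

// addDrain forwards the app's logs to an external syslog endpoint and
// returns the drain token that tags forwarded frames.
func addDrain(ctx context.Context, svc *heroku.Service, app, syslogURL string) (string, error) {
	drain, err := svc.LogDrainCreate(ctx, app, heroku.LogDrainCreateOpts{URL: syslogURL})
	if err != nil {
		return "", err
	}
	return drain.Token, nil
}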
-func (s *Service) LogDrainDelete(appIdentity string, logDrainIdentity string) error { - return s.Delete(fmt.Sprintf("/apps/%v/log-drains/%v", appIdentity, logDrainIdentity)) +func (s *Service) LogDrainDelete(ctx context.Context, appIdentity string, logDrainQueryIdentity string) (*LogDrainDeleteResult, error) { + var logDrain LogDrainDeleteResult + return &logDrain, s.Delete(ctx, &logDrain, fmt.Sprintf("/apps/%v/log-drains/%v", appIdentity, logDrainQueryIdentity)) +} + +type LogDrainInfoResult struct { + Addon *struct { + ID string `json:"id" url:"id,key"` // unique identifier of add-on + Name string `json:"name" url:"name,key"` // globally unique name of the add-on + } `json:"addon" url:"addon,key"` // add-on that created the drain + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when log drain was created + ID string `json:"id" url:"id,key"` // unique identifier of this log drain + Token string `json:"token" url:"token,key"` // token associated with the log drain + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when log drain was updated + URL string `json:"url" url:"url,key"` // url associated with the log drain } // Info for existing log drain. -func (s *Service) LogDrainInfo(appIdentity string, logDrainIdentity string) (*LogDrain, error) { - var logDrain LogDrain - return &logDrain, s.Get(&logDrain, fmt.Sprintf("/apps/%v/log-drains/%v", appIdentity, logDrainIdentity), nil) +func (s *Service) LogDrainInfo(ctx context.Context, appIdentity string, logDrainQueryIdentity string) (*LogDrainInfoResult, error) { + var logDrain LogDrainInfoResult + return &logDrain, s.Get(ctx, &logDrain, fmt.Sprintf("/apps/%v/log-drains/%v", appIdentity, logDrainQueryIdentity), nil, nil) +} + +type LogDrainListResult []struct { + Addon *struct { + ID string `json:"id" url:"id,key"` // unique identifier of add-on + Name string `json:"name" url:"name,key"` // globally unique name of the add-on + } `json:"addon" url:"addon,key"` // add-on that created the drain + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when log drain was created + ID string `json:"id" url:"id,key"` // unique identifier of this log drain + Token string `json:"token" url:"token,key"` // token associated with the log drain + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when log drain was updated + URL string `json:"url" url:"url,key"` // url associated with the log drain } // List existing log drains. -func (s *Service) LogDrainList(appIdentity string, lr *ListRange) ([]*LogDrain, error) { - var logDrainList []*LogDrain - return logDrainList, s.Get(&logDrainList, fmt.Sprintf("/apps/%v/log-drains", appIdentity), lr) +func (s *Service) LogDrainList(ctx context.Context, appIdentity string, lr *ListRange) (LogDrainListResult, error) { + var logDrain LogDrainListResult + return logDrain, s.Get(ctx, &logDrain, fmt.Sprintf("/apps/%v/log-drains", appIdentity), nil, lr) } // A log session is a reference to the http based log stream for an app. 
type LogSession struct { - CreatedAt time.Time `json:"created_at"` // when log connection was created - ID string `json:"id"` // unique identifier of this log session - LogplexURL string `json:"logplex_url"` // URL for log streaming session - UpdatedAt time.Time `json:"updated_at"` // when log session was updated + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when log connection was created + ID string `json:"id" url:"id,key"` // unique identifier of this log session + LogplexURL string `json:"logplex_url" url:"logplex_url,key"` // URL for log streaming session + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when log session was updated } type LogSessionCreateOpts struct { - Dyno *string `json:"dyno,omitempty"` // dyno to limit results to - Lines *int `json:"lines,omitempty"` // number of log lines to stream at once - Source *string `json:"source,omitempty"` // log source to limit results to - Tail *bool `json:"tail,omitempty"` // whether to stream ongoing logs + Dyno *string `json:"dyno,omitempty" url:"dyno,omitempty,key"` // dyno to limit results to + Lines *int `json:"lines,omitempty" url:"lines,omitempty,key"` // number of log lines to stream at once + Source *string `json:"source,omitempty" url:"source,omitempty,key"` // log source to limit results to + Tail *bool `json:"tail,omitempty" url:"tail,omitempty,key"` // whether to stream ongoing logs +} +type LogSessionCreateResult struct { + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when log connection was created + ID string `json:"id" url:"id,key"` // unique identifier of this log session + LogplexURL string `json:"logplex_url" url:"logplex_url,key"` // URL for log streaming session + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when log session was updated } // Create a new log session. 
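A sketch of the create call that continues below, mainly to show how the optional pointer fields in LogSessionCreateOpts are populated. Same assumed import and svc as above; the app and dyno names are placeholders.

package example

import (
	"context"

	heroku "github.com/cyberdelia/heroku-go/v3" // assumed vendor import path
)

// tailLogs is a hypothetical helper returning a Logplex streaming URL.
func tailLogs(ctx context.Context, svc *heroku.Service) (string, error) {
	dyno, lines, tail := "web.1", 100, true
	sess, err := svc.LogSessionCreate(ctx, "example-app", heroku.LogSessionCreateOpts{
		Dyno:  &dyno, // optional filters: nil fields are omitted on the wire
		Lines: &lines,
		Tail:  &tail,
	})
	if err != nil {
		return "", err
	}
	return sess.LogplexURL, nil // GET this URL to stream the app's logs
}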
-func (s *Service) LogSessionCreate(appIdentity string, o struct { - Dyno *string `json:"dyno,omitempty"` // dyno to limit results to - Lines *int `json:"lines,omitempty"` // number of log lines to stream at once - Source *string `json:"source,omitempty"` // log source to limit results to - Tail *bool `json:"tail,omitempty"` // whether to stream ongoing logs -}) (*LogSession, error) { - var logSession LogSession - return &logSession, s.Post(&logSession, fmt.Sprintf("/apps/%v/log-sessions", appIdentity), o) +func (s *Service) LogSessionCreate(ctx context.Context, appIdentity string, o LogSessionCreateOpts) (*LogSessionCreateResult, error) { + var logSession LogSessionCreateResult + return &logSession, s.Post(ctx, &logSession, fmt.Sprintf("/apps/%v/log-sessions", appIdentity), o) } // OAuth authorizations represent clients that a Heroku user has @@ -1057,67 +3307,243 @@ func (s *Service) LogSessionCreate(appIdentity string, o struct { // documentation](https://devcenter.heroku.com/articles/oauth) type OAuthAuthorization struct { AccessToken *struct { - ExpiresIn *int `json:"expires_in"` // seconds until OAuth token expires; may be `null` for tokens with + ExpiresIn *int `json:"expires_in" url:"expires_in,key"` // seconds until OAuth token expires; may be `null` for tokens with // indefinite lifetime - ID string `json:"id"` // unique identifier of OAuth token - Token string `json:"token"` // contents of the token to be used for authorization - } `json:"access_token"` // access token for this authorization + ID string `json:"id" url:"id,key"` // unique identifier of OAuth token + Token string `json:"token" url:"token,key"` // contents of the token to be used for authorization + } `json:"access_token" url:"access_token,key"` // access token for this authorization Client *struct { - ID string `json:"id"` // unique identifier of this OAuth client - Name string `json:"name"` // OAuth client name - RedirectURI string `json:"redirect_uri"` // endpoint for redirection after authorization with OAuth client - } `json:"client"` // identifier of the client that obtained this authorization, if any - CreatedAt time.Time `json:"created_at"` // when OAuth authorization was created + ID string `json:"id" url:"id,key"` // unique identifier of this OAuth client + Name string `json:"name" url:"name,key"` // OAuth client name + RedirectURI string `json:"redirect_uri" url:"redirect_uri,key"` // endpoint for redirection after authorization with OAuth client + } `json:"client" url:"client,key"` // identifier of the client that obtained this authorization, if any + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when OAuth authorization was created Grant *struct { - Code string `json:"code"` // grant code received from OAuth web application authorization - ExpiresIn int `json:"expires_in"` // seconds until OAuth grant expires - ID string `json:"id"` // unique identifier of OAuth grant - } `json:"grant"` // this authorization's grant - ID string `json:"id"` // unique identifier of OAuth authorization + Code string `json:"code" url:"code,key"` // grant code received from OAuth web application authorization + ExpiresIn int `json:"expires_in" url:"expires_in,key"` // seconds until OAuth grant expires + ID string `json:"id" url:"id,key"` // unique identifier of OAuth grant + } `json:"grant" url:"grant,key"` // this authorization's grant + ID string `json:"id" url:"id,key"` // unique identifier of OAuth authorization RefreshToken *struct { - ExpiresIn *int `json:"expires_in"` // seconds until OAuth token 
expires; may be `null` for tokens with + ExpiresIn *int `json:"expires_in" url:"expires_in,key"` // seconds until OAuth token expires; may be `null` for tokens with // indefinite lifetime - ID string `json:"id"` // unique identifier of OAuth token - Token string `json:"token"` // contents of the token to be used for authorization - } `json:"refresh_token"` // refresh token for this authorization - Scope []string `json:"scope"` // The scope of access OAuth authorization allows - UpdatedAt time.Time `json:"updated_at"` // when OAuth authorization was updated + ID string `json:"id" url:"id,key"` // unique identifier of OAuth token + Token string `json:"token" url:"token,key"` // contents of the token to be used for authorization + } `json:"refresh_token" url:"refresh_token,key"` // refresh token for this authorization + Scope []string `json:"scope" url:"scope,key"` // The scope of access OAuth authorization allows + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when OAuth authorization was updated + User struct { + Email string `json:"email" url:"email,key"` // unique email address of account + FullName *string `json:"full_name" url:"full_name,key"` // full name of the account owner + ID string `json:"id" url:"id,key"` // unique identifier of an account + } `json:"user" url:"user,key"` // authenticated user associated with this authorization } type OAuthAuthorizationCreateOpts struct { - Client *string `json:"client,omitempty"` // unique identifier of this OAuth client - Description *string `json:"description,omitempty"` // human-friendly description of this OAuth authorization - ExpiresIn *int `json:"expires_in,omitempty"` // seconds until OAuth token expires; may be `null` for tokens with + Client *string `json:"client,omitempty" url:"client,omitempty,key"` // unique identifier of this OAuth client + Description *string `json:"description,omitempty" url:"description,omitempty,key"` // human-friendly description of this OAuth authorization + ExpiresIn *int `json:"expires_in,omitempty" url:"expires_in,omitempty,key"` // seconds until OAuth token expires; may be `null` for tokens with // indefinite lifetime - Scope []string `json:"scope"` // The scope of access OAuth authorization allows + Scope []string `json:"scope" url:"scope,key"` // The scope of access OAuth authorization allows +} +type OAuthAuthorizationCreateResult struct { + AccessToken *struct { + ExpiresIn *int `json:"expires_in" url:"expires_in,key"` // seconds until OAuth token expires; may be `null` for tokens with + // indefinite lifetime + ID string `json:"id" url:"id,key"` // unique identifier of OAuth token + Token string `json:"token" url:"token,key"` // contents of the token to be used for authorization + } `json:"access_token" url:"access_token,key"` // access token for this authorization + Client *struct { + ID string `json:"id" url:"id,key"` // unique identifier of this OAuth client + Name string `json:"name" url:"name,key"` // OAuth client name + RedirectURI string `json:"redirect_uri" url:"redirect_uri,key"` // endpoint for redirection after authorization with OAuth client + } `json:"client" url:"client,key"` // identifier of the client that obtained this authorization, if any + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when OAuth authorization was created + Grant *struct { + Code string `json:"code" url:"code,key"` // grant code received from OAuth web application authorization + ExpiresIn int `json:"expires_in" url:"expires_in,key"` // seconds until OAuth grant expires + ID 
string `json:"id" url:"id,key"` // unique identifier of OAuth grant + } `json:"grant" url:"grant,key"` // this authorization's grant + ID string `json:"id" url:"id,key"` // unique identifier of OAuth authorization + RefreshToken *struct { + ExpiresIn *int `json:"expires_in" url:"expires_in,key"` // seconds until OAuth token expires; may be `null` for tokens with + // indefinite lifetime + ID string `json:"id" url:"id,key"` // unique identifier of OAuth token + Token string `json:"token" url:"token,key"` // contents of the token to be used for authorization + } `json:"refresh_token" url:"refresh_token,key"` // refresh token for this authorization + Scope []string `json:"scope" url:"scope,key"` // The scope of access OAuth authorization allows + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when OAuth authorization was updated + User struct { + Email string `json:"email" url:"email,key"` // unique email address of account + FullName *string `json:"full_name" url:"full_name,key"` // full name of the account owner + ID string `json:"id" url:"id,key"` // unique identifier of an account + } `json:"user" url:"user,key"` // authenticated user associated with this authorization } // Create a new OAuth authorization. -func (s *Service) OAuthAuthorizationCreate(o struct { - Client *string `json:"client,omitempty"` // unique identifier of this OAuth client - Description *string `json:"description,omitempty"` // human-friendly description of this OAuth authorization - ExpiresIn *int `json:"expires_in,omitempty"` // seconds until OAuth token expires; may be `null` for tokens with - // indefinite lifetime - Scope []string `json:"scope"` // The scope of access OAuth authorization allows -}) (*OAuthAuthorization, error) { - var oauthAuthorization OAuthAuthorization - return &oauthAuthorization, s.Post(&oauthAuthorization, fmt.Sprintf("/oauth/authorizations"), o) +func (s *Service) OAuthAuthorizationCreate(ctx context.Context, o OAuthAuthorizationCreateOpts) (*OAuthAuthorizationCreateResult, error) { + var oauthAuthorization OAuthAuthorizationCreateResult + return &oauthAuthorization, s.Post(ctx, &oauthAuthorization, fmt.Sprintf("/oauth/authorizations"), o) +} + +type OAuthAuthorizationDeleteResult struct { + AccessToken *struct { + ExpiresIn *int `json:"expires_in" url:"expires_in,key"` // seconds until OAuth token expires; may be `null` for tokens with + // indefinite lifetime + ID string `json:"id" url:"id,key"` // unique identifier of OAuth token + Token string `json:"token" url:"token,key"` // contents of the token to be used for authorization + } `json:"access_token" url:"access_token,key"` // access token for this authorization + Client *struct { + ID string `json:"id" url:"id,key"` // unique identifier of this OAuth client + Name string `json:"name" url:"name,key"` // OAuth client name + RedirectURI string `json:"redirect_uri" url:"redirect_uri,key"` // endpoint for redirection after authorization with OAuth client + } `json:"client" url:"client,key"` // identifier of the client that obtained this authorization, if any + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when OAuth authorization was created + Grant *struct { + Code string `json:"code" url:"code,key"` // grant code received from OAuth web application authorization + ExpiresIn int `json:"expires_in" url:"expires_in,key"` // seconds until OAuth grant expires + ID string `json:"id" url:"id,key"` // unique identifier of OAuth grant + } `json:"grant" url:"grant,key"` // this authorization's grant + ID string 
`json:"id" url:"id,key"` // unique identifier of OAuth authorization + RefreshToken *struct { + ExpiresIn *int `json:"expires_in" url:"expires_in,key"` // seconds until OAuth token expires; may be `null` for tokens with + // indefinite lifetime + ID string `json:"id" url:"id,key"` // unique identifier of OAuth token + Token string `json:"token" url:"token,key"` // contents of the token to be used for authorization + } `json:"refresh_token" url:"refresh_token,key"` // refresh token for this authorization + Scope []string `json:"scope" url:"scope,key"` // The scope of access OAuth authorization allows + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when OAuth authorization was updated + User struct { + Email string `json:"email" url:"email,key"` // unique email address of account + FullName *string `json:"full_name" url:"full_name,key"` // full name of the account owner + ID string `json:"id" url:"id,key"` // unique identifier of an account + } `json:"user" url:"user,key"` // authenticated user associated with this authorization } // Delete OAuth authorization. -func (s *Service) OAuthAuthorizationDelete(oauthAuthorizationIdentity string) error { - return s.Delete(fmt.Sprintf("/oauth/authorizations/%v", oauthAuthorizationIdentity)) +func (s *Service) OAuthAuthorizationDelete(ctx context.Context, oauthAuthorizationIdentity string) (*OAuthAuthorizationDeleteResult, error) { + var oauthAuthorization OAuthAuthorizationDeleteResult + return &oauthAuthorization, s.Delete(ctx, &oauthAuthorization, fmt.Sprintf("/oauth/authorizations/%v", oauthAuthorizationIdentity)) +} + +type OAuthAuthorizationInfoResult struct { + AccessToken *struct { + ExpiresIn *int `json:"expires_in" url:"expires_in,key"` // seconds until OAuth token expires; may be `null` for tokens with + // indefinite lifetime + ID string `json:"id" url:"id,key"` // unique identifier of OAuth token + Token string `json:"token" url:"token,key"` // contents of the token to be used for authorization + } `json:"access_token" url:"access_token,key"` // access token for this authorization + Client *struct { + ID string `json:"id" url:"id,key"` // unique identifier of this OAuth client + Name string `json:"name" url:"name,key"` // OAuth client name + RedirectURI string `json:"redirect_uri" url:"redirect_uri,key"` // endpoint for redirection after authorization with OAuth client + } `json:"client" url:"client,key"` // identifier of the client that obtained this authorization, if any + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when OAuth authorization was created + Grant *struct { + Code string `json:"code" url:"code,key"` // grant code received from OAuth web application authorization + ExpiresIn int `json:"expires_in" url:"expires_in,key"` // seconds until OAuth grant expires + ID string `json:"id" url:"id,key"` // unique identifier of OAuth grant + } `json:"grant" url:"grant,key"` // this authorization's grant + ID string `json:"id" url:"id,key"` // unique identifier of OAuth authorization + RefreshToken *struct { + ExpiresIn *int `json:"expires_in" url:"expires_in,key"` // seconds until OAuth token expires; may be `null` for tokens with + // indefinite lifetime + ID string `json:"id" url:"id,key"` // unique identifier of OAuth token + Token string `json:"token" url:"token,key"` // contents of the token to be used for authorization + } `json:"refresh_token" url:"refresh_token,key"` // refresh token for this authorization + Scope []string `json:"scope" url:"scope,key"` // The scope of access OAuth 
authorization allows + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when OAuth authorization was updated + User struct { + Email string `json:"email" url:"email,key"` // unique email address of account + FullName *string `json:"full_name" url:"full_name,key"` // full name of the account owner + ID string `json:"id" url:"id,key"` // unique identifier of an account + } `json:"user" url:"user,key"` // authenticated user associated with this authorization } // Info for an OAuth authorization. -func (s *Service) OAuthAuthorizationInfo(oauthAuthorizationIdentity string) (*OAuthAuthorization, error) { - var oauthAuthorization OAuthAuthorization - return &oauthAuthorization, s.Get(&oauthAuthorization, fmt.Sprintf("/oauth/authorizations/%v", oauthAuthorizationIdentity), nil) +func (s *Service) OAuthAuthorizationInfo(ctx context.Context, oauthAuthorizationIdentity string) (*OAuthAuthorizationInfoResult, error) { + var oauthAuthorization OAuthAuthorizationInfoResult + return &oauthAuthorization, s.Get(ctx, &oauthAuthorization, fmt.Sprintf("/oauth/authorizations/%v", oauthAuthorizationIdentity), nil, nil) +} + +type OAuthAuthorizationListResult []struct { + AccessToken *struct { + ExpiresIn *int `json:"expires_in" url:"expires_in,key"` // seconds until OAuth token expires; may be `null` for tokens with + // indefinite lifetime + ID string `json:"id" url:"id,key"` // unique identifier of OAuth token + Token string `json:"token" url:"token,key"` // contents of the token to be used for authorization + } `json:"access_token" url:"access_token,key"` // access token for this authorization + Client *struct { + ID string `json:"id" url:"id,key"` // unique identifier of this OAuth client + Name string `json:"name" url:"name,key"` // OAuth client name + RedirectURI string `json:"redirect_uri" url:"redirect_uri,key"` // endpoint for redirection after authorization with OAuth client + } `json:"client" url:"client,key"` // identifier of the client that obtained this authorization, if any + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when OAuth authorization was created + Grant *struct { + Code string `json:"code" url:"code,key"` // grant code received from OAuth web application authorization + ExpiresIn int `json:"expires_in" url:"expires_in,key"` // seconds until OAuth grant expires + ID string `json:"id" url:"id,key"` // unique identifier of OAuth grant + } `json:"grant" url:"grant,key"` // this authorization's grant + ID string `json:"id" url:"id,key"` // unique identifier of OAuth authorization + RefreshToken *struct { + ExpiresIn *int `json:"expires_in" url:"expires_in,key"` // seconds until OAuth token expires; may be `null` for tokens with + // indefinite lifetime + ID string `json:"id" url:"id,key"` // unique identifier of OAuth token + Token string `json:"token" url:"token,key"` // contents of the token to be used for authorization + } `json:"refresh_token" url:"refresh_token,key"` // refresh token for this authorization + Scope []string `json:"scope" url:"scope,key"` // The scope of access OAuth authorization allows + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when OAuth authorization was updated + User struct { + Email string `json:"email" url:"email,key"` // unique email address of account + FullName *string `json:"full_name" url:"full_name,key"` // full name of the account owner + ID string `json:"id" url:"id,key"` // unique identifier of an account + } `json:"user" url:"user,key"` // authenticated user associated with this authorization } // 
List OAuth authorizations. -func (s *Service) OAuthAuthorizationList(lr *ListRange) ([]*OAuthAuthorization, error) { - var oauthAuthorizationList []*OAuthAuthorization - return oauthAuthorizationList, s.Get(&oauthAuthorizationList, fmt.Sprintf("/oauth/authorizations"), lr) +func (s *Service) OAuthAuthorizationList(ctx context.Context, lr *ListRange) (OAuthAuthorizationListResult, error) { + var oauthAuthorization OAuthAuthorizationListResult + return oauthAuthorization, s.Get(ctx, &oauthAuthorization, fmt.Sprintf("/oauth/authorizations"), nil, lr) +} + +type OAuthAuthorizationRegenerateResult struct { + AccessToken *struct { + ExpiresIn *int `json:"expires_in" url:"expires_in,key"` // seconds until OAuth token expires; may be `null` for tokens with + // indefinite lifetime + ID string `json:"id" url:"id,key"` // unique identifier of OAuth token + Token string `json:"token" url:"token,key"` // contents of the token to be used for authorization + } `json:"access_token" url:"access_token,key"` // access token for this authorization + Client *struct { + ID string `json:"id" url:"id,key"` // unique identifier of this OAuth client + Name string `json:"name" url:"name,key"` // OAuth client name + RedirectURI string `json:"redirect_uri" url:"redirect_uri,key"` // endpoint for redirection after authorization with OAuth client + } `json:"client" url:"client,key"` // identifier of the client that obtained this authorization, if any + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when OAuth authorization was created + Grant *struct { + Code string `json:"code" url:"code,key"` // grant code received from OAuth web application authorization + ExpiresIn int `json:"expires_in" url:"expires_in,key"` // seconds until OAuth grant expires + ID string `json:"id" url:"id,key"` // unique identifier of OAuth grant + } `json:"grant" url:"grant,key"` // this authorization's grant + ID string `json:"id" url:"id,key"` // unique identifier of OAuth authorization + RefreshToken *struct { + ExpiresIn *int `json:"expires_in" url:"expires_in,key"` // seconds until OAuth token expires; may be `null` for tokens with + // indefinite lifetime + ID string `json:"id" url:"id,key"` // unique identifier of OAuth token + Token string `json:"token" url:"token,key"` // contents of the token to be used for authorization + } `json:"refresh_token" url:"refresh_token,key"` // refresh token for this authorization + Scope []string `json:"scope" url:"scope,key"` // The scope of access OAuth authorization allows + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when OAuth authorization was updated + User struct { + Email string `json:"email" url:"email,key"` // unique email address of account + FullName *string `json:"full_name" url:"full_name,key"` // full name of the account owner + ID string `json:"id" url:"id,key"` // unique identifier of an account + } `json:"user" url:"user,key"` // authenticated user associated with this authorization +} + +// Regenerate OAuth tokens. This endpoint is only available to direct +// authorizations or privileged OAuth clients. 
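Pulling the authorization endpoints above together, a sketch that creates an authorization and reads back its access token; the regenerate call whose doc comment precedes this continues below. Same assumed import and svc as earlier; "global" as a scope value and the description text are illustrative.

package example

import (
	"context"
	"fmt"

	heroku "github.com/cyberdelia/heroku-go/v3" // assumed vendor import path
)

// createAuthorization is a hypothetical helper returning a bearer token.
func createAuthorization(ctx context.Context, svc *heroku.Service) (string, error) {
	desc := "example automation token" // placeholder description
	auth, err := svc.OAuthAuthorizationCreate(ctx, heroku.OAuthAuthorizationCreateOpts{
		Description: &desc,
		Scope:       []string{"global"}, // assumed scope name; consult the Heroku OAuth docs
	})
	if err != nil {
		return "", err
	}
	if auth.AccessToken == nil {
		return "", fmt.Errorf("authorization %s carries no access token", auth.ID)
	}
	return auth.AccessToken.Token, nil
}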
+func (s *Service) OAuthAuthorizationRegenerate(ctx context.Context, oauthAuthorizationIdentity string) (*OAuthAuthorizationRegenerateResult, error) { + var oauthAuthorization OAuthAuthorizationRegenerateResult + return &oauthAuthorization, s.Post(ctx, &oauthAuthorization, fmt.Sprintf("/oauth/authorizations/%v/actions/regenerate-tokens", oauthAuthorizationIdentity), nil) } // OAuth clients are applications that Heroku users can authorize to @@ -1125,57 +3551,106 @@ func (s *Service) OAuthAuthorizationList(lr *ListRange) ([]*OAuthAuthorization, // information please refer to the [Heroku OAuth // documentation](https://devcenter.heroku.com/articles/oauth). type OAuthClient struct { - CreatedAt time.Time `json:"created_at"` // when OAuth client was created - ID string `json:"id"` // unique identifier of this OAuth client - IgnoresDelinquent *bool `json:"ignores_delinquent"` // whether the client is still operable given a delinquent account - Name string `json:"name"` // OAuth client name - RedirectURI string `json:"redirect_uri"` // endpoint for redirection after authorization with OAuth client - Secret string `json:"secret"` // secret used to obtain OAuth authorizations under this client - UpdatedAt time.Time `json:"updated_at"` // when OAuth client was updated + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when OAuth client was created + ID string `json:"id" url:"id,key"` // unique identifier of this OAuth client + IgnoresDelinquent *bool `json:"ignores_delinquent" url:"ignores_delinquent,key"` // whether the client is still operable given a delinquent account + Name string `json:"name" url:"name,key"` // OAuth client name + RedirectURI string `json:"redirect_uri" url:"redirect_uri,key"` // endpoint for redirection after authorization with OAuth client + Secret string `json:"secret" url:"secret,key"` // secret used to obtain OAuth authorizations under this client + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when OAuth client was updated } type OAuthClientCreateOpts struct { - Name string `json:"name"` // OAuth client name - RedirectURI string `json:"redirect_uri"` // endpoint for redirection after authorization with OAuth client + Name string `json:"name" url:"name,key"` // OAuth client name + RedirectURI string `json:"redirect_uri" url:"redirect_uri,key"` // endpoint for redirection after authorization with OAuth client +} +type OAuthClientCreateResult struct { + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when OAuth client was created + ID string `json:"id" url:"id,key"` // unique identifier of this OAuth client + IgnoresDelinquent *bool `json:"ignores_delinquent" url:"ignores_delinquent,key"` // whether the client is still operable given a delinquent account + Name string `json:"name" url:"name,key"` // OAuth client name + RedirectURI string `json:"redirect_uri" url:"redirect_uri,key"` // endpoint for redirection after authorization with OAuth client + Secret string `json:"secret" url:"secret,key"` // secret used to obtain OAuth authorizations under this client + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when OAuth client was updated } // Create a new OAuth client. 
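A sketch of the client create call the comment above introduces (the function itself continues below). Same assumed import and svc; the client name and redirect URI are placeholders.

package example

import (
	"context"
	"fmt"

	heroku "github.com/cyberdelia/heroku-go/v3" // assumed vendor import path
)

// createClient is a hypothetical helper registering an OAuth client.
func createClient(ctx context.Context, svc *heroku.Service) error {
	client, err := svc.OAuthClientCreate(ctx, heroku.OAuthClientCreateOpts{
		Name:        "example-client",                     // placeholder
		RedirectURI: "https://example.com/oauth/callback", // placeholder
	})
	if err != nil {
		return err
	}
	fmt.Println("client", client.ID, "secret:", client.Secret)
	return nil
}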
-func (s *Service) OAuthClientCreate(o struct { - Name string `json:"name"` // OAuth client name - RedirectURI string `json:"redirect_uri"` // endpoint for redirection after authorization with OAuth client -}) (*OAuthClient, error) { - var oauthClient OAuthClient - return &oauthClient, s.Post(&oauthClient, fmt.Sprintf("/oauth/clients"), o) +func (s *Service) OAuthClientCreate(ctx context.Context, o OAuthClientCreateOpts) (*OAuthClientCreateResult, error) { + var oauthClient OAuthClientCreateResult + return &oauthClient, s.Post(ctx, &oauthClient, fmt.Sprintf("/oauth/clients"), o) +} + +type OAuthClientDeleteResult struct { + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when OAuth client was created + ID string `json:"id" url:"id,key"` // unique identifier of this OAuth client + IgnoresDelinquent *bool `json:"ignores_delinquent" url:"ignores_delinquent,key"` // whether the client is still operable given a delinquent account + Name string `json:"name" url:"name,key"` // OAuth client name + RedirectURI string `json:"redirect_uri" url:"redirect_uri,key"` // endpoint for redirection after authorization with OAuth client + Secret string `json:"secret" url:"secret,key"` // secret used to obtain OAuth authorizations under this client + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when OAuth client was updated } // Delete OAuth client. -func (s *Service) OAuthClientDelete(oauthClientIdentity string) error { - return s.Delete(fmt.Sprintf("/oauth/clients/%v", oauthClientIdentity)) +func (s *Service) OAuthClientDelete(ctx context.Context, oauthClientIdentity string) (*OAuthClientDeleteResult, error) { + var oauthClient OAuthClientDeleteResult + return &oauthClient, s.Delete(ctx, &oauthClient, fmt.Sprintf("/oauth/clients/%v", oauthClientIdentity)) } // Info for an OAuth client -func (s *Service) OAuthClientInfo(oauthClientIdentity string) (*OAuthClient, error) { +func (s *Service) OAuthClientInfo(ctx context.Context, oauthClientIdentity string) (*OAuthClient, error) { var oauthClient OAuthClient - return &oauthClient, s.Get(&oauthClient, fmt.Sprintf("/oauth/clients/%v", oauthClientIdentity), nil) + return &oauthClient, s.Get(ctx, &oauthClient, fmt.Sprintf("/oauth/clients/%v", oauthClientIdentity), nil, nil) +} + +type OAuthClientListResult []struct { + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when OAuth client was created + ID string `json:"id" url:"id,key"` // unique identifier of this OAuth client + IgnoresDelinquent *bool `json:"ignores_delinquent" url:"ignores_delinquent,key"` // whether the client is still operable given a delinquent account + Name string `json:"name" url:"name,key"` // OAuth client name + RedirectURI string `json:"redirect_uri" url:"redirect_uri,key"` // endpoint for redirection after authorization with OAuth client + Secret string `json:"secret" url:"secret,key"` // secret used to obtain OAuth authorizations under this client + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when OAuth client was updated } // List OAuth clients -func (s *Service) OAuthClientList(lr *ListRange) ([]*OAuthClient, error) { - var oauthClientList []*OAuthClient - return oauthClientList, s.Get(&oauthClientList, fmt.Sprintf("/oauth/clients"), lr) +func (s *Service) OAuthClientList(ctx context.Context, lr *ListRange) (OAuthClientListResult, error) { + var oauthClient OAuthClientListResult + return oauthClient, s.Get(ctx, &oauthClient, fmt.Sprintf("/oauth/clients"), nil, lr) } type OAuthClientUpdateOpts struct { - Name *string 
`json:"name,omitempty"` // OAuth client name - RedirectURI *string `json:"redirect_uri,omitempty"` // endpoint for redirection after authorization with OAuth client + Name *string `json:"name,omitempty" url:"name,omitempty,key"` // OAuth client name + RedirectURI *string `json:"redirect_uri,omitempty" url:"redirect_uri,omitempty,key"` // endpoint for redirection after authorization with OAuth client +} +type OAuthClientUpdateResult struct { + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when OAuth client was created + ID string `json:"id" url:"id,key"` // unique identifier of this OAuth client + IgnoresDelinquent *bool `json:"ignores_delinquent" url:"ignores_delinquent,key"` // whether the client is still operable given a delinquent account + Name string `json:"name" url:"name,key"` // OAuth client name + RedirectURI string `json:"redirect_uri" url:"redirect_uri,key"` // endpoint for redirection after authorization with OAuth client + Secret string `json:"secret" url:"secret,key"` // secret used to obtain OAuth authorizations under this client + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when OAuth client was updated } // Update OAuth client -func (s *Service) OAuthClientUpdate(oauthClientIdentity string, o struct { - Name *string `json:"name,omitempty"` // OAuth client name - RedirectURI *string `json:"redirect_uri,omitempty"` // endpoint for redirection after authorization with OAuth client -}) (*OAuthClient, error) { - var oauthClient OAuthClient - return &oauthClient, s.Patch(&oauthClient, fmt.Sprintf("/oauth/clients/%v", oauthClientIdentity), o) +func (s *Service) OAuthClientUpdate(ctx context.Context, oauthClientIdentity string, o OAuthClientUpdateOpts) (*OAuthClientUpdateResult, error) { + var oauthClient OAuthClientUpdateResult + return &oauthClient, s.Patch(ctx, &oauthClient, fmt.Sprintf("/oauth/clients/%v", oauthClientIdentity), o) +} + +type OAuthClientRotateCredentialsResult struct { + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when OAuth client was created + ID string `json:"id" url:"id,key"` // unique identifier of this OAuth client + IgnoresDelinquent *bool `json:"ignores_delinquent" url:"ignores_delinquent,key"` // whether the client is still operable given a delinquent account + Name string `json:"name" url:"name,key"` // OAuth client name + RedirectURI string `json:"redirect_uri" url:"redirect_uri,key"` // endpoint for redirection after authorization with OAuth client + Secret string `json:"secret" url:"secret,key"` // secret used to obtain OAuth authorizations under this client + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when OAuth client was updated +} + +// Rotate credentials for an OAuth client +func (s *Service) OAuthClientRotateCredentials(ctx context.Context, oauthClientIdentity string) (*OAuthClientRotateCredentialsResult, error) { + var oauthClient OAuthClientRotateCredentialsResult + return &oauthClient, s.Post(ctx, &oauthClient, fmt.Sprintf("/oauth/clients/%v/actions/rotate-credentials", oauthClientIdentity), nil) } // OAuth grants are used to obtain authorizations on behalf of a user. 
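Rounding off the client endpoints before the grant and token section the comment above opens: a sketch combining the update and rotate-credentials calls. Same assumed import and svc; the client ID is supplied by the caller and the new name is a placeholder.

package example

import (
	"context"

	heroku "github.com/cyberdelia/heroku-go/v3" // assumed vendor import path
)

// renameAndRotate is a hypothetical helper: rename a client, then rotate its
// secret and hand the new one back.
func renameAndRotate(ctx context.Context, svc *heroku.Service, clientID string) (string, error) {
	name := "example-client-v2" // placeholder
	if _, err := svc.OAuthClientUpdate(ctx, clientID, heroku.OAuthClientUpdateOpts{
		Name: &name, // pointer fields: nil means "leave unchanged"
	}); err != nil {
		return "", err
	}
	rotated, err := svc.OAuthClientRotateCredentials(ctx, clientID)
	if err != nil {
		return "", err
	}
	return rotated.Secret, nil
}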
@@ -1189,546 +3664,2603 @@ type OAuthGrant struct{} // documentation](https://devcenter.heroku.com/articles/oauth) type OAuthToken struct { AccessToken struct { - ExpiresIn *int `json:"expires_in"` // seconds until OAuth token expires; may be `null` for tokens with + ExpiresIn *int `json:"expires_in" url:"expires_in,key"` // seconds until OAuth token expires; may be `null` for tokens with // indefinite lifetime - ID string `json:"id"` // unique identifier of OAuth token - Token string `json:"token"` // contents of the token to be used for authorization - } `json:"access_token"` // current access token + ID string `json:"id" url:"id,key"` // unique identifier of OAuth token + Token string `json:"token" url:"token,key"` // contents of the token to be used for authorization + } `json:"access_token" url:"access_token,key"` // current access token Authorization struct { - ID string `json:"id"` // unique identifier of OAuth authorization - } `json:"authorization"` // authorization for this set of tokens + ID string `json:"id" url:"id,key"` // unique identifier of OAuth authorization + } `json:"authorization" url:"authorization,key"` // authorization for this set of tokens Client *struct { - Secret string `json:"secret"` // secret used to obtain OAuth authorizations under this client - } `json:"client"` // OAuth client secret used to obtain token - CreatedAt time.Time `json:"created_at"` // when OAuth token was created + Secret string `json:"secret" url:"secret,key"` // secret used to obtain OAuth authorizations under this client + } `json:"client" url:"client,key"` // OAuth client secret used to obtain token + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when OAuth token was created Grant struct { - Code string `json:"code"` // grant code received from OAuth web application authorization - Type string `json:"type"` // type of grant requested, one of `authorization_code` or + Code string `json:"code" url:"code,key"` // grant code received from OAuth web application authorization + Type string `json:"type" url:"type,key"` // type of grant requested, one of `authorization_code` or // `refresh_token` - } `json:"grant"` // grant used on the underlying authorization - ID string `json:"id"` // unique identifier of OAuth token + } `json:"grant" url:"grant,key"` // grant used on the underlying authorization + ID string `json:"id" url:"id,key"` // unique identifier of OAuth token RefreshToken struct { - ExpiresIn *int `json:"expires_in"` // seconds until OAuth token expires; may be `null` for tokens with + ExpiresIn *int `json:"expires_in" url:"expires_in,key"` // seconds until OAuth token expires; may be `null` for tokens with // indefinite lifetime - ID string `json:"id"` // unique identifier of OAuth token - Token string `json:"token"` // contents of the token to be used for authorization - } `json:"refresh_token"` // refresh token for this authorization + ID string `json:"id" url:"id,key"` // unique identifier of OAuth token + Token string `json:"token" url:"token,key"` // contents of the token to be used for authorization + } `json:"refresh_token" url:"refresh_token,key"` // refresh token for this authorization Session struct { - ID string `json:"id"` // unique identifier of OAuth token - } `json:"session"` // OAuth session using this token - UpdatedAt time.Time `json:"updated_at"` // when OAuth token was updated + ID string `json:"id" url:"id,key"` // unique identifier of OAuth token + } `json:"session" url:"session,key"` // OAuth session using this token + UpdatedAt time.Time 
`json:"updated_at" url:"updated_at,key"` // when OAuth token was updated User struct { - ID string `json:"id"` // unique identifier of an account - } `json:"user"` // Reference to the user associated with this token + ID string `json:"id" url:"id,key"` // unique identifier of an account + } `json:"user" url:"user,key"` // Reference to the user associated with this token } type OAuthTokenCreateOpts struct { Client struct { - Secret *string `json:"secret,omitempty"` // secret used to obtain OAuth authorizations under this client - } `json:"client"` + Secret *string `json:"secret,omitempty" url:"secret,omitempty,key"` // secret used to obtain OAuth authorizations under this client + } `json:"client" url:"client,key"` Grant struct { - Code *string `json:"code,omitempty"` // grant code received from OAuth web application authorization - Type *string `json:"type,omitempty"` // type of grant requested, one of `authorization_code` or + Code *string `json:"code,omitempty" url:"code,omitempty,key"` // grant code received from OAuth web application authorization + Type *string `json:"type,omitempty" url:"type,omitempty,key"` // type of grant requested, one of `authorization_code` or // `refresh_token` - } `json:"grant"` + } `json:"grant" url:"grant,key"` RefreshToken struct { - Token *string `json:"token,omitempty"` // contents of the token to be used for authorization - } `json:"refresh_token"` + Token *string `json:"token,omitempty" url:"token,omitempty,key"` // contents of the token to be used for authorization + } `json:"refresh_token" url:"refresh_token,key"` +} +type OAuthTokenCreateResult struct { + AccessToken struct { + ExpiresIn *int `json:"expires_in" url:"expires_in,key"` // seconds until OAuth token expires; may be `null` for tokens with + // indefinite lifetime + ID string `json:"id" url:"id,key"` // unique identifier of OAuth token + Token string `json:"token" url:"token,key"` // contents of the token to be used for authorization + } `json:"access_token" url:"access_token,key"` // current access token + Authorization struct { + ID string `json:"id" url:"id,key"` // unique identifier of OAuth authorization + } `json:"authorization" url:"authorization,key"` // authorization for this set of tokens + Client *struct { + Secret string `json:"secret" url:"secret,key"` // secret used to obtain OAuth authorizations under this client + } `json:"client" url:"client,key"` // OAuth client secret used to obtain token + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when OAuth token was created + Grant struct { + Code string `json:"code" url:"code,key"` // grant code received from OAuth web application authorization + Type string `json:"type" url:"type,key"` // type of grant requested, one of `authorization_code` or + // `refresh_token` + } `json:"grant" url:"grant,key"` // grant used on the underlying authorization + ID string `json:"id" url:"id,key"` // unique identifier of OAuth token + RefreshToken struct { + ExpiresIn *int `json:"expires_in" url:"expires_in,key"` // seconds until OAuth token expires; may be `null` for tokens with + // indefinite lifetime + ID string `json:"id" url:"id,key"` // unique identifier of OAuth token + Token string `json:"token" url:"token,key"` // contents of the token to be used for authorization + } `json:"refresh_token" url:"refresh_token,key"` // refresh token for this authorization + Session struct { + ID string `json:"id" url:"id,key"` // unique identifier of OAuth token + } `json:"session" url:"session,key"` // OAuth session using this token + 
UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when OAuth token was updated + User struct { + ID string `json:"id" url:"id,key"` // unique identifier of an account + } `json:"user" url:"user,key"` // Reference to the user associated with this token } // Create a new OAuth token. -func (s *Service) OAuthTokenCreate(o struct { - Client struct { - Secret *string `json:"secret,omitempty"` // secret used to obtain OAuth authorizations under this client - } `json:"client"` - Grant struct { - Code *string `json:"code,omitempty"` // grant code received from OAuth web application authorization - Type *string `json:"type,omitempty"` // type of grant requested, one of `authorization_code` or +func (s *Service) OAuthTokenCreate(ctx context.Context, o OAuthTokenCreateOpts) (*OAuthTokenCreateResult, error) { + var oauthToken OAuthTokenCreateResult + return &oauthToken, s.Post(ctx, &oauthToken, fmt.Sprintf("/oauth/tokens"), o) +} + +type OAuthTokenDeleteResult struct { + AccessToken struct { + ExpiresIn *int `json:"expires_in" url:"expires_in,key"` // seconds until OAuth token expires; may be `null` for tokens with + // indefinite lifetime + ID string `json:"id" url:"id,key"` // unique identifier of OAuth token + Token string `json:"token" url:"token,key"` // contents of the token to be used for authorization + } `json:"access_token" url:"access_token,key"` // current access token + Authorization struct { + ID string `json:"id" url:"id,key"` // unique identifier of OAuth authorization + } `json:"authorization" url:"authorization,key"` // authorization for this set of tokens + Client *struct { + Secret string `json:"secret" url:"secret,key"` // secret used to obtain OAuth authorizations under this client + } `json:"client" url:"client,key"` // OAuth client secret used to obtain token + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when OAuth token was created + Grant struct { + Code string `json:"code" url:"code,key"` // grant code received from OAuth web application authorization + Type string `json:"type" url:"type,key"` // type of grant requested, one of `authorization_code` or // `refresh_token` - } `json:"grant"` + } `json:"grant" url:"grant,key"` // grant used on the underlying authorization + ID string `json:"id" url:"id,key"` // unique identifier of OAuth token RefreshToken struct { - Token *string `json:"token,omitempty"` // contents of the token to be used for authorization - } `json:"refresh_token"` -}) (*OAuthToken, error) { - var oauthToken OAuthToken - return &oauthToken, s.Post(&oauthToken, fmt.Sprintf("/oauth/tokens"), o) + ExpiresIn *int `json:"expires_in" url:"expires_in,key"` // seconds until OAuth token expires; may be `null` for tokens with + // indefinite lifetime + ID string `json:"id" url:"id,key"` // unique identifier of OAuth token + Token string `json:"token" url:"token,key"` // contents of the token to be used for authorization + } `json:"refresh_token" url:"refresh_token,key"` // refresh token for this authorization + Session struct { + ID string `json:"id" url:"id,key"` // unique identifier of OAuth token + } `json:"session" url:"session,key"` // OAuth session using this token + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when OAuth token was updated + User struct { + ID string `json:"id" url:"id,key"` // unique identifier of an account + } `json:"user" url:"user,key"` // Reference to the user associated with this token +} + +// Revoke OAuth access token. 
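A sketch of the token create call above using the refresh_token grant type named in the schema comments; the revoke call the comment above introduces continues below. Same assumed import and svc; the client secret and refresh token come from the caller. The nested anonymous structs in OAuthTokenCreateOpts are easiest to fill field by field.

package example

import (
	"context"

	heroku "github.com/cyberdelia/heroku-go/v3" // assumed vendor import path
)

// refreshAccessToken is a hypothetical helper exchanging a refresh token for
// a fresh access token.
func refreshAccessToken(ctx context.Context, svc *heroku.Service, clientSecret, refreshToken string) (string, error) {
	grantType := "refresh_token" // per the schema: `authorization_code` or `refresh_token`
	var o heroku.OAuthTokenCreateOpts
	o.Client.Secret = &clientSecret
	o.Grant.Type = &grantType
	o.RefreshToken.Token = &refreshToken
	tok, err := svc.OAuthTokenCreate(ctx, o)
	if err != nil {
		return "", err
	}
	return tok.AccessToken.Token, nil
}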
+func (s *Service) OAuthTokenDelete(ctx context.Context, oauthTokenIdentity string) (*OAuthTokenDeleteResult, error) { + var oauthToken OAuthTokenDeleteResult + return &oauthToken, s.Delete(ctx, &oauthToken, fmt.Sprintf("/oauth/tokens/%v", oauthTokenIdentity)) } // Organizations allow you to manage access to a shared group of // applications across your development team. type Organization struct { - CreditCardCollections bool `json:"credit_card_collections"` // whether charges incurred by the org are paid by credit card. - Default bool `json:"default"` // whether to use this organization when none is specified - Name string `json:"name"` // unique name of organization - ProvisionedLicenses bool `json:"provisioned_licenses"` // whether the org is provisioned licenses by salesforce. - Role string `json:"role"` // role in the organization + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when the organization was created + CreditCardCollections bool `json:"credit_card_collections" url:"credit_card_collections,key"` // whether charges incurred by the org are paid by credit card. + Default bool `json:"default" url:"default,key"` // whether to use this organization when none is specified + ID string `json:"id" url:"id,key"` // unique identifier of organization + MembershipLimit *float64 `json:"membership_limit" url:"membership_limit,key"` // upper limit of members allowed in an organization. + Name string `json:"name" url:"name,key"` // unique name of organization + ProvisionedLicenses bool `json:"provisioned_licenses" url:"provisioned_licenses,key"` // whether the org is provisioned licenses by salesforce. + Role *string `json:"role" url:"role,key"` // role in the organization + Type string `json:"type" url:"type,key"` // type of organization. + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when the organization was updated +} +type OrganizationListResult []struct { + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when the organization was created + CreditCardCollections bool `json:"credit_card_collections" url:"credit_card_collections,key"` // whether charges incurred by the org are paid by credit card. + Default bool `json:"default" url:"default,key"` // whether to use this organization when none is specified + ID string `json:"id" url:"id,key"` // unique identifier of organization + MembershipLimit *float64 `json:"membership_limit" url:"membership_limit,key"` // upper limit of members allowed in an organization. + Name string `json:"name" url:"name,key"` // unique name of organization + ProvisionedLicenses bool `json:"provisioned_licenses" url:"provisioned_licenses,key"` // whether the org is provisioned licenses by salesforce. + Role *string `json:"role" url:"role,key"` // role in the organization + Type string `json:"type" url:"type,key"` // type of organization. + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when the organization was updated } // List organizations in which you are a member. -func (s *Service) OrganizationList(lr *ListRange) ([]*Organization, error) { - var organizationList []*Organization - return organizationList, s.Get(&organizationList, fmt.Sprintf("/organizations"), lr) +func (s *Service) OrganizationList(ctx context.Context, lr *ListRange) (OrganizationListResult, error) { + var organization OrganizationListResult + return organization, s.Get(ctx, &organization, fmt.Sprintf("/organizations"), nil, lr) +} + +// Info for an organization. 
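With the organization model now in scope, a sketch of the list call above (the info call whose comment precedes this continues below). Same assumed import and svc; passing a nil ListRange to fall back on server-side paging defaults is an assumption.

package example

import (
	"context"
	"fmt"

	heroku "github.com/cyberdelia/heroku-go/v3" // assumed vendor import path
)

// defaultOrg is a hypothetical helper returning the caller's default org.
func defaultOrg(ctx context.Context, svc *heroku.Service) (string, error) {
	orgs, err := svc.OrganizationList(ctx, nil) // nil ListRange: assumed default paging
	if err != nil {
		return "", err
	}
	for _, org := range orgs {
		if org.Default {
			return org.Name, nil
		}
	}
	return "", fmt.Errorf("no default organization set")
}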
+func (s *Service) OrganizationInfo(ctx context.Context, organizationIdentity string) (*Organization, error) { + var organization Organization + return &organization, s.Get(ctx, &organization, fmt.Sprintf("/organizations/%v", organizationIdentity), nil, nil) } type OrganizationUpdateOpts struct { - Default *bool `json:"default,omitempty"` // whether to use this organization when none is specified + Default *bool `json:"default,omitempty" url:"default,omitempty,key"` // whether to use this organization when none is specified + Name *string `json:"name,omitempty" url:"name,omitempty,key"` // unique name of organization +} +type OrganizationUpdateResult struct { + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when the organization was created + CreditCardCollections bool `json:"credit_card_collections" url:"credit_card_collections,key"` // whether charges incurred by the org are paid by credit card. + Default bool `json:"default" url:"default,key"` // whether to use this organization when none is specified + ID string `json:"id" url:"id,key"` // unique identifier of organization + MembershipLimit *float64 `json:"membership_limit" url:"membership_limit,key"` // upper limit of members allowed in an organization. + Name string `json:"name" url:"name,key"` // unique name of organization + ProvisionedLicenses bool `json:"provisioned_licenses" url:"provisioned_licenses,key"` // whether the org is provisioned licenses by salesforce. + Role *string `json:"role" url:"role,key"` // role in the organization + Type string `json:"type" url:"type,key"` // type of organization. + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when the organization was updated } -// Set or unset the organization as your default organization. -func (s *Service) OrganizationUpdate(organizationIdentity string, o struct { - Default *bool `json:"default,omitempty"` // whether to use this organization when none is specified -}) (*Organization, error) { - var organization Organization - return &organization, s.Patch(&organization, fmt.Sprintf("/organizations/%v", organizationIdentity), o) +// Update organization properties. 
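A sketch of the update call the comment above introduces (the function continues below), showing the pointer-field pattern for partial updates. Same assumed import and svc; the organization name is supplied by the caller.

package example

import (
	"context"

	heroku "github.com/cyberdelia/heroku-go/v3" // assumed vendor import path
)

// makeDefault is a hypothetical helper marking an organization as default.
func makeDefault(ctx context.Context, svc *heroku.Service, orgName string) error {
	def := true
	_, err := svc.OrganizationUpdate(ctx, orgName, heroku.OrganizationUpdateOpts{
		Default: &def, // omitted (nil) fields are left unchanged
	})
	return err
}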
+func (s *Service) OrganizationUpdate(ctx context.Context, organizationIdentity string, o OrganizationUpdateOpts) (*OrganizationUpdateResult, error) { + var organization OrganizationUpdateResult + return &organization, s.Patch(ctx, &organization, fmt.Sprintf("/organizations/%v", organizationIdentity), o) +} + +type OrganizationCreateOpts struct { + Address1 *string `json:"address_1,omitempty" url:"address_1,omitempty,key"` // street address line 1 + Address2 *string `json:"address_2,omitempty" url:"address_2,omitempty,key"` // street address line 2 + CardNumber *string `json:"card_number,omitempty" url:"card_number,omitempty,key"` // encrypted card number of payment method + City *string `json:"city,omitempty" url:"city,omitempty,key"` // city + Country *string `json:"country,omitempty" url:"country,omitempty,key"` // country + Cvv *string `json:"cvv,omitempty" url:"cvv,omitempty,key"` // card verification value + ExpirationMonth *string `json:"expiration_month,omitempty" url:"expiration_month,omitempty,key"` // expiration month + ExpirationYear *string `json:"expiration_year,omitempty" url:"expiration_year,omitempty,key"` // expiration year + FirstName *string `json:"first_name,omitempty" url:"first_name,omitempty,key"` // the first name for payment method + LastName *string `json:"last_name,omitempty" url:"last_name,omitempty,key"` // the last name for payment method + Name string `json:"name" url:"name,key"` // unique name of organization + Other *string `json:"other,omitempty" url:"other,omitempty,key"` // metadata + PostalCode *string `json:"postal_code,omitempty" url:"postal_code,omitempty,key"` // postal code + State *string `json:"state,omitempty" url:"state,omitempty,key"` // state +} +type OrganizationCreateResult struct { + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when the organization was created + CreditCardCollections bool `json:"credit_card_collections" url:"credit_card_collections,key"` // whether charges incurred by the org are paid by credit card. + Default bool `json:"default" url:"default,key"` // whether to use this organization when none is specified + ID string `json:"id" url:"id,key"` // unique identifier of organization + MembershipLimit *float64 `json:"membership_limit" url:"membership_limit,key"` // upper limit of members allowed in an organization. + Name string `json:"name" url:"name,key"` // unique name of organization + ProvisionedLicenses bool `json:"provisioned_licenses" url:"provisioned_licenses,key"` // whether the org is provisioned licenses by salesforce. + Role *string `json:"role" url:"role,key"` // role in the organization + Type string `json:"type" url:"type,key"` // type of organization. + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when the organization was updated +} + +// Create a new organization. +func (s *Service) OrganizationCreate(ctx context.Context, o OrganizationCreateOpts) (*OrganizationCreateResult, error) { + var organization OrganizationCreateResult + return &organization, s.Post(ctx, &organization, fmt.Sprintf("/organizations"), o) +} + +type OrganizationDeleteResult struct { + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when the organization was created + CreditCardCollections bool `json:"credit_card_collections" url:"credit_card_collections,key"` // whether charges incurred by the org are paid by credit card. 
+ Default bool `json:"default" url:"default,key"` // whether to use this organization when none is specified + ID string `json:"id" url:"id,key"` // unique identifier of organization + MembershipLimit *float64 `json:"membership_limit" url:"membership_limit,key"` // upper limit of members allowed in an organization. + Name string `json:"name" url:"name,key"` // unique name of organization + ProvisionedLicenses bool `json:"provisioned_licenses" url:"provisioned_licenses,key"` // whether the org is provisioned licenses by salesforce. + Role *string `json:"role" url:"role,key"` // role in the organization + Type string `json:"type" url:"type,key"` // type of organization. + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when the organization was updated +} + +// Delete an existing organization. +func (s *Service) OrganizationDelete(ctx context.Context, organizationIdentity string) (*OrganizationDeleteResult, error) { + var organization OrganizationDeleteResult + return &organization, s.Delete(ctx, &organization, fmt.Sprintf("/organizations/%v", organizationIdentity)) +} + +// A list of add-ons the Organization uses across all apps +type OrganizationAddOn struct{} +type OrganizationAddOnListForOrganizationResult []struct { + Actions []struct{} `json:"actions" url:"actions,key"` // provider actions for this specific add-on + AddonService struct { + ID string `json:"id" url:"id,key"` // unique identifier of this add-on-service + Name string `json:"name" url:"name,key"` // unique name of this add-on-service + } `json:"addon_service" url:"addon_service,key"` // identity of add-on service + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + Name string `json:"name" url:"name,key"` // unique name of app + } `json:"app" url:"app,key"` // billing application associated with this add-on + ConfigVars []string `json:"config_vars" url:"config_vars,key"` // config vars exposed to the owning app by this add-on + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when add-on was created + ID string `json:"id" url:"id,key"` // unique identifier of add-on + Name string `json:"name" url:"name,key"` // globally unique name of the add-on + Plan struct { + ID string `json:"id" url:"id,key"` // unique identifier of this plan + Name string `json:"name" url:"name,key"` // unique name of this plan + } `json:"plan" url:"plan,key"` // identity of add-on plan + ProviderID string `json:"provider_id" url:"provider_id,key"` // id of this add-on with its provider + State string `json:"state" url:"state,key"` // state in the add-on's lifecycle + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when add-on was updated + WebURL *string `json:"web_url" url:"web_url,key"` // URL for logging into web interface of add-on (e.g. a dashboard) +} + +// List add-ons used across all Organization apps +func (s *Service) OrganizationAddOnListForOrganization(ctx context.Context, organizationIdentity string, lr *ListRange) (OrganizationAddOnListForOrganizationResult, error) { + var organizationAddOn OrganizationAddOnListForOrganizationResult + return organizationAddOn, s.Get(ctx, &organizationAddOn, fmt.Sprintf("/organizations/%v/addons", organizationIdentity), nil, lr) } // An organization app encapsulates the organization specific // functionality of Heroku apps. 
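Before the organization-app types the comment above introduces, a sketch of the org-wide add-on listing added in this hunk. Same assumed import and svc; nil ListRange as before, and the field names used are the ones declared in the result type above.

package example

import (
	"context"
	"fmt"

	heroku "github.com/cyberdelia/heroku-go/v3" // assumed vendor import path
)

// auditOrgAddons is a hypothetical helper printing every add-on across an
// organization's apps.
func auditOrgAddons(ctx context.Context, svc *heroku.Service, orgName string) error {
	addons, err := svc.OrganizationAddOnListForOrganization(ctx, orgName, nil)
	if err != nil {
		return err
	}
	for _, a := range addons {
		fmt.Printf("%s (%s) on app %s: %s\n", a.Name, a.Plan.Name, a.App.Name, a.State)
	}
	return nil
}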
type OrganizationApp struct { - ArchivedAt *time.Time `json:"archived_at"` // when app was archived - BuildpackProvidedDescription *string `json:"buildpack_provided_description"` // description from buildpack of app - CreatedAt time.Time `json:"created_at"` // when app was created - GitURL string `json:"git_url"` // git repo URL of app - ID string `json:"id"` // unique identifier of app - Joined bool `json:"joined"` // is the current member a collaborator on this app. - Locked bool `json:"locked"` // are other organization members forbidden from joining this app. - Maintenance bool `json:"maintenance"` // maintenance status of app - Name string `json:"name"` // unique name of app + ArchivedAt *time.Time `json:"archived_at" url:"archived_at,key"` // when app was archived + BuildpackProvidedDescription *string `json:"buildpack_provided_description" url:"buildpack_provided_description,key"` // description from buildpack of app + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when app was created + GitURL string `json:"git_url" url:"git_url,key"` // git repo URL of app + ID string `json:"id" url:"id,key"` // unique identifier of app + Joined bool `json:"joined" url:"joined,key"` // is the current member a collaborator on this app. + Locked bool `json:"locked" url:"locked,key"` // are other organization members forbidden from joining this app. + Maintenance bool `json:"maintenance" url:"maintenance,key"` // maintenance status of app + Name string `json:"name" url:"name,key"` // unique name of app Organization *struct { - Name string `json:"name"` // unique name of organization - } `json:"organization"` // organization that owns this app + Name string `json:"name" url:"name,key"` // unique name of organization + } `json:"organization" url:"organization,key"` // organization that owns this app Owner *struct { - Email string `json:"email"` // unique email address of account - ID string `json:"id"` // unique identifier of an account - } `json:"owner"` // identity of app owner + Email string `json:"email" url:"email,key"` // unique email address of account + ID string `json:"id" url:"id,key"` // unique identifier of an account + } `json:"owner" url:"owner,key"` // identity of app owner Region struct { - ID string `json:"id"` // unique identifier of region - Name string `json:"name"` // unique name of region - } `json:"region"` // identity of app region - ReleasedAt *time.Time `json:"released_at"` // when app was released - RepoSize *int `json:"repo_size"` // git repo size in bytes of app - SlugSize *int `json:"slug_size"` // slug size in bytes of app - Stack struct { - ID string `json:"id"` // unique identifier of stack - Name string `json:"name"` // unique name of stack - } `json:"stack"` // identity of app stack - UpdatedAt time.Time `json:"updated_at"` // when app was updated - WebURL string `json:"web_url"` // web URL of app + ID string `json:"id" url:"id,key"` // unique identifier of region + Name string `json:"name" url:"name,key"` // unique name of region + } `json:"region" url:"region,key"` // identity of app region + ReleasedAt *time.Time `json:"released_at" url:"released_at,key"` // when app was released + RepoSize *int `json:"repo_size" url:"repo_size,key"` // git repo size in bytes of app + SlugSize *int `json:"slug_size" url:"slug_size,key"` // slug size in bytes of app + Space *struct { + ID string `json:"id" url:"id,key"` // unique identifier of space + Name string `json:"name" url:"name,key"` // unique name of space + } `json:"space" url:"space,key"` // identity of 
space + Stack struct { + ID string `json:"id" url:"id,key"` // unique identifier of stack + Name string `json:"name" url:"name,key"` // unique name of stack + } `json:"stack" url:"stack,key"` // identity of app stack + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when app was updated + WebURL string `json:"web_url" url:"web_url,key"` // web URL of app } type OrganizationAppCreateOpts struct { - Locked *bool `json:"locked,omitempty"` // are other organization members forbidden from joining this app. - Name *string `json:"name,omitempty"` // unique name of app - Organization *string `json:"organization,omitempty"` // unique name of organization - Personal *bool `json:"personal,omitempty"` // force creation of the app in the user account even if a default org + Locked *bool `json:"locked,omitempty" url:"locked,omitempty,key"` // are other organization members forbidden from joining this app. + Name *string `json:"name,omitempty" url:"name,omitempty,key"` // unique name of app + Organization *string `json:"organization,omitempty" url:"organization,omitempty,key"` // unique name of organization + Personal *bool `json:"personal,omitempty" url:"personal,omitempty,key"` // force creation of the app in the user account even if a default org // is set. - Region *string `json:"region,omitempty"` // unique name of region - Stack *string `json:"stack,omitempty"` // unique name of stack + Region *string `json:"region,omitempty" url:"region,omitempty,key"` // unique name of region + Space *string `json:"space,omitempty" url:"space,omitempty,key"` // unique name of space + Stack *string `json:"stack,omitempty" url:"stack,omitempty,key"` // unique name of stack } // Create a new app in the specified organization, in the default // organization if unspecified, or in personal account, if default // organization is not set. -func (s *Service) OrganizationAppCreate(o struct { - Locked *bool `json:"locked,omitempty"` // are other organization members forbidden from joining this app. - Name *string `json:"name,omitempty"` // unique name of app - Organization *string `json:"organization,omitempty"` // unique name of organization - Personal *bool `json:"personal,omitempty"` // force creation of the app in the user account even if a default org - // is set. - Region *string `json:"region,omitempty"` // unique name of region - Stack *string `json:"stack,omitempty"` // unique name of stack -}) (*OrganizationApp, error) { +func (s *Service) OrganizationAppCreate(ctx context.Context, o OrganizationAppCreateOpts) (*OrganizationApp, error) { var organizationApp OrganizationApp - return &organizationApp, s.Post(&organizationApp, fmt.Sprintf("/organizations/apps"), o) + return &organizationApp, s.Post(ctx, &organizationApp, fmt.Sprintf("/organizations/apps"), o) +} + +type OrganizationAppListResult []struct { + ArchivedAt *time.Time `json:"archived_at" url:"archived_at,key"` // when app was archived + BuildpackProvidedDescription *string `json:"buildpack_provided_description" url:"buildpack_provided_description,key"` // description from buildpack of app + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when app was created + GitURL string `json:"git_url" url:"git_url,key"` // git repo URL of app + ID string `json:"id" url:"id,key"` // unique identifier of app + Joined bool `json:"joined" url:"joined,key"` // is the current member a collaborator on this app. + Locked bool `json:"locked" url:"locked,key"` // are other organization members forbidden from joining this app. 
+ Maintenance bool `json:"maintenance" url:"maintenance,key"` // maintenance status of app + Name string `json:"name" url:"name,key"` // unique name of app + Organization *struct { + Name string `json:"name" url:"name,key"` // unique name of organization + } `json:"organization" url:"organization,key"` // organization that owns this app + Owner *struct { + Email string `json:"email" url:"email,key"` // unique email address of account + ID string `json:"id" url:"id,key"` // unique identifier of an account + } `json:"owner" url:"owner,key"` // identity of app owner + Region struct { + ID string `json:"id" url:"id,key"` // unique identifier of region + Name string `json:"name" url:"name,key"` // unique name of region + } `json:"region" url:"region,key"` // identity of app region + ReleasedAt *time.Time `json:"released_at" url:"released_at,key"` // when app was released + RepoSize *int `json:"repo_size" url:"repo_size,key"` // git repo size in bytes of app + SlugSize *int `json:"slug_size" url:"slug_size,key"` // slug size in bytes of app + Space *struct { + ID string `json:"id" url:"id,key"` // unique identifier of space + Name string `json:"name" url:"name,key"` // unique name of space + } `json:"space" url:"space,key"` // identity of space + Stack struct { + ID string `json:"id" url:"id,key"` // unique identifier of stack + Name string `json:"name" url:"name,key"` // unique name of stack + } `json:"stack" url:"stack,key"` // identity of app stack + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when app was updated + WebURL string `json:"web_url" url:"web_url,key"` // web URL of app } // List apps in the default organization, or in personal account, if // default organization is not set. -func (s *Service) OrganizationAppList(lr *ListRange) ([]*OrganizationApp, error) { - var organizationAppList []*OrganizationApp - return organizationAppList, s.Get(&organizationAppList, fmt.Sprintf("/organizations/apps"), lr) +func (s *Service) OrganizationAppList(ctx context.Context, lr *ListRange) (OrganizationAppListResult, error) { + var organizationApp OrganizationAppListResult + return organizationApp, s.Get(ctx, &organizationApp, fmt.Sprintf("/organizations/apps"), nil, lr) +} + +type OrganizationAppListForOrganizationResult []struct { + ArchivedAt *time.Time `json:"archived_at" url:"archived_at,key"` // when app was archived + BuildpackProvidedDescription *string `json:"buildpack_provided_description" url:"buildpack_provided_description,key"` // description from buildpack of app + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when app was created + GitURL string `json:"git_url" url:"git_url,key"` // git repo URL of app + ID string `json:"id" url:"id,key"` // unique identifier of app + Joined bool `json:"joined" url:"joined,key"` // is the current member a collaborator on this app. + Locked bool `json:"locked" url:"locked,key"` // are other organization members forbidden from joining this app. 
+ Maintenance bool `json:"maintenance" url:"maintenance,key"` // maintenance status of app + Name string `json:"name" url:"name,key"` // unique name of app + Organization *struct { + Name string `json:"name" url:"name,key"` // unique name of organization + } `json:"organization" url:"organization,key"` // organization that owns this app + Owner *struct { + Email string `json:"email" url:"email,key"` // unique email address of account + ID string `json:"id" url:"id,key"` // unique identifier of an account + } `json:"owner" url:"owner,key"` // identity of app owner + Region struct { + ID string `json:"id" url:"id,key"` // unique identifier of region + Name string `json:"name" url:"name,key"` // unique name of region + } `json:"region" url:"region,key"` // identity of app region + ReleasedAt *time.Time `json:"released_at" url:"released_at,key"` // when app was released + RepoSize *int `json:"repo_size" url:"repo_size,key"` // git repo size in bytes of app + SlugSize *int `json:"slug_size" url:"slug_size,key"` // slug size in bytes of app + Space *struct { + ID string `json:"id" url:"id,key"` // unique identifier of space + Name string `json:"name" url:"name,key"` // unique name of space + } `json:"space" url:"space,key"` // identity of space + Stack struct { + ID string `json:"id" url:"id,key"` // unique identifier of stack + Name string `json:"name" url:"name,key"` // unique name of stack + } `json:"stack" url:"stack,key"` // identity of app stack + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when app was updated + WebURL string `json:"web_url" url:"web_url,key"` // web URL of app } // List organization apps. -func (s *Service) OrganizationAppListForOrganization(organizationIdentity string, lr *ListRange) ([]*OrganizationApp, error) { - var organizationAppList []*OrganizationApp - return organizationAppList, s.Get(&organizationAppList, fmt.Sprintf("/organizations/%v/apps", organizationIdentity), lr) +func (s *Service) OrganizationAppListForOrganization(ctx context.Context, organizationIdentity string, lr *ListRange) (OrganizationAppListForOrganizationResult, error) { + var organizationApp OrganizationAppListForOrganizationResult + return organizationApp, s.Get(ctx, &organizationApp, fmt.Sprintf("/organizations/%v/apps", organizationIdentity), nil, lr) } // Info for an organization app. -func (s *Service) OrganizationAppInfo(organizationAppIdentity string) (*OrganizationApp, error) { +func (s *Service) OrganizationAppInfo(ctx context.Context, organizationAppIdentity string) (*OrganizationApp, error) { var organizationApp OrganizationApp - return &organizationApp, s.Get(&organizationApp, fmt.Sprintf("/organizations/apps/%v", organizationAppIdentity), nil) + return &organizationApp, s.Get(ctx, &organizationApp, fmt.Sprintf("/organizations/apps/%v", organizationAppIdentity), nil, nil) } type OrganizationAppUpdateLockedOpts struct { - Locked bool `json:"locked"` // are other organization members forbidden from joining this app. + Locked bool `json:"locked" url:"locked,key"` // are other organization members forbidden from joining this app. 
+} +type OrganizationAppUpdateLockedResult struct { + ArchivedAt *time.Time `json:"archived_at" url:"archived_at,key"` // when app was archived + BuildpackProvidedDescription *string `json:"buildpack_provided_description" url:"buildpack_provided_description,key"` // description from buildpack of app + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when app was created + GitURL string `json:"git_url" url:"git_url,key"` // git repo URL of app + ID string `json:"id" url:"id,key"` // unique identifier of app + Joined bool `json:"joined" url:"joined,key"` // is the current member a collaborator on this app. + Locked bool `json:"locked" url:"locked,key"` // are other organization members forbidden from joining this app. + Maintenance bool `json:"maintenance" url:"maintenance,key"` // maintenance status of app + Name string `json:"name" url:"name,key"` // unique name of app + Organization *struct { + Name string `json:"name" url:"name,key"` // unique name of organization + } `json:"organization" url:"organization,key"` // organization that owns this app + Owner *struct { + Email string `json:"email" url:"email,key"` // unique email address of account + ID string `json:"id" url:"id,key"` // unique identifier of an account + } `json:"owner" url:"owner,key"` // identity of app owner + Region struct { + ID string `json:"id" url:"id,key"` // unique identifier of region + Name string `json:"name" url:"name,key"` // unique name of region + } `json:"region" url:"region,key"` // identity of app region + ReleasedAt *time.Time `json:"released_at" url:"released_at,key"` // when app was released + RepoSize *int `json:"repo_size" url:"repo_size,key"` // git repo size in bytes of app + SlugSize *int `json:"slug_size" url:"slug_size,key"` // slug size in bytes of app + Space *struct { + ID string `json:"id" url:"id,key"` // unique identifier of space + Name string `json:"name" url:"name,key"` // unique name of space + } `json:"space" url:"space,key"` // identity of space + Stack struct { + ID string `json:"id" url:"id,key"` // unique identifier of stack + Name string `json:"name" url:"name,key"` // unique name of stack + } `json:"stack" url:"stack,key"` // identity of app stack + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when app was updated + WebURL string `json:"web_url" url:"web_url,key"` // web URL of app } // Lock or unlock an organization app. -func (s *Service) OrganizationAppUpdateLocked(organizationAppIdentity string, o struct { - Locked bool `json:"locked"` // are other organization members forbidden from joining this app. -}) (*OrganizationApp, error) { - var organizationApp OrganizationApp - return &organizationApp, s.Patch(&organizationApp, fmt.Sprintf("/organizations/apps/%v", organizationAppIdentity), o) +func (s *Service) OrganizationAppUpdateLocked(ctx context.Context, organizationAppIdentity string, o OrganizationAppUpdateLockedOpts) (*OrganizationAppUpdateLockedResult, error) { + var organizationApp OrganizationAppUpdateLockedResult + return &organizationApp, s.Patch(ctx, &organizationApp, fmt.Sprintf("/organizations/apps/%v", organizationAppIdentity), o) } type OrganizationAppTransferToAccountOpts struct { - Owner string `json:"owner"` // unique email address of account + Owner string `json:"owner" url:"owner,key"` // unique email address of account } // Transfer an existing organization app to another Heroku account. 
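//
// Hypothetical usage sketch (editorial, not part of the diff), reusing the
// h and ctx assumptions from the first example:
//
//	transferred, err := h.OrganizationAppTransferToAccount(ctx, "example-app",
//		heroku.OrganizationAppTransferToAccountOpts{Owner: "new-owner@example.com"})
//	if err != nil {
//		log.Fatal(err)
//	}
//	log.Printf("transferred %s", transferred.Name)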
-func (s *Service) OrganizationAppTransferToAccount(organizationAppIdentity string, o struct { - Owner string `json:"owner"` // unique email address of account -}) (*OrganizationApp, error) { +func (s *Service) OrganizationAppTransferToAccount(ctx context.Context, organizationAppIdentity string, o OrganizationAppTransferToAccountOpts) (*OrganizationApp, error) { var organizationApp OrganizationApp - return &organizationApp, s.Patch(&organizationApp, fmt.Sprintf("/organizations/apps/%v", organizationAppIdentity), o) + return &organizationApp, s.Patch(ctx, &organizationApp, fmt.Sprintf("/organizations/apps/%v", organizationAppIdentity), o) } type OrganizationAppTransferToOrganizationOpts struct { - Owner string `json:"owner"` // unique name of organization + Owner string `json:"owner" url:"owner,key"` // unique name of organization +} +type OrganizationAppTransferToOrganizationResult struct { + ArchivedAt *time.Time `json:"archived_at" url:"archived_at,key"` // when app was archived + BuildpackProvidedDescription *string `json:"buildpack_provided_description" url:"buildpack_provided_description,key"` // description from buildpack of app + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when app was created + GitURL string `json:"git_url" url:"git_url,key"` // git repo URL of app + ID string `json:"id" url:"id,key"` // unique identifier of app + Joined bool `json:"joined" url:"joined,key"` // is the current member a collaborator on this app. + Locked bool `json:"locked" url:"locked,key"` // are other organization members forbidden from joining this app. + Maintenance bool `json:"maintenance" url:"maintenance,key"` // maintenance status of app + Name string `json:"name" url:"name,key"` // unique name of app + Organization *struct { + Name string `json:"name" url:"name,key"` // unique name of organization + } `json:"organization" url:"organization,key"` // organization that owns this app + Owner *struct { + Email string `json:"email" url:"email,key"` // unique email address of account + ID string `json:"id" url:"id,key"` // unique identifier of an account + } `json:"owner" url:"owner,key"` // identity of app owner + Region struct { + ID string `json:"id" url:"id,key"` // unique identifier of region + Name string `json:"name" url:"name,key"` // unique name of region + } `json:"region" url:"region,key"` // identity of app region + ReleasedAt *time.Time `json:"released_at" url:"released_at,key"` // when app was released + RepoSize *int `json:"repo_size" url:"repo_size,key"` // git repo size in bytes of app + SlugSize *int `json:"slug_size" url:"slug_size,key"` // slug size in bytes of app + Space *struct { + ID string `json:"id" url:"id,key"` // unique identifier of space + Name string `json:"name" url:"name,key"` // unique name of space + } `json:"space" url:"space,key"` // identity of space + Stack struct { + ID string `json:"id" url:"id,key"` // unique identifier of stack + Name string `json:"name" url:"name,key"` // unique name of stack + } `json:"stack" url:"stack,key"` // identity of app stack + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when app was updated + WebURL string `json:"web_url" url:"web_url,key"` // web URL of app } // Transfer an existing organization app to another organization. 
-func (s *Service) OrganizationAppTransferToOrganization(organizationAppIdentity string, o struct { - Owner string `json:"owner"` // unique name of organization -}) (*OrganizationApp, error) { - var organizationApp OrganizationApp - return &organizationApp, s.Patch(&organizationApp, fmt.Sprintf("/organizations/apps/%v", organizationAppIdentity), o) +func (s *Service) OrganizationAppTransferToOrganization(ctx context.Context, organizationAppIdentity string, o OrganizationAppTransferToOrganizationOpts) (*OrganizationAppTransferToOrganizationResult, error) { + var organizationApp OrganizationAppTransferToOrganizationResult + return &organizationApp, s.Patch(ctx, &organizationApp, fmt.Sprintf("/organizations/apps/%v", organizationAppIdentity), o) } // An organization collaborator represents an account that has been // given access to an organization app on Heroku. type OrganizationAppCollaborator struct { - CreatedAt time.Time `json:"created_at"` // when collaborator was created - ID string `json:"id"` // unique identifier of collaborator - Role string `json:"role"` // role in the organization - UpdatedAt time.Time `json:"updated_at"` // when collaborator was updated + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + Name string `json:"name" url:"name,key"` // unique name of app + } `json:"app" url:"app,key"` // app collaborator belongs to + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when collaborator was created + ID string `json:"id" url:"id,key"` // unique identifier of collaborator + Role *string `json:"role" url:"role,key"` // role in the organization + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when collaborator was updated User struct { - Email string `json:"email"` // unique email address of account - ID string `json:"id"` // unique identifier of an account - } `json:"user"` // identity of collaborated account + Email string `json:"email" url:"email,key"` // unique email address of account + Federated bool `json:"federated" url:"federated,key"` // whether the user is federated and belongs to an Identity Provider + ID string `json:"id" url:"id,key"` // unique identifier of an account + } `json:"user" url:"user,key"` // identity of collaborated account } type OrganizationAppCollaboratorCreateOpts struct { - Silent *bool `json:"silent,omitempty"` // whether to suppress email invitation when creating collaborator - User string `json:"user"` // unique email address of account + Silent *bool `json:"silent,omitempty" url:"silent,omitempty,key"` // whether to suppress email invitation when creating collaborator + User string `json:"user" url:"user,key"` // unique email address of account +} +type OrganizationAppCollaboratorCreateResult struct { + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + Name string `json:"name" url:"name,key"` // unique name of app + } `json:"app" url:"app,key"` // app collaborator belongs to + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when collaborator was created + ID string `json:"id" url:"id,key"` // unique identifier of collaborator + Role *string `json:"role" url:"role,key"` // role in the organization + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when collaborator was updated + User struct { + Email string `json:"email" url:"email,key"` // unique email address of account + Federated bool `json:"federated" url:"federated,key"` // whether the user is federated and belongs to an Identity Provider + ID string `json:"id" 
url:"id,key"` // unique identifier of an account + } `json:"user" url:"user,key"` // identity of collaborated account } // Create a new collaborator on an organization app. Use this endpoint // instead of the `/apps/{app_id_or_name}/collaborator` endpoint when -// you want the collaborator to be granted [privileges] -// (https://devcenter.heroku.com/articles/org-users-access#roles) -// according to their role in the organization. -func (s *Service) OrganizationAppCollaboratorCreate(appIdentity string, o struct { - Silent *bool `json:"silent,omitempty"` // whether to suppress email invitation when creating collaborator - User string `json:"user"` // unique email address of account -}) (*OrganizationAppCollaborator, error) { - var organizationAppCollaborator OrganizationAppCollaborator - return &organizationAppCollaborator, s.Post(&organizationAppCollaborator, fmt.Sprintf("/organizations/apps/%v/collaborators", appIdentity), o) +// you want the collaborator to be granted [permissions] +// (https://devcenter.heroku.com/articles/org-users-access#roles-and-app- +// permissions) according to their role in the organization. +func (s *Service) OrganizationAppCollaboratorCreate(ctx context.Context, appIdentity string, o OrganizationAppCollaboratorCreateOpts) (*OrganizationAppCollaboratorCreateResult, error) { + var organizationAppCollaborator OrganizationAppCollaboratorCreateResult + return &organizationAppCollaborator, s.Post(ctx, &organizationAppCollaborator, fmt.Sprintf("/organizations/apps/%v/collaborators", appIdentity), o) +} + +type OrganizationAppCollaboratorDeleteResult struct { + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + Name string `json:"name" url:"name,key"` // unique name of app + } `json:"app" url:"app,key"` // app collaborator belongs to + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when collaborator was created + ID string `json:"id" url:"id,key"` // unique identifier of collaborator + Role *string `json:"role" url:"role,key"` // role in the organization + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when collaborator was updated + User struct { + Email string `json:"email" url:"email,key"` // unique email address of account + Federated bool `json:"federated" url:"federated,key"` // whether the user is federated and belongs to an Identity Provider + ID string `json:"id" url:"id,key"` // unique identifier of an account + } `json:"user" url:"user,key"` // identity of collaborated account } // Delete an existing collaborator from an organization app. 
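//
// Editorial sketch (not part of the diff): with the new signature the
// delete call returns the removed collaborator record as well as an
// error. h and ctx are the assumptions from the first example.
//
//	removed, err := h.OrganizationAppCollaboratorDelete(ctx, "example-app", "collab@example.com")
//	if err != nil {
//		log.Fatal(err)
//	}
//	log.Printf("removed %s from %s", removed.User.Email, removed.App.Name)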
-func (s *Service) OrganizationAppCollaboratorDelete(organizationAppIdentity string, organizationAppCollaboratorIdentity string) error { - return s.Delete(fmt.Sprintf("/organizations/apps/%v/collaborators/%v", organizationAppIdentity, organizationAppCollaboratorIdentity)) +func (s *Service) OrganizationAppCollaboratorDelete(ctx context.Context, organizationAppIdentity string, organizationAppCollaboratorIdentity string) (*OrganizationAppCollaboratorDeleteResult, error) { + var organizationAppCollaborator OrganizationAppCollaboratorDeleteResult + return &organizationAppCollaborator, s.Delete(ctx, &organizationAppCollaborator, fmt.Sprintf("/organizations/apps/%v/collaborators/%v", organizationAppIdentity, organizationAppCollaboratorIdentity)) +} + +type OrganizationAppCollaboratorInfoResult struct { + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + Name string `json:"name" url:"name,key"` // unique name of app + } `json:"app" url:"app,key"` // app collaborator belongs to + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when collaborator was created + ID string `json:"id" url:"id,key"` // unique identifier of collaborator + Role *string `json:"role" url:"role,key"` // role in the organization + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when collaborator was updated + User struct { + Email string `json:"email" url:"email,key"` // unique email address of account + Federated bool `json:"federated" url:"federated,key"` // whether the user is federated and belongs to an Identity Provider + ID string `json:"id" url:"id,key"` // unique identifier of an account + } `json:"user" url:"user,key"` // identity of collaborated account } // Info for a collaborator on an organization app. -func (s *Service) OrganizationAppCollaboratorInfo(organizationAppIdentity string, organizationAppCollaboratorIdentity string) (*OrganizationAppCollaborator, error) { - var organizationAppCollaborator OrganizationAppCollaborator - return &organizationAppCollaborator, s.Get(&organizationAppCollaborator, fmt.Sprintf("/organizations/apps/%v/collaborators/%v", organizationAppIdentity, organizationAppCollaboratorIdentity), nil) +func (s *Service) OrganizationAppCollaboratorInfo(ctx context.Context, organizationAppIdentity string, organizationAppCollaboratorIdentity string) (*OrganizationAppCollaboratorInfoResult, error) { + var organizationAppCollaborator OrganizationAppCollaboratorInfoResult + return &organizationAppCollaborator, s.Get(ctx, &organizationAppCollaborator, fmt.Sprintf("/organizations/apps/%v/collaborators/%v", organizationAppIdentity, organizationAppCollaboratorIdentity), nil, nil) +} + +type OrganizationAppCollaboratorUpdateResult struct { + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + Name string `json:"name" url:"name,key"` // unique name of app + } `json:"app" url:"app,key"` // app collaborator belongs to + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when collaborator was created + ID string `json:"id" url:"id,key"` // unique identifier of collaborator + Role *string `json:"role" url:"role,key"` // role in the organization + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when collaborator was updated + User struct { + Email string `json:"email" url:"email,key"` // unique email address of account + Federated bool `json:"federated" url:"federated,key"` // whether the user is federated and belongs to an Identity Provider + ID string `json:"id" url:"id,key"` // unique 
identifier of an account
+	} `json:"user" url:"user,key"` // identity of collaborated account
+}
+
+// Update an existing collaborator on an organization app.
+func (s *Service) OrganizationAppCollaboratorUpdate(ctx context.Context, organizationAppIdentity string, organizationAppCollaboratorIdentity string) (*OrganizationAppCollaboratorUpdateResult, error) {
+	var organizationAppCollaborator OrganizationAppCollaboratorUpdateResult
+	return &organizationAppCollaborator, s.Patch(ctx, &organizationAppCollaborator, fmt.Sprintf("/organizations/apps/%v/collaborators/%v", organizationAppIdentity, organizationAppCollaboratorIdentity), nil)
+}
+
+type OrganizationAppCollaboratorListResult []struct {
+	App struct {
+		ID string `json:"id" url:"id,key"` // unique identifier of app
+		Name string `json:"name" url:"name,key"` // unique name of app
+	} `json:"app" url:"app,key"` // app collaborator belongs to
+	CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when collaborator was created
+	ID string `json:"id" url:"id,key"` // unique identifier of collaborator
+	Role *string `json:"role" url:"role,key"` // role in the organization
+	UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when collaborator was updated
+	User struct {
+		Email string `json:"email" url:"email,key"` // unique email address of account
+		Federated bool `json:"federated" url:"federated,key"` // whether the user is federated and belongs to an Identity Provider
+		ID string `json:"id" url:"id,key"` // unique identifier of an account
+	} `json:"user" url:"user,key"` // identity of collaborated account
 }
 
 // List collaborators on an organization app.
-func (s *Service) OrganizationAppCollaboratorList(organizationAppIdentity string, lr *ListRange) ([]*OrganizationAppCollaborator, error) {
-	var organizationAppCollaboratorList []*OrganizationAppCollaborator
-	return organizationAppCollaboratorList, s.Get(&organizationAppCollaboratorList, fmt.Sprintf("/organizations/apps/%v/collaborators", organizationAppIdentity), lr)
+func (s *Service) OrganizationAppCollaboratorList(ctx context.Context, organizationAppIdentity string, lr *ListRange) (OrganizationAppCollaboratorListResult, error) {
+	var organizationAppCollaborator OrganizationAppCollaboratorListResult
+	return organizationAppCollaborator, s.Get(ctx, &organizationAppCollaborator, fmt.Sprintf("/organizations/apps/%v/collaborators", organizationAppIdentity), nil, lr)
+}
+
+// An organization app permission is a behavior that is assigned to a
+// user in an organization app.
+type OrganizationAppPermission struct {
+	Description string `json:"description" url:"description,key"` // A description of what the app permission allows.
+	Name string `json:"name" url:"name,key"` // The name of the app permission.
+}
+type OrganizationAppPermissionListResult []struct {
+	Description string `json:"description" url:"description,key"` // A description of what the app permission allows.
+	Name string `json:"name" url:"name,key"` // The name of the app permission.
+}
+
+// Lists permissions available to organizations.
+func (s *Service) OrganizationAppPermissionList(ctx context.Context, lr *ListRange) (OrganizationAppPermissionListResult, error) {
+	var organizationAppPermission OrganizationAppPermissionListResult
+	return organizationAppPermission, s.Get(ctx, &organizationAppPermission, fmt.Sprintf("/organizations/permissions"), nil, lr)
+}
+
+// An organization feature represents a feature enabled on an
+// organization account.
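//
// Editorial sketch (not part of the diff; h and ctx as in the first
// example):
//
//	features, err := h.OrganizationFeatureList(ctx, "example-org", nil)
//	if err != nil {
//		log.Fatal(err)
//	}
//	for _, f := range features {
//		log.Printf("%s enabled=%t", f.Name, f.Enabled)
//	}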
+type OrganizationFeature struct { + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when account feature was created + Description string `json:"description" url:"description,key"` // description of account feature + DocURL string `json:"doc_url" url:"doc_url,key"` // documentation URL of account feature + Enabled bool `json:"enabled" url:"enabled,key"` // whether or not account feature has been enabled + ID string `json:"id" url:"id,key"` // unique identifier of account feature + Name string `json:"name" url:"name,key"` // unique name of account feature + State string `json:"state" url:"state,key"` // state of account feature + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when account feature was updated +} +type OrganizationFeatureInfoResult struct { + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when account feature was created + Description string `json:"description" url:"description,key"` // description of account feature + DocURL string `json:"doc_url" url:"doc_url,key"` // documentation URL of account feature + Enabled bool `json:"enabled" url:"enabled,key"` // whether or not account feature has been enabled + ID string `json:"id" url:"id,key"` // unique identifier of account feature + Name string `json:"name" url:"name,key"` // unique name of account feature + State string `json:"state" url:"state,key"` // state of account feature + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when account feature was updated +} + +// Info for an existing account feature. +func (s *Service) OrganizationFeatureInfo(ctx context.Context, organizationIdentity string, organizationFeatureIdentity string) (*OrganizationFeatureInfoResult, error) { + var organizationFeature OrganizationFeatureInfoResult + return &organizationFeature, s.Get(ctx, &organizationFeature, fmt.Sprintf("/organizations/%v/features/%v", organizationIdentity, organizationFeatureIdentity), nil, nil) +} + +type OrganizationFeatureListResult []struct { + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when account feature was created + Description string `json:"description" url:"description,key"` // description of account feature + DocURL string `json:"doc_url" url:"doc_url,key"` // documentation URL of account feature + Enabled bool `json:"enabled" url:"enabled,key"` // whether or not account feature has been enabled + ID string `json:"id" url:"id,key"` // unique identifier of account feature + Name string `json:"name" url:"name,key"` // unique name of account feature + State string `json:"state" url:"state,key"` // state of account feature + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when account feature was updated +} + +// List existing organization features. +func (s *Service) OrganizationFeatureList(ctx context.Context, organizationIdentity string, lr *ListRange) (OrganizationFeatureListResult, error) { + var organizationFeature OrganizationFeatureListResult + return organizationFeature, s.Get(ctx, &organizationFeature, fmt.Sprintf("/organizations/%v/features", organizationIdentity), nil, lr) +} + +// An organization invitation represents an invite to an organization. 
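//
// Editorial sketch of the invitee-side flow (not part of the diff; h, ctx,
// and the token value are assumptions):
//
//	invite, err := h.OrganizationInvitationGet(ctx, "invitation-token", nil)
//	if err != nil {
//		log.Fatal(err)
//	}
//	member, err := h.OrganizationInvitationAccept(ctx, "invitation-token")
//	if err != nil {
//		log.Fatal(err)
//	}
//	log.Printf("%s joined %s", member.Email, invite.Organization.Name)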
+type OrganizationInvitation struct {
+	CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when invitation was created
+	ID string `json:"id" url:"id,key"` // Unique identifier of an invitation
+	InvitedBy struct {
+		Email string `json:"email" url:"email,key"` // unique email address of account
+		ID string `json:"id" url:"id,key"` // unique identifier of an account
+		Name *string `json:"name" url:"name,key"` // full name of the account owner
+	} `json:"invited_by" url:"invited_by,key"`
+	Organization struct {
+		ID string `json:"id" url:"id,key"` // unique identifier of organization
+		Name string `json:"name" url:"name,key"` // unique name of organization
+	} `json:"organization" url:"organization,key"`
+	Role *string `json:"role" url:"role,key"` // role in the organization
+	UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when invitation was updated
+	User struct {
+		Email string `json:"email" url:"email,key"` // unique email address of account
+		ID string `json:"id" url:"id,key"` // unique identifier of an account
+		Name *string `json:"name" url:"name,key"` // full name of the account owner
+	} `json:"user" url:"user,key"`
+}
+type OrganizationInvitationListResult []struct {
+	CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when invitation was created
+	ID string `json:"id" url:"id,key"` // Unique identifier of an invitation
+	InvitedBy struct {
+		Email string `json:"email" url:"email,key"` // unique email address of account
+		ID string `json:"id" url:"id,key"` // unique identifier of an account
+		Name *string `json:"name" url:"name,key"` // full name of the account owner
+	} `json:"invited_by" url:"invited_by,key"`
+	Organization struct {
+		ID string `json:"id" url:"id,key"` // unique identifier of organization
+		Name string `json:"name" url:"name,key"` // unique name of organization
+	} `json:"organization" url:"organization,key"`
+	Role *string `json:"role" url:"role,key"` // role in the organization
+	UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when invitation was updated
+	User struct {
+		Email string `json:"email" url:"email,key"` // unique email address of account
+		ID string `json:"id" url:"id,key"` // unique identifier of an account
+		Name *string `json:"name" url:"name,key"` // full name of the account owner
+	} `json:"user" url:"user,key"`
+}
+
+// Get a list of an organization's invitations
+func (s *Service) OrganizationInvitationList(ctx context.Context, organizationName string, lr *ListRange) (OrganizationInvitationListResult, error) {
+	var organizationInvitation OrganizationInvitationListResult
+	return organizationInvitation, s.Get(ctx, &organizationInvitation, fmt.Sprintf("/organizations/%v/invitations", organizationName), nil, lr)
+}
+
+type OrganizationInvitationCreateOpts struct {
+	Email string `json:"email" url:"email,key"` // unique email address of account
+	Role *string `json:"role" url:"role,key"` // role in the organization
+}
+
+// Create Organization Invitation
+func (s *Service) OrganizationInvitationCreate(ctx context.Context, organizationIdentity string, o OrganizationInvitationCreateOpts) (*OrganizationInvitation, error) {
+	var organizationInvitation OrganizationInvitation
+	return &organizationInvitation, s.Put(ctx, &organizationInvitation, fmt.Sprintf("/organizations/%v/invitations", organizationIdentity), o)
+}
+
+// Revoke an organization invitation.
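//
// Editorial sketch of the admin-side flow, create then revoke (not part of
// the diff; h and ctx as in the first example):
//
//	role := "member"
//	invite, err := h.OrganizationInvitationCreate(ctx, "example-org",
//		heroku.OrganizationInvitationCreateOpts{Email: "new-user@example.com", Role: &role})
//	if err != nil {
//		log.Fatal(err)
//	}
//	if _, err := h.OrganizationInvitationRevoke(ctx, "example-org", invite.ID); err != nil {
//		log.Fatal(err)
//	}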
+func (s *Service) OrganizationInvitationRevoke(ctx context.Context, organizationIdentity string, organizationInvitationIdentity string) (*OrganizationInvitation, error) { + var organizationInvitation OrganizationInvitation + return &organizationInvitation, s.Delete(ctx, &organizationInvitation, fmt.Sprintf("/organizations/%v/invitations/%v", organizationIdentity, organizationInvitationIdentity)) +} + +type OrganizationInvitationGetResult struct { + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when invitation was created + ID string `json:"id" url:"id,key"` // Unique identifier of an invitation + InvitedBy struct { + Email string `json:"email" url:"email,key"` // unique email address of account + ID string `json:"id" url:"id,key"` // unique identifier of an account + Name *string `json:"name" url:"name,key"` // full name of the account owner + } `json:"invited_by" url:"invited_by,key"` + Organization struct { + ID string `json:"id" url:"id,key"` // unique identifier of organization + Name string `json:"name" url:"name,key"` // unique name of organization + } `json:"organization" url:"organization,key"` + Role *string `json:"role" url:"role,key"` // role in the organization + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when invitation was updated + User struct { + Email string `json:"email" url:"email,key"` // unique email address of account + ID string `json:"id" url:"id,key"` // unique identifier of an account + Name *string `json:"name" url:"name,key"` // full name of the account owner + } `json:"user" url:"user,key"` +} + +// Get an invitation by its token +func (s *Service) OrganizationInvitationGet(ctx context.Context, organizationInvitationToken string, lr *ListRange) (*OrganizationInvitationGetResult, error) { + var organizationInvitation OrganizationInvitationGetResult + return &organizationInvitation, s.Get(ctx, &organizationInvitation, fmt.Sprintf("/organizations/invitations/%v", organizationInvitationToken), nil, lr) +} + +type OrganizationInvitationAcceptResult struct { + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when the membership record was created + Email string `json:"email" url:"email,key"` // email address of the organization member + Federated bool `json:"federated" url:"federated,key"` // whether the user is federated and belongs to an Identity Provider + ID string `json:"id" url:"id,key"` // unique identifier of organization member + Role *string `json:"role" url:"role,key"` // role in the organization + TwoFactorAuthentication bool `json:"two_factor_authentication" url:"two_factor_authentication,key"` // whether the Enterprise organization member has two factor + // authentication enabled + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when the membership record was updated + User struct { + Email string `json:"email" url:"email,key"` // unique email address of account + ID string `json:"id" url:"id,key"` // unique identifier of an account + Name *string `json:"name" url:"name,key"` // full name of the account owner + } `json:"user" url:"user,key"` // user information for the membership +} + +// Accept Organization Invitation +func (s *Service) OrganizationInvitationAccept(ctx context.Context, organizationInvitationToken string) (*OrganizationInvitationAcceptResult, error) { + var organizationInvitation OrganizationInvitationAcceptResult + return &organizationInvitation, s.Post(ctx, &organizationInvitation, fmt.Sprintf("/organizations/invitations/%v/accept", organizationInvitationToken), 
nil)
+}
+
+// An organization invoice is an itemized bill of goods for an
+// organization which includes pricing and charges.
+type OrganizationInvoice struct {
+	AddonsTotal int `json:"addons_total" url:"addons_total,key"` // total add-on charges on this invoice
+	ChargesTotal int `json:"charges_total" url:"charges_total,key"` // total charges on this invoice
+	CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when invoice was created
+	CreditsTotal int `json:"credits_total" url:"credits_total,key"` // total credits on this invoice
+	DatabaseTotal int `json:"database_total" url:"database_total,key"` // total database charges on this invoice
+	DynoUnits float64 `json:"dyno_units" url:"dyno_units,key"` // The total amount of dyno units consumed across dyno types.
+	ID string `json:"id" url:"id,key"` // unique identifier of this invoice
+	Number int `json:"number" url:"number,key"` // human readable invoice number
+	PaymentStatus string `json:"payment_status" url:"payment_status,key"` // Status of the invoice payment.
+	PeriodEnd string `json:"period_end" url:"period_end,key"` // the ending date that the invoice covers
+	PeriodStart string `json:"period_start" url:"period_start,key"` // the starting date that this invoice covers
+	PlatformTotal int `json:"platform_total" url:"platform_total,key"` // total platform charges on this invoice
+	State int `json:"state" url:"state,key"` // payment status for this invoice (pending, successful, failed)
+	Total int `json:"total" url:"total,key"` // combined total of charges and credits on this invoice
+	UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when invoice was updated
+	WeightedDynoHours float64 `json:"weighted_dyno_hours" url:"weighted_dyno_hours,key"` // The total amount of hours consumed across dyno types.
+}
+type OrganizationInvoiceInfoResult struct {
+	AddonsTotal int `json:"addons_total" url:"addons_total,key"` // total add-on charges on this invoice
+	ChargesTotal int `json:"charges_total" url:"charges_total,key"` // total charges on this invoice
+	CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when invoice was created
+	CreditsTotal int `json:"credits_total" url:"credits_total,key"` // total credits on this invoice
+	DatabaseTotal int `json:"database_total" url:"database_total,key"` // total database charges on this invoice
+	DynoUnits float64 `json:"dyno_units" url:"dyno_units,key"` // The total amount of dyno units consumed across dyno types.
+	ID string `json:"id" url:"id,key"` // unique identifier of this invoice
+	Number int `json:"number" url:"number,key"` // human readable invoice number
+	PaymentStatus string `json:"payment_status" url:"payment_status,key"` // Status of the invoice payment.
+	PeriodEnd string `json:"period_end" url:"period_end,key"` // the ending date that the invoice covers
+	PeriodStart string `json:"period_start" url:"period_start,key"` // the starting date that this invoice covers
+	PlatformTotal int `json:"platform_total" url:"platform_total,key"` // total platform charges on this invoice
+	State int `json:"state" url:"state,key"` // payment status for this invoice (pending, successful, failed)
+	Total int `json:"total" url:"total,key"` // combined total of charges and credits on this invoice
+	UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when invoice was updated
+	WeightedDynoHours float64 `json:"weighted_dyno_hours" url:"weighted_dyno_hours,key"` // The total amount of hours consumed across dyno types.
+}
+
+// Info for existing invoice.
+func (s *Service) OrganizationInvoiceInfo(ctx context.Context, organizationIdentity string, organizationInvoiceIdentity int) (*OrganizationInvoiceInfoResult, error) {
+	var organizationInvoice OrganizationInvoiceInfoResult
+	return &organizationInvoice, s.Get(ctx, &organizationInvoice, fmt.Sprintf("/organizations/%v/invoices/%v", organizationIdentity, organizationInvoiceIdentity), nil, nil)
+}
+
+type OrganizationInvoiceListResult []struct {
+	AddonsTotal int `json:"addons_total" url:"addons_total,key"` // total add-on charges on this invoice
+	ChargesTotal int `json:"charges_total" url:"charges_total,key"` // total charges on this invoice
+	CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when invoice was created
+	CreditsTotal int `json:"credits_total" url:"credits_total,key"` // total credits on this invoice
+	DatabaseTotal int `json:"database_total" url:"database_total,key"` // total database charges on this invoice
+	DynoUnits float64 `json:"dyno_units" url:"dyno_units,key"` // The total amount of dyno units consumed across dyno types.
+	ID string `json:"id" url:"id,key"` // unique identifier of this invoice
+	Number int `json:"number" url:"number,key"` // human readable invoice number
+	PaymentStatus string `json:"payment_status" url:"payment_status,key"` // Status of the invoice payment.
+	PeriodEnd string `json:"period_end" url:"period_end,key"` // the ending date that the invoice covers
+	PeriodStart string `json:"period_start" url:"period_start,key"` // the starting date that this invoice covers
+	PlatformTotal int `json:"platform_total" url:"platform_total,key"` // total platform charges on this invoice
+	State int `json:"state" url:"state,key"` // payment status for this invoice (pending, successful, failed)
+	Total int `json:"total" url:"total,key"` // combined total of charges and credits on this invoice
+	UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when invoice was updated
+	WeightedDynoHours float64 `json:"weighted_dyno_hours" url:"weighted_dyno_hours,key"` // The total amount of hours consumed across dyno types.
+}
+
+// List existing invoices.
+func (s *Service) OrganizationInvoiceList(ctx context.Context, organizationIdentity string, lr *ListRange) (OrganizationInvoiceListResult, error) {
+	var organizationInvoice OrganizationInvoiceListResult
+	return organizationInvoice, s.Get(ctx, &organizationInvoice, fmt.Sprintf("/organizations/%v/invoices", organizationIdentity), nil, lr)
 }
 
 // An organization member is an individual with access to an
 // organization.
type OrganizationMember struct { - CreatedAt time.Time `json:"created_at"` // when organization-member was created - Email string `json:"email"` // email address of the organization member - Role string `json:"role"` // role in the organization - UpdatedAt time.Time `json:"updated_at"` // when organization-member was updated + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when the membership record was created + Email string `json:"email" url:"email,key"` // email address of the organization member + Federated bool `json:"federated" url:"federated,key"` // whether the user is federated and belongs to an Identity Provider + ID string `json:"id" url:"id,key"` // unique identifier of organization member + Role *string `json:"role" url:"role,key"` // role in the organization + TwoFactorAuthentication bool `json:"two_factor_authentication" url:"two_factor_authentication,key"` // whether the Enterprise organization member has two factor + // authentication enabled + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when the membership record was updated + User struct { + Email string `json:"email" url:"email,key"` // unique email address of account + ID string `json:"id" url:"id,key"` // unique identifier of an account + Name *string `json:"name" url:"name,key"` // full name of the account owner + } `json:"user" url:"user,key"` // user information for the membership } type OrganizationMemberCreateOrUpdateOpts struct { - Email string `json:"email"` // email address of the organization member - Role string `json:"role"` // role in the organization + Email string `json:"email" url:"email,key"` // email address of the organization member + Federated *bool `json:"federated,omitempty" url:"federated,omitempty,key"` // whether the user is federated and belongs to an Identity Provider + Role *string `json:"role" url:"role,key"` // role in the organization +} +type OrganizationMemberCreateOrUpdateResult struct { + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when the membership record was created + Email string `json:"email" url:"email,key"` // email address of the organization member + Federated bool `json:"federated" url:"federated,key"` // whether the user is federated and belongs to an Identity Provider + ID string `json:"id" url:"id,key"` // unique identifier of organization member + Role *string `json:"role" url:"role,key"` // role in the organization + TwoFactorAuthentication bool `json:"two_factor_authentication" url:"two_factor_authentication,key"` // whether the Enterprise organization member has two factor + // authentication enabled + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when the membership record was updated + User struct { + Email string `json:"email" url:"email,key"` // unique email address of account + ID string `json:"id" url:"id,key"` // unique identifier of an account + Name *string `json:"name" url:"name,key"` // full name of the account owner + } `json:"user" url:"user,key"` // user information for the membership } // Create a new organization member, or update their role. 
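//
// Editorial sketch (not part of the diff; h and ctx as in the first
// example). The call is backed by PUT, so it upserts the membership:
//
//	role := "admin"
//	m, err := h.OrganizationMemberCreateOrUpdate(ctx, "example-org",
//		heroku.OrganizationMemberCreateOrUpdateOpts{Email: "user@example.com", Role: &role})
//	if err != nil {
//		log.Fatal(err)
//	}
//	log.Printf("membership for %s updated at %s", m.Email, m.UpdatedAt)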
-func (s *Service) OrganizationMemberCreateOrUpdate(organizationIdentity string, o struct { - Email string `json:"email"` // email address of the organization member - Role string `json:"role"` // role in the organization -}) (*OrganizationMember, error) { - var organizationMember OrganizationMember - return &organizationMember, s.Put(&organizationMember, fmt.Sprintf("/organizations/%v/members", organizationIdentity), o) +func (s *Service) OrganizationMemberCreateOrUpdate(ctx context.Context, organizationIdentity string, o OrganizationMemberCreateOrUpdateOpts) (*OrganizationMemberCreateOrUpdateResult, error) { + var organizationMember OrganizationMemberCreateOrUpdateResult + return &organizationMember, s.Put(ctx, &organizationMember, fmt.Sprintf("/organizations/%v/members", organizationIdentity), o) +} + +type OrganizationMemberCreateOpts struct { + Email string `json:"email" url:"email,key"` // email address of the organization member + Federated *bool `json:"federated,omitempty" url:"federated,omitempty,key"` // whether the user is federated and belongs to an Identity Provider + Role *string `json:"role" url:"role,key"` // role in the organization +} +type OrganizationMemberCreateResult struct { + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when the membership record was created + Email string `json:"email" url:"email,key"` // email address of the organization member + Federated bool `json:"federated" url:"federated,key"` // whether the user is federated and belongs to an Identity Provider + ID string `json:"id" url:"id,key"` // unique identifier of organization member + Role *string `json:"role" url:"role,key"` // role in the organization + TwoFactorAuthentication bool `json:"two_factor_authentication" url:"two_factor_authentication,key"` // whether the Enterprise organization member has two factor + // authentication enabled + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when the membership record was updated + User struct { + Email string `json:"email" url:"email,key"` // unique email address of account + ID string `json:"id" url:"id,key"` // unique identifier of an account + Name *string `json:"name" url:"name,key"` // full name of the account owner + } `json:"user" url:"user,key"` // user information for the membership +} + +// Create a new organization member. 
+func (s *Service) OrganizationMemberCreate(ctx context.Context, organizationIdentity string, o OrganizationMemberCreateOpts) (*OrganizationMemberCreateResult, error) { + var organizationMember OrganizationMemberCreateResult + return &organizationMember, s.Post(ctx, &organizationMember, fmt.Sprintf("/organizations/%v/members", organizationIdentity), o) +} + +type OrganizationMemberUpdateOpts struct { + Email string `json:"email" url:"email,key"` // email address of the organization member + Federated *bool `json:"federated,omitempty" url:"federated,omitempty,key"` // whether the user is federated and belongs to an Identity Provider + Role *string `json:"role" url:"role,key"` // role in the organization +} +type OrganizationMemberUpdateResult struct { + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when the membership record was created + Email string `json:"email" url:"email,key"` // email address of the organization member + Federated bool `json:"federated" url:"federated,key"` // whether the user is federated and belongs to an Identity Provider + ID string `json:"id" url:"id,key"` // unique identifier of organization member + Role *string `json:"role" url:"role,key"` // role in the organization + TwoFactorAuthentication bool `json:"two_factor_authentication" url:"two_factor_authentication,key"` // whether the Enterprise organization member has two factor + // authentication enabled + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when the membership record was updated + User struct { + Email string `json:"email" url:"email,key"` // unique email address of account + ID string `json:"id" url:"id,key"` // unique identifier of an account + Name *string `json:"name" url:"name,key"` // full name of the account owner + } `json:"user" url:"user,key"` // user information for the membership +} + +// Update an organization member. +func (s *Service) OrganizationMemberUpdate(ctx context.Context, organizationIdentity string, o OrganizationMemberUpdateOpts) (*OrganizationMemberUpdateResult, error) { + var organizationMember OrganizationMemberUpdateResult + return &organizationMember, s.Patch(ctx, &organizationMember, fmt.Sprintf("/organizations/%v/members", organizationIdentity), o) +} + +type OrganizationMemberDeleteResult struct { + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when the membership record was created + Email string `json:"email" url:"email,key"` // email address of the organization member + Federated bool `json:"federated" url:"federated,key"` // whether the user is federated and belongs to an Identity Provider + ID string `json:"id" url:"id,key"` // unique identifier of organization member + Role *string `json:"role" url:"role,key"` // role in the organization + TwoFactorAuthentication bool `json:"two_factor_authentication" url:"two_factor_authentication,key"` // whether the Enterprise organization member has two factor + // authentication enabled + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when the membership record was updated + User struct { + Email string `json:"email" url:"email,key"` // unique email address of account + ID string `json:"id" url:"id,key"` // unique identifier of an account + Name *string `json:"name" url:"name,key"` // full name of the account owner + } `json:"user" url:"user,key"` // user information for the membership } // Remove a member from the organization. 
-func (s *Service) OrganizationMemberDelete(organizationIdentity string, organizationMemberIdentity string) error { - return s.Delete(fmt.Sprintf("/organizations/%v/members/%v", organizationIdentity, organizationMemberIdentity)) +func (s *Service) OrganizationMemberDelete(ctx context.Context, organizationIdentity string, organizationMemberIdentity string) (*OrganizationMemberDeleteResult, error) { + var organizationMember OrganizationMemberDeleteResult + return &organizationMember, s.Delete(ctx, &organizationMember, fmt.Sprintf("/organizations/%v/members/%v", organizationIdentity, organizationMemberIdentity)) +} + +type OrganizationMemberListResult []struct { + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when the membership record was created + Email string `json:"email" url:"email,key"` // email address of the organization member + Federated bool `json:"federated" url:"federated,key"` // whether the user is federated and belongs to an Identity Provider + ID string `json:"id" url:"id,key"` // unique identifier of organization member + Role *string `json:"role" url:"role,key"` // role in the organization + TwoFactorAuthentication bool `json:"two_factor_authentication" url:"two_factor_authentication,key"` // whether the Enterprise organization member has two factor + // authentication enabled + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when the membership record was updated + User struct { + Email string `json:"email" url:"email,key"` // unique email address of account + ID string `json:"id" url:"id,key"` // unique identifier of an account + Name *string `json:"name" url:"name,key"` // full name of the account owner + } `json:"user" url:"user,key"` // user information for the membership } // List members of the organization. 
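//
// Editorial sketch (not part of the diff): a ListRange can bound and order
// the listing. The Field/Max field names below follow this package's
// ListRange type and are assumptions beyond what this hunk shows; h and
// ctx as in the first example:
//
//	lr := &heroku.ListRange{Field: "email", Max: 100}
//	members, err := h.OrganizationMemberList(ctx, "example-org", lr)
//	if err != nil {
//		log.Fatal(err)
//	}
//	for _, m := range members {
//		log.Println(m.Email)
//	}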
-func (s *Service) OrganizationMemberList(organizationIdentity string, lr *ListRange) ([]*OrganizationMember, error) { - var organizationMemberList []*OrganizationMember - return organizationMemberList, s.Get(&organizationMemberList, fmt.Sprintf("/organizations/%v/members", organizationIdentity), lr) +func (s *Service) OrganizationMemberList(ctx context.Context, organizationIdentity string, lr *ListRange) (OrganizationMemberListResult, error) { + var organizationMember OrganizationMemberListResult + return organizationMember, s.Get(ctx, &organizationMember, fmt.Sprintf("/organizations/%v/members", organizationIdentity), nil, lr) +} + +// Tracks an organization's preferences +type OrganizationPreferences struct { + DefaultPermission *string `json:"default-permission" url:"default-permission,key"` // The default permission used when adding new members to the + // organization + WhitelistingEnabled *bool `json:"whitelisting-enabled" url:"whitelisting-enabled,key"` // Whether whitelisting rules should be applied to add-on installations +} +type OrganizationPreferencesListResult struct { + DefaultPermission *string `json:"default-permission" url:"default-permission,key"` // The default permission used when adding new members to the + // organization + WhitelistingEnabled *bool `json:"whitelisting-enabled" url:"whitelisting-enabled,key"` // Whether whitelisting rules should be applied to add-on installations +} + +// Retrieve Organization Preferences +func (s *Service) OrganizationPreferencesList(ctx context.Context, organizationPreferencesIdentity string) (*OrganizationPreferencesListResult, error) { + var organizationPreferences OrganizationPreferencesListResult + return &organizationPreferences, s.Get(ctx, &organizationPreferences, fmt.Sprintf("/organizations/%v/preferences", organizationPreferencesIdentity), nil, nil) +} + +type OrganizationPreferencesUpdateOpts struct { + WhitelistingEnabled *bool `json:"whitelisting-enabled,omitempty" url:"whitelisting-enabled,omitempty,key"` // Whether whitelisting rules should be applied to add-on installations +} +type OrganizationPreferencesUpdateResult struct { + DefaultPermission *string `json:"default-permission" url:"default-permission,key"` // The default permission used when adding new members to the + // organization + WhitelistingEnabled *bool `json:"whitelisting-enabled" url:"whitelisting-enabled,key"` // Whether whitelisting rules should be applied to add-on installations +} + +// Update Organization Preferences +func (s *Service) OrganizationPreferencesUpdate(ctx context.Context, organizationPreferencesIdentity string, o OrganizationPreferencesUpdateOpts) (*OrganizationPreferencesUpdateResult, error) { + var organizationPreferences OrganizationPreferencesUpdateResult + return &organizationPreferences, s.Patch(ctx, &organizationPreferences, fmt.Sprintf("/organizations/%v/preferences", organizationPreferencesIdentity), o) +} + +// An outbound-ruleset is a collection of rules that specify what hosts +// Dynos are allowed to communicate with. +type OutboundRuleset struct { + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when outbound-ruleset was created + CreatedBy string `json:"created_by" url:"created_by,key"` // unique email address of account + ID string `json:"id" url:"id,key"` // unique identifier of an outbound-ruleset + Rules []struct { + FromPort int `json:"from_port" url:"from_port,key"` // an endpoint of communication in an operating system. 
+ Protocol string `json:"protocol" url:"protocol,key"` // formal standards and policies comprised of rules, procedures and + // formats that define communication between two or more devices over a + // network + Target string `json:"target" url:"target,key"` // is the target destination in CIDR notation + ToPort int `json:"to_port" url:"to_port,key"` // an endpoint of communication in an operating system. + } `json:"rules" url:"rules,key"` +} +type OutboundRulesetInfoResult struct { + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when outbound-ruleset was created + CreatedBy string `json:"created_by" url:"created_by,key"` // unique email address of account + ID string `json:"id" url:"id,key"` // unique identifier of an outbound-ruleset + Rules []struct { + FromPort int `json:"from_port" url:"from_port,key"` // an endpoint of communication in an operating system. + Protocol string `json:"protocol" url:"protocol,key"` // formal standards and policies comprised of rules, procedures and + // formats that define communication between two or more devices over a + // network + Target string `json:"target" url:"target,key"` // is the target destination in CIDR notation + ToPort int `json:"to_port" url:"to_port,key"` // an endpoint of communication in an operating system. + } `json:"rules" url:"rules,key"` +} + +// Current outbound ruleset for a space +func (s *Service) OutboundRulesetInfo(ctx context.Context, spaceIdentity string) (*OutboundRulesetInfoResult, error) { + var outboundRuleset OutboundRulesetInfoResult + return &outboundRuleset, s.Get(ctx, &outboundRuleset, fmt.Sprintf("/spaces/%v/outbound-ruleset", spaceIdentity), nil, nil) +} + +type OutboundRulesetListResult []struct { + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when outbound-ruleset was created + CreatedBy string `json:"created_by" url:"created_by,key"` // unique email address of account + ID string `json:"id" url:"id,key"` // unique identifier of an outbound-ruleset + Rules []struct { + FromPort int `json:"from_port" url:"from_port,key"` // an endpoint of communication in an operating system. + Protocol string `json:"protocol" url:"protocol,key"` // formal standards and policies comprised of rules, procedures and + // formats that define communication between two or more devices over a + // network + Target string `json:"target" url:"target,key"` // is the target destination in CIDR notation + ToPort int `json:"to_port" url:"to_port,key"` // an endpoint of communication in an operating system. + } `json:"rules" url:"rules,key"` +} + +// List all Outbound Rulesets for a space +func (s *Service) OutboundRulesetList(ctx context.Context, spaceIdentity string, lr *ListRange) (OutboundRulesetListResult, error) { + var outboundRuleset OutboundRulesetListResult + return outboundRuleset, s.Get(ctx, &outboundRuleset, fmt.Sprintf("/spaces/%v/outbound-rulesets", spaceIdentity), nil, lr) +} + +type OutboundRulesetCreateOpts struct { + Rules *[]*struct { + FromPort int `json:"from_port" url:"from_port,key"` // an endpoint of communication in an operating system. + Protocol string `json:"protocol" url:"protocol,key"` // formal standards and policies comprised of rules, procedures and + // formats that define communication between two or more devices over a + // network + Target string `json:"target" url:"target,key"` // is the target destination in CIDR notation + ToPort int `json:"to_port" url:"to_port,key"` // an endpoint of communication in an operating system. 
+ } `json:"rules,omitempty" url:"rules,omitempty,key"`
+}
+
+// Create a new outbound ruleset
+func (s *Service) OutboundRulesetCreate(ctx context.Context, spaceIdentity string, o OutboundRulesetCreateOpts) (*OutboundRuleset, error) {
+ var outboundRuleset OutboundRuleset
+ return &outboundRuleset, s.Put(ctx, &outboundRuleset, fmt.Sprintf("/spaces/%v/outbound-ruleset", spaceIdentity), o)
+}
+
+// A password reset represents an in-process password reset attempt.
+type PasswordReset struct {
+ CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when password reset was created
+ User struct {
+ Email string `json:"email" url:"email,key"` // unique email address of account
+ ID string `json:"id" url:"id,key"` // unique identifier of an account
+ } `json:"user" url:"user,key"`
+}
+type PasswordResetResetPasswordOpts struct {
+ Email *string `json:"email,omitempty" url:"email,omitempty,key"` // unique email address of account
+}
+
+// Reset account's password. This will send a reset password link to the
+// user's email address.
+func (s *Service) PasswordResetResetPassword(ctx context.Context, o PasswordResetResetPasswordOpts) (*PasswordReset, error) {
+ var passwordReset PasswordReset
+ return &passwordReset, s.Post(ctx, &passwordReset, fmt.Sprintf("/password-resets"), o)
+}
+
+type PasswordResetCompleteResetPasswordOpts struct {
+ Password *string `json:"password,omitempty" url:"password,omitempty,key"` // current password on the account
+ PasswordConfirmation *string `json:"password_confirmation,omitempty" url:"password_confirmation,omitempty,key"` // confirmation of the new password
+}
+
+// Complete password reset.
+func (s *Service) PasswordResetCompleteResetPassword(ctx context.Context, passwordResetResetPasswordToken string, o PasswordResetCompleteResetPasswordOpts) (*PasswordReset, error) {
+ var passwordReset PasswordReset
+ return &passwordReset, s.Post(ctx, &passwordReset, fmt.Sprintf("/password-resets/%v/actions/finalize", passwordResetResetPasswordToken), o)
+}
+
+// A pipeline allows grouping of apps into different stages.
+type Pipeline struct {
+ CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when pipeline was created
+ ID string `json:"id" url:"id,key"` // unique identifier of pipeline
+ Name string `json:"name" url:"name,key"` // name of pipeline
+ UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when pipeline was updated
+}
+type PipelineCreateOpts struct {
+ Name string `json:"name" url:"name,key"` // name of pipeline
+}
+type PipelineCreateResult struct {
+ CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when pipeline was created
+ ID string `json:"id" url:"id,key"` // unique identifier of pipeline
+ Name string `json:"name" url:"name,key"` // name of pipeline
+ UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when pipeline was updated
+}
+
+// Create a new pipeline.
+func (s *Service) PipelineCreate(ctx context.Context, o PipelineCreateOpts) (*PipelineCreateResult, error) {
+ var pipeline PipelineCreateResult
+ return &pipeline, s.Post(ctx, &pipeline, fmt.Sprintf("/pipelines"), o)
+}
+
+type PipelineInfoResult struct {
+ CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when pipeline was created
+ ID string `json:"id" url:"id,key"` // unique identifier of pipeline
+ Name string `json:"name" url:"name,key"` // name of pipeline
+ UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when pipeline was updated
+}
+
+// Info for existing pipeline.
+func (s *Service) PipelineInfo(ctx context.Context, pipelineIdentity string) (*PipelineInfoResult, error) { + var pipeline PipelineInfoResult + return &pipeline, s.Get(ctx, &pipeline, fmt.Sprintf("/pipelines/%v", pipelineIdentity), nil, nil) +} + +type PipelineDeleteResult struct { + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when pipeline was created + ID string `json:"id" url:"id,key"` // unique identifier of pipeline + Name string `json:"name" url:"name,key"` // name of pipeline + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when pipeline was updated +} + +// Delete an existing pipeline. +func (s *Service) PipelineDelete(ctx context.Context, pipelineID string) (*PipelineDeleteResult, error) { + var pipeline PipelineDeleteResult + return &pipeline, s.Delete(ctx, &pipeline, fmt.Sprintf("/pipelines/%v", pipelineID)) +} + +type PipelineUpdateOpts struct { + Name *string `json:"name,omitempty" url:"name,omitempty,key"` // name of pipeline +} +type PipelineUpdateResult struct { + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when pipeline was created + ID string `json:"id" url:"id,key"` // unique identifier of pipeline + Name string `json:"name" url:"name,key"` // name of pipeline + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when pipeline was updated +} + +// Update an existing pipeline. +func (s *Service) PipelineUpdate(ctx context.Context, pipelineID string, o PipelineUpdateOpts) (*PipelineUpdateResult, error) { + var pipeline PipelineUpdateResult + return &pipeline, s.Patch(ctx, &pipeline, fmt.Sprintf("/pipelines/%v", pipelineID), o) +} + +type PipelineListResult []struct { + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when pipeline was created + ID string `json:"id" url:"id,key"` // unique identifier of pipeline + Name string `json:"name" url:"name,key"` // name of pipeline + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when pipeline was updated +} + +// List existing pipelines. 
+func (s *Service) PipelineList(ctx context.Context, lr *ListRange) (PipelineListResult, error) { + var pipeline PipelineListResult + return pipeline, s.Get(ctx, &pipeline, fmt.Sprintf("/pipelines"), nil, lr) +} + +// Information about an app's coupling to a pipeline +type PipelineCoupling struct { + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + } `json:"app" url:"app,key"` // app involved in the pipeline coupling + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when pipeline coupling was created + ID string `json:"id" url:"id,key"` // unique identifier of pipeline coupling + Pipeline struct { + ID string `json:"id" url:"id,key"` // unique identifier of pipeline + } `json:"pipeline" url:"pipeline,key"` // pipeline involved in the coupling + Stage string `json:"stage" url:"stage,key"` // target pipeline stage + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when pipeline coupling was updated +} +type PipelineCouplingListResult []struct { + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + } `json:"app" url:"app,key"` // app involved in the pipeline coupling + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when pipeline coupling was created + ID string `json:"id" url:"id,key"` // unique identifier of pipeline coupling + Pipeline struct { + ID string `json:"id" url:"id,key"` // unique identifier of pipeline + } `json:"pipeline" url:"pipeline,key"` // pipeline involved in the coupling + Stage string `json:"stage" url:"stage,key"` // target pipeline stage + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when pipeline coupling was updated +} + +// List couplings for a pipeline +func (s *Service) PipelineCouplingList(ctx context.Context, pipelineID string, lr *ListRange) (PipelineCouplingListResult, error) { + var pipelineCoupling PipelineCouplingListResult + return pipelineCoupling, s.Get(ctx, &pipelineCoupling, fmt.Sprintf("/pipelines/%v/pipeline-couplings", pipelineID), nil, lr) +} + +type PipelineCouplingCreateOpts struct { + App string `json:"app" url:"app,key"` // unique identifier of app + Pipeline string `json:"pipeline" url:"pipeline,key"` // unique identifier of pipeline + Stage string `json:"stage" url:"stage,key"` // target pipeline stage +} +type PipelineCouplingCreateResult struct { + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + } `json:"app" url:"app,key"` // app involved in the pipeline coupling + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when pipeline coupling was created + ID string `json:"id" url:"id,key"` // unique identifier of pipeline coupling + Pipeline struct { + ID string `json:"id" url:"id,key"` // unique identifier of pipeline + } `json:"pipeline" url:"pipeline,key"` // pipeline involved in the coupling + Stage string `json:"stage" url:"stage,key"` // target pipeline stage + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when pipeline coupling was updated +} + +// Create a new pipeline coupling. 
+func (s *Service) PipelineCouplingCreate(ctx context.Context, o PipelineCouplingCreateOpts) (*PipelineCouplingCreateResult, error) { + var pipelineCoupling PipelineCouplingCreateResult + return &pipelineCoupling, s.Post(ctx, &pipelineCoupling, fmt.Sprintf("/pipeline-couplings"), o) +} + +type PipelineCouplingInfoResult struct { + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + } `json:"app" url:"app,key"` // app involved in the pipeline coupling + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when pipeline coupling was created + ID string `json:"id" url:"id,key"` // unique identifier of pipeline coupling + Pipeline struct { + ID string `json:"id" url:"id,key"` // unique identifier of pipeline + } `json:"pipeline" url:"pipeline,key"` // pipeline involved in the coupling + Stage string `json:"stage" url:"stage,key"` // target pipeline stage + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when pipeline coupling was updated +} + +// Info for an existing pipeline coupling. +func (s *Service) PipelineCouplingInfo(ctx context.Context, pipelineCouplingIdentity string) (*PipelineCouplingInfoResult, error) { + var pipelineCoupling PipelineCouplingInfoResult + return &pipelineCoupling, s.Get(ctx, &pipelineCoupling, fmt.Sprintf("/pipeline-couplings/%v", pipelineCouplingIdentity), nil, nil) +} + +type PipelineCouplingDeleteResult struct { + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + } `json:"app" url:"app,key"` // app involved in the pipeline coupling + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when pipeline coupling was created + ID string `json:"id" url:"id,key"` // unique identifier of pipeline coupling + Pipeline struct { + ID string `json:"id" url:"id,key"` // unique identifier of pipeline + } `json:"pipeline" url:"pipeline,key"` // pipeline involved in the coupling + Stage string `json:"stage" url:"stage,key"` // target pipeline stage + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when pipeline coupling was updated +} + +// Delete an existing pipeline coupling. +func (s *Service) PipelineCouplingDelete(ctx context.Context, pipelineCouplingIdentity string) (*PipelineCouplingDeleteResult, error) { + var pipelineCoupling PipelineCouplingDeleteResult + return &pipelineCoupling, s.Delete(ctx, &pipelineCoupling, fmt.Sprintf("/pipeline-couplings/%v", pipelineCouplingIdentity)) +} + +type PipelineCouplingUpdateOpts struct { + Stage *string `json:"stage,omitempty" url:"stage,omitempty,key"` // target pipeline stage +} +type PipelineCouplingUpdateResult struct { + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + } `json:"app" url:"app,key"` // app involved in the pipeline coupling + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when pipeline coupling was created + ID string `json:"id" url:"id,key"` // unique identifier of pipeline coupling + Pipeline struct { + ID string `json:"id" url:"id,key"` // unique identifier of pipeline + } `json:"pipeline" url:"pipeline,key"` // pipeline involved in the coupling + Stage string `json:"stage" url:"stage,key"` // target pipeline stage + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when pipeline coupling was updated +} + +// Update an existing pipeline coupling. 
+func (s *Service) PipelineCouplingUpdate(ctx context.Context, pipelineCouplingIdentity string, o PipelineCouplingUpdateOpts) (*PipelineCouplingUpdateResult, error) {
+ var pipelineCoupling PipelineCouplingUpdateResult
+ return &pipelineCoupling, s.Patch(ctx, &pipelineCoupling, fmt.Sprintf("/pipeline-couplings/%v", pipelineCouplingIdentity), o)
+}
+
+// Promotions allow you to move code from an app in a pipeline to all
+// targets
+type PipelinePromotion struct {
+ CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when promotion was created
+ ID string `json:"id" url:"id,key"` // unique identifier of promotion
+ Pipeline struct {
+ ID string `json:"id" url:"id,key"` // unique identifier of pipeline
+ } `json:"pipeline" url:"pipeline,key"` // the pipeline which the promotion belongs to
+ Source struct {
+ App struct {
+ ID string `json:"id" url:"id,key"` // unique identifier of app
+ } `json:"app" url:"app,key"` // the app which was promoted from
+ Release struct {
+ ID string `json:"id" url:"id,key"` // unique identifier of release
+ } `json:"release" url:"release,key"` // the release used to promote from
+ } `json:"source" url:"source,key"` // the app being promoted from
+ Status string `json:"status" url:"status,key"` // status of promotion
+ UpdatedAt *time.Time `json:"updated_at" url:"updated_at,key"` // when promotion was updated
+}
+type PipelinePromotionCreateOpts struct {
+ Pipeline struct {
+ ID string `json:"id" url:"id,key"` // unique identifier of pipeline
+ } `json:"pipeline" url:"pipeline,key"` // pipeline involved in the promotion
+ Source struct {
+ App *struct {
+ ID *string `json:"id,omitempty" url:"id,omitempty,key"` // unique identifier of app
+ } `json:"app,omitempty" url:"app,omitempty,key"` // the app which was promoted from
+ } `json:"source" url:"source,key"` // the app being promoted from
+ Targets []struct {
+ App *struct {
+ ID *string `json:"id,omitempty" url:"id,omitempty,key"` // unique identifier of app
+ } `json:"app,omitempty" url:"app,omitempty,key"` // the app being promoted to
+ } `json:"targets" url:"targets,key"`
+}
+
+// Create a new promotion.
+func (s *Service) PipelinePromotionCreate(ctx context.Context, o PipelinePromotionCreateOpts) (*PipelinePromotion, error) {
+ var pipelinePromotion PipelinePromotion
+ return &pipelinePromotion, s.Post(ctx, &pipelinePromotion, fmt.Sprintf("/pipeline-promotions"), o)
+}
+
+type PipelinePromotionInfoResult struct {
+ CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when promotion was created
+ ID string `json:"id" url:"id,key"` // unique identifier of promotion
+ Pipeline struct {
+ ID string `json:"id" url:"id,key"` // unique identifier of pipeline
+ } `json:"pipeline" url:"pipeline,key"` // the pipeline which the promotion belongs to
+ Source struct {
+ App struct {
+ ID string `json:"id" url:"id,key"` // unique identifier of app
+ } `json:"app" url:"app,key"` // the app which was promoted from
+ Release struct {
+ ID string `json:"id" url:"id,key"` // unique identifier of release
+ } `json:"release" url:"release,key"` // the release used to promote from
+ } `json:"source" url:"source,key"` // the app being promoted from
+ Status string `json:"status" url:"status,key"` // status of promotion
+ UpdatedAt *time.Time `json:"updated_at" url:"updated_at,key"` // when promotion was updated
+}
+
+// Info for existing pipeline promotion.
+func (s *Service) PipelinePromotionInfo(ctx context.Context, pipelinePromotionIdentity string) (*PipelinePromotionInfoResult, error) { + var pipelinePromotion PipelinePromotionInfoResult + return &pipelinePromotion, s.Get(ctx, &pipelinePromotion, fmt.Sprintf("/pipeline-promotions/%v", pipelinePromotionIdentity), nil, nil) +} + +// Promotion targets represent an individual app being promoted to +type PipelinePromotionTarget struct { + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + } `json:"app" url:"app,key"` // the app which was promoted to + ErrorMessage *string `json:"error_message" url:"error_message,key"` // an error message for why the promotion failed + ID string `json:"id" url:"id,key"` // unique identifier of promotion target + PipelinePromotion struct { + ID string `json:"id" url:"id,key"` // unique identifier of promotion + } `json:"pipeline_promotion" url:"pipeline_promotion,key"` // the promotion which the target belongs to + Release *struct { + ID string `json:"id" url:"id,key"` // unique identifier of release + } `json:"release" url:"release,key"` // the release which was created on the target app + Status string `json:"status" url:"status,key"` // status of promotion +} +type PipelinePromotionTargetListResult []struct { + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + } `json:"app" url:"app,key"` // the app which was promoted to + ErrorMessage *string `json:"error_message" url:"error_message,key"` // an error message for why the promotion failed + ID string `json:"id" url:"id,key"` // unique identifier of promotion target + PipelinePromotion struct { + ID string `json:"id" url:"id,key"` // unique identifier of promotion + } `json:"pipeline_promotion" url:"pipeline_promotion,key"` // the promotion which the target belongs to + Release *struct { + ID string `json:"id" url:"id,key"` // unique identifier of release + } `json:"release" url:"release,key"` // the release which was created on the target app + Status string `json:"status" url:"status,key"` // status of promotion +} + +// List promotion targets belonging to an existing promotion. +func (s *Service) PipelinePromotionTargetList(ctx context.Context, pipelinePromotionID string, lr *ListRange) (PipelinePromotionTargetListResult, error) { + var pipelinePromotionTarget PipelinePromotionTargetListResult + return pipelinePromotionTarget, s.Get(ctx, &pipelinePromotionTarget, fmt.Sprintf("/pipeline-promotions/%v/promotion-targets", pipelinePromotionID), nil, lr) } // Plans represent different configurations of add-ons that may be added // to apps. Endpoints under add-on services can be accessed without // authentication. 
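For a sense of how these generated helpers are called, here is a minimal usage sketch against the plan endpoints defined just below (`PlanList` and `PlanInfo`). It assumes the vendored client is imported as `heroku "github.com/cyberdelia/heroku-go/v3"` and that credentials are set on the package's `DefaultTransport`; the add-on service name is a placeholder.

```go
package main

import (
	"context"
	"fmt"
	"log"

	heroku "github.com/cyberdelia/heroku-go/v3"
)

func main() {
	// Credentials go on the default transport; NewService wraps an
	// *http.Client with the JSON handling the generated methods expect.
	heroku.DefaultTransport.Username = "user@example.com" // placeholder
	heroku.DefaultTransport.Password = "heroku-api-key"   // placeholder
	svc := heroku.NewService(heroku.DefaultClient)
	ctx := context.Background()

	// GET /addon-services/:id/plans via the PlanList helper below.
	plans, err := svc.PlanList(ctx, "heroku-postgresql", nil)
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range plans {
		fmt.Printf("%s: %d cents per %s\n", p.Name, p.Price.Cents, p.Price.Unit)
	}
}
```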
type Plan struct {
- CreatedAt time.Time `json:"created_at"` // when plan was created
- Default bool `json:"default"` // whether this plan is the default for its addon service
- Description string `json:"description"` // description of plan
- ID string `json:"id"` // unique identifier of this plan
- Name string `json:"name"` // unique name of this plan
- Price struct {
- Cents int `json:"cents"` // price in cents per unit of plan
- Unit string `json:"unit"` // unit of price for plan
- } `json:"price"` // price
- State string `json:"state"` // release status for plan
- UpdatedAt time.Time `json:"updated_at"` // when plan was updated
+ AddonService struct {
+ ID string `json:"id" url:"id,key"` // unique identifier of this add-on-service
+ Name string `json:"name" url:"name,key"` // unique name of this add-on-service
+ } `json:"addon_service" url:"addon_service,key"` // identity of add-on service
+ Compliance *[]string `json:"compliance" url:"compliance,key"` // the compliance regimes applied to an add-on plan
+ CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when plan was created
+ Default bool `json:"default" url:"default,key"` // whether this plan is the default for its add-on service
+ Description string `json:"description" url:"description,key"` // description of plan
+ HumanName string `json:"human_name" url:"human_name,key"` // human readable name of the add-on plan
+ ID string `json:"id" url:"id,key"` // unique identifier of this plan
+ InstallableInsidePrivateNetwork bool `json:"installable_inside_private_network" url:"installable_inside_private_network,key"` // whether this plan is installable to a Private Spaces app
+ InstallableOutsidePrivateNetwork bool `json:"installable_outside_private_network" url:"installable_outside_private_network,key"` // whether this plan is installable to a Common Runtime app
+ Name string `json:"name" url:"name,key"` // unique name of this plan
+ Price struct {
+ Cents int `json:"cents" url:"cents,key"` // price in cents per unit of plan
+ Unit string `json:"unit" url:"unit,key"` // unit of price for plan
+ } `json:"price" url:"price,key"` // price
+ SpaceDefault bool `json:"space_default" url:"space_default,key"` // whether this plan is the default for apps in Private Spaces
+ State string `json:"state" url:"state,key"` // release status for plan
+ UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when plan was updated
+ Visible bool `json:"visible" url:"visible,key"` // whether this plan is publicly visible
+}
+type PlanInfoResult struct {
+ AddonService struct {
+ ID string `json:"id" url:"id,key"` // unique identifier of this add-on-service
+ Name string `json:"name" url:"name,key"` // unique name of this add-on-service
+ } `json:"addon_service" url:"addon_service,key"` // identity of add-on service
+ Compliance *[]string `json:"compliance" url:"compliance,key"` // the compliance regimes applied to an add-on plan
+ CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when plan was created
+ Default bool `json:"default" url:"default,key"` // whether this plan is the default for its add-on service
+ Description string `json:"description" url:"description,key"` // description of plan
+ HumanName string `json:"human_name" url:"human_name,key"` // human readable name of the add-on plan
+ ID string `json:"id" url:"id,key"` // unique identifier of this plan
+ InstallableInsidePrivateNetwork bool `json:"installable_inside_private_network" url:"installable_inside_private_network,key"` // whether this plan is installable to a Private Spaces app
+ InstallableOutsidePrivateNetwork bool `json:"installable_outside_private_network" url:"installable_outside_private_network,key"` // whether this plan is installable to a Common Runtime app
+ Name string `json:"name" url:"name,key"` // unique name of this plan
+ Price struct {
+ Cents int `json:"cents" url:"cents,key"` // price in cents per unit of plan
+ Unit string `json:"unit" url:"unit,key"` // unit of price for plan
+ } `json:"price" url:"price,key"` // price
+ SpaceDefault bool `json:"space_default" url:"space_default,key"` // whether this plan is the default for apps in Private Spaces
+ State string `json:"state" url:"state,key"` // release status for plan
+ UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when plan was updated
+ Visible bool `json:"visible" url:"visible,key"` // whether this plan is publicly visible
}

// Info for existing plan.
-func (s *Service) PlanInfo(addonServiceIdentity string, planIdentity string) (*Plan, error) {
- var plan Plan
- return &plan, s.Get(&plan, fmt.Sprintf("/addon-services/%v/plans/%v", addonServiceIdentity, planIdentity), nil)
+func (s *Service) PlanInfo(ctx context.Context, addOnServiceIdentity string, planIdentity string) (*PlanInfoResult, error) {
+ var plan PlanInfoResult
+ return &plan, s.Get(ctx, &plan, fmt.Sprintf("/addon-services/%v/plans/%v", addOnServiceIdentity, planIdentity), nil, nil)
+}
+
+type PlanListResult []struct {
+ AddonService struct {
+ ID string `json:"id" url:"id,key"` // unique identifier of this add-on-service
+ Name string `json:"name" url:"name,key"` // unique name of this add-on-service
+ } `json:"addon_service" url:"addon_service,key"` // identity of add-on service
+ Compliance *[]string `json:"compliance" url:"compliance,key"` // the compliance regimes applied to an add-on plan
+ CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when plan was created
+ Default bool `json:"default" url:"default,key"` // whether this plan is the default for its add-on service
+ Description string `json:"description" url:"description,key"` // description of plan
+ HumanName string `json:"human_name" url:"human_name,key"` // human readable name of the add-on plan
+ ID string `json:"id" url:"id,key"` // unique identifier of this plan
+ InstallableInsidePrivateNetwork bool `json:"installable_inside_private_network" url:"installable_inside_private_network,key"` // whether this plan is installable to a Private Spaces app
+ InstallableOutsidePrivateNetwork bool `json:"installable_outside_private_network" url:"installable_outside_private_network,key"` // whether this plan is installable to a Common Runtime app
+ Name string `json:"name" url:"name,key"` // unique name of this plan
+ Price struct {
+ Cents int `json:"cents" url:"cents,key"` // price in cents per unit of plan
+ Unit string `json:"unit" url:"unit,key"` // unit of price for plan
+ } `json:"price" url:"price,key"` // price
+ SpaceDefault bool `json:"space_default" url:"space_default,key"` // whether this plan is the default for apps in Private Spaces
+ State string `json:"state" url:"state,key"` // release status for plan
+ UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when plan was updated
+ Visible bool `json:"visible" url:"visible,key"` // whether this plan is publicly visible
}

// List existing plans.
-func (s *Service) PlanList(addonServiceIdentity string, lr *ListRange) ([]*Plan, error) {
- var planList []*Plan
- return planList, s.Get(&planList, fmt.Sprintf("/addon-services/%v/plans", addonServiceIdentity), lr)
+func (s *Service) PlanList(ctx context.Context, addOnServiceIdentity string, lr *ListRange) (PlanListResult, error) {
+ var plan PlanListResult
+ return plan, s.Get(ctx, &plan, fmt.Sprintf("/addon-services/%v/plans", addOnServiceIdentity), nil, lr)
}

// Rate Limit represents the number of request tokens each account
// holds. Requests to this endpoint do not count towards the rate limit.
type RateLimit struct {
- Remaining int `json:"remaining"` // allowed requests remaining in current interval
+ Remaining int `json:"remaining" url:"remaining,key"` // allowed requests remaining in current interval
+}
+type RateLimitInfoResult struct {
+ Remaining int `json:"remaining" url:"remaining,key"` // allowed requests remaining in current interval
}

// Info for rate limits.
-func (s *Service) RateLimitInfo() (*RateLimit, error) {
- var rateLimit RateLimit
- return &rateLimit, s.Get(&rateLimit, fmt.Sprintf("/account/rate-limits"), nil)
+func (s *Service) RateLimitInfo(ctx context.Context) (*RateLimitInfoResult, error) {
+ var rateLimit RateLimitInfoResult
+ return &rateLimit, s.Get(ctx, &rateLimit, fmt.Sprintf("/account/rate-limits"), nil, nil)
}

// A region represents a geographic location in which your application
// may run.
type Region struct {
- CreatedAt time.Time `json:"created_at"` // when region was created
- Description string `json:"description"` // description of region
- ID string `json:"id"` // unique identifier of region
- Name string `json:"name"` // unique name of region
- UpdatedAt time.Time `json:"updated_at"` // when region was updated
+ Country string `json:"country" url:"country,key"` // country where the region exists
+ CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when region was created
+ Description string `json:"description" url:"description,key"` // description of region
+ ID string `json:"id" url:"id,key"` // unique identifier of region
+ Locale string `json:"locale" url:"locale,key"` // area in the country where the region exists
+ Name string `json:"name" url:"name,key"` // unique name of region
+ PrivateCapable bool `json:"private_capable" url:"private_capable,key"` // whether or not region is available for creating a Private Space
+ Provider struct {
+ Name string `json:"name" url:"name,key"` // name of provider
+ Region string `json:"region" url:"region,key"` // region name used by provider
+ } `json:"provider" url:"provider,key"` // provider of underlying substrate
+ UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when region was updated
+}
+type RegionInfoResult struct {
+ Country string `json:"country" url:"country,key"` // country where the region exists
+ CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when region was created
+ Description string `json:"description" url:"description,key"` // description of region
+ ID string `json:"id" url:"id,key"` // unique identifier of region
+ Locale string `json:"locale" url:"locale,key"` // area in the country where the region exists
+ Name string `json:"name" url:"name,key"` // unique name of region
+ PrivateCapable bool `json:"private_capable" url:"private_capable,key"` // whether or not region is available for creating a Private Space
+ Provider struct {
+ Name string `json:"name" url:"name,key"` // name of provider
+ Region string `json:"region" url:"region,key"` // region name used by provider
+ } `json:"provider" url:"provider,key"` // provider of underlying substrate
+ UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when region was updated
}

// Info for existing region.
-func (s *Service) RegionInfo(regionIdentity string) (*Region, error) {
- var region Region
- return &region, s.Get(&region, fmt.Sprintf("/regions/%v", regionIdentity), nil)
+func (s *Service) RegionInfo(ctx context.Context, regionIdentity string) (*RegionInfoResult, error) {
+ var region RegionInfoResult
+ return &region, s.Get(ctx, &region, fmt.Sprintf("/regions/%v", regionIdentity), nil, nil)
+}
+
+type RegionListResult []struct {
+ Country string `json:"country" url:"country,key"` // country where the region exists
+ CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when region was created
+ Description string `json:"description" url:"description,key"` // description of region
+ ID string `json:"id" url:"id,key"` // unique identifier of region
+ Locale string `json:"locale" url:"locale,key"` // area in the country where the region exists
+ Name string `json:"name" url:"name,key"` // unique name of region
+ PrivateCapable bool `json:"private_capable" url:"private_capable,key"` // whether or not region is available for creating a Private Space
+ Provider struct {
+ Name string `json:"name" url:"name,key"` // name of provider
+ Region string `json:"region" url:"region,key"` // region name used by provider
+ } `json:"provider" url:"provider,key"` // provider of underlying substrate
+ UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when region was updated
}

// List existing regions.
-func (s *Service) RegionList(lr *ListRange) ([]*Region, error) {
- var regionList []*Region
- return regionList, s.Get(&regionList, fmt.Sprintf("/regions"), lr)
+func (s *Service) RegionList(ctx context.Context, lr *ListRange) (RegionListResult, error) {
+ var region RegionListResult
+ return region, s.Get(ctx, &region, fmt.Sprintf("/regions"), nil, lr)
}

// A release represents a combination of code, config vars and add-ons
// for an app on Heroku.
type Release struct { - CreatedAt time.Time `json:"created_at"` // when release was created - Description string `json:"description"` // description of changes in this release - ID string `json:"id"` // unique identifier of release + AddonPlanNames []string `json:"addon_plan_names" url:"addon_plan_names,key"` // add-on plans installed on the app for this release + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + Name string `json:"name" url:"name,key"` // unique name of app + } `json:"app" url:"app,key"` // app involved in the release + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when release was created + Current bool `json:"current" url:"current,key"` // indicates this release as being the current one for the app + Description string `json:"description" url:"description,key"` // description of changes in this release + ID string `json:"id" url:"id,key"` // unique identifier of release Slug *struct { - ID string `json:"id"` // unique identifier of slug - } `json:"slug"` // slug running in this release - UpdatedAt time.Time `json:"updated_at"` // when release was updated + ID string `json:"id" url:"id,key"` // unique identifier of slug + } `json:"slug" url:"slug,key"` // slug running in this release + Status string `json:"status" url:"status,key"` // current status of the release + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when release was updated User struct { - Email string `json:"email"` // unique email address of account - ID string `json:"id"` // unique identifier of an account - } `json:"user"` // user that created the release - Version int `json:"version"` // unique version assigned to the release + Email string `json:"email" url:"email,key"` // unique email address of account + ID string `json:"id" url:"id,key"` // unique identifier of an account + } `json:"user" url:"user,key"` // user that created the release + Version int `json:"version" url:"version,key"` // unique version assigned to the release +} +type ReleaseInfoResult struct { + AddonPlanNames []string `json:"addon_plan_names" url:"addon_plan_names,key"` // add-on plans installed on the app for this release + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + Name string `json:"name" url:"name,key"` // unique name of app + } `json:"app" url:"app,key"` // app involved in the release + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when release was created + Current bool `json:"current" url:"current,key"` // indicates this release as being the current one for the app + Description string `json:"description" url:"description,key"` // description of changes in this release + ID string `json:"id" url:"id,key"` // unique identifier of release + Slug *struct { + ID string `json:"id" url:"id,key"` // unique identifier of slug + } `json:"slug" url:"slug,key"` // slug running in this release + Status string `json:"status" url:"status,key"` // current status of the release + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when release was updated + User struct { + Email string `json:"email" url:"email,key"` // unique email address of account + ID string `json:"id" url:"id,key"` // unique identifier of an account + } `json:"user" url:"user,key"` // user that created the release + Version int `json:"version" url:"version,key"` // unique version assigned to the release } // Info for existing release. 
-func (s *Service) ReleaseInfo(appIdentity string, releaseIdentity string) (*Release, error) { - var release Release - return &release, s.Get(&release, fmt.Sprintf("/apps/%v/releases/%v", appIdentity, releaseIdentity), nil) +func (s *Service) ReleaseInfo(ctx context.Context, appIdentity string, releaseIdentity string) (*ReleaseInfoResult, error) { + var release ReleaseInfoResult + return &release, s.Get(ctx, &release, fmt.Sprintf("/apps/%v/releases/%v", appIdentity, releaseIdentity), nil, nil) +} + +type ReleaseListResult []struct { + AddonPlanNames []string `json:"addon_plan_names" url:"addon_plan_names,key"` // add-on plans installed on the app for this release + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + Name string `json:"name" url:"name,key"` // unique name of app + } `json:"app" url:"app,key"` // app involved in the release + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when release was created + Current bool `json:"current" url:"current,key"` // indicates this release as being the current one for the app + Description string `json:"description" url:"description,key"` // description of changes in this release + ID string `json:"id" url:"id,key"` // unique identifier of release + Slug *struct { + ID string `json:"id" url:"id,key"` // unique identifier of slug + } `json:"slug" url:"slug,key"` // slug running in this release + Status string `json:"status" url:"status,key"` // current status of the release + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when release was updated + User struct { + Email string `json:"email" url:"email,key"` // unique email address of account + ID string `json:"id" url:"id,key"` // unique identifier of an account + } `json:"user" url:"user,key"` // user that created the release + Version int `json:"version" url:"version,key"` // unique version assigned to the release } // List existing releases. 
-func (s *Service) ReleaseList(appIdentity string, lr *ListRange) ([]*Release, error) { - var releaseList []*Release - return releaseList, s.Get(&releaseList, fmt.Sprintf("/apps/%v/releases", appIdentity), lr) +func (s *Service) ReleaseList(ctx context.Context, appIdentity string, lr *ListRange) (ReleaseListResult, error) { + var release ReleaseListResult + return release, s.Get(ctx, &release, fmt.Sprintf("/apps/%v/releases", appIdentity), nil, lr) } type ReleaseCreateOpts struct { - Description *string `json:"description,omitempty"` // description of changes in this release - Slug string `json:"slug"` // unique identifier of slug + Description *string `json:"description,omitempty" url:"description,omitempty,key"` // description of changes in this release + Slug string `json:"slug" url:"slug,key"` // unique identifier of slug +} +type ReleaseCreateResult struct { + AddonPlanNames []string `json:"addon_plan_names" url:"addon_plan_names,key"` // add-on plans installed on the app for this release + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + Name string `json:"name" url:"name,key"` // unique name of app + } `json:"app" url:"app,key"` // app involved in the release + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when release was created + Current bool `json:"current" url:"current,key"` // indicates this release as being the current one for the app + Description string `json:"description" url:"description,key"` // description of changes in this release + ID string `json:"id" url:"id,key"` // unique identifier of release + Slug *struct { + ID string `json:"id" url:"id,key"` // unique identifier of slug + } `json:"slug" url:"slug,key"` // slug running in this release + Status string `json:"status" url:"status,key"` // current status of the release + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when release was updated + User struct { + Email string `json:"email" url:"email,key"` // unique email address of account + ID string `json:"id" url:"id,key"` // unique identifier of an account + } `json:"user" url:"user,key"` // user that created the release + Version int `json:"version" url:"version,key"` // unique version assigned to the release } -// Create new release. The API cannot be used to create releases on -// Bamboo apps. -func (s *Service) ReleaseCreate(appIdentity string, o struct { - Description *string `json:"description,omitempty"` // description of changes in this release - Slug string `json:"slug"` // unique identifier of slug -}) (*Release, error) { - var release Release - return &release, s.Post(&release, fmt.Sprintf("/apps/%v/releases", appIdentity), o) +// Create new release. 
+func (s *Service) ReleaseCreate(ctx context.Context, appIdentity string, o ReleaseCreateOpts) (*ReleaseCreateResult, error) { + var release ReleaseCreateResult + return &release, s.Post(ctx, &release, fmt.Sprintf("/apps/%v/releases", appIdentity), o) } type ReleaseRollbackOpts struct { - Release string `json:"release"` // unique identifier of release + Release string `json:"release" url:"release,key"` // unique identifier of release +} +type ReleaseRollbackResult struct { + AddonPlanNames []string `json:"addon_plan_names" url:"addon_plan_names,key"` // add-on plans installed on the app for this release + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + Name string `json:"name" url:"name,key"` // unique name of app + } `json:"app" url:"app,key"` // app involved in the release + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when release was created + Current bool `json:"current" url:"current,key"` // indicates this release as being the current one for the app + Description string `json:"description" url:"description,key"` // description of changes in this release + ID string `json:"id" url:"id,key"` // unique identifier of release + Slug *struct { + ID string `json:"id" url:"id,key"` // unique identifier of slug + } `json:"slug" url:"slug,key"` // slug running in this release + Status string `json:"status" url:"status,key"` // current status of the release + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when release was updated + User struct { + Email string `json:"email" url:"email,key"` // unique email address of account + ID string `json:"id" url:"id,key"` // unique identifier of an account + } `json:"user" url:"user,key"` // user that created the release + Version int `json:"version" url:"version,key"` // unique version assigned to the release } // Rollback to an existing release. -func (s *Service) ReleaseRollback(appIdentity string, o struct { - Release string `json:"release"` // unique identifier of release -}) (*Release, error) { - var release Release - return &release, s.Post(&release, fmt.Sprintf("/apps/%v/releases", appIdentity), o) +func (s *Service) ReleaseRollback(ctx context.Context, appIdentity string, o ReleaseRollbackOpts) (*ReleaseRollbackResult, error) { + var release ReleaseRollbackResult + return &release, s.Post(ctx, &release, fmt.Sprintf("/apps/%v/releases", appIdentity), o) } // A slug is a snapshot of your application code that is ready to run on // the platform. 
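The slug endpoints below pair with `ReleaseCreate` above to deploy prebuilt code: register a slug, upload the tarball to the returned blob URL, then release it. A sketch of that flow under the same assumptions as the earlier examples; the upload details follow the Dev Center article cited in the `SlugCreate` comment, `net/http` and `os` are assumed imported, and the process type command and tarball path are placeholders:

```go
func deploySlug(ctx context.Context, svc *heroku.Service, app, tarball string) error {
	// Register the slug and its process types.
	slug, err := svc.SlugCreate(ctx, app, heroku.SlugCreateOpts{
		ProcessTypes: map[string]string{"web": "bin/web"}, // placeholder command
	})
	if err != nil {
		return err
	}

	// PUT the gzipped tarball to the pre-signed blob URL.
	f, err := os.Open(tarball)
	if err != nil {
		return err
	}
	defer f.Close()
	fi, err := f.Stat()
	if err != nil {
		return err
	}
	req, err := http.NewRequest(http.MethodPut, slug.Blob.URL, f)
	if err != nil {
		return err
	}
	req.ContentLength = fi.Size()
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	resp.Body.Close()
	if resp.StatusCode/100 != 2 {
		return fmt.Errorf("slug upload failed: %s", resp.Status)
	}

	// Release the uploaded slug with ReleaseCreate (defined above).
	_, err = svc.ReleaseCreate(ctx, app, heroku.ReleaseCreateOpts{Slug: slug.ID})
	return err
}
```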
type Slug struct {
 Blob struct {
- Method string `json:"method"` // method to be used to interact with the slug blob
- URL string `json:"url"` // URL to interact with the slug blob
- } `json:"blob"` // pointer to the url where clients can fetch or store the actual
+ Method string `json:"method" url:"method,key"` // method to be used to interact with the slug blob
+ URL string `json:"url" url:"url,key"` // URL to interact with the slug blob
+ } `json:"blob" url:"blob,key"` // pointer to the url where clients can fetch or store the actual
 // release binary
- BuildpackProvidedDescription *string `json:"buildpack_provided_description"` // description from buildpack of slug
- Commit *string `json:"commit"` // identification of the code with your version control system (eg: SHA
+ BuildpackProvidedDescription *string `json:"buildpack_provided_description" url:"buildpack_provided_description,key"` // description from buildpack of slug
+ Checksum *string `json:"checksum" url:"checksum,key"` // an optional checksum of the slug for verifying its integrity
+ Commit *string `json:"commit" url:"commit,key"` // identification of the code with your version control system (eg: SHA
 // of the git HEAD)
- CreatedAt time.Time `json:"created_at"` // when slug was created
- ID string `json:"id"` // unique identifier of slug
- ProcessTypes map[string]string `json:"process_types"` // hash mapping process type names to their respective command
- Size *int `json:"size"` // size of slug, in bytes
- UpdatedAt time.Time `json:"updated_at"` // when slug was updated
+ CommitDescription *string `json:"commit_description" url:"commit_description,key"` // an optional description of the provided commit
+ CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when slug was created
+ ID string `json:"id" url:"id,key"` // unique identifier of slug
+ ProcessTypes map[string]string `json:"process_types" url:"process_types,key"` // hash mapping process type names to their respective command
+ Size *int `json:"size" url:"size,key"` // size of slug, in bytes
+ Stack struct {
+ ID string `json:"id" url:"id,key"` // unique identifier of stack
+ Name string `json:"name" url:"name,key"` // unique name of stack
+ } `json:"stack" url:"stack,key"` // identity of slug stack
+ UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when slug was updated
+}
+type SlugInfoResult struct {
+ Blob struct {
+ Method string `json:"method" url:"method,key"` // method to be used to interact with the slug blob
+ URL string `json:"url" url:"url,key"` // URL to interact with the slug blob
+ } `json:"blob" url:"blob,key"` // pointer to the url where clients can fetch or store the actual
+ // release binary
+ BuildpackProvidedDescription *string `json:"buildpack_provided_description" url:"buildpack_provided_description,key"` // description from buildpack of slug
+ Checksum *string `json:"checksum" url:"checksum,key"` // an optional checksum of the slug for verifying its integrity
+ Commit *string `json:"commit" url:"commit,key"` // identification of the code with your version control system (eg: SHA
+ // of the git HEAD)
+ CommitDescription *string `json:"commit_description" url:"commit_description,key"` // an optional description of the provided commit
+ CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when slug was created
+ ID string `json:"id" url:"id,key"` // unique identifier of slug
+ ProcessTypes map[string]string `json:"process_types" url:"process_types,key"` // hash mapping process type names to their respective command
+ Size *int `json:"size" url:"size,key"` // size of slug, in bytes
+ Stack struct {
+ ID string `json:"id" url:"id,key"` // unique identifier of stack
+ Name string `json:"name" url:"name,key"` // unique name of stack
+ } `json:"stack" url:"stack,key"` // identity of slug stack
+ UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when slug was updated
}

// Info for existing slug.
-func (s *Service) SlugInfo(appIdentity string, slugIdentity string) (*Slug, error) {
- var slug Slug
- return &slug, s.Get(&slug, fmt.Sprintf("/apps/%v/slugs/%v", appIdentity, slugIdentity), nil)
+func (s *Service) SlugInfo(ctx context.Context, appIdentity string, slugIdentity string) (*SlugInfoResult, error) {
+ var slug SlugInfoResult
+ return &slug, s.Get(ctx, &slug, fmt.Sprintf("/apps/%v/slugs/%v", appIdentity, slugIdentity), nil, nil)
}

type SlugCreateOpts struct {
- BuildpackProvidedDescription *string `json:"buildpack_provided_description,omitempty"` // description from buildpack of slug
- Commit *string `json:"commit,omitempty"` // identification of the code with your version control system (eg: SHA
+ BuildpackProvidedDescription *string `json:"buildpack_provided_description,omitempty" url:"buildpack_provided_description,omitempty,key"` // description from buildpack of slug
+ Checksum *string `json:"checksum,omitempty" url:"checksum,omitempty,key"` // an optional checksum of the slug for verifying its integrity
+ Commit *string `json:"commit,omitempty" url:"commit,omitempty,key"` // identification of the code with your version control system (eg: SHA
 // of the git HEAD)
- ProcessTypes map[string]string `json:"process_types"` // hash mapping process type names to their respective command
+ CommitDescription *string `json:"commit_description,omitempty" url:"commit_description,omitempty,key"` // an optional description of the provided commit
+ ProcessTypes map[string]string `json:"process_types" url:"process_types,key"` // hash mapping process type names to their respective command
+ Stack *string `json:"stack,omitempty" url:"stack,omitempty,key"` // unique name of stack
+}
+type SlugCreateResult struct {
+ Blob struct {
+ Method string `json:"method" url:"method,key"` // method to be used to interact with the slug blob
+ URL string `json:"url" url:"url,key"` // URL to interact with the slug blob
+ } `json:"blob" url:"blob,key"` // pointer to the url where clients can fetch or store the actual
+ // release binary
+ BuildpackProvidedDescription *string `json:"buildpack_provided_description" url:"buildpack_provided_description,key"` // description from buildpack of slug
+ Checksum *string `json:"checksum" url:"checksum,key"` // an optional checksum of the slug for verifying its integrity
+ Commit *string `json:"commit" url:"commit,key"` // identification of the code with your version control system (eg: SHA
+ // of the git HEAD)
+ CommitDescription *string `json:"commit_description" url:"commit_description,key"` // an optional description of the provided commit
+ CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when slug was created
+ ID string `json:"id" url:"id,key"` // unique identifier of slug
+ ProcessTypes map[string]string `json:"process_types" url:"process_types,key"` // hash mapping process type names to their respective command
+ Size *int `json:"size" url:"size,key"` // size of slug, in bytes
+ Stack struct {
+ ID string `json:"id" url:"id,key"` // unique identifier of stack
+ Name string `json:"name" url:"name,key"` // unique name of stack
+ } `json:"stack" url:"stack,key"` // identity of slug stack
+ UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when slug was updated
}

// Create a new slug. For more information please refer to [Deploying
// Slugs using the Platform
// API](https://devcenter.heroku.com/articles/platform-api-deploying-slug
-// s?preview=1).
-func (s *Service) SlugCreate(appIdentity string, o struct {
- BuildpackProvidedDescription *string `json:"buildpack_provided_description,omitempty"` // description from buildpack of slug
- Commit *string `json:"commit,omitempty"` // identification of the code with your version control system (eg: SHA
- // of the git HEAD)
- ProcessTypes map[string]string `json:"process_types"` // hash mapping process type names to their respective command
-}) (*Slug, error) {
- var slug Slug
- return &slug, s.Post(&slug, fmt.Sprintf("/apps/%v/slugs", appIdentity), o)
+// s).
+func (s *Service) SlugCreate(ctx context.Context, appIdentity string, o SlugCreateOpts) (*SlugCreateResult, error) {
+ var slug SlugCreateResult
+ return &slug, s.Post(ctx, &slug, fmt.Sprintf("/apps/%v/slugs", appIdentity), o)
+}
+
+// SMS numbers are used for recovery on accounts with two-factor
+// authentication enabled.
+type SmsNumber struct {
+ SmsNumber *string `json:"sms_number" url:"sms_number,key"` // SMS number of account
+}
+type SmsNumberSMSNumberResult struct {
+ SmsNumber *string `json:"sms_number" url:"sms_number,key"` // SMS number of account
+}
+
+// Recover an account using an SMS recovery code
+func (s *Service) SmsNumberSMSNumber(ctx context.Context, accountIdentity string) (*SmsNumberSMSNumberResult, error) {
+ var smsNumber SmsNumberSMSNumberResult
+ return &smsNumber, s.Get(ctx, &smsNumber, fmt.Sprintf("/users/%v/sms-number", accountIdentity), nil, nil)
+}
+
+type SmsNumberRecoverResult struct {
+ SmsNumber *string `json:"sms_number" url:"sms_number,key"` // SMS number of account
+}
+
+// Recover an account using an SMS recovery code
+func (s *Service) SmsNumberRecover(ctx context.Context, accountIdentity string) (*SmsNumberRecoverResult, error) {
+ var smsNumber SmsNumberRecoverResult
+ return &smsNumber, s.Post(ctx, &smsNumber, fmt.Sprintf("/users/%v/sms-number/actions/recover", accountIdentity), nil)
+}
+
+type SmsNumberConfirmResult struct {
+ SmsNumber *string `json:"sms_number" url:"sms_number,key"` // SMS number of account
+}
+
+// Confirm an SMS number change with a confirmation code
+func (s *Service) SmsNumberConfirm(ctx context.Context, accountIdentity string) (*SmsNumberConfirmResult, error) {
+ var smsNumber SmsNumberConfirmResult
+ return &smsNumber, s.Post(ctx, &smsNumber, fmt.Sprintf("/users/%v/sms-number/actions/confirm", accountIdentity), nil)
+}
+
+// SNI Endpoint is a public address serving a custom SSL cert for HTTPS
+// traffic, using the SNI TLS extension, to a Heroku app.
+type SniEndpoint struct { + CertificateChain string `json:"certificate_chain" url:"certificate_chain,key"` // raw contents of the public certificate chain (eg: .crt or .pem file) + CName string `json:"cname" url:"cname,key"` // deprecated; refer to GET /apps/:id/domains for valid CNAMEs for this + // app + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when endpoint was created + ID string `json:"id" url:"id,key"` // unique identifier of this SNI endpoint + Name string `json:"name" url:"name,key"` // unique name for SNI endpoint + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when SNI endpoint was updated +} +type SniEndpointCreateOpts struct { + CertificateChain string `json:"certificate_chain" url:"certificate_chain,key"` // raw contents of the public certificate chain (eg: .crt or .pem file) + PrivateKey string `json:"private_key" url:"private_key,key"` // contents of the private key (eg .key file) +} +type SniEndpointCreateResult struct { + CertificateChain string `json:"certificate_chain" url:"certificate_chain,key"` // raw contents of the public certificate chain (eg: .crt or .pem file) + CName string `json:"cname" url:"cname,key"` // deprecated; refer to GET /apps/:id/domains for valid CNAMEs for this + // app + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when endpoint was created + ID string `json:"id" url:"id,key"` // unique identifier of this SNI endpoint + Name string `json:"name" url:"name,key"` // unique name for SNI endpoint + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when SNI endpoint was updated +} + +// Create a new SNI endpoint. +func (s *Service) SniEndpointCreate(ctx context.Context, appIdentity string, o SniEndpointCreateOpts) (*SniEndpointCreateResult, error) { + var sniEndpoint SniEndpointCreateResult + return &sniEndpoint, s.Post(ctx, &sniEndpoint, fmt.Sprintf("/apps/%v/sni-endpoints", appIdentity), o) +} + +type SniEndpointDeleteResult struct { + CertificateChain string `json:"certificate_chain" url:"certificate_chain,key"` // raw contents of the public certificate chain (eg: .crt or .pem file) + CName string `json:"cname" url:"cname,key"` // deprecated; refer to GET /apps/:id/domains for valid CNAMEs for this + // app + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when endpoint was created + ID string `json:"id" url:"id,key"` // unique identifier of this SNI endpoint + Name string `json:"name" url:"name,key"` // unique name for SNI endpoint + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when SNI endpoint was updated +} + +// Delete existing SNI endpoint. 
+func (s *Service) SniEndpointDelete(ctx context.Context, appIdentity string, sniEndpointIdentity string) (*SniEndpointDeleteResult, error) { + var sniEndpoint SniEndpointDeleteResult + return &sniEndpoint, s.Delete(ctx, &sniEndpoint, fmt.Sprintf("/apps/%v/sni-endpoints/%v", appIdentity, sniEndpointIdentity)) +} + +type SniEndpointInfoResult struct { + CertificateChain string `json:"certificate_chain" url:"certificate_chain,key"` // raw contents of the public certificate chain (eg: .crt or .pem file) + CName string `json:"cname" url:"cname,key"` // deprecated; refer to GET /apps/:id/domains for valid CNAMEs for this + // app + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when endpoint was created + ID string `json:"id" url:"id,key"` // unique identifier of this SNI endpoint + Name string `json:"name" url:"name,key"` // unique name for SNI endpoint + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when SNI endpoint was updated +} + +// Info for existing SNI endpoint. +func (s *Service) SniEndpointInfo(ctx context.Context, appIdentity string, sniEndpointIdentity string) (*SniEndpointInfoResult, error) { + var sniEndpoint SniEndpointInfoResult + return &sniEndpoint, s.Get(ctx, &sniEndpoint, fmt.Sprintf("/apps/%v/sni-endpoints/%v", appIdentity, sniEndpointIdentity), nil, nil) +} + +type SniEndpointListResult []struct { + CertificateChain string `json:"certificate_chain" url:"certificate_chain,key"` // raw contents of the public certificate chain (eg: .crt or .pem file) + CName string `json:"cname" url:"cname,key"` // deprecated; refer to GET /apps/:id/domains for valid CNAMEs for this + // app + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when endpoint was created + ID string `json:"id" url:"id,key"` // unique identifier of this SNI endpoint + Name string `json:"name" url:"name,key"` // unique name for SNI endpoint + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when SNI endpoint was updated +} + +// List existing SNI endpoints. +func (s *Service) SniEndpointList(ctx context.Context, appIdentity string, lr *ListRange) (SniEndpointListResult, error) { + var sniEndpoint SniEndpointListResult + return sniEndpoint, s.Get(ctx, &sniEndpoint, fmt.Sprintf("/apps/%v/sni-endpoints", appIdentity), nil, lr) +} + +type SniEndpointUpdateOpts struct { + CertificateChain string `json:"certificate_chain" url:"certificate_chain,key"` // raw contents of the public certificate chain (eg: .crt or .pem file) + PrivateKey string `json:"private_key" url:"private_key,key"` // contents of the private key (eg .key file) +} +type SniEndpointUpdateResult struct { + CertificateChain string `json:"certificate_chain" url:"certificate_chain,key"` // raw contents of the public certificate chain (eg: .crt or .pem file) + CName string `json:"cname" url:"cname,key"` // deprecated; refer to GET /apps/:id/domains for valid CNAMEs for this + // app + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when endpoint was created + ID string `json:"id" url:"id,key"` // unique identifier of this SNI endpoint + Name string `json:"name" url:"name,key"` // unique name for SNI endpoint + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when SNI endpoint was updated +} + +// Update an existing SNI endpoint. 
+func (s *Service) SniEndpointUpdate(ctx context.Context, appIdentity string, sniEndpointIdentity string, o SniEndpointUpdateOpts) (*SniEndpointUpdateResult, error) {
+	var sniEndpoint SniEndpointUpdateResult
+	return &sniEndpoint, s.Patch(ctx, &sniEndpoint, fmt.Sprintf("/apps/%v/sni-endpoints/%v", appIdentity, sniEndpointIdentity), o)
+}
+
+// A source is a location for uploading and downloading an application's
+// source code.
+type Source struct {
+	SourceBlob struct {
+		GetURL string `json:"get_url" url:"get_url,key"` // URL to download the source
+		PutURL string `json:"put_url" url:"put_url,key"` // URL to upload the source
+	} `json:"source_blob" url:"source_blob,key"` // pointer to the URL where clients can fetch or store the source
+}
+type SourceCreateResult struct {
+	SourceBlob struct {
+		GetURL string `json:"get_url" url:"get_url,key"` // URL to download the source
+		PutURL string `json:"put_url" url:"put_url,key"` // URL to upload the source
+	} `json:"source_blob" url:"source_blob,key"` // pointer to the URL where clients can fetch or store the source
+}
+
+// Create URLs for uploading and downloading source.
+func (s *Service) SourceCreate(ctx context.Context) (*SourceCreateResult, error) {
+	var source SourceCreateResult
+	return &source, s.Post(ctx, &source, fmt.Sprintf("/sources"), nil)
+}
+
+type SourceCreateDeprecatedResult struct {
+	SourceBlob struct {
+		GetURL string `json:"get_url" url:"get_url,key"` // URL to download the source
+		PutURL string `json:"put_url" url:"put_url,key"` // URL to upload the source
+	} `json:"source_blob" url:"source_blob,key"` // pointer to the URL where clients can fetch or store the source
+}
+
+// Create URLs for uploading and downloading source. Deprecated in favor
+// of `POST /sources`
+func (s *Service) SourceCreateDeprecated(ctx context.Context, appIdentity string) (*SourceCreateDeprecatedResult, error) {
+	var source SourceCreateDeprecatedResult
+	return &source, s.Post(ctx, &source, fmt.Sprintf("/apps/%v/sources", appIdentity), nil)
+}
+
+// A space is an isolated, highly available, secure app execution
+// environment, running in the modern VPC substrate.
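A hedged sketch of creating one (illustrative only; newShieldSpace is an invented helper, and whether omitting Region falls back to a server-side default is an assumption):

func newShieldSpace(ctx context.Context, s *Service, name, org string) (*SpaceCreateResult, error) {
	shield := true // optional fields are pointers; take an address to set them
	return s.SpaceCreate(ctx, SpaceCreateOpts{
		Name:         name, // required
		Organization: org,  // required
		Shield:       &shield,
		// Region omitted; assumed to default server-side
	})
}
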
+type Space struct { + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when space was created + ID string `json:"id" url:"id,key"` // unique identifier of space + Name string `json:"name" url:"name,key"` // unique name of space + Organization struct { + Name string `json:"name" url:"name,key"` // unique name of organization + } `json:"organization" url:"organization,key"` // organization that owns this space + Region struct { + ID string `json:"id" url:"id,key"` // unique identifier of region + Name string `json:"name" url:"name,key"` // unique name of region + } `json:"region" url:"region,key"` // identity of space region + Shield bool `json:"shield" url:"shield,key"` // true if this space has shield enabled + State string `json:"state" url:"state,key"` // availability of this space + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when space was updated +} +type SpaceListResult []struct { + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when space was created + ID string `json:"id" url:"id,key"` // unique identifier of space + Name string `json:"name" url:"name,key"` // unique name of space + Organization struct { + Name string `json:"name" url:"name,key"` // unique name of organization + } `json:"organization" url:"organization,key"` // organization that owns this space + Region struct { + ID string `json:"id" url:"id,key"` // unique identifier of region + Name string `json:"name" url:"name,key"` // unique name of region + } `json:"region" url:"region,key"` // identity of space region + Shield bool `json:"shield" url:"shield,key"` // true if this space has shield enabled + State string `json:"state" url:"state,key"` // availability of this space + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when space was updated +} + +// List existing spaces. +func (s *Service) SpaceList(ctx context.Context, lr *ListRange) (SpaceListResult, error) { + var space SpaceListResult + return space, s.Get(ctx, &space, fmt.Sprintf("/spaces"), nil, lr) +} + +type SpaceInfoResult struct { + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when space was created + ID string `json:"id" url:"id,key"` // unique identifier of space + Name string `json:"name" url:"name,key"` // unique name of space + Organization struct { + Name string `json:"name" url:"name,key"` // unique name of organization + } `json:"organization" url:"organization,key"` // organization that owns this space + Region struct { + ID string `json:"id" url:"id,key"` // unique identifier of region + Name string `json:"name" url:"name,key"` // unique name of region + } `json:"region" url:"region,key"` // identity of space region + Shield bool `json:"shield" url:"shield,key"` // true if this space has shield enabled + State string `json:"state" url:"state,key"` // availability of this space + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when space was updated +} + +// Info for existing space. 
+func (s *Service) SpaceInfo(ctx context.Context, spaceIdentity string) (*SpaceInfoResult, error) { + var space SpaceInfoResult + return &space, s.Get(ctx, &space, fmt.Sprintf("/spaces/%v", spaceIdentity), nil, nil) +} + +type SpaceUpdateOpts struct { + Name *string `json:"name,omitempty" url:"name,omitempty,key"` // unique name of space +} +type SpaceUpdateResult struct { + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when space was created + ID string `json:"id" url:"id,key"` // unique identifier of space + Name string `json:"name" url:"name,key"` // unique name of space + Organization struct { + Name string `json:"name" url:"name,key"` // unique name of organization + } `json:"organization" url:"organization,key"` // organization that owns this space + Region struct { + ID string `json:"id" url:"id,key"` // unique identifier of region + Name string `json:"name" url:"name,key"` // unique name of region + } `json:"region" url:"region,key"` // identity of space region + Shield bool `json:"shield" url:"shield,key"` // true if this space has shield enabled + State string `json:"state" url:"state,key"` // availability of this space + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when space was updated +} + +// Update an existing space. +func (s *Service) SpaceUpdate(ctx context.Context, spaceIdentity string, o SpaceUpdateOpts) (*SpaceUpdateResult, error) { + var space SpaceUpdateResult + return &space, s.Patch(ctx, &space, fmt.Sprintf("/spaces/%v", spaceIdentity), o) +} + +type SpaceDeleteResult struct { + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when space was created + ID string `json:"id" url:"id,key"` // unique identifier of space + Name string `json:"name" url:"name,key"` // unique name of space + Organization struct { + Name string `json:"name" url:"name,key"` // unique name of organization + } `json:"organization" url:"organization,key"` // organization that owns this space + Region struct { + ID string `json:"id" url:"id,key"` // unique identifier of region + Name string `json:"name" url:"name,key"` // unique name of region + } `json:"region" url:"region,key"` // identity of space region + Shield bool `json:"shield" url:"shield,key"` // true if this space has shield enabled + State string `json:"state" url:"state,key"` // availability of this space + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when space was updated +} + +// Delete an existing space. 
+func (s *Service) SpaceDelete(ctx context.Context, spaceIdentity string) (*SpaceDeleteResult, error) { + var space SpaceDeleteResult + return &space, s.Delete(ctx, &space, fmt.Sprintf("/spaces/%v", spaceIdentity)) +} + +type SpaceCreateOpts struct { + Name string `json:"name" url:"name,key"` // unique name of space + Organization string `json:"organization" url:"organization,key"` // unique name of organization + Region *string `json:"region,omitempty" url:"region,omitempty,key"` // unique identifier of region + Shield *bool `json:"shield,omitempty" url:"shield,omitempty,key"` // true if this space has shield enabled +} +type SpaceCreateResult struct { + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when space was created + ID string `json:"id" url:"id,key"` // unique identifier of space + Name string `json:"name" url:"name,key"` // unique name of space + Organization struct { + Name string `json:"name" url:"name,key"` // unique name of organization + } `json:"organization" url:"organization,key"` // organization that owns this space + Region struct { + ID string `json:"id" url:"id,key"` // unique identifier of region + Name string `json:"name" url:"name,key"` // unique name of region + } `json:"region" url:"region,key"` // identity of space region + Shield bool `json:"shield" url:"shield,key"` // true if this space has shield enabled + State string `json:"state" url:"state,key"` // availability of this space + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when space was updated +} + +// Create a new space. +func (s *Service) SpaceCreate(ctx context.Context, o SpaceCreateOpts) (*SpaceCreateResult, error) { + var space SpaceCreateResult + return &space, s.Post(ctx, &space, fmt.Sprintf("/spaces"), o) +} + +// Space access represents the permissions a particular user has on a +// particular space. 
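Because SpaceAppAccessUpdateOpts (defined below) takes a pointer to a slice of anonymous structs, building its payload is the least obvious call in this group; a sketch under those assumptions (the helper is invented, and any permission names passed in are examples only):

func grantSpacePermissions(ctx context.Context, s *Service, space, user string, names ...string) (*SpaceAppAccessUpdateResult, error) {
	// The literal must repeat the anonymous struct type, tags included,
	// so it matches the Permissions field of SpaceAppAccessUpdateOpts exactly.
	perms := make([]*struct {
		Name *string `json:"name,omitempty" url:"name,omitempty,key"`
	}, 0, len(names))
	for i := range names {
		perms = append(perms, &struct {
			Name *string `json:"name,omitempty" url:"name,omitempty,key"`
		}{Name: &names[i]})
	}
	return s.SpaceAppAccessUpdate(ctx, space, user, SpaceAppAccessUpdateOpts{Permissions: &perms})
}
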
+type SpaceAppAccess struct { + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when space was created + ID string `json:"id" url:"id,key"` // unique identifier of space + Permissions []struct { + Description string `json:"description" url:"description,key"` + Name string `json:"name" url:"name,key"` + } `json:"permissions" url:"permissions,key"` // user space permissions + Space struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + Name string `json:"name" url:"name,key"` // unique name of app + } `json:"space" url:"space,key"` // space user belongs to + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when space was updated + User struct { + Email string `json:"email" url:"email,key"` // unique email address of account + ID string `json:"id" url:"id,key"` // unique identifier of an account + } `json:"user" url:"user,key"` // identity of user account +} +type SpaceAppAccessInfoResult struct { + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when space was created + ID string `json:"id" url:"id,key"` // unique identifier of space + Permissions []struct { + Description string `json:"description" url:"description,key"` + Name string `json:"name" url:"name,key"` + } `json:"permissions" url:"permissions,key"` // user space permissions + Space struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + Name string `json:"name" url:"name,key"` // unique name of app + } `json:"space" url:"space,key"` // space user belongs to + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when space was updated + User struct { + Email string `json:"email" url:"email,key"` // unique email address of account + ID string `json:"id" url:"id,key"` // unique identifier of an account + } `json:"user" url:"user,key"` // identity of user account +} + +// List permissions for a given user on a given space. +func (s *Service) SpaceAppAccessInfo(ctx context.Context, spaceIdentity string, accountIdentity string) (*SpaceAppAccessInfoResult, error) { + var spaceAppAccess SpaceAppAccessInfoResult + return &spaceAppAccess, s.Get(ctx, &spaceAppAccess, fmt.Sprintf("/spaces/%v/members/%v", spaceIdentity, accountIdentity), nil, nil) +} + +type SpaceAppAccessUpdateOpts struct { + Permissions *[]*struct { + Name *string `json:"name,omitempty" url:"name,omitempty,key"` + } `json:"permissions,omitempty" url:"permissions,omitempty,key"` +} +type SpaceAppAccessUpdateResult struct { + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when space was created + ID string `json:"id" url:"id,key"` // unique identifier of space + Permissions []struct { + Description string `json:"description" url:"description,key"` + Name string `json:"name" url:"name,key"` + } `json:"permissions" url:"permissions,key"` // user space permissions + Space struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + Name string `json:"name" url:"name,key"` // unique name of app + } `json:"space" url:"space,key"` // space user belongs to + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when space was updated + User struct { + Email string `json:"email" url:"email,key"` // unique email address of account + ID string `json:"id" url:"id,key"` // unique identifier of an account + } `json:"user" url:"user,key"` // identity of user account +} + +// Update an existing user's set of permissions on a space. 
+func (s *Service) SpaceAppAccessUpdate(ctx context.Context, spaceIdentity string, accountIdentity string, o SpaceAppAccessUpdateOpts) (*SpaceAppAccessUpdateResult, error) { + var spaceAppAccess SpaceAppAccessUpdateResult + return &spaceAppAccess, s.Patch(ctx, &spaceAppAccess, fmt.Sprintf("/spaces/%v/members/%v", spaceIdentity, accountIdentity), o) +} + +type SpaceAppAccessListResult []struct { + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when space was created + ID string `json:"id" url:"id,key"` // unique identifier of space + Permissions []struct { + Description string `json:"description" url:"description,key"` + Name string `json:"name" url:"name,key"` + } `json:"permissions" url:"permissions,key"` // user space permissions + Space struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + Name string `json:"name" url:"name,key"` // unique name of app + } `json:"space" url:"space,key"` // space user belongs to + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when space was updated + User struct { + Email string `json:"email" url:"email,key"` // unique email address of account + ID string `json:"id" url:"id,key"` // unique identifier of an account + } `json:"user" url:"user,key"` // identity of user account +} + +// List all users and their permissions on a space. +func (s *Service) SpaceAppAccessList(ctx context.Context, spaceIdentity string, lr *ListRange) (SpaceAppAccessListResult, error) { + var spaceAppAccess SpaceAppAccessListResult + return spaceAppAccess, s.Get(ctx, &spaceAppAccess, fmt.Sprintf("/spaces/%v/members", spaceIdentity), nil, lr) +} + +// Network address translation (NAT) for stable outbound IP addresses +// from a space +type SpaceNat struct { + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when network address translation for a space was created + Sources []string `json:"sources" url:"sources,key"` // potential IPs from which outbound network traffic will originate + State string `json:"state" url:"state,key"` // availability of network address translation for a space + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when network address translation for a space was updated +} +type SpaceNatInfoResult struct { + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when network address translation for a space was created + Sources []string `json:"sources" url:"sources,key"` // potential IPs from which outbound network traffic will originate + State string `json:"state" url:"state,key"` // availability of network address translation for a space + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when network address translation for a space was updated +} + +// Current state of network address translation for a space. +func (s *Service) SpaceNatInfo(ctx context.Context, spaceIdentity string) (*SpaceNatInfoResult, error) { + var spaceNat SpaceNatInfoResult + return &spaceNat, s.Get(ctx, &spaceNat, fmt.Sprintf("/spaces/%v/nat", spaceIdentity), nil, nil) } // [SSL Endpoint](https://devcenter.heroku.com/articles/ssl-endpoint) is // a public address serving custom SSL cert for HTTPS traffic to a -// Heroku app. Note that an app must have the `ssl:endpoint` addon +// Heroku app. Note that an app must have the `ssl:endpoint` add-on // installed before it can provision an SSL Endpoint using these APIs. 
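The shape of these calls mirrors the SNI endpoint group above; as an illustrative sketch (the helper name is invented), uploading a certificate with preprocessing enabled might look like:

func addSSLEndpoint(ctx context.Context, s *Service, app, chainPEM, keyPEM string) (*SSLEndpointCreateResult, error) {
	preprocess := true // let Heroku add/strip intermediaries, per the field comment below
	return s.SSLEndpointCreate(ctx, app, SSLEndpointCreateOpts{
		CertificateChain: chainPEM,
		PrivateKey:       keyPEM,
		Preprocess:       &preprocess,
	})
}
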
type SSLEndpoint struct { - CertificateChain string `json:"certificate_chain"` // raw contents of the public certificate chain (eg: .crt or .pem file) - CName string `json:"cname"` // canonical name record, the address to point a domain at - CreatedAt time.Time `json:"created_at"` // when endpoint was created - ID string `json:"id"` // unique identifier of this SSL endpoint - Name string `json:"name"` // unique name for SSL endpoint - UpdatedAt time.Time `json:"updated_at"` // when endpoint was updated + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + Name string `json:"name" url:"name,key"` // unique name of app + } `json:"app" url:"app,key"` // application associated with this ssl-endpoint + CertificateChain string `json:"certificate_chain" url:"certificate_chain,key"` // raw contents of the public certificate chain (eg: .crt or .pem file) + CName string `json:"cname" url:"cname,key"` // canonical name record, the address to point a domain at + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when endpoint was created + ID string `json:"id" url:"id,key"` // unique identifier of this SSL endpoint + Name string `json:"name" url:"name,key"` // unique name for SSL endpoint + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when endpoint was updated } type SSLEndpointCreateOpts struct { - CertificateChain string `json:"certificate_chain"` // raw contents of the public certificate chain (eg: .crt or .pem file) - Preprocess *bool `json:"preprocess,omitempty"` // allow Heroku to modify an uploaded public certificate chain if deemed + CertificateChain string `json:"certificate_chain" url:"certificate_chain,key"` // raw contents of the public certificate chain (eg: .crt or .pem file) + Preprocess *bool `json:"preprocess,omitempty" url:"preprocess,omitempty,key"` // allow Heroku to modify an uploaded public certificate chain if deemed // advantageous by adding missing intermediaries, stripping unnecessary // ones, etc. - PrivateKey string `json:"private_key"` // contents of the private key (eg .key file) + PrivateKey string `json:"private_key" url:"private_key,key"` // contents of the private key (eg .key file) +} +type SSLEndpointCreateResult struct { + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + Name string `json:"name" url:"name,key"` // unique name of app + } `json:"app" url:"app,key"` // application associated with this ssl-endpoint + CertificateChain string `json:"certificate_chain" url:"certificate_chain,key"` // raw contents of the public certificate chain (eg: .crt or .pem file) + CName string `json:"cname" url:"cname,key"` // canonical name record, the address to point a domain at + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when endpoint was created + ID string `json:"id" url:"id,key"` // unique identifier of this SSL endpoint + Name string `json:"name" url:"name,key"` // unique name for SSL endpoint + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when endpoint was updated } // Create a new SSL endpoint. -func (s *Service) SSLEndpointCreate(appIdentity string, o struct { - CertificateChain string `json:"certificate_chain"` // raw contents of the public certificate chain (eg: .crt or .pem file) - Preprocess *bool `json:"preprocess,omitempty"` // allow Heroku to modify an uploaded public certificate chain if deemed - // advantageous by adding missing intermediaries, stripping unnecessary - // ones, etc. 
- PrivateKey string `json:"private_key"` // contents of the private key (eg .key file) -}) (*SSLEndpoint, error) { - var sslEndpoint SSLEndpoint - return &sslEndpoint, s.Post(&sslEndpoint, fmt.Sprintf("/apps/%v/ssl-endpoints", appIdentity), o) +func (s *Service) SSLEndpointCreate(ctx context.Context, appIdentity string, o SSLEndpointCreateOpts) (*SSLEndpointCreateResult, error) { + var sslEndpoint SSLEndpointCreateResult + return &sslEndpoint, s.Post(ctx, &sslEndpoint, fmt.Sprintf("/apps/%v/ssl-endpoints", appIdentity), o) +} + +type SSLEndpointDeleteResult struct { + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + Name string `json:"name" url:"name,key"` // unique name of app + } `json:"app" url:"app,key"` // application associated with this ssl-endpoint + CertificateChain string `json:"certificate_chain" url:"certificate_chain,key"` // raw contents of the public certificate chain (eg: .crt or .pem file) + CName string `json:"cname" url:"cname,key"` // canonical name record, the address to point a domain at + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when endpoint was created + ID string `json:"id" url:"id,key"` // unique identifier of this SSL endpoint + Name string `json:"name" url:"name,key"` // unique name for SSL endpoint + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when endpoint was updated } // Delete existing SSL endpoint. -func (s *Service) SSLEndpointDelete(appIdentity string, sslEndpointIdentity string) error { - return s.Delete(fmt.Sprintf("/apps/%v/ssl-endpoints/%v", appIdentity, sslEndpointIdentity)) +func (s *Service) SSLEndpointDelete(ctx context.Context, appIdentity string, sslEndpointIdentity string) (*SSLEndpointDeleteResult, error) { + var sslEndpoint SSLEndpointDeleteResult + return &sslEndpoint, s.Delete(ctx, &sslEndpoint, fmt.Sprintf("/apps/%v/ssl-endpoints/%v", appIdentity, sslEndpointIdentity)) +} + +type SSLEndpointInfoResult struct { + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + Name string `json:"name" url:"name,key"` // unique name of app + } `json:"app" url:"app,key"` // application associated with this ssl-endpoint + CertificateChain string `json:"certificate_chain" url:"certificate_chain,key"` // raw contents of the public certificate chain (eg: .crt or .pem file) + CName string `json:"cname" url:"cname,key"` // canonical name record, the address to point a domain at + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when endpoint was created + ID string `json:"id" url:"id,key"` // unique identifier of this SSL endpoint + Name string `json:"name" url:"name,key"` // unique name for SSL endpoint + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when endpoint was updated } // Info for existing SSL endpoint. 
-func (s *Service) SSLEndpointInfo(appIdentity string, sslEndpointIdentity string) (*SSLEndpoint, error) { - var sslEndpoint SSLEndpoint - return &sslEndpoint, s.Get(&sslEndpoint, fmt.Sprintf("/apps/%v/ssl-endpoints/%v", appIdentity, sslEndpointIdentity), nil) +func (s *Service) SSLEndpointInfo(ctx context.Context, appIdentity string, sslEndpointIdentity string) (*SSLEndpointInfoResult, error) { + var sslEndpoint SSLEndpointInfoResult + return &sslEndpoint, s.Get(ctx, &sslEndpoint, fmt.Sprintf("/apps/%v/ssl-endpoints/%v", appIdentity, sslEndpointIdentity), nil, nil) +} + +type SSLEndpointListResult []struct { + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + Name string `json:"name" url:"name,key"` // unique name of app + } `json:"app" url:"app,key"` // application associated with this ssl-endpoint + CertificateChain string `json:"certificate_chain" url:"certificate_chain,key"` // raw contents of the public certificate chain (eg: .crt or .pem file) + CName string `json:"cname" url:"cname,key"` // canonical name record, the address to point a domain at + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when endpoint was created + ID string `json:"id" url:"id,key"` // unique identifier of this SSL endpoint + Name string `json:"name" url:"name,key"` // unique name for SSL endpoint + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when endpoint was updated } // List existing SSL endpoints. -func (s *Service) SSLEndpointList(appIdentity string, lr *ListRange) ([]*SSLEndpoint, error) { - var sslEndpointList []*SSLEndpoint - return sslEndpointList, s.Get(&sslEndpointList, fmt.Sprintf("/apps/%v/ssl-endpoints", appIdentity), lr) +func (s *Service) SSLEndpointList(ctx context.Context, appIdentity string, lr *ListRange) (SSLEndpointListResult, error) { + var sslEndpoint SSLEndpointListResult + return sslEndpoint, s.Get(ctx, &sslEndpoint, fmt.Sprintf("/apps/%v/ssl-endpoints", appIdentity), nil, lr) } type SSLEndpointUpdateOpts struct { - CertificateChain *string `json:"certificate_chain,omitempty"` // raw contents of the public certificate chain (eg: .crt or .pem file) - Preprocess *bool `json:"preprocess,omitempty"` // allow Heroku to modify an uploaded public certificate chain if deemed + CertificateChain *string `json:"certificate_chain,omitempty" url:"certificate_chain,omitempty,key"` // raw contents of the public certificate chain (eg: .crt or .pem file) + Preprocess *bool `json:"preprocess,omitempty" url:"preprocess,omitempty,key"` // allow Heroku to modify an uploaded public certificate chain if deemed // advantageous by adding missing intermediaries, stripping unnecessary // ones, etc. 
- PrivateKey *string `json:"private_key,omitempty"` // contents of the private key (eg .key file) - Rollback *bool `json:"rollback,omitempty"` // indicates that a rollback should be performed + PrivateKey *string `json:"private_key,omitempty" url:"private_key,omitempty,key"` // contents of the private key (eg .key file) + Rollback *bool `json:"rollback,omitempty" url:"rollback,omitempty,key"` // indicates that a rollback should be performed +} +type SSLEndpointUpdateResult struct { + App struct { + ID string `json:"id" url:"id,key"` // unique identifier of app + Name string `json:"name" url:"name,key"` // unique name of app + } `json:"app" url:"app,key"` // application associated with this ssl-endpoint + CertificateChain string `json:"certificate_chain" url:"certificate_chain,key"` // raw contents of the public certificate chain (eg: .crt or .pem file) + CName string `json:"cname" url:"cname,key"` // canonical name record, the address to point a domain at + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when endpoint was created + ID string `json:"id" url:"id,key"` // unique identifier of this SSL endpoint + Name string `json:"name" url:"name,key"` // unique name for SSL endpoint + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when endpoint was updated } // Update an existing SSL endpoint. -func (s *Service) SSLEndpointUpdate(appIdentity string, sslEndpointIdentity string, o struct { - CertificateChain *string `json:"certificate_chain,omitempty"` // raw contents of the public certificate chain (eg: .crt or .pem file) - Preprocess *bool `json:"preprocess,omitempty"` // allow Heroku to modify an uploaded public certificate chain if deemed - // advantageous by adding missing intermediaries, stripping unnecessary - // ones, etc. - PrivateKey *string `json:"private_key,omitempty"` // contents of the private key (eg .key file) - Rollback *bool `json:"rollback,omitempty"` // indicates that a rollback should be performed -}) (*SSLEndpoint, error) { - var sslEndpoint SSLEndpoint - return &sslEndpoint, s.Patch(&sslEndpoint, fmt.Sprintf("/apps/%v/ssl-endpoints/%v", appIdentity, sslEndpointIdentity), o) +func (s *Service) SSLEndpointUpdate(ctx context.Context, appIdentity string, sslEndpointIdentity string, o SSLEndpointUpdateOpts) (*SSLEndpointUpdateResult, error) { + var sslEndpoint SSLEndpointUpdateResult + return &sslEndpoint, s.Patch(ctx, &sslEndpoint, fmt.Sprintf("/apps/%v/ssl-endpoints/%v", appIdentity, sslEndpointIdentity), o) } // Stacks are the different application execution environments available // in the Heroku platform. 
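Stacks are read-only, so the surface is just info and list; a small sketch (invented helper) of filtering the list by the states named in the field comment:

func publicStacks(ctx context.Context, s *Service) ([]string, error) {
	stacks, err := s.StackList(ctx, nil) // a nil *ListRange is assumed to use default paging
	if err != nil {
		return nil, err
	}
	var names []string
	for _, st := range stacks {
		if st.State == "public" { // other documented states: beta, deprecated
			names = append(names, st.Name)
		}
	}
	return names, nil
}
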
type Stack struct { - CreatedAt time.Time `json:"created_at"` // when stack was introduced - ID string `json:"id"` // unique identifier of stack - Name string `json:"name"` // unique name of stack - State string `json:"state"` // availability of this stack: beta, deprecated or public - UpdatedAt time.Time `json:"updated_at"` // when stack was last modified + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when stack was introduced + ID string `json:"id" url:"id,key"` // unique identifier of stack + Name string `json:"name" url:"name,key"` // unique name of stack + State string `json:"state" url:"state,key"` // availability of this stack: beta, deprecated or public + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when stack was last modified +} +type StackInfoResult struct { + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when stack was introduced + ID string `json:"id" url:"id,key"` // unique identifier of stack + Name string `json:"name" url:"name,key"` // unique name of stack + State string `json:"state" url:"state,key"` // availability of this stack: beta, deprecated or public + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when stack was last modified } // Stack info. -func (s *Service) StackInfo(stackIdentity string) (*Stack, error) { - var stack Stack - return &stack, s.Get(&stack, fmt.Sprintf("/stacks/%v", stackIdentity), nil) +func (s *Service) StackInfo(ctx context.Context, stackIdentity string) (*StackInfoResult, error) { + var stack StackInfoResult + return &stack, s.Get(ctx, &stack, fmt.Sprintf("/stacks/%v", stackIdentity), nil, nil) +} + +type StackListResult []struct { + CreatedAt time.Time `json:"created_at" url:"created_at,key"` // when stack was introduced + ID string `json:"id" url:"id,key"` // unique identifier of stack + Name string `json:"name" url:"name,key"` // unique name of stack + State string `json:"state" url:"state,key"` // availability of this stack: beta, deprecated or public + UpdatedAt time.Time `json:"updated_at" url:"updated_at,key"` // when stack was last modified } // List available stacks. 
-func (s *Service) StackList(lr *ListRange) ([]*Stack, error) { - var stackList []*Stack - return stackList, s.Get(&stackList, fmt.Sprintf("/stacks"), lr) +func (s *Service) StackList(ctx context.Context, lr *ListRange) (StackListResult, error) { + var stack StackListResult + return stack, s.Get(ctx, &stack, fmt.Sprintf("/stacks"), nil, lr) } +// Tracks a user's preferences and message dismissals +type UserPreferences struct { + DefaultOrganization *string `json:"default-organization" url:"default-organization,key"` // User's default organization + DismissedGettingStarted *bool `json:"dismissed-getting-started" url:"dismissed-getting-started,key"` // Whether the user has dismissed the getting started banner + DismissedGithubBanner *bool `json:"dismissed-github-banner" url:"dismissed-github-banner,key"` // Whether the user has dismissed the GitHub link banner + DismissedOrgAccessControls *bool `json:"dismissed-org-access-controls" url:"dismissed-org-access-controls,key"` // Whether the user has dismissed the Organization Access Controls + // banner + DismissedOrgWizardNotification *bool `json:"dismissed-org-wizard-notification" url:"dismissed-org-wizard-notification,key"` // Whether the user has dismissed the Organization Wizard + DismissedPipelinesBanner *bool `json:"dismissed-pipelines-banner" url:"dismissed-pipelines-banner,key"` // Whether the user has dismissed the Pipelines banner + DismissedPipelinesGithubBanner *bool `json:"dismissed-pipelines-github-banner" url:"dismissed-pipelines-github-banner,key"` // Whether the user has dismissed the GitHub banner on a pipeline + // overview + DismissedPipelinesGithubBanners *[]string `json:"dismissed-pipelines-github-banners" url:"dismissed-pipelines-github-banners,key"` // Which pipeline uuids the user has dismissed the GitHub banner for + DismissedSmsBanner *bool `json:"dismissed-sms-banner" url:"dismissed-sms-banner,key"` // Whether the user has dismissed the 2FA SMS banner + Timezone *string `json:"timezone" url:"timezone,key"` // User's default timezone +} +type UserPreferencesListResult struct { + DefaultOrganization *string `json:"default-organization" url:"default-organization,key"` // User's default organization + DismissedGettingStarted *bool `json:"dismissed-getting-started" url:"dismissed-getting-started,key"` // Whether the user has dismissed the getting started banner + DismissedGithubBanner *bool `json:"dismissed-github-banner" url:"dismissed-github-banner,key"` // Whether the user has dismissed the GitHub link banner + DismissedOrgAccessControls *bool `json:"dismissed-org-access-controls" url:"dismissed-org-access-controls,key"` // Whether the user has dismissed the Organization Access Controls + // banner + DismissedOrgWizardNotification *bool `json:"dismissed-org-wizard-notification" url:"dismissed-org-wizard-notification,key"` // Whether the user has dismissed the Organization Wizard + DismissedPipelinesBanner *bool `json:"dismissed-pipelines-banner" url:"dismissed-pipelines-banner,key"` // Whether the user has dismissed the Pipelines banner + DismissedPipelinesGithubBanner *bool `json:"dismissed-pipelines-github-banner" url:"dismissed-pipelines-github-banner,key"` // Whether the user has dismissed the GitHub banner on a pipeline + // overview + DismissedPipelinesGithubBanners *[]string `json:"dismissed-pipelines-github-banners" url:"dismissed-pipelines-github-banners,key"` // Which pipeline uuids the user has dismissed the GitHub banner for + DismissedSmsBanner *bool `json:"dismissed-sms-banner" 
url:"dismissed-sms-banner,key"` // Whether the user has dismissed the 2FA SMS banner + Timezone *string `json:"timezone" url:"timezone,key"` // User's default timezone +} + +// Retrieve User Preferences +func (s *Service) UserPreferencesList(ctx context.Context, userPreferencesIdentity string) (*UserPreferencesListResult, error) { + var userPreferences UserPreferencesListResult + return &userPreferences, s.Get(ctx, &userPreferences, fmt.Sprintf("/users/%v/preferences", userPreferencesIdentity), nil, nil) +} + +type UserPreferencesUpdateOpts struct { + DefaultOrganization *string `json:"default-organization,omitempty" url:"default-organization,omitempty,key"` // User's default organization + DismissedGettingStarted *bool `json:"dismissed-getting-started,omitempty" url:"dismissed-getting-started,omitempty,key"` // Whether the user has dismissed the getting started banner + DismissedGithubBanner *bool `json:"dismissed-github-banner,omitempty" url:"dismissed-github-banner,omitempty,key"` // Whether the user has dismissed the GitHub link banner + DismissedOrgAccessControls *bool `json:"dismissed-org-access-controls,omitempty" url:"dismissed-org-access-controls,omitempty,key"` // Whether the user has dismissed the Organization Access Controls + // banner + DismissedOrgWizardNotification *bool `json:"dismissed-org-wizard-notification,omitempty" url:"dismissed-org-wizard-notification,omitempty,key"` // Whether the user has dismissed the Organization Wizard + DismissedPipelinesBanner *bool `json:"dismissed-pipelines-banner,omitempty" url:"dismissed-pipelines-banner,omitempty,key"` // Whether the user has dismissed the Pipelines banner + DismissedPipelinesGithubBanner *bool `json:"dismissed-pipelines-github-banner,omitempty" url:"dismissed-pipelines-github-banner,omitempty,key"` // Whether the user has dismissed the GitHub banner on a pipeline + // overview + DismissedPipelinesGithubBanners *[]*string `json:"dismissed-pipelines-github-banners,omitempty" url:"dismissed-pipelines-github-banners,omitempty,key"` // Which pipeline uuids the user has dismissed the GitHub banner for + DismissedSmsBanner *bool `json:"dismissed-sms-banner,omitempty" url:"dismissed-sms-banner,omitempty,key"` // Whether the user has dismissed the 2FA SMS banner + Timezone *string `json:"timezone,omitempty" url:"timezone,omitempty,key"` // User's default timezone +} +type UserPreferencesUpdateResult struct { + DefaultOrganization *string `json:"default-organization" url:"default-organization,key"` // User's default organization + DismissedGettingStarted *bool `json:"dismissed-getting-started" url:"dismissed-getting-started,key"` // Whether the user has dismissed the getting started banner + DismissedGithubBanner *bool `json:"dismissed-github-banner" url:"dismissed-github-banner,key"` // Whether the user has dismissed the GitHub link banner + DismissedOrgAccessControls *bool `json:"dismissed-org-access-controls" url:"dismissed-org-access-controls,key"` // Whether the user has dismissed the Organization Access Controls + // banner + DismissedOrgWizardNotification *bool `json:"dismissed-org-wizard-notification" url:"dismissed-org-wizard-notification,key"` // Whether the user has dismissed the Organization Wizard + DismissedPipelinesBanner *bool `json:"dismissed-pipelines-banner" url:"dismissed-pipelines-banner,key"` // Whether the user has dismissed the Pipelines banner + DismissedPipelinesGithubBanner *bool `json:"dismissed-pipelines-github-banner" url:"dismissed-pipelines-github-banner,key"` // Whether the user has dismissed the 
GitHub banner on a pipeline + // overview + DismissedPipelinesGithubBanners *[]string `json:"dismissed-pipelines-github-banners" url:"dismissed-pipelines-github-banners,key"` // Which pipeline uuids the user has dismissed the GitHub banner for + DismissedSmsBanner *bool `json:"dismissed-sms-banner" url:"dismissed-sms-banner,key"` // Whether the user has dismissed the 2FA SMS banner + Timezone *string `json:"timezone" url:"timezone,key"` // User's default timezone +} + +// Update User Preferences +func (s *Service) UserPreferencesUpdate(ctx context.Context, userPreferencesIdentity string, o UserPreferencesUpdateOpts) (*UserPreferencesUpdateResult, error) { + var userPreferences UserPreferencesUpdateResult + return &userPreferences, s.Patch(ctx, &userPreferences, fmt.Sprintf("/users/%v/preferences", userPreferencesIdentity), o) +} + +// Entities that have been whitelisted to be used by an Organization +type WhitelistedAddOnService struct { + AddedAt time.Time `json:"added_at" url:"added_at,key"` // when the add-on service was whitelisted + AddedBy struct { + Email string `json:"email" url:"email,key"` // unique email address of account + ID string `json:"id" url:"id,key"` // unique identifier of an account + } `json:"added_by" url:"added_by,key"` // the user which whitelisted the Add-on Service + AddonService struct { + HumanName string `json:"human_name" url:"human_name,key"` // human-readable name of the add-on service provider + ID string `json:"id" url:"id,key"` // unique identifier of this add-on-service + Name string `json:"name" url:"name,key"` // unique name of this add-on-service + } `json:"addon_service" url:"addon_service,key"` // the Add-on Service whitelisted for use + ID string `json:"id" url:"id,key"` // unique identifier for this whitelisting entity +} +type WhitelistedAddOnServiceListResult []struct { + AddedAt time.Time `json:"added_at" url:"added_at,key"` // when the add-on service was whitelisted + AddedBy struct { + Email string `json:"email" url:"email,key"` // unique email address of account + ID string `json:"id" url:"id,key"` // unique identifier of an account + } `json:"added_by" url:"added_by,key"` // the user which whitelisted the Add-on Service + AddonService struct { + HumanName string `json:"human_name" url:"human_name,key"` // human-readable name of the add-on service provider + ID string `json:"id" url:"id,key"` // unique identifier of this add-on-service + Name string `json:"name" url:"name,key"` // unique name of this add-on-service + } `json:"addon_service" url:"addon_service,key"` // the Add-on Service whitelisted for use + ID string `json:"id" url:"id,key"` // unique identifier for this whitelisting entity +} + +// List all whitelisted Add-on Services for an Organization +func (s *Service) WhitelistedAddOnServiceList(ctx context.Context, organizationIdentity string, lr *ListRange) (WhitelistedAddOnServiceListResult, error) { + var whitelistedAddOnService WhitelistedAddOnServiceListResult + return whitelistedAddOnService, s.Get(ctx, &whitelistedAddOnService, fmt.Sprintf("/organizations/%v/whitelisted-addon-services", organizationIdentity), nil, lr) +} + +type WhitelistedAddOnServiceCreateOpts struct { + AddonService *string `json:"addon_service,omitempty" url:"addon_service,omitempty,key"` // name of the Add-on to whitelist +} +type WhitelistedAddOnServiceCreateResult []struct { + AddedAt time.Time `json:"added_at" url:"added_at,key"` // when the add-on service was whitelisted + AddedBy struct { + Email string `json:"email" url:"email,key"` // unique 
email address of account + ID string `json:"id" url:"id,key"` // unique identifier of an account + } `json:"added_by" url:"added_by,key"` // the user which whitelisted the Add-on Service + AddonService struct { + HumanName string `json:"human_name" url:"human_name,key"` // human-readable name of the add-on service provider + ID string `json:"id" url:"id,key"` // unique identifier of this add-on-service + Name string `json:"name" url:"name,key"` // unique name of this add-on-service + } `json:"addon_service" url:"addon_service,key"` // the Add-on Service whitelisted for use + ID string `json:"id" url:"id,key"` // unique identifier for this whitelisting entity +} + +// Whitelist an Add-on Service +func (s *Service) WhitelistedAddOnServiceCreate(ctx context.Context, organizationIdentity string, o WhitelistedAddOnServiceCreateOpts) (WhitelistedAddOnServiceCreateResult, error) { + var whitelistedAddOnService WhitelistedAddOnServiceCreateResult + return whitelistedAddOnService, s.Post(ctx, &whitelistedAddOnService, fmt.Sprintf("/organizations/%v/whitelisted-addon-services", organizationIdentity), o) +} + +type WhitelistedAddOnServiceDeleteResult struct { + AddedAt time.Time `json:"added_at" url:"added_at,key"` // when the add-on service was whitelisted + AddedBy struct { + Email string `json:"email" url:"email,key"` // unique email address of account + ID string `json:"id" url:"id,key"` // unique identifier of an account + } `json:"added_by" url:"added_by,key"` // the user which whitelisted the Add-on Service + AddonService struct { + HumanName string `json:"human_name" url:"human_name,key"` // human-readable name of the add-on service provider + ID string `json:"id" url:"id,key"` // unique identifier of this add-on-service + Name string `json:"name" url:"name,key"` // unique name of this add-on-service + } `json:"addon_service" url:"addon_service,key"` // the Add-on Service whitelisted for use + ID string `json:"id" url:"id,key"` // unique identifier for this whitelisting entity +} + +// Remove a whitelisted entity +func (s *Service) WhitelistedAddOnServiceDelete(ctx context.Context, organizationIdentity string, whitelistedAddOnServiceIdentity string) (*WhitelistedAddOnServiceDeleteResult, error) { + var whitelistedAddOnService WhitelistedAddOnServiceDeleteResult + return &whitelistedAddOnService, s.Delete(ctx, &whitelistedAddOnService, fmt.Sprintf("/organizations/%v/whitelisted-addon-services/%v", organizationIdentity, whitelistedAddOnServiceIdentity)) +} diff --git a/vendor/github.com/cyberdelia/heroku-go/v3/schema.json b/vendor/github.com/cyberdelia/heroku-go/v3/schema.json index 40fac78958..6e79eef2f8 100644 --- a/vendor/github.com/cyberdelia/heroku-go/v3/schema.json +++ b/vendor/github.com/cyberdelia/heroku-go/v3/schema.json @@ -1,5 +1,8 @@ { "$schema": "http://interagent.github.io/interagent-hyper-schema", + "type": [ + "object" + ], "definitions": { "account-feature": { "description": "An account feature represents a Heroku labs capability that can be enabled or disabled for an account on Heroku.", @@ -212,6 +215,14 @@ "string" ] }, + "federated": { + "description": "whether the user is federated and belongs to an Identity Provider", + "example": false, + "readOnly": true, + "type": [ + "boolean" + ] + }, "id": { "description": "unique identifier of an account", "example": "01234567-89ab-cdef-0123-456789abcdef", @@ -228,6 +239,9 @@ }, { "$ref": "#/definitions/account/definitions/id" + }, + { + "$ref": "#/definitions/account/definitions/self" } ] }, @@ -237,7 +251,8 @@ "format": 
"date-time", "readOnly": true, "type": [ - "string" + "string", + "null" ] }, "name": { @@ -249,14 +264,6 @@ "null" ] }, - "new_password": { - "description": "the new password for the account when changing the password", - "example": "newpassword", - "readOnly": true, - "type": [ - "string" - ] - }, "password": { "description": "current password on the account", "example": "currentpassword", @@ -265,6 +272,54 @@ "string" ] }, + "self": { + "description": "Implicit reference to currently authorized user", + "enum": [ + "~" + ], + "example": "~", + "readOnly": true, + "type": [ + "string" + ] + }, + "sms_number": { + "description": "SMS number of account", + "example": "+1 ***-***-1234", + "readOnly": true, + "type": [ + "string", + "null" + ] + }, + "suspended_at": { + "description": "when account was suspended", + "example": "2012-01-01T12:00:00Z", + "format": "date-time", + "readOnly": true, + "type": [ + "string", + "null" + ] + }, + "delinquent_at": { + "description": "when account became delinquent", + "example": "2012-01-01T12:00:00Z", + "format": "date-time", + "readOnly": true, + "type": [ + "string", + "null" + ] + }, + "two_factor_authentication": { + "description": "whether two-factor auth is enabled on the account", + "example": false, + "readOnly": true, + "type": [ + "boolean" + ] + }, "updated_at": { "description": "when account was updated", "example": "2012-01-01T12:00:00Z", @@ -310,14 +365,8 @@ }, "name": { "$ref": "#/definitions/account/definitions/name" - }, - "password": { - "$ref": "#/definitions/account/definitions/password" } }, - "required": [ - "password" - ], "type": [ "object" ] @@ -328,47 +377,42 @@ "title": "Update" }, { - "description": "Change Email for account.", + "description": "Delete account. Note that this action cannot be undone.", "href": "/account", - "method": "PATCH", - "rel": "update", - "schema": { - "properties": { - "email": { - "$ref": "#/definitions/account/definitions/email" - }, - "password": { - "$ref": "#/definitions/account/definitions/password" - } - }, - "required": [ - "password", - "email" - ], - "type": [ - "object" - ] + "method": "DELETE", + "rel": "destroy", + "targetSchema": { + "$ref": "#/definitions/account" }, - "title": "Change Email" + "title": "Delete" }, { - "description": "Change Password for account.", - "href": "/account", + "description": "Info for account.", + "href": "/users/{(%23%2Fdefinitions%2Faccount%2Fdefinitions%2Fidentity)}", + "method": "GET", + "rel": "self", + "targetSchema": { + "$ref": "#/definitions/account" + }, + "title": "Info" + }, + { + "description": "Update account.", + "href": "/users/{(%23%2Fdefinitions%2Faccount%2Fdefinitions%2Fidentity)}", "method": "PATCH", "rel": "update", "schema": { "properties": { - "new_password": { - "$ref": "#/definitions/account/definitions/new_password" + "allow_tracking": { + "$ref": "#/definitions/account/definitions/allow_tracking" }, - "password": { - "$ref": "#/definitions/account/definitions/password" + "beta": { + "$ref": "#/definitions/account/definitions/beta" + }, + "name": { + "$ref": "#/definitions/account/definitions/name" } }, - "required": [ - "new_password", - "password" - ], "type": [ "object" ] @@ -376,7 +420,17 @@ "targetSchema": { "$ref": "#/definitions/account" }, - "title": "Change Password" + "title": "Update" + }, + { + "description": "Delete account. 
Note that this action cannot be undone.", + "href": "/users/{(%23%2Fdefinitions%2Faccount%2Fdefinitions%2Fidentity)}", + "method": "DELETE", + "rel": "destroy", + "targetSchema": { + "$ref": "#/definitions/account" + }, + "title": "Delete" } ], "properties": { @@ -392,24 +446,642 @@ "email": { "$ref": "#/definitions/account/definitions/email" }, + "federated": { + "$ref": "#/definitions/account/definitions/federated" + }, "id": { "$ref": "#/definitions/account/definitions/id" }, + "identity_provider": { + "description": "Identity Provider details for federated users.", + "properties": { + "id": { + "$ref": "#/definitions/identity-provider/definitions/id" + }, + "organization": { + "type": [ + "object" + ], + "properties": { + "name": { + "$ref": "#/definitions/organization/definitions/name" + } + } + } + }, + "type": [ + "object", + "null" + ] + }, "last_login": { "$ref": "#/definitions/account/definitions/last_login" }, "name": { "$ref": "#/definitions/account/definitions/name" }, + "sms_number": { + "$ref": "#/definitions/account/definitions/sms_number" + }, + "suspended_at": { + "$ref": "#/definitions/account/definitions/suspended_at" + }, + "delinquent_at": { + "$ref": "#/definitions/account/definitions/delinquent_at" + }, + "two_factor_authentication": { + "$ref": "#/definitions/account/definitions/two_factor_authentication" + }, "updated_at": { "$ref": "#/definitions/account/definitions/updated_at" }, "verified": { "$ref": "#/definitions/account/definitions/verified" + }, + "default_organization": { + "description": "organization selected by default", + "properties": { + "id": { + "$ref": "#/definitions/organization/definitions/id" + }, + "name": { + "$ref": "#/definitions/organization/definitions/name" + } + }, + "strictProperties": true, + "type": [ + "object", + "null" + ] } } }, - "addon-service": { + "add-on-action": { + "description": "Add-on Actions are lifecycle operations for add-on provisioning and deprovisioning. 
They allow whitelisted add-on providers to (de)provision add-ons in the background and then report back when (de)provisioning is complete.", + "$schema": "http://json-schema.org/draft-04/hyper-schema", + "stability": "development", + "strictProperties": true, + "title": "Heroku Platform API - Add-on Action", + "type": [ + "object" + ], + "definitions": { + }, + "links": [ + { + "description": "Mark an add-on as provisioned for use.", + "href": "/addons/{(%23%2Fdefinitions%2Fadd-on%2Fdefinitions%2Fidentity)}/actions/provision", + "method": "POST", + "rel": "create", + "targetSchema": { + "$ref": "#/definitions/add-on" + }, + "title": "Create - Provision" + }, + { + "description": "Mark an add-on as deprovisioned.", + "href": "/addons/{(%23%2Fdefinitions%2Fadd-on%2Fdefinitions%2Fidentity)}/actions/deprovision", + "method": "POST", + "rel": "create", + "targetSchema": { + "$ref": "#/definitions/add-on" + }, + "title": "Create - Deprovision" + } + ], + "properties": { + } + }, + "add-on-attachment": { + "description": "An add-on attachment represents a connection between an app and an add-on that it has been given access to.", + "$schema": "http://json-schema.org/draft-04/hyper-schema", + "stability": "prototype", + "strictProperties": true, + "title": "Heroku Platform API - Add-on Attachment", + "type": [ + "object" + ], + "definitions": { + "created_at": { + "description": "when add-on attachment was created", + "example": "2012-01-01T12:00:00Z", + "format": "date-time", + "readOnly": true, + "type": [ + "string" + ] + }, + "id": { + "description": "unique identifier of this add-on attachment", + "example": "01234567-89ab-cdef-0123-456789abcdef", + "format": "uuid", + "readOnly": true, + "type": [ + "string" + ] + }, + "force": { + "default": false, + "description": "whether or not to allow existing attachment with same name to be replaced", + "example": false, + "readOnly": false, + "type": [ + "boolean" + ] + }, + "identity": { + "anyOf": [ + { + "$ref": "#/definitions/add-on-attachment/definitions/id" + } + ] + }, + "scopedIdentity": { + "anyOf": [ + { + "$ref": "#/definitions/add-on-attachment/definitions/id" + }, + { + "$ref": "#/definitions/add-on-attachment/definitions/name" + } + ] + }, + "name": { + "description": "unique name for this add-on attachment to this app", + "example": "DATABASE", + "readOnly": true, + "type": [ + "string" + ] + }, + "updated_at": { + "description": "when add-on attachment was updated", + "example": "2012-01-01T12:00:00Z", + "format": "date-time", + "readOnly": true, + "type": [ + "string" + ] + }, + "web_url": { + "description": "URL for logging into web interface of add-on in attached app context", + "example": "https://postgres.heroku.com/databases/01234567-89ab-cdef-0123-456789abcdef", + "format": "uri", + "readOnly": true, + "type": [ + "null", + "string" + ] + } + }, + "links": [ + { + "description": "Create a new add-on attachment.", + "href": "/addon-attachments", + "method": "POST", + "rel": "create", + "schema": { + "properties": { + "addon": { + "$ref": "#/definitions/add-on/definitions/identity" + }, + "app": { + "$ref": "#/definitions/app/definitions/identity" + }, + "force": { + "$ref": "#/definitions/add-on-attachment/definitions/force" + }, + "name": { + "$ref": "#/definitions/add-on-attachment/definitions/name" + } + }, + "required": [ + "addon", + "app" + ], + "type": [ + "object" + ] + }, + "targetSchema": { + "$ref": "#/definitions/add-on-attachment" + }, + "title": "Create" + }, + { + "description": "Delete an existing add-on 
attachment.", + "href": "/addon-attachments/{(%23%2Fdefinitions%2Fadd-on-attachment%2Fdefinitions%2Fidentity)}", + "method": "DELETE", + "rel": "destroy", + "targetSchema": { + "$ref": "#/definitions/add-on-attachment" + }, + "title": "Delete" + }, + { + "description": "Info for existing add-on attachment.", + "href": "/addon-attachments/{(%23%2Fdefinitions%2Fadd-on-attachment%2Fdefinitions%2Fidentity)}", + "method": "GET", + "rel": "self", + "targetSchema": { + "$ref": "#/definitions/add-on-attachment" + }, + "title": "Info" + }, + { + "description": "List existing add-on attachments.", + "href": "/addon-attachments", + "method": "GET", + "rel": "instances", + "targetSchema": { + "items": { + "$ref": "#/definitions/add-on-attachment" + }, + "type": [ + "array" + ] + }, + "title": "List" + }, + { + "description": "List existing add-on attachments for an add-on.", + "href": "/addons/{(%23%2Fdefinitions%2Fadd-on%2Fdefinitions%2Fidentity)}/addon-attachments", + "method": "GET", + "rel": "instances", + "targetSchema": { + "items": { + "$ref": "#/definitions/add-on-attachment" + }, + "type": [ + "array" + ] + }, + "title": "List by Add-on" + }, + { + "description": "List existing add-on attachments for an app.", + "href": "/apps/{(%23%2Fdefinitions%2Fapp%2Fdefinitions%2Fidentity)}/addon-attachments", + "method": "GET", + "rel": "instances", + "targetSchema": { + "items": { + "$ref": "#/definitions/add-on-attachment" + }, + "type": [ + "array" + ] + }, + "title": "List by App" + }, + { + "description": "Info for existing add-on attachment for an app.", + "href": "/apps/{(%23%2Fdefinitions%2Fapp%2Fdefinitions%2Fidentity)}/addon-attachments/{(%23%2Fdefinitions%2Fadd-on-attachment%2Fdefinitions%2FscopedIdentity)}", + "method": "GET", + "rel": "self", + "targetSchema": { + "$ref": "#/definitions/add-on-attachment" + }, + "title": "Info by App" + } + ], + "properties": { + "addon": { + "description": "identity of add-on", + "properties": { + "id": { + "$ref": "#/definitions/add-on/definitions/id" + }, + "name": { + "$ref": "#/definitions/add-on/definitions/name" + }, + "app": { + "description": "billing application associated with this add-on", + "type": [ + "object" + ], + "properties": { + "id": { + "$ref": "#/definitions/app/definitions/id" + }, + "name": { + "$ref": "#/definitions/app/definitions/name" + } + }, + "strictProperties": true + }, + "plan": { + "description": "identity of add-on plan", + "properties": { + "id": { + "$ref": "#/definitions/plan/definitions/id" + }, + "name": { + "$ref": "#/definitions/plan/definitions/name" + } + }, + "strictProperties": true, + "type": [ + "object" + ] + } + }, + "additionalProperties": false, + "required": [ + "id", + "name", + "app" + ], + "type": [ + "object" + ] + }, + "app": { + "description": "application that is attached to add-on", + "properties": { + "id": { + "$ref": "#/definitions/app/definitions/id" + }, + "name": { + "$ref": "#/definitions/app/definitions/name" + } + }, + "strictProperties": true, + "type": [ + "object" + ] + }, + "created_at": { + "$ref": "#/definitions/add-on-attachment/definitions/created_at" + }, + "id": { + "$ref": "#/definitions/add-on-attachment/definitions/id" + }, + "name": { + "$ref": "#/definitions/add-on-attachment/definitions/name" + }, + "updated_at": { + "$ref": "#/definitions/add-on-attachment/definitions/updated_at" + }, + "web_url": { + "$ref": "#/definitions/add-on-attachment/definitions/web_url" + } + } + }, + "add-on-config": { + "description": "Configuration of an Add-on", + "$schema": 
"http://json-schema.org/draft-04/hyper-schema", + "stability": "development", + "strictProperties": true, + "title": "Heroku Platform API - Add-on Config", + "type": [ + "object" + ], + "definitions": { + "identity": { + "anyOf": [ + { + "$ref": "#/definitions/add-on-config/definitions/name" + } + ] + }, + "name": { + "description": "unique name of the config", + "example": "FOO", + "type": [ + "string" + ] + }, + "value": { + "description": "value of the config", + "example": "bar", + "type": [ + "string", + "null" + ] + } + }, + "links": [ + { + "description": "Get an add-on's config. Accessible by customers with access and by the add-on partner providing this add-on.", + "href": "/addons/{(%23%2Fdefinitions%2Fadd-on%2Fdefinitions%2Fidentity)}/config", + "method": "GET", + "rel": "instances", + "targetSchema": { + "items": { + "$ref": "#/definitions/add-on-config" + }, + "type": [ + "array" + ] + }, + "title": "List" + }, + { + "description": "Update an add-on's config. Can only be accessed by the add-on partner providing this add-on.", + "href": "/addons/{(%23%2Fdefinitions%2Fadd-on%2Fdefinitions%2Fidentity)}/config", + "method": "PATCH", + "rel": "update", + "schema": { + "properties": { + "config": { + "items": { + "$ref": "#/definitions/add-on-config" + }, + "type": [ + "array" + ] + } + }, + "type": [ + "object" + ] + }, + "targetSchema": { + "type": [ + "array" + ], + "items": { + "$ref": "#/definitions/add-on-config" + } + }, + "title": "Update" + } + ], + "properties": { + "name": { + "$ref": "#/definitions/add-on-config/definitions/name" + }, + "value": { + "$ref": "#/definitions/add-on-config/definitions/value" + } + } + }, + "add-on-plan-action": { + "description": "Add-on Plan Actions are Provider functionality for specific add-on installations", + "$schema": "http://json-schema.org/draft-04/hyper-schema", + "stability": "development", + "strictProperties": true, + "title": "Heroku Platform API - Add-on Plan Action", + "type": [ + "object" + ], + "definitions": { + "id": { + "description": "a unique identifier", + "example": "01234567-89ab-cdef-0123-456789abcdef", + "format": "uuid", + "readOnly": true, + "type": [ + "string" + ] + }, + "identity": { + "$ref": "#/definitions/add-on-plan-action/definitions/id" + }, + "label": { + "description": "the display text shown in Dashboard", + "example": "Example", + "readOnly": true, + "type": [ + "string" + ] + }, + "action": { + "description": "identifier of the action to take that is sent via SSO", + "example": "example", + "readOnly": true, + "type": [ + "string" + ] + }, + "url": { + "description": "absolute URL to use instead of an action", + "example": "http://example.com?resource_id=:resource_id", + "readOnly": true, + "type": [ + "string" + ] + }, + "requires_owner": { + "description": "if the action requires the user to own the app", + "example": true, + "readOnly": true, + "type": [ + "boolean" + ] + } + }, + "properties": { + "id": { + "$ref": "#/definitions/add-on-plan-action/definitions/id" + }, + "label": { + "$ref": "#/definitions/add-on-plan-action/definitions/label" + }, + "action": { + "$ref": "#/definitions/add-on-plan-action/definitions/action" + }, + "url": { + "$ref": "#/definitions/add-on-plan-action/definitions/url" + }, + "requires_owner": { + "$ref": "#/definitions/add-on-plan-action/definitions/requires_owner" + } + } + }, + "add-on-region-capability": { + "description": "Add-on region capabilities represent the relationship between an Add-on Service and a specific Region. 
Only Beta and GA add-ons are returned by these endpoints.", + "$schema": "http://json-schema.org/draft-04/hyper-schema", + "stability": "production", + "strictProperties": true, + "title": "Heroku Platform API - Add-on Region Capability", + "type": [ + "object" + ], + "definitions": { + "id": { + "description": "unique identifier of this add-on-region-capability", + "example": "01234567-89ab-cdef-0123-456789abcdef", + "format": "uuid", + "readOnly": true, + "type": [ + "string" + ] + }, + "supports_private_networking": { + "description": "whether the add-on can be installed to a Space", + "readOnly": true, + "type": [ + "boolean" + ] + }, + "identity": { + "$ref": "#/definitions/add-on-region-capability/definitions/id" + } + }, + "links": [ + { + "description": "List all existing add-on region capabilities.", + "href": "/addon-region-capabilities", + "method": "GET", + "rel": "instances", + "targetSchema": { + "items": { + "$ref": "#/definitions/add-on-region-capability" + }, + "type": [ + "array" + ] + }, + "title": "List" + }, + { + "description": "List existing add-on region capabilities for an add-on-service", + "href": "/addon-services/{(%23%2Fdefinitions%2Fadd-on-service%2Fdefinitions%2Fidentity)}/region-capabilities", + "method": "GET", + "rel": "instances", + "targetSchema": { + "items": { + "$ref": "#/definitions/add-on-region-capability" + }, + "type": [ + "array" + ] + }, + "title": "List by Add-on Service" + }, + { + "description": "List existing add-on region capabilities for a region.", + "href": "/regions/{(%23%2Fdefinitions%2Fregion%2Fdefinitions%2Fidentity)}/addon-region-capabilities", + "method": "GET", + "rel": "instances", + "targetSchema": { + "items": { + "$ref": "#/definitions/add-on-region-capability" + }, + "type": [ + "array" + ] + }, + "title": "List" + } + ], + "properties": { + "id": { + "$ref": "#/definitions/add-on-region-capability/definitions/id" + }, + "supports_private_networking": { + "$ref": "#/definitions/add-on-region-capability/definitions/supports_private_networking" + }, + "addon_service": { + "$ref": "#/definitions/add-on-service" + }, + "region": { + "$ref": "#/definitions/region" + } + } + }, + "add-on-service": { "description": "Add-on services represent add-ons that may be provisioned for apps. 
Endpoints under add-on services can be accessed without authentication.", "$schema": "http://json-schema.org/draft-04/hyper-schema", "stability": "production", @@ -419,8 +1091,17 @@ "object" ], "definitions": { + "cli_plugin_name": { + "description": "npm package name of the add-on service's Heroku CLI plugin", + "example": "heroku-papertrail", + "readOnly": true, + "type": [ + "string", + "null" + ] + }, "created_at": { - "description": "when addon-service was created", + "description": "when add-on-service was created", "example": "2012-01-01T12:00:00Z", "format": "date-time", "readOnly": true, @@ -428,8 +1109,16 @@ "string" ] }, + "human_name": { + "description": "human-readable name of the add-on service provider", + "example": "Heroku Postgres", + "readOnly": true, + "type": [ + "string" + ] + }, "id": { - "description": "unique identifier of this addon-service", + "description": "unique identifier of this add-on-service", "example": "01234567-89ab-cdef-0123-456789abcdef", "format": "uuid", "readOnly": true, @@ -440,23 +1129,55 @@ "identity": { "anyOf": [ { - "$ref": "#/definitions/addon-service/definitions/id" + "$ref": "#/definitions/add-on-service/definitions/id" }, { - "$ref": "#/definitions/addon-service/definitions/name" + "$ref": "#/definitions/add-on-service/definitions/name" } ] }, "name": { - "description": "unique name of this addon-service", + "description": "unique name of this add-on-service", "example": "heroku-postgresql", "readOnly": true, "type": [ "string" ] }, + "state": { + "description": "release status for add-on service", + "enum": [ + "alpha", + "beta", + "ga", + "shutdown" + ], + "example": "ga", + "readOnly": true, + "type": [ + "string" + ] + }, + "supports_multiple_installations": { + "default": false, + "description": "whether or not apps can have access to more than one instance of this add-on at the same time", + "example": false, + "readOnly": true, + "type": [ + "boolean" + ] + }, + "supports_sharing": { + "default": false, + "description": "whether or not apps can have access to add-ons billed to a different app", + "example": false, + "readOnly": true, + "type": [ + "boolean" + ] + }, "updated_at": { - "description": "when addon-service was updated", + "description": "when add-on-service was updated", "example": "2012-01-01T12:00:00Z", "format": "date-time", "readOnly": true, @@ -467,23 +1188,23 @@ }, "links": [ { - "description": "Info for existing addon-service.", - "href": "/addon-services/{(%23%2Fdefinitions%2Faddon-service%2Fdefinitions%2Fidentity)}", + "description": "Info for existing add-on-service.", + "href": "/addon-services/{(%23%2Fdefinitions%2Fadd-on-service%2Fdefinitions%2Fidentity)}", "method": "GET", "rel": "self", "targetSchema": { - "$ref": "#/definitions/addon-service" + "$ref": "#/definitions/add-on-service" }, "title": "Info" }, { - "description": "List existing addon-services.", + "description": "List existing add-on-services.", "href": "/addon-services", "method": "GET", "rel": "instances", "targetSchema": { "items": { - "$ref": "#/definitions/addon-service" + "$ref": "#/definitions/add-on-service" }, "type": [ "array" @@ -493,22 +1214,37 @@ } ], "properties": { + "cli_plugin_name": { + "$ref": "#/definitions/add-on-service/definitions/cli_plugin_name" + }, "created_at": { - "$ref": "#/definitions/addon-service/definitions/created_at" + "$ref": "#/definitions/add-on-service/definitions/created_at" + }, + "human_name": { + "$ref": "#/definitions/add-on-service/definitions/human_name" }, "id": { - "$ref": 
"#/definitions/addon-service/definitions/id" + "$ref": "#/definitions/add-on-service/definitions/id" }, "name": { - "$ref": "#/definitions/addon-service/definitions/name" + "$ref": "#/definitions/add-on-service/definitions/name" + }, + "state": { + "$ref": "#/definitions/add-on-service/definitions/state" + }, + "supports_multiple_installations": { + "$ref": "#/definitions/add-on-service/definitions/supports_multiple_installations" + }, + "supports_sharing": { + "$ref": "#/definitions/add-on-service/definitions/supports_sharing" }, "updated_at": { - "$ref": "#/definitions/addon-service/definitions/updated_at" + "$ref": "#/definitions/add-on-service/definitions/updated_at" } } }, - "addon": { - "description": "Add-ons represent add-ons that have been provisioned for an app.", + "add-on": { + "description": "Add-ons represent add-ons that have been provisioned and attached to one or more apps.", "$schema": "http://json-schema.org/draft-04/hyper-schema", "stability": "production", "strictProperties": true, @@ -517,8 +1253,37 @@ "object" ], "definitions": { + "actions": { + "description": "provider actions for this specific add-on", + "type": [ + "array" + ], + "items": { + "type": [ + "object" + ] + }, + "readOnly": true, + "properties": { + "id": { + "$ref": "#/definitions/add-on-plan-action/definitions/id" + }, + "label": { + "$ref": "#/definitions/add-on-plan-action/definitions/label" + }, + "action": { + "$ref": "#/definitions/add-on-plan-action/definitions/action" + }, + "url": { + "$ref": "#/definitions/add-on-plan-action/definitions/url" + }, + "requires_owner": { + "$ref": "#/definitions/add-on-plan-action/definitions/requires_owner" + } + } + }, "config_vars": { - "description": "config vars associated with this application", + "description": "config vars exposed to the owning app by this add-on", "example": [ "FOO", "BAZ" @@ -534,7 +1299,7 @@ ] }, "created_at": { - "description": "when add-on was updated", + "description": "when add-on was created", "example": "2012-01-01T12:00:00Z", "format": "date-time", "readOnly": true, @@ -554,17 +1319,17 @@ "identity": { "anyOf": [ { - "$ref": "#/definitions/addon/definitions/id" + "$ref": "#/definitions/add-on/definitions/id" }, { - "$ref": "#/definitions/addon/definitions/name" + "$ref": "#/definitions/add-on/definitions/name" } ] }, "name": { - "description": "name of the add-on unique within its app", - "example": "heroku-postgresql-teal", - "pattern": "^[a-z][a-z0-9-]+$", + "description": "globally unique name of the add-on", + "example": "acme-inc-primary-database", + "pattern": "^[a-zA-Z][A-Za-z0-9_-]+$", "readOnly": true, "type": [ "string" @@ -572,7 +1337,20 @@ }, "provider_id": { "description": "id of this add-on with its provider", - "example": "app123@heroku.com", + "example": "abcd1234", + "readOnly": true, + "type": [ + "string" + ] + }, + "state": { + "description": "state in the add-on's lifecycle", + "enum": [ + "provisioning", + "provisioned", + "deprovisioned" + ], + "example": "provisioned", "readOnly": true, "type": [ "string" @@ -586,6 +1364,16 @@ "type": [ "string" ] + }, + "web_url": { + "description": "URL for logging into web interface of add-on (e.g. 
a dashboard)", + "example": "https://postgres.heroku.com/databases/01234567-89ab-cdef-0123-456789abcdef", + "format": "uri", + "readOnly": true, + "type": [ + "null", + "string" + ] } }, "links": [ @@ -596,6 +1384,18 @@ "rel": "create", "schema": { "properties": { + "attachment": { + "description": "name for add-on's initial attachment", + "example": { + "name": "DATABASE_FOLLOWER" + }, + "name": { + "$ref": "#/definitions/add-on-attachment/definitions/name" + }, + "type": [ + "object" + ] + }, "config": { "additionalProperties": false, "description": "custom add-on provisioning options", @@ -625,38 +1425,38 @@ ] }, "targetSchema": { - "$ref": "#/definitions/addon" + "$ref": "#/definitions/add-on" }, "title": "Create" }, { "description": "Delete an existing add-on.", - "href": "/apps/{(%23%2Fdefinitions%2Fapp%2Fdefinitions%2Fidentity)}/addons/{(%23%2Fdefinitions%2Faddon%2Fdefinitions%2Fidentity)}", + "href": "/apps/{(%23%2Fdefinitions%2Fapp%2Fdefinitions%2Fidentity)}/addons/{(%23%2Fdefinitions%2Fadd-on%2Fdefinitions%2Fidentity)}", "method": "DELETE", "rel": "destroy", "targetSchema": { - "$ref": "#/definitions/addon" + "$ref": "#/definitions/add-on" }, "title": "Delete" }, { "description": "Info for an existing add-on.", - "href": "/apps/{(%23%2Fdefinitions%2Fapp%2Fdefinitions%2Fidentity)}/addons/{(%23%2Fdefinitions%2Faddon%2Fdefinitions%2Fidentity)}", + "href": "/apps/{(%23%2Fdefinitions%2Fapp%2Fdefinitions%2Fidentity)}/addons/{(%23%2Fdefinitions%2Fadd-on%2Fdefinitions%2Fidentity)}", "method": "GET", "rel": "self", "targetSchema": { - "$ref": "#/definitions/addon" + "$ref": "#/definitions/add-on" }, "title": "Info" }, { - "description": "List existing add-ons.", - "href": "/apps/{(%23%2Fdefinitions%2Fapp%2Fdefinitions%2Fidentity)}/addons", + "description": "List all existing add-ons.", + "href": "/addons", "method": "GET", "rel": "instances", "targetSchema": { "items": { - "$ref": "#/definitions/addon" + "$ref": "#/definitions/add-on" }, "type": [ "array" @@ -664,9 +1464,49 @@ }, "title": "List" }, + { + "description": "Info for an existing add-on.", + "href": "/addons/{(%23%2Fdefinitions%2Fadd-on%2Fdefinitions%2Fidentity)}", + "method": "GET", + "rel": "self", + "targetSchema": { + "$ref": "#/definitions/add-on" + }, + "title": "Info" + }, + { + "description": "List all existing add-ons a user has access to", + "href": "/users/{(%23%2Fdefinitions%2Faccount%2Fdefinitions%2Fidentity)}/addons", + "method": "GET", + "rel": "instances", + "targetSchema": { + "items": { + "$ref": "#/definitions/add-on" + }, + "type": [ + "array" + ] + }, + "title": "List by User" + }, + { + "description": "List existing add-ons for an app.", + "href": "/apps/{(%23%2Fdefinitions%2Fapp%2Fdefinitions%2Fidentity)}/addons", + "method": "GET", + "rel": "instances", + "targetSchema": { + "items": { + "$ref": "#/definitions/add-on" + }, + "type": [ + "array" + ] + }, + "title": "List by App" + }, { "description": "Change add-on plan. Some add-ons may not support changing plans. 
In that case, an error will be returned.", - "href": "/apps/{(%23%2Fdefinitions%2Fapp%2Fdefinitions%2Fidentity)}/addons/{(%23%2Fdefinitions%2Faddon%2Fdefinitions%2Fidentity)}", + "href": "/apps/{(%23%2Fdefinitions%2Fapp%2Fdefinitions%2Fidentity)}/addons/{(%23%2Fdefinitions%2Fadd-on%2Fdefinitions%2Fidentity)}", "method": "PATCH", "rel": "update", "schema": { @@ -686,14 +1526,17 @@ } ], "properties": { + "actions": { + "$ref": "#/definitions/add-on/definitions/actions" + }, "addon_service": { "description": "identity of add-on service", "properties": { "id": { - "$ref": "#/definitions/addon-service/definitions/id" + "$ref": "#/definitions/add-on-service/definitions/id" }, "name": { - "$ref": "#/definitions/addon-service/definitions/name" + "$ref": "#/definitions/add-on-service/definitions/name" } }, "strictProperties": true, @@ -701,17 +1544,32 @@ "object" ] }, + "app": { + "description": "billing application associated with this add-on", + "type": [ + "object" + ], + "properties": { + "id": { + "$ref": "#/definitions/app/definitions/id" + }, + "name": { + "$ref": "#/definitions/app/definitions/name" + } + }, + "strictProperties": true + }, "config_vars": { - "$ref": "#/definitions/addon/definitions/config_vars" + "$ref": "#/definitions/add-on/definitions/config_vars" }, "created_at": { - "$ref": "#/definitions/addon/definitions/created_at" + "$ref": "#/definitions/add-on/definitions/created_at" }, "id": { - "$ref": "#/definitions/addon/definitions/id" + "$ref": "#/definitions/add-on/definitions/id" }, "name": { - "$ref": "#/definitions/addon/definitions/name" + "$ref": "#/definitions/add-on/definitions/name" }, "plan": { "description": "identity of add-on plan", @@ -729,10 +1587,16 @@ ] }, "provider_id": { - "$ref": "#/definitions/addon/definitions/provider_id" + "$ref": "#/definitions/add-on/definitions/provider_id" + }, + "state": { + "$ref": "#/definitions/add-on/definitions/state" }, "updated_at": { - "$ref": "#/definitions/addon/definitions/updated_at" + "$ref": "#/definitions/add-on/definitions/updated_at" + }, + "web_url": { + "$ref": "#/definitions/add-on/definitions/web_url" } } }, @@ -901,6 +1765,64 @@ } } }, + "app-formation-set": { + "description": "App formation set describes the combination of process types with their quantities and sizes as well as application process tier", + "$schema": "http://json-schema.org/draft-04/hyper-schema", + "stability": "development", + "strictProperties": true, + "title": "Heroku Platform API - Application Formation Set", + "type": [ + "object" + ], + "properties": { + "description": { + "description": "a string representation of the formation set", + "example": "web@2:Standard-2X worker@3:Performance-M", + "readOnly": true, + "type": [ + "string" + ] + }, + "process_tier": { + "description": "application process tier", + "enum": [ + "production", + "traditional", + "free", + "hobby", + "private" + ], + "example": "production", + "readOnly": true, + "type": [ + "string" + ] + }, + "app": { + "description": "app being described by the formation-set", + "properties": { + "name": { + "$ref": "#/definitions/app/definitions/name" + }, + "id": { + "$ref": "#/definitions/app/definitions/id" + } + }, + "type": [ + "object" + ] + }, + "updated_at": { + "description": "last time formation-set was updated", + "example": "2012-01-01T12:00:00Z", + "format": "date-time", + "readOnly": true, + "type": [ + "string" + ] + } + } + }, + "app-setup": { + "description": "An app setup represents an app on Heroku that is setup using an environment, addons, and scripts 
described in an app.json manifest file.", "$schema": "http://json-schema.org/draft-04/hyper-schema", @@ -927,6 +1849,21 @@ } ] }, + "buildpack_override": { + "description": "a buildpack override", + "properties": { + "url": { + "description": "location of the buildpack", + "example": "https://example.com/buildpack.tgz", + "type": [ + "string" + ] + } + }, + "type": [ + "object" + ] + }, "created_at": { "description": "when app setup was created", "example": "2012-01-01T12:00:00Z", @@ -947,7 +1884,7 @@ }, "status": { "description": "the overall status of app setup", - "example": "succeeded", + "example": "failed", "enum": [ "failed", "pending", @@ -960,7 +1897,7 @@ }, "resolved_success_url": { "description": "fully qualified success url", - "example": "http://example.herokuapp.com/welcome", + "example": "https://example.herokuapp.com/welcome", "readOnly": true, "type": [ "string", @@ -991,6 +1928,58 @@ "array" ] }, + "overrides": { + "description": "overrides of keys in the app.json manifest file", + "example": { + "buildpacks": [ + { + "url": "https://example.com/buildpack.tgz" + } + ], + "env": { + "FOO": "bar", + "BAZ": "qux" + } + }, + "properties": { + "buildpacks": { + "description": "overrides the buildpacks specified in the app.json manifest file", + "example": [ + { + "url": "https://example.com/buildpack.tgz" + } + ], + "items": { + "$ref": "#/definitions/app-setup/definitions/buildpack_override" + }, + "type": [ + "array" + ] + }, + "env": { + "description": "overrides of the env specified in the app.json manifest file", + "example": { + "FOO": "bar", + "BAZ": "qux" + }, + "readOnly": true, + "additionalProperties": false, + "patternProperties": { + "^\\w+$": { + "type": [ + "string" + ] + } + }, + "type": [ + "object" + ] + } + }, + "type": [ + "object" + ] + }, "postdeploy": { "description": "result of postdeploy script", "type": [ @@ -1053,6 +2042,7 @@ "description": "identity and status of build", "strictProperties": true, "type": [ + "null", "object" ], "properties": { @@ -1061,6 +2051,9 @@ }, "status": { "$ref": "#/definitions/build/definitions/status" + }, + "output_stream_url": { + "$ref": "#/definitions/build/definitions/output_stream_url" } } }, @@ -1107,6 +2100,9 @@ "region": { "$ref": "#/definitions/region/definitions/name" }, + "space": { + "$ref": "#/definitions/space/definitions/name" + }, "stack": { "$ref": "#/definitions/stack/definitions/name" } @@ -1117,8 +2113,16 @@ }, "source_blob": { "description": "gzipped tarball of source code containing app.json manifest file", - "example": "https://example.com/source.tgz?token=xyz", "properties": { + "checksum": { + "description": "an optional checksum of the gzipped tarball for verifying its integrity", + "example": "SHA256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855", + "readOnly": true, + "type": [ + "null", + "string" + ] + }, "url": { "description": "URL of gzipped tarball of source code containing app.json manifest file", "example": "https://example.com/source.tgz?token=xyz", @@ -1126,6 +2130,15 @@ "type": [ "string" ] + }, + "version": { + "description": "Version of the gzipped tarball.", + "example": "v1.3.0", + "readOnly": true, + "type": [ + "string", + "null" + ] } }, "type": [ @@ -1133,37 +2146,7 @@ ] }, "overrides": { - "description": "overrides of keys in the app.json manifest file", - "example": { - "env": { - "FOO": "bar", - "BAZ": "qux" - } - }, - "properties": { - "env": { - "description": "overrides of the env specified in the app.json manifest file", - "example": { - "FOO": 
"bar", - "BAZ": "qux" - }, - "readOnly": true, - "additionalProperties": false, - "patternProperties": { - "^\\w+$": { - "type": [ - "string" - ] - } - }, - "type": [ - "object" - ] - } - }, - "type": [ - "object" - ] + "$ref": "#/definitions/app-setup/definitions/overrides" } } }, @@ -1221,6 +2204,15 @@ } ] }, + "silent": { + "default": false, + "description": "whether to suppress email notification when transferring apps", + "example": false, + "readOnly": true, + "type": [ + "boolean" + ] + }, "state": { "description": "the current state of an app transfer", "enum": [ @@ -1257,6 +2249,9 @@ }, "recipient": { "$ref": "#/definitions/account/definitions/identity" + }, + "silent": { + "$ref": "#/definitions/app-transfer/definitions/silent" } }, "required": [ @@ -1430,8 +2425,8 @@ }, "git_url": { "description": "git repo URL of app", - "example": "git@heroku.com:example.git", - "pattern": "^git@heroku\\.com:[a-z][a-z0-9-]{3,30}\\.git$", + "example": "https://git.heroku.com/example.git", + "pattern": "^https://git\\.heroku\\.com/[a-z][a-z0-9-]{2,29}\\.git$", "readOnly": true, "type": [ "string" @@ -1468,7 +2463,7 @@ "name": { "description": "unique name of app", "example": "example", - "pattern": "^[a-z][a-z0-9-]{3,30}$", + "pattern": "^[a-z][a-z0-9-]{2,29}$", "readOnly": false, "type": [ "string" @@ -1516,13 +2511,22 @@ }, "web_url": { "description": "web URL of app", - "example": "http://example.herokuapp.com/", + "example": "https://example.herokuapp.com/", "format": "uri", - "pattern": "^http://[a-z][a-z0-9-]{3,30}\\.herokuapp\\.com/$", + "pattern": "^https?://[a-z][a-z0-9-]{3,30}\\.herokuapp\\.com/$", "readOnly": true, "type": [ "string" ] + }, + "acm": { + "description": "ACM status of this app", + "default": false, + "example": false, + "readOnly": true, + "type": [ + "boolean" + ] } }, "links": [ @@ -1592,6 +2596,26 @@ }, "title": "List" }, + { + "description": "List owned and collaborated apps (excludes organization apps).", + "href": "/users/{(%23%2Fdefinitions%2Faccount%2Fdefinitions%2Fidentity)}/apps", + "method": "GET", + "ranges": [ + "id", + "name", + "updated_at" + ], + "rel": "instances", + "targetSchema": { + "items": { + "$ref": "#/definitions/app" + }, + "type": [ + "array" + ] + }, + "title": "List Owned and Collaborated" + }, { "description": "Update an existing app.", "href": "/apps/{(%23%2Fdefinitions%2Fapp%2Fdefinitions%2Fidentity)}", @@ -1599,6 +2623,9 @@ "rel": "update", "schema": { "properties": { + "build_stack": { + "$ref": "#/definitions/stack/definitions/identity" + }, "maintenance": { "$ref": "#/definitions/app/definitions/maintenance" }, @@ -1623,6 +2650,21 @@ "buildpack_provided_description": { "$ref": "#/definitions/app/definitions/buildpack_provided_description" }, + "build_stack": { + "description": "identity of the stack that will be used for new builds", + "properties": { + "id": { + "$ref": "#/definitions/stack/definitions/id" + }, + "name": { + "$ref": "#/definitions/stack/definitions/name" + } + }, + "strictProperties": true, + "type": [ + "object" + ] + }, "created_at": { "$ref": "#/definitions/app/definitions/created_at" }, @@ -1653,6 +2695,21 @@ "object" ] }, + "organization": { + "description": "identity of organization", + "properties": { + "id": { + "$ref": "#/definitions/organization/definitions/id" + }, + "name": { + "$ref": "#/definitions/organization/definitions/name" + } + }, + "type": [ + "null", + "object" + ] + }, "region": { "description": "identity of app region", "properties": { @@ -1677,6 +2734,24 @@ "slug_size": { "$ref": 
"#/definitions/app/definitions/slug_size" }, + "space": { + "description": "identity of space", + "properties": { + "id": { + "$ref": "#/definitions/space/definitions/id" + }, + "name": { + "$ref": "#/definitions/space/definitions/name" + }, + "shield": { + "$ref": "#/definitions/space/definitions/shield" + } + }, + "type": [ + "null", + "object" + ] + }, "stack": { "description": "identity of app stack", "properties": { @@ -1702,9 +2777,10 @@ }, "build-result": { "$schema": "http://json-schema.org/draft-04/hyper-schema", + "deactivate_on": "2016-10-01", "description": "A build result contains the output from a build.", "title": "Heroku Build API - Build Result", - "stability": "production", + "stability": "deprecation", "strictProperties": true, "type": [ "object" @@ -1784,6 +2860,9 @@ }, "status": { "$ref": "#/definitions/build/definitions/status" + }, + "output_stream_url": { + "$ref": "#/definitions/build/definitions/output_stream_url" } }, "type": [ @@ -1800,7 +2879,7 @@ "items": { "$ref": "#/definitions/build-result/definitions/line" }, - "description": "A list of all the lines of a build's output.", + "description": "A list of all the lines of a build's output. This has been replaced by the `output_stream_url` attribute on the build resource.", "example": [ { "line": "-----> Ruby app detected\n", @@ -1820,6 +2899,24 @@ "object" ], "definitions": { + "buildpacks": { + "description": "buildpacks executed for this build, in order", + "type": [ + "array", + "null" + ], + "items": { + "description": "Buildpack to execute in a build", + "type": [ + "object" + ], + "properties": { + "url": { + "$ref": "#/definitions/buildpack-installation/definitions/url" + } + } + } + }, "created_at": { "description": "when build was created", "example": "2012-01-01T12:00:00Z", @@ -1845,9 +2942,52 @@ } ] }, + "output_stream_url": { + "description": "Build process output will be available from this URL as a stream. The stream is available as either `text/plain` or `text/event-stream`. 
Clients should be prepared to handle disconnects and can resume the stream by sending a `Range` header (for `text/plain`) or a `Last-Event-Id` header (for `text/event-stream`).", + "example": "https://build-output.heroku.com/streams/01234567-89ab-cdef-0123-456789abcdef", + "readOnly": true, + "type": [ + "string" + ] + }, + "release": { + "description": "release resulting from the build", + "strictProperties": true, + "properties": { + "id": { + "$ref": "#/definitions/release/definitions/id" + } + }, + "example": { + "id": "01234567-89ab-cdef-0123-456789abcdef" + }, + "readOnly": true, + "type": [ + "null", + "object" + ], + "definitions": { + "id": { + "description": "unique identifier of release", + "example": "01234567-89ab-cdef-0123-456789abcdef", + "type": [ + "string" + ] + } + } + }, "source_blob": { "description": "location of gzipped tarball of source code used to create build", "properties": { + "checksum": { + "description": "an optional checksum of the gzipped tarball for verifying its integrity", + "example": "SHA256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855", + "readOnly": true, + "type": [ + "null", + "string" + ] + }, "url": { "description": "URL where gzipped tar archive of source code for build was downloaded.", "example": "https://example.com/source.tgz?token=xyz", @@ -1905,6 +3045,9 @@ "object" ], "properties": { + "buildpacks": { + "$ref": "#/definitions/build/definitions/buildpacks" + }, "source_blob": { "$ref": "#/definitions/build/definitions/source_blob" } @@ -1949,15 +3092,36 @@ } ], "properties": { + "app": { + "description": "app that the build belongs to", + "properties": { + "id": { + "$ref": "#/definitions/app/definitions/id" + } + }, + "strictProperties": true, + "type": [ + "object" + ] + }, + "buildpacks": { + "$ref": "#/definitions/build/definitions/buildpacks" + }, "created_at": { "$ref": "#/definitions/build/definitions/created_at" }, "id": { "$ref": "#/definitions/build/definitions/id" }, + "output_stream_url": { + "$ref": "#/definitions/build/definitions/output_stream_url" + }, "source_blob": { "$ref": "#/definitions/build/definitions/source_blob" }, + "release": { + "$ref": "#/definitions/build/definitions/release" + }, "slug": { "description": "slug created by this build", "properties": { @@ -1994,11 +3158,140 @@ } } }, - "collaborator": { - "description": "A collaborator represents an account that has been given access to an app on Heroku.", + "buildpack-installation": { + "description": "A buildpack installation represents a buildpack that will be run against an app.", "$schema": "http://json-schema.org/draft-04/hyper-schema", "stability": "production", "strictProperties": true, + "title": "Heroku Platform API - Buildpack Installations", + "type": [ + "object" + ], + "definitions": { + "ordinal": { + "description": "determines the order in which the buildpacks will execute", + "example": 0, + "readOnly": true, + "type": [ + "integer" + ] + }, + "update": { + "additionalProperties": false, + "description": "Properties to update a buildpack installation", + "properties": { + "buildpack": { + "$ref": "#/definitions/buildpack-installation/definitions/url" + } + }, + "readOnly": false, + "required": [ + "buildpack" + ], + "type": [ + "object" + ] + }, + "url": { + "description": "location of the buildpack for the app. 
Either a url (unofficial buildpacks) or an internal urn (heroku official buildpacks).", + "example": "https://github.com/heroku/heroku-buildpack-ruby", + "readOnly": false, + "type": [ + "string" + ] + }, + "name": { + "description": "either the shorthand name (heroku official buildpacks) or url (unofficial buildpacks) of the buildpack for the app", + "example": "heroku/ruby", + "readOnly": false, + "type": [ + "string" + ] + } + }, + "links": [ + { + "description": "Update an app's buildpack installations.", + "href": "/apps/{(%23%2Fdefinitions%2Fapp%2Fdefinitions%2Fidentity)}/buildpack-installations", + "method": "PUT", + "rel": "update", + "schema": { + "properties": { + "updates": { + "description": "The buildpack attribute can accept a name, a url, or a urn.", + "items": { + "$ref": "#/definitions/buildpack-installation/definitions/update" + }, + "type": [ + "array" + ] + } + }, + "required": [ + "updates" + ], + "type": [ + "object" + ] + }, + "targetSchema": { + "items": { + "$ref": "#/definitions/buildpack-installation" + }, + "type": [ + "array" + ] + }, + "title": "Update" + }, + { + "description": "List an app's existing buildpack installations.", + "href": "/apps/{(%23%2Fdefinitions%2Fapp%2Fdefinitions%2Fidentity)}/buildpack-installations", + "method": "GET", + "rel": "instances", + "targetSchema": { + "items": { + "$ref": "#/definitions/buildpack-installation" + }, + "type": [ + "array" + ] + }, + "title": "List" + } + ], + "properties": { + "ordinal": { + "$ref": "#/definitions/buildpack-installation/definitions/ordinal" + }, + "buildpack": { + "description": "buildpack", + "properties": { + "url": { + "$ref": "#/definitions/buildpack-installation/definitions/url" + }, + "name": { + "$ref": "#/definitions/buildpack-installation/definitions/name" + } + }, + "type": [ + "object" + ] + } + } + }, + "collaborator": { + "description": "A collaborator represents an account that has been given access to an app on Heroku.", + "$schema": "http://json-schema.org/draft-04/hyper-schema", + "additionalProperties": false, + "required": [ + "app", + "created_at", + "id", + "updated_at", + "user" + ], + "stability": "production", "title": "Heroku Platform API - Collaborator", "type": [ "object" @@ -2124,12 +3417,38 @@ } ], "properties": { + "app": { + "description": "app collaborator belongs to", + "properties": { + "name": { + "$ref": "#/definitions/app/definitions/name" + }, + "id": { + "$ref": "#/definitions/app/definitions/id" + } + }, + "strictProperties": true, + "type": [ + "object" + ] + }, "created_at": { "$ref": "#/definitions/collaborator/definitions/created_at" }, "id": { "$ref": "#/definitions/collaborator/definitions/id" }, + "permissions": { + "type": [ + "array" + ], + "items": { + "$ref": "#/definitions/organization-app-permission" + } + }, + "role": { + "$ref": "#/definitions/organization/definitions/role" + }, "updated_at": { "$ref": "#/definitions/collaborator/definitions/updated_at" }, @@ -2139,6 +3458,9 @@ "email": { "$ref": "#/definitions/account/definitions/email" }, + "federated": { + "$ref": "#/definitions/account/definitions/federated" + }, "id": { "$ref": "#/definitions/account/definitions/id" } @@ -2189,16 +3511,26 @@ "targetSchema": { "$ref": "#/definitions/config-var/definitions/config_vars" }, - "title": "Info" + "title": "Info for App" }, { - "description": "Update config-vars for app. 
You can update existing config-vars by setting them again, and remove by setting it to `NULL`.", + "description": "Get config-vars for a release.", + "href": "/apps/{(%23%2Fdefinitions%2Fapp%2Fdefinitions%2Fidentity)}/releases/{(%23%2Fdefinitions%2Frelease%2Fdefinitions%2Fidentity)}/config-vars", + "method": "GET", + "rel": "self", + "targetSchema": { + "$ref": "#/definitions/config-var/definitions/config_vars" + }, + "title": "Info for App Release" + }, + { + "description": "Update config-vars for app. You can update existing config-vars by setting them again, and remove by setting it to `null`.", "href": "/apps/{(%23%2Fdefinitions%2Fapp%2Fdefinitions%2Fidentity)}/config-vars", "method": "PATCH", "rel": "update", "schema": { "additionalProperties": false, - "description": "hash of config changes – update values or delete by seting it to NULL", + "description": "hash of config changes – update values or delete by setting it to `null`", "example": { "FOO": "bar", "BAZ": "qux" @@ -2282,7 +3614,11 @@ ] }, "identity": { - "$ref": "#/definitions/credit/definitions/id" + "anyOf": [ + { + "$ref": "#/definitions/credit/definitions/id" + } + ] }, "title": { "description": "a name for credit", @@ -2301,6 +3637,37 @@ } }, "links": [ + { + "description": "Create a new credit.", + "href": "/account/credits", + "method": "POST", + "rel": "create", + "schema": { + "properties": { + "code1": { + "description": "first code from a discount card", + "example": "012abc", + "type": [ + "string" + ] + }, + "code2": { + "description": "second code from a discount card", + "example": "012abc", + "type": [ + "string" + ] + } + }, + "type": [ + "object" + ] + }, + "targetSchema": { + "$ref": "#/definitions/credit" + }, + "title": "Create" + }, { + "description": "Info for existing credit.", + "href": "/account/credits/{(%23%2Fdefinitions%2Fcredit%2Fdefinitions%2Fidentity)}", @@ -2370,9 +3737,27 @@ "string" ] }, + "cname": { + "description": "canonical name record, the address to point a domain at", + "example": "example.herokudns.com", + "readOnly": true, + "type": [ + "null", + "string" + ] + }, + "status": { + "description": "status of this record's cname", + "example": "pending", + "readOnly": true, + "type": [ + "string" + ] + }, "hostname": { "description": "full hostname", "example": "subdomain.example.com", + "format": "uri", "readOnly": true, "type": [ "string" @@ -2397,6 +3782,18 @@ } ] }, + "kind": { + "description": "type of domain name", + "enum": [ + "heroku", + "custom" + ], + "example": "custom", + "readOnly": true, + "type": [ + "string" + ] + }, "updated_at": { "description": "when domain was updated", "example": "2012-01-01T12:00:00Z", "format": "date-time", @@ -2468,6 +3865,23 @@ } ], "properties": { + "app": { + "description": "app that owns the domain", + "properties": { + "name": { + "$ref": "#/definitions/app/definitions/name" + }, + "id": { + "$ref": "#/definitions/app/definitions/id" + } + }, + "type": [ + "object" + ] + }, + "cname": { + "$ref": "#/definitions/domain/definitions/cname" + }, "created_at": { "$ref": "#/definitions/domain/definitions/created_at" }, @@ -2477,13 +3891,177 @@ "id": { "$ref": "#/definitions/domain/definitions/id" }, + "kind": { + "$ref": "#/definitions/domain/definitions/kind" + }, "updated_at": { "$ref": "#/definitions/domain/definitions/updated_at" + }, + "status": { + "$ref": "#/definitions/domain/definitions/status" + } } }, + "dyno-size": { + "description": "Dyno sizes are the values and details of sizes that can be assigned to dynos. 
This information can also be found at: [https://devcenter.heroku.com/articles/dyno-types](https://devcenter.heroku.com/articles/dyno-types).", + "$schema": "http://json-schema.org/draft-04/hyper-schema", + "stability": "prototype", + "strictProperties": true, + "title": "Heroku Platform API - Dyno Size", + "type": [ + "object" + ], + "definitions": { + "compute": { + "description": "minimum vCPUs, non-dedicated may get more depending on load", + "example": 1, + "readOnly": true, + "type": [ + "integer" + ] + }, + "dedicated": { + "description": "whether this dyno will be dedicated to one user", + "example": false, + "readOnly": true, + "type": [ + "boolean" + ] + }, + "id": { + "description": "unique identifier of this dyno size", + "example": "01234567-89ab-cdef-0123-456789abcdef", + "format": "uuid", + "readOnly": true, + "type": [ + "string" + ] + }, + "identity": { + "anyOf": [ + { + "$ref": "#/definitions/dyno-size/definitions/id" + }, + { + "$ref": "#/definitions/dyno-size/definitions/name" + } + ] + }, + "memory": { + "description": "amount of RAM in GB", + "example": 0.5, + "readOnly": true, + "type": [ + "number" + ] + }, + "name": { + "description": "the name of this dyno-size", + "example": "free", + "readOnly": true, + "type": [ + "string" + ] + }, + "cost": { + "description": "price information for this dyno size", + "readOnly": true, + "type": [ + "null", + "object" + ], + "definitions": { + "cents": { + "description": "price in cents per unit time", + "example": 0, + "readOnly": true, + "type": [ + "integer" + ] + }, + "unit": { + "description": "unit of price for dyno", + "readOnly": true, + "example": "month", + "type": [ + "string" + ] + } + } + }, + "dyno_units": { + "description": "unit of consumption for Heroku Enterprise customers", + "example": 0, + "readOnly": true, + "type": [ + "integer" + ] + }, + "private_space_only": { + "description": "whether this dyno can only be provisioned in a private space", + "example": false, + "readOnly": true, + "type": [ + "boolean" + ] + } + }, + "links": [ + { + "description": "Info for existing dyno size.", + "href": "/dyno-sizes/{(%23%2Fdefinitions%2Fdyno-size%2Fdefinitions%2Fidentity)}", + "method": "GET", + "rel": "self", + "targetSchema": { + "$ref": "#/definitions/dyno-size" + }, + "title": "Info" + }, + { + "description": "List existing dyno sizes.", + "href": "/dyno-sizes", + "method": "GET", + "rel": "instances", + "targetSchema": { + "items": { + "$ref": "#/definitions/dyno-size" + }, + "type": [ + "array" + ] + }, + "title": "List" + } + ], + "properties": { + "compute": { + "$ref": "#/definitions/dyno-size/definitions/compute" + }, + "cost": { + "$ref": "#/definitions/dyno-size/definitions/cost" + }, + "dedicated": { + "$ref": "#/definitions/dyno-size/definitions/dedicated" + }, + "dyno_units": { + "$ref": "#/definitions/dyno-size/definitions/dyno_units" + }, + "id": { + "$ref": "#/definitions/dyno-size/definitions/id" + }, + "memory": { + "$ref": "#/definitions/dyno-size/definitions/memory" + }, + "name": { + "$ref": "#/definitions/dyno-size/definitions/name" + }, + "private_space_only": { + "$ref": "#/definitions/dyno-size/definitions/private_space_only" + } } } }, "dyno": { - "description": "Dynos encapsulate running processes of an app on Heroku.", + "description": "Dynos encapsulate running processes of an app on Heroku. 
Detailed information about dyno sizes can be found at: [https://devcenter.heroku.com/articles/dyno-types](https://devcenter.heroku.com/articles/dyno-types).", "$schema": "http://json-schema.org/draft-04/hyper-schema", "stability": "production", "strictProperties": true, @@ -2573,9 +4151,18 @@ "string" ] }, + "force_no_tty": { + "description": "force an attached one-off dyno to not run in a tty", + "example": null, + "readOnly": false, + "type": [ + "boolean", + "null" + ] + }, "size": { - "description": "dyno size (default: \"1X\")", - "example": "1X", + "description": "dyno size (default: \"standard-1X\")", + "example": "standard-1X", "readOnly": false, "type": [ "string" @@ -2592,11 +4179,19 @@ "type": { "description": "type of process", "example": "run", - "readOnly": true, + "readOnly": false, "type": [ "string" ] }, + "time_to_live": { + "description": "seconds until dyno expires, after which it will soon be killed", + "example": 1800, + "readOnly": false, + "type": [ + "integer" + ] + }, "updated_at": { "description": "when process last changed state", "example": "2012-01-01T12:00:00Z", @@ -2624,8 +4219,17 @@ "env": { "$ref": "#/definitions/dyno/definitions/env" }, + "force_no_tty": { + "$ref": "#/definitions/dyno/definitions/force_no_tty" + }, "size": { "$ref": "#/definitions/dyno/definitions/size" + }, + "type": { + "$ref": "#/definitions/dyno/definitions/type" + }, + "time_to_live": { + "$ref": "#/definitions/dyno/definitions/time_to_live" } }, "required": [ @@ -2654,7 +4258,7 @@ "title": "Restart" }, { - "description": "Restart all dynos", + "description": "Restart all dynos.", "href": "/apps/{(%23%2Fdefinitions%2Fapp%2Fdefinitions%2Fidentity)}/dynos", "method": "DELETE", "rel": "empty", "targetSchema": { "additionalPoperties": false, "type": [ "object" ] }, "title": "Restart all" }, + { + "description": "Stop dyno.", + "href": "/apps/{(%23%2Fdefinitions%2Fapp%2Fdefinitions%2Fidentity)}/dynos/{(%23%2Fdefinitions%2Fdyno%2Fdefinitions%2Fidentity)}/actions/stop", + "method": "POST", + "rel": "empty", + "targetSchema": { + "additionalProperties": false, + "type": [ + "object" + ] + }, + "title": "Stop" + }, { "description": "Info for existing dyno.", "href": "/apps/{(%23%2Fdefinitions%2Fapp%2Fdefinitions%2Fidentity)}/dynos/{(%23%2Fdefinitions%2Fdyno%2Fdefinitions%2Fidentity)}", @@ -2723,6 +4340,20 @@ "object" ] }, + "app": { + "description": "app formation belongs to", + "properties": { + "name": { + "$ref": "#/definitions/app/definitions/name" + }, + "id": { + "$ref": "#/definitions/app/definitions/id" + } + }, + "type": [ + "object" + ] + }, "size": { "$ref": "#/definitions/dyno/definitions/size" }, @@ -2737,6 +4368,429 @@ } } }, + "event": { + "description": "An event represents an action performed on another API resource.", + "$schema": "http://json-schema.org/draft-04/hyper-schema", + "stability": "development", + "strictProperties": true, + "title": "Heroku Platform API - Event", + "type": [ + "object" + ], + "definitions": { + "action": { + "description": "the operation performed on the resource", + "enum": [ + "create", + "destroy", + "update" + ], + "example": "create", + "readOnly": true, + "type": [ + "string" + ] + }, + "created_at": { + "description": "when the event was created", + "example": "2012-01-01T12:00:00Z", + "format": "date-time", + "readOnly": true, + "type": [ + "string" + ] + }, + "data": { + "description": "the serialized resource affected by the event", + "example": { + }, + "anyOf": [ + { + "$ref": "#/definitions/account" + }, + { + "$ref": "#/definitions/add-on" + }, + { + "$ref": 
"#/definitions/add-on-attachment" + }, + { + "$ref": "#/definitions/app" + }, + { + "$ref": "#/definitions/app-formation-set" + }, + { + "$ref": "#/definitions/app-setup" + }, + { + "$ref": "#/definitions/app-transfer" + }, + { + "$ref": "#/definitions/build" + }, + { + "$ref": "#/definitions/collaborator" + }, + { + "$ref": "#/definitions/domain" + }, + { + "$ref": "#/definitions/dyno" + }, + { + "$ref": "#/definitions/failed-event" + }, + { + "$ref": "#/definitions/formation" + }, + { + "$ref": "#/definitions/inbound-ruleset" + }, + { + "$ref": "#/definitions/organization" + }, + { + "$ref": "#/definitions/release" + }, + { + "$ref": "#/definitions/space" + } + ], + "readOnly": true, + "type": [ + "object" + ] + }, + "id": { + "description": "unique identifier of an event", + "example": "01234567-89ab-cdef-0123-456789abcdef", + "format": "uuid", + "readOnly": true, + "type": [ + "string" + ] + }, + "identity": { + "anyOf": [ + { + "$ref": "#/definitions/event/definitions/id" + } + ] + }, + "published_at": { + "description": "when the event was published", + "example": "2012-01-01T12:00:00Z", + "format": "date-time", + "readOnly": true, + "type": [ + "null", + "string" + ] + }, + "resource": { + "description": "the type of resource affected", + "enum": [ + "addon", + "addon-attachment", + "app", + "app-setup", + "app-transfer", + "build", + "collaborator", + "domain", + "dyno", + "failed-event", + "formation", + "formation-set", + "inbound-ruleset", + "organization", + "release", + "space", + "user" + ], + "example": "app", + "readOnly": true, + "type": [ + "string" + ] + }, + "sequence": { + "description": "a numeric string representing the event's sequence", + "example": "1234567890", + "pattern": "^[0-9]{1,128}$", + "readOnly": true, + "type": [ + "null", + "string" + ] + }, + "updated_at": { + "description": "when the event was updated (same as created)", + "example": "2012-01-01T12:00:00Z", + "format": "date-time", + "readOnly": true, + "type": [ + "string" + ] + }, + "version": { + "description": "the event's API version string", + "example": "application/vnd.heroku+json; version=3", + "readOnly": true, + "type": [ + "string" + ] + } + }, + "links": [ + ], + "properties": { + "action": { + "$ref": "#/definitions/event/definitions/action" + }, + "actor": { + "description": "user that performed the operation", + "properties": { + "email": { + "$ref": "#/definitions/account/definitions/email" + }, + "id": { + "$ref": "#/definitions/account/definitions/id" + } + }, + "strictProperties": true, + "type": [ + "object" + ] + }, + "created_at": { + "$ref": "#/definitions/event/definitions/created_at" + }, + "data": { + "$ref": "#/definitions/event/definitions/data" + }, + "id": { + "$ref": "#/definitions/event/definitions/id" + }, + "previous_data": { + "description": "data fields that were changed during update with previous values", + "type": [ + "object" + ] + }, + "published_at": { + "$ref": "#/definitions/event/definitions/published_at" + }, + "resource": { + "$ref": "#/definitions/event/definitions/resource" + }, + "sequence": { + "$ref": "#/definitions/event/definitions/sequence" + }, + "updated_at": { + "$ref": "#/definitions/event/definitions/updated_at" + }, + "version": { + "$ref": "#/definitions/event/definitions/version" + } + } + }, + "failed-event": { + "description": "A failed event represents a failure of an action performed on another API resource.", + "$schema": "http://json-schema.org/draft-04/hyper-schema", + "stability": "development", + "strictProperties": true, + 
"title": "Heroku Platform API - Failed Event", + "type": [ + "object" + ], + "definitions": { + "action": { + "description": "The attempted operation performed on the resource.", + "enum": [ + "create", + "destroy", + "update", + "unknown" + ], + "example": "create", + "readOnly": true, + "type": [ + "string" + ] + }, + "error_id": { + "description": "ID of error raised.", + "example": "rate_limit", + "readOnly": true, + "type": [ + "string", + "null" + ] + }, + "message": { + "description": "A detailed error message.", + "example": "Your account reached the API rate limit\nPlease wait a few minutes before making new requests", + "readOnly": true, + "type": [ + "string" + ] + }, + "method": { + "description": "The HTTP method type of the failed action.", + "enum": [ + "DELETE", + "GET", + "HEAD", + "OPTIONS", + "PATCH", + "POST", + "PUT" + ], + "example": "POST", + "readOnly": true, + "type": [ + "string" + ] + }, + "code": { + "description": "An HTTP status code.", + "example": 404, + "readOnly": true, + "type": [ + "integer", + "null" + ] + }, + "identity": { + "anyOf": [ + { + "$ref": "#/definitions/event/definitions/id" + } + ] + }, + "path": { + "description": "The path of the attempted operation.", + "example": "/apps/my-app", + "readOnly": true, + "type": [ + "string" + ] + }, + "resource_id": { + "description": "Unique identifier of a resource.", + "example": "01234567-89ab-cdef-0123-456789abcdef", + "format": "uuid", + "readOnly": true, + "type": [ + "string" + ] + } + }, + "links": [ + ], + "properties": { + "action": { + "$ref": "#/definitions/failed-event/definitions/action" + }, + "code": { + "$ref": "#/definitions/failed-event/definitions/code" + }, + "error_id": { + "$ref": "#/definitions/failed-event/definitions/error_id" + }, + "message": { + "$ref": "#/definitions/failed-event/definitions/message" + }, + "method": { + "$ref": "#/definitions/failed-event/definitions/method" + }, + "path": { + "$ref": "#/definitions/failed-event/definitions/path" + }, + "resource": { + "description": "The related resource of the failed action.", + "properties": { + "id": { + "$ref": "#/definitions/failed-event/definitions/resource_id" + }, + "name": { + "$ref": "#/definitions/event/definitions/resource" + } + }, + "strictProperties": true, + "type": [ + "object", + "null" + ] + } + } + }, + "filter-apps": { + "description": "Filters are special endpoints to allow for API consumers to specify a subset of resources to consume in order to reduce the number of requests that are performed. Each filter endpoint endpoint is responsible for determining its supported request format. 
The endpoints are over POST in order to handle large request bodies without hitting request uri query length limitations, but the requests themselves are idempotent and will not have side effects.", + "$schema": "http://json-schema.org/draft-04/hyper-schema", + "stability": "development", + "title": "Heroku Platform API - Filters", + "type": [ + "object" + ], + "definitions": { + "filter": { + "type": [ + "object" + ], + "properties": { + "in": { + "$ref": "#/definitions/filter-apps/definitions/in" + } + } + }, + "in": { + "type": [ + "object" + ], + "properties": { + "id": { + "$ref": "#/definitions/filter-apps/definitions/id" + } + } + }, + "id": { + "type": [ + "array" + ], + "items": { + "$ref": "#/definitions/app/definitions/id" + } + } + }, + "links": [ + { + "description": "Request an apps list filtered by app id.", + "title": "Apps", + "href": "/filters/apps", + "method": "POST", + "ranges": [ + "id", + "name", + "updated_at" + ], + "rel": "instances", + "schema": { + "$ref": "#/definitions/filter-apps/definitions/filter" + }, + "targetSchema": { + "items": { + "$ref": "#/definitions/organization-app" + }, + "type": [ + "array" + ] + } + } + ] + }, "formation": { "description": "The formation of processes that should be maintained for an app. Update the formation to scale processes or change dyno sizes. Available process type names and commands are defined by the `process_types` attribute for the [slug](#slug) currently released on an app.", "$schema": "http://json-schema.org/draft-04/hyper-schema", @@ -2792,8 +4846,8 @@ ] }, "size": { - "description": "dyno size (default: \"1X\")", - "example": "1X", + "description": "dyno size (default: \"standard-1X\")", + "example": "standard-1X", "readOnly": false, "type": [ "string" @@ -2803,6 +4857,7 @@ "description": "type of process to maintain", "example": "web", "readOnly": true, + "pattern": "^[-\\w]{1,128}$", "type": [ "string" ] @@ -2820,19 +4875,19 @@ "additionalProperties": false, "description": "Properties to update a process type", "properties": { - "process": { - "$ref": "#/definitions/formation/definitions/identity" - }, "quantity": { "$ref": "#/definitions/formation/definitions/quantity" }, "size": { "$ref": "#/definitions/formation/definitions/size" + }, + "type": { + "$ref": "#/definitions/formation/definitions/type" } }, "readOnly": false, "required": [ - "process" + "type" ], "type": [ "object" @@ -2879,14 +4934,7 @@ "items": { "$ref": "#/definitions/formation/definitions/update" }, - "description": "Array with formation updates. Each element must have \"process\", the id or name of the process type to be updated, and can optionally update its \"quantity\" or \"size\".", - "example": [ - { - "process": "web", - "quantity": 1, - "size": "2X" - } - ] + "description": "Array with formation updates. Each element must have \"type\", the id or name of the process type to be updated, and can optionally update its \"quantity\" or \"size\"." 
} }, "required": [ @@ -2934,6 +4982,20 @@ } ], "properties": { + "app": { + "description": "app formation belongs to", + "properties": { + "name": { + "$ref": "#/definitions/app/definitions/name" + }, + "id": { + "$ref": "#/definitions/app/definitions/id" + } + }, + "type": [ + "object" + ] + }, "command": { "$ref": "#/definitions/formation/definitions/command" }, @@ -2957,6 +5019,880 @@ } } }, + "identity-provider": { + "description": "Identity Providers represent the SAML configuration of an Organization.", + "$schema": "http://json-schema.org/draft-04/hyper-schema", + "stability": "production", + "strictProperties": true, + "title": "Heroku Platform API - Identity Provider", + "type": [ + "object" + ], + "definitions": { + "certificate": { + "description": "raw contents of the public certificate (eg: .crt or .pem file)", + "example": "-----BEGIN CERTIFICATE----- ...", + "readOnly": false, + "type": [ + "string" + ] + }, + "created_at": { + "description": "when provider record was created", + "example": "2012-01-01T12:00:00Z", + "format": "date-time", + "readOnly": true, + "type": [ + "string" + ] + }, + "entity_id": { + "description": "URL identifier provided by the identity provider", + "example": "https://customer-domain.idp.com", + "readOnly": false, + "type": [ + "string" + ] + }, + "id": { + "description": "unique identifier of this identity provider", + "example": "01234567-89ab-cdef-0123-456789abcdef", + "format": "uuid", + "readOnly": true, + "type": [ + "string" + ] + }, + "slo_target_url": { + "description": "single log out URL for this identity provider", + "example": "https://example.com/idp/logout", + "readOnly": false, + "type": [ + "string" + ] + }, + "sso_target_url": { + "description": "single sign on URL for this identity provider", + "example": "https://example.com/idp/login", + "readOnly": false, + "type": [ + "string" + ] + }, + "updated_at": { + "description": "when the identity provider record was updated", + "example": "2012-01-01T12:00:00Z", + "format": "date-time", + "readOnly": true, + "type": [ + "string" + ] + } + }, + "links": [ + { + "description": "Get a list of an organization's Identity Providers", + "href": "/organizations/{(%23%2Fdefinitions%2Forganization%2Fdefinitions%2Fname)}/identity-providers", + "method": "GET", + "rel": "instances", + "targetSchema": { + "items": { + "$ref": "#/definitions/identity-provider" + }, + "type": [ + "array" + ] + }, + "title": "List" + }, + { + "description": "Create an Identity Provider for an organization", + "href": "/organizations/{(%23%2Fdefinitions%2Forganization%2Fdefinitions%2Fname)}/identity-providers", + "method": "POST", + "rel": "create", + "schema": { + "properties": { + "certificate": { + "$ref": "#/definitions/identity-provider/definitions/certificate" + }, + "entity_id": { + "$ref": "#/definitions/identity-provider/definitions/entity_id" + }, + "slo_target_url": { + "$ref": "#/definitions/identity-provider/definitions/slo_target_url" + }, + "sso_target_url": { + "$ref": "#/definitions/identity-provider/definitions/sso_target_url" + } + }, + "required": [ + "certificate", + "sso_target_url", + "entity_id" + ], + "type": [ + "object" + ] + }, + "targetSchema": { + "$ref": "#/definitions/identity-provider" + }, + "title": "Create" + }, + { + "description": "Update an organization's Identity Provider", + "href": "/organizations/{(%23%2Fdefinitions%2Forganization%2Fdefinitions%2Fname)}/identity-providers/{(%23%2Fdefinitions%2Fidentity-provider%2Fdefinitions%2Fid)}", + "method": "PATCH", + "rel": "update", 
+ "schema": { + "properties": { + "certificate": { + "$ref": "#/definitions/identity-provider/definitions/certificate" + }, + "entity_id": { + "$ref": "#/definitions/identity-provider/definitions/entity_id" + }, + "slo_target_url": { + "$ref": "#/definitions/identity-provider/definitions/slo_target_url" + }, + "sso_target_url": { + "$ref": "#/definitions/identity-provider/definitions/sso_target_url" + } + }, + "type": [ + "object" + ] + }, + "targetSchema": { + "$ref": "#/definitions/identity-provider" + }, + "title": "Update" + }, + { + "description": "Delete an organization's Identity Provider", + "href": "/organizations/{(%23%2Fdefinitions%2Forganization%2Fdefinitions%2Fname)}/identity-providers/{(%23%2Fdefinitions%2Fidentity-provider%2Fdefinitions%2Fid)}", + "method": "DELETE", + "rel": "destroy", + "targetSchema": { + "$ref": "#/definitions/identity-provider" + }, + "title": "Delete" + } + ], + "properties": { + "certificate": { + "$ref": "#/definitions/identity-provider/definitions/certificate" + }, + "created_at": { + "$ref": "#/definitions/identity-provider/definitions/created_at" + }, + "entity_id": { + "$ref": "#/definitions/identity-provider/definitions/entity_id" + }, + "id": { + "$ref": "#/definitions/identity-provider/definitions/id" + }, + "slo_target_url": { + "$ref": "#/definitions/identity-provider/definitions/slo_target_url" + }, + "sso_target_url": { + "$ref": "#/definitions/identity-provider/definitions/sso_target_url" + }, + "organization": { + "description": "organization associated with this identity provider", + "properties": { + "name": { + "$ref": "#/definitions/organization/definitions/name" + } + }, + "type": [ + "null", + "object" + ] + }, + "updated_at": { + "$ref": "#/definitions/identity-provider/definitions/updated_at" + } + } + }, + "inbound-ruleset": { + "description": "An inbound-ruleset is a collection of rules that specify what hosts can or cannot connect to an application.", + "$schema": "http://json-schema.org/draft-04/hyper-schema", + "stability": "prototype", + "strictProperties": true, + "title": "Heroku Platform API - Inbound Ruleset", + "type": [ + "object" + ], + "definitions": { + "action": { + "description": "states whether the connection is allowed or denied", + "example": "allow", + "readOnly": false, + "type": [ + "string" + ], + "enum": [ + "allow", + "deny" + ] + }, + "source": { + "description": "is the request’s source in CIDR notation", + "example": "1.1.1.1/1", + "pattern": "^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/([0-9]|[1-2][0-9]|3[0-2]))$", + "readOnly": false, + "type": [ + "string" + ] + }, + "created_at": { + "description": "when inbound-ruleset was created", + "example": "2012-01-01T12:00:00Z", + "format": "date-time", + "readOnly": true, + "type": [ + "string" + ] + }, + "id": { + "description": "unique identifier of an inbound-ruleset", + "example": "01234567-89ab-cdef-0123-456789abcdef", + "format": "uuid", + "readOnly": true, + "type": [ + "string" + ] + }, + "identity": { + "anyOf": [ + { + "$ref": "#/definitions/inbound-ruleset/definitions/id" + } + ] + }, + "rule": { + "description": "the combination of an IP address in CIDR notation and whether to allow or deny it's traffic.", + "type": [ + "object" + ], + "properties": { + "action": { + "$ref": "#/definitions/inbound-ruleset/definitions/action" + }, + "source": { + "$ref": "#/definitions/inbound-ruleset/definitions/source" + } + }, + "required": [ + "source", + "action" + ] + } + }, + "links": [ + { 
+ "description": "Current inbound ruleset for a space", + "href": "/spaces/{(%23%2Fdefinitions%2Fspace%2Fdefinitions%2Fidentity)}/inbound-ruleset", + "method": "GET", + "rel": "self", + "targetSchema": { + "$ref": "#/definitions/inbound-ruleset" + }, + "title": "Info" + }, + { + "description": "Info on an existing Inbound Ruleset", + "href": "/spaces/{(%23%2Fdefinitions%2Fspace%2Fdefinitions%2Fidentity)}/inbound-rulesets/{(%23%2Fdefinitions%2Finbound-ruleset%2Fdefinitions%2Fidentity)}", + "method": "GET", + "rel": "self", + "targetSchema": { + "$ref": "#/definitions/inbound-ruleset" + }, + "title": "Info" + }, + { + "description": "List all inbound rulesets for a space", + "href": "/spaces/{(%23%2Fdefinitions%2Fspace%2Fdefinitions%2Fidentity)}/inbound-rulesets", + "method": "GET", + "rel": "instances", + "targetSchema": { + "items": { + "$ref": "#/definitions/inbound-ruleset" + }, + "type": [ + "array" + ] + }, + "title": "List" + }, + { + "description": "Create a new inbound ruleset", + "href": "/spaces/{(%23%2Fdefinitions%2Fspace%2Fdefinitions%2Fidentity)}/inbound-ruleset", + "method": "PUT", + "rel": "create", + "schema": { + "type": [ + "object" + ], + "properties": { + "rules": { + "type": [ + "array" + ], + "items": { + "$ref": "#/definitions/inbound-ruleset/definitions/rule" + } + } + } + }, + "title": "Create" + } + ], + "properties": { + "id": { + "$ref": "#/definitions/inbound-ruleset/definitions/id" + }, + "created_at": { + "$ref": "#/definitions/inbound-ruleset/definitions/created_at" + }, + "rules": { + "type": [ + "array" + ], + "items": { + "$ref": "#/definitions/inbound-ruleset/definitions/rule" + } + }, + "created_by": { + "$ref": "#/definitions/account/definitions/email" + } + } + }, + "invitation": { + "description": "An invitation represents an invite sent to a user to use the Heroku platform.", + "$schema": "http://json-schema.org/draft-04/hyper-schema", + "stability": "production", + "strictProperties": true, + "title": "Heroku Platform API - Invitation", + "type": [ + "object" + ], + "definitions": { + "created_at": { + "description": "when invitation was created", + "example": "2012-01-01T12:00:00Z", + "format": "date-time", + "readOnly": true, + "type": [ + "string" + ] + }, + "identity": { + "anyOf": [ + { + "$ref": "#/definitions/invitation/definitions/token" + } + ] + }, + "receive_newsletter": { + "description": "whether this user should receive a newsletter or not", + "example": false, + "readOnly": true, + "type": [ + "boolean" + ] + }, + "verification_required": { + "description": "if the invitation requires verification", + "example": false, + "readOnly": true, + "type": [ + "boolean" + ] + }, + "token": { + "description": "Unique identifier of an invitation", + "example": "01234567-89ab-cdef-0123-456789abcdef", + "format": "uuid", + "readOnly": true, + "type": [ + "string" + ] + }, + "phone_number": { + "description": "Phone number to send verification code", + "example": "+1 123-123-1234", + "type": [ + "string" + ] + }, + "method": { + "description": "Transport used to send verification code", + "example": "sms", + "default": "sms", + "type": [ + "string" + ], + "enum": [ + "call", + "sms" + ] + }, + "verification_code": { + "description": "Value used to verify invitation", + "example": "123456", + "type": [ + "string" + ] + } + }, + "links": [ + { + "description": "Info for invitation.", + "href": "/invitations/{(%23%2Fdefinitions%2Finvitation%2Fdefinitions%2Fidentity)}", + "method": "GET", + "rel": "self", + "title": "Info" + }, + { + "description": 
"Invite a user.", + "href": "/invitations", + "method": "POST", + "rel": "self", + "schema": { + "properties": { + "email": { + "$ref": "#/definitions/account/definitions/email" + }, + "name": { + "$ref": "#/definitions/account/definitions/name" + } + }, + "required": [ + "email", + "name" + ], + "type": [ + "object" + ] + }, + "title": "Create" + }, + { + "description": "Send a verification code for an invitation via SMS/phone call.", + "href": "/invitations/{(%23%2Fdefinitions%2Finvitation%2Fdefinitions%2Fidentity)}/actions/send-verification", + "method": "POST", + "rel": "empty", + "schema": { + "properties": { + "phone_number": { + "$ref": "#/definitions/invitation/definitions/phone_number" + }, + "method": { + "$ref": "#/definitions/invitation/definitions/method" + } + }, + "required": [ + "phone_number" + ], + "type": [ + "object" + ] + }, + "title": "Send Verification Code" + }, + { + "description": "Verify an invitation using a verification code.", + "href": "/invitations/{(%23%2Fdefinitions%2Finvitation%2Fdefinitions%2Fidentity)}/actions/verify", + "method": "POST", + "rel": "self", + "schema": { + "properties": { + "verification_code": { + "$ref": "#/definitions/invitation/definitions/verification_code" + } + }, + "required": [ + "verification_code" + ], + "type": [ + "object" + ] + }, + "title": "Verify" + }, + { + "description": "Finalize Invitation and Create Account.", + "href": "/invitations/{(%23%2Fdefinitions%2Finvitation%2Fdefinitions%2Fidentity)}", + "method": "PATCH", + "rel": "update", + "schema": { + "properties": { + "password": { + "$ref": "#/definitions/account/definitions/password" + }, + "password_confirmation": { + "$ref": "#/definitions/account/definitions/password" + }, + "receive_newsletter": { + "$ref": "#/definitions/invitation/definitions/receive_newsletter" + } + }, + "required": [ + "password", + "password_confirmation" + ], + "type": [ + "object" + ] + }, + "title": "Finalize" + } + ], + "properties": { + "verification_required": { + "$ref": "#/definitions/invitation/definitions/verification_required" + }, + "created_at": { + "$ref": "#/definitions/invitation/definitions/created_at" + }, + "user": { + "properties": { + "email": { + "$ref": "#/definitions/account/definitions/email" + }, + "id": { + "$ref": "#/definitions/account/definitions/id" + } + }, + "strictProperties": true, + "type": [ + "object" + ] + } + } + }, + "invoice-address": { + "$schema": "http://json-schema.org/draft-04/hyper-schema", + "description": "An invoice address represents the address that should be listed on an invoice.", + "title": "Heroku Vault API - Invoice Address", + "stability": "development", + "type": [ + "object" + ], + "definitions": { + "address_1": { + "type": [ + "string" + ], + "description": "invoice street address line 1", + "example": "40 Hickory Blvd." 
+ }, + "address_2": { + "type": [ + "string" + ], + "description": "invoice street address line 2", + "example": "Suite 300" + }, + "city": { + "type": [ + "string" + ], + "description": "invoice city", + "example": "Seattle" + }, + "country": { + "type": [ + "string" + ], + "description": "country", + "example": "US" + }, + "heroku_id": { + "type": [ + "string" + ], + "description": "heroku_id identifier reference", + "example": "user930223902@heroku.com", + "readOnly": true + }, + "identity": { + "anyOf": [ + { + "$ref": "#/definitions/invoice-address/definitions/heroku_id" + } + ] + }, + "other": { + "type": [ + "string" + ], + "description": "metadata / additional information to go on invoice", + "example": "Company ABC Inc. VAT 903820" + }, + "postal_code": { + "type": [ + "string" + ], + "description": "invoice zip code", + "example": "98101" + }, + "state": { + "type": [ + "string" + ], + "description": "invoice state", + "example": "WA" + }, + "use_invoice_address": { + "type": [ + "boolean" + ], + "description": "flag to use the invoice address for an account or not", + "example": true, + "default": false + } + }, + "links": [ + { + "description": "Retrieve existing invoice address.", + "href": "/account/invoice-address", + "method": "GET", + "rel": "self", + "title": "info" + }, + { + "description": "Update invoice address for an account.", + "href": "/account/invoice-address", + "method": "PUT", + "rel": "self", + "title": "update", + "schema": { + "properties": { + "address_1": { + "$ref": "#/definitions/invoice-address/definitions/address_1" + }, + "address_2": { + "$ref": "#/definitions/invoice-address/definitions/address_2" + }, + "city": { + "$ref": "#/definitions/invoice-address/definitions/city" + }, + "country": { + "$ref": "#/definitions/invoice-address/definitions/country" + }, + "other": { + "$ref": "#/definitions/invoice-address/definitions/other" + }, + "postal_code": { + "$ref": "#/definitions/invoice-address/definitions/postal_code" + }, + "state": { + "$ref": "#/definitions/invoice-address/definitions/state" + }, + "use_invoice_address": { + "$ref": "#/definitions/invoice-address/definitions/use_invoice_address" + } + }, + "type": [ + "object" + ] + } + } + ], + "properties": { + "address_1": { + "$ref": "#/definitions/invoice-address/definitions/address_1" + }, + "address_2": { + "$ref": "#/definitions/invoice-address/definitions/address_2" + }, + "city": { + "$ref": "#/definitions/invoice-address/definitions/city" + }, + "country": { + "$ref": "#/definitions/invoice-address/definitions/country" + }, + "heroku_id": { + "$ref": "#/definitions/invoice-address/definitions/identity" + }, + "other": { + "$ref": "#/definitions/invoice-address/definitions/other" + }, + "postal_code": { + "$ref": "#/definitions/invoice-address/definitions/postal_code" + }, + "state": { + "$ref": "#/definitions/invoice-address/definitions/state" + }, + "use_invoice_address": { + "$ref": "#/definitions/invoice-address/definitions/use_invoice_address" + } + } + }, + "invoice": { + "$schema": "http://json-schema.org/draft-04/hyper-schema", + "description": "An invoice is an itemized bill of goods for an account which includes pricing and charges.", + "stability": "development", + "strictProperties": true, + "title": "Heroku Platform API - Invoice", + "type": [ + "object" + ], + "definitions": { + "charges_total": { + "description": "total charges on this invoice", + "example": 100.0, + "readOnly": true, + "type": [ + "number" + ] + }, + "created_at": { + "description": "when invoice was 
created", + "example": "2012-01-01T12:00:00Z", + "format": "date-time", + "readOnly": true, + "type": [ + "string" + ] + }, + "credits_total": { + "description": "total credits on this invoice", + "example": 100.0, + "readOnly": true, + "type": [ + "number" + ] + }, + "id": { + "description": "unique identifier of this invoice", + "example": "01234567-89ab-cdef-0123-456789abcdef", + "format": "uuid", + "readOnly": true, + "type": [ + "string" + ] + }, + "identity": { + "anyOf": [ + { + "$ref": "#/definitions/invoice/definitions/number" + } + ] + }, + "number": { + "description": "human readable invoice number", + "example": 9403943, + "readOnly": true, + "type": [ + "integer" + ] + }, + "period_end": { + "description": "the ending date that the invoice covers", + "example": "01/31/2014", + "readOnly": true, + "type": [ + "string" + ] + }, + "period_start": { + "description": "the starting date that this invoice covers", + "example": "01/01/2014", + "readOnly": true, + "type": [ + "string" + ] + }, + "state": { + "description": "payment status for this invoice (pending, successful, failed)", + "example": 1, + "readOnly": true, + "type": [ + "integer" + ] + }, + "total": { + "description": "combined total of charges and credits on this invoice", + "example": 100.0, + "readOnly": true, + "type": [ + "number" + ] + }, + "updated_at": { + "description": "when invoice was updated", + "example": "2012-01-01T12:00:00Z", + "format": "date-time", + "readOnly": true, + "type": [ + "string" + ] + } + }, + "links": [ + { + "description": "Info for existing invoice.", + "href": "/account/invoices/{(%23%2Fdefinitions%2Finvoice%2Fdefinitions%2Fidentity)}", + "method": "GET", + "rel": "self", + "targetSchema": { + "$ref": "#/definitions/invoice" + }, + "title": "Info" + }, + { + "description": "List existing invoices.", + "href": "/account/invoices", + "method": "GET", + "rel": "instances", + "targetSchema": { + "items": { + "$ref": "#/definitions/invoice" + }, + "type": [ + "array" + ] + }, + "title": "List" + } + ], + "properties": { + "charges_total": { + "$ref": "#/definitions/invoice/definitions/charges_total" + }, + "created_at": { + "$ref": "#/definitions/invoice/definitions/created_at" + }, + "credits_total": { + "$ref": "#/definitions/invoice/definitions/credits_total" + }, + "id": { + "$ref": "#/definitions/invoice/definitions/id" + }, + "number": { + "$ref": "#/definitions/invoice/definitions/number" + }, + "period_end": { + "$ref": "#/definitions/invoice/definitions/period_end" + }, + "period_start": { + "$ref": "#/definitions/invoice/definitions/period_start" + }, + "state": { + "$ref": "#/definitions/invoice/definitions/state" + }, + "total": { + "$ref": "#/definitions/invoice/definitions/total" + }, + "updated_at": { + "$ref": "#/definitions/invoice/definitions/updated_at" + } + } + }, "key": { "description": "Keys represent public SSH keys associated with an account and are used to authorize accounts as they are performing git operations.", "$schema": "http://json-schema.org/draft-04/hyper-schema", @@ -3039,51 +5975,13 @@ } }, "links": [ - { - "description": "Create a new key.", - "href": "/account/keys", - "method": "POST", - "rel": "create", - "schema": { - "properties": { - "public_key": { - "$ref": "#/definitions/key/definitions/public_key" - } - }, - "required": [ - "public_key" - ], - "type": [ - "object" - ] - }, - "targetSchema": { - "$ref": "#/definitions/key" - }, - "title": "Create" - }, - { - "description": "Delete an existing key", - "href": 
"/account/keys/{(%23%2Fdefinitions%2Fkey%2Fdefinitions%2Fidentity)}", - "method": "DELETE", - "rel": "destroy", - "targetSchema": { - "$ref": "#/definitions/key" - }, - "title": "Delete" - }, { "description": "Info for existing key.", "href": "/account/keys/{(%23%2Fdefinitions%2Fkey%2Fdefinitions%2Fidentity)}", "method": "GET", "rel": "self", "targetSchema": { - "items": { - "$ref": "#/definitions/key" - }, - "type": [ - "array" - ] + "$ref": "#/definitions/key" }, "title": "Info" }, @@ -3092,6 +5990,14 @@ "href": "/account/keys", "method": "GET", "rel": "instances", + "targetSchema": { + "items": { + "$ref": "#/definitions/key" + }, + "type": [ + "array" + ] + }, "title": "List" } ], @@ -3120,7 +6026,7 @@ } }, "log-drain": { - "description": "[Log drains](https://devcenter.heroku.com/articles/logging#syslog-drains) provide a way to forward your Heroku logs to an external syslog server for long-term archiving. This external service must be configured to receive syslog packets from Heroku, whereupon its URL can be added to an app using this API. Some addons will add a log drain when they are provisioned to an app. These drains can only be removed by removing the add-on.", + "description": "[Log drains](https://devcenter.heroku.com/articles/log-drains) provide a way to forward your Heroku logs to an external syslog server for long-term archiving. This external service must be configured to receive syslog packets from Heroku, whereupon its URL can be added to an app using this API. Some add-ons will add a log drain when they are provisioned to an app. These drains can only be removed by removing the add-on.", "$schema": "http://json-schema.org/draft-04/hyper-schema", "stability": "production", "strictProperties": true, @@ -3130,11 +6036,17 @@ ], "definitions": { "addon": { - "description": "addon that created the drain", - "example": "example", + "description": "add-on that created the drain", + "example": { + "id": "01234567-89ab-cdef-0123-456789abcdef", + "name": "singing-swiftly-1242" + }, "properties": { "id": { - "$ref": "#/definitions/addon/definitions/id" + "$ref": "#/definitions/add-on/definitions/id" + }, + "name": { + "$ref": "#/definitions/add-on/definitions/name" } }, "readOnly": true, @@ -3161,11 +6073,21 @@ "string" ] }, - "identity": { + "query_identity": { "anyOf": [ { "$ref": "#/definitions/log-drain/definitions/id" }, + { + "$ref": "#/definitions/log-drain/definitions/url" + }, + { + "$ref": "#/definitions/log-drain/definitions/token" + } + ] + }, + "identity": { + "anyOf": [ { "$ref": "#/definitions/log-drain/definitions/url" } @@ -3223,7 +6145,7 @@ }, { "description": "Delete an existing log drain. 
Log drains added by add-ons can only be removed by removing the add-on.", - "href": "/apps/{(%23%2Fdefinitions%2Fapp%2Fdefinitions%2Fidentity)}/log-drains/{(%23%2Fdefinitions%2Flog-drain%2Fdefinitions%2Fidentity)}", + "href": "/apps/{(%23%2Fdefinitions%2Fapp%2Fdefinitions%2Fidentity)}/log-drains/{(%23%2Fdefinitions%2Flog-drain%2Fdefinitions%2Fquery_identity)}", "method": "DELETE", "rel": "destroy", "targetSchema": { @@ -3233,7 +6155,7 @@ }, { "description": "Info for existing log drain.", - "href": "/apps/{(%23%2Fdefinitions%2Fapp%2Fdefinitions%2Fidentity)}/log-drains/{(%23%2Fdefinitions%2Flog-drain%2Fdefinitions%2Fidentity)}", + "href": "/apps/{(%23%2Fdefinitions%2Fapp%2Fdefinitions%2Fidentity)}/log-drains/{(%23%2Fdefinitions%2Flog-drain%2Fdefinitions%2Fquery_identity)}", "method": "GET", "rel": "self", "targetSchema": { @@ -3544,6 +6466,16 @@ ] }, "title": "List" + }, + { + "description": "Regenerate OAuth tokens. This endpoint is only available to direct authorizations or privileged OAuth clients.", + "href": "/oauth/authorizations/{(%23%2Fdefinitions%2Foauth-authorization%2Fdefinitions%2Fidentity)}/actions/regenerate-tokens", + "method": "POST", + "rel": "update", + "targetSchema": { + "$ref": "#/definitions/oauth-authorization" + }, + "title": "Regenerate" } ], "properties": { @@ -3632,6 +6564,24 @@ }, "updated_at": { "$ref": "#/definitions/oauth-authorization/definitions/updated_at" + }, + "user": { + "description": "authenticated user associated with this authorization", + "properties": { + "id": { + "$ref": "#/definitions/account/definitions/id" + }, + "email": { + "$ref": "#/definitions/account/definitions/email" + }, + "full_name": { + "$ref": "#/definitions/account/definitions/name" + } + }, + "strictProperties": true, + "type": [ + "object" + ] } } }, @@ -3795,6 +6745,16 @@ "$ref": "#/definitions/oauth-client" }, "title": "Update" + }, + { + "description": "Rotate credentials for an OAuth client", + "href": "/oauth/clients/{(%23%2Fdefinitions%2Foauth-client%2Fdefinitions%2Fidentity)}/actions/rotate-credentials", + "method": "POST", + "rel": "update", + "targetSchema": { + "$ref": "#/definitions/oauth-client" + }, + "title": "Rotate Credentials" } ], "properties": { @@ -3994,6 +6954,16 @@ "$ref": "#/definitions/oauth-token" }, "title": "Create" + }, + { + "description": "Revoke OAuth access token.", + "href": "/oauth/tokens/{(%23%2Fdefinitions%2Foauth-token%2Fdefinitions%2Fidentity)}", + "method": "DELETE", + "rel": "destroy", + "targetSchema": { + "$ref": "#/definitions/oauth-token" + }, + "title": "Delete" } ], "properties": { @@ -4108,11 +7078,36 @@ } } }, + "organization-add-on": { + "$schema": "http://json-schema.org/draft-04/hyper-schema", + "description": "A list of add-ons the Organization uses across all apps", + "stability": "production", + "title": "Heroku Platform API - Organization Add-on", + "type": [ + "object" + ], + "links": [ + { + "description": "List add-ons used across all Organization apps", + "href": "/organizations/{(%23%2Fdefinitions%2Forganization%2Fdefinitions%2Fidentity)}/addons", + "method": "GET", + "rel": "instances", + "targetSchema": { + "items": { + "$ref": "#/definitions/add-on" + }, + "type": [ + "array" + ] + }, + "title": "List For Organization" + } + ] + }, "organization-app-collaborator": { "description": "An organization collaborator represents an account that has been given access to an organization app on Heroku.", "$schema": "http://json-schema.org/draft-04/hyper-schema", "stability": "prototype", - "strictProperties": true, "title": 
"Heroku Platform API - Organization App Collaborator", "type": [ "object" @@ -4128,7 +7123,7 @@ }, "links": [ { - "description": "Create a new collaborator on an organization app. Use this endpoint instead of the `/apps/{app_id_or_name}/collaborator` endpoint when you want the collaborator to be granted [privileges] (https://devcenter.heroku.com/articles/org-users-access#roles) according to their role in the organization.", + "description": "Create a new collaborator on an organization app. Use this endpoint instead of the `/apps/{app_id_or_name}/collaborator` endpoint when you want the collaborator to be granted [permissions] (https://devcenter.heroku.com/articles/org-users-access#roles-and-app-permissions) according to their role in the organization.", "href": "/organizations/apps/{(%23%2Fdefinitions%2Fapp%2Fdefinitions%2Fidentity)}/collaborators", "method": "POST", "rel": "create", @@ -4149,7 +7144,7 @@ ] }, "targetSchema": { - "$ref": "#/definitions/collaborator" + "$ref": "#/definitions/organization-app-collaborator" }, "title": "Create" }, @@ -4159,7 +7154,7 @@ "method": "DELETE", "rel": "destroy", "targetSchema": { - "$ref": "#/definitions/collaborator" + "$ref": "#/definitions/organization-app-collaborator" }, "title": "Delete" }, @@ -4169,10 +7164,20 @@ "method": "GET", "rel": "self", "targetSchema": { - "$ref": "#/definitions/collaborator" + "$ref": "#/definitions/organization-app-collaborator" }, "title": "Info" }, + { + "description": "Update an existing collaborator from an organization app.", + "href": "/organizations/apps/{(%23%2Fdefinitions%2Forganization-app%2Fdefinitions%2Fidentity)}/collaborators/{(%23%2Fdefinitions%2Forganization-app-collaborator%2Fdefinitions%2Fidentity)}", + "method": "PATCH", + "rel": "update", + "targetSchema": { + "$ref": "#/definitions/organization-app-collaborator" + }, + "title": "Update" + }, { "description": "List collaborators on an organization app.", "href": "/organizations/apps/{(%23%2Fdefinitions%2Forganization-app%2Fdefinitions%2Fidentity)}/collaborators", @@ -4180,7 +7185,7 @@ "rel": "instances", "targetSchema": { "items": { - "$ref": "#/definitions/collaborator" + "$ref": "#/definitions/organization-app-collaborator" }, "type": [ "array" @@ -4190,6 +7195,21 @@ } ], "properties": { + "app": { + "description": "app collaborator belongs to", + "properties": { + "name": { + "$ref": "#/definitions/app/definitions/name" + }, + "id": { + "$ref": "#/definitions/app/definitions/id" + } + }, + "strictProperties": true, + "type": [ + "object" + ] + }, "created_at": { "$ref": "#/definitions/collaborator/definitions/created_at" }, @@ -4208,6 +7228,9 @@ "email": { "$ref": "#/definitions/account/definitions/email" }, + "federated": { + "$ref": "#/definitions/account/definitions/federated" + }, "id": { "$ref": "#/definitions/account/definitions/id" } @@ -4283,6 +7306,9 @@ "region": { "$ref": "#/definitions/region/definitions/name" }, + "space": { + "$ref": "#/definitions/space/definitions/name" + }, "stack": { "$ref": "#/definitions/stack/definitions/name" } @@ -4475,6 +7501,21 @@ "slug_size": { "$ref": "#/definitions/app/definitions/slug_size" }, + "space": { + "description": "identity of space", + "properties": { + "id": { + "$ref": "#/definitions/space/definitions/id" + }, + "name": { + "$ref": "#/definitions/space/definitions/name" + } + }, + "type": [ + "null", + "object" + ] + }, "stack": { "description": "identity of app stack", "properties": { @@ -4497,18 +7538,577 @@ } } }, + "organization-feature": { + "description": "An organization feature 
represents a feature enabled on an organization account.", + "$schema": "http://json-schema.org/draft-04/hyper-schema", + "stability": "prototype", + "strictProperties": true, + "title": "Heroku Platform API - Organization Feature", + "type": [ + "object" + ], + "definitions": { + "created_at": { + "description": "when organization feature was created", + "example": "2012-01-01T12:00:00Z", + "format": "date-time", + "readOnly": true, + "type": [ + "string" + ] + }, + "description": { + "description": "description of organization feature", + "example": "Causes account to example.", + "readOnly": true, + "type": [ + "string" + ] + }, + "doc_url": { + "description": "documentation URL of organization feature", + "example": "http://devcenter.heroku.com/articles/example", + "readOnly": true, + "type": [ + "string" + ] + }, + "enabled": { + "description": "whether or not account feature has been enabled", + "example": true, + "readOnly": false, + "type": [ + "boolean" + ] + }, + "id": { + "description": "unique identifier of organization feature", + "example": "01234567-89ab-cdef-0123-456789abcdef", + "format": "uuid", + "readOnly": true, + "type": [ + "string" + ] + }, + "identity": { + "anyOf": [ + { + "$ref": "#/definitions/organization-feature/definitions/id" + }, + { + "$ref": "#/definitions/organization-feature/definitions/name" + } + ] + }, + "name": { + "description": "unique name of organization feature", + "example": "name", + "readOnly": true, + "type": [ + "string" + ] + }, + "state": { + "description": "state of organization feature", + "example": "public", + "readOnly": true, + "type": [ + "string" + ] + }, + "updated_at": { + "description": "when organization feature was updated", + "example": "2012-01-01T12:00:00Z", + "format": "date-time", + "readOnly": true, + "type": [ + "string" + ] + } + }, + "links": [ + { + "description": "Info for an existing account feature.", + "href": "/organizations/{(%23%2Fdefinitions%2Forganization%2Fdefinitions%2Fidentity)}/features/{(%23%2Fdefinitions%2Forganization-feature%2Fdefinitions%2Fidentity)}", + "method": "GET", + "rel": "self", + "targetSchema": { + "$ref": "#/definitions/organization-feature" + }, + "title": "Info" + }, + { + "description": "List existing organization features.", + "href": "/organizations/{(%23%2Fdefinitions%2Forganization%2Fdefinitions%2Fidentity)}/features", + "method": "GET", + "rel": "instances", + "targetSchema": { + "items": { + "$ref": "#/definitions/organization-feature" + }, + "type": [ + "array" + ] + }, + "title": "List" + } + ], + "properties": { + "created_at": { + "$ref": "#/definitions/account-feature/definitions/created_at" + }, + "description": { + "$ref": "#/definitions/account-feature/definitions/description" + }, + "doc_url": { + "$ref": "#/definitions/account-feature/definitions/doc_url" + }, + "enabled": { + "$ref": "#/definitions/account-feature/definitions/enabled" + }, + "id": { + "$ref": "#/definitions/account-feature/definitions/id" + }, + "name": { + "$ref": "#/definitions/account-feature/definitions/name" + }, + "state": { + "$ref": "#/definitions/account-feature/definitions/state" + }, + "updated_at": { + "$ref": "#/definitions/account-feature/definitions/updated_at" + } + } + }, + "organization-invitation": { + "description": "An organization invitation represents an invite to an organization.", + "$schema": "http://json-schema.org/draft-04/hyper-schema", + "stability": "prototype", + "strictProperties": true, + "title": "Heroku Platform API - Organization Invitation", + "type": [ + 
"object" + ], + "definitions": { + "created_at": { + "description": "when invitation was created", + "example": "2012-01-01T12:00:00Z", + "format": "date-time", + "readOnly": true, + "type": [ + "string" + ] + }, + "identity": { + "anyOf": [ + { + "$ref": "#/definitions/organization-invitation/definitions/id" + } + ] + }, + "id": { + "description": "Unique identifier of an invitation", + "example": "01234567-89ab-cdef-0123-456789abcdef", + "format": "uuid", + "readOnly": true, + "type": [ + "string" + ] + }, + "token": { + "description": "Special token for invitation", + "example": "614ae25aa2d4802096cd7c18625b526c", + "readOnly": true, + "type": [ + "string" + ] + }, + "updated_at": { + "description": "when invitation was updated", + "example": "2012-01-01T12:00:00Z", + "format": "date-time", + "readOnly": true, + "type": [ + "string" + ] + } + }, + "links": [ + { + "description": "Get a list of an organization's Identity Providers", + "title": "List", + "href": "/organizations/{(%23%2Fdefinitions%2Forganization%2Fdefinitions%2Fname)}/invitations", + "method": "GET", + "rel": "instances", + "targetSchema": { + "items": { + "$ref": "#/definitions/organization-invitation" + }, + "type": [ + "array" + ] + } + }, + { + "description": "Create Organization Invitation", + "title": "Create", + "href": "/organizations/{(%23%2Fdefinitions%2Forganization%2Fdefinitions%2Fidentity)}/invitations", + "method": "PUT", + "rel": "update", + "schema": { + "properties": { + "email": { + "$ref": "#/definitions/account/definitions/email" + }, + "role": { + "$ref": "#/definitions/organization/definitions/role" + } + }, + "required": [ + "email", + "role" + ], + "type": [ + "object" + ] + } + }, + { + "description": "Revoke an organization invitation.", + "title": "Revoke", + "href": "/organizations/{(%23%2Fdefinitions%2Forganization%2Fdefinitions%2Fidentity)}/invitations/{(%23%2Fdefinitions%2Forganization-invitation%2Fdefinitions%2Fidentity)}", + "method": "DELETE", + "rel": "self" + }, + { + "description": "Get an invitation by its token", + "title": "Get", + "href": "/organizations/invitations/{(%23%2Fdefinitions%2Forganization-invitation%2Fdefinitions%2Ftoken)}", + "method": "GET", + "rel": "instances", + "targetSchema": { + "$ref": "#/definitions/organization-invitation" + } + }, + { + "description": "Accept Organization Invitation", + "title": "Accept", + "href": "/organizations/invitations/{(%23%2Fdefinitions%2Forganization-invitation%2Fdefinitions%2Ftoken)}/accept", + "method": "POST", + "rel": "create", + "targetSchema": { + "$ref": "#/definitions/organization-member" + } + } + ], + "properties": { + "created_at": { + "$ref": "#/definitions/organization-invitation/definitions/created_at" + }, + "id": { + "$ref": "#/definitions/organization-invitation/definitions/id" + }, + "invited_by": { + "properties": { + "email": { + "$ref": "#/definitions/account/definitions/email" + }, + "id": { + "$ref": "#/definitions/account/definitions/id" + }, + "name": { + "$ref": "#/definitions/account/definitions/name" + } + }, + "strictProperties": true, + "type": [ + "object" + ] + }, + "organization": { + "properties": { + "id": { + "$ref": "#/definitions/organization/definitions/id" + }, + "name": { + "$ref": "#/definitions/organization/definitions/name" + } + }, + "strictProperties": true, + "type": [ + "object" + ] + }, + "role": { + "$ref": "#/definitions/organization/definitions/role" + }, + "updated_at": { + "$ref": "#/definitions/organization-invitation/definitions/updated_at" + }, + "user": { + "properties": { + 
"email": { + "$ref": "#/definitions/account/definitions/email" + }, + "id": { + "$ref": "#/definitions/account/definitions/id" + }, + "name": { + "$ref": "#/definitions/account/definitions/name" + } + }, + "strictProperties": true, + "type": [ + "object" + ] + } + } + }, + "organization-invoice": { + "$schema": "http://json-schema.org/draft-04/hyper-schema", + "description": "An organization invoice is an itemized bill of goods for an organization which includes pricing and charges.", + "stability": "prototype", + "strictProperties": true, + "title": "Heroku Platform API - Organization Invoice", + "type": [ + "object" + ], + "definitions": { + "addons_total": { + "description": "total add-ons charges in on this invoice", + "example": 25000, + "readOnly": true, + "type": [ + "integer" + ] + }, + "database_total": { + "description": "total database charges on this invoice", + "example": 25000, + "readOnly": true, + "type": [ + "integer" + ] + }, + "charges_total": { + "description": "total charges on this invoice", + "example": 0, + "readOnly": true, + "type": [ + "integer" + ] + }, + "created_at": { + "description": "when invoice was created", + "example": "2012-01-01T12:00:00Z", + "format": "date-time", + "readOnly": true, + "type": [ + "string" + ] + }, + "credits_total": { + "description": "total credits on this invoice", + "example": 100000, + "readOnly": true, + "type": [ + "integer" + ] + }, + "dyno_units": { + "description": "The total amount of dyno units consumed across dyno types.", + "example": 1.92, + "readOnly": true, + "type": [ + "number" + ] + }, + "id": { + "description": "unique identifier of this invoice", + "example": "01234567-89ab-cdef-0123-456789abcdef", + "format": "uuid", + "readOnly": true, + "type": [ + "string" + ] + }, + "identity": { + "anyOf": [ + { + "$ref": "#/definitions/organization-invoice/definitions/number" + } + ] + }, + "number": { + "description": "human readable invoice number", + "example": 9403943, + "readOnly": true, + "type": [ + "integer" + ] + }, + "payment_status": { + "description": "Status of the invoice payment.", + "example": "Paid", + "readOnly": true, + "type": [ + "string" + ] + }, + "platform_total": { + "description": "total platform charges on this invoice", + "example": 50000, + "readOnly": true, + "type": [ + "integer" + ] + }, + "period_end": { + "description": "the ending date that the invoice covers", + "example": "01/31/2014", + "readOnly": true, + "type": [ + "string" + ] + }, + "period_start": { + "description": "the starting date that this invoice covers", + "example": "01/01/2014", + "readOnly": true, + "type": [ + "string" + ] + }, + "state": { + "description": "payment status for this invoice (pending, successful, failed)", + "example": 1, + "readOnly": true, + "type": [ + "integer" + ] + }, + "total": { + "description": "combined total of charges and credits on this invoice", + "example": 100000, + "readOnly": true, + "type": [ + "integer" + ] + }, + "updated_at": { + "description": "when invoice was updated", + "example": "2012-01-01T12:00:00Z", + "format": "date-time", + "readOnly": true, + "type": [ + "string" + ] + }, + "weighted_dyno_hours": { + "description": "The total amount of hours consumed across dyno types.", + "example": 1488, + "readOnly": true, + "type": [ + "number" + ] + } + }, + "links": [ + { + "description": "Info for existing invoice.", + "href": 
"/organizations/{(%23%2Fdefinitions%2Forganization%2Fdefinitions%2Fidentity)}/invoices/{(%23%2Fdefinitions%2Forganization-invoice%2Fdefinitions%2Fidentity)}", + "method": "GET", + "rel": "self", + "targetSchema": { + "$ref": "#/definitions/organization-invoice" + }, + "title": "Info" + }, + { + "description": "List existing invoices.", + "href": "/organizations/{(%23%2Fdefinitions%2Forganization%2Fdefinitions%2Fidentity)}/invoices", + "method": "GET", + "rel": "instances", + "targetSchema": { + "items": { + "$ref": "#/definitions/organization-invoice" + }, + "type": [ + "array" + ] + }, + "title": "List" + } + ], + "properties": { + "addons_total": { + "$ref": "#/definitions/organization-invoice/definitions/addons_total" + }, + "database_total": { + "$ref": "#/definitions/organization-invoice/definitions/database_total" + }, + "charges_total": { + "$ref": "#/definitions/organization-invoice/definitions/charges_total" + }, + "created_at": { + "$ref": "#/definitions/organization-invoice/definitions/created_at" + }, + "credits_total": { + "$ref": "#/definitions/organization-invoice/definitions/credits_total" + }, + "dyno_units": { + "$ref": "#/definitions/organization-invoice/definitions/dyno_units" + }, + "id": { + "$ref": "#/definitions/organization-invoice/definitions/id" + }, + "number": { + "$ref": "#/definitions/organization-invoice/definitions/number" + }, + "payment_status": { + "$ref": "#/definitions/organization-invoice/definitions/payment_status" + }, + "period_end": { + "$ref": "#/definitions/organization-invoice/definitions/period_end" + }, + "period_start": { + "$ref": "#/definitions/organization-invoice/definitions/period_start" + }, + "platform_total": { + "$ref": "#/definitions/organization-invoice/definitions/platform_total" + }, + "state": { + "$ref": "#/definitions/organization-invoice/definitions/state" + }, + "total": { + "$ref": "#/definitions/organization-invoice/definitions/total" + }, + "updated_at": { + "$ref": "#/definitions/organization-invoice/definitions/updated_at" + }, + "weighted_dyno_hours": { + "$ref": "#/definitions/organization-invoice/definitions/weighted_dyno_hours" + } + } + }, "organization-member": { "$schema": "http://json-schema.org/draft-04/hyper-schema", "description": "An organization member is an individual with access to an organization.", "stability": "prototype", - "strictProperties": true, + "additionalProperties": false, + "required": [ + "created_at", + "email", + "federated", + "updated_at" + ], "title": "Heroku Platform API - Organization Member", "type": [ "object" ], "definitions": { "created_at": { - "description": "when organization-member was created", + "description": "when the membership record was created", "example": "2012-01-01T12:00:00Z", "format": "date-time", "readOnly": true, @@ -4524,15 +8124,52 @@ "string" ] }, + "federated": { + "description": "whether the user is federated and belongs to an Identity Provider", + "example": false, + "readOnly": true, + "type": [ + "boolean" + ] + }, + "id": { + "description": "unique identifier of organization member", + "example": "01234567-89ab-cdef-0123-456789abcdef", + "format": "uuid", + "readOnly": true, + "type": [ + "string" + ] + }, "identity": { "anyOf": [ { "$ref": "#/definitions/organization-member/definitions/email" + }, + { + "$ref": "#/definitions/organization-member/definitions/id" } ] }, + "name": { + "description": "full name of the organization member", + "example": "Tina Edmonds", + "readOnly": true, + "type": [ + "string", + "null" + ] + }, + 
"two_factor_authentication": { + "description": "whether the Enterprise organization member has two factor authentication enabled", + "example": true, + "readOnly": true, + "type": [ + "boolean" + ] + }, "updated_at": { - "description": "when organization-member was updated", + "description": "when the membership record was updated", "example": "2012-01-01T12:00:00Z", "format": "date-time", "readOnly": true, @@ -4552,6 +8189,9 @@ "email": { "$ref": "#/definitions/organization-member/definitions/email" }, + "federated": { + "$ref": "#/definitions/organization-member/definitions/federated" + }, "role": { "$ref": "#/definitions/organization/definitions/role" } @@ -4569,6 +8209,66 @@ }, "title": "Create or Update" }, + { + "description": "Create a new organization member.", + "href": "/organizations/{(%23%2Fdefinitions%2Forganization%2Fdefinitions%2Fidentity)}/members", + "method": "POST", + "rel": "create", + "schema": { + "properties": { + "email": { + "$ref": "#/definitions/organization-member/definitions/email" + }, + "federated": { + "$ref": "#/definitions/organization-member/definitions/federated" + }, + "role": { + "$ref": "#/definitions/organization/definitions/role" + } + }, + "required": [ + "email", + "role" + ], + "type": [ + "object" + ] + }, + "targetSchema": { + "$ref": "#/definitions/organization-member" + }, + "title": "Create" + }, + { + "description": "Update an organization member.", + "href": "/organizations/{(%23%2Fdefinitions%2Forganization%2Fdefinitions%2Fidentity)}/members", + "method": "PATCH", + "rel": "update", + "schema": { + "properties": { + "email": { + "$ref": "#/definitions/organization-member/definitions/email" + }, + "federated": { + "$ref": "#/definitions/organization-member/definitions/federated" + }, + "role": { + "$ref": "#/definitions/organization/definitions/role" + } + }, + "required": [ + "email", + "role" + ], + "type": [ + "object" + ] + }, + "targetSchema": { + "$ref": "#/definitions/organization-member" + }, + "title": "update" + }, { "description": "Remove a member from the organization.", "href": "/organizations/{(%23%2Fdefinitions%2Forganization%2Fdefinitions%2Fidentity)}/members/{(%23%2Fdefinitions%2Forganization-member%2Fdefinitions%2Fidentity)}", @@ -4583,6 +8283,9 @@ "description": "List members of the organization.", "href": "/organizations/{(%23%2Fdefinitions%2Forganization%2Fdefinitions%2Fidentity)}/members", "method": "GET", + "ranges": [ + "email" + ], "rel": "instances", "targetSchema": { "items": { @@ -4593,6 +8296,21 @@ ] }, "title": "List" + }, + { + "description": "List the apps of a member.", + "href": "/organizations/{(%23%2Fdefinitions%2Forganization%2Fdefinitions%2Fidentity)}/members/{(%23%2Fdefinitions%2Forganization-member%2Fdefinitions%2Fidentity)}/apps", + "method": "GET", + "rel": "instances", + "targetSchema": { + "items": { + "$ref": "#/definitions/organization-app" + }, + "type": [ + "array" + ] + }, + "title": "List" } ], "properties": { @@ -4602,11 +8320,117 @@ "email": { "$ref": "#/definitions/organization-member/definitions/email" }, + "federated": { + "$ref": "#/definitions/organization-member/definitions/federated" + }, + "id": { + "$ref": "#/definitions/organization-member/definitions/id" + }, "role": { "$ref": "#/definitions/organization/definitions/role" }, + "two_factor_authentication": { + "$ref": "#/definitions/organization-member/definitions/two_factor_authentication" + }, "updated_at": { "$ref": "#/definitions/organization-member/definitions/updated_at" + }, + "user": { + "description": "user information 
for the membership", + "properties": { + "email": { + "$ref": "#/definitions/account/definitions/email" + }, + "id": { + "$ref": "#/definitions/account/definitions/id" + }, + "name": { + "$ref": "#/definitions/account/definitions/name" + } + }, + "strictProperties": true, + "type": [ + "object" + ] + } + } + }, + "organization-preferences": { + "description": "Tracks an organization's preferences", + "$schema": "http://json-schema.org/draft-04/hyper-schema", + "stability": "prototype", + "strictProperties": true, + "title": "Heroku Platform API - Organization Preferences", + "type": [ + "object" + ], + "definitions": { + "default-permission": { + "description": "The default permission used when adding new members to the organization", + "example": "member", + "readOnly": false, + "enum": [ + "admin", + "member", + "viewer", + null + ], + "type": [ + "null", + "string" + ] + }, + "identity": { + "$ref": "#/definitions/organization/definitions/identity" + }, + "whitelisting-enabled": { + "description": "Whether whitelisting rules should be applied to add-on installations", + "example": true, + "readOnly": false, + "type": [ + "boolean", + "null" + ] + } + }, + "links": [ + { + "description": "Retrieve Organization Preferences", + "href": "/organizations/{(%23%2Fdefinitions%2Forganization-preferences%2Fdefinitions%2Fidentity)}/preferences", + "method": "GET", + "rel": "self", + "targetSchema": { + "$ref": "#/definitions/organization-preferences" + }, + "title": "List" + }, + { + "description": "Update Organization Preferences", + "href": "/organizations/{(%23%2Fdefinitions%2Forganization-preferences%2Fdefinitions%2Fidentity)}/preferences", + "method": "PATCH", + "rel": "update", + "schema": { + "type": [ + "object" + ], + "properties": { + "whitelisting-enabled": { + "$ref": "#/definitions/organization-preferences/definitions/whitelisting-enabled" + } + } + }, + "targetSchema": { + "$ref": "#/definitions/organization-preferences" + }, + "title": "Update" + } + ], + "properties": { + "default-permission": { + "$ref": "#/definitions/organization-preferences/definitions/default-permission" + }, + "whitelisting-enabled": { + "$ref": "#/definitions/organization-preferences/definitions/whitelisting-enabled" } } }, @@ -4620,9 +8444,18 @@ "object" ], "definitions": { + "created_at": { + "description": "when the organization was created", + "example": "2012-01-01T12:00:00Z", + "format": "date-time", + "readOnly": true, + "type": [ + "string" + ] + }, "credit_card_collections": { "description": "whether charges incurred by the org are paid by credit card.", - "example": "true", + "example": true, "readOnly": true, "type": [ "boolean" @@ -4630,19 +8463,131 @@ }, "default": { "description": "whether to use this organization when none is specified", - "example": "true", + "example": true, "readOnly": false, "type": [ "boolean" ] }, + "id": { + "description": "unique identifier of organization", + "example": "01234567-89ab-cdef-0123-456789abcdef", + "format": "uuid", + "readOnly": true, + "type": [ + "string" + ] + }, "identity": { "anyOf": [ { "$ref": "#/definitions/organization/definitions/name" + }, + { + "$ref": "#/definitions/organization/definitions/id" } ] }, + "address_1": { + "type": [ + "string" + ], + "description": "street address line 1", + "example": "40 Hickory Lane" + }, + "address_2": { + "type": [ + "string" + ], + "description": "street address line 2", + "example": "Suite 103" + }, + "card_number": { + "type": [ + "string" + ], + "description": "encrypted card number of payment method", 
+ "example": "encrypted-card-number" + }, + "city": { + "type": [ + "string" + ], + "description": "city", + "example": "San Francisco" + }, + "country": { + "type": [ + "string" + ], + "description": "country", + "example": "US" + }, + "cvv": { + "type": [ + "string" + ], + "description": "card verification value", + "example": "123" + }, + "expiration_month": { + "type": [ + "string" + ], + "description": "expiration month", + "example": "11" + }, + "expiration_year": { + "type": [ + "string" + ], + "description": "expiration year", + "example": "2014" + }, + "first_name": { + "type": [ + "string" + ], + "description": "the first name for payment method", + "example": "Jason" + }, + "last_name": { + "type": [ + "string" + ], + "description": "the last name for payment method", + "example": "Walker" + }, + "other": { + "type": [ + "string" + ], + "description": "metadata", + "example": "Additional information for payment method" + }, + "postal_code": { + "type": [ + "string" + ], + "description": "postal code", + "example": "90210" + }, + "state": { + "type": [ + "string" + ], + "description": "state", + "example": "CA" + }, + "membership_limit": { + "description": "upper limit of members allowed in an organization.", + "example": 25, + "readOnly": true, + "type": [ + "number", + "null" + ] + }, "name": { "description": "unique name of organization", "example": "example", @@ -4653,7 +8598,7 @@ }, "provisioned_licenses": { "description": "whether the org is provisioned licenses by salesforce.", - "example": "true", + "example": true, "readOnly": true, "type": [ "boolean" @@ -4663,11 +8608,35 @@ "description": "role in the organization", "enum": [ "admin", + "collaborator", "member", - "collaborator" + "owner", + null ], "example": "admin", "readOnly": true, + "type": [ + "null", + "string" + ] + }, + "type": { + "description": "type of organization.", + "example": "team", + "enum": [ + "enterprise", + "team" + ], + "readOnly": true, + "type": [ + "string" + ] + }, + "updated_at": { + "description": "when the organization was updated", + "example": "2012-01-01T12:00:00Z", + "format": "date-time", + "readOnly": true, "type": [ "string" ] @@ -4690,7 +8659,14 @@ "title": "List" }, { - "description": "Set or unset the organization as your default organization.", + "description": "Info for an organization.", + "href": "/organizations/{(%23%2Fdefinitions%2Forganization%2Fdefinitions%2Fidentity)}", + "method": "GET", + "rel": "self", + "title": "Info" + }, + { + "description": "Update organization properties.", "href": "/organizations/{(%23%2Fdefinitions%2Forganization%2Fdefinitions%2Fidentity)}", "method": "PATCH", "rel": "update", @@ -4698,6 +8674,9 @@ "properties": { "default": { "$ref": "#/definitions/organization/definitions/default" + }, + "name": { + "$ref": "#/definitions/organization/definitions/name" } }, "type": [ @@ -4708,15 +8687,96 @@ "$ref": "#/definitions/organization" }, "title": "Update" + }, + { + "description": "Create a new organization.", + "href": "/organizations", + "method": "POST", + "rel": "create", + "schema": { + "properties": { + "name": { + "$ref": "#/definitions/organization/definitions/name" + }, + "address_1": { + "$ref": "#/definitions/organization/definitions/address_1" + }, + "address_2": { + "$ref": "#/definitions/organization/definitions/address_2" + }, + "card_number": { + "$ref": "#/definitions/organization/definitions/card_number" + }, + "city": { + "$ref": "#/definitions/organization/definitions/city" + }, + "country": { + "$ref": 
"#/definitions/organization/definitions/country" + }, + "cvv": { + "$ref": "#/definitions/organization/definitions/cvv" + }, + "expiration_month": { + "$ref": "#/definitions/organization/definitions/expiration_month" + }, + "expiration_year": { + "$ref": "#/definitions/organization/definitions/expiration_year" + }, + "first_name": { + "$ref": "#/definitions/organization/definitions/first_name" + }, + "last_name": { + "$ref": "#/definitions/organization/definitions/last_name" + }, + "other": { + "$ref": "#/definitions/organization/definitions/other" + }, + "postal_code": { + "$ref": "#/definitions/organization/definitions/postal_code" + }, + "state": { + "$ref": "#/definitions/organization/definitions/state" + } + }, + "required": [ + "name" + ], + "type": [ + "object" + ] + }, + "targetSchema": { + "$ref": "#/definitions/organization" + }, + "title": "Create" + }, + { + "description": "Delete an existing organization.", + "href": "/organizations/{(%23%2Fdefinitions%2Forganization%2Fdefinitions%2Fidentity)}", + "method": "DELETE", + "rel": "destroy", + "targetSchema": { + "$ref": "#/definitions/organization" + }, + "title": "Delete" } ], "properties": { + "id": { + "$ref": "#/definitions/organization/definitions/id" + }, + "created_at": { + "$ref": "#/definitions/organization/definitions/created_at" + }, "credit_card_collections": { "$ref": "#/definitions/organization/definitions/credit_card_collections" }, "default": { "$ref": "#/definitions/organization/definitions/default" }, + "membership_limit": { + "$ref": "#/definitions/organization/definitions/membership_limit" + }, "name": { "$ref": "#/definitions/organization/definitions/name" }, @@ -4725,6 +8785,1017 @@ }, "role": { "$ref": "#/definitions/organization/definitions/role" + }, + "type": { + "$ref": "#/definitions/organization/definitions/type" + }, + "updated_at": { + "$ref": "#/definitions/organization/definitions/updated_at" + } + } + }, + "outbound-ruleset": { + "description": "An outbound-ruleset is a collection of rules that specify what hosts Dynos are allowed to communicate with. 
", + "$schema": "http://json-schema.org/draft-04/hyper-schema", + "stability": "prototype", + "strictProperties": true, + "title": "Heroku Platform API - Outbound Ruleset", + "type": [ + "object" + ], + "definitions": { + "target": { + "description": "is the target destination in CIDR notation", + "example": "1.1.1.1/1", + "pattern": "^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/([0-9]|[1-2][0-9]|3[0-2]))$", + "readOnly": false, + "type": [ + "string" + ] + }, + "created_at": { + "description": "when outbound-ruleset was created", + "example": "2012-01-01T12:00:00Z", + "format": "date-time", + "readOnly": true, + "type": [ + "string" + ] + }, + "id": { + "description": "unique identifier of an outbound-ruleset", + "example": "01234567-89ab-cdef-0123-456789abcdef", + "format": "uuid", + "readOnly": true, + "type": [ + "string" + ] + }, + "port": { + "description": "an endpoint of communication in an operating system.", + "example": 80, + "readOnly": false, + "type": [ + "integer" + ] + }, + "protocol": { + "description": "formal standards and policies comprised of rules, procedures and formats that define communication between two or more devices over a network", + "example": "tcp", + "readOnly": false, + "type": [ + "string" + ] + }, + "identity": { + "anyOf": [ + { + "$ref": "#/definitions/outbound-ruleset/definitions/id" + } + ] + }, + "rule": { + "description": "the combination of an IP address in CIDR notation, a from_port, to_port and protocol.", + "type": [ + "object" + ], + "properties": { + "target": { + "$ref": "#/definitions/outbound-ruleset/definitions/target" + }, + "from_port": { + "$ref": "#/definitions/outbound-ruleset/definitions/port" + }, + "to_port": { + "$ref": "#/definitions/outbound-ruleset/definitions/port" + }, + "protocol": { + "$ref": "#/definitions/outbound-ruleset/definitions/protocol" + } + }, + "required": [ + "target", + "from_port", + "to_port", + "protocol" + ] + } + }, + "links": [ + { + "description": "Current outbound ruleset for a space", + "href": "/spaces/{(%23%2Fdefinitions%2Fspace%2Fdefinitions%2Fidentity)}/outbound-ruleset", + "method": "GET", + "rel": "self", + "targetSchema": { + "$ref": "#/definitions/outbound-ruleset" + }, + "title": "Info" + }, + { + "description": "Info on an existing Outbound Ruleset", + "href": "/spaces/{(%23%2Fdefinitions%2Fspace%2Fdefinitions%2Fidentity)}/outbound-rulesets/{(%23%2Fdefinitions%2Foutbound-ruleset%2Fdefinitions%2Fidentity)}", + "method": "GET", + "rel": "self", + "targetSchema": { + "$ref": "#/definitions/outbound-ruleset" + }, + "title": "Info" + }, + { + "description": "List all Outbound Rulesets for a space", + "href": "/spaces/{(%23%2Fdefinitions%2Fspace%2Fdefinitions%2Fidentity)}/outbound-rulesets", + "method": "GET", + "rel": "instances", + "targetSchema": { + "items": { + "$ref": "#/definitions/outbound-ruleset" + }, + "type": [ + "array" + ] + }, + "title": "List" + }, + { + "description": "Create a new outbound ruleset", + "href": "/spaces/{(%23%2Fdefinitions%2Fspace%2Fdefinitions%2Fidentity)}/outbound-ruleset", + "method": "PUT", + "rel": "create", + "schema": { + "type": [ + "object" + ], + "properties": { + "rules": { + "type": [ + "array" + ], + "items": { + "$ref": "#/definitions/outbound-ruleset/definitions/rule" + } + } + } + }, + "title": "Create" + } + ], + "properties": { + "id": { + "$ref": "#/definitions/outbound-ruleset/definitions/id" + }, + "created_at": { + "$ref": "#/definitions/outbound-ruleset/definitions/created_at" 
+ }, + "rules": { + "type": [ + "array" + ], + "items": { + "$ref": "#/definitions/outbound-ruleset/definitions/rule" + } + }, + "created_by": { + "$ref": "#/definitions/account/definitions/email" + } + } + }, + "password-reset": { + "description": "A password reset represents a in-process password reset attempt.", + "$schema": "http://json-schema.org/draft-04/hyper-schema", + "stability": "production", + "strictProperties": true, + "title": "Heroku Platform API - PasswordReset", + "type": [ + "object" + ], + "definitions": { + "created_at": { + "description": "when password reset was created", + "example": "2012-01-01T12:00:00Z", + "format": "date-time", + "readOnly": true, + "type": [ + "string" + ] + }, + "identity": { + "anyOf": [ + { + "$ref": "#/definitions/account/definitions/email" + } + ] + }, + "password_confirmation": { + "description": "confirmation of the new password", + "example": "newpassword", + "readOnly": true, + "type": [ + "string" + ] + }, + "reset_password_token": { + "description": "unique identifier of a password reset attempt", + "example": "01234567-89ab-cdef-0123-456789abcdef", + "format": "uuid", + "readOnly": true, + "type": [ + "string" + ] + } + }, + "links": [ + { + "description": "Reset account's password. This will send a reset password link to the user's email address.", + "href": "/password-resets", + "method": "POST", + "rel": "self", + "schema": { + "properties": { + "email": { + "$ref": "#/definitions/account/definitions/email" + } + }, + "type": [ + "object" + ] + }, + "title": "Reset Password" + }, + { + "description": "Complete password reset.", + "href": "/password-resets/{(%23%2Fdefinitions%2Fpassword-reset%2Fdefinitions%2Freset_password_token)}/actions/finalize", + "method": "POST", + "rel": "self", + "schema": { + "properties": { + "password": { + "$ref": "#/definitions/account/definitions/password" + }, + "password_confirmation": { + "$ref": "#/definitions/password-reset/definitions/password_confirmation" + } + }, + "type": [ + "object" + ] + }, + "title": "Complete Reset Password" + } + ], + "properties": { + "created_at": { + "$ref": "#/definitions/password-reset/definitions/created_at" + }, + "user": { + "properties": { + "email": { + "$ref": "#/definitions/account/definitions/email" + }, + "id": { + "$ref": "#/definitions/account/definitions/id" + } + }, + "strictProperties": true, + "type": [ + "object" + ] + } + } + }, + "organization-app-permission": { + "$schema": "http://json-schema.org/draft-04/hyper-schema", + "description": "An organization app permission is a behavior that is assigned to a user in an organization app.", + "stability": "prototype", + "title": "Heroku Platform API - Organization App Permission", + "type": [ + "object" + ], + "definitions": { + "identity": { + "anyOf": [ + { + "$ref": "#/definitions/organization-app-permission/definitions/name" + } + ] + }, + "name": { + "description": "The name of the app permission.", + "example": "view", + "readOnly": true, + "type": [ + "string" + ] + }, + "description": { + "description": "A description of what the app permission allows.", + "example": "Can manage config, deploy, run commands and restart the app.", + "readOnly": true, + "type": [ + "string" + ] + } + }, + "links": [ + { + "description": "Lists permissions available to organizations.", + "href": "/organizations/permissions", + "method": "GET", + "rel": "instances", + "targetSchema": { + "items": { + "$ref": "#/definitions/organization-app-permission" + }, + "type": [ + "array" + ] + }, + "title": "List" + } + ], 
+ "properties": { + "name": { + "$ref": "#/definitions/organization-app-permission/definitions/name" + }, + "description": { + "$ref": "#/definitions/organization-app-permission/definitions/description" + } + } + }, + "pipeline-coupling": { + "description": "Information about an app's coupling to a pipeline", + "$schema": "http://json-schema.org/draft-04/hyper-schema", + "stability": "prototype", + "title": "Heroku Platform API - Pipeline Coupling", + "type": [ + "object" + ], + "definitions": { + "created_at": { + "description": "when pipeline coupling was created", + "example": "2012-01-01T12:00:00Z", + "format": "date-time", + "readOnly": true, + "type": [ + "string" + ] + }, + "id": { + "description": "unique identifier of pipeline coupling", + "example": "01234567-89ab-cdef-0123-456789abcdef", + "format": "uuid", + "readOnly": true, + "type": [ + "string" + ] + }, + "identity": { + "anyOf": [ + { + "$ref": "#/definitions/pipeline-coupling/definitions/id" + } + ] + }, + "stage": { + "description": "target pipeline stage", + "example": "production", + "enum": [ + "test", + "review", + "development", + "staging", + "production" + ], + "type": [ + "string" + ] + }, + "updated_at": { + "description": "when pipeline coupling was updated", + "example": "2012-01-01T12:00:00Z", + "format": "date-time", + "readOnly": true, + "type": [ + "string" + ] + } + }, + "links": [ + { + "description": "List couplings for a pipeline", + "href": "/pipelines/{(%23%2Fdefinitions%2Fpipeline%2Fdefinitions%2Fid)}/pipeline-couplings", + "method": "GET", + "rel": "instances", + "targetSchema": { + "items": { + "$ref": "#/definitions/pipeline-coupling" + }, + "type": [ + "array" + ] + }, + "title": "List" + }, + { + "description": "List pipeline couplings.", + "href": "/pipeline-couplings", + "method": "GET", + "rel": "instances", + "targetSchema": { + "items": { + "$ref": "#/definitions/pipeline-coupling" + }, + "type": [ + "array" + ] + }, + "title": "List" + }, + { + "description": "Create a new pipeline coupling.", + "href": "/pipeline-couplings", + "method": "POST", + "rel": "create", + "schema": { + "properties": { + "app": { + "$ref": "#/definitions/app/definitions/identity" + }, + "pipeline": { + "$ref": "#/definitions/pipeline/definitions/id" + }, + "stage": { + "$ref": "#/definitions/pipeline-coupling/definitions/stage" + } + }, + "required": [ + "app", + "pipeline", + "stage" + ], + "type": [ + "object" + ] + }, + "targetSchema": { + "$ref": "#/definitions/pipeline-coupling" + }, + "title": "Create" + }, + { + "description": "Info for an existing pipeline coupling.", + "href": "/pipeline-couplings/{(%23%2Fdefinitions%2Fpipeline-coupling%2Fdefinitions%2Fidentity)}", + "method": "GET", + "rel": "self", + "targetSchema": { + "$ref": "#/definitions/pipeline-coupling" + }, + "title": "Info" + }, + { + "description": "Delete an existing pipeline coupling.", + "href": "/pipeline-couplings/{(%23%2Fdefinitions%2Fpipeline-coupling%2Fdefinitions%2Fidentity)}", + "method": "DELETE", + "rel": "delete", + "targetSchema": { + "$ref": "#/definitions/pipeline-coupling" + }, + "title": "Delete" + }, + { + "description": "Update an existing pipeline coupling.", + "href": "/pipeline-couplings/{(%23%2Fdefinitions%2Fpipeline-coupling%2Fdefinitions%2Fidentity)}", + "method": "PATCH", + "rel": "update", + "schema": { + "properties": { + "stage": { + "$ref": "#/definitions/pipeline-coupling/definitions/stage" + } + }, + "type": [ + "object" + ] + }, + "targetSchema": { + "$ref": "#/definitions/pipeline-coupling" + }, + "title": 
"Update" + }, + { + "description": "Info for an existing pipeline coupling.", + "href": "/apps/{(%23%2Fdefinitions%2Fapp%2Fdefinitions%2Fidentity)}/pipeline-couplings", + "method": "GET", + "rel": "self", + "targetSchema": { + "$ref": "#/definitions/pipeline-coupling" + }, + "title": "Info" + } + ], + "properties": { + "app": { + "description": "app involved in the pipeline coupling", + "properties": { + "id": { + "$ref": "#/definitions/app/definitions/id" + } + }, + "type": [ + "object" + ] + }, + "created_at": { + "$ref": "#/definitions/pipeline-coupling/definitions/created_at" + }, + "id": { + "$ref": "#/definitions/pipeline-coupling/definitions/id" + }, + "pipeline": { + "description": "pipeline involved in the coupling", + "properties": { + "id": { + "$ref": "#/definitions/pipeline/definitions/id" + } + }, + "type": [ + "object" + ] + }, + "stage": { + "$ref": "#/definitions/pipeline-coupling/definitions/stage" + }, + "updated_at": { + "$ref": "#/definitions/pipeline-coupling/definitions/updated_at" + } + } + }, + "pipeline-promotion-target": { + "description": "Promotion targets represent an individual app being promoted to", + "$schema": "http://json-schema.org/draft-04/hyper-schema", + "stability": "prototype", + "strictProperties": true, + "title": "Heroku Platform API - Pipeline Promotion Target", + "type": [ + "object" + ], + "definitions": { + "error_message": { + "description": "an error message for why the promotion failed", + "example": "User does not have access to that app", + "type": [ + "null", + "string" + ] + }, + "id": { + "description": "unique identifier of promotion target", + "example": "01234567-89ab-cdef-0123-456789abcdef", + "readOnly": true, + "format": "uuid", + "type": [ + "string" + ] + }, + "identity": { + "anyOf": [ + { + "$ref": "#/definitions/pipeline-promotion-target/definitions/id" + } + ] + }, + "status": { + "description": "status of promotion", + "example": "pending", + "readOnly": true, + "enum": [ + "pending", + "succeeded", + "failed" + ], + "type": [ + "string" + ] + } + }, + "links": [ + { + "description": "List promotion targets belonging to an existing promotion.", + "href": "/pipeline-promotions/{(%23%2Fdefinitions%2Fpipeline-promotion%2Fdefinitions%2Fid)}/promotion-targets", + "method": "GET", + "rel": "instances", + "targetSchema": { + "items": { + "$ref": "#/definitions/pipeline-promotion-target" + }, + "type": [ + "array" + ] + }, + "title": "List" + } + ], + "properties": { + "app": { + "description": "the app which was promoted to", + "properties": { + "id": { + "$ref": "#/definitions/app/definitions/id" + } + }, + "strictProperties": true, + "type": [ + "object" + ] + }, + "error_message": { + "$ref": "#/definitions/pipeline-promotion-target/definitions/error_message" + }, + "id": { + "$ref": "#/definitions/pipeline-promotion-target/definitions/id" + }, + "pipeline_promotion": { + "description": "the promotion which the target belongs to", + "properties": { + "id": { + "$ref": "#/definitions/pipeline-promotion/definitions/id" + } + }, + "strictProperties": true, + "type": [ + "object" + ] + }, + "release": { + "description": "the release which was created on the target app", + "properties": { + "id": { + "$ref": "#/definitions/release/definitions/id" + } + }, + "type": [ + "object", + "null" + ] + }, + "status": { + "$ref": "#/definitions/pipeline-promotion-target/definitions/status" + } + } + }, + "pipeline-promotion": { + "description": "Promotions allow you to move code from an app in a pipeline to all targets", + "$schema": 
"http://json-schema.org/draft-04/hyper-schema", + "stability": "prototype", + "strictProperties": true, + "title": "Heroku Platform API - Pipeline Promotion", + "type": [ + "object" + ], + "definitions": { + "created_at": { + "description": "when promotion was created", + "example": "2012-01-01T12:00:00Z", + "format": "date-time", + "type": [ + "string" + ] + }, + "id": { + "description": "unique identifier of promotion", + "example": "01234567-89ab-cdef-0123-456789abcdef", + "readOnly": true, + "format": "uuid", + "type": [ + "string" + ] + }, + "identity": { + "anyOf": [ + { + "$ref": "#/definitions/pipeline-promotion/definitions/id" + } + ] + }, + "status": { + "description": "status of promotion", + "example": "pending", + "readOnly": true, + "enum": [ + "pending", + "completed" + ], + "type": [ + "string" + ] + }, + "updated_at": { + "description": "when promotion was updated", + "example": "2012-01-01T12:00:00Z", + "format": "date-time", + "type": [ + "string", + "null" + ] + } + }, + "links": [ + { + "description": "Create a new promotion.", + "href": "/pipeline-promotions", + "method": "POST", + "rel": "create", + "schema": { + "properties": { + "pipeline": { + "description": "pipeline involved in the promotion", + "properties": { + "id": { + "$ref": "#/definitions/pipeline/definitions/id" + } + }, + "required": [ + "id" + ], + "type": [ + "object" + ] + }, + "source": { + "description": "the app being promoted from", + "type": [ + "object" + ], + "properties": { + "app": { + "description": "the app which was promoted from", + "properties": { + "id": { + "$ref": "#/definitions/app/definitions/id" + } + }, + "strictProperties": true, + "type": [ + "object" + ] + } + } + }, + "targets": { + "type": [ + "array" + ], + "items": { + "type": [ + "object" + ], + "properties": { + "app": { + "description": "the app is being promoted to", + "properties": { + "id": { + "$ref": "#/definitions/app/definitions/id" + } + }, + "strictProperties": true, + "type": [ + "object" + ] + } + } + } + } + }, + "required": [ + "pipeline", + "source", + "targets" + ], + "type": [ + "object" + ] + }, + "title": "Create" + }, + { + "description": "Info for existing pipeline promotion.", + "href": "/pipeline-promotions/{(%23%2Fdefinitions%2Fpipeline-promotion%2Fdefinitions%2Fidentity)}", + "method": "GET", + "rel": "self", + "targetSchema": { + "$ref": "#/definitions/pipeline-promotion" + }, + "title": "Info" + } + ], + "properties": { + "created_at": { + "$ref": "#/definitions/pipeline-promotion/definitions/created_at" + }, + "id": { + "$ref": "#/definitions/pipeline-promotion/definitions/id" + }, + "pipeline": { + "description": "the pipeline which the promotion belongs to", + "properties": { + "id": { + "$ref": "#/definitions/pipeline/definitions/id" + } + }, + "strictProperties": true, + "type": [ + "object" + ] + }, + "source": { + "description": "the app being promoted from", + "properties": { + "app": { + "description": "the app which was promoted from", + "properties": { + "id": { + "$ref": "#/definitions/app/definitions/id" + } + }, + "strictProperties": true, + "type": [ + "object" + ] + }, + "release": { + "description": "the release used to promoted from", + "properties": { + "id": { + "$ref": "#/definitions/release/definitions/id" + } + }, + "type": [ + "object" + ] + } + }, + "strictProperties": true, + "type": [ + "object" + ] + }, + "status": { + "$ref": "#/definitions/pipeline-promotion/definitions/status" + }, + "updated_at": { + "$ref": 
"#/definitions/pipeline-promotion/definitions/updated_at" + } + } + }, + "pipeline": { + "description": "A pipeline allows grouping of apps into different stages.", + "$schema": "http://json-schema.org/draft-04/hyper-schema", + "stability": "prototype", + "strictProperties": true, + "title": "Heroku Platform API - Pipeline", + "type": [ + "object" + ], + "definitions": { + "created_at": { + "description": "when pipeline was created", + "example": "2012-01-01T12:00:00Z", + "format": "date-time", + "readOnly": true, + "type": [ + "string" + ] + }, + "id": { + "description": "unique identifier of pipeline", + "example": "01234567-89ab-cdef-0123-456789abcdef", + "format": "uuid", + "readOnly": true, + "type": [ + "string" + ] + }, + "identity": { + "anyOf": [ + { + "$ref": "#/definitions/pipeline/definitions/id" + }, + { + "$ref": "#/definitions/pipeline/definitions/name" + } + ] + }, + "name": { + "description": "name of pipeline", + "example": "example", + "pattern": "^[a-z][a-z0-9-]{2,29}$", + "readOnly": false, + "type": [ + "string" + ] + }, + "updated_at": { + "description": "when pipeline was updated", + "example": "2012-01-01T12:00:00Z", + "format": "date-time", + "readOnly": true, + "type": [ + "string" + ] + } + }, + "links": [ + { + "description": "Create a new pipeline.", + "href": "/pipelines", + "method": "POST", + "rel": "create", + "schema": { + "properties": { + "name": { + "$ref": "#/definitions/pipeline/definitions/name" + } + }, + "required": [ + "name" + ], + "type": [ + "object" + ] + }, + "targetSchema": { + "$ref": "#/definitions/pipeline" + }, + "title": "Create" + }, + { + "description": "Info for existing pipeline.", + "href": "/pipelines/{(%23%2Fdefinitions%2Fpipeline%2Fdefinitions%2Fidentity)}", + "method": "GET", + "rel": "self", + "targetSchema": { + "$ref": "#/definitions/pipeline" + }, + "title": "Info" + }, + { + "description": "Delete an existing pipeline.", + "href": "/pipelines/{(%23%2Fdefinitions%2Fpipeline%2Fdefinitions%2Fid)}", + "method": "DELETE", + "rel": "delete", + "targetSchema": { + "$ref": "#/definitions/pipeline" + }, + "title": "Delete" + }, + { + "description": "Update an existing pipeline.", + "href": "/pipelines/{(%23%2Fdefinitions%2Fpipeline%2Fdefinitions%2Fid)}", + "method": "PATCH", + "rel": "update", + "schema": { + "properties": { + "name": { + "$ref": "#/definitions/pipeline/definitions/name" + } + }, + "type": [ + "object" + ] + }, + "targetSchema": { + "$ref": "#/definitions/pipeline" + }, + "title": "Update" + }, + { + "description": "List existing pipelines.", + "href": "/pipelines", + "method": "GET", + "rel": "instances", + "targetSchema": { + "type": [ + "array" + ], + "items": { + "$ref": "#/definitions/pipeline" + } + }, + "title": "List" + } + ], + "properties": { + "created_at": { + "$ref": "#/definitions/pipeline/definitions/created_at" + }, + "id": { + "$ref": "#/definitions/pipeline/definitions/id" + }, + "name": { + "$ref": "#/definitions/pipeline/definitions/name" + }, + "updated_at": { + "$ref": "#/definitions/pipeline/definitions/updated_at" } } }, @@ -4747,8 +9818,22 @@ "string" ] }, + "compliance": { + "description": "the compliance regimes applied to an add-on plan", + "example": [ + "HIPAA" + ], + "readOnly": false, + "type": [ + "null", + "array" + ], + "items": { + "$ref": "#/definitions/plan/definitions/regime" + } + }, "default": { - "description": "whether this plan is the default for its addon service", + "description": "whether this plan is the default for its add-on service", "example": false, "readOnly": 
true, "type": [ @@ -4763,6 +9848,14 @@ "string" ] }, + "human_name": { + "description": "human readable name of the add-on plan", + "example": "Dev", + "readOnly": true, + "type": [ + "string" + ] + }, "id": { "description": "unique identifier of this plan", "example": "01234567-89ab-cdef-0123-456789abcdef", @@ -4772,6 +9865,22 @@ "string" ] }, + "installable_inside_private_network": { + "description": "whether this plan is installable to a Private Spaces app", + "example": false, + "readOnly": true, + "type": [ + "boolean" + ] + }, + "installable_outside_private_network": { + "description": "whether this plan is installable to a Common Runtime app", + "example": true, + "readOnly": true, + "type": [ + "boolean" + ] + }, "identity": { "anyOf": [ { @@ -4790,6 +9899,18 @@ "string" ] }, + "regime": { + "description": "compliance requirements an add-on plan must adhere to", + "readOnly": true, + "example": "HIPAA", + "type": [ + "string" + ], + "enum": [ + "HIPAA", + "PCI" + ] + }, "cents": { "description": "price in cents per unit of plan", "example": 0, @@ -4806,6 +9927,14 @@ "string" ] }, + "space_default": { + "description": "whether this plan is the default for apps in Private Spaces", + "example": false, + "readOnly": true, + "type": [ + "boolean" + ] + }, "state": { "description": "release status for plan", "example": "public", @@ -4822,12 +9951,20 @@ "type": [ "string" ] + }, + "visible": { + "description": "whether this plan is publicly visible", + "example": true, + "readOnly": true, + "type": [ + "boolean" + ] } }, "links": [ { "description": "Info for existing plan.", - "href": "/addon-services/{(%23%2Fdefinitions%2Faddon-service%2Fdefinitions%2Fidentity)}/plans/{(%23%2Fdefinitions%2Fplan%2Fdefinitions%2Fidentity)}", + "href": "/addon-services/{(%23%2Fdefinitions%2Fadd-on-service%2Fdefinitions%2Fidentity)}/plans/{(%23%2Fdefinitions%2Fplan%2Fdefinitions%2Fidentity)}", "method": "GET", "rel": "self", "targetSchema": { @@ -4837,7 +9974,7 @@ }, { "description": "List existing plans.", - "href": "/addon-services/{(%23%2Fdefinitions%2Faddon-service%2Fdefinitions%2Fidentity)}/plans", + "href": "/addon-services/{(%23%2Fdefinitions%2Fadd-on-service%2Fdefinitions%2Fidentity)}/plans", "method": "GET", "rel": "instances", "targetSchema": { @@ -4852,18 +9989,45 @@ } ], "properties": { + "addon_service": { + "description": "identity of add-on service", + "properties": { + "id": { + "$ref": "#/definitions/add-on-service/definitions/id" + }, + "name": { + "$ref": "#/definitions/add-on-service/definitions/name" + } + }, + "strictProperties": true, + "type": [ + "object" + ] + }, "created_at": { "$ref": "#/definitions/plan/definitions/created_at" }, + "compliance": { + "$ref": "#/definitions/plan/definitions/compliance" + }, "default": { "$ref": "#/definitions/plan/definitions/default" }, "description": { "$ref": "#/definitions/plan/definitions/description" }, + "human_name": { + "$ref": "#/definitions/plan/definitions/human_name" + }, "id": { "$ref": "#/definitions/plan/definitions/id" }, + "installable_inside_private_network": { + "$ref": "#/definitions/plan/definitions/installable_inside_private_network" + }, + "installable_outside_private_network": { + "$ref": "#/definitions/plan/definitions/installable_outside_private_network" + }, "name": { "$ref": "#/definitions/plan/definitions/name" }, @@ -4882,11 +10046,17 @@ "object" ] }, + "space_default": { + "$ref": "#/definitions/plan/definitions/space_default" + }, "state": { "$ref": "#/definitions/plan/definitions/state" }, "updated_at": { "$ref": 
"#/definitions/plan/definitions/updated_at" + }, + "visible": { + "$ref": "#/definitions/plan/definitions/visible" } } }, @@ -4939,6 +10109,14 @@ "object" ], "definitions": { + "country": { + "description": "country where the region exists", + "example": "United States", + "readOnly": true, + "type": [ + "string" + ] + }, "created_at": { "description": "when region was created", "example": "2012-01-01T12:00:00Z", @@ -4975,6 +10153,14 @@ } ] }, + "locale": { + "description": "area in the country where the region exists", + "example": "Virginia", + "readOnly": true, + "type": [ + "string" + ] + }, "name": { "description": "unique name of region", "example": "us", @@ -4983,6 +10169,39 @@ "string" ] }, + "private_capable": { + "description": "whether or not region is available for creating a Private Space", + "example": false, + "readOnly": true, + "type": [ + "boolean" + ] + }, + "provider": { + "description": "provider of underlying substrate", + "type": [ + "object" + ], + "properties": { + "name": { + "description": "name of provider", + "example": "amazon-web-services", + "readOnly": true, + "type": [ + "string" + ] + }, + "region": { + "description": "region name used by provider", + "example": "us-east-1", + "readOnly": true, + "type": [ + "string" + ] + } + }, + "readOnly": true + }, "updated_at": { "description": "when region was updated", "example": "2012-01-01T12:00:00Z", @@ -5021,6 +10240,9 @@ } ], "properties": { + "country": { + "$ref": "#/definitions/region/definitions/country" + }, "created_at": { "$ref": "#/definitions/region/definitions/created_at" }, @@ -5030,9 +10252,18 @@ "id": { "$ref": "#/definitions/region/definitions/id" }, + "locale": { + "$ref": "#/definitions/region/definitions/locale" + }, "name": { "$ref": "#/definitions/region/definitions/name" }, + "private_capable": { + "$ref": "#/definitions/region/definitions/private_capable" + }, + "provider": { + "$ref": "#/definitions/region/definitions/provider" + }, "updated_at": { "$ref": "#/definitions/region/definitions/updated_at" } @@ -5065,6 +10296,19 @@ "string" ] }, + "status": { + "description": "current status of the release", + "enum": [ + "failed", + "pending", + "succeeded" + ], + "example": "succeeded", + "readOnly": true, + "type": [ + "string" + ] + }, "id": { "description": "unique identifier of release", "example": "01234567-89ab-cdef-0123-456789abcdef", @@ -5100,6 +10344,14 @@ "type": [ "integer" ] + }, + "current": { + "description": "indicates this release as being the current one for the app", + "example": true, + "readOnly": true, + "type": [ + "boolean" + ] } }, "links": [ @@ -5129,7 +10381,7 @@ "title": "List" }, { - "description": "Create new release. 
The API cannot be used to create releases on Bamboo apps.", + "description": "Create new release.", "href": "/apps/{(%23%2Fdefinitions%2Fapp%2Fdefinitions%2Fidentity)}/releases", "method": "POST", "rel": "create", @@ -5179,6 +10431,29 @@ } ], "properties": { + "addon_plan_names": { + "description": "add-on plans installed on the app for this release", + "type": [ + "array" + ], + "items": { + "$ref": "#/definitions/plan/definitions/name" + } + }, + "app": { + "description": "app involved in the release", + "properties": { + "name": { + "$ref": "#/definitions/app/definitions/name" + }, + "id": { + "$ref": "#/definitions/app/definitions/id" + } + }, + "type": [ + "object" + ] + }, "created_at": { "$ref": "#/definitions/release/definitions/created_at" }, @@ -5204,6 +10479,9 @@ "null" ] }, + "status": { + "$ref": "#/definitions/release/definitions/status" + }, "user": { "description": "user that created the release", "properties": { @@ -5221,6 +10499,9 @@ }, "version": { "$ref": "#/definitions/release/definitions/version" + }, + "current": { + "$ref": "#/definitions/release/definitions/current" } } }, @@ -5243,6 +10524,15 @@ "string" ] }, + "checksum": { + "description": "an optional checksum of the slug for verifying its integrity", + "example": "SHA256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855", + "readOnly": true, + "type": [ + "null", + "string" + ] + }, "commit": { "description": "identification of the code with your version control system (eg: SHA of the git HEAD)", "example": "60883d9e8947a57e04dc9124f25df004866a2051", @@ -5252,6 +10542,15 @@ "string" ] }, + "commit_description": { + "description": "an optional description of the provided commit", + "example": "fixed a bug with API documentation", + "readOnly": false, + "type": [ + "null", + "string" + ] + }, "created_at": { "description": "when slug was created", "example": "2012-01-01T12:00:00Z", @@ -5292,7 +10591,7 @@ "web": "./bin/web -p $PORT" }, "patternProperties": { - "^\\w+$": { + "^[-\\w]{1,128}$": { "type": [ "string" ] @@ -5343,7 +10642,7 @@ "title": "Info" }, { - "description": "Create a new slug. For more information please refer to [Deploying Slugs using the Platform API](https://devcenter.heroku.com/articles/platform-api-deploying-slugs?preview=1).", + "description": "Create a new slug. 
For more information please refer to [Deploying Slugs using the Platform API](https://devcenter.heroku.com/articles/platform-api-deploying-slugs).", "href": "/apps/{(%23%2Fdefinitions%2Fapp%2Fdefinitions%2Fidentity)}/slugs", "method": "POST", "rel": "create", @@ -5352,11 +10651,20 @@ "buildpack_provided_description": { "$ref": "#/definitions/slug/definitions/buildpack_provided_description" }, + "checksum": { + "$ref": "#/definitions/slug/definitions/checksum" + }, "commit": { "$ref": "#/definitions/slug/definitions/commit" }, + "commit_description": { + "$ref": "#/definitions/slug/definitions/commit_description" + }, "process_types": { "$ref": "#/definitions/slug/definitions/process_types" + }, + "stack": { + "$ref": "#/definitions/stack/definitions/identity" } }, "required": [ @@ -5367,7 +10675,28 @@ ] }, "targetSchema": { - "$ref": "#/definitions/slug" + "$ref": "#/definitions/slug", + "example": { + "blob": { + "method": "PUT", + "url": "https://api.heroku.com/slugs/1234.tgz" + }, + "buildpack_provided_description": "Ruby/Rack", + "checksum": "SHA256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855", + "commit": "60883d9e8947a57e04dc9124f25df004866a2051", + "commit_description": "fixed a bug with API documentation", + "created_at": "2012-01-01T12:00:00Z", + "id": "01234567-89ab-cdef-0123-456789abcdef", + "process_types": { + "web": "./bin/web -p $PORT" + }, + "size": 2048, + "stack": { + "id": "01234567-89ab-cdef-0123-456789abcdef", + "name": "cedar-14" + }, + "updated_at": "2012-01-01T12:00:00Z" + } }, "title": "Create" } @@ -5391,9 +10720,15 @@ "buildpack_provided_description": { "$ref": "#/definitions/slug/definitions/buildpack_provided_description" }, + "checksum": { + "$ref": "#/definitions/slug/definitions/checksum" + }, "commit": { "$ref": "#/definitions/slug/definitions/commit" }, + "commit_description": { + "$ref": "#/definitions/slug/definitions/commit_description" + }, "created_at": { "$ref": "#/definitions/slug/definitions/created_at" }, @@ -5406,13 +10741,795 @@ "size": { "$ref": "#/definitions/slug/definitions/size" }, + "stack": { + "description": "identity of slug stack", + "properties": { + "id": { + "$ref": "#/definitions/stack/definitions/id" + }, + "name": { + "$ref": "#/definitions/stack/definitions/name" + } + }, + "strictProperties": true, + "type": [ + "object" + ] + }, "updated_at": { "$ref": "#/definitions/slug/definitions/updated_at" } } }, + "sms-number": { + "description": "SMS numbers are used for recovery on accounts with two-factor authentication enabled.", + "$schema": "http://json-schema.org/draft-04/hyper-schema", + "stability": "production", + "strictProperties": true, + "title": "Heroku Platform API - SMS Number", + "type": [ + "object" + ], + "definitions": { + "sms_number": { + "$ref": "#/definitions/account/definitions/sms_number" + } + }, + "links": [ + { + "description": "Recover an account using an SMS recovery code", + "href": "/users/{(%23%2Fdefinitions%2Faccount%2Fdefinitions%2Fidentity)}/sms-number", + "method": "GET", + "rel": "self", + "targetSchema": { + "$ref": "#/definitions/sms-number" + }, + "title": "SMS Number" + }, + { + "description": "Recover an account using an SMS recovery code", + "href": "/users/{(%23%2Fdefinitions%2Faccount%2Fdefinitions%2Fidentity)}/sms-number/actions/recover", + "method": "POST", + "rel": "self", + "targetSchema": { + "$ref": "#/definitions/sms-number" + }, + "title": "Recover" + }, + { + "description": "Confirm an SMS number change with a confirmation code", + "href": 
"/users/{(%23%2Fdefinitions%2Faccount%2Fdefinitions%2Fidentity)}/sms-number/actions/confirm", + "method": "POST", + "rel": "self", + "targetSchema": { + "$ref": "#/definitions/sms-number" + }, + "title": "Confirm" + } + ], + "properties": { + "sms_number": { + "$ref": "#/definitions/account/definitions/sms_number" + } + } + }, + "sni-endpoint": { + "description": "SNI Endpoint is a public address serving a custom SSL cert for HTTPS traffic, using the SNI TLS extension, to a Heroku app.", + "$schema": "http://json-schema.org/draft-04/hyper-schema", + "title": "Heroku Platform API - SNI Endpoint", + "stability": "development", + "strictProperties": true, + "type": [ + "object" + ], + "definitions": { + "certificate_chain": { + "description": "raw contents of the public certificate chain (eg: .crt or .pem file)", + "example": "-----BEGIN CERTIFICATE----- ...", + "readOnly": false, + "type": [ + "string" + ] + }, + "cname": { + "description": "deprecated; refer to GET /apps/:id/domains for valid CNAMEs for this app", + "example": "example.herokussl.com", + "readOnly": false, + "type": [ + "string" + ] + }, + "created_at": { + "description": "when endpoint was created", + "example": "2012-01-01T12:00:00Z", + "format": "date-time", + "readOnly": true, + "type": [ + "string" + ] + }, + "id": { + "description": "unique identifier of this SNI endpoint", + "example": "01234567-89ab-cdef-0123-456789abcdef", + "format": "uuid", + "readOnly": true, + "type": [ + "string" + ] + }, + "identity": { + "anyOf": [ + { + "$ref": "#/definitions/sni-endpoint/definitions/id" + }, + { + "$ref": "#/definitions/sni-endpoint/definitions/name" + } + ] + }, + "name": { + "description": "unique name for SNI endpoint", + "example": "example", + "pattern": "^[a-z][a-z0-9-]{2,29}$", + "readOnly": true, + "type": [ + "string" + ] + }, + "private_key": { + "description": "contents of the private key (eg .key file)", + "example": "-----BEGIN RSA PRIVATE KEY----- ...", + "readOnly": false, + "type": [ + "string" + ] + }, + "updated_at": { + "description": "when SNI endpoint was updated", + "example": "2012-01-01T12:00:00Z", + "format": "date-time", + "readOnly": true, + "type": [ + "string" + ] + } + }, + "links": [ + { + "description": "Create a new SNI endpoint.", + "href": "/apps/{(%23%2Fdefinitions%2Fapp%2Fdefinitions%2Fidentity)}/sni-endpoints", + "method": "POST", + "rel": "create", + "schema": { + "properties": { + "certificate_chain": { + "$ref": "#/definitions/sni-endpoint/definitions/certificate_chain" + }, + "private_key": { + "$ref": "#/definitions/sni-endpoint/definitions/private_key" + } + }, + "required": [ + "certificate_chain", + "private_key" + ], + "type": [ + "object" + ] + }, + "targetSchema": { + "$ref": "#/definitions/sni-endpoint" + }, + "title": "Create" + }, + { + "description": "Delete existing SNI endpoint.", + "href": "/apps/{(%23%2Fdefinitions%2Fapp%2Fdefinitions%2Fidentity)}/sni-endpoints/{(%23%2Fdefinitions%2Fsni-endpoint%2Fdefinitions%2Fidentity)}", + "method": "DELETE", + "rel": "destroy", + "targetSchema": { + "$ref": "#/definitions/sni-endpoint" + }, + "title": "Delete" + }, + { + "description": "Info for existing SNI endpoint.", + "href": "/apps/{(%23%2Fdefinitions%2Fapp%2Fdefinitions%2Fidentity)}/sni-endpoints/{(%23%2Fdefinitions%2Fsni-endpoint%2Fdefinitions%2Fidentity)}", + "method": "GET", + "rel": "self", + "targetSchema": { + "$ref": "#/definitions/sni-endpoint" + }, + "title": "Info" + }, + { + "description": "List existing SNI endpoints.", + "href": 
"/apps/{(%23%2Fdefinitions%2Fapp%2Fdefinitions%2Fidentity)}/sni-endpoints", + "method": "GET", + "rel": "instances", + "targetSchema": { + "items": { + "$ref": "#/definitions/sni-endpoint" + }, + "type": [ + "array" + ] + }, + "title": "List" + }, + { + "description": "Update an existing SNI endpoint.", + "href": "/apps/{(%23%2Fdefinitions%2Fapp%2Fdefinitions%2Fidentity)}/sni-endpoints/{(%23%2Fdefinitions%2Fsni-endpoint%2Fdefinitions%2Fidentity)}", + "method": "PATCH", + "rel": "update", + "schema": { + "properties": { + "certificate_chain": { + "$ref": "#/definitions/sni-endpoint/definitions/certificate_chain" + }, + "private_key": { + "$ref": "#/definitions/sni-endpoint/definitions/private_key" + } + }, + "required": [ + "certificate_chain", + "private_key" + ], + "type": [ + "object" + ] + }, + "targetSchema": { + "$ref": "#/definitions/sni-endpoint" + }, + "title": "Update" + } + ], + "properties": { + "certificate_chain": { + "$ref": "#/definitions/sni-endpoint/definitions/certificate_chain" + }, + "cname": { + "$ref": "#/definitions/sni-endpoint/definitions/cname" + }, + "created_at": { + "$ref": "#/definitions/sni-endpoint/definitions/created_at" + }, + "id": { + "$ref": "#/definitions/sni-endpoint/definitions/id" + }, + "name": { + "$ref": "#/definitions/sni-endpoint/definitions/name" + }, + "updated_at": { + "$ref": "#/definitions/sni-endpoint/definitions/updated_at" + } + } + }, + "source": { + "description": "A source is a location for uploading and downloading an application's source code.", + "$schema": "http://json-schema.org/draft-04/hyper-schema", + "stability": "production", + "strictProperties": true, + "title": "Heroku Platform API - Source", + "type": [ + "object" + ], + "definitions": { + "get_url": { + "description": "URL to download the source", + "example": "https://api.heroku.com/sources/1234.tgz", + "readOnly": true, + "type": [ + "string" + ] + }, + "put_url": { + "description": "URL to upload the source", + "example": "https://api.heroku.com/sources/1234.tgz", + "readOnly": true, + "type": [ + "string" + ] + } + }, + "links": [ + { + "description": "Create URLs for uploading and downloading source.", + "href": "/sources", + "method": "POST", + "rel": "create", + "targetSchema": { + "$ref": "#/definitions/source" + }, + "title": "Create" + }, + { + "deactivate_on": "2017-08-01", + "description": "Create URLs for uploading and downloading source. 
Deprecated in favor of `POST /sources`", + "href": "/apps/{(%23%2Fdefinitions%2Fapp%2Fdefinitions%2Fidentity)}/sources", + "method": "POST", + "rel": "create", + "targetSchema": { + "$ref": "#/definitions/source" + }, + "title": "Create - Deprecated" + } + ], + "properties": { + "source_blob": { + "description": "pointer to the URL where clients can fetch or store the source", + "properties": { + "get_url": { + "$ref": "#/definitions/source/definitions/get_url" + }, + "put_url": { + "$ref": "#/definitions/source/definitions/put_url" + } + }, + "strictProperties": true, + "type": [ + "object" + ] + } + } + }, + "space-app-access": { + "description": "Space access represents the permissions a particular user has on a particular space.", + "$schema": "http://json-schema.org/draft-04/hyper-schema", + "stability": "prototype", + "title": "Heroku Platform API - Space Access", + "type": [ + "object" + ], + "definitions": { + "id": { + "description": "unique identifier of the space a user has permissions on", + "example": "01234567-89ab-cdef-0123-456789abcdef", + "format": "uuid", + "readOnly": true, + "type": [ + "string" + ] + }, + "identity": { + "anyOf": [ + { + "$ref": "#/definitions/space-app-access/definitions/id" + } + ] + } + }, + "links": [ + { + "description": "List permissions for a given user on a given space.", + "href": "/spaces/{(%23%2Fdefinitions%2Fspace%2Fdefinitions%2Fidentity)}/members/{(%23%2Fdefinitions%2Faccount%2Fdefinitions%2Fidentity)}", + "method": "GET", + "rel": "self", + "targetSchema": { + "$ref": "#/definitions/space-app-access" + }, + "title": "Info" + }, + { + "description": "Update an existing user's set of permissions on a space.", + "href": "/spaces/{(%23%2Fdefinitions%2Fspace%2Fdefinitions%2Fidentity)}/members/{(%23%2Fdefinitions%2Faccount%2Fdefinitions%2Fidentity)}", + "method": "PATCH", + "rel": "update", + "schema": { + "type": [ + "object" + ], + "properties": { + "permissions": { + "type": [ + "array" + ], + "items": { + "type": [ + "object" + ], + "properties": { + "name": { + "type": [ + "string" + ] + } + } + } + } + } + }, + "targetSchema": { + "$ref": "#/definitions/space-app-access" + }, + "title": "Update" + }, + { + "description": "List all users and their permissions on a space.", + "href": "/spaces/{(%23%2Fdefinitions%2Fspace%2Fdefinitions%2Fidentity)}/members", + "method": "GET", + "rel": "instances", + "targetSchema": { + "items": { + "$ref": "#/definitions/space-app-access" + }, + "type": [ + "array" + ] + }, + "title": "List" + } + ], + "properties": { + "space": { + "description": "space user belongs to", + "properties": { + "name": { + "$ref": "#/definitions/app/definitions/name" + }, + "id": { + "$ref": "#/definitions/app/definitions/id" + } + }, + "strictProperties": true, + "type": [ + "object" + ] + }, + "created_at": { + "$ref": "#/definitions/space/definitions/created_at" + }, + "id": { + "$ref": "#/definitions/space/definitions/id" + }, + "permissions": { + "description": "user space permissions", + "type": [ + "array" + ], + "items": { + "type": [ + "object" + ], + "properties": { + "description": { + "type": [ + "string" + ] + }, + "name": { + "type": [ + "string" + ] + } + } + } + }, + "updated_at": { + "$ref": "#/definitions/space/definitions/updated_at" + }, + "user": { + "description": "identity of user account", + "properties": { + "email": { + "$ref": "#/definitions/account/definitions/email" + }, + "id": { + "$ref": "#/definitions/account/definitions/id" + } + }, + "strictProperties": true, + "type": [ + "object" + ] + } + 
} + }, + "space-nat": { + "description": "Network address translation (NAT) for stable outbound IP addresses from a space", + "$schema": "http://json-schema.org/draft-04/hyper-schema", + "stability": "prototype", + "strictProperties": true, + "title": "Heroku Platform API - Space Network Address Translation", + "type": [ + "object" + ], + "definitions": { + "created_at": { + "description": "when network address translation for a space was created", + "example": "2012-01-01T12:00:00Z", + "format": "date-time", + "readOnly": true, + "type": [ + "string" + ] + }, + "ip_v4_address": { + "example": "123.123.123.123", + "format": "ipv4", + "pattern": "^(([01]?\\d?\\d|2[0-4]\\d|25[0-5])\\.){3}([01]?\\d?\\d|2[0-4]\\d|25[0-5])$", + "type": [ + "string" + ] + }, + "sources": { + "description": "potential IPs from which outbound network traffic will originate", + "readOnly": true, + "type": [ + "array" + ], + "items": { + "$ref": "#/definitions/space-nat/definitions/ip_v4_address" + } + }, + "state": { + "description": "availability of network address translation for a space", + "enum": [ + "disabled", + "updating", + "enabled" + ], + "example": "enabled", + "readOnly": true, + "type": [ + "string" + ] + }, + "updated_at": { + "description": "when network address translation for a space was updated", + "example": "2012-01-01T12:00:00Z", + "format": "date-time", + "readOnly": true, + "type": [ + "string" + ] + } + }, + "links": [ + { + "description": "Current state of network address translation for a space.", + "href": "/spaces/{(%23%2Fdefinitions%2Fspace%2Fdefinitions%2Fidentity)}/nat", + "method": "GET", + "rel": "self", + "targetSchema": { + "$ref": "#/definitions/space-nat" + }, + "title": "Info" + } + ], + "properties": { + "created_at": { + "$ref": "#/definitions/space-nat/definitions/created_at" + }, + "sources": { + "$ref": "#/definitions/space-nat/definitions/sources" + }, + "state": { + "$ref": "#/definitions/space-nat/definitions/state" + }, + "updated_at": { + "$ref": "#/definitions/space-nat/definitions/updated_at" + } + } + }, + "space": { + "description": "A space is an isolated, highly available, secure app execution environments, running in the modern VPC substrate.", + "$schema": "http://json-schema.org/draft-04/hyper-schema", + "stability": "prototype", + "strictProperties": true, + "title": "Heroku Platform API - Space", + "type": [ + "object" + ], + "definitions": { + "created_at": { + "description": "when space was created", + "example": "2012-01-01T12:00:00Z", + "format": "date-time", + "readOnly": true, + "type": [ + "string" + ] + }, + "id": { + "description": "unique identifier of space", + "example": "01234567-89ab-cdef-0123-456789abcdef", + "format": "uuid", + "readOnly": true, + "type": [ + "string" + ] + }, + "identity": { + "anyOf": [ + { + "$ref": "#/definitions/space/definitions/id" + }, + { + "$ref": "#/definitions/space/definitions/name" + } + ] + }, + "name": { + "description": "unique name of space", + "example": "nasa", + "readOnly": false, + "pattern": "^[a-z0-9](?:[a-z0-9]|-(?!-))+[a-z0-9]$", + "type": [ + "string" + ] + }, + "shield": { + "description": "true if this space has shield enabled", + "readOnly": true, + "example": true, + "type": [ + "boolean" + ] + }, + "state": { + "description": "availability of this space", + "enum": [ + "allocating", + "allocated", + "deleting" + ], + "example": "allocated", + "readOnly": true, + "type": [ + "string" + ] + }, + "updated_at": { + "description": "when space was updated", + "example": "2012-01-01T12:00:00Z", + 
"format": "date-time", + "readOnly": true, + "type": [ + "string" + ] + } + }, + "links": [ + { + "description": "List existing spaces.", + "href": "/spaces", + "method": "GET", + "rel": "instances", + "targetSchema": { + "items": { + "$ref": "#/definitions/space" + }, + "type": [ + "array" + ] + }, + "title": "List" + }, + { + "description": "Info for existing space.", + "href": "/spaces/{(%23%2Fdefinitions%2Fspace%2Fdefinitions%2Fidentity)}", + "method": "GET", + "rel": "self", + "targetSchema": { + "$ref": "#/definitions/space" + }, + "title": "Info" + }, + { + "description": "Update an existing space.", + "href": "/spaces/{(%23%2Fdefinitions%2Fspace%2Fdefinitions%2Fidentity)}", + "method": "PATCH", + "rel": "update", + "schema": { + "properties": { + "name": { + "$ref": "#/definitions/space/definitions/name" + } + }, + "type": [ + "object" + ] + }, + "targetSchema": { + "$ref": "#/definitions/space" + }, + "title": "Update" + }, + { + "description": "Delete an existing space.", + "href": "/spaces/{(%23%2Fdefinitions%2Fspace%2Fdefinitions%2Fidentity)}", + "method": "DELETE", + "rel": "destroy", + "targetSchema": { + "$ref": "#/definitions/space" + }, + "title": "Delete" + }, + { + "description": "Create a new space.", + "href": "/spaces", + "method": "POST", + "rel": "create", + "schema": { + "properties": { + "name": { + "$ref": "#/definitions/space/definitions/name" + }, + "organization": { + "$ref": "#/definitions/organization/definitions/name" + }, + "region": { + "$ref": "#/definitions/region/definitions/identity" + }, + "shield": { + "$ref": "#/definitions/space/definitions/shield" + } + }, + "required": [ + "name", + "organization" + ], + "type": [ + "object" + ] + }, + "targetSchema": { + "$ref": "#/definitions/space" + }, + "title": "Create" + } + ], + "properties": { + "created_at": { + "$ref": "#/definitions/space/definitions/created_at" + }, + "id": { + "$ref": "#/definitions/space/definitions/id" + }, + "name": { + "$ref": "#/definitions/space/definitions/name" + }, + "organization": { + "description": "organization that owns this space", + "properties": { + "name": { + "$ref": "#/definitions/organization/definitions/name" + } + }, + "type": [ + "object" + ] + }, + "region": { + "description": "identity of space region", + "properties": { + "id": { + "$ref": "#/definitions/region/definitions/id" + }, + "name": { + "$ref": "#/definitions/region/definitions/name" + } + }, + "strictProperties": true, + "type": [ + "object" + ] + }, + "shield": { + "$ref": "#/definitions/space/definitions/shield" + }, + "state": { + "$ref": "#/definitions/space/definitions/state" + }, + "updated_at": { + "$ref": "#/definitions/space/definitions/updated_at" + } + } + }, "ssl-endpoint": { - "description": "[SSL Endpoint](https://devcenter.heroku.com/articles/ssl-endpoint) is a public address serving custom SSL cert for HTTPS traffic to a Heroku app. Note that an app must have the `ssl:endpoint` addon installed before it can provision an SSL Endpoint using these APIs.", + "description": "[SSL Endpoint](https://devcenter.heroku.com/articles/ssl-endpoint) is a public address serving custom SSL cert for HTTPS traffic to a Heroku app. 
Note that an app must have the `ssl:endpoint` add-on installed before it can provision an SSL Endpoint using these APIs.", "$schema": "http://json-schema.org/draft-04/hyper-schema", "title": "Heroku Platform API - SSL Endpoint", "stability": "production", @@ -5468,7 +11585,7 @@ "name": { "description": "unique name for SSL endpoint", "example": "example", - "pattern": "^[a-z][a-z0-9-]{3,30}$", + "pattern": "^[a-z][a-z0-9-]{2,29}$", "readOnly": true, "type": [ "string" @@ -5607,6 +11724,21 @@ } ], "properties": { + "app": { + "description": "application associated with this ssl-endpoint", + "type": [ + "object" + ], + "properties": { + "id": { + "$ref": "#/definitions/app/definitions/id" + }, + "name": { + "$ref": "#/definitions/app/definitions/name" + } + }, + "strictProperties": true + }, "certificate_chain": { "$ref": "#/definitions/ssl-endpoint/definitions/certificate_chain" }, @@ -5667,7 +11799,7 @@ }, "name": { "description": "unique name of stack", - "example": "cedar", + "example": "cedar-14", "readOnly": true, "type": [ "string" @@ -5735,6 +11867,373 @@ "$ref": "#/definitions/stack/definitions/updated_at" } } + }, + "user-preferences": { + "description": "Tracks a user's preferences and message dismissals", + "$schema": "http://json-schema.org/draft-04/hyper-schema", + "stability": "production", + "strictProperties": true, + "title": "Heroku Platform API - User Preferences", + "type": [ + "object" + ], + "definitions": { + "identity": { + "anyOf": [ + { + "$ref": "#/definitions/user-preferences/definitions/self" + } + ] + }, + "self": { + "description": "Implicit reference to currently authorized user", + "enum": [ + "~" + ], + "example": "~", + "readOnly": true, + "type": [ + "string" + ] + }, + "timezone": { + "description": "User's default timezone", + "example": "UTC", + "readOnly": false, + "type": [ + "string", + "null" + ] + }, + "default-organization": { + "description": "User's default organization", + "example": "sushi-inc", + "readOnly": false, + "type": [ + "string", + "null" + ] + }, + "dismissed-github-banner": { + "description": "Whether the user has dismissed the GitHub link banner", + "example": true, + "readOnly": false, + "type": [ + "boolean", + "null" + ] + }, + "dismissed-getting-started": { + "description": "Whether the user has dismissed the getting started banner", + "example": true, + "readOnly": false, + "type": [ + "boolean", + "null" + ] + }, + "dismissed-org-access-controls": { + "description": "Whether the user has dismissed the Organization Access Controls banner", + "example": true, + "readOnly": false, + "type": [ + "boolean", + "null" + ] + }, + "dismissed-org-wizard-notification": { + "description": "Whether the user has dismissed the Organization Wizard", + "example": true, + "readOnly": false, + "type": [ + "boolean", + "null" + ] + }, + "dismissed-pipelines-banner": { + "description": "Whether the user has dismissed the Pipelines banner", + "example": true, + "readOnly": false, + "type": [ + "boolean", + "null" + ] + }, + "dismissed-pipelines-github-banner": { + "description": "Whether the user has dismissed the GitHub banner on a pipeline overview", + "example": true, + "readOnly": false, + "type": [ + "boolean", + "null" + ] + }, + "dismissed-pipelines-github-banners": { + "description": "Which pipeline uuids the user has dismissed the GitHub banner for", + "example": [ + "96c68759-f310-4910-9867-e0b062064098" + ], + "readOnly": false, + "type": [ + "null", + "array" + ], + "items": { + "$ref": "#/definitions/pipeline/definitions/id" + } + 
}, + "dismissed-sms-banner": { + "description": "Whether the user has dismissed the 2FA SMS banner", + "example": true, + "readOnly": false, + "type": [ + "boolean", + "null" + ] + } + }, + "links": [ + { + "description": "Retrieve User Preferences", + "href": "/users/{(%23%2Fdefinitions%2Fuser-preferences%2Fdefinitions%2Fidentity)}/preferences", + "method": "GET", + "rel": "self", + "targetSchema": { + "$ref": "#/definitions/user-preferences" + }, + "title": "List" + }, + { + "description": "Update User Preferences", + "href": "/users/{(%23%2Fdefinitions%2Fuser-preferences%2Fdefinitions%2Fidentity)}/preferences", + "method": "PATCH", + "rel": "update", + "schema": { + "type": [ + "object" + ], + "properties": { + "timezone": { + "$ref": "#/definitions/user-preferences/definitions/timezone" + }, + "default-organization": { + "$ref": "#/definitions/user-preferences/definitions/default-organization" + }, + "dismissed-github-banner": { + "$ref": "#/definitions/user-preferences/definitions/dismissed-github-banner" + }, + "dismissed-getting-started": { + "$ref": "#/definitions/user-preferences/definitions/dismissed-getting-started" + }, + "dismissed-org-access-controls": { + "$ref": "#/definitions/user-preferences/definitions/dismissed-org-access-controls" + }, + "dismissed-org-wizard-notification": { + "$ref": "#/definitions/user-preferences/definitions/dismissed-org-wizard-notification" + }, + "dismissed-pipelines-banner": { + "$ref": "#/definitions/user-preferences/definitions/dismissed-pipelines-banner" + }, + "dismissed-pipelines-github-banner": { + "$ref": "#/definitions/user-preferences/definitions/dismissed-pipelines-github-banner" + }, + "dismissed-pipelines-github-banners": { + "$ref": "#/definitions/user-preferences/definitions/dismissed-pipelines-github-banners" + }, + "dismissed-sms-banner": { + "$ref": "#/definitions/user-preferences/definitions/dismissed-sms-banner" + } + } + }, + "targetSchema": { + "$ref": "#/definitions/user-preferences" + }, + "title": "Update" + } + ], + "properties": { + "timezone": { + "$ref": "#/definitions/user-preferences/definitions/timezone" + }, + "default-organization": { + "$ref": "#/definitions/user-preferences/definitions/default-organization" + }, + "dismissed-github-banner": { + "$ref": "#/definitions/user-preferences/definitions/dismissed-github-banner" + }, + "dismissed-getting-started": { + "$ref": "#/definitions/user-preferences/definitions/dismissed-getting-started" + }, + "dismissed-org-access-controls": { + "$ref": "#/definitions/user-preferences/definitions/dismissed-org-access-controls" + }, + "dismissed-org-wizard-notification": { + "$ref": "#/definitions/user-preferences/definitions/dismissed-org-wizard-notification" + }, + "dismissed-pipelines-banner": { + "$ref": "#/definitions/user-preferences/definitions/dismissed-pipelines-banner" + }, + "dismissed-pipelines-github-banner": { + "$ref": "#/definitions/user-preferences/definitions/dismissed-pipelines-github-banner" + }, + "dismissed-pipelines-github-banners": { + "$ref": "#/definitions/user-preferences/definitions/dismissed-pipelines-github-banners" + }, + "dismissed-sms-banner": { + "$ref": "#/definitions/user-preferences/definitions/dismissed-sms-banner" + } + } + }, + "whitelisted-add-on-service": { + "description": "Entities that have been whitelisted to be used by an Organization", + "$schema": "http://json-schema.org/draft-04/hyper-schema", + "stability": "prototype", + "strictProperties": true, + "title": "Heroku Platform API - Whitelisted Entity", + "type": [ + "object" + 
], + "definitions": { + "added_at": { + "description": "when the add-on service was whitelisted", + "example": "2012-01-01T12:00:00Z", + "format": "date-time", + "readOnly": true, + "type": [ + "string" + ] + }, + "added_by": { + "description": "the user who whitelisted the Add-on Service", + "properties": { + "email": { + "$ref": "#/definitions/account/definitions/email", + "type": [ + "string", + "null" + ] + }, + "id": { + "$ref": "#/definitions/account/definitions/id", + "type": [ + "string", + "null" + ] + } + }, + "readOnly": true, + "type": [ + "object" + ] + }, + "addon_service": { + "description": "the Add-on Service whitelisted for use", + "properties": { + "id": { + "$ref": "#/definitions/add-on-service/definitions/id" + }, + "name": { + "$ref": "#/definitions/add-on-service/definitions/name" + }, + "human_name": { + "$ref": "#/definitions/add-on-service/definitions/human_name" + } + }, + "readOnly": true, + "type": [ + "object" + ] + }, + "id": { + "description": "unique identifier for this whitelisting entity", + "example": "01234567-89ab-cdef-0123-456789abcdef", + "format": "uuid", + "readOnly": true, + "type": [ + "string" + ] + }, + "identity": { + "anyOf": [ + { + "$ref": "#/definitions/whitelisted-add-on-service/definitions/id" + }, + { + "$ref": "#/definitions/add-on-service/definitions/name" + } + ] + } + }, + "links": [ + { + "description": "List all whitelisted Add-on Services for an Organization", + "href": "/organizations/{(%23%2Fdefinitions%2Forganization%2Fdefinitions%2Fidentity)}/whitelisted-addon-services", + "method": "GET", + "rel": "instances", + "targetSchema": { + "items": { + "$ref": "#/definitions/whitelisted-add-on-service" + }, + "type": [ + "array" + ] + }, + "title": "List" + }, + { + "description": "Whitelist an Add-on Service", + "href": "/organizations/{(%23%2Fdefinitions%2Forganization%2Fdefinitions%2Fidentity)}/whitelisted-addon-services", + "method": "POST", + "rel": "create", + "schema": { + "type": [ + "object" + ], + "properties": { + "addon_service": { + "description": "name of the Add-on to whitelist", + "example": "heroku-postgresql", + "type": [ + "string" + ] + } + } + }, + "targetSchema": { + "items": { + "$ref": "#/definitions/whitelisted-add-on-service" + }, + "type": [ + "array" + ] + }, + "title": "Create" + }, + { + "description": "Remove a whitelisted entity", + "href": "/organizations/{(%23%2Fdefinitions%2Forganization%2Fdefinitions%2Fidentity)}/whitelisted-addon-services/{(%23%2Fdefinitions%2Fwhitelisted-add-on-service%2Fdefinitions%2Fidentity)}", + "method": "DELETE", + "rel": "destroy", + "targetSchema": { + "$ref": "#/definitions/whitelisted-add-on-service" + }, + "title": "Delete" + } + ], + "properties": { + "added_at": { + "$ref": "#/definitions/whitelisted-add-on-service/definitions/added_at" + }, + "added_by": { + "$ref": "#/definitions/whitelisted-add-on-service/definitions/added_by" + }, + "addon_service": { + "$ref": "#/definitions/whitelisted-add-on-service/definitions/addon_service" + }, + "id": { + "$ref": "#/definitions/whitelisted-add-on-service/definitions/id" + } + } } }, "properties": { @@ -5744,15 +12243,33 @@ "account": { "$ref": "#/definitions/account" }, - "addon-service": { - "$ref": "#/definitions/addon-service" + "add-on-action": { + "$ref": "#/definitions/add-on-action" }, - "addon": { - "$ref": "#/definitions/addon" + "add-on-attachment": { + "$ref": "#/definitions/add-on-attachment" + }, + "add-on-config": { + "$ref": "#/definitions/add-on-config" + }, + "add-on-plan-action": { + "$ref": 
"#/definitions/add-on-plan-action" + }, + "add-on-region-capability": { + "$ref": "#/definitions/add-on-region-capability" + }, + "add-on-service": { + "$ref": "#/definitions/add-on-service" + }, + "add-on": { + "$ref": "#/definitions/add-on" }, "app-feature": { "$ref": "#/definitions/app-feature" }, + "app-formation-set": { + "$ref": "#/definitions/app-formation-set" + }, "app-setup": { "$ref": "#/definitions/app-setup" }, @@ -5768,6 +12285,9 @@ "build": { "$ref": "#/definitions/build" }, + "buildpack-installation": { + "$ref": "#/definitions/buildpack-installation" + }, "collaborator": { "$ref": "#/definitions/collaborator" }, @@ -5780,12 +12300,39 @@ "domain": { "$ref": "#/definitions/domain" }, + "dyno-size": { + "$ref": "#/definitions/dyno-size" + }, "dyno": { "$ref": "#/definitions/dyno" }, + "event": { + "$ref": "#/definitions/event" + }, + "failed-event": { + "$ref": "#/definitions/failed-event" + }, + "filter-apps": { + "$ref": "#/definitions/filter-apps" + }, "formation": { "$ref": "#/definitions/formation" }, + "identity-provider": { + "$ref": "#/definitions/identity-provider" + }, + "inbound-ruleset": { + "$ref": "#/definitions/inbound-ruleset" + }, + "invitation": { + "$ref": "#/definitions/invitation" + }, + "invoice-address": { + "$ref": "#/definitions/invoice-address" + }, + "invoice": { + "$ref": "#/definitions/invoice" + }, "key": { "$ref": "#/definitions/key" }, @@ -5807,18 +12354,54 @@ "oauth-token": { "$ref": "#/definitions/oauth-token" }, + "organization-add-on": { + "$ref": "#/definitions/organization-add-on" + }, "organization-app-collaborator": { "$ref": "#/definitions/organization-app-collaborator" }, "organization-app": { "$ref": "#/definitions/organization-app" }, + "organization-feature": { + "$ref": "#/definitions/organization-feature" + }, + "organization-invitation": { + "$ref": "#/definitions/organization-invitation" + }, + "organization-invoice": { + "$ref": "#/definitions/organization-invoice" + }, "organization-member": { "$ref": "#/definitions/organization-member" }, + "organization-preferences": { + "$ref": "#/definitions/organization-preferences" + }, "organization": { "$ref": "#/definitions/organization" }, + "outbound-ruleset": { + "$ref": "#/definitions/outbound-ruleset" + }, + "password-reset": { + "$ref": "#/definitions/password-reset" + }, + "organization-app-permission": { + "$ref": "#/definitions/organization-app-permission" + }, + "pipeline-coupling": { + "$ref": "#/definitions/pipeline-coupling" + }, + "pipeline-promotion-target": { + "$ref": "#/definitions/pipeline-promotion-target" + }, + "pipeline-promotion": { + "$ref": "#/definitions/pipeline-promotion" + }, + "pipeline": { + "$ref": "#/definitions/pipeline" + }, "plan": { "$ref": "#/definitions/plan" }, @@ -5834,16 +12417,37 @@ "slug": { "$ref": "#/definitions/slug" }, + "sms-number": { + "$ref": "#/definitions/sms-number" + }, + "sni-endpoint": { + "$ref": "#/definitions/sni-endpoint" + }, + "source": { + "$ref": "#/definitions/source" + }, + "space-app-access": { + "$ref": "#/definitions/space-app-access" + }, + "space-nat": { + "$ref": "#/definitions/space-nat" + }, + "space": { + "$ref": "#/definitions/space" + }, "ssl-endpoint": { "$ref": "#/definitions/ssl-endpoint" }, "stack": { "$ref": "#/definitions/stack" + }, + "user-preferences": { + "$ref": "#/definitions/user-preferences" + }, + "whitelisted-add-on-service": { + "$ref": "#/definitions/whitelisted-add-on-service" } }, - "type": [ - "object" - ], "description": "The platform API empowers developers to automate, extend and 
combine Heroku with other services.", "id": "http://api.heroku.com/schema#", "links": [ diff --git a/vendor/github.com/docker/docker/NOTICE b/vendor/github.com/docker/docker/NOTICE index 0c74e15b05..8a37c1c7bc 100644 --- a/vendor/github.com/docker/docker/NOTICE +++ b/vendor/github.com/docker/docker/NOTICE @@ -1,5 +1,5 @@ Docker -Copyright 2012-2017 Docker, Inc. +Copyright 2012-2016 Docker, Inc. This product includes software developed at Docker, Inc. (https://www.docker.com). diff --git a/vendor/github.com/docker/docker/api/types/strslice/strslice.go b/vendor/github.com/docker/docker/api/types/strslice/strslice.go new file mode 100644 index 0000000000..bad493fb89 --- /dev/null +++ b/vendor/github.com/docker/docker/api/types/strslice/strslice.go @@ -0,0 +1,30 @@ +package strslice + +import "encoding/json" + +// StrSlice represents a string or an array of strings. +// We need to override the json decoder to accept both options. +type StrSlice []string + +// UnmarshalJSON decodes the byte slice whether it's a string or an array of +// strings. This method is needed to implement json.Unmarshaler. +func (e *StrSlice) UnmarshalJSON(b []byte) error { + if len(b) == 0 { + // With no input, we preserve the existing value by returning nil and + // leaving the target alone. This allows defining default values for + // the type. + return nil + } + + p := make([]string, 0, 1) + if err := json.Unmarshal(b, &p); err != nil { + var s string + if err := json.Unmarshal(b, &s); err != nil { + return err + } + p = append(p, s) + } + + *e = p + return nil +} diff --git a/vendor/github.com/docker/docker/pkg/urlutil/urlutil.go b/vendor/github.com/docker/docker/pkg/urlutil/urlutil.go new file mode 100644 index 0000000000..44152873b1 --- /dev/null +++ b/vendor/github.com/docker/docker/pkg/urlutil/urlutil.go @@ -0,0 +1,50 @@ +// Package urlutil provides helper function to check urls kind. +// It supports http urls, git urls and transport url (tcp://, …) +package urlutil + +import ( + "regexp" + "strings" +) + +var ( + validPrefixes = map[string][]string{ + "url": {"http://", "https://"}, + "git": {"git://", "github.com/", "git@"}, + "transport": {"tcp://", "tcp+tls://", "udp://", "unix://", "unixgram://"}, + } + urlPathWithFragmentSuffix = regexp.MustCompile(".git(?:#.+)?$") +) + +// IsURL returns true if the provided str is an HTTP(S) URL. +func IsURL(str string) bool { + return checkURL(str, "url") +} + +// IsGitURL returns true if the provided str is a git repository URL. +func IsGitURL(str string) bool { + if IsURL(str) && urlPathWithFragmentSuffix.MatchString(str) { + return true + } + return checkURL(str, "git") +} + +// IsGitTransport returns true if the provided str is a git transport by inspecting +// the prefix of the string for known protocols used in git. +func IsGitTransport(str string) bool { + return IsURL(str) || strings.HasPrefix(str, "git://") || strings.HasPrefix(str, "git@") +} + +// IsTransportURL returns true if the provided str is a transport (tcp, tcp+tls, udp, unix) URL. 
+func IsTransportURL(str string) bool { + return checkURL(str, "transport") +} + +func checkURL(str, kind string) bool { + for _, prefix := range validPrefixes[kind] { + if strings.HasPrefix(str, prefix) { + return true + } + } + return false +} diff --git a/vendor/github.com/docker/go-connections/LICENSE b/vendor/github.com/docker/go-connections/LICENSE new file mode 100644 index 0000000000..b55b37bc31 --- /dev/null +++ b/vendor/github.com/docker/go-connections/LICENSE @@ -0,0 +1,191 @@ + + Apache License + Version 2.0, January 2004 + https://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." 
+ + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. 
+ + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + Copyright 2015 Docker, Inc. + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. 
+ You may obtain a copy of the License at + + https://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/vendor/github.com/docker/go-connections/nat/nat.go b/vendor/github.com/docker/go-connections/nat/nat.go new file mode 100644 index 0000000000..bca3c2c99a --- /dev/null +++ b/vendor/github.com/docker/go-connections/nat/nat.go @@ -0,0 +1,243 @@ +// Package nat is a convenience package for manipulation of strings describing network ports. +package nat + +import ( + "fmt" + "net" + "strconv" + "strings" +) + +const ( + // portSpecTemplate is the expected format for port specifications + portSpecTemplate = "ip:hostPort:containerPort" +) + +// PortBinding represents a binding between a Host IP address and a Host Port +type PortBinding struct { + // HostIP is the host IP Address + HostIP string `json:"HostIp"` + // HostPort is the host port number + HostPort string +} + +// PortMap is a collection of PortBinding indexed by Port +type PortMap map[Port][]PortBinding + +// PortSet is a collection of structs indexed by Port +type PortSet map[Port]struct{} + +// Port is a string containing port number and protocol in the format "80/tcp" +type Port string + +// NewPort creates a new instance of a Port given a protocol and port number or port range +func NewPort(proto, port string) (Port, error) { + // Check for parsing issues on "port" now so we can avoid having + // to check it later on. + + portStartInt, portEndInt, err := ParsePortRangeToInt(port) + if err != nil { + return "", err + } + + if portStartInt == portEndInt { + return Port(fmt.Sprintf("%d/%s", portStartInt, proto)), nil + } + return Port(fmt.Sprintf("%d-%d/%s", portStartInt, portEndInt, proto)), nil +} + +// ParsePort parses the port number string and returns an int +func ParsePort(rawPort string) (int, error) { + if len(rawPort) == 0 { + return 0, nil + } + port, err := strconv.ParseUint(rawPort, 10, 16) + if err != nil { + return 0, err + } + return int(port), nil +} + +// ParsePortRangeToInt parses the port range string and returns start/end ints +func ParsePortRangeToInt(rawPort string) (int, int, error) { + if len(rawPort) == 0 { + return 0, 0, nil + } + start, end, err := ParsePortRange(rawPort) + if err != nil { + return 0, 0, err + } + return int(start), int(end), nil +} + +// Proto returns the protocol of a Port +func (p Port) Proto() string { + proto, _ := SplitProtoPort(string(p)) + return proto +} + +// Port returns the port number of a Port +func (p Port) Port() string { + _, port := SplitProtoPort(string(p)) + return port +} + +// Int returns the port number of a Port as an int +func (p Port) Int() int { + portStr := p.Port() + if len(portStr) == 0 { + return 0 + } + + // We don't need to check for an error because we're going to + // assume that any error would have been found, and reported, in NewPort() + port, _ := strconv.ParseUint(portStr, 10, 16) + return int(port) +} + +// Range returns the start/end port numbers of a Port range as ints +func (p Port) Range() (int, int, error) { + return ParsePortRangeToInt(p.Port()) +} + +// SplitProtoPort splits a port in the format of proto/port +func SplitProtoPort(rawPort string) (string, string) { + parts := strings.Split(rawPort, "/") + l := len(parts) + if 
len(rawPort) == 0 || l == 0 || len(parts[0]) == 0 { + return "", "" + } + if l == 1 { + return "tcp", rawPort + } + if len(parts[1]) == 0 { + return "tcp", parts[0] + } + return parts[1], parts[0] +} + +func validateProto(proto string) bool { + for _, availableProto := range []string{"tcp", "udp"} { + if availableProto == proto { + return true + } + } + return false +} + +// ParsePortSpecs receives port specs in the format of ip:public:private/proto and parses +// these in to the internal types +func ParsePortSpecs(ports []string) (map[Port]struct{}, map[Port][]PortBinding, error) { + var ( + exposedPorts = make(map[Port]struct{}, len(ports)) + bindings = make(map[Port][]PortBinding) + ) + for _, rawPort := range ports { + portMappings, err := ParsePortSpec(rawPort) + if err != nil { + return nil, nil, err + } + + for _, portMapping := range portMappings { + port := portMapping.Port + if _, exists := exposedPorts[port]; !exists { + exposedPorts[port] = struct{}{} + } + bslice, exists := bindings[port] + if !exists { + bslice = []PortBinding{} + } + bindings[port] = append(bslice, portMapping.Binding) + } + } + return exposedPorts, bindings, nil +} + +// PortMapping is a data object mapping a Port to a PortBinding +type PortMapping struct { + Port Port + Binding PortBinding +} + +// ParsePortSpec parses a port specification string into a slice of PortMappings +func ParsePortSpec(rawPort string) ([]PortMapping, error) { + proto := "tcp" + + if i := strings.LastIndex(rawPort, "/"); i != -1 { + proto = rawPort[i+1:] + rawPort = rawPort[:i] + } + if !strings.Contains(rawPort, ":") { + rawPort = fmt.Sprintf("::%s", rawPort) + } else if len(strings.Split(rawPort, ":")) == 2 { + rawPort = fmt.Sprintf(":%s", rawPort) + } + + parts, err := PartParser(portSpecTemplate, rawPort) + if err != nil { + return nil, err + } + + var ( + containerPort = parts["containerPort"] + rawIP = parts["ip"] + hostPort = parts["hostPort"] + ) + + if rawIP != "" && net.ParseIP(rawIP) == nil { + return nil, fmt.Errorf("Invalid ip address: %s", rawIP) + } + if containerPort == "" { + return nil, fmt.Errorf("No port specified: %s", rawPort) + } + + startPort, endPort, err := ParsePortRange(containerPort) + if err != nil { + return nil, fmt.Errorf("Invalid containerPort: %s", containerPort) + } + + var startHostPort, endHostPort uint64 = 0, 0 + if len(hostPort) > 0 { + startHostPort, endHostPort, err = ParsePortRange(hostPort) + if err != nil { + return nil, fmt.Errorf("Invalid hostPort: %s", hostPort) + } + } + + if hostPort != "" && (endPort-startPort) != (endHostPort-startHostPort) { + // Allow host port range iff containerPort is not a range. + // In this case, use the host port range as the dynamic + // host port range to allocate into. + if endPort != startPort { + return nil, fmt.Errorf("Invalid ranges specified for container and host Ports: %s and %s", containerPort, hostPort) + } + } + + if !validateProto(strings.ToLower(proto)) { + return nil, fmt.Errorf("Invalid proto: %s", proto) + } + + ports := []PortMapping{} + for i := uint64(0); i <= (endPort - startPort); i++ { + containerPort = strconv.FormatUint(startPort+i, 10) + if len(hostPort) > 0 { + hostPort = strconv.FormatUint(startHostPort+i, 10) + } + // Set hostPort to a range only if there is a single container port + // and a dynamic host port. 
+ if startPort == endPort && startHostPort != endHostPort { + hostPort = fmt.Sprintf("%s-%s", hostPort, strconv.FormatUint(endHostPort, 10)) + } + port, err := NewPort(strings.ToLower(proto), containerPort) + if err != nil { + return nil, err + } + + binding := PortBinding{ + HostIP: rawIP, + HostPort: hostPort, + } + ports = append(ports, PortMapping{Port: port, Binding: binding}) + } + return ports, nil +} diff --git a/vendor/github.com/docker/go-connections/nat/parse.go b/vendor/github.com/docker/go-connections/nat/parse.go new file mode 100644 index 0000000000..872050205f --- /dev/null +++ b/vendor/github.com/docker/go-connections/nat/parse.go @@ -0,0 +1,56 @@ +package nat + +import ( + "fmt" + "strconv" + "strings" +) + +// PartParser parses and validates the specified string (data) using the specified template +// e.g. ip:public:private -> 192.168.0.1:80:8000 +func PartParser(template, data string) (map[string]string, error) { + // ip:public:private + var ( + templateParts = strings.Split(template, ":") + parts = strings.Split(data, ":") + out = make(map[string]string, len(templateParts)) + ) + if len(parts) != len(templateParts) { + return nil, fmt.Errorf("Invalid format to parse. %s should match template %s", data, template) + } + + for i, t := range templateParts { + value := "" + if len(parts) > i { + value = parts[i] + } + out[t] = value + } + return out, nil +} + +// ParsePortRange parses and validates the specified string as a port-range (8000-9000) +func ParsePortRange(ports string) (uint64, uint64, error) { + if ports == "" { + return 0, 0, fmt.Errorf("Empty string specified for ports.") + } + if !strings.Contains(ports, "-") { + start, err := strconv.ParseUint(ports, 10, 16) + end := start + return start, end, err + } + + parts := strings.Split(ports, "-") + start, err := strconv.ParseUint(parts[0], 10, 16) + if err != nil { + return 0, 0, err + } + end, err := strconv.ParseUint(parts[1], 10, 16) + if err != nil { + return 0, 0, err + } + if end < start { + return 0, 0, fmt.Errorf("Invalid range specified for the Port: %s", ports) + } + return start, end, nil +} diff --git a/vendor/github.com/docker/go-connections/nat/sort.go b/vendor/github.com/docker/go-connections/nat/sort.go new file mode 100644 index 0000000000..ce950171e3 --- /dev/null +++ b/vendor/github.com/docker/go-connections/nat/sort.go @@ -0,0 +1,96 @@ +package nat + +import ( + "sort" + "strings" +) + +type portSorter struct { + ports []Port + by func(i, j Port) bool +} + +func (s *portSorter) Len() int { + return len(s.ports) +} + +func (s *portSorter) Swap(i, j int) { + s.ports[i], s.ports[j] = s.ports[j], s.ports[i] +} + +func (s *portSorter) Less(i, j int) bool { + ip := s.ports[i] + jp := s.ports[j] + + return s.by(ip, jp) +} + +// Sort sorts a list of ports using the provided predicate +// This function should compare `i` and `j`, returning true if `i` is +// considered to be less than `j` +func Sort(ports []Port, predicate func(i, j Port) bool) { + s := &portSorter{ports, predicate} + sort.Sort(s) +} + +type portMapEntry struct { + port Port + binding PortBinding +} + +type portMapSorter []portMapEntry + +func (s portMapSorter) Len() int { return len(s) } +func (s portMapSorter) Swap(i, j int) { s[i], s[j] = s[j], s[i] } + +// sort the port so that the order is: +// 1. port with larger specified bindings +// 2. larger port +// 3. 
port with tcp protocol +func (s portMapSorter) Less(i, j int) bool { + pi, pj := s[i].port, s[j].port + hpi, hpj := toInt(s[i].binding.HostPort), toInt(s[j].binding.HostPort) + return hpi > hpj || pi.Int() > pj.Int() || (pi.Int() == pj.Int() && strings.ToLower(pi.Proto()) == "tcp") +} + +// SortPortMap sorts the list of ports and their respective mapping. The ports +// with explicit HostPort will be placed first. +func SortPortMap(ports []Port, bindings PortMap) { + s := portMapSorter{} + for _, p := range ports { + if binding, ok := bindings[p]; ok { + for _, b := range binding { + s = append(s, portMapEntry{port: p, binding: b}) + } + bindings[p] = []PortBinding{} + } else { + s = append(s, portMapEntry{port: p}) + } + } + + sort.Sort(s) + var ( + i int + pm = make(map[Port]struct{}) + ) + // reorder ports + for _, entry := range s { + if _, ok := pm[entry.port]; !ok { + ports[i] = entry.port + pm[entry.port] = struct{}{} + i++ + } + // reorder bindings for this port + if _, ok := bindings[entry.port]; ok { + bindings[entry.port] = append(bindings[entry.port], entry.binding) + } + } +} + +func toInt(s string) uint64 { + i, _, err := ParsePortRange(s) + if err != nil { + i = 0 + } + return i +} diff --git a/vendor/github.com/docker/libcompose/LICENSE b/vendor/github.com/docker/libcompose/LICENSE new file mode 100644 index 0000000000..9023c749e6 --- /dev/null +++ b/vendor/github.com/docker/libcompose/LICENSE @@ -0,0 +1,191 @@ + + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship.
For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + Copyright 2015 Docker, Inc. + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. 
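The vendored `strslice` package above exists so that a field such as a container command can be decoded from either a JSON string or a JSON array. A minimal sketch of that behavior (the driver program below is illustrative only, not part of the vendored sources):

```go
package main

import (
	"encoding/json"
	"fmt"

	"github.com/docker/docker/api/types/strslice"
)

func main() {
	var cmd strslice.StrSlice

	// A bare JSON string decodes into a one-element slice.
	if err := json.Unmarshal([]byte(`"echo hello"`), &cmd); err != nil {
		panic(err)
	}
	fmt.Println(len(cmd), cmd) // 1 [echo hello]

	// A JSON array decodes element by element.
	if err := json.Unmarshal([]byte(`["echo", "hello"]`), &cmd); err != nil {
		panic(err)
	}
	fmt.Println(len(cmd), cmd) // 2 [echo hello]
}
```

Similarly, the vendored `nat` package parses human-readable port specifications of the form `ip:hostPort:containerPort/proto` into typed `Port` and `PortBinding` values; the ip and host-port segments are optional, and the protocol suffix defaults to `tcp`. A short usage sketch against the exported `ParsePortSpec` and `NewPort` functions shown above (the values are hypothetical):

```go
package main

import (
	"fmt"

	"github.com/docker/go-connections/nat"
)

func main() {
	// One spec can expand to several mappings when a port range is given;
	// here it yields exactly one.
	mappings, err := nat.ParsePortSpec("127.0.0.1:8080:80/tcp")
	if err != nil {
		panic(err)
	}
	for _, m := range mappings {
		// Prints: 80/tcp -> 127.0.0.1:8080
		fmt.Printf("%s -> %s:%s\n", m.Port, m.Binding.HostIP, m.Binding.HostPort)
	}

	// NewPort validates the number and normalizes to "port/proto".
	p, err := nat.NewPort("tcp", "80")
	if err != nil {
		panic(err)
	}
	fmt.Println(p.Proto(), p.Int()) // tcp 80
}
```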
diff --git a/vendor/github.com/docker/libcompose/config/convert.go b/vendor/github.com/docker/libcompose/config/convert.go new file mode 100644 index 0000000000..dacee03b50 --- /dev/null +++ b/vendor/github.com/docker/libcompose/config/convert.go @@ -0,0 +1,44 @@ +package config + +import ( + "github.com/docker/libcompose/utils" + "github.com/docker/libcompose/yaml" +) + +// ConvertServices converts a set of v1 service configs to v2 service configs +func ConvertServices(v1Services map[string]*ServiceConfigV1) (map[string]*ServiceConfig, error) { + v2Services := make(map[string]*ServiceConfig) + replacementFields := make(map[string]*ServiceConfig) + + for name, service := range v1Services { + replacementFields[name] = &ServiceConfig{ + Build: yaml.Build{ + Context: service.Build, + Dockerfile: service.Dockerfile, + }, + Logging: Log{ + Driver: service.LogDriver, + Options: service.LogOpt, + }, + NetworkMode: service.Net, + } + + v1Services[name].Build = "" + v1Services[name].Dockerfile = "" + v1Services[name].LogDriver = "" + v1Services[name].LogOpt = nil + v1Services[name].Net = "" + } + + if err := utils.Convert(v1Services, &v2Services); err != nil { + return nil, err + } + + for name := range v2Services { + v2Services[name].Build = replacementFields[name].Build + v2Services[name].Logging = replacementFields[name].Logging + v2Services[name].NetworkMode = replacementFields[name].NetworkMode + } + + return v2Services, nil +} diff --git a/vendor/github.com/docker/libcompose/config/hash.go b/vendor/github.com/docker/libcompose/config/hash.go new file mode 100644 index 0000000000..a2f7f04a6a --- /dev/null +++ b/vendor/github.com/docker/libcompose/config/hash.go @@ -0,0 +1,95 @@ +package config + +import ( + "crypto/sha1" + "encoding/hex" + "fmt" + "io" + "reflect" + "sort" + + "github.com/docker/libcompose/yaml" +) + +// GetServiceHash computes and returns a hash that will identify a service. +// This hash will then be used to detect if the service definition/configuration +// has changed and the service needs to be recreated.
+func GetServiceHash(name string, config *ServiceConfig) string { + hash := sha1.New() + + io.WriteString(hash, name) + + //Get values of Service through reflection + val := reflect.ValueOf(config).Elem() + + //Create slice to sort the keys in Service Config, which allow constant hash ordering + serviceKeys := []string{} + + //Create a data structure of map of values keyed by a string + unsortedKeyValue := make(map[string]interface{}) + + //Get all keys and values in Service Configuration + for i := 0; i < val.NumField(); i++ { + valueField := val.Field(i) + keyField := val.Type().Field(i) + + serviceKeys = append(serviceKeys, keyField.Name) + unsortedKeyValue[keyField.Name] = valueField.Interface() + } + + //Sort serviceKeys alphabetically + sort.Strings(serviceKeys) + + //Go through keys and write hash + for _, serviceKey := range serviceKeys { + serviceValue := unsortedKeyValue[serviceKey] + + io.WriteString(hash, fmt.Sprintf("\n %v: ", serviceKey)) + + switch s := serviceValue.(type) { + case yaml.SliceorMap: + sliceKeys := []string{} + for lkey := range s { + sliceKeys = append(sliceKeys, lkey) + } + sort.Strings(sliceKeys) + + for _, sliceKey := range sliceKeys { + io.WriteString(hash, fmt.Sprintf("%s=%v, ", sliceKey, s[sliceKey])) + } + case yaml.MaporEqualSlice: + for _, sliceKey := range s { + io.WriteString(hash, fmt.Sprintf("%s, ", sliceKey)) + } + case yaml.MaporColonSlice: + for _, sliceKey := range s { + io.WriteString(hash, fmt.Sprintf("%s, ", sliceKey)) + } + case yaml.MaporSpaceSlice: + for _, sliceKey := range s { + io.WriteString(hash, fmt.Sprintf("%s, ", sliceKey)) + } + case yaml.Command: + for _, sliceKey := range s { + io.WriteString(hash, fmt.Sprintf("%s, ", sliceKey)) + } + case yaml.Stringorslice: + sort.Strings(s) + + for _, sliceKey := range s { + io.WriteString(hash, fmt.Sprintf("%s, ", sliceKey)) + } + case []string: + sliceKeys := s + sort.Strings(sliceKeys) + + for _, sliceKey := range sliceKeys { + io.WriteString(hash, fmt.Sprintf("%s, ", sliceKey)) + } + default: + io.WriteString(hash, fmt.Sprintf("%v", serviceValue)) + } + } + + return hex.EncodeToString(hash.Sum(nil)) +} diff --git a/vendor/github.com/docker/libcompose/config/interpolation.go b/vendor/github.com/docker/libcompose/config/interpolation.go new file mode 100644 index 0000000000..66c987a713 --- /dev/null +++ b/vendor/github.com/docker/libcompose/config/interpolation.go @@ -0,0 +1,157 @@ +package config + +import ( + "bytes" + "fmt" + "strings" + + "github.com/Sirupsen/logrus" +) + +func isNum(c uint8) bool { + return c >= '0' && c <= '9' +} + +func validVariableNameChar(c uint8) bool { + return c == '_' || + c >= 'A' && c <= 'Z' || + c >= 'a' && c <= 'z' || + isNum(c) +} + +func parseVariable(line string, pos int, mapping func(string) string) (string, int, bool) { + var buffer bytes.Buffer + + for ; pos < len(line); pos++ { + c := line[pos] + + switch { + case validVariableNameChar(c): + buffer.WriteByte(c) + default: + return mapping(buffer.String()), pos - 1, true + } + } + + return mapping(buffer.String()), pos, true +} + +func parseVariableWithBraces(line string, pos int, mapping func(string) string) (string, int, bool) { + var buffer bytes.Buffer + + for ; pos < len(line); pos++ { + c := line[pos] + + switch { + case c == '}': + bufferString := buffer.String() + + if bufferString == "" { + return "", 0, false + } + + return mapping(buffer.String()), pos, true + case validVariableNameChar(c): + buffer.WriteByte(c) + default: + return "", 0, false + } + } + + return "", 0, false +} + +func 
parseInterpolationExpression(line string, pos int, mapping func(string) string) (string, int, bool) { + c := line[pos] + + switch { + case c == '$': + return "$", pos, true + case c == '{': + return parseVariableWithBraces(line, pos+1, mapping) + case !isNum(c) && validVariableNameChar(c): + // Variables can't start with a number + return parseVariable(line, pos, mapping) + default: + return "", 0, false + } +} + +func parseLine(line string, mapping func(string) string) (string, bool) { + var buffer bytes.Buffer + + for pos := 0; pos < len(line); pos++ { + c := line[pos] + switch { + case c == '$': + var replaced string + var success bool + + replaced, pos, success = parseInterpolationExpression(line, pos+1, mapping) + + if !success { + return "", false + } + + buffer.WriteString(replaced) + default: + buffer.WriteByte(c) + } + } + + return buffer.String(), true +} + +func parseConfig(key string, data *interface{}, mapping func(string) string) error { + switch typedData := (*data).(type) { + case string: + var success bool + + *data, success = parseLine(typedData, mapping) + + if !success { + return fmt.Errorf("Invalid interpolation format for key \"%s\": \"%s\"", key, typedData) + } + case []interface{}: + for k, v := range typedData { + err := parseConfig(key, &v, mapping) + + if err != nil { + return err + } + + typedData[k] = v + } + case map[interface{}]interface{}: + for k, v := range typedData { + err := parseConfig(key, &v, mapping) + + if err != nil { + return err + } + + typedData[k] = v + } + } + + return nil +} + +// Interpolate replaces variables in a map entry +func Interpolate(key string, data *interface{}, environmentLookup EnvironmentLookup) error { + return parseConfig(key, data, func(s string) string { + values := environmentLookup.Lookup(s, nil) + + if len(values) == 0 { + logrus.Warnf("The %s variable is not set. 
Substituting a blank string.", s) + return "" + } + + // Use first result if many are given + value := values[0] + + // Environment variables come in key=value format + // Return everything past first '=' + return strings.SplitN(value, "=", 2)[1] + }) +} diff --git a/vendor/github.com/docker/libcompose/config/merge.go b/vendor/github.com/docker/libcompose/config/merge.go new file mode 100644 index 0000000000..f40619d4c2 --- /dev/null +++ b/vendor/github.com/docker/libcompose/config/merge.go @@ -0,0 +1,246 @@ +package config + +import ( + "bufio" + "bytes" + "fmt" + "strings" + + "github.com/docker/docker/pkg/urlutil" + "github.com/docker/libcompose/utils" + composeYaml "github.com/docker/libcompose/yaml" + "gopkg.in/yaml.v2" +) + +var ( + noMerge = []string{ + "links", + "volumes_from", + } + defaultParseOptions = ParseOptions{ + Interpolate: true, + Validate: true, + } +) + +// CreateConfig unmarshals bytes to config and creates config based on version +func CreateConfig(bytes []byte) (*Config, error) { + var config Config + if err := yaml.Unmarshal(bytes, &config); err != nil { + return nil, err + } + + if config.Version != "2" { + var baseRawServices RawServiceMap + if err := yaml.Unmarshal(bytes, &baseRawServices); err != nil { + return nil, err + } + config.Services = baseRawServices + } + + if config.Volumes == nil { + config.Volumes = make(map[string]interface{}) + } + if config.Networks == nil { + config.Networks = make(map[string]interface{}) + } + + return &config, nil +} + +// Merge merges a compose file into an existing set of service configs +func Merge(existingServices *ServiceConfigs, environmentLookup EnvironmentLookup, resourceLookup ResourceLookup, file string, bytes []byte, options *ParseOptions) (string, map[string]*ServiceConfig, map[string]*VolumeConfig, map[string]*NetworkConfig, error) { + if options == nil { + options = &defaultParseOptions + } + + config, err := CreateConfig(bytes) + if err != nil { + return "", nil, nil, nil, err + } + baseRawServices := config.Services + + if options.Interpolate { + if err := InterpolateRawServiceMap(&baseRawServices, environmentLookup); err != nil { + return "", nil, nil, nil, err + } + + for k, v := range config.Volumes { + if err := Interpolate(k, &v, environmentLookup); err != nil { + return "", nil, nil, nil, err + } + config.Volumes[k] = v + } + + for k, v := range config.Networks { + if err := Interpolate(k, &v, environmentLookup); err != nil { + return "", nil, nil, nil, err + } + config.Networks[k] = v + } + } + + if options.Preprocess != nil { + var err error + baseRawServices, err = options.Preprocess(baseRawServices) + if err != nil { + return "", nil, nil, nil, err + } + } + + var serviceConfigs map[string]*ServiceConfig + if config.Version == "2" { + var err error + serviceConfigs, err = MergeServicesV2(existingServices, environmentLookup, resourceLookup, file, baseRawServices, options) + if err != nil { + return "", nil, nil, nil, err + } + } else { + serviceConfigsV1, err := MergeServicesV1(existingServices, environmentLookup, resourceLookup, file, baseRawServices, options) + if err != nil { + return "", nil, nil, nil, err + } + serviceConfigs, err = ConvertServices(serviceConfigsV1) + if err != nil { + return "", nil, nil, nil, err + } + } + + adjustValues(serviceConfigs) + + if options.Postprocess != nil { + var err error + serviceConfigs, err = options.Postprocess(serviceConfigs) + if err != nil { + return "", nil, nil, nil, err + } + } + + var volumes map[string]*VolumeConfig + var networks 
map[string]*NetworkConfig + if err := utils.Convert(config.Volumes, &volumes); err != nil { + return "", nil, nil, nil, err + } + if err := utils.Convert(config.Networks, &networks); err != nil { + return "", nil, nil, nil, err + } + + return config.Version, serviceConfigs, volumes, networks, nil +} + +// InterpolateRawServiceMap replaces variables in a raw service map based on the environment lookup +func InterpolateRawServiceMap(baseRawServices *RawServiceMap, environmentLookup EnvironmentLookup) error { + for k, v := range *baseRawServices { + for k2, v2 := range v { + if err := Interpolate(k2, &v2, environmentLookup); err != nil { + return err + } + (*baseRawServices)[k][k2] = v2 + } + } + return nil +} + +func adjustValues(configs map[string]*ServiceConfig) { + // yaml parser turns "no" into "false" but that is not valid for a restart policy + for _, v := range configs { + if v.Restart == "false" { + v.Restart = "no" + } + } +} + +func readEnvFile(resourceLookup ResourceLookup, inFile string, serviceData RawService) (RawService, error) { + if _, ok := serviceData["env_file"]; !ok { + return serviceData, nil + } + + var envFiles composeYaml.Stringorslice + + if err := utils.Convert(serviceData["env_file"], &envFiles); err != nil { + return nil, err + } + + if len(envFiles) == 0 { + return serviceData, nil + } + + if resourceLookup == nil { + return nil, fmt.Errorf("Can not use env_file in file %s no mechanism provided to load files", inFile) + } + + var vars composeYaml.MaporEqualSlice + + if _, ok := serviceData["environment"]; ok { + if err := utils.Convert(serviceData["environment"], &vars); err != nil { + return nil, err + } + } + + for i := len(envFiles) - 1; i >= 0; i-- { + envFile := envFiles[i] + content, _, err := resourceLookup.Lookup(envFile, inFile) + if err != nil { + return nil, err + } + + if err != nil { + return nil, err + } + + scanner := bufio.NewScanner(bytes.NewBuffer(content)) + for scanner.Scan() { + line := strings.TrimSpace(scanner.Text()) + + if len(line) > 0 && !strings.HasPrefix(line, "#") { + key := strings.SplitAfter(line, "=")[0] + + found := false + for _, v := range vars { + if strings.HasPrefix(v, key) { + found = true + break + } + } + + if !found { + vars = append(vars, line) + } + } + } + + if scanner.Err() != nil { + return nil, scanner.Err() + } + } + + serviceData["environment"] = vars + + delete(serviceData, "env_file") + + return serviceData, nil +} + +func mergeConfig(baseService, serviceData RawService) RawService { + for k, v := range serviceData { + // Image and build are mutually exclusive in merge + if k == "image" { + delete(baseService, "build") + } else if k == "build" { + delete(baseService, "image") + } + existing, ok := baseService[k] + if ok { + baseService[k] = merge(existing, v) + } else { + baseService[k] = v + } + } + + return baseService +} + +// IsValidRemote checks if the specified string is a valid remote (for builds) +func IsValidRemote(remote string) bool { + return urlutil.IsGitURL(remote) || urlutil.IsURL(remote) +} diff --git a/vendor/github.com/docker/libcompose/config/merge_v1.go b/vendor/github.com/docker/libcompose/config/merge_v1.go new file mode 100644 index 0000000000..dab39144f0 --- /dev/null +++ b/vendor/github.com/docker/libcompose/config/merge_v1.go @@ -0,0 +1,179 @@ +package config + +import ( + "fmt" + "path" + + "github.com/Sirupsen/logrus" + "github.com/docker/libcompose/utils" +) + +// MergeServicesV1 merges a v1 compose file into an existing set of service configs +func 
MergeServicesV1(existingServices *ServiceConfigs, environmentLookup EnvironmentLookup, resourceLookup ResourceLookup, file string, datas RawServiceMap, options *ParseOptions) (map[string]*ServiceConfigV1, error) { + if options.Validate { + if err := validate(datas); err != nil { + return nil, err + } + } + + for name, data := range datas { + data, err := parseV1(resourceLookup, environmentLookup, file, data, datas, options) + if err != nil { + logrus.Errorf("Failed to parse service %s: %v", name, err) + return nil, err + } + + if serviceConfig, ok := existingServices.Get(name); ok { + var rawExistingService RawService + if err := utils.Convert(serviceConfig, &rawExistingService); err != nil { + return nil, err + } + + data = mergeConfig(rawExistingService, data) + } + + datas[name] = data + } + + if options.Validate { + for name, data := range datas { + err := validateServiceConstraints(data, name) + if err != nil { + return nil, err + } + } + } + + serviceConfigs := make(map[string]*ServiceConfigV1) + if err := utils.Convert(datas, &serviceConfigs); err != nil { + return nil, err + } + + return serviceConfigs, nil +} + +func parseV1(resourceLookup ResourceLookup, environmentLookup EnvironmentLookup, inFile string, serviceData RawService, datas RawServiceMap, options *ParseOptions) (RawService, error) { + serviceData, err := readEnvFile(resourceLookup, inFile, serviceData) + if err != nil { + return nil, err + } + + serviceData = resolveContextV1(inFile, serviceData) + + value, ok := serviceData["extends"] + if !ok { + return serviceData, nil + } + + mapValue, ok := value.(map[interface{}]interface{}) + if !ok { + return serviceData, nil + } + + if resourceLookup == nil { + return nil, fmt.Errorf("Can not use extends in file %s no mechanism provided to files", inFile) + } + + file := asString(mapValue["file"]) + service := asString(mapValue["service"]) + + if service == "" { + return serviceData, nil + } + + var baseService RawService + + if file == "" { + if serviceData, ok := datas[service]; ok { + baseService, err = parseV1(resourceLookup, environmentLookup, inFile, serviceData, datas, options) + } else { + return nil, fmt.Errorf("Failed to find service %s to extend", service) + } + } else { + bytes, resolved, err := resourceLookup.Lookup(file, inFile) + if err != nil { + logrus.Errorf("Failed to lookup file %s: %v", file, err) + return nil, err + } + + config, err := CreateConfig(bytes) + if err != nil { + return nil, err + } + baseRawServices := config.Services + + if options.Interpolate { + if err = InterpolateRawServiceMap(&baseRawServices, environmentLookup); err != nil { + return nil, err + } + } + + if options.Preprocess != nil { + var err error + baseRawServices, err = options.Preprocess(baseRawServices) + if err != nil { + return nil, err + } + } + + if options.Validate { + if err := validate(baseRawServices); err != nil { + return nil, err + } + } + + baseService, ok = baseRawServices[service] + if !ok { + return nil, fmt.Errorf("Failed to find service %s in file %s", service, file) + } + + baseService, err = parseV1(resourceLookup, environmentLookup, resolved, baseService, baseRawServices, options) + } + + if err != nil { + return nil, err + } + + baseService = clone(baseService) + + logrus.Debugf("Merging %#v, %#v", baseService, serviceData) + + for _, k := range noMerge { + if _, ok := baseService[k]; ok { + source := file + if source == "" { + source = inFile + } + return nil, fmt.Errorf("Cannot extend service '%s' in %s: services with '%s' cannot be extended", service, 
source, k) + } + } + + baseService = mergeConfig(baseService, serviceData) + + logrus.Debugf("Merged result %#v", baseService) + + return baseService, nil +} + +func resolveContextV1(inFile string, serviceData RawService) RawService { + context := asString(serviceData["build"]) + if context == "" { + return serviceData + } + + if IsValidRemote(context) { + return serviceData + } + + current := path.Dir(inFile) + + if context == "." { + context = current + } else { + current = path.Join(current, context) + } + + serviceData["build"] = current + + return serviceData +} diff --git a/vendor/github.com/docker/libcompose/config/merge_v2.go b/vendor/github.com/docker/libcompose/config/merge_v2.go new file mode 100644 index 0000000000..1f7bbd341c --- /dev/null +++ b/vendor/github.com/docker/libcompose/config/merge_v2.go @@ -0,0 +1,187 @@ +package config + +import ( + "fmt" + "path" + "strings" + + "github.com/Sirupsen/logrus" + "github.com/docker/libcompose/utils" +) + +// MergeServicesV2 merges a v2 compose file into an existing set of service configs +func MergeServicesV2(existingServices *ServiceConfigs, environmentLookup EnvironmentLookup, resourceLookup ResourceLookup, file string, datas RawServiceMap, options *ParseOptions) (map[string]*ServiceConfig, error) { + if options.Validate { + if err := validateV2(datas); err != nil { + return nil, err + } + } + + for name, data := range datas { + data, err := parseV2(resourceLookup, environmentLookup, file, data, datas, options) + if err != nil { + logrus.Errorf("Failed to parse service %s: %v", name, err) + return nil, err + } + + if serviceConfig, ok := existingServices.Get(name); ok { + var rawExistingService RawService + if err := utils.Convert(serviceConfig, &rawExistingService); err != nil { + return nil, err + } + + data = mergeConfig(rawExistingService, data) + } + + datas[name] = data + } + + if options.Validate { + var errs []string + for name, data := range datas { + err := validateServiceConstraintsv2(data, name) + if err != nil { + errs = append(errs, err.Error()) + } + } + if len(errs) != 0 { + return nil, fmt.Errorf(strings.Join(errs, "\n")) + } + } + + serviceConfigs := make(map[string]*ServiceConfig) + if err := utils.Convert(datas, &serviceConfigs); err != nil { + return nil, err + } + + return serviceConfigs, nil +} + +func parseV2(resourceLookup ResourceLookup, environmentLookup EnvironmentLookup, inFile string, serviceData RawService, datas RawServiceMap, options *ParseOptions) (RawService, error) { + serviceData, err := readEnvFile(resourceLookup, inFile, serviceData) + if err != nil { + return nil, err + } + + serviceData = resolveContextV2(inFile, serviceData) + + value, ok := serviceData["extends"] + if !ok { + return serviceData, nil + } + + mapValue, ok := value.(map[interface{}]interface{}) + if !ok { + return serviceData, nil + } + + if resourceLookup == nil { + return nil, fmt.Errorf("Can not use extends in file %s no mechanism provided to files", inFile) + } + + file := asString(mapValue["file"]) + service := asString(mapValue["service"]) + + if service == "" { + return serviceData, nil + } + + var baseService RawService + + if file == "" { + if serviceData, ok := datas[service]; ok { + baseService, err = parseV2(resourceLookup, environmentLookup, inFile, serviceData, datas, options) + } else { + return nil, fmt.Errorf("Failed to find service %s to extend", service) + } + } else { + bytes, resolved, err := resourceLookup.Lookup(file, inFile) + if err != nil { + logrus.Errorf("Failed to lookup file %s: %v", file, err) 
+ return nil, err + } + + config, err := CreateConfig(bytes) + if err != nil { + return nil, err + } + baseRawServices := config.Services + + if options.Interpolate { + if err = InterpolateRawServiceMap(&baseRawServices, environmentLookup); err != nil { + return nil, err + } + } + + if options.Validate { + if err := validate(baseRawServices); err != nil { + return nil, err + } + } + + baseService, ok = baseRawServices[service] + if !ok { + return nil, fmt.Errorf("Failed to find service %s in file %s", service, file) + } + + baseService, err = parseV2(resourceLookup, environmentLookup, resolved, baseService, baseRawServices, options) + } + + if err != nil { + return nil, err + } + + baseService = clone(baseService) + + logrus.Debugf("Merging %#v, %#v", baseService, serviceData) + + for _, k := range noMerge { + if _, ok := baseService[k]; ok { + source := file + if source == "" { + source = inFile + } + return nil, fmt.Errorf("Cannot extend service '%s' in %s: services with '%s' cannot be extended", service, source, k) + } + } + + baseService = mergeConfig(baseService, serviceData) + + logrus.Debugf("Merged result %#v", baseService) + + return baseService, nil +} + +func resolveContextV2(inFile string, serviceData RawService) RawService { + if _, ok := serviceData["build"]; !ok { + return serviceData + } + var build map[interface{}]interface{} + if buildAsString, ok := serviceData["build"].(string); ok { + build = map[interface{}]interface{}{ + "context": buildAsString, + } + } else { + build = serviceData["build"].(map[interface{}]interface{}) + } + context := asString(build["context"]) + if context == "" { + return serviceData + } + + if IsValidRemote(context) { + return serviceData + } + + current := path.Dir(inFile) + + if context == "." { + context = current + } else { + current = path.Join(current, context) + } + + build["context"] = current + + return serviceData +} diff --git a/vendor/github.com/docker/libcompose/config/schema.go b/vendor/github.com/docker/libcompose/config/schema.go new file mode 100644 index 0000000000..bf88912935 --- /dev/null +++ b/vendor/github.com/docker/libcompose/config/schema.go @@ -0,0 +1,481 @@ +package config + +var schemaDataV1 = `{ + "$schema": "http://json-schema.org/draft-04/schema#", + "id": "config_schema_v1.json", + + "type": "object", + + "patternProperties": { + "^[a-zA-Z0-9._-]+$": { + "$ref": "#/definitions/service" + } + }, + + "additionalProperties": false, + + "definitions": { + "service": { + "id": "#/definitions/service", + "type": "object", + + "properties": { + "build": {"type": "string"}, + "cap_add": {"type": "array", "items": {"type": "string"}, "uniqueItems": true}, + "cap_drop": {"type": "array", "items": {"type": "string"}, "uniqueItems": true}, + "cgroup_parent": {"type": "string"}, + "command": { + "oneOf": [ + {"type": "string"}, + {"type": "array", "items": {"type": "string"}} + ] + }, + "container_name": {"type": "string"}, + "cpu_shares": {"type": ["number", "string"]}, + "cpu_quota": {"type": ["number", "string"]}, + "cpuset": {"type": "string"}, + "devices": {"type": "array", "items": {"type": "string"}, "uniqueItems": true}, + "dns": {"$ref": "#/definitions/string_or_list"}, + "dns_search": {"$ref": "#/definitions/string_or_list"}, + "dockerfile": {"type": "string"}, + "domainname": {"type": "string"}, + "entrypoint": { + "oneOf": [ + {"type": "string"}, + {"type": "array", "items": {"type": "string"}} + ] + }, + "env_file": {"$ref": "#/definitions/string_or_list"}, + "environment": {"$ref": "#/definitions/list_or_dict"}, 
+ + "expose": { + "type": "array", + "items": { + "type": ["string", "number"], + "format": "expose" + }, + "uniqueItems": true + }, + + "extends": { + "oneOf": [ + { + "type": "string" + }, + { + "type": "object", + + "properties": { + "service": {"type": "string"}, + "file": {"type": "string"} + }, + "required": ["service"], + "additionalProperties": false + } + ] + }, + + "extra_hosts": {"$ref": "#/definitions/list_or_dict"}, + "external_links": {"type": "array", "items": {"type": "string"}, "uniqueItems": true}, + "hostname": {"type": "string"}, + "image": {"type": "string"}, + "ipc": {"type": "string"}, + "labels": {"$ref": "#/definitions/list_or_dict"}, + "links": {"type": "array", "items": {"type": "string"}, "uniqueItems": true}, + "log_driver": {"type": "string"}, + "log_opt": {"type": "object"}, + "mac_address": {"type": "string"}, + "mem_limit": {"type": ["number", "string"]}, + "memswap_limit": {"type": ["number", "string"]}, + "mem_swappiness": {"type": "integer"}, + "net": {"type": "string"}, + "pid": {"type": ["string", "null"]}, + + "ports": { + "type": "array", + "items": { + "type": ["string", "number"], + "format": "ports" + }, + "uniqueItems": true + }, + + "privileged": {"type": "boolean"}, + "read_only": {"type": "boolean"}, + "restart": {"type": "string"}, + "security_opt": {"type": "array", "items": {"type": "string"}, "uniqueItems": true}, + "shm_size": {"type": ["number", "string"]}, + "stdin_open": {"type": "boolean"}, + "stop_signal": {"type": "string"}, + "tty": {"type": "boolean"}, + "ulimits": { + "type": "object", + "patternProperties": { + "^[a-z]+$": { + "oneOf": [ + {"type": "integer"}, + { + "type":"object", + "properties": { + "hard": {"type": "integer"}, + "soft": {"type": "integer"} + }, + "required": ["soft", "hard"], + "additionalProperties": false + } + ] + } + } + }, + "user": {"type": "string"}, + "volumes": {"type": "array", "items": {"type": "string"}, "uniqueItems": true}, + "volume_driver": {"type": "string"}, + "volumes_from": {"type": "array", "items": {"type": "string"}, "uniqueItems": true}, + "working_dir": {"type": "string"} + }, + + "dependencies": { + "memswap_limit": ["mem_limit"] + }, + "additionalProperties": false + }, + + "string_or_list": { + "oneOf": [ + {"type": "string"}, + {"$ref": "#/definitions/list_of_strings"} + ] + }, + + "list_of_strings": { + "type": "array", + "items": {"type": "string"}, + "uniqueItems": true + }, + + "list_or_dict": { + "oneOf": [ + { + "type": "object", + "patternProperties": { + ".+": { + "type": ["string", "number", "null"] + } + }, + "additionalProperties": false + }, + {"type": "array", "items": {"type": "string"}, "uniqueItems": true} + ] + }, + + "constraints": { + "service": { + "id": "#/definitions/constraints/service", + "anyOf": [ + { + "required": ["build"], + "not": {"required": ["image"]} + }, + { + "required": ["image"], + "not": {"anyOf": [ + {"required": ["build"]}, + {"required": ["dockerfile"]} + ]} + } + ] + } + } + } +} +` + +var servicesSchemaDataV2 = `{ + "$schema": "http://json-schema.org/draft-04/schema#", + "id": "config_schema_v2.0.json", + "type": "object", + + "patternProperties": { + "^[a-zA-Z0-9._-]+$": { + "$ref": "#/definitions/service" + } + }, + + "additionalProperties": false, + + "definitions": { + + "service": { + "id": "#/definitions/service", + "type": "object", + + "properties": { + "build": { + "oneOf": [ + {"type": "string"}, + { + "type": "object", + "properties": { + "context": {"type": "string"}, + "dockerfile": {"type": "string"}, + "args": {"$ref": 
"#/definitions/list_or_dict"} + }, + "additionalProperties": false + } + ] + }, + "cap_add": {"type": "array", "items": {"type": "string"}, "uniqueItems": true}, + "cap_drop": {"type": "array", "items": {"type": "string"}, "uniqueItems": true}, + "cgroup_parent": {"type": "string"}, + "command": { + "oneOf": [ + {"type": "string"}, + {"type": "array", "items": {"type": "string"}} + ] + }, + "container_name": {"type": "string"}, + "cpu_shares": {"type": ["number", "string"]}, + "cpu_quota": {"type": ["number", "string"]}, + "cpuset": {"type": "string"}, + "depends_on": {"$ref": "#/definitions/list_of_strings"}, + "devices": {"type": "array", "items": {"type": "string"}, "uniqueItems": true}, + "dns": {"$ref": "#/definitions/string_or_list"}, + "dns_search": {"$ref": "#/definitions/string_or_list"}, + "domainname": {"type": "string"}, + "entrypoint": { + "oneOf": [ + {"type": "string"}, + {"type": "array", "items": {"type": "string"}} + ] + }, + "env_file": {"$ref": "#/definitions/string_or_list"}, + "environment": {"$ref": "#/definitions/list_or_dict"}, + + "expose": { + "type": "array", + "items": { + "type": ["string", "number"], + "format": "expose" + }, + "uniqueItems": true + }, + + "extends": { + "oneOf": [ + { + "type": "string" + }, + { + "type": "object", + + "properties": { + "service": {"type": "string"}, + "file": {"type": "string"} + }, + "required": ["service"], + "additionalProperties": false + } + ] + }, + + "external_links": {"type": "array", "items": {"type": "string"}, "uniqueItems": true}, + "extra_hosts": {"$ref": "#/definitions/list_or_dict"}, + "hostname": {"type": "string"}, + "image": {"type": "string"}, + "ipc": {"type": "string"}, + "labels": {"$ref": "#/definitions/list_or_dict"}, + "links": {"type": "array", "items": {"type": "string"}, "uniqueItems": true}, + + "logging": { + "type": "object", + + "properties": { + "driver": {"type": "string"}, + "options": {"type": "object"} + }, + "additionalProperties": false + }, + + "mac_address": {"type": "string"}, + "mem_limit": {"type": ["number", "string"]}, + "memswap_limit": {"type": ["number", "string"]}, + "mem_swappiness": {"type": "integer"}, + "network_mode": {"type": "string"}, + + "networks": { + "oneOf": [ + {"$ref": "#/definitions/list_of_strings"}, + { + "type": "object", + "patternProperties": { + "^[a-zA-Z0-9._-]+$": { + "oneOf": [ + { + "type": "object", + "properties": { + "aliases": {"$ref": "#/definitions/list_of_strings"}, + "ipv4_address": {"type": "string"}, + "ipv6_address": {"type": "string"} + }, + "additionalProperties": false + }, + {"type": "null"} + ] + } + }, + "additionalProperties": false + } + ] + }, + "oom_score_adj": {"type": "integer", "minimum": -1000, "maximum": 1000}, + "pid": {"type": ["string", "null"]}, + + "ports": { + "type": "array", + "items": { + "type": ["string", "number"], + "format": "ports" + }, + "uniqueItems": true + }, + + "privileged": {"type": "boolean"}, + "read_only": {"type": "boolean"}, + "restart": {"type": "string"}, + "security_opt": {"type": "array", "items": {"type": "string"}, "uniqueItems": true}, + "shm_size": {"type": ["number", "string"]}, + "stdin_open": {"type": "boolean"}, + "stop_signal": {"type": "string"}, + "tmpfs": {"$ref": "#/definitions/string_or_list"}, + "tty": {"type": "boolean"}, + "ulimits": { + "type": "object", + "patternProperties": { + "^[a-z]+$": { + "oneOf": [ + {"type": "integer"}, + { + "type":"object", + "properties": { + "hard": {"type": "integer"}, + "soft": {"type": "integer"} + }, + "required": ["soft", "hard"], + 
"additionalProperties": false + } + ] + } + } + }, + "user": {"type": "string"}, + "volumes": {"type": "array", "items": {"type": "string"}, "uniqueItems": true}, + "volume_driver": {"type": "string"}, + "volumes_from": {"type": "array", "items": {"type": "string"}, "uniqueItems": true}, + "working_dir": {"type": "string"} + }, + + "dependencies": { + "memswap_limit": ["mem_limit"] + }, + "additionalProperties": false + }, + + "network": { + "id": "#/definitions/network", + "type": "object", + "properties": { + "driver": {"type": "string"}, + "driver_opts": { + "type": "object", + "patternProperties": { + "^.+$": {"type": ["string", "number"]} + } + }, + "ipam": { + "type": "object", + "properties": { + "driver": {"type": "string"}, + "config": { + "type": "array" + } + }, + "additionalProperties": false + }, + "external": { + "type": ["boolean", "object"], + "properties": { + "name": {"type": "string"} + }, + "additionalProperties": false + }, + "internal": {"type": "boolean"} + }, + "additionalProperties": false + }, + + "volume": { + "id": "#/definitions/volume", + "type": ["object", "null"], + "properties": { + "driver": {"type": "string"}, + "driver_opts": { + "type": "object", + "patternProperties": { + "^.+$": {"type": ["string", "number"]} + } + }, + "external": { + "type": ["boolean", "object"], + "properties": { + "name": {"type": "string"} + } + } + }, + "additionalProperties": false + }, + + "string_or_list": { + "oneOf": [ + {"type": "string"}, + {"$ref": "#/definitions/list_of_strings"} + ] + }, + + "list_of_strings": { + "type": "array", + "items": {"type": "string"}, + "uniqueItems": true + }, + + "list_or_dict": { + "oneOf": [ + { + "type": "object", + "patternProperties": { + ".+": { + "type": ["string", "number", "null"] + } + }, + "additionalProperties": false + }, + {"type": "array", "items": {"type": "string"}, "uniqueItems": true} + ] + }, + + "constraints": { + "service": { + "id": "#/definitions/constraints/service", + "anyOf": [ + {"required": ["build"]}, + {"required": ["image"]} + ], + "properties": { + "build": { + "required": ["context"] + } + } + } + } + } +} +` diff --git a/vendor/github.com/docker/libcompose/config/schema_helpers.go b/vendor/github.com/docker/libcompose/config/schema_helpers.go new file mode 100644 index 0000000000..550e4cacfa --- /dev/null +++ b/vendor/github.com/docker/libcompose/config/schema_helpers.go @@ -0,0 +1,95 @@ +package config + +import ( + "encoding/json" + "strings" + + "github.com/docker/go-connections/nat" + "github.com/xeipuuv/gojsonschema" +) + +var ( + schemaLoaderV1 gojsonschema.JSONLoader + constraintSchemaLoaderV1 gojsonschema.JSONLoader + schemaLoaderV2 gojsonschema.JSONLoader + constraintSchemaLoaderV2 gojsonschema.JSONLoader + schemaV1 map[string]interface{} + schemaV2 map[string]interface{} +) + +type ( + environmentFormatChecker struct{} + portsFormatChecker struct{} +) + +func (checker environmentFormatChecker) IsFormat(input string) bool { + // If the value is a boolean, a warning should be given + // However, we can't determine type since gojsonschema converts the value to a string + // Adding a function with an interface{} parameter to gojsonschema is probably the best way to handle this + return true +} + +func (checker portsFormatChecker) IsFormat(input string) bool { + _, _, err := nat.ParsePortSpecs([]string{input}) + return err == nil +} + +func setupSchemaLoaders(schemaData string, schema *map[string]interface{}, schemaLoader, constraintSchemaLoader *gojsonschema.JSONLoader) error { + if *schema != nil { 
+		return nil
+	}
+
+	var schemaRaw interface{}
+	err := json.Unmarshal([]byte(schemaData), &schemaRaw)
+	if err != nil {
+		return err
+	}
+
+	*schema = schemaRaw.(map[string]interface{})
+
+	gojsonschema.FormatCheckers.Add("environment", environmentFormatChecker{})
+	gojsonschema.FormatCheckers.Add("ports", portsFormatChecker{})
+	gojsonschema.FormatCheckers.Add("expose", portsFormatChecker{})
+	*schemaLoader = gojsonschema.NewGoLoader(schemaRaw)
+
+	definitions := (*schema)["definitions"].(map[string]interface{})
+	constraints := definitions["constraints"].(map[string]interface{})
+	service := constraints["service"].(map[string]interface{})
+	*constraintSchemaLoader = gojsonschema.NewGoLoader(service)
+
+	return nil
+}
+
+// parseValidTypesFromSchema works around the fact that gojsonschema doesn't
+// provide the list of valid types for a property: it walks the schema
+// manually to find all valid types.
+func parseValidTypesFromSchema(schema map[string]interface{}, context string) []string {
+	contextSplit := strings.Split(context, ".")
+	key := contextSplit[len(contextSplit)-1]
+
+	definitions := schema["definitions"].(map[string]interface{})
+	service := definitions["service"].(map[string]interface{})
+	properties := service["properties"].(map[string]interface{})
+	property := properties[key].(map[string]interface{})
+
+	var validTypes []string
+
+	if val, ok := property["oneOf"]; ok {
+		validConditions := val.([]interface{})
+
+		for _, validCondition := range validConditions {
+			condition := validCondition.(map[string]interface{})
+			validTypes = append(validTypes, condition["type"].(string))
+		}
+	} else if val, ok := property["$ref"]; ok {
+		reference := val.(string)
+		if reference == "#/definitions/string_or_list" {
+			return []string{"string", "array"}
+		} else if reference == "#/definitions/list_of_strings" {
+			return []string{"array"}
+		} else if reference == "#/definitions/list_or_dict" {
+			return []string{"array", "object"}
+		}
+	}
+
+	return validTypes
+}
diff --git a/vendor/github.com/docker/libcompose/config/types.go b/vendor/github.com/docker/libcompose/config/types.go
new file mode 100644
index 0000000000..16642c025a
--- /dev/null
+++ b/vendor/github.com/docker/libcompose/config/types.go
@@ -0,0 +1,263 @@
+package config
+
+import (
+	"sync"
+
+	"github.com/docker/libcompose/yaml"
+)
+
+// EnvironmentLookup defines methods to provide environment variable loading.
+type EnvironmentLookup interface {
+	Lookup(key string, config *ServiceConfig) []string
+}
+
+// ResourceLookup defines methods to provide file loading.
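+//
+// As an illustrative sketch only (FileLookup is a hypothetical name, not part
+// of this package), a file-system backed implementation could look like:
+//
+//	type FileLookup struct{}
+//
+//	func (f FileLookup) Lookup(file, relativeTo string) ([]byte, string, error) {
+//		path := filepath.Join(filepath.Dir(relativeTo), file)
+//		bytes, err := ioutil.ReadFile(path)
+//		return bytes, path, err
+//	}
+//
+//	func (f FileLookup) ResolvePath(path, inFile string) string {
+//		return filepath.Join(filepath.Dir(inFile), path)
+//	}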
+type ResourceLookup interface { + Lookup(file, relativeTo string) ([]byte, string, error) + ResolvePath(path, inFile string) string +} + +// ServiceConfigV1 holds version 1 of libcompose service configuration +type ServiceConfigV1 struct { + Build string `yaml:"build,omitempty"` + CapAdd []string `yaml:"cap_add,omitempty"` + CapDrop []string `yaml:"cap_drop,omitempty"` + CgroupParent string `yaml:"cgroup_parent,omitempty"` + CPUQuota yaml.StringorInt `yaml:"cpu_quota,omitempty"` + CPUSet string `yaml:"cpuset,omitempty"` + CPUShares yaml.StringorInt `yaml:"cpu_shares,omitempty"` + Command yaml.Command `yaml:"command,flow,omitempty"` + ContainerName string `yaml:"container_name,omitempty"` + Devices []string `yaml:"devices,omitempty"` + DNS yaml.Stringorslice `yaml:"dns,omitempty"` + DNSOpts []string `yaml:"dns_opt,omitempty"` + DNSSearch yaml.Stringorslice `yaml:"dns_search,omitempty"` + Dockerfile string `yaml:"dockerfile,omitempty"` + DomainName string `yaml:"domainname,omitempty"` + Entrypoint yaml.Command `yaml:"entrypoint,flow,omitempty"` + EnvFile yaml.Stringorslice `yaml:"env_file,omitempty"` + Environment yaml.MaporEqualSlice `yaml:"environment,omitempty"` + GroupAdd []string `yaml:"group_add,omitempty"` + Hostname string `yaml:"hostname,omitempty"` + Image string `yaml:"image,omitempty"` + Isolation string `yaml:"isolation,omitempty"` + Labels yaml.SliceorMap `yaml:"labels,omitempty"` + Links yaml.MaporColonSlice `yaml:"links,omitempty"` + LogDriver string `yaml:"log_driver,omitempty"` + MacAddress string `yaml:"mac_address,omitempty"` + MemLimit yaml.MemStringorInt `yaml:"mem_limit,omitempty"` + MemSwapLimit yaml.MemStringorInt `yaml:"memswap_limit,omitempty"` + MemSwappiness yaml.MemStringorInt `yaml:"mem_swappiness,omitempty"` + Name string `yaml:"name,omitempty"` + Net string `yaml:"net,omitempty"` + OomKillDisable bool `yaml:"oom_kill_disable,omitempty"` + OomScoreAdj yaml.StringorInt `yaml:"oom_score_adj,omitempty"` + Pid string `yaml:"pid,omitempty"` + Uts string `yaml:"uts,omitempty"` + Ipc string `yaml:"ipc,omitempty"` + Ports []string `yaml:"ports,omitempty"` + Privileged bool `yaml:"privileged,omitempty"` + Restart string `yaml:"restart,omitempty"` + ReadOnly bool `yaml:"read_only,omitempty"` + ShmSize yaml.MemStringorInt `yaml:"shm_size,omitempty"` + StdinOpen bool `yaml:"stdin_open,omitempty"` + SecurityOpt []string `yaml:"security_opt,omitempty"` + StopSignal string `yaml:"stop_signal,omitempty"` + Tmpfs yaml.Stringorslice `yaml:"tmpfs,omitempty"` + Tty bool `yaml:"tty,omitempty"` + User string `yaml:"user,omitempty"` + VolumeDriver string `yaml:"volume_driver,omitempty"` + Volumes []string `yaml:"volumes,omitempty"` + VolumesFrom []string `yaml:"volumes_from,omitempty"` + WorkingDir string `yaml:"working_dir,omitempty"` + Expose []string `yaml:"expose,omitempty"` + ExternalLinks []string `yaml:"external_links,omitempty"` + LogOpt map[string]string `yaml:"log_opt,omitempty"` + ExtraHosts []string `yaml:"extra_hosts,omitempty"` + Ulimits yaml.Ulimits `yaml:"ulimits,omitempty"` +} + +// Log holds v2 logging information +type Log struct { + Driver string `yaml:"driver,omitempty"` + Options map[string]string `yaml:"options,omitempty"` +} + +// ServiceConfig holds version 2 of libcompose service configuration +type ServiceConfig struct { + Build yaml.Build `yaml:"build,omitempty"` + CapAdd []string `yaml:"cap_add,omitempty"` + CapDrop []string `yaml:"cap_drop,omitempty"` + CPUSet string `yaml:"cpuset,omitempty"` + CPUShares yaml.StringorInt `yaml:"cpu_shares,omitempty"` + 
CPUQuota yaml.StringorInt `yaml:"cpu_quota,omitempty"` + Command yaml.Command `yaml:"command,flow,omitempty"` + CgroupParent string `yaml:"cgroup_parent,omitempty"` + ContainerName string `yaml:"container_name,omitempty"` + Devices []string `yaml:"devices,omitempty"` + DependsOn []string `yaml:"depends_on,omitempty"` + DNS yaml.Stringorslice `yaml:"dns,omitempty"` + DNSOpts []string `yaml:"dns_opt,omitempty"` + DNSSearch yaml.Stringorslice `yaml:"dns_search,omitempty"` + DomainName string `yaml:"domainname,omitempty"` + Entrypoint yaml.Command `yaml:"entrypoint,flow,omitempty"` + EnvFile yaml.Stringorslice `yaml:"env_file,omitempty"` + Environment yaml.MaporEqualSlice `yaml:"environment,omitempty"` + Expose []string `yaml:"expose,omitempty"` + Extends yaml.MaporEqualSlice `yaml:"extends,omitempty"` + ExternalLinks []string `yaml:"external_links,omitempty"` + ExtraHosts []string `yaml:"extra_hosts,omitempty"` + GroupAdd []string `yaml:"group_add,omitempty"` + Image string `yaml:"image,omitempty"` + Isolation string `yaml:"isolation,omitempty"` + Hostname string `yaml:"hostname,omitempty"` + Ipc string `yaml:"ipc,omitempty"` + Labels yaml.SliceorMap `yaml:"labels,omitempty"` + Links yaml.MaporColonSlice `yaml:"links,omitempty"` + Logging Log `yaml:"logging,omitempty"` + MacAddress string `yaml:"mac_address,omitempty"` + MemLimit yaml.MemStringorInt `yaml:"mem_limit,omitempty"` + MemSwapLimit yaml.MemStringorInt `yaml:"memswap_limit,omitempty"` + MemSwappiness yaml.MemStringorInt `yaml:"mem_swappiness,omitempty"` + NetworkMode string `yaml:"network_mode,omitempty"` + Networks *yaml.Networks `yaml:"networks,omitempty"` + OomKillDisable bool `yaml:"oom_kill_disable,omitempty"` + OomScoreAdj yaml.StringorInt `yaml:"oom_score_adj,omitempty"` + Pid string `yaml:"pid,omitempty"` + Ports []string `yaml:"ports,omitempty"` + Privileged bool `yaml:"privileged,omitempty"` + SecurityOpt []string `yaml:"security_opt,omitempty"` + ShmSize yaml.MemStringorInt `yaml:"shm_size,omitempty"` + StopSignal string `yaml:"stop_signal,omitempty"` + Tmpfs yaml.Stringorslice `yaml:"tmpfs,omitempty"` + VolumeDriver string `yaml:"volume_driver,omitempty"` + Volumes *yaml.Volumes `yaml:"volumes,omitempty"` + VolumesFrom []string `yaml:"volumes_from,omitempty"` + Uts string `yaml:"uts,omitempty"` + Restart string `yaml:"restart,omitempty"` + ReadOnly bool `yaml:"read_only,omitempty"` + StdinOpen bool `yaml:"stdin_open,omitempty"` + Tty bool `yaml:"tty,omitempty"` + User string `yaml:"user,omitempty"` + WorkingDir string `yaml:"working_dir,omitempty"` + Ulimits yaml.Ulimits `yaml:"ulimits,omitempty"` +} + +// VolumeConfig holds v2 volume configuration +type VolumeConfig struct { + Driver string `yaml:"driver,omitempty"` + DriverOpts map[string]string `yaml:"driver_opts,omitempty"` + External yaml.External `yaml:"external,omitempty"` +} + +// Ipam holds v2 network IPAM information +type Ipam struct { + Driver string `yaml:"driver,omitempty"` + Config []IpamConfig `yaml:"config,omitempty"` +} + +// IpamConfig holds v2 network IPAM configuration information +type IpamConfig struct { + Subnet string `yaml:"subnet,omitempty"` + IPRange string `yaml:"ip_range,omitempty"` + Gateway string `yaml:"gateway,omitempty"` + AuxAddress map[string]string `yaml:"aux_addresses,omitempty"` +} + +// NetworkConfig holds v2 network configuration +type NetworkConfig struct { + Driver string `yaml:"driver,omitempty"` + DriverOpts map[string]string `yaml:"driver_opts,omitempty"` + External yaml.External `yaml:"external,omitempty"` + Ipam Ipam 
`yaml:"ipam,omitempty"`
+}
+
+// Config holds libcompose top level configuration
+type Config struct {
+	Version  string                 `yaml:"version,omitempty"`
+	Services RawServiceMap          `yaml:"services,omitempty"`
+	Volumes  map[string]interface{} `yaml:"volumes,omitempty"`
+	Networks map[string]interface{} `yaml:"networks,omitempty"`
+}
+
+// NewServiceConfigs initializes a new ServiceConfigs struct
+func NewServiceConfigs() *ServiceConfigs {
+	return &ServiceConfigs{
+		m: make(map[string]*ServiceConfig),
+	}
+}
+
+// ServiceConfigs holds a concurrency-safe map of ServiceConfig
+type ServiceConfigs struct {
+	m  map[string]*ServiceConfig
+	mu sync.RWMutex
+}
+
+// Has checks if the config map has the specified name
+func (c *ServiceConfigs) Has(name string) bool {
+	c.mu.RLock()
+	defer c.mu.RUnlock()
+	_, ok := c.m[name]
+	return ok
+}
+
+// Get returns the config and the presence of the specified name
+func (c *ServiceConfigs) Get(name string) (*ServiceConfig, bool) {
+	c.mu.RLock()
+	defer c.mu.RUnlock()
+	service, ok := c.m[name]
+	return service, ok
+}
+
+// Add adds the specified config under the specified name
+func (c *ServiceConfigs) Add(name string, service *ServiceConfig) {
+	c.mu.Lock()
+	c.m[name] = service
+	c.mu.Unlock()
+}
+
+// Remove removes the config with the specified name
+func (c *ServiceConfigs) Remove(name string) {
+	c.mu.Lock()
+	delete(c.m, name)
+	c.mu.Unlock()
+}
+
+// Len returns the number of configs
+func (c *ServiceConfigs) Len() int {
+	c.mu.RLock()
+	defer c.mu.RUnlock()
+	return len(c.m)
+}
+
+// Keys returns the names of all configs
+func (c *ServiceConfigs) Keys() []string {
+	keys := []string{}
+	c.mu.RLock()
+	defer c.mu.RUnlock()
+	for name := range c.m {
+		keys = append(keys, name)
+	}
+	return keys
+}
+
+// All returns all the configs at once
+func (c *ServiceConfigs) All() map[string]*ServiceConfig {
+	c.mu.RLock()
+	defer c.mu.RUnlock()
+	return c.m
+}
+
+// RawService represents an unparsed service in map form
+type RawService map[string]interface{}
+
+// RawServiceMap is a collection of RawServices
+type RawServiceMap map[string]RawService
+
+// ParseOptions are a set of options to customize the parsing process
+type ParseOptions struct {
+	Interpolate bool
+	Validate    bool
+	Preprocess  func(RawServiceMap) (RawServiceMap, error)
+	Postprocess func(map[string]*ServiceConfig) (map[string]*ServiceConfig, error)
+}
diff --git a/vendor/github.com/docker/libcompose/config/utils.go b/vendor/github.com/docker/libcompose/config/utils.go
new file mode 100644
index 0000000000..ae9b86cf91
--- /dev/null
+++ b/vendor/github.com/docker/libcompose/config/utils.go
@@ -0,0 +1,42 @@
+package config
+
+func merge(existing, value interface{}) interface{} {
+	// append slices
+	if left, lok := existing.([]interface{}); lok {
+		if right, rok := value.([]interface{}); rok {
+			return append(left, right...)
+ } + } + + //merge maps + if left, lok := existing.(map[interface{}]interface{}); lok { + if right, rok := value.(map[interface{}]interface{}); rok { + newLeft := make(map[interface{}]interface{}) + for k, v := range left { + newLeft[k] = v + } + for k, v := range right { + newLeft[k] = v + } + return newLeft + } + } + + return value +} + +func clone(in RawService) RawService { + result := RawService{} + for k, v := range in { + result[k] = v + } + + return result +} + +func asString(obj interface{}) string { + if v, ok := obj.(string); ok { + return v + } + return "" +} diff --git a/vendor/github.com/docker/libcompose/config/validation.go b/vendor/github.com/docker/libcompose/config/validation.go new file mode 100644 index 0000000000..a459c1ed2c --- /dev/null +++ b/vendor/github.com/docker/libcompose/config/validation.go @@ -0,0 +1,322 @@ +package config + +import ( + "fmt" + "strconv" + "strings" + + "github.com/docker/libcompose/utils" + "github.com/xeipuuv/gojsonschema" +) + +func serviceNameFromErrorField(field string) string { + splitKeys := strings.Split(field, ".") + return splitKeys[0] +} + +func keyNameFromErrorField(field string) string { + splitKeys := strings.Split(field, ".") + + if len(splitKeys) > 0 { + return splitKeys[len(splitKeys)-1] + } + + return "" +} + +func containsTypeError(resultError gojsonschema.ResultError) bool { + contextSplit := strings.Split(resultError.Context().String(), ".") + _, err := strconv.Atoi(contextSplit[len(contextSplit)-1]) + return err == nil +} + +func addArticle(s string) string { + switch s[0] { + case 'a', 'e', 'i', 'o', 'u', 'A', 'E', 'I', 'O', 'U': + return "an " + s + default: + return "a " + s + } +} + +// Gets the value in a service map at a given error context +func getValue(val interface{}, context string) string { + keys := strings.Split(context, ".") + + if keys[0] == "(root)" { + keys = keys[1:] + } + + for i, k := range keys { + switch typedVal := (val).(type) { + case string: + return typedVal + case []interface{}: + if index, err := strconv.Atoi(k); err == nil { + val = typedVal[index] + } + case RawServiceMap: + val = typedVal[k] + case RawService: + val = typedVal[k] + case map[interface{}]interface{}: + val = typedVal[k] + } + + if i == len(keys)-1 { + return fmt.Sprint(val) + } + } + + return "" +} + +func convertServiceMapKeysToStrings(serviceMap RawServiceMap) RawServiceMap { + newServiceMap := make(RawServiceMap) + for k, v := range serviceMap { + newServiceMap[k] = convertServiceKeysToStrings(v) + } + return newServiceMap +} + +func convertServiceKeysToStrings(service RawService) RawService { + newService := make(RawService) + for k, v := range service { + newService[k] = utils.ConvertKeysToStrings(v) + } + return newService +} + +var dockerConfigHints = map[string]string{ + "cpu_share": "cpu_shares", + "add_host": "extra_hosts", + "hosts": "extra_hosts", + "extra_host": "extra_hosts", + "device": "devices", + "link": "links", + "memory_swap": "memswap_limit", + "port": "ports", + "privilege": "privileged", + "priviliged": "privileged", + "privilige": "privileged", + "volume": "volumes", + "workdir": "working_dir", +} + +func unsupportedConfigMessage(key string, nextErr gojsonschema.ResultError) string { + service := serviceNameFromErrorField(nextErr.Field()) + + message := fmt.Sprintf("Unsupported config option for %s service: '%s'", service, key) + if val, ok := dockerConfigHints[key]; ok { + message += fmt.Sprintf(" (did you mean '%s'?)", val) + } + + return message +} + +func oneOfMessage(serviceMap 
RawServiceMap, schema map[string]interface{}, err, nextErr gojsonschema.ResultError) string { + switch nextErr.Type() { + case "additional_property_not_allowed": + property := nextErr.Details()["property"] + + return fmt.Sprintf("contains unsupported option: '%s'", property) + case "invalid_type": + if containsTypeError(nextErr) { + expectedType := addArticle(nextErr.Details()["expected"].(string)) + + return fmt.Sprintf("contains %s, which is an invalid type, it should be %s", getValue(serviceMap, nextErr.Context().String()), expectedType) + } + + validTypes := parseValidTypesFromSchema(schema, err.Context().String()) + + validTypesMsg := addArticle(strings.Join(validTypes, " or ")) + + return fmt.Sprintf("contains an invalid type, it should be %s", validTypesMsg) + case "unique": + contextWithDuplicates := getValue(serviceMap, nextErr.Context().String()) + + return fmt.Sprintf("contains non unique items, please remove duplicates from %s", contextWithDuplicates) + } + + return "" +} + +func invalidTypeMessage(service, key string, err gojsonschema.ResultError) string { + expectedTypesString := err.Details()["expected"].(string) + var expectedTypes []string + + if strings.Contains(expectedTypesString, ",") { + expectedTypes = strings.Split(expectedTypesString[1:len(expectedTypesString)-1], ",") + } else { + expectedTypes = []string{expectedTypesString} + } + + validTypesMsg := addArticle(strings.Join(expectedTypes, " or ")) + + return fmt.Sprintf("Service '%s' configuration key '%s' contains an invalid type, it should be %s.", service, key, validTypesMsg) +} + +func validate(serviceMap RawServiceMap) error { + if err := setupSchemaLoaders(schemaDataV1, &schemaV1, &schemaLoaderV1, &constraintSchemaLoaderV1); err != nil { + return err + } + + serviceMap = convertServiceMapKeysToStrings(serviceMap) + + dataLoader := gojsonschema.NewGoLoader(serviceMap) + + result, err := gojsonschema.Validate(schemaLoaderV1, dataLoader) + if err != nil { + return err + } + + return generateErrorMessages(serviceMap, schemaV1, result) +} + +func validateV2(serviceMap RawServiceMap) error { + if err := setupSchemaLoaders(servicesSchemaDataV2, &schemaV2, &schemaLoaderV2, &constraintSchemaLoaderV2); err != nil { + return err + } + + serviceMap = convertServiceMapKeysToStrings(serviceMap) + + dataLoader := gojsonschema.NewGoLoader(serviceMap) + + result, err := gojsonschema.Validate(schemaLoaderV2, dataLoader) + if err != nil { + return err + } + + return generateErrorMessages(serviceMap, schemaV2, result) +} + +func generateErrorMessages(serviceMap RawServiceMap, schema map[string]interface{}, result *gojsonschema.Result) error { + var validationErrors []string + + // gojsonschema can create extraneous "additional_property_not_allowed" errors in some cases + // If this is set, and the error is at root level, skip over that error + skipRootAdditionalPropertyError := false + + if !result.Valid() { + for i := 0; i < len(result.Errors()); i++ { + err := result.Errors()[i] + + if skipRootAdditionalPropertyError && err.Type() == "additional_property_not_allowed" && err.Context().String() == "(root)" { + skipRootAdditionalPropertyError = false + continue + } + + if err.Context().String() == "(root)" { + switch err.Type() { + case "additional_property_not_allowed": + validationErrors = append(validationErrors, fmt.Sprintf("Invalid service name '%s' - only [a-zA-Z0-9\\._\\-] characters are allowed", err.Field())) + default: + validationErrors = append(validationErrors, err.Description()) + } + } else { + 
skipRootAdditionalPropertyError = true + + serviceName := serviceNameFromErrorField(err.Field()) + key := keyNameFromErrorField(err.Field()) + + switch err.Type() { + case "additional_property_not_allowed": + validationErrors = append(validationErrors, unsupportedConfigMessage(key, result.Errors()[i+1])) + case "number_one_of": + validationErrors = append(validationErrors, fmt.Sprintf("Service '%s' configuration key '%s' %s", serviceName, key, oneOfMessage(serviceMap, schema, err, result.Errors()[i+1]))) + + // Next error handled in oneOfMessage, skip over it + i++ + case "invalid_type": + validationErrors = append(validationErrors, invalidTypeMessage(serviceName, key, err)) + case "required": + validationErrors = append(validationErrors, fmt.Sprintf("Service '%s' option '%s' is invalid, %s", serviceName, key, err.Description())) + case "missing_dependency": + dependency := err.Details()["dependency"].(string) + validationErrors = append(validationErrors, fmt.Sprintf("Invalid configuration for '%s' service: dependency '%s' is not satisfied", serviceName, dependency)) + case "unique": + contextWithDuplicates := getValue(serviceMap, err.Context().String()) + validationErrors = append(validationErrors, fmt.Sprintf("Service '%s' configuration key '%s' value %s has non-unique elements", serviceName, key, contextWithDuplicates)) + default: + validationErrors = append(validationErrors, fmt.Sprintf("Service '%s' configuration key %s value %s", serviceName, key, err.Description())) + } + } + } + + return fmt.Errorf(strings.Join(validationErrors, "\n")) + } + + return nil +} + +func validateServiceConstraints(service RawService, serviceName string) error { + if err := setupSchemaLoaders(schemaDataV1, &schemaV1, &schemaLoaderV1, &constraintSchemaLoaderV1); err != nil { + return err + } + + service = convertServiceKeysToStrings(service) + + var validationErrors []string + + dataLoader := gojsonschema.NewGoLoader(service) + + result, err := gojsonschema.Validate(constraintSchemaLoaderV1, dataLoader) + if err != nil { + return err + } + + if !result.Valid() { + for _, err := range result.Errors() { + if err.Type() == "number_any_of" { + _, containsImage := service["image"] + _, containsBuild := service["build"] + _, containsDockerfile := service["dockerfile"] + + if containsImage && containsBuild { + validationErrors = append(validationErrors, fmt.Sprintf("Service '%s' has both an image and build path specified. A service can either be built to image or use an existing image, not both.", serviceName)) + } else if !containsImage && !containsBuild { + validationErrors = append(validationErrors, fmt.Sprintf("Service '%s' has neither an image nor a build path specified. Exactly one must be provided.", serviceName)) + } else if containsImage && containsDockerfile { + validationErrors = append(validationErrors, fmt.Sprintf("Service '%s' has both an image and alternate Dockerfile. 
A service can either be built to image or use an existing image, not both.", serviceName))
+				}
+			}
+		}
+
+		return fmt.Errorf(strings.Join(validationErrors, "\n"))
+	}
+
+	return nil
+}
+
+func validateServiceConstraintsv2(service RawService, serviceName string) error {
+	if err := setupSchemaLoaders(servicesSchemaDataV2, &schemaV2, &schemaLoaderV2, &constraintSchemaLoaderV2); err != nil {
+		return err
+	}
+
+	service = convertServiceKeysToStrings(service)
+
+	var validationErrors []string
+
+	dataLoader := gojsonschema.NewGoLoader(service)
+
+	result, err := gojsonschema.Validate(constraintSchemaLoaderV2, dataLoader)
+	if err != nil {
+		return err
+	}
+
+	if !result.Valid() {
+		for _, err := range result.Errors() {
+			if err.Type() == "required" {
+				_, containsImage := service["image"]
+				_, containsBuild := service["build"]
+
+				if containsBuild || !containsImage && !containsBuild {
+					validationErrors = append(validationErrors, fmt.Sprintf("Service '%s' has neither an image nor a build context specified. At least one must be provided.", serviceName))
+				}
+			}
+		}
+		return fmt.Errorf(strings.Join(validationErrors, "\n"))
+	}
+
+	return nil
+}
diff --git a/vendor/github.com/docker/libcompose/utils/util.go b/vendor/github.com/docker/libcompose/utils/util.go
new file mode 100644
index 0000000000..971f943357
--- /dev/null
+++ b/vendor/github.com/docker/libcompose/utils/util.go
@@ -0,0 +1,162 @@
+package utils
+
+import (
+	"encoding/json"
+	"sync"
+
+	"github.com/Sirupsen/logrus"
+
+	"gopkg.in/yaml.v2"
+)
+
+// InParallel holds a pool and a waitgroup to execute tasks in parallel and to be able
+// to wait for completion of all tasks.
+type InParallel struct {
+	wg   sync.WaitGroup
+	pool sync.Pool
+}
+
+// Add runs the specified task in parallel and adds it to the WaitGroup.
+func (i *InParallel) Add(task func() error) {
+	i.wg.Add(1)
+
+	go func() {
+		defer i.wg.Done()
+		err := task()
+		if err != nil {
+			i.pool.Put(err)
+		}
+	}()
+}
+
+// Wait waits for all tasks to complete and returns the latest error encountered, if any.
+func (i *InParallel) Wait() error {
+	i.wg.Wait()
+	obj := i.pool.Get()
+	if err, ok := obj.(error); ok {
+		return err
+	}
+	return nil
+}
+
+// ConvertByJSON converts a struct (src) to another one (target) using json marshalling/unmarshalling.
+// If the structures are not compatible, this will return an error as the unmarshalling will fail.
+func ConvertByJSON(src, target interface{}) error {
+	newBytes, err := json.Marshal(src)
+	if err != nil {
+		return err
+	}
+
+	err = json.Unmarshal(newBytes, target)
+	if err != nil {
+		logrus.Errorf("Failed to unmarshal: %v\n%s", err, string(newBytes))
+	}
+	return err
+}
+
+// Convert converts a struct (src) to another one (target) using yaml marshalling/unmarshalling.
+// If the structures are not compatible, this will return an error as the unmarshalling will fail.
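+//
+// A sketch of a typical call site (assuming rawService holds a map-form
+// service definition such as config.RawService):
+//
+//	var serviceConfig config.ServiceConfig
+//	if err := utils.Convert(rawService, &serviceConfig); err != nil {
+//		return err
+//	}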
+func Convert(src, target interface{}) error {
+	newBytes, err := yaml.Marshal(src)
+	if err != nil {
+		return err
+	}
+
+	err = yaml.Unmarshal(newBytes, target)
+	if err != nil {
+		logrus.Errorf("Failed to unmarshal: %v\n%s", err, string(newBytes))
+	}
+	return err
+}
+
+// CopySlice creates an exact copy of the provided string slice
+func CopySlice(s []string) []string {
+	if s == nil {
+		return nil
+	}
+	r := make([]string, len(s))
+	copy(r, s)
+	return r
+}
+
+// CopyMap creates an exact copy of the provided string-to-string map
+func CopyMap(m map[string]string) map[string]string {
+	if m == nil {
+		return nil
+	}
+	r := map[string]string{}
+	for k, v := range m {
+		r[k] = v
+	}
+	return r
+}
+
+// FilterStringSet accepts a string set `s` (in the form of `map[string]bool`) and a filtering function `f`
+// and returns a string set containing only the strings `x` for which `f(x) == true`
+func FilterStringSet(s map[string]bool, f func(x string) bool) map[string]bool {
+	result := map[string]bool{}
+	for k := range s {
+		if f(k) {
+			result[k] = true
+		}
+	}
+	return result
+}
+
+// FilterString returns a json representation of the specified map
+// that is used as filter for docker.
+func FilterString(data map[string][]string) string {
+	// I can't imagine this would ever fail
+	bytes, _ := json.Marshal(data)
+	return string(bytes)
+}
+
+// Contains checks if the specified string (key) is present in the specified collection.
+func Contains(collection []string, key string) bool {
+	for _, value := range collection {
+		if value == key {
+			return true
+		}
+	}
+
+	return false
+}
+
+// Merge performs a union of two string slices: the result is an unordered slice
+// that includes every item from either argument exactly once
+func Merge(coll1, coll2 []string) []string {
+	m := map[string]struct{}{}
+	for _, v := range append(coll1, coll2...) {
+		m[v] = struct{}{}
+	}
+	r := make([]string, 0, len(m))
+	for k := range m {
+		r = append(r, k)
+	}
+	return r
+}
+
+// ConvertKeysToStrings converts map[interface{}] keys to map[string] keys, recursively.
+func ConvertKeysToStrings(item interface{}) interface{} {
+	switch typedDatas := item.(type) {
+	case map[string]interface{}:
+		for key, value := range typedDatas {
+			typedDatas[key] = ConvertKeysToStrings(value)
+		}
+		return typedDatas
+	case map[interface{}]interface{}:
+		newMap := make(map[string]interface{})
+		for key, value := range typedDatas {
+			stringKey := key.(string)
+			newMap[stringKey] = ConvertKeysToStrings(value)
+		}
+		return newMap
+	case []interface{}:
+		for i, value := range typedDatas {
+			typedDatas[i] = ConvertKeysToStrings(value)
+		}
+		return typedDatas
+	default:
+		return item
+	}
+}
diff --git a/vendor/github.com/docker/libcompose/yaml/build.go b/vendor/github.com/docker/libcompose/yaml/build.go
new file mode 100644
index 0000000000..b6a8a92518
--- /dev/null
+++ b/vendor/github.com/docker/libcompose/yaml/build.go
@@ -0,0 +1,117 @@
+package yaml
+
+import (
+	"errors"
+	"fmt"
+	"strconv"
+	"strings"
+)
+
+// Build represents a build element in compose file.
+// It can take multiple forms in the compose file, hence this special type.
+type Build struct {
+	Context    string
+	Dockerfile string
+	Args       map[string]*string
+}
+
+// MarshalYAML implements the Marshaller interface.
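+//
+// Build marshals back to the map form used by compose, e.g.:
+//
+//	build:
+//	  context: ./dir
+//	  dockerfile: Dockerfile-alternate
+//	  args:
+//	    buildno: "1"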
+func (b Build) MarshalYAML() (interface{}, error) {
+	m := map[string]interface{}{}
+	if b.Context != "" {
+		m["context"] = b.Context
+	}
+	if b.Dockerfile != "" {
+		m["dockerfile"] = b.Dockerfile
+	}
+	if len(b.Args) > 0 {
+		m["args"] = b.Args
+	}
+	return m, nil
+}
+
+// UnmarshalYAML implements the Unmarshaller interface.
+func (b *Build) UnmarshalYAML(unmarshal func(interface{}) error) error {
+	var stringType string
+	if err := unmarshal(&stringType); err == nil {
+		b.Context = stringType
+		return nil
+	}
+
+	var mapType map[interface{}]interface{}
+	if err := unmarshal(&mapType); err == nil {
+		for mapKey, mapValue := range mapType {
+			switch mapKey {
+			case "context":
+				b.Context = mapValue.(string)
+			case "dockerfile":
+				b.Dockerfile = mapValue.(string)
+			case "args":
+				args, err := handleBuildArgs(mapValue)
+				if err != nil {
+					return err
+				}
+				b.Args = args
+			default:
+				// Ignore unknown keys
+				continue
+			}
+		}
+		return nil
+	}
+
+	return errors.New("Failed to unmarshal Build")
+}
+
+func handleBuildArgs(value interface{}) (map[string]*string, error) {
+	var args map[string]*string
+	switch v := value.(type) {
+	case map[interface{}]interface{}:
+		return handleBuildArgMap(v)
+	case []interface{}:
+		return handleBuildArgSlice(v)
+	default:
+		return args, fmt.Errorf("Failed to unmarshal Build args: %#v", value)
+	}
+}
+
+func handleBuildArgSlice(s []interface{}) (map[string]*string, error) {
+	var args = map[string]*string{}
+	for _, arg := range s {
+		// check if a value is provided
+		switch v := strings.SplitN(arg.(string), "=", 2); len(v) {
+		case 1:
+			// if no value is specified for this build arg, assign it an ASCII NUL
+			// value and query the environment later, when the service is built
+			str := "\x00"
+			args[v[0]] = &str
+		case 2:
+			// if a value is provided, use it as-is
+			args[v[0]] = &v[1]
+		}
+	}
+	return args, nil
+}
+
+func handleBuildArgMap(m map[interface{}]interface{}) (map[string]*string, error) {
+	args := map[string]*string{}
+	for mapKey, mapValue := range m {
+		var argValue string
+		name, ok := mapKey.(string)
+		if !ok {
+			return args, fmt.Errorf("Cannot unmarshal '%v' of type %T into a string value", mapKey, mapKey)
+		}
+		switch a := mapValue.(type) {
+		case string:
+			argValue = a
+		case int:
+			argValue = strconv.Itoa(a)
+		case int64:
+			argValue = strconv.Itoa(int(a))
+		default:
+			return args, fmt.Errorf("Cannot unmarshal '%v' of type %T into a string value", mapValue, mapValue)
+		}
+		args[name] = &argValue
+	}
+	return args, nil
+}
diff --git a/vendor/github.com/docker/libcompose/yaml/command.go b/vendor/github.com/docker/libcompose/yaml/command.go
new file mode 100644
index 0000000000..ace69b5d3b
--- /dev/null
+++ b/vendor/github.com/docker/libcompose/yaml/command.go
@@ -0,0 +1,42 @@
+package yaml
+
+import (
+	"errors"
+	"fmt"
+
+	"github.com/docker/docker/api/types/strslice"
+	"github.com/flynn/go-shlex"
+)
+
+// Command represents a docker command; it can be a string or an array of strings.
+type Command strslice.StrSlice
+
+// UnmarshalYAML implements the Unmarshaller interface.
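+//
+// Both compose forms are accepted: a shell-style string, split with go-shlex,
+// or an explicit list, e.g.:
+//
+//	command: bundle exec thin -p 3000
+//	command: ["bundle", "exec", "thin", "-p", "3000"]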
+func (s *Command) UnmarshalYAML(unmarshal func(interface{}) error) error {
+	var stringType string
+	if err := unmarshal(&stringType); err == nil {
+		parts, err := shlex.Split(stringType)
+		if err != nil {
+			return err
+		}
+		*s = parts
+		return nil
+	}
+
+	var sliceType []interface{}
+	if err := unmarshal(&sliceType); err == nil {
+		parts, err := toStrings(sliceType)
+		if err != nil {
+			return err
+		}
+		*s = parts
+		return nil
+	}
+
+	var interfaceType interface{}
+	if err := unmarshal(&interfaceType); err == nil {
+		// dump the unexpected value to help debugging before failing
+		fmt.Println(interfaceType)
+	}
+
+	return errors.New("Failed to unmarshal Command")
+}
diff --git a/vendor/github.com/docker/libcompose/yaml/external.go b/vendor/github.com/docker/libcompose/yaml/external.go
new file mode 100644
index 0000000000..be7efca9f9
--- /dev/null
+++ b/vendor/github.com/docker/libcompose/yaml/external.go
@@ -0,0 +1,37 @@
+package yaml
+
+// External represents an external network entry in compose file.
+// It can be a boolean (true|false) or have a name
+type External struct {
+	External bool
+	Name     string
+}
+
+// MarshalYAML implements the Marshaller interface.
+func (n External) MarshalYAML() (interface{}, error) {
+	if n.Name == "" {
+		return n.External, nil
+	}
+	return map[string]interface{}{
+		"name": n.Name,
+	}, nil
+}
+
+// UnmarshalYAML implements the Unmarshaller interface.
+func (n *External) UnmarshalYAML(unmarshal func(interface{}) error) error {
+	if err := unmarshal(&n.External); err == nil {
+		return nil
+	}
+	var dummyExternal struct {
+		Name string
+	}
+
+	err := unmarshal(&dummyExternal)
+	if err != nil {
+		return err
+	}
+	n.Name = dummyExternal.Name
+	n.External = true
+
+	return nil
+}
diff --git a/vendor/github.com/docker/libcompose/yaml/network.go b/vendor/github.com/docker/libcompose/yaml/network.go
new file mode 100644
index 0000000000..2776b8586b
--- /dev/null
+++ b/vendor/github.com/docker/libcompose/yaml/network.go
@@ -0,0 +1,108 @@
+package yaml
+
+import (
+	"errors"
+	"fmt"
+)
+
+// Networks represents a list of service networks in compose file.
+// It has several representations, hence this specific struct.
+type Networks struct {
+	Networks []*Network
+}
+
+// Network represents a service network in compose file.
+type Network struct {
+	Name        string   `yaml:"-"`
+	RealName    string   `yaml:"-"`
+	Aliases     []string `yaml:"aliases,omitempty"`
+	IPv4Address string   `yaml:"ipv4_address,omitempty"`
+	IPv6Address string   `yaml:"ipv6_address,omitempty"`
+}
+
+// MarshalYAML implements the Marshaller interface.
+func (n Networks) MarshalYAML() (interface{}, error) {
+	m := map[string]*Network{}
+	for _, network := range n.Networks {
+		m[network.Name] = network
+	}
+	return m, nil
+}
+
+// UnmarshalYAML implements the Unmarshaller interface.
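+//
+// Both the short list form and the extended map form are accepted, e.g.:
+//
+//	networks:
+//	  - front
+//
+//	networks:
+//	  front:
+//	    aliases: [web]
+//	    ipv4_address: 172.16.238.10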
+func (n *Networks) UnmarshalYAML(unmarshal func(interface{}) error) error {
+	var sliceType []interface{}
+	if err := unmarshal(&sliceType); err == nil {
+		n.Networks = []*Network{}
+		for _, network := range sliceType {
+			name, ok := network.(string)
+			if !ok {
+				return fmt.Errorf("Cannot unmarshal '%v' of type %T into a string value", network, network)
+			}
+			n.Networks = append(n.Networks, &Network{
+				Name: name,
+			})
+		}
+		return nil
+	}
+
+	var mapType map[interface{}]interface{}
+	if err := unmarshal(&mapType); err == nil {
+		n.Networks = []*Network{}
+		for mapKey, mapValue := range mapType {
+			name, ok := mapKey.(string)
+			if !ok {
+				return fmt.Errorf("Cannot unmarshal '%v' of type %T into a string value", mapKey, mapKey)
+			}
+			network, err := handleNetwork(name, mapValue)
+			if err != nil {
+				return err
+			}
+			n.Networks = append(n.Networks, network)
+		}
+		return nil
+	}
+
+	return errors.New("Failed to unmarshal Networks")
+}
+
+func handleNetwork(name string, value interface{}) (*Network, error) {
+	if value == nil {
+		return &Network{
+			Name: name,
+		}, nil
+	}
+	switch v := value.(type) {
+	case map[interface{}]interface{}:
+		network := &Network{
+			Name: name,
+		}
+		for mapKey, mapValue := range v {
+			name, ok := mapKey.(string)
+			if !ok {
+				return &Network{}, fmt.Errorf("Cannot unmarshal '%v' of type %T into a string value", mapKey, mapKey)
+			}
+			switch name {
+			case "aliases":
+				aliases, ok := mapValue.([]interface{})
+				if !ok {
+					return &Network{}, fmt.Errorf("Cannot unmarshal '%v' of type %T into a list of strings", mapValue, mapValue)
+				}
+				network.Aliases = []string{}
+				for _, alias := range aliases {
+					network.Aliases = append(network.Aliases, alias.(string))
+				}
+			case "ipv4_address":
+				network.IPv4Address = mapValue.(string)
+			case "ipv6_address":
+				network.IPv6Address = mapValue.(string)
+			default:
+				// Ignore unknown keys?
+				continue
+			}
+		}
+		return network, nil
+	default:
+		return &Network{}, fmt.Errorf("Failed to unmarshal Network: %#v", value)
+	}
+}
diff --git a/vendor/github.com/docker/libcompose/yaml/types_yaml.go b/vendor/github.com/docker/libcompose/yaml/types_yaml.go
new file mode 100644
index 0000000000..d0a5c78961
--- /dev/null
+++ b/vendor/github.com/docker/libcompose/yaml/types_yaml.go
@@ -0,0 +1,256 @@
+package yaml
+
+import (
+	"errors"
+	"fmt"
+	"strconv"
+	"strings"
+
+	"github.com/docker/docker/api/types/strslice"
+	"github.com/docker/go-units"
+)
+
+// StringorInt represents a string or an integer.
+type StringorInt int64
+
+// UnmarshalYAML implements the Unmarshaller interface.
+func (s *StringorInt) UnmarshalYAML(unmarshal func(interface{}) error) error {
+	var intType int64
+	if err := unmarshal(&intType); err == nil {
+		*s = StringorInt(intType)
+		return nil
+	}
+
+	var stringType string
+	if err := unmarshal(&stringType); err == nil {
+		intType, err := strconv.ParseInt(stringType, 10, 64)
+		if err != nil {
+			return err
+		}
+		*s = StringorInt(intType)
+		return nil
+	}
+
+	return errors.New("Failed to unmarshal StringorInt")
+}
+
+// MemStringorInt represents a string or an integer.
+// The string form supports unit notations such as 10m for 10 megabytes of memory.
+type MemStringorInt int64
+
+// UnmarshalYAML implements the Unmarshaller interface.
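+//
+// Plain integers are taken as a byte count; strings are parsed with
+// go-units' RAMInBytes, so unit suffixes are accepted, e.g.:
+//
+//	mem_limit: 1073741824
+//	mem_limit: 1g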
+func (s *MemStringorInt) UnmarshalYAML(unmarshal func(interface{}) error) error {
+	var intType int64
+	if err := unmarshal(&intType); err == nil {
+		*s = MemStringorInt(intType)
+		return nil
+	}
+
+	var stringType string
+	if err := unmarshal(&stringType); err == nil {
+		intType, err := units.RAMInBytes(stringType)
+		if err != nil {
+			return err
+		}
+		*s = MemStringorInt(intType)
+		return nil
+	}
+
+	return errors.New("Failed to unmarshal MemStringorInt")
+}
+
+// Stringorslice represents a string or an array of strings.
+// It uses the engine-api strslice.StrSlice type, augmented with YAML marshalling.
+type Stringorslice strslice.StrSlice
+
+// UnmarshalYAML implements the Unmarshaller interface.
+func (s *Stringorslice) UnmarshalYAML(unmarshal func(interface{}) error) error {
+	var stringType string
+	if err := unmarshal(&stringType); err == nil {
+		*s = []string{stringType}
+		return nil
+	}
+
+	var sliceType []interface{}
+	if err := unmarshal(&sliceType); err == nil {
+		parts, err := toStrings(sliceType)
+		if err != nil {
+			return err
+		}
+		*s = parts
+		return nil
+	}
+
+	return errors.New("Failed to unmarshal Stringorslice")
+}
+
+// SliceorMap represents a slice or a map of strings.
+type SliceorMap map[string]string
+
+// UnmarshalYAML implements the Unmarshaller interface.
+func (s *SliceorMap) UnmarshalYAML(unmarshal func(interface{}) error) error {
+	var sliceType []interface{}
+	if err := unmarshal(&sliceType); err == nil {
+		parts := map[string]string{}
+		for _, s := range sliceType {
+			if str, ok := s.(string); ok {
+				str := strings.TrimSpace(str)
+				keyValueSlice := strings.SplitN(str, "=", 2)
+
+				key := keyValueSlice[0]
+				val := ""
+				if len(keyValueSlice) == 2 {
+					val = keyValueSlice[1]
+				}
+				parts[key] = val
+			} else {
+				return fmt.Errorf("Cannot unmarshal '%v' of type %T into a string value", s, s)
+			}
+		}
+		*s = parts
+		return nil
+	}
+
+	var mapType map[interface{}]interface{}
+	if err := unmarshal(&mapType); err == nil {
+		parts := map[string]string{}
+		for k, v := range mapType {
+			if sk, ok := k.(string); ok {
+				if sv, ok := v.(string); ok {
+					parts[sk] = sv
+				} else {
+					return fmt.Errorf("Cannot unmarshal '%v' of type %T into a string value", v, v)
+				}
+			} else {
+				return fmt.Errorf("Cannot unmarshal '%v' of type %T into a string value", k, k)
+			}
+		}
+		*s = parts
+		return nil
+	}
+
+	return errors.New("Failed to unmarshal SliceorMap")
+}
+
+// MaporEqualSlice represents a slice of strings that gets unmarshalled from a
+// YAML map into 'key=value' strings.
+type MaporEqualSlice []string
+
+// UnmarshalYAML implements the Unmarshaller interface.
+func (s *MaporEqualSlice) UnmarshalYAML(unmarshal func(interface{}) error) error {
+	parts, err := unmarshalToStringOrSepMapParts(unmarshal, "=")
+	if err != nil {
+		return err
+	}
+	*s = parts
+	return nil
+}
+
+// ToMap returns the list of strings as a map, splitting each 'key=value' entry on '='.
+func (s *MaporEqualSlice) ToMap() map[string]string {
+	return toMap(*s, "=")
+}
+
+// MaporColonSlice represents a slice of strings that gets unmarshalled from a
+// YAML map into 'key:value' strings.
+type MaporColonSlice []string
+
+// UnmarshalYAML implements the Unmarshaller interface.
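+//
+// Either representation is accepted and normalised to the slice form, e.g.:
+//
+//	links:
+//	  - db:database
+//
+//	links:
+//	  db: database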
+func (s *MaporColonSlice) UnmarshalYAML(unmarshal func(interface{}) error) error {
+	parts, err := unmarshalToStringOrSepMapParts(unmarshal, ":")
+	if err != nil {
+		return err
+	}
+	*s = parts
+	return nil
+}
+
+// ToMap returns the list of strings as a map, splitting each 'key:value' entry on ':'.
+func (s *MaporColonSlice) ToMap() map[string]string {
+	return toMap(*s, ":")
+}
+
+// MaporSpaceSlice represents a slice of strings that gets unmarshalled from a
+// YAML map into 'key value' strings.
+type MaporSpaceSlice []string
+
+// UnmarshalYAML implements the Unmarshaller interface.
+func (s *MaporSpaceSlice) UnmarshalYAML(unmarshal func(interface{}) error) error {
+	parts, err := unmarshalToStringOrSepMapParts(unmarshal, " ")
+	if err != nil {
+		return err
+	}
+	*s = parts
+	return nil
+}
+
+// ToMap returns the list of strings as a map, splitting each 'key value' entry on ' '.
+func (s *MaporSpaceSlice) ToMap() map[string]string {
+	return toMap(*s, " ")
+}
+
+func unmarshalToStringOrSepMapParts(unmarshal func(interface{}) error, key string) ([]string, error) {
+	var sliceType []interface{}
+	if err := unmarshal(&sliceType); err == nil {
+		return toStrings(sliceType)
+	}
+	var mapType map[interface{}]interface{}
+	if err := unmarshal(&mapType); err == nil {
+		return toSepMapParts(mapType, key)
+	}
+	return nil, errors.New("Failed to unmarshal MaporSlice")
+}
+
+func toSepMapParts(value map[interface{}]interface{}, sep string) ([]string, error) {
+	if len(value) == 0 {
+		return nil, nil
+	}
+	parts := make([]string, 0, len(value))
+	for k, v := range value {
+		if sk, ok := k.(string); ok {
+			if sv, ok := v.(string); ok {
+				parts = append(parts, sk+sep+sv)
+			} else if sv, ok := v.(int); ok {
+				parts = append(parts, sk+sep+strconv.Itoa(sv))
+			} else if sv, ok := v.(int64); ok {
+				parts = append(parts, sk+sep+strconv.FormatInt(sv, 10))
+			} else if sv, ok := v.(float64); ok {
+				parts = append(parts, sk+sep+strconv.FormatFloat(sv, 'f', -1, 64))
+			} else if v == nil {
+				parts = append(parts, sk)
+			} else {
+				return nil, fmt.Errorf("Cannot unmarshal '%v' of type %T into a string value", v, v)
+			}
+		} else {
+			return nil, fmt.Errorf("Cannot unmarshal '%v' of type %T into a string value", k, k)
+		}
+	}
+	return parts, nil
+}
+
+func toStrings(s []interface{}) ([]string, error) {
+	if len(s) == 0 {
+		return nil, nil
+	}
+	r := make([]string, len(s))
+	for k, v := range s {
+		if sv, ok := v.(string); ok {
+			r[k] = sv
+		} else {
+			return nil, fmt.Errorf("Cannot unmarshal '%v' of type %T into a string value", v, v)
+		}
+	}
+	return r, nil
+}
+
+func toMap(s []string, sep string) map[string]string {
+	m := map[string]string{}
+	for _, v := range s {
+		values := strings.Split(v, sep)
+		m[values[0]] = values[1]
+	}
+	return m
+}
diff --git a/vendor/github.com/docker/libcompose/yaml/ulimit.go b/vendor/github.com/docker/libcompose/yaml/ulimit.go
new file mode 100644
index 0000000000..c25c493646
--- /dev/null
+++ b/vendor/github.com/docker/libcompose/yaml/ulimit.go
@@ -0,0 +1,108 @@
+package yaml
+
+import (
+	"errors"
+	"fmt"
+	"sort"
+)
+
+// Ulimits represents a list of Ulimit.
+// It is, however, represented in yaml as keys (and thus a map in Go).
+type Ulimits struct {
+	Elements []Ulimit
+}
+
+// MarshalYAML implements the Marshaller interface.
+func (u Ulimits) MarshalYAML() (interface{}, error) {
+	ulimitMap := make(map[string]Ulimit)
+	for _, ulimit := range u.Elements {
+		ulimitMap[ulimit.Name] = ulimit
+	}
+	return ulimitMap, nil
+}
+
+// UnmarshalYAML implements the Unmarshaller interface.
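+//
+// Each ulimit is either a single integer (soft and hard set to the same value)
+// or an explicit soft/hard pair, e.g.:
+//
+//	ulimits:
+//	  nproc: 65535
+//	  nofile:
+//	    soft: 20000
+//	    hard: 40000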
+func (u *Ulimits) UnmarshalYAML(unmarshal func(interface{}) error) error {
+	ulimits := make(map[string]Ulimit)
+
+	var mapType map[interface{}]interface{}
+	if err := unmarshal(&mapType); err == nil {
+		for mapKey, mapValue := range mapType {
+			name, ok := mapKey.(string)
+			if !ok {
+				return fmt.Errorf("Cannot unmarshal '%v' of type %T into a string value", mapKey, mapKey)
+			}
+			var soft, hard int64
+			switch mv := mapValue.(type) {
+			case int:
+				soft = int64(mv)
+				hard = int64(mv)
+			case map[interface{}]interface{}:
+				if len(mv) != 2 {
+					return fmt.Errorf("Failed to unmarshal Ulimit: %#v", mapValue)
+				}
+				for mkey, mvalue := range mv {
+					switch mkey {
+					case "soft":
+						soft = int64(mvalue.(int))
+					case "hard":
+						hard = int64(mvalue.(int))
+					default:
+						// FIXME(vdemeester) Should we ignore or fail ?
+						continue
+					}
+				}
+			default:
+				return fmt.Errorf("Failed to unmarshal Ulimit: %v, %T", mapValue, mapValue)
+			}
+			ulimits[name] = Ulimit{
+				Name: name,
+				ulimitValues: ulimitValues{
+					Soft: soft,
+					Hard: hard,
+				},
+			}
+		}
+		keys := make([]string, 0, len(ulimits))
+		for key := range ulimits {
+			keys = append(keys, key)
+		}
+		sort.Strings(keys)
+		for _, key := range keys {
+			u.Elements = append(u.Elements, ulimits[key])
+		}
+		return nil
+	}
+
+	return errors.New("Failed to unmarshal Ulimit")
+}
+
+// Ulimit represents ulimit information.
+type Ulimit struct {
+	ulimitValues
+	Name string
+}
+
+type ulimitValues struct {
+	Soft int64 `yaml:"soft"`
+	Hard int64 `yaml:"hard"`
+}
+
+// MarshalYAML implements the Marshaller interface.
+func (u Ulimit) MarshalYAML() (interface{}, error) {
+	if u.Soft == u.Hard {
+		return u.Soft, nil
+	}
+	return u.ulimitValues, nil
+}
+
+// NewUlimit creates a Ulimit based on the specified parts.
+func NewUlimit(name string, soft int64, hard int64) Ulimit {
+	return Ulimit{
+		Name: name,
+		ulimitValues: ulimitValues{
+			Soft: soft,
+			Hard: hard,
+		},
+	}
+}
diff --git a/vendor/github.com/docker/libcompose/yaml/volume.go b/vendor/github.com/docker/libcompose/yaml/volume.go
new file mode 100644
index 0000000000..530aa6179d
--- /dev/null
+++ b/vendor/github.com/docker/libcompose/yaml/volume.go
@@ -0,0 +1,83 @@
+package yaml
+
+import (
+	"errors"
+	"fmt"
+	"strings"
+)
+
+// Volumes represents a list of service volumes in compose file.
+// It has several representations, hence this specific struct.
+type Volumes struct {
+	Volumes []*Volume
+}
+
+// Volume represents a service volume.
+type Volume struct {
+	Source      string `yaml:"-"`
+	Destination string `yaml:"-"`
+	AccessMode  string `yaml:"-"`
+}
+
+// String implements the Stringer interface.
+func (v *Volume) String() string {
+	var paths []string
+	if v.Source != "" {
+		paths = []string{v.Source, v.Destination}
+	} else {
+		paths = []string{v.Destination}
+	}
+	if v.AccessMode != "" {
+		paths = append(paths, v.AccessMode)
+	}
+	return strings.Join(paths, ":")
+}
+
+// MarshalYAML implements the Marshaller interface.
+func (v Volumes) MarshalYAML() (interface{}, error) {
+	vs := []string{}
+	for _, volume := range v.Volumes {
+		vs = append(vs, volume.String())
+	}
+	return vs, nil
+}
+
+// UnmarshalYAML implements the Unmarshaller interface.
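+//
+// Each entry is a string of one to three colon-separated parts, e.g.:
+//
+//	volumes:
+//	  - /var/lib/mysql
+//	  - ./cache:/tmp/cache
+//	  - ~/configs:/etc/configs:ro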
+func (v *Volumes) UnmarshalYAML(unmarshal func(interface{}) error) error {
+	var sliceType []interface{}
+	if err := unmarshal(&sliceType); err == nil {
+		v.Volumes = []*Volume{}
+		for _, volume := range sliceType {
+			name, ok := volume.(string)
+			if !ok {
+				return fmt.Errorf("Cannot unmarshal '%v' of type %T into a string value", volume, volume)
+			}
+			elts := strings.SplitN(name, ":", 3)
+			var vol *Volume
+			switch {
+			case len(elts) == 1:
+				vol = &Volume{
+					Destination: elts[0],
+				}
+			case len(elts) == 2:
+				vol = &Volume{
+					Source:      elts[0],
+					Destination: elts[1],
+				}
+			case len(elts) == 3:
+				vol = &Volume{
+					Source:      elts[0],
+					Destination: elts[1],
+					AccessMode:  elts[2],
+				}
+			default:
+				// unreachable: SplitN above yields between one and three parts
+				return fmt.Errorf("Failed to unmarshal Volume: %v", name)
+			}
+			v.Volumes = append(v.Volumes, vol)
+		}
+		return nil
+	}
+
+	return errors.New("Failed to unmarshal Volumes")
+}
diff --git a/vendor/github.com/flynn/go-shlex/COPYING b/vendor/github.com/flynn/go-shlex/COPYING
new file mode 100644
index 0000000000..d645695673
--- /dev/null
+++ b/vendor/github.com/flynn/go-shlex/COPYING
@@ -0,0 +1,202 @@
+
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+ + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/vendor/github.com/flynn/go-shlex/Makefile b/vendor/github.com/flynn/go-shlex/Makefile new file mode 100644 index 0000000000..038d9a4896 --- /dev/null +++ b/vendor/github.com/flynn/go-shlex/Makefile @@ -0,0 +1,21 @@ +# Copyright 2011 Google Inc. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
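+#
+# Note: Make.inc and Make.pkg belong to the pre-Go-1 Makefile build system;
+# a modern checkout builds this package with a plain `go build`.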
+ +include $(GOROOT)/src/Make.inc + +TARG=shlex +GOFILES=\ + shlex.go\ + +include $(GOROOT)/src/Make.pkg diff --git a/vendor/github.com/flynn/go-shlex/README.md b/vendor/github.com/flynn/go-shlex/README.md new file mode 100644 index 0000000000..c86bcc066f --- /dev/null +++ b/vendor/github.com/flynn/go-shlex/README.md @@ -0,0 +1,2 @@ +go-shlex is a simple lexer for go that supports shell-style quoting, +commenting, and escaping. diff --git a/vendor/github.com/flynn/go-shlex/shlex.go b/vendor/github.com/flynn/go-shlex/shlex.go new file mode 100644 index 0000000000..7aeace801e --- /dev/null +++ b/vendor/github.com/flynn/go-shlex/shlex.go @@ -0,0 +1,457 @@ +/* +Copyright 2012 Google Inc. All Rights Reserved. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package shlex + +/* +Package shlex implements a simple lexer which splits input in to tokens using +shell-style rules for quoting and commenting. +*/ +import ( + "bufio" + "errors" + "fmt" + "io" + "strings" +) + +/* +A TokenType is a top-level token; a word, space, comment, unknown. +*/ +type TokenType int + +/* +A RuneTokenType is the type of a UTF-8 character; a character, quote, space, escape. +*/ +type RuneTokenType int + +type lexerState int + +type Token struct { + tokenType TokenType + value string +} + +/* +Two tokens are equal if both their types and values are equal. A nil token can +never equal another token. +*/ +func (a *Token) Equal(b *Token) bool { + if a == nil || b == nil { + return false + } + if a.tokenType != b.tokenType { + return false + } + return a.value == b.value +} + +const ( + RUNE_CHAR string = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789._-,/@$*()+=><:;&^%~|!?[]{}" + RUNE_SPACE string = " \t\r\n" + RUNE_ESCAPING_QUOTE string = "\"" + RUNE_NONESCAPING_QUOTE string = "'" + RUNE_ESCAPE = "\\" + RUNE_COMMENT = "#" + + RUNETOKEN_UNKNOWN RuneTokenType = 0 + RUNETOKEN_CHAR RuneTokenType = 1 + RUNETOKEN_SPACE RuneTokenType = 2 + RUNETOKEN_ESCAPING_QUOTE RuneTokenType = 3 + RUNETOKEN_NONESCAPING_QUOTE RuneTokenType = 4 + RUNETOKEN_ESCAPE RuneTokenType = 5 + RUNETOKEN_COMMENT RuneTokenType = 6 + RUNETOKEN_EOF RuneTokenType = 7 + + TOKEN_UNKNOWN TokenType = 0 + TOKEN_WORD TokenType = 1 + TOKEN_SPACE TokenType = 2 + TOKEN_COMMENT TokenType = 3 + + STATE_START lexerState = 0 + STATE_INWORD lexerState = 1 + STATE_ESCAPING lexerState = 2 + STATE_ESCAPING_QUOTED lexerState = 3 + STATE_QUOTED_ESCAPING lexerState = 4 + STATE_QUOTED lexerState = 5 + STATE_COMMENT lexerState = 6 + + INITIAL_TOKEN_CAPACITY int = 100 +) + +/* +A type for classifying characters. This allows for different sorts of +classifiers - those accepting extended non-ascii chars, or strict posix +compatibility, for example. +*/ +type TokenClassifier struct { + typeMap map[int32]RuneTokenType +} + +func addRuneClass(typeMap *map[int32]RuneTokenType, runes string, tokenType RuneTokenType) { + for _, rune := range runes { + (*typeMap)[int32(rune)] = tokenType + } +} + +/* +Create a new classifier for basic ASCII characters. 
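+
+Runes absent from the map classify as RUNETOKEN_UNKNOWN (the zero value); the
+lexer accepts such runes literally inside quotes and comments and reports an
+error in every other state.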
+*/ +func NewDefaultClassifier() *TokenClassifier { + typeMap := map[int32]RuneTokenType{} + addRuneClass(&typeMap, RUNE_CHAR, RUNETOKEN_CHAR) + addRuneClass(&typeMap, RUNE_SPACE, RUNETOKEN_SPACE) + addRuneClass(&typeMap, RUNE_ESCAPING_QUOTE, RUNETOKEN_ESCAPING_QUOTE) + addRuneClass(&typeMap, RUNE_NONESCAPING_QUOTE, RUNETOKEN_NONESCAPING_QUOTE) + addRuneClass(&typeMap, RUNE_ESCAPE, RUNETOKEN_ESCAPE) + addRuneClass(&typeMap, RUNE_COMMENT, RUNETOKEN_COMMENT) + return &TokenClassifier{ + typeMap: typeMap} +} + +func (classifier *TokenClassifier) ClassifyRune(rune int32) RuneTokenType { + return classifier.typeMap[rune] +} + +/* +A type for turning an input stream in to a sequence of strings. Whitespace and +comments are skipped. +*/ +type Lexer struct { + tokenizer *Tokenizer +} + +/* +Create a new lexer. +*/ +func NewLexer(r io.Reader) (*Lexer, error) { + + tokenizer, err := NewTokenizer(r) + if err != nil { + return nil, err + } + lexer := &Lexer{tokenizer: tokenizer} + return lexer, nil +} + +/* +Return the next word, and an error value. If there are no more words, the error +will be io.EOF. +*/ +func (l *Lexer) NextWord() (string, error) { + var token *Token + var err error + for { + token, err = l.tokenizer.NextToken() + if err != nil { + return "", err + } + switch token.tokenType { + case TOKEN_WORD: + { + return token.value, nil + } + case TOKEN_COMMENT: + { + // skip comments + } + default: + { + panic(fmt.Sprintf("Unknown token type: %v", token.tokenType)) + } + } + } + return "", io.EOF +} + +/* +A type for turning an input stream in to a sequence of typed tokens. +*/ +type Tokenizer struct { + input *bufio.Reader + classifier *TokenClassifier +} + +/* +Create a new tokenizer. +*/ +func NewTokenizer(r io.Reader) (*Tokenizer, error) { + input := bufio.NewReader(r) + classifier := NewDefaultClassifier() + tokenizer := &Tokenizer{ + input: input, + classifier: classifier} + return tokenizer, nil +} + +/* +Scan the stream for the next token. + +This uses an internal state machine. It will panic if it encounters a character +which it does not know how to handle. 
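+
+For example, successive calls over the input `one "two three" # four` yield
+the word tokens "one" and "two three", followed by a comment token whose
+value is " four".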
+*/
+func (t *Tokenizer) scanStream() (*Token, error) {
+	state := STATE_START
+	var tokenType TokenType
+	value := make([]int32, 0, INITIAL_TOKEN_CAPACITY)
+	var (
+		nextRune     int32
+		nextRuneType RuneTokenType
+		err          error
+	)
+SCAN:
+	for {
+		nextRune, _, err = t.input.ReadRune()
+		nextRuneType = t.classifier.ClassifyRune(nextRune)
+		if err != nil {
+			if err == io.EOF {
+				nextRuneType = RUNETOKEN_EOF
+				err = nil
+			} else {
+				return nil, err
+			}
+		}
+		switch state {
+		case STATE_START: // no runes read yet
+			{
+				switch nextRuneType {
+				case RUNETOKEN_EOF:
+					{
+						return nil, io.EOF
+					}
+				case RUNETOKEN_CHAR:
+					{
+						tokenType = TOKEN_WORD
+						value = append(value, nextRune)
+						state = STATE_INWORD
+					}
+				case RUNETOKEN_SPACE:
+					{
+					}
+				case RUNETOKEN_ESCAPING_QUOTE:
+					{
+						tokenType = TOKEN_WORD
+						state = STATE_QUOTED_ESCAPING
+					}
+				case RUNETOKEN_NONESCAPING_QUOTE:
+					{
+						tokenType = TOKEN_WORD
+						state = STATE_QUOTED
+					}
+				case RUNETOKEN_ESCAPE:
+					{
+						tokenType = TOKEN_WORD
+						state = STATE_ESCAPING
+					}
+				case RUNETOKEN_COMMENT:
+					{
+						tokenType = TOKEN_COMMENT
+						state = STATE_COMMENT
+					}
+				default:
+					{
+						return nil, fmt.Errorf("Unknown rune: %v", nextRune)
+					}
+				}
+			}
+		case STATE_INWORD: // in a regular word
+			{
+				switch nextRuneType {
+				case RUNETOKEN_EOF:
+					{
+						break SCAN
+					}
+				case RUNETOKEN_CHAR, RUNETOKEN_COMMENT:
+					{
+						value = append(value, nextRune)
+					}
+				case RUNETOKEN_SPACE:
+					{
+						t.input.UnreadRune()
+						break SCAN
+					}
+				case RUNETOKEN_ESCAPING_QUOTE:
+					{
+						state = STATE_QUOTED_ESCAPING
+					}
+				case RUNETOKEN_NONESCAPING_QUOTE:
+					{
+						state = STATE_QUOTED
+					}
+				case RUNETOKEN_ESCAPE:
+					{
+						state = STATE_ESCAPING
+					}
+				default:
+					{
+						return nil, fmt.Errorf("Unknown rune: %v", nextRune)
+					}
+				}
+			}
+		case STATE_ESCAPING: // the next rune after an escape character
+			{
+				switch nextRuneType {
+				case RUNETOKEN_EOF:
+					{
+						err = errors.New("EOF found after escape character")
+						break SCAN
+					}
+				case RUNETOKEN_CHAR, RUNETOKEN_SPACE, RUNETOKEN_ESCAPING_QUOTE, RUNETOKEN_NONESCAPING_QUOTE, RUNETOKEN_ESCAPE, RUNETOKEN_COMMENT:
+					{
+						state = STATE_INWORD
+						value = append(value, nextRune)
+					}
+				default:
+					{
+						return nil, fmt.Errorf("Unknown rune: %v", nextRune)
+					}
+				}
+			}
+		case STATE_ESCAPING_QUOTED: // the next rune after an escape character, in double quotes
+			{
+				switch nextRuneType {
+				case RUNETOKEN_EOF:
+					{
+						err = errors.New("EOF found after escape character")
+						break SCAN
+					}
+				case RUNETOKEN_CHAR, RUNETOKEN_SPACE, RUNETOKEN_ESCAPING_QUOTE, RUNETOKEN_NONESCAPING_QUOTE, RUNETOKEN_ESCAPE, RUNETOKEN_COMMENT:
+					{
+						state = STATE_QUOTED_ESCAPING
+						value = append(value, nextRune)
+					}
+				default:
+					{
+						return nil, fmt.Errorf("Unknown rune: %v", nextRune)
+					}
+				}
+			}
+		case STATE_QUOTED_ESCAPING: // in escaping double quotes
+			{
+				switch nextRuneType {
+				case RUNETOKEN_EOF:
+					{
+						err = errors.New("EOF found when expecting closing quote.")
+						break SCAN
+					}
+				case RUNETOKEN_CHAR, RUNETOKEN_UNKNOWN, RUNETOKEN_SPACE, RUNETOKEN_NONESCAPING_QUOTE, RUNETOKEN_COMMENT:
+					{
+						value = append(value, nextRune)
+					}
+				case RUNETOKEN_ESCAPING_QUOTE:
+					{
+						state = STATE_INWORD
+					}
+				case RUNETOKEN_ESCAPE:
+					{
+						state = STATE_ESCAPING_QUOTED
+					}
+				default:
+					{
+						return nil, fmt.Errorf("Unknown rune: %v", nextRune)
+					}
+				}
+			}
+		case STATE_QUOTED: // in non-escaping single quotes
+			{
+				switch nextRuneType {
+				case RUNETOKEN_EOF:
+					{
+						err = errors.New("EOF found when expecting closing quote.")
+						break SCAN
+					}
+				case RUNETOKEN_CHAR, RUNETOKEN_UNKNOWN, RUNETOKEN_SPACE, RUNETOKEN_ESCAPING_QUOTE, RUNETOKEN_ESCAPE, RUNETOKEN_COMMENT:
+					{
+						value = append(value, nextRune)
+					}
+				case RUNETOKEN_NONESCAPING_QUOTE:
+					{
+						state = STATE_INWORD
+					}
+				default:
+					{
+						return nil, fmt.Errorf("Unknown rune: %v", nextRune)
+					}
+				}
+			}
+		case STATE_COMMENT:
+			{
+				switch nextRuneType {
+				case RUNETOKEN_EOF:
+					{
+						break SCAN
+					}
+				case RUNETOKEN_CHAR, RUNETOKEN_UNKNOWN, RUNETOKEN_ESCAPING_QUOTE, RUNETOKEN_ESCAPE, RUNETOKEN_COMMENT, RUNETOKEN_NONESCAPING_QUOTE:
+					{
+						value = append(value, nextRune)
+					}
+				case RUNETOKEN_SPACE:
+					{
+						if nextRune == '\n' {
+							state = STATE_START
+							break SCAN
+						} else {
+							value = append(value, nextRune)
+						}
+					}
+				default:
+					{
+						return nil, fmt.Errorf("Unknown rune: %v", nextRune)
+					}
+				}
+			}
+		default:
+			{
+				panic(fmt.Sprintf("Unexpected state: %v", state))
+			}
+		}
+	}
+	token := &Token{
+		tokenType: tokenType,
+		value:     string(value)}
+	return token, err
+}
+
+/*
+Return the next token in the stream, and an error value. If there are no more
+tokens available, the error value will be io.EOF.
+*/
+func (t *Tokenizer) NextToken() (*Token, error) {
+	return t.scanStream()
+}
+
+/*
+Split a string into a slice of strings, based upon shell-style rules for
+quoting, escaping, and spaces.
+*/
+func Split(s string) ([]string, error) {
+	l, err := NewLexer(strings.NewReader(s))
+	if err != nil {
+		return nil, err
+	}
+	subStrings := []string{}
+	for {
+		word, err := l.NextWord()
+		if err != nil {
+			if err == io.EOF {
+				return subStrings, nil
+			}
+			return subStrings, err
+		}
+		subStrings = append(subStrings, word)
+	}
+	return subStrings, nil
+}
diff --git a/vendor/github.com/google/go-github/LICENSE b/vendor/github.com/google/go-github/LICENSE
index 3a3a8ec0e6..53d5374a71 100644
--- a/vendor/github.com/google/go-github/LICENSE
+++ b/vendor/github.com/google/go-github/LICENSE
@@ -29,8 +29,9 @@ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 ----------
 
 Some documentation is taken from the GitHub Developer site
-, which is available under a Creative Commons
-Attribution 3.0 License:
+, which is available under the following Creative
+Commons Attribution 3.0 License. This applies only to the go-github source
+code and would not apply to any compiled binaries.
 
 THE WORK (AS DEFINED BELOW) IS PROVIDED UNDER THE TERMS OF THIS CREATIVE
 COMMONS PUBLIC LICENSE ("CCPL" OR "LICENSE"). THE WORK IS PROTECTED BY
diff --git a/vendor/github.com/google/go-github/github/activity.go b/vendor/github.com/google/go-github/github/activity.go
index 355de624b2..d6c992c7f5 100644
--- a/vendor/github.com/google/go-github/github/activity.go
+++ b/vendor/github.com/google/go-github/github/activity.go
@@ -5,10 +5,65 @@
 
 package github
 
+import "context"
+
 // ActivityService handles communication with the activity related
 // methods of the GitHub API.
 //
-// GitHub API docs: http://developer.github.com/v3/activity/
-type ActivityService struct {
-	client *Client
+// GitHub API docs: https://developer.github.com/v3/activity/
+type ActivityService service
+
+// FeedLink represents a link to a related resource.
+type FeedLink struct {
+	HRef *string `json:"href,omitempty"`
+	Type *string `json:"type,omitempty"`
+}
+
+// Feeds represents timeline resources in Atom format.
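+// All fields are pointers and may be nil when absent from the response; per
+// the note on ListFeeds below, the private per-user feeds are only returned
+// for requests authenticated with Basic Auth.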
+type Feeds struct { + TimelineURL *string `json:"timeline_url,omitempty"` + UserURL *string `json:"user_url,omitempty"` + CurrentUserPublicURL *string `json:"current_user_public_url,omitempty"` + CurrentUserURL *string `json:"current_user_url,omitempty"` + CurrentUserActorURL *string `json:"current_user_actor_url,omitempty"` + CurrentUserOrganizationURL *string `json:"current_user_organization_url,omitempty"` + CurrentUserOrganizationURLs []string `json:"current_user_organization_urls,omitempty"` + Links *struct { + Timeline *FeedLink `json:"timeline,omitempty"` + User *FeedLink `json:"user,omitempty"` + CurrentUserPublic *FeedLink `json:"current_user_public,omitempty"` + CurrentUser *FeedLink `json:"current_user,omitempty"` + CurrentUserActor *FeedLink `json:"current_user_actor,omitempty"` + CurrentUserOrganization *FeedLink `json:"current_user_organization,omitempty"` + CurrentUserOrganizations []FeedLink `json:"current_user_organizations,omitempty"` + } `json:"_links,omitempty"` +} + +// ListFeeds lists all the feeds available to the authenticated user. +// +// GitHub provides several timeline resources in Atom format: +// Timeline: The GitHub global public timeline +// User: The public timeline for any user, using URI template +// Current user public: The public timeline for the authenticated user +// Current user: The private timeline for the authenticated user +// Current user actor: The private timeline for activity created by the +// authenticated user +// Current user organizations: The private timeline for the organizations +// the authenticated user is a member of. +// +// Note: Private feeds are only returned when authenticating via Basic Auth +// since current feed URIs use the older, non revocable auth tokens. +func (s *ActivityService) ListFeeds(ctx context.Context) (*Feeds, *Response, error) { + req, err := s.client.NewRequest("GET", "feeds", nil) + if err != nil { + return nil, nil, err + } + + f := &Feeds{} + resp, err := s.client.Do(ctx, req, f) + if err != nil { + return nil, resp, err + } + + return f, resp, nil } diff --git a/vendor/github.com/google/go-github/github/activity_events.go b/vendor/github.com/google/go-github/github/activity_events.go index 0894f0fa25..78219f8ab9 100644 --- a/vendor/github.com/google/go-github/github/activity_events.go +++ b/vendor/github.com/google/go-github/github/activity_events.go @@ -6,6 +6,7 @@ package github import ( + "context" "encoding/json" "fmt" "time" @@ -27,23 +28,95 @@ func (e Event) String() string { return Stringify(e) } -// Payload returns the parsed event payload. For recognized event types -// (PushEvent), a value of the corresponding struct type will be returned. -func (e *Event) Payload() (payload interface{}) { +// ParsePayload parses the event payload. For recognized event types, +// a value of the corresponding struct type will be returned. 
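+//
+// A minimal usage sketch, assuming e is an *Event obtained from one of the
+// list methods below:
+//
+//	payload, err := e.ParsePayload()
+//	if err != nil {
+//		return err
+//	}
+//	if push, ok := payload.(*PushEvent); ok {
+//		fmt.Println(*push.Ref)
+//	}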
+func (e *Event) ParsePayload() (payload interface{}, err error) { switch *e.Type { + case "CommitCommentEvent": + payload = &CommitCommentEvent{} + case "CreateEvent": + payload = &CreateEvent{} + case "DeleteEvent": + payload = &DeleteEvent{} + case "DeploymentEvent": + payload = &DeploymentEvent{} + case "DeploymentStatusEvent": + payload = &DeploymentStatusEvent{} + case "ForkEvent": + payload = &ForkEvent{} + case "GollumEvent": + payload = &GollumEvent{} + case "IntegrationInstallationEvent": + payload = &IntegrationInstallationEvent{} + case "IntegrationInstallationRepositoriesEvent": + payload = &IntegrationInstallationRepositoriesEvent{} + case "IssueCommentEvent": + payload = &IssueCommentEvent{} + case "IssuesEvent": + payload = &IssuesEvent{} + case "LabelEvent": + payload = &LabelEvent{} + case "MemberEvent": + payload = &MemberEvent{} + case "MembershipEvent": + payload = &MembershipEvent{} + case "MilestoneEvent": + payload = &MilestoneEvent{} + case "OrganizationEvent": + payload = &OrganizationEvent{} + case "PageBuildEvent": + payload = &PageBuildEvent{} + case "PingEvent": + payload = &PingEvent{} + case "ProjectEvent": + payload = &ProjectEvent{} + case "ProjectCardEvent": + payload = &ProjectCardEvent{} + case "ProjectColumnEvent": + payload = &ProjectColumnEvent{} + case "PublicEvent": + payload = &PublicEvent{} + case "PullRequestEvent": + payload = &PullRequestEvent{} + case "PullRequestReviewEvent": + payload = &PullRequestReviewEvent{} + case "PullRequestReviewCommentEvent": + payload = &PullRequestReviewCommentEvent{} case "PushEvent": payload = &PushEvent{} + case "ReleaseEvent": + payload = &ReleaseEvent{} + case "RepositoryEvent": + payload = &RepositoryEvent{} + case "StatusEvent": + payload = &StatusEvent{} + case "TeamAddEvent": + payload = &TeamAddEvent{} + case "WatchEvent": + payload = &WatchEvent{} } - if err := json.Unmarshal(*e.RawPayload, &payload); err != nil { - panic(err.Error()) + err = json.Unmarshal(*e.RawPayload, &payload) + return payload, err +} + +// Payload returns the parsed event payload. For recognized event types, +// a value of the corresponding struct type will be returned. +// +// Deprecated: Use ParsePayload instead, which returns an error +// rather than panics if JSON unmarshaling raw payload fails. +func (e *Event) Payload() (payload interface{}) { + var err error + payload, err = e.ParsePayload() + if err != nil { + panic(err) } return payload } // ListEvents drinks from the firehose of all public events across GitHub. // -// GitHub API docs: http://developer.github.com/v3/activity/events/#list-public-events -func (s *ActivityService) ListEvents(opt *ListOptions) ([]Event, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/activity/events/#list-public-events +func (s *ActivityService) ListEvents(ctx context.Context, opt *ListOptions) ([]*Event, *Response, error) { u, err := addOptions("events", opt) if err != nil { return nil, nil, err @@ -54,19 +127,19 @@ func (s *ActivityService) ListEvents(opt *ListOptions) ([]Event, *Response, erro return nil, nil, err } - events := new([]Event) - resp, err := s.client.Do(req, events) + var events []*Event + resp, err := s.client.Do(ctx, req, &events) if err != nil { return nil, resp, err } - return *events, resp, err + return events, resp, nil } // ListRepositoryEvents lists events for a repository. 
// -// GitHub API docs: http://developer.github.com/v3/activity/events/#list-repository-events -func (s *ActivityService) ListRepositoryEvents(owner, repo string, opt *ListOptions) ([]Event, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/activity/events/#list-repository-events +func (s *ActivityService) ListRepositoryEvents(ctx context.Context, owner, repo string, opt *ListOptions) ([]*Event, *Response, error) { u := fmt.Sprintf("repos/%v/%v/events", owner, repo) u, err := addOptions(u, opt) if err != nil { @@ -78,19 +151,19 @@ func (s *ActivityService) ListRepositoryEvents(owner, repo string, opt *ListOpti return nil, nil, err } - events := new([]Event) - resp, err := s.client.Do(req, events) + var events []*Event + resp, err := s.client.Do(ctx, req, &events) if err != nil { return nil, resp, err } - return *events, resp, err + return events, resp, nil } // ListIssueEventsForRepository lists issue events for a repository. // -// GitHub API docs: http://developer.github.com/v3/activity/events/#list-issue-events-for-a-repository -func (s *ActivityService) ListIssueEventsForRepository(owner, repo string, opt *ListOptions) ([]Event, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/activity/events/#list-issue-events-for-a-repository +func (s *ActivityService) ListIssueEventsForRepository(ctx context.Context, owner, repo string, opt *ListOptions) ([]*IssueEvent, *Response, error) { u := fmt.Sprintf("repos/%v/%v/issues/events", owner, repo) u, err := addOptions(u, opt) if err != nil { @@ -102,19 +175,19 @@ func (s *ActivityService) ListIssueEventsForRepository(owner, repo string, opt * return nil, nil, err } - events := new([]Event) - resp, err := s.client.Do(req, events) + var events []*IssueEvent + resp, err := s.client.Do(ctx, req, &events) if err != nil { return nil, resp, err } - return *events, resp, err + return events, resp, nil } // ListEventsForRepoNetwork lists public events for a network of repositories. // -// GitHub API docs: http://developer.github.com/v3/activity/events/#list-public-events-for-a-network-of-repositories -func (s *ActivityService) ListEventsForRepoNetwork(owner, repo string, opt *ListOptions) ([]Event, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/activity/events/#list-public-events-for-a-network-of-repositories +func (s *ActivityService) ListEventsForRepoNetwork(ctx context.Context, owner, repo string, opt *ListOptions) ([]*Event, *Response, error) { u := fmt.Sprintf("networks/%v/%v/events", owner, repo) u, err := addOptions(u, opt) if err != nil { @@ -126,19 +199,19 @@ func (s *ActivityService) ListEventsForRepoNetwork(owner, repo string, opt *List return nil, nil, err } - events := new([]Event) - resp, err := s.client.Do(req, events) + var events []*Event + resp, err := s.client.Do(ctx, req, &events) if err != nil { return nil, resp, err } - return *events, resp, err + return events, resp, nil } // ListEventsForOrganization lists public events for an organization. 
// -// GitHub API docs: http://developer.github.com/v3/activity/events/#list-public-events-for-an-organization -func (s *ActivityService) ListEventsForOrganization(org string, opt *ListOptions) ([]Event, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/activity/events/#list-public-events-for-an-organization +func (s *ActivityService) ListEventsForOrganization(ctx context.Context, org string, opt *ListOptions) ([]*Event, *Response, error) { u := fmt.Sprintf("orgs/%v/events", org) u, err := addOptions(u, opt) if err != nil { @@ -150,20 +223,20 @@ func (s *ActivityService) ListEventsForOrganization(org string, opt *ListOptions return nil, nil, err } - events := new([]Event) - resp, err := s.client.Do(req, events) + var events []*Event + resp, err := s.client.Do(ctx, req, &events) if err != nil { return nil, resp, err } - return *events, resp, err + return events, resp, nil } // ListEventsPerformedByUser lists the events performed by a user. If publicOnly is // true, only public events will be returned. // -// GitHub API docs: http://developer.github.com/v3/activity/events/#list-events-performed-by-a-user -func (s *ActivityService) ListEventsPerformedByUser(user string, publicOnly bool, opt *ListOptions) ([]Event, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/activity/events/#list-events-performed-by-a-user +func (s *ActivityService) ListEventsPerformedByUser(ctx context.Context, user string, publicOnly bool, opt *ListOptions) ([]*Event, *Response, error) { var u string if publicOnly { u = fmt.Sprintf("users/%v/events/public", user) @@ -180,20 +253,20 @@ func (s *ActivityService) ListEventsPerformedByUser(user string, publicOnly bool return nil, nil, err } - events := new([]Event) - resp, err := s.client.Do(req, events) + var events []*Event + resp, err := s.client.Do(ctx, req, &events) if err != nil { return nil, resp, err } - return *events, resp, err + return events, resp, nil } // ListEventsReceivedByUser lists the events received by a user. If publicOnly is // true, only public events will be returned. // -// GitHub API docs: http://developer.github.com/v3/activity/events/#list-events-that-a-user-has-received -func (s *ActivityService) ListEventsReceivedByUser(user string, publicOnly bool, opt *ListOptions) ([]Event, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/activity/events/#list-events-that-a-user-has-received +func (s *ActivityService) ListEventsReceivedByUser(ctx context.Context, user string, publicOnly bool, opt *ListOptions) ([]*Event, *Response, error) { var u string if publicOnly { u = fmt.Sprintf("users/%v/received_events/public", user) @@ -210,20 +283,20 @@ func (s *ActivityService) ListEventsReceivedByUser(user string, publicOnly bool, return nil, nil, err } - events := new([]Event) - resp, err := s.client.Do(req, events) + var events []*Event + resp, err := s.client.Do(ctx, req, &events) if err != nil { return nil, resp, err } - return *events, resp, err + return events, resp, nil } // ListUserEventsForOrganization provides the user’s organization dashboard. You // must be authenticated as the user to view this. 
// -// GitHub API docs: http://developer.github.com/v3/activity/events/#list-events-for-an-organization -func (s *ActivityService) ListUserEventsForOrganization(org, user string, opt *ListOptions) ([]Event, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/activity/events/#list-events-for-an-organization +func (s *ActivityService) ListUserEventsForOrganization(ctx context.Context, org, user string, opt *ListOptions) ([]*Event, *Response, error) { u := fmt.Sprintf("users/%v/events/orgs/%v", user, org) u, err := addOptions(u, opt) if err != nil { @@ -235,11 +308,11 @@ func (s *ActivityService) ListUserEventsForOrganization(org, user string, opt *L return nil, nil, err } - events := new([]Event) - resp, err := s.client.Do(req, events) + var events []*Event + resp, err := s.client.Do(ctx, req, &events) if err != nil { return nil, resp, err } - return *events, resp, err + return events, resp, nil } diff --git a/vendor/github.com/google/go-github/github/activity_notifications.go b/vendor/github.com/google/go-github/github/activity_notifications.go index 290b954279..45c8b2aece 100644 --- a/vendor/github.com/google/go-github/github/activity_notifications.go +++ b/vendor/github.com/google/go-github/github/activity_notifications.go @@ -6,6 +6,7 @@ package github import ( + "context" "fmt" "time" ) @@ -18,7 +19,7 @@ type Notification struct { // Reason identifies the event that triggered the notification. // - // GitHub API Docs: https://developer.github.com/v3/activity/notifications/#notification-reasons + // GitHub API docs: https://developer.github.com/v3/activity/notifications/#notification-reasons Reason *string `json:"reason,omitempty"` Unread *bool `json:"unread,omitempty"` @@ -42,12 +43,14 @@ type NotificationListOptions struct { Participating bool `url:"participating,omitempty"` Since time.Time `url:"since,omitempty"` Before time.Time `url:"before,omitempty"` + + ListOptions } // ListNotifications lists all notifications for the authenticated user. // -// GitHub API Docs: https://developer.github.com/v3/activity/notifications/#list-your-notifications -func (s *ActivityService) ListNotifications(opt *NotificationListOptions) ([]Notification, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/activity/notifications/#list-your-notifications +func (s *ActivityService) ListNotifications(ctx context.Context, opt *NotificationListOptions) ([]*Notification, *Response, error) { u := fmt.Sprintf("notifications") u, err := addOptions(u, opt) if err != nil { @@ -59,20 +62,20 @@ func (s *ActivityService) ListNotifications(opt *NotificationListOptions) ([]Not return nil, nil, err } - var notifications []Notification - resp, err := s.client.Do(req, ¬ifications) + var notifications []*Notification + resp, err := s.client.Do(ctx, req, ¬ifications) if err != nil { return nil, resp, err } - return notifications, resp, err + return notifications, resp, nil } // ListRepositoryNotifications lists all notifications in a given repository // for the authenticated user. 
// -// GitHub API Docs: https://developer.github.com/v3/activity/notifications/#list-your-notifications-in-a-repository -func (s *ActivityService) ListRepositoryNotifications(owner, repo string, opt *NotificationListOptions) ([]Notification, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/activity/notifications/#list-your-notifications-in-a-repository +func (s *ActivityService) ListRepositoryNotifications(ctx context.Context, owner, repo string, opt *NotificationListOptions) ([]*Notification, *Response, error) { u := fmt.Sprintf("repos/%v/%v/notifications", owner, repo) u, err := addOptions(u, opt) if err != nil { @@ -84,60 +87,55 @@ func (s *ActivityService) ListRepositoryNotifications(owner, repo string, opt *N return nil, nil, err } - var notifications []Notification - resp, err := s.client.Do(req, ¬ifications) + var notifications []*Notification + resp, err := s.client.Do(ctx, req, ¬ifications) if err != nil { return nil, resp, err } - return notifications, resp, err + return notifications, resp, nil } type markReadOptions struct { - LastReadAt time.Time `url:"last_read_at,omitempty"` + LastReadAt time.Time `json:"last_read_at,omitempty"` } // MarkNotificationsRead marks all notifications up to lastRead as read. // -// GitHub API Docs: https://developer.github.com/v3/activity/notifications/#mark-as-read -func (s *ActivityService) MarkNotificationsRead(lastRead time.Time) (*Response, error) { - u := fmt.Sprintf("notifications") - u, err := addOptions(u, markReadOptions{lastRead}) +// GitHub API docs: https://developer.github.com/v3/activity/notifications/#mark-as-read +func (s *ActivityService) MarkNotificationsRead(ctx context.Context, lastRead time.Time) (*Response, error) { + opts := &markReadOptions{ + LastReadAt: lastRead, + } + req, err := s.client.NewRequest("PUT", "notifications", opts) if err != nil { return nil, err } - req, err := s.client.NewRequest("PUT", u, nil) - if err != nil { - return nil, err - } - - return s.client.Do(req, nil) + return s.client.Do(ctx, req, nil) } // MarkRepositoryNotificationsRead marks all notifications up to lastRead in // the specified repository as read. // -// GitHub API Docs: https://developer.github.com/v3/activity/notifications/#mark-notifications-as-read-in-a-repository -func (s *ActivityService) MarkRepositoryNotificationsRead(owner, repo string, lastRead time.Time) (*Response, error) { +// GitHub API docs: https://developer.github.com/v3/activity/notifications/#mark-notifications-as-read-in-a-repository +func (s *ActivityService) MarkRepositoryNotificationsRead(ctx context.Context, owner, repo string, lastRead time.Time) (*Response, error) { + opts := &markReadOptions{ + LastReadAt: lastRead, + } u := fmt.Sprintf("repos/%v/%v/notifications", owner, repo) - u, err := addOptions(u, markReadOptions{lastRead}) + req, err := s.client.NewRequest("PUT", u, opts) if err != nil { return nil, err } - req, err := s.client.NewRequest("PUT", u, nil) - if err != nil { - return nil, err - } - - return s.client.Do(req, nil) + return s.client.Do(ctx, req, nil) } // GetThread gets the specified notification thread. 
// -// GitHub API Docs: https://developer.github.com/v3/activity/notifications/#view-a-single-thread -func (s *ActivityService) GetThread(id string) (*Notification, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/activity/notifications/#view-a-single-thread +func (s *ActivityService) GetThread(ctx context.Context, id string) (*Notification, *Response, error) { u := fmt.Sprintf("notifications/threads/%v", id) req, err := s.client.NewRequest("GET", u, nil) @@ -146,18 +144,18 @@ func (s *ActivityService) GetThread(id string) (*Notification, *Response, error) } notification := new(Notification) - resp, err := s.client.Do(req, notification) + resp, err := s.client.Do(ctx, req, notification) if err != nil { return nil, resp, err } - return notification, resp, err + return notification, resp, nil } // MarkThreadRead marks the specified thread as read. // -// GitHub API Docs: https://developer.github.com/v3/activity/notifications/#mark-a-thread-as-read -func (s *ActivityService) MarkThreadRead(id string) (*Response, error) { +// GitHub API docs: https://developer.github.com/v3/activity/notifications/#mark-a-thread-as-read +func (s *ActivityService) MarkThreadRead(ctx context.Context, id string) (*Response, error) { u := fmt.Sprintf("notifications/threads/%v", id) req, err := s.client.NewRequest("PATCH", u, nil) @@ -165,14 +163,14 @@ func (s *ActivityService) MarkThreadRead(id string) (*Response, error) { return nil, err } - return s.client.Do(req, nil) + return s.client.Do(ctx, req, nil) } // GetThreadSubscription checks to see if the authenticated user is subscribed // to a thread. // -// GitHub API Docs: https://developer.github.com/v3/activity/notifications/#get-a-thread-subscription -func (s *ActivityService) GetThreadSubscription(id string) (*Subscription, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/activity/notifications/#get-a-thread-subscription +func (s *ActivityService) GetThreadSubscription(ctx context.Context, id string) (*Subscription, *Response, error) { u := fmt.Sprintf("notifications/threads/%v/subscription", id) req, err := s.client.NewRequest("GET", u, nil) @@ -181,19 +179,19 @@ func (s *ActivityService) GetThreadSubscription(id string) (*Subscription, *Resp } sub := new(Subscription) - resp, err := s.client.Do(req, sub) + resp, err := s.client.Do(ctx, req, sub) if err != nil { return nil, resp, err } - return sub, resp, err + return sub, resp, nil } // SetThreadSubscription sets the subscription for the specified thread for the // authenticated user. 
// -// GitHub API Docs: https://developer.github.com/v3/activity/notifications/#set-a-thread-subscription -func (s *ActivityService) SetThreadSubscription(id string, subscription *Subscription) (*Subscription, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/activity/notifications/#set-a-thread-subscription +func (s *ActivityService) SetThreadSubscription(ctx context.Context, id string, subscription *Subscription) (*Subscription, *Response, error) { u := fmt.Sprintf("notifications/threads/%v/subscription", id) req, err := s.client.NewRequest("PUT", u, subscription) @@ -202,24 +200,24 @@ func (s *ActivityService) SetThreadSubscription(id string, subscription *Subscri } sub := new(Subscription) - resp, err := s.client.Do(req, sub) + resp, err := s.client.Do(ctx, req, sub) if err != nil { return nil, resp, err } - return sub, resp, err + return sub, resp, nil } // DeleteThreadSubscription deletes the subscription for the specified thread // for the authenticated user. // -// GitHub API Docs: https://developer.github.com/v3/activity/notifications/#delete-a-thread-subscription -func (s *ActivityService) DeleteThreadSubscription(id string) (*Response, error) { +// GitHub API docs: https://developer.github.com/v3/activity/notifications/#delete-a-thread-subscription +func (s *ActivityService) DeleteThreadSubscription(ctx context.Context, id string) (*Response, error) { u := fmt.Sprintf("notifications/threads/%v/subscription", id) req, err := s.client.NewRequest("DELETE", u, nil) if err != nil { return nil, err } - return s.client.Do(req, nil) + return s.client.Do(ctx, req, nil) } diff --git a/vendor/github.com/google/go-github/github/activity_star.go b/vendor/github.com/google/go-github/github/activity_star.go index fac4f41d2c..d5b067127c 100644 --- a/vendor/github.com/google/go-github/github/activity_star.go +++ b/vendor/github.com/google/go-github/github/activity_star.go @@ -5,7 +5,10 @@ package github -import "fmt" +import ( + "context" + "fmt" +) // StarredRepository is returned by ListStarred. type StarredRepository struct { @@ -13,10 +16,16 @@ type StarredRepository struct { Repository *Repository `json:"repo,omitempty"` } +// Stargazer represents a user that has starred a repository. +type Stargazer struct { + StarredAt *Timestamp `json:"starred_at,omitempty"` + User *User `json:"user,omitempty"` +} + // ListStargazers lists people who have starred the specified repo. 
// -// GitHub API Docs: https://developer.github.com/v3/activity/starring/#list-stargazers -func (s *ActivityService) ListStargazers(owner, repo string, opt *ListOptions) ([]User, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/activity/starring/#list-stargazers +func (s *ActivityService) ListStargazers(ctx context.Context, owner, repo string, opt *ListOptions) ([]*Stargazer, *Response, error) { u := fmt.Sprintf("repos/%s/%s/stargazers", owner, repo) u, err := addOptions(u, opt) if err != nil { @@ -28,34 +37,37 @@ func (s *ActivityService) ListStargazers(owner, repo string, opt *ListOptions) ( return nil, nil, err } - stargazers := new([]User) - resp, err := s.client.Do(req, stargazers) + // TODO: remove custom Accept header when this API fully launches + req.Header.Set("Accept", mediaTypeStarringPreview) + + var stargazers []*Stargazer + resp, err := s.client.Do(ctx, req, &stargazers) if err != nil { return nil, resp, err } - return *stargazers, resp, err + return stargazers, resp, nil } // ActivityListStarredOptions specifies the optional parameters to the // ActivityService.ListStarred method. type ActivityListStarredOptions struct { - // How to sort the repository list. Possible values are: created, updated, - // pushed, full_name. Default is "full_name". + // How to sort the repository list. Possible values are: created, updated, + // pushed, full_name. Default is "full_name". Sort string `url:"sort,omitempty"` - // Direction in which to sort repositories. Possible values are: asc, desc. + // Direction in which to sort repositories. Possible values are: asc, desc. // Default is "asc" when sort is "full_name", otherwise default is "desc". Direction string `url:"direction,omitempty"` ListOptions } -// ListStarred lists all the repos starred by a user. Passing the empty string +// ListStarred lists all the repos starred by a user. Passing the empty string // will list the starred repositories for the authenticated user. // -// GitHub API docs: http://developer.github.com/v3/activity/starring/#list-repositories-being-starred -func (s *ActivityService) ListStarred(user string, opt *ActivityListStarredOptions) ([]StarredRepository, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/activity/starring/#list-repositories-being-starred +func (s *ActivityService) ListStarred(ctx context.Context, user string, opt *ActivityListStarredOptions) ([]*StarredRepository, *Response, error) { var u string if user != "" { u = fmt.Sprintf("users/%v/starred", user) @@ -75,25 +87,25 @@ func (s *ActivityService) ListStarred(user string, opt *ActivityListStarredOptio // TODO: remove custom Accept header when this API fully launches req.Header.Set("Accept", mediaTypeStarringPreview) - repos := new([]StarredRepository) - resp, err := s.client.Do(req, repos) + var repos []*StarredRepository + resp, err := s.client.Do(ctx, req, &repos) if err != nil { return nil, resp, err } - return *repos, resp, err + return repos, resp, nil } // IsStarred checks if a repository is starred by authenticated user. 
// // GitHub API docs: https://developer.github.com/v3/activity/starring/#check-if-you-are-starring-a-repository -func (s *ActivityService) IsStarred(owner, repo string) (bool, *Response, error) { +func (s *ActivityService) IsStarred(ctx context.Context, owner, repo string) (bool, *Response, error) { u := fmt.Sprintf("user/starred/%v/%v", owner, repo) req, err := s.client.NewRequest("GET", u, nil) if err != nil { return false, nil, err } - resp, err := s.client.Do(req, nil) + resp, err := s.client.Do(ctx, req, nil) starred, err := parseBoolResponse(err) return starred, resp, err } @@ -101,23 +113,23 @@ func (s *ActivityService) IsStarred(owner, repo string) (bool, *Response, error) // Star a repository as the authenticated user. // // GitHub API docs: https://developer.github.com/v3/activity/starring/#star-a-repository -func (s *ActivityService) Star(owner, repo string) (*Response, error) { +func (s *ActivityService) Star(ctx context.Context, owner, repo string) (*Response, error) { u := fmt.Sprintf("user/starred/%v/%v", owner, repo) req, err := s.client.NewRequest("PUT", u, nil) if err != nil { return nil, err } - return s.client.Do(req, nil) + return s.client.Do(ctx, req, nil) } // Unstar a repository as the authenticated user. // // GitHub API docs: https://developer.github.com/v3/activity/starring/#unstar-a-repository -func (s *ActivityService) Unstar(owner, repo string) (*Response, error) { +func (s *ActivityService) Unstar(ctx context.Context, owner, repo string) (*Response, error) { u := fmt.Sprintf("user/starred/%v/%v", owner, repo) req, err := s.client.NewRequest("DELETE", u, nil) if err != nil { return nil, err } - return s.client.Do(req, nil) + return s.client.Do(ctx, req, nil) } diff --git a/vendor/github.com/google/go-github/github/activity_watching.go b/vendor/github.com/google/go-github/github/activity_watching.go index c002b3b16f..c749ca86e7 100644 --- a/vendor/github.com/google/go-github/github/activity_watching.go +++ b/vendor/github.com/google/go-github/github/activity_watching.go @@ -5,7 +5,10 @@ package github -import "fmt" +import ( + "context" + "fmt" +) // Subscription identifies a repository or thread subscription. type Subscription struct { @@ -24,8 +27,8 @@ type Subscription struct { // ListWatchers lists watchers of a particular repo. // -// GitHub API Docs: http://developer.github.com/v3/activity/watching/#list-watchers -func (s *ActivityService) ListWatchers(owner, repo string, opt *ListOptions) ([]User, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/activity/watching/#list-watchers +func (s *ActivityService) ListWatchers(ctx context.Context, owner, repo string, opt *ListOptions) ([]*User, *Response, error) { u := fmt.Sprintf("repos/%s/%s/subscribers", owner, repo) u, err := addOptions(u, opt) if err != nil { @@ -37,20 +40,20 @@ func (s *ActivityService) ListWatchers(owner, repo string, opt *ListOptions) ([] return nil, nil, err } - watchers := new([]User) - resp, err := s.client.Do(req, watchers) + var watchers []*User + resp, err := s.client.Do(ctx, req, &watchers) if err != nil { return nil, resp, err } - return *watchers, resp, err + return watchers, resp, nil } -// ListWatched lists the repositories the specified user is watching. Passing +// ListWatched lists the repositories the specified user is watching. Passing // the empty string will fetch watched repos for the authenticated user. 
// -// GitHub API Docs: https://developer.github.com/v3/activity/watching/#list-repositories-being-watched -func (s *ActivityService) ListWatched(user string, opt *ListOptions) ([]Repository, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/activity/watching/#list-repositories-being-watched +func (s *ActivityService) ListWatched(ctx context.Context, user string, opt *ListOptions) ([]*Repository, *Response, error) { var u string if user != "" { u = fmt.Sprintf("users/%v/subscriptions", user) @@ -67,21 +70,21 @@ func (s *ActivityService) ListWatched(user string, opt *ListOptions) ([]Reposito return nil, nil, err } - watched := new([]Repository) - resp, err := s.client.Do(req, watched) + var watched []*Repository + resp, err := s.client.Do(ctx, req, &watched) if err != nil { return nil, resp, err } - return *watched, resp, err + return watched, resp, nil } // GetRepositorySubscription returns the subscription for the specified -// repository for the authenticated user. If the authenticated user is not +// repository for the authenticated user. If the authenticated user is not // watching the repository, a nil Subscription is returned. // -// GitHub API Docs: https://developer.github.com/v3/activity/watching/#get-a-repository-subscription -func (s *ActivityService) GetRepositorySubscription(owner, repo string) (*Subscription, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/activity/watching/#get-a-repository-subscription +func (s *ActivityService) GetRepositorySubscription(ctx context.Context, owner, repo string) (*Subscription, *Response, error) { u := fmt.Sprintf("repos/%s/%s/subscription", owner, repo) req, err := s.client.NewRequest("GET", u, nil) @@ -90,21 +93,25 @@ func (s *ActivityService) GetRepositorySubscription(owner, repo string) (*Subscr } sub := new(Subscription) - resp, err := s.client.Do(req, sub) + resp, err := s.client.Do(ctx, req, sub) if err != nil { // if it's just a 404, don't return that as an error _, err = parseBoolResponse(err) return nil, resp, err } - return sub, resp, err + return sub, resp, nil } // SetRepositorySubscription sets the subscription for the specified repository // for the authenticated user. // -// GitHub API Docs: https://developer.github.com/v3/activity/watching/#set-a-repository-subscription -func (s *ActivityService) SetRepositorySubscription(owner, repo string, subscription *Subscription) (*Subscription, *Response, error) { +// To watch a repository, set subscription.Subscribed to true. +// To ignore notifications made within a repository, set subscription.Ignored to true. +// To stop watching a repository, use DeleteRepositorySubscription. +// +// GitHub API docs: https://developer.github.com/v3/activity/watching/#set-a-repository-subscription +func (s *ActivityService) SetRepositorySubscription(ctx context.Context, owner, repo string, subscription *Subscription) (*Subscription, *Response, error) { u := fmt.Sprintf("repos/%s/%s/subscription", owner, repo) req, err := s.client.NewRequest("PUT", u, subscription) @@ -113,24 +120,27 @@ func (s *ActivityService) SetRepositorySubscription(owner, repo string, subscrip } sub := new(Subscription) - resp, err := s.client.Do(req, sub) + resp, err := s.client.Do(ctx, req, sub) if err != nil { return nil, resp, err } - return sub, resp, err + return sub, resp, nil } // DeleteRepositorySubscription deletes the subscription for the specified // repository for the authenticated user. 
// -// GitHub API Docs: https://developer.github.com/v3/activity/watching/#delete-a-repository-subscription -func (s *ActivityService) DeleteRepositorySubscription(owner, repo string) (*Response, error) { +// This is used to stop watching a repository. To control whether or not to +// receive notifications from a repository, use SetRepositorySubscription. +// +// GitHub API docs: https://developer.github.com/v3/activity/watching/#delete-a-repository-subscription +func (s *ActivityService) DeleteRepositorySubscription(ctx context.Context, owner, repo string) (*Response, error) { u := fmt.Sprintf("repos/%s/%s/subscription", owner, repo) req, err := s.client.NewRequest("DELETE", u, nil) if err != nil { return nil, err } - return s.client.Do(req, nil) + return s.client.Do(ctx, req, nil) } diff --git a/vendor/github.com/google/go-github/github/admin.go b/vendor/github.com/google/go-github/github/admin.go new file mode 100644 index 0000000000..d0f055bcfa --- /dev/null +++ b/vendor/github.com/google/go-github/github/admin.go @@ -0,0 +1,101 @@ +// Copyright 2016 The go-github AUTHORS. All rights reserved. +// +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package github + +import ( + "context" + "fmt" +) + +// AdminService handles communication with the admin related methods of the +// GitHub API. These API routes are normally only accessible for GitHub +// Enterprise installations. +// +// GitHub API docs: https://developer.github.com/v3/enterprise/ +type AdminService service + +// TeamLDAPMapping represents the mapping between a GitHub team and an LDAP group. +type TeamLDAPMapping struct { + ID *int `json:"id,omitempty"` + LDAPDN *string `json:"ldap_dn,omitempty"` + URL *string `json:"url,omitempty"` + Name *string `json:"name,omitempty"` + Slug *string `json:"slug,omitempty"` + Description *string `json:"description,omitempty"` + Privacy *string `json:"privacy,omitempty"` + Permission *string `json:"permission,omitempty"` + + MembersURL *string `json:"members_url,omitempty"` + RepositoriesURL *string `json:"repositories_url,omitempty"` +} + +func (m TeamLDAPMapping) String() string { + return Stringify(m) +} + +// UserLDAPMapping represents the mapping between a GitHub user and an LDAP user. +type UserLDAPMapping struct { + ID *int `json:"id,omitempty"` + LDAPDN *string `json:"ldap_dn,omitempty"` + Login *string `json:"login,omitempty"` + AvatarURL *string `json:"avatar_url,omitempty"` + GravatarID *string `json:"gravatar_id,omitempty"` + Type *string `json:"type,omitempty"` + SiteAdmin *bool `json:"site_admin,omitempty"` + + URL *string `json:"url,omitempty"` + EventsURL *string `json:"events_url,omitempty"` + FollowingURL *string `json:"following_url,omitempty"` + FollowersURL *string `json:"followers_url,omitempty"` + GistsURL *string `json:"gists_url,omitempty"` + OrganizationsURL *string `json:"organizations_url,omitempty"` + ReceivedEventsURL *string `json:"received_events_url,omitempty"` + ReposURL *string `json:"repos_url,omitempty"` + StarredURL *string `json:"starred_url,omitempty"` + SubscriptionsURL *string `json:"subscriptions_url,omitempty"` +} + +func (m UserLDAPMapping) String() string { + return Stringify(m) +} + +// UpdateUserLDAPMapping updates the mapping between a GitHub user and an LDAP user. 
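+//
+// A minimal sketch (illustrative: it assumes a GitHub Enterprise client
+// authenticated as a site admin and exposed as client.Admin; the user
+// name and DN are made-up values):
+//
+//	mapping := &github.UserLDAPMapping{
+//		LDAPDN: github.String("uid=asdf,ou=users,dc=github,dc=com"),
+//	}
+//	updated, _, err := client.Admin.UpdateUserLDAPMapping(ctx, "asdf", mapping)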
+// +// GitHub API docs: https://developer.github.com/v3/enterprise/ldap/#update-ldap-mapping-for-a-user +func (s *AdminService) UpdateUserLDAPMapping(ctx context.Context, user string, mapping *UserLDAPMapping) (*UserLDAPMapping, *Response, error) { + u := fmt.Sprintf("admin/ldap/users/%v/mapping", user) + req, err := s.client.NewRequest("PATCH", u, mapping) + if err != nil { + return nil, nil, err + } + + m := new(UserLDAPMapping) + resp, err := s.client.Do(ctx, req, m) + if err != nil { + return nil, resp, err + } + + return m, resp, nil +} + +// UpdateTeamLDAPMapping updates the mapping between a GitHub team and an LDAP group. +// +// GitHub API docs: https://developer.github.com/v3/enterprise/ldap/#update-ldap-mapping-for-a-team +func (s *AdminService) UpdateTeamLDAPMapping(ctx context.Context, team int, mapping *TeamLDAPMapping) (*TeamLDAPMapping, *Response, error) { + u := fmt.Sprintf("admin/ldap/teams/%v/mapping", team) + req, err := s.client.NewRequest("PATCH", u, mapping) + if err != nil { + return nil, nil, err + } + + m := new(TeamLDAPMapping) + resp, err := s.client.Do(ctx, req, m) + if err != nil { + return nil, resp, err + } + + return m, resp, nil +} diff --git a/vendor/github.com/google/go-github/github/authorizations.go b/vendor/github.com/google/go-github/github/authorizations.go new file mode 100644 index 0000000000..181e83dfe5 --- /dev/null +++ b/vendor/github.com/google/go-github/github/authorizations.go @@ -0,0 +1,430 @@ +// Copyright 2015 The go-github AUTHORS. All rights reserved. +// +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package github + +import ( + "context" + "fmt" +) + +// Scope models a GitHub authorization scope. +// +// GitHub API docs: https://developer.github.com/v3/oauth/#scopes +type Scope string + +// This is the set of scopes for GitHub API V3 +const ( + ScopeNone Scope = "(no scope)" // REVISIT: is this actually returned, or just a documentation artifact? + ScopeUser Scope = "user" + ScopeUserEmail Scope = "user:email" + ScopeUserFollow Scope = "user:follow" + ScopePublicRepo Scope = "public_repo" + ScopeRepo Scope = "repo" + ScopeRepoDeployment Scope = "repo_deployment" + ScopeRepoStatus Scope = "repo:status" + ScopeDeleteRepo Scope = "delete_repo" + ScopeNotifications Scope = "notifications" + ScopeGist Scope = "gist" + ScopeReadRepoHook Scope = "read:repo_hook" + ScopeWriteRepoHook Scope = "write:repo_hook" + ScopeAdminRepoHook Scope = "admin:repo_hook" + ScopeAdminOrgHook Scope = "admin:org_hook" + ScopeReadOrg Scope = "read:org" + ScopeWriteOrg Scope = "write:org" + ScopeAdminOrg Scope = "admin:org" + ScopeReadPublicKey Scope = "read:public_key" + ScopeWritePublicKey Scope = "write:public_key" + ScopeAdminPublicKey Scope = "admin:public_key" + ScopeReadGPGKey Scope = "read:gpg_key" + ScopeWriteGPGKey Scope = "write:gpg_key" + ScopeAdminGPGKey Scope = "admin:gpg_key" +) + +// AuthorizationsService handles communication with the authorization related +// methods of the GitHub API. +// +// This service requires HTTP Basic Authentication; it cannot be accessed using +// an OAuth token. +// +// GitHub API docs: https://developer.github.com/v3/oauth_authorizations/ +type AuthorizationsService service + +// Authorization represents an individual GitHub authorization. 
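+//
+// A minimal sketch of obtaining one via AuthorizationsService.Create
+// (illustrative: this service requires HTTP Basic Authentication, and
+// "client" and "ctx" are assumed to already exist):
+//
+//	req := &github.AuthorizationRequest{
+//		Scopes: []github.Scope{github.ScopePublicRepo},
+//		Note:   github.String("my token"),
+//	}
+//	auth, _, err := client.Authorizations.Create(ctx, req)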
+type Authorization struct { + ID *int `json:"id,omitempty"` + URL *string `json:"url,omitempty"` + Scopes []Scope `json:"scopes,omitempty"` + Token *string `json:"token,omitempty"` + TokenLastEight *string `json:"token_last_eight,omitempty"` + HashedToken *string `json:"hashed_token,omitempty"` + App *AuthorizationApp `json:"app,omitempty"` + Note *string `json:"note,omitempty"` + NoteURL *string `json:"note_url,omitempty"` + UpdateAt *Timestamp `json:"updated_at,omitempty"` + CreatedAt *Timestamp `json:"created_at,omitempty"` + Fingerprint *string `json:"fingerprint,omitempty"` + + // User is only populated by the Check and Reset methods. + User *User `json:"user,omitempty"` +} + +func (a Authorization) String() string { + return Stringify(a) +} + +// AuthorizationApp represents an individual GitHub app (in the context of authorization). +type AuthorizationApp struct { + URL *string `json:"url,omitempty"` + Name *string `json:"name,omitempty"` + ClientID *string `json:"client_id,omitempty"` +} + +func (a AuthorizationApp) String() string { + return Stringify(a) +} + +// Grant represents an OAuth application that has been granted access to an account. +type Grant struct { + ID *int `json:"id,omitempty"` + URL *string `json:"url,omitempty"` + App *AuthorizationApp `json:"app,omitempty"` + CreatedAt *Timestamp `json:"created_at,omitempty"` + UpdatedAt *Timestamp `json:"updated_at,omitempty"` + Scopes []string `json:"scopes,omitempty"` +} + +func (g Grant) String() string { + return Stringify(g) +} + +// AuthorizationRequest represents a request to create an authorization. +type AuthorizationRequest struct { + Scopes []Scope `json:"scopes,omitempty"` + Note *string `json:"note,omitempty"` + NoteURL *string `json:"note_url,omitempty"` + ClientID *string `json:"client_id,omitempty"` + ClientSecret *string `json:"client_secret,omitempty"` + Fingerprint *string `json:"fingerprint,omitempty"` +} + +func (a AuthorizationRequest) String() string { + return Stringify(a) +} + +// AuthorizationUpdateRequest represents a request to update an authorization. +// +// Note that for any one update, you must only provide one of the "scopes" +// fields. That is, you may provide only one of "Scopes", or "AddScopes", or +// "RemoveScopes". +// +// GitHub API docs: https://developer.github.com/v3/oauth_authorizations/#update-an-existing-authorization +type AuthorizationUpdateRequest struct { + Scopes []string `json:"scopes,omitempty"` + AddScopes []string `json:"add_scopes,omitempty"` + RemoveScopes []string `json:"remove_scopes,omitempty"` + Note *string `json:"note,omitempty"` + NoteURL *string `json:"note_url,omitempty"` + Fingerprint *string `json:"fingerprint,omitempty"` +} + +func (a AuthorizationUpdateRequest) String() string { + return Stringify(a) +} + +// List the authorizations for the authenticated user. +// +// GitHub API docs: https://developer.github.com/v3/oauth_authorizations/#list-your-authorizations +func (s *AuthorizationsService) List(ctx context.Context, opt *ListOptions) ([]*Authorization, *Response, error) { + u := "authorizations" + u, err := addOptions(u, opt) + if err != nil { + return nil, nil, err + } + + req, err := s.client.NewRequest("GET", u, nil) + if err != nil { + return nil, nil, err + } + + var auths []*Authorization + resp, err := s.client.Do(ctx, req, &auths) + if err != nil { + return nil, resp, err + } + return auths, resp, nil +} + +// Get a single authorization. 
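+//
+// A minimal sketch (illustrative: the id would normally come from a
+// previous List or Create call):
+//
+//	auth, _, err := client.Authorizations.Get(ctx, 42)
+//	if err == nil {
+//		fmt.Println(auth.GetNote())
+//	}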
+// +// GitHub API docs: https://developer.github.com/v3/oauth_authorizations/#get-a-single-authorization +func (s *AuthorizationsService) Get(ctx context.Context, id int) (*Authorization, *Response, error) { + u := fmt.Sprintf("authorizations/%d", id) + + req, err := s.client.NewRequest("GET", u, nil) + if err != nil { + return nil, nil, err + } + + a := new(Authorization) + resp, err := s.client.Do(ctx, req, a) + if err != nil { + return nil, resp, err + } + return a, resp, nil +} + +// Create a new authorization for the specified OAuth application. +// +// GitHub API docs: https://developer.github.com/v3/oauth_authorizations/#create-a-new-authorization +func (s *AuthorizationsService) Create(ctx context.Context, auth *AuthorizationRequest) (*Authorization, *Response, error) { + u := "authorizations" + + req, err := s.client.NewRequest("POST", u, auth) + if err != nil { + return nil, nil, err + } + + a := new(Authorization) + resp, err := s.client.Do(ctx, req, a) + if err != nil { + return nil, resp, err + } + return a, resp, nil +} + +// GetOrCreateForApp creates a new authorization for the specified OAuth +// application, only if an authorization for that application doesn’t already +// exist for the user. +// +// If a new token is created, the HTTP status code will be "201 Created", and +// the returned Authorization.Token field will be populated. If an existing +// token is returned, the status code will be "200 OK" and the +// Authorization.Token field will be empty. +// +// clientID is the OAuth Client ID with which to create the token. +// +// GitHub API docs: +// https://developer.github.com/v3/oauth_authorizations/#get-or-create-an-authorization-for-a-specific-app +// https://developer.github.com/v3/oauth_authorizations/#get-or-create-an-authorization-for-a-specific-app-and-fingerprint +func (s *AuthorizationsService) GetOrCreateForApp(ctx context.Context, clientID string, auth *AuthorizationRequest) (*Authorization, *Response, error) { + var u string + if auth.Fingerprint == nil || *auth.Fingerprint == "" { + u = fmt.Sprintf("authorizations/clients/%v", clientID) + } else { + u = fmt.Sprintf("authorizations/clients/%v/%v", clientID, *auth.Fingerprint) + } + + req, err := s.client.NewRequest("PUT", u, auth) + if err != nil { + return nil, nil, err + } + + a := new(Authorization) + resp, err := s.client.Do(ctx, req, a) + if err != nil { + return nil, resp, err + } + + return a, resp, nil +} + +// Edit a single authorization. +// +// GitHub API docs: https://developer.github.com/v3/oauth_authorizations/#update-an-existing-authorization +func (s *AuthorizationsService) Edit(ctx context.Context, id int, auth *AuthorizationUpdateRequest) (*Authorization, *Response, error) { + u := fmt.Sprintf("authorizations/%d", id) + + req, err := s.client.NewRequest("PATCH", u, auth) + if err != nil { + return nil, nil, err + } + + a := new(Authorization) + resp, err := s.client.Do(ctx, req, a) + if err != nil { + return nil, resp, err + } + + return a, resp, nil +} + +// Delete a single authorization. +// +// GitHub API docs: https://developer.github.com/v3/oauth_authorizations/#delete-an-authorization +func (s *AuthorizationsService) Delete(ctx context.Context, id int) (*Response, error) { + u := fmt.Sprintf("authorizations/%d", id) + + req, err := s.client.NewRequest("DELETE", u, nil) + if err != nil { + return nil, err + } + + return s.client.Do(ctx, req, nil) +} + +// Check if an OAuth token is valid for a specific app. 
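+//
+// A minimal sketch (illustrative: "clientID", "clientSecret", and "token"
+// stand in for the OAuth application credentials and the token being
+// checked; the basic-auth requirement is described in the note below):
+//
+//	tp := github.BasicAuthTransport{Username: clientID, Password: clientSecret}
+//	client := github.NewClient(tp.Client())
+//	auth, _, err := client.Authorizations.Check(ctx, clientID, token)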
+// +// Note that this operation requires the use of BasicAuth, but where the +// username is the OAuth application clientID, and the password is its +// clientSecret. Invalid tokens will return a 404 Not Found. +// +// The returned Authorization.User field will be populated. +// +// GitHub API docs: https://developer.github.com/v3/oauth_authorizations/#check-an-authorization +func (s *AuthorizationsService) Check(ctx context.Context, clientID string, token string) (*Authorization, *Response, error) { + u := fmt.Sprintf("applications/%v/tokens/%v", clientID, token) + + req, err := s.client.NewRequest("GET", u, nil) + if err != nil { + return nil, nil, err + } + + a := new(Authorization) + resp, err := s.client.Do(ctx, req, a) + if err != nil { + return nil, resp, err + } + + return a, resp, nil +} + +// Reset is used to reset a valid OAuth token without end user involvement. +// Applications must save the "token" property in the response, because changes +// take effect immediately. +// +// Note that this operation requires the use of BasicAuth, but where the +// username is the OAuth application clientID, and the password is its +// clientSecret. Invalid tokens will return a 404 Not Found. +// +// The returned Authorization.User field will be populated. +// +// GitHub API docs: https://developer.github.com/v3/oauth_authorizations/#reset-an-authorization +func (s *AuthorizationsService) Reset(ctx context.Context, clientID string, token string) (*Authorization, *Response, error) { + u := fmt.Sprintf("applications/%v/tokens/%v", clientID, token) + + req, err := s.client.NewRequest("POST", u, nil) + if err != nil { + return nil, nil, err + } + + a := new(Authorization) + resp, err := s.client.Do(ctx, req, a) + if err != nil { + return nil, resp, err + } + + return a, resp, nil +} + +// Revoke an authorization for an application. +// +// Note that this operation requires the use of BasicAuth, but where the +// username is the OAuth application clientID, and the password is its +// clientSecret. Invalid tokens will return a 404 Not Found. +// +// GitHub API docs: https://developer.github.com/v3/oauth_authorizations/#revoke-an-authorization-for-an-application +func (s *AuthorizationsService) Revoke(ctx context.Context, clientID string, token string) (*Response, error) { + u := fmt.Sprintf("applications/%v/tokens/%v", clientID, token) + + req, err := s.client.NewRequest("DELETE", u, nil) + if err != nil { + return nil, err + } + + return s.client.Do(ctx, req, nil) +} + +// ListGrants lists the set of OAuth applications that have been granted +// access to a user's account. This will return one entry for each application +// that has been granted access to the account, regardless of the number of +// tokens an application has generated for the user. +// +// GitHub API docs: https://developer.github.com/v3/oauth_authorizations/#list-your-grants +func (s *AuthorizationsService) ListGrants(ctx context.Context) ([]*Grant, *Response, error) { + req, err := s.client.NewRequest("GET", "applications/grants", nil) + if err != nil { + return nil, nil, err + } + + grants := []*Grant{} + resp, err := s.client.Do(ctx, req, &grants) + if err != nil { + return nil, resp, err + } + + return grants, resp, nil +} + +// GetGrant gets a single OAuth application grant. 
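+//
+// A minimal sketch (illustrative: like the other methods on this service
+// it requires HTTP Basic Authentication, and the id is a placeholder):
+//
+//	grant, _, err := client.Authorizations.GetGrant(ctx, 7)
+//	if err == nil {
+//		fmt.Println(grant.GetURL())
+//	}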
+// +// GitHub API docs: https://developer.github.com/v3/oauth_authorizations/#get-a-single-grant +func (s *AuthorizationsService) GetGrant(ctx context.Context, id int) (*Grant, *Response, error) { + u := fmt.Sprintf("applications/grants/%d", id) + req, err := s.client.NewRequest("GET", u, nil) + if err != nil { + return nil, nil, err + } + + grant := new(Grant) + resp, err := s.client.Do(ctx, req, grant) + if err != nil { + return nil, resp, err + } + + return grant, resp, nil +} + +// DeleteGrant deletes an OAuth application grant. Deleting an application's +// grant will also delete all OAuth tokens associated with the application for +// the user. +// +// GitHub API docs: https://developer.github.com/v3/oauth_authorizations/#delete-a-grant +func (s *AuthorizationsService) DeleteGrant(ctx context.Context, id int) (*Response, error) { + u := fmt.Sprintf("applications/grants/%d", id) + req, err := s.client.NewRequest("DELETE", u, nil) + if err != nil { + return nil, err + } + + return s.client.Do(ctx, req, nil) +} + +// CreateImpersonation creates an impersonation OAuth token. +// +// This requires admin permissions. With the returned Authorization.Token +// you can e.g. create or delete a user's public SSH key. NOTE: creating a +// new token automatically revokes an existing one. +// +// GitHub API docs: https://developer.github.com/enterprise/2.5/v3/users/administration/#create-an-impersonation-oauth-token +func (s *AuthorizationsService) CreateImpersonation(ctx context.Context, username string, authReq *AuthorizationRequest) (*Authorization, *Response, error) { + u := fmt.Sprintf("admin/users/%v/authorizations", username) + req, err := s.client.NewRequest("POST", u, authReq) + if err != nil { + return nil, nil, err + } + + a := new(Authorization) + resp, err := s.client.Do(ctx, req, a) + if err != nil { + return nil, resp, err + } + return a, resp, nil +} + +// DeleteImpersonation deletes an impersonation OAuth token. +// +// NOTE: there can be only one at a time. +// +// GitHub API docs: https://developer.github.com/enterprise/2.5/v3/users/administration/#delete-an-impersonation-oauth-token +func (s *AuthorizationsService) DeleteImpersonation(ctx context.Context, username string) (*Response, error) { + u := fmt.Sprintf("admin/users/%v/authorizations", username) + req, err := s.client.NewRequest("DELETE", u, nil) + if err != nil { + return nil, err + } + + return s.client.Do(ctx, req, nil) +} diff --git a/vendor/github.com/google/go-github/github/doc.go b/vendor/github.com/google/go-github/github/doc.go index 0d32d498f9..875f039648 100644 --- a/vendor/github.com/google/go-github/github/doc.go +++ b/vendor/github.com/google/go-github/github/doc.go @@ -6,23 +6,29 @@ /* Package github provides a client for using the GitHub API. +Usage: + + import "github.com/google/go-github/github" + Construct a new GitHub client, then use the various services on the client to access different parts of the GitHub API. For example: client := github.NewClient(nil) // list all organizations for user "willnorris" - orgs, _, err := client.Organizations.List("willnorris", nil) + orgs, _, err := client.Organizations.List(ctx, "willnorris", nil) -Set optional parameters for an API method by passing an Options object. +Some API methods have optional parameters that can be passed. 
For example: - // list recently updated repositories for org "github" - opt := &github.RepositoryListByOrgOptions{Sort: "updated"} - repos, _, err := client.Repositories.ListByOrg("github", opt) + client := github.NewClient(nil) + + // list public repositories for org "github" + opt := &github.RepositoryListByOrgOptions{Type: "public"} + repos, _, err := client.Repositories.ListByOrg(ctx, "github", opt) The services of a client divide the API into logical chunks and correspond to the structure of the GitHub API documentation at -http://developer.github.com/v3/. +https://developer.github.com/v3/. Authentication @@ -36,54 +42,73 @@ use it with the oauth2 library using: import "golang.org/x/oauth2" func main() { + ctx := context.Background() ts := oauth2.StaticTokenSource( &oauth2.Token{AccessToken: "... your access token ..."}, ) - tc := oauth2.NewClient(oauth2.NoContext, ts) + tc := oauth2.NewClient(ctx, ts) client := github.NewClient(tc) // list all repositories for the authenticated user - repos, _, err := client.Repositories.List("", nil) + repos, _, err := client.Repositories.List(ctx, "", nil) } Note that when using an authenticated Client, all calls made by the client will include the specified OAuth token. Therefore, authenticated clients should almost never be shared between different users. +See the oauth2 docs for complete instructions on using that library. + +For API methods that require HTTP Basic Authentication, use the +BasicAuthTransport. + Rate Limiting -GitHub imposes a rate limit on all API clients. Unauthenticated clients are +GitHub imposes a rate limit on all API clients. Unauthenticated clients are limited to 60 requests per hour, while authenticated clients can make up to -5,000 requests per hour. To receive the higher rate limit when making calls +5,000 requests per hour. To receive the higher rate limit when making calls that are not issued on behalf of a user, use the UnauthenticatedRateLimitedTransport. The Rate method on a client returns the rate limit information based on the most -recent API call. This is updated on every call, but may be out of date if it's +recent API call. This is updated on every call, but may be out of date if it's been some time since the last API call and other clients have made subsequent -requests since then. You can always call RateLimits() directly to get the most +requests since then. You can always call RateLimits() directly to get the most up-to-date rate limit data for the client. To detect an API rate limit error, you can check if its type is *github.RateLimitError: - repos, _, err := client.Repositories.List("", nil) + repos, _, err := client.Repositories.List(ctx, "", nil) if _, ok := err.(*github.RateLimitError); ok { log.Println("hit rate limit") } Learn more about GitHub rate limiting at -http://developer.github.com/v3/#rate-limiting. +https://developer.github.com/v3/#rate-limiting. + +Accepted Status + +Some endpoints may return a 202 Accepted status code, meaning that the +information required is not yet ready and was scheduled to be gathered on +the GitHub side. Methods known to behave like this are documented specifying +this behavior. 
+
+To detect this error condition, you can check if its type is
+*github.AcceptedError:
+
+	stats, _, err := client.Repositories.ListContributorsStats(ctx, org, repo)
+	if _, ok := err.(*github.AcceptedError); ok {
+		log.Println("scheduled on GitHub side")
+	}
 
 Conditional Requests
 
 The GitHub API has good support for conditional requests which will help
 prevent you from burning through your rate limit, as well as help speed up your
-application.  go-github does not handle conditional requests directly, but is
-instead designed to work with a caching http.Transport.  We recommend using
-https://github.com/gregjones/httpcache, which can be used in conjunction with
-https://github.com/sourcegraph/apiproxy to provide additional flexibility and
-control of caching rules.
+application. go-github does not handle conditional requests directly, but is
+instead designed to work with a caching http.Transport. We recommend using
+https://github.com/gregjones/httpcache for that.
 
 Learn more about GitHub conditional requests at
 https://developer.github.com/v3/#conditional-requests.
 
@@ -93,32 +118,35 @@ Creating and Updating Resources
 
 All structs for GitHub resources use pointer values for all non-repeated fields.
 This allows distinguishing between unset fields and those set to a zero-value.
 Helper functions have been provided to easily create these pointers for string,
-bool, and int values.  For example:
+bool, and int values. For example:
 
 	// create a new private repository named "foo"
 	repo := &github.Repository{
 		Name:    github.String("foo"),
 		Private: github.Bool(true),
 	}
-	client.Repositories.Create("", repo)
+	client.Repositories.Create(ctx, "", repo)
 
 Users who have worked with protocol buffers should find this pattern familiar.
 
 Pagination
 
-All requests for resource collections (repos, pull requests, issues, etc)
+All requests for resource collections (repos, pull requests, issues, etc.)
 support pagination. Pagination options are described in the
-ListOptions struct and passed to the list methods directly or as an
+github.ListOptions struct and passed to the list methods directly or as an
 embedded type of a more specific list options struct (for example
-PullRequestListOptions). Pages information is available via Response struct.
+github.PullRequestListOptions). Pages information is available via the
+github.Response struct.
+
+	client := github.NewClient(nil)
 
 	opt := &github.RepositoryListByOrgOptions{
 		ListOptions: github.ListOptions{PerPage: 10},
 	}
 	// get all pages of results
-	var allRepos []github.Repository
+	var allRepos []*github.Repository
 	for {
-		repos, resp, err := client.Repositories.ListByOrg("github", opt)
+		repos, resp, err := client.Repositories.ListByOrg(ctx, "github", opt)
 		if err != nil {
 			return err
 		}
diff --git a/vendor/github.com/google/go-github/github/event_types.go b/vendor/github.com/google/go-github/github/event_types.go
index e2c37ca42c..4fb5d955e8 100644
--- a/vendor/github.com/google/go-github/github/event_types.go
+++ b/vendor/github.com/google/go-github/github/event_types.go
@@ -10,14 +10,15 @@ package github
 
 // CommitCommentEvent is triggered when a commit comment is created.
 // The Webhook event name is "commit_comment".
 //
-// GitHub docs: https://developer.github.com/v3/activity/events/types/#commitcommentevent
+// GitHub API docs: https://developer.github.com/v3/activity/events/types/#commitcommentevent
 type CommitCommentEvent struct {
 	Comment *RepositoryComment `json:"comment,omitempty"`
 
 	// The following fields are only populated by Webhook events.
- Action *string `json:"action,omitempty"` - Repo *Repository `json:"repository,omitempty"` - Sender *User `json:"sender,omitempty"` + Action *string `json:"action,omitempty"` + Repo *Repository `json:"repository,omitempty"` + Sender *User `json:"sender,omitempty"` + Installation *Installation `json:"installation,omitempty"` } // CreateEvent represents a created repository, branch, or tag. @@ -27,7 +28,7 @@ type CommitCommentEvent struct { // Additionally, webhooks will not receive this event for tags if more // than three tags are pushed at once. // -// GitHub docs: https://developer.github.com/v3/activity/events/types/#createevent +// GitHub API docs: https://developer.github.com/v3/activity/events/types/#createevent type CreateEvent struct { Ref *string `json:"ref,omitempty"` // RefType is the object that was created. Possible values are: "repository", "branch", "tag". @@ -36,9 +37,10 @@ type CreateEvent struct { Description *string `json:"description,omitempty"` // The following fields are only populated by Webhook events. - PusherType *string `json:"pusher_type,omitempty"` - Repo *Repository `json:"repository,omitempty"` - Sender *User `json:"sender,omitempty"` + PusherType *string `json:"pusher_type,omitempty"` + Repo *Repository `json:"repository,omitempty"` + Sender *User `json:"sender,omitempty"` + Installation *Installation `json:"installation,omitempty"` } // DeleteEvent represents a deleted branch or tag. @@ -47,16 +49,17 @@ type CreateEvent struct { // Note: webhooks will not receive this event for tags if more than three tags // are deleted at once. // -// GitHub docs: https://developer.github.com/v3/activity/events/types/#deleteevent +// GitHub API docs: https://developer.github.com/v3/activity/events/types/#deleteevent type DeleteEvent struct { Ref *string `json:"ref,omitempty"` // RefType is the object that was deleted. Possible values are: "branch", "tag". RefType *string `json:"ref_type,omitempty"` // The following fields are only populated by Webhook events. - PusherType *string `json:"pusher_type,omitempty"` - Repo *Repository `json:"repository,omitempty"` - Sender *User `json:"sender,omitempty"` + PusherType *string `json:"pusher_type,omitempty"` + Repo *Repository `json:"repository,omitempty"` + Sender *User `json:"sender,omitempty"` + Installation *Installation `json:"installation,omitempty"` } // DeploymentEvent represents a deployment. @@ -64,13 +67,14 @@ type DeleteEvent struct { // // Events of this type are not visible in timelines, they are only used to trigger hooks. // -// GitHub docs: https://developer.github.com/v3/activity/events/types/#deploymentevent +// GitHub API docs: https://developer.github.com/v3/activity/events/types/#deploymentevent type DeploymentEvent struct { Deployment *Deployment `json:"deployment,omitempty"` Repo *Repository `json:"repository,omitempty"` // The following fields are only populated by Webhook events. - Sender *User `json:"sender,omitempty"` + Sender *User `json:"sender,omitempty"` + Installation *Installation `json:"installation,omitempty"` } // DeploymentStatusEvent represents a deployment status. @@ -78,27 +82,29 @@ type DeploymentEvent struct { // // Events of this type are not visible in timelines, they are only used to trigger hooks. 
// -// GitHub docs: https://developer.github.com/v3/activity/events/types/#deploymentstatusevent +// GitHub API docs: https://developer.github.com/v3/activity/events/types/#deploymentstatusevent type DeploymentStatusEvent struct { Deployment *Deployment `json:"deployment,omitempty"` DeploymentStatus *DeploymentStatus `json:"deployment_status,omitempty"` Repo *Repository `json:"repository,omitempty"` // The following fields are only populated by Webhook events. - Sender *User `json:"sender,omitempty"` + Sender *User `json:"sender,omitempty"` + Installation *Installation `json:"installation,omitempty"` } // ForkEvent is triggered when a user forks a repository. // The Webhook event name is "fork". // -// GitHub docs: https://developer.github.com/v3/activity/events/types/#forkevent +// GitHub API docs: https://developer.github.com/v3/activity/events/types/#forkevent type ForkEvent struct { // Forkee is the created repository. Forkee *Repository `json:"forkee,omitempty"` // The following fields are only populated by Webhook events. - Repo *Repository `json:"repository,omitempty"` - Sender *User `json:"sender,omitempty"` + Repo *Repository `json:"repository,omitempty"` + Sender *User `json:"sender,omitempty"` + Installation *Installation `json:"installation,omitempty"` } // Page represents a single Wiki page. @@ -114,73 +120,146 @@ type Page struct { // GollumEvent is triggered when a Wiki page is created or updated. // The Webhook event name is "gollum". // -// GitHub docs: https://developer.github.com/v3/activity/events/types/#gollumevent +// GitHub API docs: https://developer.github.com/v3/activity/events/types/#gollumevent type GollumEvent struct { Pages []*Page `json:"pages,omitempty"` // The following fields are only populated by Webhook events. - Repo *Repository `json:"repository,omitempty"` - Sender *User `json:"sender,omitempty"` + Repo *Repository `json:"repository,omitempty"` + Sender *User `json:"sender,omitempty"` + Installation *Installation `json:"installation,omitempty"` } -// DEPRECATED: IssueActivityEvent represents the payload delivered by Issue webhook -// Use IssuesEvent instead. -type IssueActivityEvent struct { - Action *string `json:"action,omitempty"` - Issue *Issue `json:"issue,omitempty"` +// EditChange represents the changes when an issue, pull request, or comment has +// been edited. +type EditChange struct { + Title *struct { + From *string `json:"from,omitempty"` + } `json:"title,omitempty"` + Body *struct { + From *string `json:"from,omitempty"` + } `json:"body,omitempty"` +} - // The following fields are only populated by Webhook events. - Repo *Repository `json:"repository,omitempty"` - Sender *User `json:"sender,omitempty"` +// ProjectChange represents the changes when a project has been edited. +type ProjectChange struct { + Name *struct { + From *string `json:"from,omitempty"` + } `json:"name,omitempty"` + Body *struct { + From *string `json:"from,omitempty"` + } `json:"body,omitempty"` +} + +// ProjectCardChange represents the changes when a project card has been edited. +type ProjectCardChange struct { + Note *struct { + From *string `json:"from,omitempty"` + } `json:"note,omitempty"` +} + +// ProjectColumnChange represents the changes when a project column has been edited. +type ProjectColumnChange struct { + Name *struct { + From *string `json:"from,omitempty"` + } `json:"name,omitempty"` +} + +// IntegrationInstallationEvent is triggered when an integration is created or deleted. +// The Webhook event name is "integration_installation". 
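+//
+// A sketch of decoding such a payload by hand (illustrative: "payload" is
+// assumed to hold the raw webhook request body as a []byte):
+//
+//	var event github.IntegrationInstallationEvent
+//	if err := json.Unmarshal(payload, &event); err != nil {
+//		// handle malformed payload
+//	}
+//	fmt.Println(event.GetAction())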
+// +// GitHub API docs: https://developer.github.com/early-access/integrations/webhooks/#integrationinstallationevent +type IntegrationInstallationEvent struct { + // The action that was performed. Possible values for an "integration_installation" + // event are: "created", "deleted". + Action *string `json:"action,omitempty"` + Sender *User `json:"sender,omitempty"` + Installation *Installation `json:"installation,omitempty"` +} + +// IntegrationInstallationRepositoriesEvent is triggered when an integration repository +// is added or removed. The Webhook event name is "integration_installation_repositories". +// +// GitHub API docs: https://developer.github.com/early-access/integrations/webhooks/#integrationinstallationrepositoriesevent +type IntegrationInstallationRepositoriesEvent struct { + // The action that was performed. Possible values for an "integration_installation_repositories" + // event are: "added", "removed". + Action *string `json:"action,omitempty"` + RepositoriesAdded []*Repository `json:"repositories_added,omitempty"` + RepositoriesRemoved []*Repository `json:"repositories_removed,omitempty"` + Sender *User `json:"sender,omitempty"` + Installation *Installation `json:"installation,omitempty"` } // IssueCommentEvent is triggered when an issue comment is created on an issue // or pull request. // The Webhook event name is "issue_comment". // -// GitHub docs: https://developer.github.com/v3/activity/events/types/#issuecommentevent +// GitHub API docs: https://developer.github.com/v3/activity/events/types/#issuecommentevent type IssueCommentEvent struct { // Action is the action that was performed on the comment. - // Possible value is: "created". + // Possible values are: "created", "edited", "deleted". Action *string `json:"action,omitempty"` Issue *Issue `json:"issue,omitempty"` Comment *IssueComment `json:"comment,omitempty"` // The following fields are only populated by Webhook events. - Repo *Repository `json:"repository,omitempty"` - Sender *User `json:"sender,omitempty"` + Changes *EditChange `json:"changes,omitempty"` + Repo *Repository `json:"repository,omitempty"` + Sender *User `json:"sender,omitempty"` + Installation *Installation `json:"installation,omitempty"` } // IssuesEvent is triggered when an issue is assigned, unassigned, labeled, // unlabeled, opened, closed, or reopened. // The Webhook event name is "issues". // -// GitHub docs: https://developer.github.com/v3/activity/events/types/#issuesevent +// GitHub API docs: https://developer.github.com/v3/activity/events/types/#issuesevent type IssuesEvent struct { // Action is the action that was performed. Possible values are: "assigned", - // "unassigned", "labeled", "unlabeled", "opened", "closed", "reopened". + // "unassigned", "labeled", "unlabeled", "opened", "closed", "reopened", "edited". Action *string `json:"action,omitempty"` Issue *Issue `json:"issue,omitempty"` Assignee *User `json:"assignee,omitempty"` Label *Label `json:"label,omitempty"` // The following fields are only populated by Webhook events. - Repo *Repository `json:"repository,omitempty"` - Sender *User `json:"sender,omitempty"` + Changes *EditChange `json:"changes,omitempty"` + Repo *Repository `json:"repository,omitempty"` + Sender *User `json:"sender,omitempty"` + Installation *Installation `json:"installation,omitempty"` +} + +// LabelEvent is triggered when a repository's label is created, edited, or deleted. 
+// The Webhook event name is "label".
+//
+// GitHub API docs: https://developer.github.com/v3/activity/events/types/#labelevent
+type LabelEvent struct {
+	// Action is the action that was performed. Possible values are:
+	// "created", "edited", "deleted".
+	Action *string `json:"action,omitempty"`
+	Label  *Label  `json:"label,omitempty"`
+
+	// The following fields are only populated by Webhook events.
+	Changes      *EditChange   `json:"changes,omitempty"`
+	Repo         *Repository   `json:"repository,omitempty"`
+	Org          *Organization `json:"organization,omitempty"`
+	Installation *Installation `json:"installation,omitempty"`
 }
 
 // MemberEvent is triggered when a user is added as a collaborator to a repository.
 // The Webhook event name is "member".
 //
-// GitHub docs: https://developer.github.com/v3/activity/events/types/#memberevent
+// GitHub API docs: https://developer.github.com/v3/activity/events/types/#memberevent
 type MemberEvent struct {
 	// Action is the action that was performed. Possible value is: "added".
 	Action *string `json:"action,omitempty"`
 	Member *User   `json:"member,omitempty"`
 
 	// The following fields are only populated by Webhook events.
-	Repo   *Repository `json:"repository,omitempty"`
-	Sender *User       `json:"sender,omitempty"`
+	Repo         *Repository   `json:"repository,omitempty"`
+	Sender       *User         `json:"sender,omitempty"`
+	Installation *Installation `json:"installation,omitempty"`
 }
 
 // MembershipEvent is triggered when a user is added or removed from a team.
@@ -189,7 +268,7 @@ type MemberEvent struct {
 // Events of this type are not visible in timelines, they are only used to
 // trigger organization webhooks.
 //
-// GitHub docs: https://developer.github.com/v3/activity/events/types/#membershipevent
+// GitHub API docs: https://developer.github.com/v3/activity/events/types/#membershipevent
 type MembershipEvent struct {
 	// Action is the action that was performed. Possible values are: "added", "removed".
 	Action *string `json:"action,omitempty"`
@@ -199,8 +278,49 @@ type MembershipEvent struct {
 	Team *Team `json:"team,omitempty"`
 
 	// The following fields are only populated by Webhook events.
-	Org    *Organization `json:"organization,omitempty"`
-	Sender *User         `json:"sender,omitempty"`
+	Org          *Organization `json:"organization,omitempty"`
+	Sender       *User         `json:"sender,omitempty"`
+	Installation *Installation `json:"installation,omitempty"`
+}
+
+// MilestoneEvent is triggered when a milestone is created, closed, opened, edited, or deleted.
+// The Webhook event name is "milestone".
+//
+// GitHub API docs: https://developer.github.com/v3/activity/events/types/#milestoneevent
+type MilestoneEvent struct {
+	// Action is the action that was performed. Possible values are:
+	// "created", "closed", "opened", "edited", "deleted".
+	Action    *string    `json:"action,omitempty"`
+	Milestone *Milestone `json:"milestone,omitempty"`
+
+	// The following fields are only populated by Webhook events.
+	Changes      *EditChange   `json:"changes,omitempty"`
+	Repo         *Repository   `json:"repository,omitempty"`
+	Sender       *User         `json:"sender,omitempty"`
+	Org          *Organization `json:"organization,omitempty"`
+	Installation *Installation `json:"installation,omitempty"`
+}
+
+// OrganizationEvent is triggered when a user is added, removed, or invited to an organization.
+// Events of this type are not visible in timelines. These events are only used to trigger organization hooks.
+// The Webhook event name is "organization".
+//
+// GitHub API docs: https://developer.github.com/v3/activity/events/types/#organizationevent
+type OrganizationEvent struct {
+	// Action is the action that was performed.
+	// Can be one of "member_added", "member_removed", or "member_invited".
+	Action *string `json:"action,omitempty"`
+
+	// Invitation is the invitation for the user or email if the action is "member_invited".
+	Invitation *Invitation `json:"invitation,omitempty"`
+
+	// Membership is the membership between the user and the organization.
+	// Not present when the action is "member_invited".
+	Membership *Membership `json:"membership,omitempty"`
+
+	Organization *Organization `json:"organization,omitempty"`
+	Sender       *User         `json:"sender,omitempty"`
+	Installation *Installation `json:"installation,omitempty"`
 }
 
 // PageBuildEvent represents an attempted build of a GitHub Pages site, whether
@@ -212,87 +332,178 @@ type MembershipEvent struct {
 //
 // Events of this type are not visible in timelines, they are only used to trigger hooks.
 //
-// GitHub docs: https://developer.github.com/v3/activity/events/types/#pagebuildevent
+// GitHub API docs: https://developer.github.com/v3/activity/events/types/#pagebuildevent
 type PageBuildEvent struct {
 	Build *PagesBuild `json:"build,omitempty"`
 
 	// The following fields are only populated by Webhook events.
-	ID     *string     `json:"id,omitempty"`
-	Repo   *Repository `json:"repository,omitempty"`
-	Sender *User       `json:"sender,omitempty"`
+	ID           *int          `json:"id,omitempty"`
+	Repo         *Repository   `json:"repository,omitempty"`
+	Sender       *User         `json:"sender,omitempty"`
+	Installation *Installation `json:"installation,omitempty"`
+}
+
+// PingEvent is triggered when a Webhook is added to GitHub.
+//
+// GitHub API docs: https://developer.github.com/webhooks/#ping-event
+type PingEvent struct {
+	// Random string of GitHub zen.
+	Zen *string `json:"zen,omitempty"`
+	// The ID of the webhook that triggered the ping.
+	HookID *int `json:"hook_id,omitempty"`
+	// The webhook configuration.
+	Hook         *Hook         `json:"hook,omitempty"`
+	Installation *Installation `json:"installation,omitempty"`
+}
+
+// ProjectEvent is triggered when a project is created, modified, or deleted.
+// The webhook event name is "project".
+//
+// GitHub API docs: https://developer.github.com/v3/activity/events/types/#projectevent
+type ProjectEvent struct {
+	Action  *string        `json:"action,omitempty"`
+	Changes *ProjectChange `json:"changes,omitempty"`
+	Project *Project       `json:"project,omitempty"`
+
+	// The following fields are only populated by Webhook events.
+	Repo         *Repository   `json:"repository,omitempty"`
+	Org          *Organization `json:"organization,omitempty"`
+	Sender       *User         `json:"sender,omitempty"`
+	Installation *Installation `json:"installation,omitempty"`
+}
+
+// ProjectCardEvent is triggered when a project card is created, updated, moved, converted to an issue, or deleted.
+// The webhook event name is "project_card".
+//
+// GitHub API docs: https://developer.github.com/v3/activity/events/types/#projectcardevent
+type ProjectCardEvent struct {
+	Action      *string            `json:"action,omitempty"`
+	Changes     *ProjectCardChange `json:"changes,omitempty"`
+	AfterID     *int               `json:"after_id,omitempty"`
+	ProjectCard *ProjectCard       `json:"project_card,omitempty"`
+
+	// The following fields are only populated by Webhook events.
+ Repo *Repository `json:"repository,omitempty"` + Org *Organization `json:"organization,omitempty"` + Sender *User `json:"sender,omitempty"` + Installation *Installation `json:"installation,omitempty"` +} + +// ProjectColumnEvent is triggered when a project column is created, updated, moved, or deleted. +// The webhook event name is "project_column". +// +// GitHub API docs: https://developer.github.com/v3/activity/events/types/#projectcolumnevent +type ProjectColumnEvent struct { + Action *string `json:"action,omitempty"` + Changes *ProjectColumnChange `json:"changes,omitempty"` + AfterID *int `json:"after_id,omitempty"` + ProjectColumn *ProjectColumn `json:"project_column,omitempty"` + + // The following fields are only populated by Webhook events. + Repo *Repository `json:"repository,omitempty"` + Org *Organization `json:"organization,omitempty"` + Sender *User `json:"sender,omitempty"` + Installation *Installation `json:"installation,omitempty"` } // PublicEvent is triggered when a private repository is open sourced. // According to GitHub: "Without a doubt: the best GitHub event." // The Webhook event name is "public". // -// GitHub docs: https://developer.github.com/v3/activity/events/types/#publicevent +// GitHub API docs: https://developer.github.com/v3/activity/events/types/#publicevent type PublicEvent struct { // The following fields are only populated by Webhook events. - Repo *Repository `json:"repository,omitempty"` - Sender *User `json:"sender,omitempty"` + Repo *Repository `json:"repository,omitempty"` + Sender *User `json:"sender,omitempty"` + Installation *Installation `json:"installation,omitempty"` } // PullRequestEvent is triggered when a pull request is assigned, unassigned, // labeled, unlabeled, opened, closed, reopened, or synchronized. // The Webhook event name is "pull_request". // -// GitHub docs: https://developer.github.com/v3/activity/events/types/#pullrequestevent +// GitHub API docs: https://developer.github.com/v3/activity/events/types/#pullrequestevent type PullRequestEvent struct { // Action is the action that was performed. Possible values are: "assigned", // "unassigned", "labeled", "unlabeled", "opened", "closed", or "reopened", - // "synchronize". If the action is "closed" and the merged key is false, the - // pull request was closed with unmerged commits. If the action is "closed" and - // the merged key is true, the pull request was merged. + // "synchronize", "edited". If the action is "closed" and the merged key is false, + // the pull request was closed with unmerged commits. If the action is "closed" + // and the merged key is true, the pull request was merged. Action *string `json:"action,omitempty"` Number *int `json:"number,omitempty"` PullRequest *PullRequest `json:"pull_request,omitempty"` // The following fields are only populated by Webhook events. - Repo *Repository `json:"repository,omitempty"` - Sender *User `json:"sender,omitempty"` + Changes *EditChange `json:"changes,omitempty"` + Repo *Repository `json:"repository,omitempty"` + Sender *User `json:"sender,omitempty"` + Installation *Installation `json:"installation,omitempty"` +} + +// PullRequestReviewEvent is triggered when a review is submitted on a pull +// request. +// The Webhook event name is "pull_request_review". +// +// GitHub API docs: https://developer.github.com/v3/activity/events/types/#pullrequestreviewevent +type PullRequestReviewEvent struct { + // Action is always "submitted". 
+ Action *string `json:"action,omitempty"` + Review *PullRequestReview `json:"review,omitempty"` + PullRequest *PullRequest `json:"pull_request,omitempty"` + + // The following fields are only populated by Webhook events. + Repo *Repository `json:"repository,omitempty"` + Sender *User `json:"sender,omitempty"` + Installation *Installation `json:"installation,omitempty"` + + // The following field is only present when the webhook is triggered on + // a repository belonging to an organization. + Organization *Organization `json:"organization,omitempty"` } // PullRequestReviewCommentEvent is triggered when a comment is created on a // portion of the unified diff of a pull request. // The Webhook event name is "pull_request_review_comment". // -// GitHub docs: https://developer.github.com/v3/activity/events/types/#pullrequestreviewcommentevent +// GitHub API docs: https://developer.github.com/v3/activity/events/types/#pullrequestreviewcommentevent type PullRequestReviewCommentEvent struct { // Action is the action that was performed on the comment. - // Possible value is: "created". + // Possible values are: "created", "edited", "deleted". Action *string `json:"action,omitempty"` PullRequest *PullRequest `json:"pull_request,omitempty"` Comment *PullRequestComment `json:"comment,omitempty"` // The following fields are only populated by Webhook events. - Repo *Repository `json:"repository,omitempty"` - Sender *User `json:"sender,omitempty"` + Changes *EditChange `json:"changes,omitempty"` + Repo *Repository `json:"repository,omitempty"` + Sender *User `json:"sender,omitempty"` + Installation *Installation `json:"installation,omitempty"` } // PushEvent represents a git push to a GitHub repository. // -// GitHub API docs: http://developer.github.com/v3/activity/events/types/#pushevent +// GitHub API docs: https://developer.github.com/v3/activity/events/types/#pushevent type PushEvent struct { - PushID *int `json:"push_id,omitempty"` - Head *string `json:"head,omitempty"` - Ref *string `json:"ref,omitempty"` - Size *int `json:"size,omitempty"` - Commits []PushEventCommit `json:"commits,omitempty"` - Repo *PushEventRepository `json:"repository,omitempty"` - Before *string `json:"before,omitempty"` - DistinctSize *int `json:"distinct_size,omitempty"` + PushID *int `json:"push_id,omitempty"` + Head *string `json:"head,omitempty"` + Ref *string `json:"ref,omitempty"` + Size *int `json:"size,omitempty"` + Commits []PushEventCommit `json:"commits,omitempty"` + Before *string `json:"before,omitempty"` + DistinctSize *int `json:"distinct_size,omitempty"` // The following fields are only populated by Webhook events. 
- After *string `json:"after,omitempty"` - Created *bool `json:"created,omitempty"` - Deleted *bool `json:"deleted,omitempty"` - Forced *bool `json:"forced,omitempty"` - BaseRef *string `json:"base_ref,omitempty"` - Compare *string `json:"compare,omitempty"` - HeadCommit *PushEventCommit `json:"head_commit,omitempty"` - Pusher *User `json:"pusher,omitempty"` - Sender *User `json:"sender,omitempty"` + After *string `json:"after,omitempty"` + Created *bool `json:"created,omitempty"` + Deleted *bool `json:"deleted,omitempty"` + Forced *bool `json:"forced,omitempty"` + BaseRef *string `json:"base_ref,omitempty"` + Compare *string `json:"compare,omitempty"` + Repo *PushEventRepository `json:"repository,omitempty"` + HeadCommit *PushEventCommit `json:"head_commit,omitempty"` + Pusher *User `json:"pusher,omitempty"` + Sender *User `json:"sender,omitempty"` + Installation *Installation `json:"installation,omitempty"` } func (p PushEvent) String() string { @@ -301,21 +512,29 @@ func (p PushEvent) String() string { // PushEventCommit represents a git commit in a GitHub PushEvent. type PushEventCommit struct { - SHA *string `json:"sha,omitempty"` Message *string `json:"message,omitempty"` Author *CommitAuthor `json:"author,omitempty"` URL *string `json:"url,omitempty"` Distinct *bool `json:"distinct,omitempty"` - Added []string `json:"added,omitempty"` - Removed []string `json:"removed,omitempty"` - Modified []string `json:"modified,omitempty"` + + // The following fields are only populated by Events API. + SHA *string `json:"sha,omitempty"` + + // The following fields are only populated by Webhook events. + ID *string `json:"id,omitempty"` + TreeID *string `json:"tree_id,omitempty"` + Timestamp *Timestamp `json:"timestamp,omitempty"` + Committer *CommitAuthor `json:"committer,omitempty"` + Added []string `json:"added,omitempty"` + Removed []string `json:"removed,omitempty"` + Modified []string `json:"modified,omitempty"` } func (p PushEventCommit) String() string { return Stringify(p) } -// PushEventRepository represents the repo object in a PushEvent payload +// PushEventRepository represents the repo object in a PushEvent payload. type PushEventRepository struct { ID *int `json:"id,omitempty"` Name *string `json:"name,omitempty"` @@ -341,9 +560,16 @@ type PushEventRepository struct { DefaultBranch *string `json:"default_branch,omitempty"` MasterBranch *string `json:"master_branch,omitempty"` Organization *string `json:"organization,omitempty"` + URL *string `json:"url,omitempty"` + HTMLURL *string `json:"html_url,omitempty"` + StatusesURL *string `json:"statuses_url,omitempty"` + GitURL *string `json:"git_url,omitempty"` + SSHURL *string `json:"ssh_url,omitempty"` + CloneURL *string `json:"clone_url,omitempty"` + SVNURL *string `json:"svn_url,omitempty"` } -// PushEventRepoOwner is a basic reporesntation of user/org in a PushEvent payload +// PushEventRepoOwner is a basic representation of user/org in a PushEvent payload. type PushEventRepoOwner struct { Name *string `json:"name,omitempty"` Email *string `json:"email,omitempty"` @@ -352,15 +578,16 @@ type PushEventRepoOwner struct { // ReleaseEvent is triggered when a release is published. // The Webhook event name is "release". // -// GitHub docs: https://developer.github.com/v3/activity/events/types/#releaseevent +// GitHub API docs: https://developer.github.com/v3/activity/events/types/#releaseevent type ReleaseEvent struct { // Action is the action that was performed. Possible value is: "published". 
Action *string `json:"action,omitempty"` Release *RepositoryRelease `json:"release,omitempty"` // The following fields are only populated by Webhook events. - Repo *Repository `json:"repository,omitempty"` - Sender *User `json:"sender,omitempty"` + Repo *Repository `json:"repository,omitempty"` + Sender *User `json:"sender,omitempty"` + Installation *Installation `json:"installation,omitempty"` } // RepositoryEvent is triggered when a repository is created. @@ -369,15 +596,17 @@ type ReleaseEvent struct { // Events of this type are not visible in timelines, they are only used to // trigger organization webhooks. // -// GitHub docs: https://developer.github.com/v3/activity/events/types/#repositoryevent +// GitHub API docs: https://developer.github.com/v3/activity/events/types/#repositoryevent type RepositoryEvent struct { - // Action is the action that was performed. Possible value is: "created". + // Action is the action that was performed. Possible values are: "created", "deleted", + // "publicized", "privatized". Action *string `json:"action,omitempty"` Repo *Repository `json:"repository,omitempty"` // The following fields are only populated by Webhook events. - Org *Organization `json:"organization,omitempty"` - Sender *User `json:"sender,omitempty"` + Org *Organization `json:"organization,omitempty"` + Sender *User `json:"sender,omitempty"` + Installation *Installation `json:"installation,omitempty"` } // StatusEvent is triggered when the status of a Git commit changes. @@ -386,7 +615,7 @@ type RepositoryEvent struct { // Events of this type are not visible in timelines, they are only used to // trigger hooks. // -// GitHub docs: https://developer.github.com/v3/activity/events/types/#statusevent +// GitHub API docs: https://developer.github.com/v3/activity/events/types/#statusevent type StatusEvent struct { SHA *string `json:"sha,omitempty"` // State is the new state. Possible values are: "pending", "success", "failure", "error". @@ -396,14 +625,15 @@ type StatusEvent struct { Branches []*Branch `json:"branches,omitempty"` // The following fields are only populated by Webhook events. - ID *int `json:"id,omitempty"` - Name *string `json:"name,omitempty"` - Context *string `json:"context,omitempty"` - Commit *PushEventCommit `json:"commit,omitempty"` - CreatedAt *Timestamp `json:"created_at,omitempty"` - UpdatedAt *Timestamp `json:"updated_at,omitempty"` - Repo *Repository `json:"repository,omitempty"` - Sender *User `json:"sender,omitempty"` + ID *int `json:"id,omitempty"` + Name *string `json:"name,omitempty"` + Context *string `json:"context,omitempty"` + Commit *RepositoryCommit `json:"commit,omitempty"` + CreatedAt *Timestamp `json:"created_at,omitempty"` + UpdatedAt *Timestamp `json:"updated_at,omitempty"` + Repo *Repository `json:"repository,omitempty"` + Sender *User `json:"sender,omitempty"` + Installation *Installation `json:"installation,omitempty"` } // TeamAddEvent is triggered when a repository is added to a team. @@ -412,14 +642,15 @@ type StatusEvent struct { // Events of this type are not visible in timelines. These events are only used // to trigger hooks. // -// GitHub docs: https://developer.github.com/v3/activity/events/types/#teamaddevent +// GitHub API docs: https://developer.github.com/v3/activity/events/types/#teamaddevent type TeamAddEvent struct { Team *Team `json:"team,omitempty"` Repo *Repository `json:"repository,omitempty"` // The following fields are only populated by Webhook events. 
- Org *Organization `json:"organization,omitempty"` - Sender *User `json:"sender,omitempty"` + Org *Organization `json:"organization,omitempty"` + Sender *User `json:"sender,omitempty"` + Installation *Installation `json:"installation,omitempty"` } // WatchEvent is related to starring a repository, not watching. See this API @@ -428,12 +659,13 @@ type TeamAddEvent struct { // The event’s actor is the user who starred a repository, and the event’s // repository is the repository that was starred. // -// GitHub docs: https://developer.github.com/v3/activity/events/types/#watchevent +// GitHub API docs: https://developer.github.com/v3/activity/events/types/#watchevent type WatchEvent struct { // Action is the action that was performed. Possible value is: "started". Action *string `json:"action,omitempty"` // The following fields are only populated by Webhook events. - Repo *Repository `json:"repository,omitempty"` - Sender *User `json:"sender,omitempty"` + Repo *Repository `json:"repository,omitempty"` + Sender *User `json:"sender,omitempty"` + Installation *Installation `json:"installation,omitempty"` } diff --git a/vendor/github.com/google/go-github/github/gen-accessors.go b/vendor/github.com/google/go-github/github/gen-accessors.go new file mode 100644 index 0000000000..131c56cbf0 --- /dev/null +++ b/vendor/github.com/google/go-github/github/gen-accessors.go @@ -0,0 +1,299 @@ +// Copyright 2017 The go-github AUTHORS. All rights reserved. +// +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// +build ignore + +// gen-accessors generates accessor methods for structs with pointer fields. +// +// It is meant to be used by the go-github authors in conjunction with the +// go generate tool before sending a commit to GitHub. +package main + +import ( + "bytes" + "flag" + "fmt" + "go/ast" + "go/format" + "go/parser" + "go/token" + "io/ioutil" + "log" + "os" + "sort" + "strings" + "text/template" + "time" +) + +const ( + fileSuffix = "-accessors.go" +) + +var ( + verbose = flag.Bool("v", false, "Print verbose log messages") + + sourceTmpl = template.Must(template.New("source").Parse(source)) + + // blacklist lists which "struct.method" combos to not generate. + blacklist = map[string]bool{ + "RepositoryContent.GetContent": true, + "Client.GetBaseURL": true, + "Client.GetUploadURL": true, + "ErrorResponse.GetResponse": true, + "RateLimitError.GetResponse": true, + "AbuseRateLimitError.GetResponse": true, + } +) + +func logf(fmt string, args ...interface{}) { + if *verbose { + log.Printf(fmt, args...) 
+ } +} + +func main() { + flag.Parse() + fset := token.NewFileSet() + + pkgs, err := parser.ParseDir(fset, ".", sourceFilter, 0) + if err != nil { + log.Fatal(err) + return + } + + for pkgName, pkg := range pkgs { + t := &templateData{ + filename: pkgName + fileSuffix, + Year: time.Now().Year(), + Package: pkgName, + Imports: map[string]string{}, + } + for filename, f := range pkg.Files { + logf("Processing %v...", filename) + if err := t.processAST(f); err != nil { + log.Fatal(err) + } + } + if err := t.dump(); err != nil { + log.Fatal(err) + } + } + logf("Done.") +} + +func (t *templateData) processAST(f *ast.File) error { + for _, decl := range f.Decls { + gd, ok := decl.(*ast.GenDecl) + if !ok { + continue + } + for _, spec := range gd.Specs { + ts, ok := spec.(*ast.TypeSpec) + if !ok { + continue + } + st, ok := ts.Type.(*ast.StructType) + if !ok { + continue + } + for _, field := range st.Fields.List { + se, ok := field.Type.(*ast.StarExpr) + if len(field.Names) == 0 || !ok { + continue + } + + fieldName := field.Names[0] + if key := fmt.Sprintf("%v.Get%v", ts.Name, fieldName); blacklist[key] { + logf("Method %v blacklisted; skipping.", key) + continue + } + + switch x := se.X.(type) { + case *ast.ArrayType: + t.addArrayType(x, ts.Name.String(), fieldName.String()) + case *ast.Ident: + t.addIdent(x, ts.Name.String(), fieldName.String()) + case *ast.MapType: + t.addMapType(x, ts.Name.String(), fieldName.String()) + case *ast.SelectorExpr: + t.addSelectorExpr(x, ts.Name.String(), fieldName.String()) + default: + logf("processAST: type %q, field %q, unknown %T: %+v", ts.Name, fieldName, x, x) + } + } + } + } + return nil +} + +func sourceFilter(fi os.FileInfo) bool { + return !strings.HasSuffix(fi.Name(), "_test.go") && !strings.HasSuffix(fi.Name(), fileSuffix) +} + +func (t *templateData) dump() error { + if len(t.Getters) == 0 { + logf("No getters for %v; skipping.", t.filename) + return nil + } + + // Sort getters by ReceiverType.FieldName + sort.Sort(byName(t.Getters)) + + var buf bytes.Buffer + if err := sourceTmpl.Execute(&buf, t); err != nil { + return err + } + clean, err := format.Source(buf.Bytes()) + if err != nil { + return err + } + + logf("Writing %v...", t.filename) + return ioutil.WriteFile(t.filename, clean, 0644) +} + +func newGetter(receiverType, fieldName, fieldType, zeroValue string) *getter { + return &getter{ + sortVal: strings.ToLower(receiverType) + "." + strings.ToLower(fieldName), + ReceiverVar: strings.ToLower(receiverType[:1]), + ReceiverType: receiverType, + FieldName: fieldName, + FieldType: fieldType, + ZeroValue: zeroValue, + } +} + +func (t *templateData) addArrayType(x *ast.ArrayType, receiverType, fieldName string) { + var eltType string + switch elt := x.Elt.(type) { + case *ast.Ident: + eltType = elt.String() + default: + logf("addArrayType: type %q, field %q: unknown elt type: %T %+v; skipping.", receiverType, fieldName, elt, elt) + return + } + + t.Getters = append(t.Getters, newGetter(receiverType, fieldName, "[]"+eltType, "nil")) +} + +func (t *templateData) addIdent(x *ast.Ident, receiverType, fieldName string) { + var zeroValue string + switch x.String() { + case "int": + zeroValue = "0" + case "string": + zeroValue = `""` + case "bool": + zeroValue = "false" + case "Timestamp": + zeroValue = "Timestamp{}" + default: // other structs handled by their receivers directly. 
+ return + } + + t.Getters = append(t.Getters, newGetter(receiverType, fieldName, x.String(), zeroValue)) +} + +func (t *templateData) addMapType(x *ast.MapType, receiverType, fieldName string) { + var keyType string + switch key := x.Key.(type) { + case *ast.Ident: + keyType = key.String() + default: + logf("addMapType: type %q, field %q: unknown key type: %T %+v; skipping.", receiverType, fieldName, key, key) + return + } + + var valueType string + switch value := x.Value.(type) { + case *ast.Ident: + valueType = value.String() + default: + logf("addMapType: type %q, field %q: unknown value type: %T %+v; skipping.", receiverType, fieldName, value, value) + return + } + + fieldType := fmt.Sprintf("map[%v]%v", keyType, valueType) + zeroValue := fmt.Sprintf("map[%v]%v{}", keyType, valueType) + t.Getters = append(t.Getters, newGetter(receiverType, fieldName, fieldType, zeroValue)) +} + +func (t *templateData) addSelectorExpr(x *ast.SelectorExpr, receiverType, fieldName string) { + if strings.ToLower(fieldName[:1]) == fieldName[:1] { // non-exported field + return + } + + var xX string + if xx, ok := x.X.(*ast.Ident); ok { + xX = xx.String() + } + + switch xX { + case "time", "json": + if xX == "json" { + t.Imports["encoding/json"] = "encoding/json" + } else { + t.Imports[xX] = xX + } + fieldType := fmt.Sprintf("%v.%v", xX, x.Sel.Name) + zeroValue := fmt.Sprintf("%v.%v{}", xX, x.Sel.Name) + if xX == "time" && x.Sel.Name == "Duration" { + zeroValue = "0" + } + t.Getters = append(t.Getters, newGetter(receiverType, fieldName, fieldType, zeroValue)) + default: + logf("addSelectorExpr: xX %q, type %q, field %q: unknown x=%+v; skipping.", xX, receiverType, fieldName, x) + } +} + +type templateData struct { + filename string + Year int + Package string + Imports map[string]string + Getters []*getter +} + +type getter struct { + sortVal string // lower-case version of "ReceiverType.FieldName" + ReceiverVar string // the one-letter variable name to match the ReceiverType + ReceiverType string + FieldName string + FieldType string + ZeroValue string +} + +type byName []*getter + +func (b byName) Len() int { return len(b) } +func (b byName) Less(i, j int) bool { return b[i].sortVal < b[j].sortVal } +func (b byName) Swap(i, j int) { b[i], b[j] = b[j], b[i] } + +const source = `// Copyright {{.Year}} The go-github AUTHORS. All rights reserved. +// +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Code generated by gen-accessors; DO NOT EDIT. + +package {{.Package}} +{{with .Imports}} +import ( + {{- range . -}} + "{{.}}" + {{end -}} +) +{{end}} +{{range .Getters}} +// Get{{.FieldName}} returns the {{.FieldName}} field if it's non-nil, zero value otherwise. +func ({{.ReceiverVar}} *{{.ReceiverType}}) Get{{.FieldName}}() {{.FieldType}} { + if {{.ReceiverVar}} == nil || {{.ReceiverVar}}.{{.FieldName}} == nil { + return {{.ZeroValue}} + } + return *{{.ReceiverVar}}.{{.FieldName}} +} +{{end}} +` diff --git a/vendor/github.com/google/go-github/github/gists.go b/vendor/github.com/google/go-github/github/gists.go index a662d3548e..e7d6586c60 100644 --- a/vendor/github.com/google/go-github/github/gists.go +++ b/vendor/github.com/google/go-github/github/gists.go @@ -6,6 +6,7 @@ package github import ( + "context" "fmt" "time" ) @@ -13,10 +14,8 @@ import ( // GistsService handles communication with the Gist related // methods of the GitHub API. 
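Before moving on to the Gists changes, it helps to see what the generator above emits in practice: one nil-safe getter per pointer field, following the template at the end of the file. A minimal sketch (values are illustrative; Commit.GetMessage is one of the generated accessors that appears later in this diff):

    package main

    import (
        "fmt"

        "github.com/google/go-github/github"
    )

    func main() {
        // The generated getter checks both the receiver and the field
        // for nil, so this call is safe and prints "".
        var c *github.Commit
        fmt.Printf("%q\n", c.GetMessage())

        msg := "initial commit"
        c = &github.Commit{Message: &msg}
        fmt.Printf("%q\n", c.GetMessage()) // "initial commit"
    }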
// -// GitHub API docs: http://developer.github.com/v3/gists/ -type GistsService struct { - client *Client -} +// GitHub API docs: https://developer.github.com/v3/gists/ +type GistsService service // Gist represents a GitHub gist. type Gist struct { @@ -44,6 +43,8 @@ type GistFilename string type GistFile struct { Size *int `json:"size,omitempty"` Filename *string `json:"filename,omitempty"` + Language *string `json:"language,omitempty"` + Type *string `json:"type,omitempty"` RawURL *string `json:"raw_url,omitempty"` Content *string `json:"content,omitempty"` } @@ -52,6 +53,32 @@ func (g GistFile) String() string { return Stringify(g) } +// GistCommit represents a commit on a gist. +type GistCommit struct { + URL *string `json:"url,omitempty"` + Version *string `json:"version,omitempty"` + User *User `json:"user,omitempty"` + ChangeStatus *CommitStats `json:"change_status,omitempty"` + CommittedAt *Timestamp `json:"committed_at,omitempty"` +} + +func (gc GistCommit) String() string { + return Stringify(gc) +} + +// GistFork represents a fork of a gist. +type GistFork struct { + URL *string `json:"url,omitempty"` + User *User `json:"user,omitempty"` + ID *string `json:"id,omitempty"` + CreatedAt *Timestamp `json:"created_at,omitempty"` + UpdatedAt *Timestamp `json:"updated_at,omitempty"` +} + +func (gf GistFork) String() string { + return Stringify(gf) +} + // GistListOptions specifies the optional parameters to the // GistsService.List, GistsService.ListAll, and GistsService.ListStarred methods. type GistListOptions struct { @@ -66,8 +93,8 @@ type GistListOptions struct { // is authenticated, it will return all gists for the authenticated // user. // -// GitHub API docs: http://developer.github.com/v3/gists/#list-gists -func (s *GistsService) List(user string, opt *GistListOptions) ([]Gist, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/gists/#list-gists +func (s *GistsService) List(ctx context.Context, user string, opt *GistListOptions) ([]*Gist, *Response, error) { var u string if user != "" { u = fmt.Sprintf("users/%v/gists", user) @@ -84,19 +111,19 @@ func (s *GistsService) List(user string, opt *GistListOptions) ([]Gist, *Respons return nil, nil, err } - gists := new([]Gist) - resp, err := s.client.Do(req, gists) + var gists []*Gist + resp, err := s.client.Do(ctx, req, &gists) if err != nil { return nil, resp, err } - return *gists, resp, err + return gists, resp, nil } // ListAll lists all public gists. // -// GitHub API docs: http://developer.github.com/v3/gists/#list-gists -func (s *GistsService) ListAll(opt *GistListOptions) ([]Gist, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/gists/#list-gists +func (s *GistsService) ListAll(ctx context.Context, opt *GistListOptions) ([]*Gist, *Response, error) { u, err := addOptions("gists/public", opt) if err != nil { return nil, nil, err @@ -107,19 +134,19 @@ func (s *GistsService) ListAll(opt *GistListOptions) ([]Gist, *Response, error) return nil, nil, err } - gists := new([]Gist) - resp, err := s.client.Do(req, gists) + var gists []*Gist + resp, err := s.client.Do(ctx, req, &gists) if err != nil { return nil, resp, err } - return *gists, resp, err + return gists, resp, nil } // ListStarred lists starred gists of authenticated user. 
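A short usage sketch of the reworked signatures above, which now take a context.Context and return []*Gist rather than []Gist (the login "octocat" is a placeholder, and the Gist getters are assumed to be generated like the other accessors in this diff):

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/google/go-github/github"
    )

    func main() {
        ctx := context.Background()
        client := github.NewClient(nil) // unauthenticated client

        // List now threads the context through to client.Do.
        gists, _, err := client.Gists.List(ctx, "octocat", nil)
        if err != nil {
            log.Fatal(err)
        }
        for _, g := range gists {
            fmt.Println(g.GetID(), g.GetDescription())
        }
    }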
// -// GitHub API docs: http://developer.github.com/v3/gists/#list-gists -func (s *GistsService) ListStarred(opt *GistListOptions) ([]Gist, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/gists/#list-gists +func (s *GistsService) ListStarred(ctx context.Context, opt *GistListOptions) ([]*Gist, *Response, error) { u, err := addOptions("gists/starred", opt) if err != nil { return nil, nil, err @@ -130,141 +157,160 @@ func (s *GistsService) ListStarred(opt *GistListOptions) ([]Gist, *Response, err return nil, nil, err } - gists := new([]Gist) - resp, err := s.client.Do(req, gists) + var gists []*Gist + resp, err := s.client.Do(ctx, req, &gists) if err != nil { return nil, resp, err } - return *gists, resp, err + return gists, resp, nil } // Get a single gist. // -// GitHub API docs: http://developer.github.com/v3/gists/#get-a-single-gist -func (s *GistsService) Get(id string) (*Gist, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/gists/#get-a-single-gist +func (s *GistsService) Get(ctx context.Context, id string) (*Gist, *Response, error) { u := fmt.Sprintf("gists/%v", id) req, err := s.client.NewRequest("GET", u, nil) if err != nil { return nil, nil, err } gist := new(Gist) - resp, err := s.client.Do(req, gist) + resp, err := s.client.Do(ctx, req, gist) if err != nil { return nil, resp, err } - return gist, resp, err + return gist, resp, nil } // GetRevision gets a specific revision of a gist. // // GitHub API docs: https://developer.github.com/v3/gists/#get-a-specific-revision-of-a-gist -func (s *GistsService) GetRevision(id, sha string) (*Gist, *Response, error) { +func (s *GistsService) GetRevision(ctx context.Context, id, sha string) (*Gist, *Response, error) { u := fmt.Sprintf("gists/%v/%v", id, sha) req, err := s.client.NewRequest("GET", u, nil) if err != nil { return nil, nil, err } gist := new(Gist) - resp, err := s.client.Do(req, gist) + resp, err := s.client.Do(ctx, req, gist) if err != nil { return nil, resp, err } - return gist, resp, err + return gist, resp, nil } // Create a gist for authenticated user. // -// GitHub API docs: http://developer.github.com/v3/gists/#create-a-gist -func (s *GistsService) Create(gist *Gist) (*Gist, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/gists/#create-a-gist +func (s *GistsService) Create(ctx context.Context, gist *Gist) (*Gist, *Response, error) { u := "gists" req, err := s.client.NewRequest("POST", u, gist) if err != nil { return nil, nil, err } g := new(Gist) - resp, err := s.client.Do(req, g) + resp, err := s.client.Do(ctx, req, g) if err != nil { return nil, resp, err } - return g, resp, err + return g, resp, nil } // Edit a gist. // -// GitHub API docs: http://developer.github.com/v3/gists/#edit-a-gist -func (s *GistsService) Edit(id string, gist *Gist) (*Gist, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/gists/#edit-a-gist +func (s *GistsService) Edit(ctx context.Context, id string, gist *Gist) (*Gist, *Response, error) { u := fmt.Sprintf("gists/%v", id) req, err := s.client.NewRequest("PATCH", u, gist) if err != nil { return nil, nil, err } g := new(Gist) - resp, err := s.client.Do(req, g) + resp, err := s.client.Do(ctx, req, g) if err != nil { return nil, resp, err } - return g, resp, err + return g, resp, nil +} + +// ListCommits lists commits of a gist. 
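Create and Edit follow the same pattern. A sketch of building a gist for Create (field values are illustrative; github.String and github.Bool are the library's pointer helpers, the client would normally be authenticated, and the imports match the previous sketch):

    // createExampleGist is a sketch, not part of the library.
    func createExampleGist(ctx context.Context, client *github.Client) (*github.Gist, error) {
        gist := &github.Gist{
            Description: github.String("example gist"),
            Public:      github.Bool(false),
            Files: map[github.GistFilename]github.GistFile{
                "main.go": {Content: github.String("package main")},
            },
        }
        created, _, err := client.Gists.Create(ctx, gist)
        return created, err
    }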
+// +// GitHub API docs: https://developer.github.com/v3/gists/#list-gist-commits +func (s *GistsService) ListCommits(ctx context.Context, id string) ([]*GistCommit, *Response, error) { + u := fmt.Sprintf("gists/%v/commits", id) + req, err := s.client.NewRequest("GET", u, nil) + if err != nil { + return nil, nil, err + } + + var gistCommits []*GistCommit + resp, err := s.client.Do(ctx, req, &gistCommits) + if err != nil { + return nil, resp, err + } + + return gistCommits, resp, nil } // Delete a gist. // -// GitHub API docs: http://developer.github.com/v3/gists/#delete-a-gist -func (s *GistsService) Delete(id string) (*Response, error) { +// GitHub API docs: https://developer.github.com/v3/gists/#delete-a-gist +func (s *GistsService) Delete(ctx context.Context, id string) (*Response, error) { u := fmt.Sprintf("gists/%v", id) req, err := s.client.NewRequest("DELETE", u, nil) if err != nil { return nil, err } - return s.client.Do(req, nil) + return s.client.Do(ctx, req, nil) } // Star a gist on behalf of authenticated user. // -// GitHub API docs: http://developer.github.com/v3/gists/#star-a-gist -func (s *GistsService) Star(id string) (*Response, error) { +// GitHub API docs: https://developer.github.com/v3/gists/#star-a-gist +func (s *GistsService) Star(ctx context.Context, id string) (*Response, error) { u := fmt.Sprintf("gists/%v/star", id) req, err := s.client.NewRequest("PUT", u, nil) if err != nil { return nil, err } - return s.client.Do(req, nil) + return s.client.Do(ctx, req, nil) } // Unstar a gist on behalf of authenticated user. // -// Github API docs: http://developer.github.com/v3/gists/#unstar-a-gist -func (s *GistsService) Unstar(id string) (*Response, error) { +// GitHub API docs: https://developer.github.com/v3/gists/#unstar-a-gist +func (s *GistsService) Unstar(ctx context.Context, id string) (*Response, error) { u := fmt.Sprintf("gists/%v/star", id) req, err := s.client.NewRequest("DELETE", u, nil) if err != nil { return nil, err } - return s.client.Do(req, nil) + return s.client.Do(ctx, req, nil) } // IsStarred checks if a gist is starred by authenticated user. // -// GitHub API docs: http://developer.github.com/v3/gists/#check-if-a-gist-is-starred -func (s *GistsService) IsStarred(id string) (bool, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/gists/#check-if-a-gist-is-starred +func (s *GistsService) IsStarred(ctx context.Context, id string) (bool, *Response, error) { u := fmt.Sprintf("gists/%v/star", id) req, err := s.client.NewRequest("GET", u, nil) if err != nil { return false, nil, err } - resp, err := s.client.Do(req, nil) + resp, err := s.client.Do(ctx, req, nil) starred, err := parseBoolResponse(err) return starred, resp, err } // Fork a gist. // -// GitHub API docs: http://developer.github.com/v3/gists/#fork-a-gist -func (s *GistsService) Fork(id string) (*Gist, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/gists/#fork-a-gist +func (s *GistsService) Fork(ctx context.Context, id string) (*Gist, *Response, error) { u := fmt.Sprintf("gists/%v/forks", id) req, err := s.client.NewRequest("POST", u, nil) if err != nil { @@ -272,10 +318,29 @@ func (s *GistsService) Fork(id string) (*Gist, *Response, error) { } g := new(Gist) - resp, err := s.client.Do(req, g) + resp, err := s.client.Do(ctx, req, g) if err != nil { return nil, resp, err } - return g, resp, err + return g, resp, nil +} + +// ListForks lists forks of a gist. 
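The star calls above compose naturally; a sketch of a star-and-verify round trip (the gist ID and authenticated client are assumed to be in scope):

    func starAndVerify(ctx context.Context, client *github.Client, id string) (bool, error) {
        if _, err := client.Gists.Star(ctx, id); err != nil {
            return false, err
        }
        // IsStarred folds the 204/404 status pair into a bool via
        // parseBoolResponse, as its body above shows.
        starred, _, err := client.Gists.IsStarred(ctx, id)
        return starred, err
    }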
+// +// GitHub API docs: https://developer.github.com/v3/gists/#list-gist-forks +func (s *GistsService) ListForks(ctx context.Context, id string) ([]*GistFork, *Response, error) { + u := fmt.Sprintf("gists/%v/forks", id) + req, err := s.client.NewRequest("GET", u, nil) + if err != nil { + return nil, nil, err + } + + var gistForks []*GistFork + resp, err := s.client.Do(ctx, req, &gistForks) + if err != nil { + return nil, resp, err + } + + return gistForks, resp, nil } diff --git a/vendor/github.com/google/go-github/github/gists_comments.go b/vendor/github.com/google/go-github/github/gists_comments.go index c5c21bde66..2d0722375e 100644 --- a/vendor/github.com/google/go-github/github/gists_comments.go +++ b/vendor/github.com/google/go-github/github/gists_comments.go @@ -6,6 +6,7 @@ package github import ( + "context" "fmt" "time" ) @@ -25,8 +26,8 @@ func (g GistComment) String() string { // ListComments lists all comments for a gist. // -// GitHub API docs: http://developer.github.com/v3/gists/comments/#list-comments-on-a-gist -func (s *GistsService) ListComments(gistID string, opt *ListOptions) ([]GistComment, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/gists/comments/#list-comments-on-a-gist +func (s *GistsService) ListComments(ctx context.Context, gistID string, opt *ListOptions) ([]*GistComment, *Response, error) { u := fmt.Sprintf("gists/%v/comments", gistID) u, err := addOptions(u, opt) if err != nil { @@ -38,19 +39,19 @@ func (s *GistsService) ListComments(gistID string, opt *ListOptions) ([]GistComm return nil, nil, err } - comments := new([]GistComment) - resp, err := s.client.Do(req, comments) + var comments []*GistComment + resp, err := s.client.Do(ctx, req, &comments) if err != nil { return nil, resp, err } - return *comments, resp, err + return comments, resp, nil } // GetComment retrieves a single comment from a gist. // -// GitHub API docs: http://developer.github.com/v3/gists/comments/#get-a-single-comment -func (s *GistsService) GetComment(gistID string, commentID int) (*GistComment, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/gists/comments/#get-a-single-comment +func (s *GistsService) GetComment(ctx context.Context, gistID string, commentID int) (*GistComment, *Response, error) { u := fmt.Sprintf("gists/%v/comments/%v", gistID, commentID) req, err := s.client.NewRequest("GET", u, nil) if err != nil { @@ -58,18 +59,18 @@ func (s *GistsService) GetComment(gistID string, commentID int) (*GistComment, * } c := new(GistComment) - resp, err := s.client.Do(req, c) + resp, err := s.client.Do(ctx, req, c) if err != nil { return nil, resp, err } - return c, resp, err + return c, resp, nil } // CreateComment creates a comment for a gist. 
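A matching sketch for CreateComment, shown next (the body text is illustrative, and GistComment.Body is assumed to be a *string like the library's other fields):

    func addComment(ctx context.Context, client *github.Client, gistID string) (*github.GistComment, error) {
        comment := &github.GistComment{Body: github.String("Nice gist!")}
        c, _, err := client.Gists.CreateComment(ctx, gistID, comment)
        return c, err
    }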
// -// GitHub API docs: http://developer.github.com/v3/gists/comments/#create-a-comment -func (s *GistsService) CreateComment(gistID string, comment *GistComment) (*GistComment, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/gists/comments/#create-a-comment +func (s *GistsService) CreateComment(ctx context.Context, gistID string, comment *GistComment) (*GistComment, *Response, error) { u := fmt.Sprintf("gists/%v/comments", gistID) req, err := s.client.NewRequest("POST", u, comment) if err != nil { @@ -77,18 +78,18 @@ } c := new(GistComment) - resp, err := s.client.Do(req, c) + resp, err := s.client.Do(ctx, req, c) if err != nil { return nil, resp, err } - return c, resp, err + return c, resp, nil } // EditComment edits an existing gist comment. // -// GitHub API docs: http://developer.github.com/v3/gists/comments/#edit-a-comment -func (s *GistsService) EditComment(gistID string, commentID int, comment *GistComment) (*GistComment, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/gists/comments/#edit-a-comment +func (s *GistsService) EditComment(ctx context.Context, gistID string, commentID int, comment *GistComment) (*GistComment, *Response, error) { u := fmt.Sprintf("gists/%v/comments/%v", gistID, commentID) req, err := s.client.NewRequest("PATCH", u, comment) if err != nil { @@ -96,23 +97,23 @@ } c := new(GistComment) - resp, err := s.client.Do(req, c) + resp, err := s.client.Do(ctx, req, c) if err != nil { return nil, resp, err } - return c, resp, err + return c, resp, nil } // DeleteComment deletes a gist comment. // -// GitHub API docs: http://developer.github.com/v3/gists/comments/#delete-a-comment -func (s *GistsService) DeleteComment(gistID string, commentID int) (*Response, error) { +// GitHub API docs: https://developer.github.com/v3/gists/comments/#delete-a-comment +func (s *GistsService) DeleteComment(ctx context.Context, gistID string, commentID int) (*Response, error) { u := fmt.Sprintf("gists/%v/comments/%v", gistID, commentID) req, err := s.client.NewRequest("DELETE", u, nil) if err != nil { return nil, err } - return s.client.Do(req, nil) + return s.client.Do(ctx, req, nil) } diff --git a/vendor/github.com/google/go-github/github/git.go b/vendor/github.com/google/go-github/github/git.go index a80e55b9bb..1ce47437bd 100644 --- a/vendor/github.com/google/go-github/github/git.go +++ b/vendor/github.com/google/go-github/github/git.go @@ -8,7 +8,5 @@ package github // GitService handles communication with the git data related // methods of the GitHub API. // -// GitHub API docs: http://developer.github.com/v3/git/ -type GitService struct { - client *Client -} +// GitHub API docs: https://developer.github.com/v3/git/ +type GitService service diff --git a/vendor/github.com/google/go-github/github/git_blobs.go b/vendor/github.com/google/go-github/github/git_blobs.go index 55148fdb41..67ea74a196 100644 --- a/vendor/github.com/google/go-github/github/git_blobs.go +++ b/vendor/github.com/google/go-github/github/git_blobs.go @@ -5,7 +5,10 @@ package github -import "fmt" +import ( + "context" + "fmt" +) // Blob represents a blob object. type Blob struct { @@ -18,8 +21,8 @@ type Blob struct { // GetBlob fetches a blob from a repo given a SHA. 
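The practical payoff of the ctx parameter threaded through all of these files is cancellation and deadlines without touching the client. A sketch using GetBlob, whose updated signature follows (owner, repo, and sha are placeholders; this assumes the time package is imported as well):

    func fetchBlob(client *github.Client, owner, repo, sha string) (*github.Blob, error) {
        // Bound the request to five seconds; Do aborts with a context
        // error if the deadline passes first.
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        blob, _, err := client.Git.GetBlob(ctx, owner, repo, sha)
        return blob, err
    }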
// -// GitHub API docs: http://developer.github.com/v3/git/blobs/#get-a-blob -func (s *GitService) GetBlob(owner string, repo string, sha string) (*Blob, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/git/blobs/#get-a-blob +func (s *GitService) GetBlob(ctx context.Context, owner string, repo string, sha string) (*Blob, *Response, error) { u := fmt.Sprintf("repos/%v/%v/git/blobs/%v", owner, repo, sha) req, err := s.client.NewRequest("GET", u, nil) if err != nil { @@ -27,14 +30,14 @@ func (s *GitService) GetBlob(owner string, repo string, sha string) (*Blob, *Res } blob := new(Blob) - resp, err := s.client.Do(req, blob) + resp, err := s.client.Do(ctx, req, blob) return blob, resp, err } // CreateBlob creates a blob object. // // GitHub API docs: https://developer.github.com/v3/git/blobs/#create-a-blob -func (s *GitService) CreateBlob(owner string, repo string, blob *Blob) (*Blob, *Response, error) { +func (s *GitService) CreateBlob(ctx context.Context, owner string, repo string, blob *Blob) (*Blob, *Response, error) { u := fmt.Sprintf("repos/%v/%v/git/blobs", owner, repo) req, err := s.client.NewRequest("POST", u, blob) if err != nil { @@ -42,6 +45,6 @@ func (s *GitService) CreateBlob(owner string, repo string, blob *Blob) (*Blob, * } t := new(Blob) - resp, err := s.client.Do(req, t) + resp, err := s.client.Do(ctx, req, t) return t, resp, err } diff --git a/vendor/github.com/google/go-github/github/git_commits.go b/vendor/github.com/google/go-github/github/git_commits.go index 6584b777e0..22cb49afaf 100644 --- a/vendor/github.com/google/go-github/github/git_commits.go +++ b/vendor/github.com/google/go-github/github/git_commits.go @@ -6,22 +6,32 @@ package github import ( + "context" "fmt" "time" ) +// SignatureVerification represents GPG signature verification. +type SignatureVerification struct { + Verified *bool `json:"verified,omitempty"` + Reason *string `json:"reason,omitempty"` + Signature *string `json:"signature,omitempty"` + Payload *string `json:"payload,omitempty"` +} + // Commit represents a GitHub commit. type Commit struct { - SHA *string `json:"sha,omitempty"` - Author *CommitAuthor `json:"author,omitempty"` - Committer *CommitAuthor `json:"committer,omitempty"` - Message *string `json:"message,omitempty"` - Tree *Tree `json:"tree,omitempty"` - Parents []Commit `json:"parents,omitempty"` - Stats *CommitStats `json:"stats,omitempty"` - URL *string `json:"url,omitempty"` + SHA *string `json:"sha,omitempty"` + Author *CommitAuthor `json:"author,omitempty"` + Committer *CommitAuthor `json:"committer,omitempty"` + Message *string `json:"message,omitempty"` + Tree *Tree `json:"tree,omitempty"` + Parents []Commit `json:"parents,omitempty"` + Stats *CommitStats `json:"stats,omitempty"` + URL *string `json:"url,omitempty"` + Verification *SignatureVerification `json:"verification,omitempty"` - // CommentCount is the number of GitHub comments on the commit. This + // CommentCount is the number of GitHub comments on the commit. This // is only populated for requests that fetch GitHub data like // Pulls.ListCommits, Repositories.ListCommits, etc. CommentCount *int `json:"comment_count,omitempty"` @@ -31,12 +41,15 @@ func (c Commit) String() string { return Stringify(c) } -// CommitAuthor represents the author or committer of a commit. The commit +// CommitAuthor represents the author or committer of a commit. The commit // author may not correspond to a GitHub User. 
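With the signing preview Accept header that GetCommit sets just below, the new Verification field is populated; a sketch of inspecting it (GetVerified and GetReason are assumed generated for its *bool and *string fields):

    func reportSignature(ctx context.Context, client *github.Client, owner, repo, sha string) error {
        c, _, err := client.Git.GetCommit(ctx, owner, repo, sha)
        if err != nil {
            return err
        }
        if v := c.Verification; v != nil && v.GetVerified() {
            fmt.Println("commit signature verified")
        } else if v != nil {
            fmt.Println("unverified signature:", v.GetReason())
        }
        return nil
    }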
type CommitAuthor struct { Date *time.Time `json:"date,omitempty"` Name *string `json:"name,omitempty"` Email *string `json:"email,omitempty"` + + // The following fields are only populated by Webhook events. + Login *string `json:"username,omitempty"` // Renamed for go-github consistency. } func (c CommitAuthor) String() string { @@ -45,21 +58,24 @@ func (c CommitAuthor) String() string { // GetCommit fetches the Commit object for a given SHA. // -// GitHub API docs: http://developer.github.com/v3/git/commits/#get-a-commit -func (s *GitService) GetCommit(owner string, repo string, sha string) (*Commit, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/git/commits/#get-a-commit +func (s *GitService) GetCommit(ctx context.Context, owner string, repo string, sha string) (*Commit, *Response, error) { u := fmt.Sprintf("repos/%v/%v/git/commits/%v", owner, repo, sha) req, err := s.client.NewRequest("GET", u, nil) if err != nil { return nil, nil, err } + // TODO: remove custom Accept header when this API fully launches. + req.Header.Set("Accept", mediaTypeGitSigningPreview) + c := new(Commit) - resp, err := s.client.Do(req, c) + resp, err := s.client.Do(ctx, req, c) if err != nil { return nil, resp, err } - return c, resp, err + return c, resp, nil } // createCommit represents the body of a CreateCommit request. @@ -72,29 +88,33 @@ type createCommit struct { } // CreateCommit creates a new commit in a repository. +// commit must not be nil. // // The commit.Committer is optional and will be filled with the commit.Author // data if omitted. If the commit.Author is omitted, it will be filled in with // the authenticated user’s information and the current date. // -// GitHub API docs: http://developer.github.com/v3/git/commits/#create-a-commit -func (s *GitService) CreateCommit(owner string, repo string, commit *Commit) (*Commit, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/git/commits/#create-a-commit +func (s *GitService) CreateCommit(ctx context.Context, owner string, repo string, commit *Commit) (*Commit, *Response, error) { + if commit == nil { + return nil, nil, fmt.Errorf("commit must be provided") + } + u := fmt.Sprintf("repos/%v/%v/git/commits", owner, repo) - body := &createCommit{} - if commit != nil { - parents := make([]string, len(commit.Parents)) - for i, parent := range commit.Parents { - parents[i] = *parent.SHA - } + parents := make([]string, len(commit.Parents)) + for i, parent := range commit.Parents { + parents[i] = *parent.SHA + } - body = &createCommit{ - Author: commit.Author, - Committer: commit.Committer, - Message: commit.Message, - Tree: commit.Tree.SHA, - Parents: parents, - } + body := &createCommit{ + Author: commit.Author, + Committer: commit.Committer, + Message: commit.Message, + Parents: parents, + } + if commit.Tree != nil { + body.Tree = commit.Tree.SHA } req, err := s.client.NewRequest("POST", u, body) @@ -103,10 +123,10 @@ func (s *GitService) CreateCommit(owner string, repo string, commit *Commit) (*C } c := new(Commit) - resp, err := s.client.Do(req, c) + resp, err := s.client.Do(ctx, req, c) if err != nil { return nil, resp, err } - return c, resp, err + return c, resp, nil } diff --git a/vendor/github.com/google/go-github/github/git_refs.go b/vendor/github.com/google/go-github/github/git_refs.go index 3d2f6c8a34..bd5df3f72a 100644 --- a/vendor/github.com/google/go-github/github/git_refs.go +++ b/vendor/github.com/google/go-github/github/git_refs.go @@ -6,6 +6,7 @@ package github import ( + "context" "fmt" 
"strings" ) @@ -46,8 +47,8 @@ type updateRefRequest struct { // GetRef fetches the Reference object for a given Git ref. // -// GitHub API docs: http://developer.github.com/v3/git/refs/#get-a-reference -func (s *GitService) GetRef(owner string, repo string, ref string) (*Reference, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/git/refs/#get-a-reference +func (s *GitService) GetRef(ctx context.Context, owner string, repo string, ref string) (*Reference, *Response, error) { ref = strings.TrimPrefix(ref, "refs/") u := fmt.Sprintf("repos/%v/%v/git/refs/%v", owner, repo, ref) req, err := s.client.NewRequest("GET", u, nil) @@ -56,12 +57,12 @@ func (s *GitService) GetRef(owner string, repo string, ref string) (*Reference, } r := new(Reference) - resp, err := s.client.Do(req, r) + resp, err := s.client.Do(ctx, req, r) if err != nil { return nil, resp, err } - return r, resp, err + return r, resp, nil } // ReferenceListOptions specifies optional parameters to the @@ -74,8 +75,8 @@ type ReferenceListOptions struct { // ListRefs lists all refs in a repository. // -// GitHub API docs: http://developer.github.com/v3/git/refs/#get-all-references -func (s *GitService) ListRefs(owner, repo string, opt *ReferenceListOptions) ([]Reference, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/git/refs/#get-all-references +func (s *GitService) ListRefs(ctx context.Context, owner, repo string, opt *ReferenceListOptions) ([]*Reference, *Response, error) { var u string if opt != nil && opt.Type != "" { u = fmt.Sprintf("repos/%v/%v/git/refs/%v", owner, repo, opt.Type) @@ -92,19 +93,19 @@ func (s *GitService) ListRefs(owner, repo string, opt *ReferenceListOptions) ([] return nil, nil, err } - var rs []Reference - resp, err := s.client.Do(req, &rs) + var rs []*Reference + resp, err := s.client.Do(ctx, req, &rs) if err != nil { return nil, resp, err } - return rs, resp, err + return rs, resp, nil } // CreateRef creates a new ref in a repository. // -// GitHub API docs: http://developer.github.com/v3/git/refs/#create-a-reference -func (s *GitService) CreateRef(owner string, repo string, ref *Reference) (*Reference, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/git/refs/#create-a-reference +func (s *GitService) CreateRef(ctx context.Context, owner string, repo string, ref *Reference) (*Reference, *Response, error) { u := fmt.Sprintf("repos/%v/%v/git/refs", owner, repo) req, err := s.client.NewRequest("POST", u, &createRefRequest{ // back-compat with previous behavior that didn't require 'refs/' prefix @@ -116,18 +117,18 @@ func (s *GitService) CreateRef(owner string, repo string, ref *Reference) (*Refe } r := new(Reference) - resp, err := s.client.Do(req, r) + resp, err := s.client.Do(ctx, req, r) if err != nil { return nil, resp, err } - return r, resp, err + return r, resp, nil } // UpdateRef updates an existing ref in a repository. 
// -// GitHub API docs: http://developer.github.com/v3/git/refs/#update-a-reference -func (s *GitService) UpdateRef(owner string, repo string, ref *Reference, force bool) (*Reference, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/git/refs/#update-a-reference +func (s *GitService) UpdateRef(ctx context.Context, owner string, repo string, ref *Reference, force bool) (*Reference, *Response, error) { refPath := strings.TrimPrefix(*ref.Ref, "refs/") u := fmt.Sprintf("repos/%v/%v/git/refs/%v", owner, repo, refPath) req, err := s.client.NewRequest("PATCH", u, &updateRefRequest{ @@ -139,18 +140,18 @@ } r := new(Reference) - resp, err := s.client.Do(req, r) + resp, err := s.client.Do(ctx, req, r) if err != nil { return nil, resp, err } - return r, resp, err + return r, resp, nil } // DeleteRef deletes a ref from a repository. // -// GitHub API docs: http://developer.github.com/v3/git/refs/#delete-a-reference -func (s *GitService) DeleteRef(owner string, repo string, ref string) (*Response, error) { +// GitHub API docs: https://developer.github.com/v3/git/refs/#delete-a-reference +func (s *GitService) DeleteRef(ctx context.Context, owner string, repo string, ref string) (*Response, error) { ref = strings.TrimPrefix(ref, "refs/") u := fmt.Sprintf("repos/%v/%v/git/refs/%v", owner, repo, ref) req, err := s.client.NewRequest("DELETE", u, nil) @@ -158,5 +159,5 @@ return nil, err } - return s.client.Do(req, nil) + return s.client.Do(ctx, req, nil) } diff --git a/vendor/github.com/google/go-github/github/git_tags.go b/vendor/github.com/google/go-github/github/git_tags.go index 7b53f5cc65..08df3d3d1b 100644 --- a/vendor/github.com/google/go-github/github/git_tags.go +++ b/vendor/github.com/google/go-github/github/git_tags.go @@ -6,20 +6,22 @@ package github import ( + "context" "fmt" ) // Tag represents a tag object. type Tag struct { - Tag *string `json:"tag,omitempty"` - SHA *string `json:"sha,omitempty"` - URL *string `json:"url,omitempty"` - Message *string `json:"message,omitempty"` - Tagger *CommitAuthor `json:"tagger,omitempty"` - Object *GitObject `json:"object,omitempty"` + Tag *string `json:"tag,omitempty"` + SHA *string `json:"sha,omitempty"` + URL *string `json:"url,omitempty"` + Message *string `json:"message,omitempty"` + Tagger *CommitAuthor `json:"tagger,omitempty"` + Object *GitObject `json:"object,omitempty"` + Verification *SignatureVerification `json:"verification,omitempty"` } -// createTagRequest represents the body of a CreateTag request. This is mostly +// createTagRequest represents the body of a CreateTag request. This is mostly // identical to Tag with the exception that the object SHA and Type are // top-level fields, rather than being nested inside a JSON object. type createTagRequest struct { @@ -32,23 +34,26 @@ // GetTag fetches a tag from a repo given a SHA. 
// -// GitHub API docs: http://developer.github.com/v3/git/tags/#get-a-tag -func (s *GitService) GetTag(owner string, repo string, sha string) (*Tag, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/git/tags/#get-a-tag +func (s *GitService) GetTag(ctx context.Context, owner string, repo string, sha string) (*Tag, *Response, error) { u := fmt.Sprintf("repos/%v/%v/git/tags/%v", owner, repo, sha) req, err := s.client.NewRequest("GET", u, nil) if err != nil { return nil, nil, err } + // TODO: remove custom Accept header when this API fully launches. + req.Header.Set("Accept", mediaTypeGitSigningPreview) + tag := new(Tag) - resp, err := s.client.Do(req, tag) + resp, err := s.client.Do(ctx, req, tag) return tag, resp, err } // CreateTag creates a tag object. // -// GitHub API docs: http://developer.github.com/v3/git/tags/#create-a-tag-object -func (s *GitService) CreateTag(owner string, repo string, tag *Tag) (*Tag, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/git/tags/#create-a-tag-object +func (s *GitService) CreateTag(ctx context.Context, owner string, repo string, tag *Tag) (*Tag, *Response, error) { u := fmt.Sprintf("repos/%v/%v/git/tags", owner, repo) // convert Tag into a createTagRequest @@ -68,6 +73,6 @@ func (s *GitService) CreateTag(owner string, repo string, tag *Tag) (*Tag, *Resp } t := new(Tag) - resp, err := s.client.Do(req, t) + resp, err := s.client.Do(ctx, req, t) return t, resp, err } diff --git a/vendor/github.com/google/go-github/github/git_trees.go b/vendor/github.com/google/go-github/github/git_trees.go index 9efa4b3806..bdd481f1ee 100644 --- a/vendor/github.com/google/go-github/github/git_trees.go +++ b/vendor/github.com/google/go-github/github/git_trees.go @@ -5,7 +5,10 @@ package github -import "fmt" +import ( + "context" + "fmt" +) // Tree represents a GitHub tree. type Tree struct { @@ -17,7 +20,7 @@ func (t Tree) String() string { return Stringify(t) } -// TreeEntry represents the contents of a tree structure. TreeEntry can +// TreeEntry represents the contents of a tree structure. TreeEntry can // represent either a blob, a commit (in the case of a submodule), or another // tree. type TreeEntry struct { @@ -35,8 +38,8 @@ func (t TreeEntry) String() string { // GetTree fetches the Tree object for a given sha hash from a repository. // -// GitHub API docs: http://developer.github.com/v3/git/trees/#get-a-tree -func (s *GitService) GetTree(owner string, repo string, sha string, recursive bool) (*Tree, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/git/trees/#get-a-tree +func (s *GitService) GetTree(ctx context.Context, owner string, repo string, sha string, recursive bool) (*Tree, *Response, error) { u := fmt.Sprintf("repos/%v/%v/git/trees/%v", owner, repo, sha) if recursive { u += "?recursive=1" @@ -48,12 +51,12 @@ func (s *GitService) GetTree(owner string, repo string, sha string, recursive bo } t := new(Tree) - resp, err := s.client.Do(req, t) + resp, err := s.client.Do(ctx, req, t) if err != nil { return nil, resp, err } - return t, resp, err + return t, resp, nil } // createTree represents the body of a CreateTree request. @@ -62,12 +65,12 @@ type createTree struct { Entries []TreeEntry `json:"tree"` } -// CreateTree creates a new tree in a repository. If both a tree and a nested +// CreateTree creates a new tree in a repository. 
If both a tree and a nested // path modifying that tree are specified, it will overwrite the contents of // that tree with the new path contents and write a new tree out. // -// GitHub API docs: http://developer.github.com/v3/git/trees/#create-a-tree -func (s *GitService) CreateTree(owner string, repo string, baseTree string, entries []TreeEntry) (*Tree, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/git/trees/#create-a-tree +func (s *GitService) CreateTree(ctx context.Context, owner string, repo string, baseTree string, entries []TreeEntry) (*Tree, *Response, error) { u := fmt.Sprintf("repos/%v/%v/git/trees", owner, repo) body := &createTree{ @@ -80,10 +83,10 @@ func (s *GitService) CreateTree(owner string, repo string, baseTree string, entr } t := new(Tree) - resp, err := s.client.Do(req, t) + resp, err := s.client.Do(ctx, req, t) if err != nil { return nil, resp, err } - return t, resp, err + return t, resp, nil } diff --git a/vendor/github.com/google/go-github/github/github-accessors.go b/vendor/github.com/google/go-github/github/github-accessors.go new file mode 100644 index 0000000000..fd3b31cc46 --- /dev/null +++ b/vendor/github.com/google/go-github/github/github-accessors.go @@ -0,0 +1,7261 @@ +// Copyright 2017 The go-github AUTHORS. All rights reserved. +// +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Code generated by gen-accessors; DO NOT EDIT. + +package github + +import ( + "encoding/json" + "time" +) + +// GetRetryAfter returns the RetryAfter field if it's non-nil, zero value otherwise. +func (a *AbuseRateLimitError) GetRetryAfter() time.Duration { + if a == nil || a.RetryAfter == nil { + return 0 + } + return *a.RetryAfter +} + +// GetVerifiablePasswordAuthentication returns the VerifiablePasswordAuthentication field if it's non-nil, zero value otherwise. +func (a *APIMeta) GetVerifiablePasswordAuthentication() bool { + if a == nil || a.VerifiablePasswordAuthentication == nil { + return false + } + return *a.VerifiablePasswordAuthentication +} + +// GetCreatedAt returns the CreatedAt field if it's non-nil, zero value otherwise. +func (a *Authorization) GetCreatedAt() Timestamp { + if a == nil || a.CreatedAt == nil { + return Timestamp{} + } + return *a.CreatedAt +} + +// GetFingerprint returns the Fingerprint field if it's non-nil, zero value otherwise. +func (a *Authorization) GetFingerprint() string { + if a == nil || a.Fingerprint == nil { + return "" + } + return *a.Fingerprint +} + +// GetHashedToken returns the HashedToken field if it's non-nil, zero value otherwise. +func (a *Authorization) GetHashedToken() string { + if a == nil || a.HashedToken == nil { + return "" + } + return *a.HashedToken +} + +// GetID returns the ID field if it's non-nil, zero value otherwise. +func (a *Authorization) GetID() int { + if a == nil || a.ID == nil { + return 0 + } + return *a.ID +} + +// GetNote returns the Note field if it's non-nil, zero value otherwise. +func (a *Authorization) GetNote() string { + if a == nil || a.Note == nil { + return "" + } + return *a.Note +} + +// GetNoteURL returns the NoteURL field if it's non-nil, zero value otherwise. +func (a *Authorization) GetNoteURL() string { + if a == nil || a.NoteURL == nil { + return "" + } + return *a.NoteURL +} + +// GetToken returns the Token field if it's non-nil, zero value otherwise. 
+func (a *Authorization) GetToken() string { + if a == nil || a.Token == nil { + return "" + } + return *a.Token +} + +// GetTokenLastEight returns the TokenLastEight field if it's non-nil, zero value otherwise. +func (a *Authorization) GetTokenLastEight() string { + if a == nil || a.TokenLastEight == nil { + return "" + } + return *a.TokenLastEight +} + +// GetUpdateAt returns the UpdateAt field if it's non-nil, zero value otherwise. +func (a *Authorization) GetUpdateAt() Timestamp { + if a == nil || a.UpdateAt == nil { + return Timestamp{} + } + return *a.UpdateAt +} + +// GetURL returns the URL field if it's non-nil, zero value otherwise. +func (a *Authorization) GetURL() string { + if a == nil || a.URL == nil { + return "" + } + return *a.URL +} + +// GetClientID returns the ClientID field if it's non-nil, zero value otherwise. +func (a *AuthorizationApp) GetClientID() string { + if a == nil || a.ClientID == nil { + return "" + } + return *a.ClientID +} + +// GetName returns the Name field if it's non-nil, zero value otherwise. +func (a *AuthorizationApp) GetName() string { + if a == nil || a.Name == nil { + return "" + } + return *a.Name +} + +// GetURL returns the URL field if it's non-nil, zero value otherwise. +func (a *AuthorizationApp) GetURL() string { + if a == nil || a.URL == nil { + return "" + } + return *a.URL +} + +// GetClientID returns the ClientID field if it's non-nil, zero value otherwise. +func (a *AuthorizationRequest) GetClientID() string { + if a == nil || a.ClientID == nil { + return "" + } + return *a.ClientID +} + +// GetClientSecret returns the ClientSecret field if it's non-nil, zero value otherwise. +func (a *AuthorizationRequest) GetClientSecret() string { + if a == nil || a.ClientSecret == nil { + return "" + } + return *a.ClientSecret +} + +// GetFingerprint returns the Fingerprint field if it's non-nil, zero value otherwise. +func (a *AuthorizationRequest) GetFingerprint() string { + if a == nil || a.Fingerprint == nil { + return "" + } + return *a.Fingerprint +} + +// GetNote returns the Note field if it's non-nil, zero value otherwise. +func (a *AuthorizationRequest) GetNote() string { + if a == nil || a.Note == nil { + return "" + } + return *a.Note +} + +// GetNoteURL returns the NoteURL field if it's non-nil, zero value otherwise. +func (a *AuthorizationRequest) GetNoteURL() string { + if a == nil || a.NoteURL == nil { + return "" + } + return *a.NoteURL +} + +// GetFingerprint returns the Fingerprint field if it's non-nil, zero value otherwise. +func (a *AuthorizationUpdateRequest) GetFingerprint() string { + if a == nil || a.Fingerprint == nil { + return "" + } + return *a.Fingerprint +} + +// GetNote returns the Note field if it's non-nil, zero value otherwise. +func (a *AuthorizationUpdateRequest) GetNote() string { + if a == nil || a.Note == nil { + return "" + } + return *a.Note +} + +// GetNoteURL returns the NoteURL field if it's non-nil, zero value otherwise. +func (a *AuthorizationUpdateRequest) GetNoteURL() string { + if a == nil || a.NoteURL == nil { + return "" + } + return *a.NoteURL +} + +// GetContent returns the Content field if it's non-nil, zero value otherwise. +func (b *Blob) GetContent() string { + if b == nil || b.Content == nil { + return "" + } + return *b.Content +} + +// GetEncoding returns the Encoding field if it's non-nil, zero value otherwise. 
+func (b *Blob) GetEncoding() string { + if b == nil || b.Encoding == nil { + return "" + } + return *b.Encoding +} + +// GetSHA returns the SHA field if it's non-nil, zero value otherwise. +func (b *Blob) GetSHA() string { + if b == nil || b.SHA == nil { + return "" + } + return *b.SHA +} + +// GetSize returns the Size field if it's non-nil, zero value otherwise. +func (b *Blob) GetSize() int { + if b == nil || b.Size == nil { + return 0 + } + return *b.Size +} + +// GetURL returns the URL field if it's non-nil, zero value otherwise. +func (b *Blob) GetURL() string { + if b == nil || b.URL == nil { + return "" + } + return *b.URL +} + +// GetName returns the Name field if it's non-nil, zero value otherwise. +func (b *Branch) GetName() string { + if b == nil || b.Name == nil { + return "" + } + return *b.Name +} + +// GetProtected returns the Protected field if it's non-nil, zero value otherwise. +func (b *Branch) GetProtected() bool { + if b == nil || b.Protected == nil { + return false + } + return *b.Protected +} + +// GetHTMLURL returns the HTMLURL field if it's non-nil, zero value otherwise. +func (c *CodeResult) GetHTMLURL() string { + if c == nil || c.HTMLURL == nil { + return "" + } + return *c.HTMLURL +} + +// GetName returns the Name field if it's non-nil, zero value otherwise. +func (c *CodeResult) GetName() string { + if c == nil || c.Name == nil { + return "" + } + return *c.Name +} + +// GetPath returns the Path field if it's non-nil, zero value otherwise. +func (c *CodeResult) GetPath() string { + if c == nil || c.Path == nil { + return "" + } + return *c.Path +} + +// GetSHA returns the SHA field if it's non-nil, zero value otherwise. +func (c *CodeResult) GetSHA() string { + if c == nil || c.SHA == nil { + return "" + } + return *c.SHA +} + +// GetIncompleteResults returns the IncompleteResults field if it's non-nil, zero value otherwise. +func (c *CodeSearchResult) GetIncompleteResults() bool { + if c == nil || c.IncompleteResults == nil { + return false + } + return *c.IncompleteResults +} + +// GetTotal returns the Total field if it's non-nil, zero value otherwise. +func (c *CodeSearchResult) GetTotal() int { + if c == nil || c.Total == nil { + return 0 + } + return *c.Total +} + +// GetCommitURL returns the CommitURL field if it's non-nil, zero value otherwise. +func (c *CombinedStatus) GetCommitURL() string { + if c == nil || c.CommitURL == nil { + return "" + } + return *c.CommitURL +} + +// GetName returns the Name field if it's non-nil, zero value otherwise. +func (c *CombinedStatus) GetName() string { + if c == nil || c.Name == nil { + return "" + } + return *c.Name +} + +// GetRepositoryURL returns the RepositoryURL field if it's non-nil, zero value otherwise. +func (c *CombinedStatus) GetRepositoryURL() string { + if c == nil || c.RepositoryURL == nil { + return "" + } + return *c.RepositoryURL +} + +// GetSHA returns the SHA field if it's non-nil, zero value otherwise. +func (c *CombinedStatus) GetSHA() string { + if c == nil || c.SHA == nil { + return "" + } + return *c.SHA +} + +// GetState returns the State field if it's non-nil, zero value otherwise. +func (c *CombinedStatus) GetState() string { + if c == nil || c.State == nil { + return "" + } + return *c.State +} + +// GetTotalCount returns the TotalCount field if it's non-nil, zero value otherwise. 
+func (c *CombinedStatus) GetTotalCount() int {
+	if c == nil || c.TotalCount == nil {
+		return 0
+	}
+	return *c.TotalCount
+}
+
+// GetCommentCount returns the CommentCount field if it's non-nil, zero value otherwise.
+func (c *Commit) GetCommentCount() int {
+	if c == nil || c.CommentCount == nil {
+		return 0
+	}
+	return *c.CommentCount
+}
+
+// GetMessage returns the Message field if it's non-nil, zero value otherwise.
+func (c *Commit) GetMessage() string {
+	if c == nil || c.Message == nil {
+		return ""
+	}
+	return *c.Message
+}
+
+// GetSHA returns the SHA field if it's non-nil, zero value otherwise.
+func (c *Commit) GetSHA() string {
+	if c == nil || c.SHA == nil {
+		return ""
+	}
+	return *c.SHA
+}
+
+// GetURL returns the URL field if it's non-nil, zero value otherwise.
+func (c *Commit) GetURL() string {
+	if c == nil || c.URL == nil {
+		return ""
+	}
+	return *c.URL
+}
+
+// GetDate returns the Date field if it's non-nil, zero value otherwise.
+func (c *CommitAuthor) GetDate() time.Time {
+	if c == nil || c.Date == nil {
+		return time.Time{}
+	}
+	return *c.Date
+}
+
+// GetEmail returns the Email field if it's non-nil, zero value otherwise.
+func (c *CommitAuthor) GetEmail() string {
+	if c == nil || c.Email == nil {
+		return ""
+	}
+	return *c.Email
+}
+
+// GetLogin returns the Login field if it's non-nil, zero value otherwise.
+func (c *CommitAuthor) GetLogin() string {
+	if c == nil || c.Login == nil {
+		return ""
+	}
+	return *c.Login
+}
+
+// GetName returns the Name field if it's non-nil, zero value otherwise.
+func (c *CommitAuthor) GetName() string {
+	if c == nil || c.Name == nil {
+		return ""
+	}
+	return *c.Name
+}
+
+// GetAction returns the Action field if it's non-nil, zero value otherwise.
+func (c *CommitCommentEvent) GetAction() string {
+	if c == nil || c.Action == nil {
+		return ""
+	}
+	return *c.Action
+}
+
+// GetAdditions returns the Additions field if it's non-nil, zero value otherwise.
+func (c *CommitFile) GetAdditions() int {
+	if c == nil || c.Additions == nil {
+		return 0
+	}
+	return *c.Additions
+}
+
+// GetBlobURL returns the BlobURL field if it's non-nil, zero value otherwise.
+func (c *CommitFile) GetBlobURL() string {
+	if c == nil || c.BlobURL == nil {
+		return ""
+	}
+	return *c.BlobURL
+}
+
+// GetChanges returns the Changes field if it's non-nil, zero value otherwise.
+func (c *CommitFile) GetChanges() int {
+	if c == nil || c.Changes == nil {
+		return 0
+	}
+	return *c.Changes
+}
+
+// GetContentsURL returns the ContentsURL field if it's non-nil, zero value otherwise.
+func (c *CommitFile) GetContentsURL() string {
+	if c == nil || c.ContentsURL == nil {
+		return ""
+	}
+	return *c.ContentsURL
+}
+
+// GetDeletions returns the Deletions field if it's non-nil, zero value otherwise.
+func (c *CommitFile) GetDeletions() int {
+	if c == nil || c.Deletions == nil {
+		return 0
+	}
+	return *c.Deletions
+}
+
+// GetFilename returns the Filename field if it's non-nil, zero value otherwise.
+func (c *CommitFile) GetFilename() string {
+	if c == nil || c.Filename == nil {
+		return ""
+	}
+	return *c.Filename
+}
+
+// GetPatch returns the Patch field if it's non-nil, zero value otherwise.
+func (c *CommitFile) GetPatch() string {
+	if c == nil || c.Patch == nil {
+		return ""
+	}
+	return *c.Patch
+}
+
+// GetRawURL returns the RawURL field if it's non-nil, zero value otherwise.
+func (c *CommitFile) GetRawURL() string {
+	if c == nil || c.RawURL == nil {
+		return ""
+	}
+	return *c.RawURL
+}
+
+// GetSHA returns the SHA field if it's non-nil, zero value otherwise.
+func (c *CommitFile) GetSHA() string {
+	if c == nil || c.SHA == nil {
+		return ""
+	}
+	return *c.SHA
+}
+
+// GetStatus returns the Status field if it's non-nil, zero value otherwise.
+func (c *CommitFile) GetStatus() string {
+	if c == nil || c.Status == nil {
+		return ""
+	}
+	return *c.Status
+}
+
+// GetAuthorDate returns the AuthorDate field if it's non-nil, zero value otherwise.
+func (c *CommitResult) GetAuthorDate() Timestamp {
+	if c == nil || c.AuthorDate == nil {
+		return Timestamp{}
+	}
+	return *c.AuthorDate
+}
+
+// GetAuthorEmail returns the AuthorEmail field if it's non-nil, zero value otherwise.
+func (c *CommitResult) GetAuthorEmail() string {
+	if c == nil || c.AuthorEmail == nil {
+		return ""
+	}
+	return *c.AuthorEmail
+}
+
+// GetAuthorID returns the AuthorID field if it's non-nil, zero value otherwise.
+func (c *CommitResult) GetAuthorID() int {
+	if c == nil || c.AuthorID == nil {
+		return 0
+	}
+	return *c.AuthorID
+}
+
+// GetAuthorName returns the AuthorName field if it's non-nil, zero value otherwise.
+func (c *CommitResult) GetAuthorName() string {
+	if c == nil || c.AuthorName == nil {
+		return ""
+	}
+	return *c.AuthorName
+}
+
+// GetCommitterDate returns the CommitterDate field if it's non-nil, zero value otherwise.
+func (c *CommitResult) GetCommitterDate() Timestamp {
+	if c == nil || c.CommitterDate == nil {
+		return Timestamp{}
+	}
+	return *c.CommitterDate
+}
+
+// GetCommitterEmail returns the CommitterEmail field if it's non-nil, zero value otherwise.
+func (c *CommitResult) GetCommitterEmail() string {
+	if c == nil || c.CommitterEmail == nil {
+		return ""
+	}
+	return *c.CommitterEmail
+}
+
+// GetCommitterID returns the CommitterID field if it's non-nil, zero value otherwise.
+func (c *CommitResult) GetCommitterID() int {
+	if c == nil || c.CommitterID == nil {
+		return 0
+	}
+	return *c.CommitterID
+}
+
+// GetCommitterName returns the CommitterName field if it's non-nil, zero value otherwise.
+func (c *CommitResult) GetCommitterName() string {
+	if c == nil || c.CommitterName == nil {
+		return ""
+	}
+	return *c.CommitterName
+}
+
+// GetHash returns the Hash field if it's non-nil, zero value otherwise.
+func (c *CommitResult) GetHash() string {
+	if c == nil || c.Hash == nil {
+		return ""
+	}
+	return *c.Hash
+}
+
+// GetMessage returns the Message field if it's non-nil, zero value otherwise.
+func (c *CommitResult) GetMessage() string {
+	if c == nil || c.Message == nil {
+		return ""
+	}
+	return *c.Message
+}
+
+// GetAheadBy returns the AheadBy field if it's non-nil, zero value otherwise.
+func (c *CommitsComparison) GetAheadBy() int {
+	if c == nil || c.AheadBy == nil {
+		return 0
+	}
+	return *c.AheadBy
+}
+
+// GetBehindBy returns the BehindBy field if it's non-nil, zero value otherwise.
+func (c *CommitsComparison) GetBehindBy() int {
+	if c == nil || c.BehindBy == nil {
+		return 0
+	}
+	return *c.BehindBy
+}
+
+// GetStatus returns the Status field if it's non-nil, zero value otherwise.
+func (c *CommitsComparison) GetStatus() string {
+	if c == nil || c.Status == nil {
+		return ""
+	}
+	return *c.Status
+}
+
+// GetTotalCommits returns the TotalCommits field if it's non-nil, zero value otherwise.
+func (c *CommitsComparison) GetTotalCommits() int {
+	if c == nil || c.TotalCommits == nil {
+		return 0
+	}
+	return *c.TotalCommits
+}
+
+// GetIncompleteResults returns the IncompleteResults field if it's non-nil, zero value otherwise.
+func (c *CommitsSearchResult) GetIncompleteResults() bool {
+	if c == nil || c.IncompleteResults == nil {
+		return false
+	}
+	return *c.IncompleteResults
+}
+
+// GetTotal returns the Total field if it's non-nil, zero value otherwise.
+func (c *CommitsSearchResult) GetTotal() int {
+	if c == nil || c.Total == nil {
+		return 0
+	}
+	return *c.Total
+}
+
+// GetAdditions returns the Additions field if it's non-nil, zero value otherwise.
+func (c *CommitStats) GetAdditions() int {
+	if c == nil || c.Additions == nil {
+		return 0
+	}
+	return *c.Additions
+}
+
+// GetDeletions returns the Deletions field if it's non-nil, zero value otherwise.
+func (c *CommitStats) GetDeletions() int {
+	if c == nil || c.Deletions == nil {
+		return 0
+	}
+	return *c.Deletions
+}
+
+// GetTotal returns the Total field if it's non-nil, zero value otherwise.
+func (c *CommitStats) GetTotal() int {
+	if c == nil || c.Total == nil {
+		return 0
+	}
+	return *c.Total
+}
+
+// GetAvatarURL returns the AvatarURL field if it's non-nil, zero value otherwise.
+func (c *Contributor) GetAvatarURL() string {
+	if c == nil || c.AvatarURL == nil {
+		return ""
+	}
+	return *c.AvatarURL
+}
+
+// GetContributions returns the Contributions field if it's non-nil, zero value otherwise.
+func (c *Contributor) GetContributions() int {
+	if c == nil || c.Contributions == nil {
+		return 0
+	}
+	return *c.Contributions
+}
+
+// GetEventsURL returns the EventsURL field if it's non-nil, zero value otherwise.
+func (c *Contributor) GetEventsURL() string {
+	if c == nil || c.EventsURL == nil {
+		return ""
+	}
+	return *c.EventsURL
+}
+
+// GetFollowersURL returns the FollowersURL field if it's non-nil, zero value otherwise.
+func (c *Contributor) GetFollowersURL() string {
+	if c == nil || c.FollowersURL == nil {
+		return ""
+	}
+	return *c.FollowersURL
+}
+
+// GetFollowingURL returns the FollowingURL field if it's non-nil, zero value otherwise.
+func (c *Contributor) GetFollowingURL() string {
+	if c == nil || c.FollowingURL == nil {
+		return ""
+	}
+	return *c.FollowingURL
+}
+
+// GetGistsURL returns the GistsURL field if it's non-nil, zero value otherwise.
+func (c *Contributor) GetGistsURL() string {
+	if c == nil || c.GistsURL == nil {
+		return ""
+	}
+	return *c.GistsURL
+}
+
+// GetGravatarID returns the GravatarID field if it's non-nil, zero value otherwise.
+func (c *Contributor) GetGravatarID() string {
+	if c == nil || c.GravatarID == nil {
+		return ""
+	}
+	return *c.GravatarID
+}
+
+// GetHTMLURL returns the HTMLURL field if it's non-nil, zero value otherwise.
+func (c *Contributor) GetHTMLURL() string {
+	if c == nil || c.HTMLURL == nil {
+		return ""
+	}
+	return *c.HTMLURL
+}
+
+// GetID returns the ID field if it's non-nil, zero value otherwise.
+func (c *Contributor) GetID() int {
+	if c == nil || c.ID == nil {
+		return 0
+	}
+	return *c.ID
+}
+
+// GetLogin returns the Login field if it's non-nil, zero value otherwise.
+func (c *Contributor) GetLogin() string {
+	if c == nil || c.Login == nil {
+		return ""
+	}
+	return *c.Login
+}
+
+// GetOrganizationsURL returns the OrganizationsURL field if it's non-nil, zero value otherwise.
+func (c *Contributor) GetOrganizationsURL() string {
+	if c == nil || c.OrganizationsURL == nil {
+		return ""
+	}
+	return *c.OrganizationsURL
+}
+
+// GetReceivedEventsURL returns the ReceivedEventsURL field if it's non-nil, zero value otherwise.
+func (c *Contributor) GetReceivedEventsURL() string {
+	if c == nil || c.ReceivedEventsURL == nil {
+		return ""
+	}
+	return *c.ReceivedEventsURL
+}
+
+// GetReposURL returns the ReposURL field if it's non-nil, zero value otherwise.
+func (c *Contributor) GetReposURL() string {
+	if c == nil || c.ReposURL == nil {
+		return ""
+	}
+	return *c.ReposURL
+}
+
+// GetSiteAdmin returns the SiteAdmin field if it's non-nil, zero value otherwise.
+func (c *Contributor) GetSiteAdmin() bool {
+	if c == nil || c.SiteAdmin == nil {
+		return false
+	}
+	return *c.SiteAdmin
+}
+
+// GetStarredURL returns the StarredURL field if it's non-nil, zero value otherwise.
+func (c *Contributor) GetStarredURL() string {
+	if c == nil || c.StarredURL == nil {
+		return ""
+	}
+	return *c.StarredURL
+}
+
+// GetSubscriptionsURL returns the SubscriptionsURL field if it's non-nil, zero value otherwise.
+func (c *Contributor) GetSubscriptionsURL() string {
+	if c == nil || c.SubscriptionsURL == nil {
+		return ""
+	}
+	return *c.SubscriptionsURL
+}
+
+// GetType returns the Type field if it's non-nil, zero value otherwise.
+func (c *Contributor) GetType() string {
+	if c == nil || c.Type == nil {
+		return ""
+	}
+	return *c.Type
+}
+
+// GetURL returns the URL field if it's non-nil, zero value otherwise.
+func (c *Contributor) GetURL() string {
+	if c == nil || c.URL == nil {
+		return ""
+	}
+	return *c.URL
+}
+
+// GetTotal returns the Total field if it's non-nil, zero value otherwise.
+func (c *ContributorStats) GetTotal() int {
+	if c == nil || c.Total == nil {
+		return 0
+	}
+	return *c.Total
+}
+
+// GetMessage returns the Message field if it's non-nil, zero value otherwise.
+func (c *createCommit) GetMessage() string {
+	if c == nil || c.Message == nil {
+		return ""
+	}
+	return *c.Message
+}
+
+// GetTree returns the Tree field if it's non-nil, zero value otherwise.
+func (c *createCommit) GetTree() string {
+	if c == nil || c.Tree == nil {
+		return ""
+	}
+	return *c.Tree
+}
+
+// GetDescription returns the Description field if it's non-nil, zero value otherwise.
+func (c *CreateEvent) GetDescription() string {
+	if c == nil || c.Description == nil {
+		return ""
+	}
+	return *c.Description
+}
+
+// GetMasterBranch returns the MasterBranch field if it's non-nil, zero value otherwise.
+func (c *CreateEvent) GetMasterBranch() string {
+	if c == nil || c.MasterBranch == nil {
+		return ""
+	}
+	return *c.MasterBranch
+}
+
+// GetPusherType returns the PusherType field if it's non-nil, zero value otherwise.
+func (c *CreateEvent) GetPusherType() string {
+	if c == nil || c.PusherType == nil {
+		return ""
+	}
+	return *c.PusherType
+}
+
+// GetRef returns the Ref field if it's non-nil, zero value otherwise.
+func (c *CreateEvent) GetRef() string {
+	if c == nil || c.Ref == nil {
+		return ""
+	}
+	return *c.Ref
+}
+
+// GetRefType returns the RefType field if it's non-nil, zero value otherwise.
+func (c *CreateEvent) GetRefType() string {
+	if c == nil || c.RefType == nil {
+		return ""
+	}
+	return *c.RefType
+}
+
+// GetRef returns the Ref field if it's non-nil, zero value otherwise.
+func (c *createRefRequest) GetRef() string {
+	if c == nil || c.Ref == nil {
+		return ""
+	}
+	return *c.Ref
+}
+
+// GetSHA returns the SHA field if it's non-nil, zero value otherwise.
+func (c *createRefRequest) GetSHA() string {
+	if c == nil || c.SHA == nil {
+		return ""
+	}
+	return *c.SHA
+}
+
+// GetMessage returns the Message field if it's non-nil, zero value otherwise.
+func (c *createTagRequest) GetMessage() string {
+	if c == nil || c.Message == nil {
+		return ""
+	}
+	return *c.Message
+}
+
+// GetObject returns the Object field if it's non-nil, zero value otherwise.
+func (c *createTagRequest) GetObject() string {
+	if c == nil || c.Object == nil {
+		return ""
+	}
+	return *c.Object
+}
+
+// GetTag returns the Tag field if it's non-nil, zero value otherwise.
+func (c *createTagRequest) GetTag() string {
+	if c == nil || c.Tag == nil {
+		return ""
+	}
+	return *c.Tag
+}
+
+// GetType returns the Type field if it's non-nil, zero value otherwise.
+func (c *createTagRequest) GetType() string {
+	if c == nil || c.Type == nil {
+		return ""
+	}
+	return *c.Type
+}
+
+// GetPusherType returns the PusherType field if it's non-nil, zero value otherwise.
+func (d *DeleteEvent) GetPusherType() string {
+	if d == nil || d.PusherType == nil {
+		return ""
+	}
+	return *d.PusherType
+}
+
+// GetRef returns the Ref field if it's non-nil, zero value otherwise.
+func (d *DeleteEvent) GetRef() string {
+	if d == nil || d.Ref == nil {
+		return ""
+	}
+	return *d.Ref
+}
+
+// GetRefType returns the RefType field if it's non-nil, zero value otherwise.
+func (d *DeleteEvent) GetRefType() string {
+	if d == nil || d.RefType == nil {
+		return ""
+	}
+	return *d.RefType
+}
+
+// GetCreatedAt returns the CreatedAt field if it's non-nil, zero value otherwise.
+func (d *Deployment) GetCreatedAt() Timestamp {
+	if d == nil || d.CreatedAt == nil {
+		return Timestamp{}
+	}
+	return *d.CreatedAt
+}
+
+// GetDescription returns the Description field if it's non-nil, zero value otherwise.
+func (d *Deployment) GetDescription() string {
+	if d == nil || d.Description == nil {
+		return ""
+	}
+	return *d.Description
+}
+
+// GetEnvironment returns the Environment field if it's non-nil, zero value otherwise.
+func (d *Deployment) GetEnvironment() string {
+	if d == nil || d.Environment == nil {
+		return ""
+	}
+	return *d.Environment
+}
+
+// GetID returns the ID field if it's non-nil, zero value otherwise.
+func (d *Deployment) GetID() int {
+	if d == nil || d.ID == nil {
+		return 0
+	}
+	return *d.ID
+}
+
+// GetRef returns the Ref field if it's non-nil, zero value otherwise.
+func (d *Deployment) GetRef() string {
+	if d == nil || d.Ref == nil {
+		return ""
+	}
+	return *d.Ref
+}
+
+// GetRepositoryURL returns the RepositoryURL field if it's non-nil, zero value otherwise.
+func (d *Deployment) GetRepositoryURL() string {
+	if d == nil || d.RepositoryURL == nil {
+		return ""
+	}
+	return *d.RepositoryURL
+}
+
+// GetSHA returns the SHA field if it's non-nil, zero value otherwise.
+func (d *Deployment) GetSHA() string {
+	if d == nil || d.SHA == nil {
+		return ""
+	}
+	return *d.SHA
+}
+
+// GetStatusesURL returns the StatusesURL field if it's non-nil, zero value otherwise.
+func (d *Deployment) GetStatusesURL() string {
+	if d == nil || d.StatusesURL == nil {
+		return ""
+	}
+	return *d.StatusesURL
+}
+
+// GetTask returns the Task field if it's non-nil, zero value otherwise.
+func (d *Deployment) GetTask() string {
+	if d == nil || d.Task == nil {
+		return ""
+	}
+	return *d.Task
+}
+
+// GetUpdatedAt returns the UpdatedAt field if it's non-nil, zero value otherwise.
+func (d *Deployment) GetUpdatedAt() Timestamp {
+	if d == nil || d.UpdatedAt == nil {
+		return Timestamp{}
+	}
+	return *d.UpdatedAt
+}
+
+// GetURL returns the URL field if it's non-nil, zero value otherwise.
+func (d *Deployment) GetURL() string {
+	if d == nil || d.URL == nil {
+		return ""
+	}
+	return *d.URL
+}
+
+// GetAutoMerge returns the AutoMerge field if it's non-nil, zero value otherwise.
+func (d *DeploymentRequest) GetAutoMerge() bool {
+	if d == nil || d.AutoMerge == nil {
+		return false
+	}
+	return *d.AutoMerge
+}
+
+// GetDescription returns the Description field if it's non-nil, zero value otherwise.
+func (d *DeploymentRequest) GetDescription() string {
+	if d == nil || d.Description == nil {
+		return ""
+	}
+	return *d.Description
+}
+
+// GetEnvironment returns the Environment field if it's non-nil, zero value otherwise.
+func (d *DeploymentRequest) GetEnvironment() string {
+	if d == nil || d.Environment == nil {
+		return ""
+	}
+	return *d.Environment
+}
+
+// GetPayload returns the Payload field if it's non-nil, zero value otherwise.
+func (d *DeploymentRequest) GetPayload() string {
+	if d == nil || d.Payload == nil {
+		return ""
+	}
+	return *d.Payload
+}
+
+// GetProductionEnvironment returns the ProductionEnvironment field if it's non-nil, zero value otherwise.
+func (d *DeploymentRequest) GetProductionEnvironment() bool {
+	if d == nil || d.ProductionEnvironment == nil {
+		return false
+	}
+	return *d.ProductionEnvironment
+}
+
+// GetRef returns the Ref field if it's non-nil, zero value otherwise.
+func (d *DeploymentRequest) GetRef() string {
+	if d == nil || d.Ref == nil {
+		return ""
+	}
+	return *d.Ref
+}
+
+// GetRequiredContexts returns the RequiredContexts field if it's non-nil, zero value otherwise.
+func (d *DeploymentRequest) GetRequiredContexts() []string {
+	if d == nil || d.RequiredContexts == nil {
+		return nil
+	}
+	return *d.RequiredContexts
+}
+
+// GetTask returns the Task field if it's non-nil, zero value otherwise.
+func (d *DeploymentRequest) GetTask() string {
+	if d == nil || d.Task == nil {
+		return ""
+	}
+	return *d.Task
+}
+
+// GetTransientEnvironment returns the TransientEnvironment field if it's non-nil, zero value otherwise.
+func (d *DeploymentRequest) GetTransientEnvironment() bool {
+	if d == nil || d.TransientEnvironment == nil {
+		return false
+	}
+	return *d.TransientEnvironment
+}
+
+// GetCreatedAt returns the CreatedAt field if it's non-nil, zero value otherwise.
+func (d *DeploymentStatus) GetCreatedAt() Timestamp {
+	if d == nil || d.CreatedAt == nil {
+		return Timestamp{}
+	}
+	return *d.CreatedAt
+}
+
+// GetDeploymentURL returns the DeploymentURL field if it's non-nil, zero value otherwise.
+func (d *DeploymentStatus) GetDeploymentURL() string {
+	if d == nil || d.DeploymentURL == nil {
+		return ""
+	}
+	return *d.DeploymentURL
+}
+
+// GetDescription returns the Description field if it's non-nil, zero value otherwise.
+func (d *DeploymentStatus) GetDescription() string {
+	if d == nil || d.Description == nil {
+		return ""
+	}
+	return *d.Description
+}
+
+// GetID returns the ID field if it's non-nil, zero value otherwise.
+func (d *DeploymentStatus) GetID() int {
+	if d == nil || d.ID == nil {
+		return 0
+	}
+	return *d.ID
+}
+
+// GetRepositoryURL returns the RepositoryURL field if it's non-nil, zero value otherwise.
+func (d *DeploymentStatus) GetRepositoryURL() string {
+	if d == nil || d.RepositoryURL == nil {
+		return ""
+	}
+	return *d.RepositoryURL
+}
+
+// GetState returns the State field if it's non-nil, zero value otherwise.
+func (d *DeploymentStatus) GetState() string {
+	if d == nil || d.State == nil {
+		return ""
+	}
+	return *d.State
+}
+
+// GetTargetURL returns the TargetURL field if it's non-nil, zero value otherwise.
+func (d *DeploymentStatus) GetTargetURL() string {
+	if d == nil || d.TargetURL == nil {
+		return ""
+	}
+	return *d.TargetURL
+}
+
+// GetUpdatedAt returns the UpdatedAt field if it's non-nil, zero value otherwise.
+func (d *DeploymentStatus) GetUpdatedAt() Timestamp {
+	if d == nil || d.UpdatedAt == nil {
+		return Timestamp{}
+	}
+	return *d.UpdatedAt
+}
+
+// GetAutoInactive returns the AutoInactive field if it's non-nil, zero value otherwise.
+func (d *DeploymentStatusRequest) GetAutoInactive() bool {
+	if d == nil || d.AutoInactive == nil {
+		return false
+	}
+	return *d.AutoInactive
+}
+
+// GetDescription returns the Description field if it's non-nil, zero value otherwise.
+func (d *DeploymentStatusRequest) GetDescription() string {
+	if d == nil || d.Description == nil {
+		return ""
+	}
+	return *d.Description
+}
+
+// GetEnvironmentURL returns the EnvironmentURL field if it's non-nil, zero value otherwise.
+func (d *DeploymentStatusRequest) GetEnvironmentURL() string {
+	if d == nil || d.EnvironmentURL == nil {
+		return ""
+	}
+	return *d.EnvironmentURL
+}
+
+// GetLogURL returns the LogURL field if it's non-nil, zero value otherwise.
+func (d *DeploymentStatusRequest) GetLogURL() string {
+	if d == nil || d.LogURL == nil {
+		return ""
+	}
+	return *d.LogURL
+}
+
+// GetState returns the State field if it's non-nil, zero value otherwise.
+func (d *DeploymentStatusRequest) GetState() string {
+	if d == nil || d.State == nil {
+		return ""
+	}
+	return *d.State
+}
+
+// GetBody returns the Body field if it's non-nil, zero value otherwise.
+func (d *DraftReviewComment) GetBody() string {
+	if d == nil || d.Body == nil {
+		return ""
+	}
+	return *d.Body
+}
+
+// GetPath returns the Path field if it's non-nil, zero value otherwise.
+func (d *DraftReviewComment) GetPath() string {
+	if d == nil || d.Path == nil {
+		return ""
+	}
+	return *d.Path
+}
+
+// GetPosition returns the Position field if it's non-nil, zero value otherwise.
+func (d *DraftReviewComment) GetPosition() int {
+	if d == nil || d.Position == nil {
+		return 0
+	}
+	return *d.Position
+}
+
+// GetCreatedAt returns the CreatedAt field if it's non-nil, zero value otherwise.
+func (e *Event) GetCreatedAt() time.Time {
+	if e == nil || e.CreatedAt == nil {
+		return time.Time{}
+	}
+	return *e.CreatedAt
+}
+
+// GetID returns the ID field if it's non-nil, zero value otherwise.
+func (e *Event) GetID() string {
+	if e == nil || e.ID == nil {
+		return ""
+	}
+	return *e.ID
+}
+
+// GetPublic returns the Public field if it's non-nil, zero value otherwise.
+func (e *Event) GetPublic() bool {
+	if e == nil || e.Public == nil {
+		return false
+	}
+	return *e.Public
+}
+
+// GetRawPayload returns the RawPayload field if it's non-nil, zero value otherwise.
+func (e *Event) GetRawPayload() json.RawMessage {
+	if e == nil || e.RawPayload == nil {
+		return json.RawMessage{}
+	}
+	return *e.RawPayload
+}
+
+// GetType returns the Type field if it's non-nil, zero value otherwise.
+func (e *Event) GetType() string {
+	if e == nil || e.Type == nil {
+		return ""
+	}
+	return *e.Type
+}
+
+// GetHRef returns the HRef field if it's non-nil, zero value otherwise.
+func (f *FeedLink) GetHRef() string {
+	if f == nil || f.HRef == nil {
+		return ""
+	}
+	return *f.HRef
+}
+
+// GetType returns the Type field if it's non-nil, zero value otherwise.
+func (f *FeedLink) GetType() string {
+	if f == nil || f.Type == nil {
+		return ""
+	}
+	return *f.Type
+}
+
+// GetCurrentUserActorURL returns the CurrentUserActorURL field if it's non-nil, zero value otherwise.
+func (f *Feeds) GetCurrentUserActorURL() string {
+	if f == nil || f.CurrentUserActorURL == nil {
+		return ""
+	}
+	return *f.CurrentUserActorURL
+}
+
+// GetCurrentUserOrganizationURL returns the CurrentUserOrganizationURL field if it's non-nil, zero value otherwise.
+func (f *Feeds) GetCurrentUserOrganizationURL() string {
+	if f == nil || f.CurrentUserOrganizationURL == nil {
+		return ""
+	}
+	return *f.CurrentUserOrganizationURL
+}
+
+// GetCurrentUserPublicURL returns the CurrentUserPublicURL field if it's non-nil, zero value otherwise.
+func (f *Feeds) GetCurrentUserPublicURL() string {
+	if f == nil || f.CurrentUserPublicURL == nil {
+		return ""
+	}
+	return *f.CurrentUserPublicURL
+}
+
+// GetCurrentUserURL returns the CurrentUserURL field if it's non-nil, zero value otherwise.
+func (f *Feeds) GetCurrentUserURL() string {
+	if f == nil || f.CurrentUserURL == nil {
+		return ""
+	}
+	return *f.CurrentUserURL
+}
+
+// GetTimelineURL returns the TimelineURL field if it's non-nil, zero value otherwise.
+func (f *Feeds) GetTimelineURL() string {
+	if f == nil || f.TimelineURL == nil {
+		return ""
+	}
+	return *f.TimelineURL
+}
+
+// GetUserURL returns the UserURL field if it's non-nil, zero value otherwise.
+func (f *Feeds) GetUserURL() string {
+	if f == nil || f.UserURL == nil {
+		return ""
+	}
+	return *f.UserURL
+}
+
+// GetComments returns the Comments field if it's non-nil, zero value otherwise.
+func (g *Gist) GetComments() int {
+	if g == nil || g.Comments == nil {
+		return 0
+	}
+	return *g.Comments
+}
+
+// GetCreatedAt returns the CreatedAt field if it's non-nil, zero value otherwise.
+func (g *Gist) GetCreatedAt() time.Time {
+	if g == nil || g.CreatedAt == nil {
+		return time.Time{}
+	}
+	return *g.CreatedAt
+}
+
+// GetDescription returns the Description field if it's non-nil, zero value otherwise.
+func (g *Gist) GetDescription() string {
+	if g == nil || g.Description == nil {
+		return ""
+	}
+	return *g.Description
+}
+
+// GetGitPullURL returns the GitPullURL field if it's non-nil, zero value otherwise.
+func (g *Gist) GetGitPullURL() string {
+	if g == nil || g.GitPullURL == nil {
+		return ""
+	}
+	return *g.GitPullURL
+}
+
+// GetGitPushURL returns the GitPushURL field if it's non-nil, zero value otherwise.
+func (g *Gist) GetGitPushURL() string {
+	if g == nil || g.GitPushURL == nil {
+		return ""
+	}
+	return *g.GitPushURL
+}
+
+// GetHTMLURL returns the HTMLURL field if it's non-nil, zero value otherwise.
+func (g *Gist) GetHTMLURL() string {
+	if g == nil || g.HTMLURL == nil {
+		return ""
+	}
+	return *g.HTMLURL
+}
+
+// GetID returns the ID field if it's non-nil, zero value otherwise.
+func (g *Gist) GetID() string {
+	if g == nil || g.ID == nil {
+		return ""
+	}
+	return *g.ID
+}
+
+// GetPublic returns the Public field if it's non-nil, zero value otherwise.
+func (g *Gist) GetPublic() bool {
+	if g == nil || g.Public == nil {
+		return false
+	}
+	return *g.Public
+}
+
+// GetUpdatedAt returns the UpdatedAt field if it's non-nil, zero value otherwise.
+func (g *Gist) GetUpdatedAt() time.Time {
+	if g == nil || g.UpdatedAt == nil {
+		return time.Time{}
+	}
+	return *g.UpdatedAt
+}
+
+// GetBody returns the Body field if it's non-nil, zero value otherwise.
+func (g *GistComment) GetBody() string {
+	if g == nil || g.Body == nil {
+		return ""
+	}
+	return *g.Body
+}
+
+// GetCreatedAt returns the CreatedAt field if it's non-nil, zero value otherwise.
+func (g *GistComment) GetCreatedAt() time.Time {
+	if g == nil || g.CreatedAt == nil {
+		return time.Time{}
+	}
+	return *g.CreatedAt
+}
+
+// GetID returns the ID field if it's non-nil, zero value otherwise.
+func (g *GistComment) GetID() int {
+	if g == nil || g.ID == nil {
+		return 0
+	}
+	return *g.ID
+}
+
+// GetURL returns the URL field if it's non-nil, zero value otherwise.
+func (g *GistComment) GetURL() string {
+	if g == nil || g.URL == nil {
+		return ""
+	}
+	return *g.URL
+}
+
+// GetCommittedAt returns the CommittedAt field if it's non-nil, zero value otherwise.
+func (g *GistCommit) GetCommittedAt() Timestamp {
+	if g == nil || g.CommittedAt == nil {
+		return Timestamp{}
+	}
+	return *g.CommittedAt
+}
+
+// GetURL returns the URL field if it's non-nil, zero value otherwise.
+func (g *GistCommit) GetURL() string {
+	if g == nil || g.URL == nil {
+		return ""
+	}
+	return *g.URL
+}
+
+// GetVersion returns the Version field if it's non-nil, zero value otherwise.
+func (g *GistCommit) GetVersion() string {
+	if g == nil || g.Version == nil {
+		return ""
+	}
+	return *g.Version
+}
+
+// GetContent returns the Content field if it's non-nil, zero value otherwise.
+func (g *GistFile) GetContent() string {
+	if g == nil || g.Content == nil {
+		return ""
+	}
+	return *g.Content
+}
+
+// GetFilename returns the Filename field if it's non-nil, zero value otherwise.
+func (g *GistFile) GetFilename() string {
+	if g == nil || g.Filename == nil {
+		return ""
+	}
+	return *g.Filename
+}
+
+// GetLanguage returns the Language field if it's non-nil, zero value otherwise.
+func (g *GistFile) GetLanguage() string {
+	if g == nil || g.Language == nil {
+		return ""
+	}
+	return *g.Language
+}
+
+// GetRawURL returns the RawURL field if it's non-nil, zero value otherwise.
+func (g *GistFile) GetRawURL() string {
+	if g == nil || g.RawURL == nil {
+		return ""
+	}
+	return *g.RawURL
+}
+
+// GetSize returns the Size field if it's non-nil, zero value otherwise.
+func (g *GistFile) GetSize() int {
+	if g == nil || g.Size == nil {
+		return 0
+	}
+	return *g.Size
+}
+
+// GetType returns the Type field if it's non-nil, zero value otherwise.
+func (g *GistFile) GetType() string {
+	if g == nil || g.Type == nil {
+		return ""
+	}
+	return *g.Type
+}
+
+// GetCreatedAt returns the CreatedAt field if it's non-nil, zero value otherwise.
+func (g *GistFork) GetCreatedAt() Timestamp {
+	if g == nil || g.CreatedAt == nil {
+		return Timestamp{}
+	}
+	return *g.CreatedAt
+}
+
+// GetID returns the ID field if it's non-nil, zero value otherwise.
+func (g *GistFork) GetID() string {
+	if g == nil || g.ID == nil {
+		return ""
+	}
+	return *g.ID
+}
+
+// GetUpdatedAt returns the UpdatedAt field if it's non-nil, zero value otherwise.
+func (g *GistFork) GetUpdatedAt() Timestamp {
+	if g == nil || g.UpdatedAt == nil {
+		return Timestamp{}
+	}
+	return *g.UpdatedAt
+}
+
+// GetURL returns the URL field if it's non-nil, zero value otherwise.
+func (g *GistFork) GetURL() string {
+	if g == nil || g.URL == nil {
+		return ""
+	}
+	return *g.URL
+}
+
+// GetName returns the Name field if it's non-nil, zero value otherwise.
+func (g *Gitignore) GetName() string {
+	if g == nil || g.Name == nil {
+		return ""
+	}
+	return *g.Name
+}
+
+// GetSource returns the Source field if it's non-nil, zero value otherwise.
+func (g *Gitignore) GetSource() string {
+	if g == nil || g.Source == nil {
+		return ""
+	}
+	return *g.Source
+}
+
+// GetSHA returns the SHA field if it's non-nil, zero value otherwise.
+func (g *GitObject) GetSHA() string {
+	if g == nil || g.SHA == nil {
+		return ""
+	}
+	return *g.SHA
+}
+
+// GetType returns the Type field if it's non-nil, zero value otherwise.
+func (g *GitObject) GetType() string {
+	if g == nil || g.Type == nil {
+		return ""
+	}
+	return *g.Type
+}
+
+// GetURL returns the URL field if it's non-nil, zero value otherwise.
+func (g *GitObject) GetURL() string {
+	if g == nil || g.URL == nil {
+		return ""
+	}
+	return *g.URL
+}
+
+// GetEmail returns the Email field if it's non-nil, zero value otherwise.
+func (g *GPGEmail) GetEmail() string {
+	if g == nil || g.Email == nil {
+		return ""
+	}
+	return *g.Email
+}
+
+// GetVerified returns the Verified field if it's non-nil, zero value otherwise.
+func (g *GPGEmail) GetVerified() bool {
+	if g == nil || g.Verified == nil {
+		return false
+	}
+	return *g.Verified
+}
+
+// GetCanCertify returns the CanCertify field if it's non-nil, zero value otherwise.
+func (g *GPGKey) GetCanCertify() bool {
+	if g == nil || g.CanCertify == nil {
+		return false
+	}
+	return *g.CanCertify
+}
+
+// GetCanEncryptComms returns the CanEncryptComms field if it's non-nil, zero value otherwise.
+func (g *GPGKey) GetCanEncryptComms() bool {
+	if g == nil || g.CanEncryptComms == nil {
+		return false
+	}
+	return *g.CanEncryptComms
+}
+
+// GetCanEncryptStorage returns the CanEncryptStorage field if it's non-nil, zero value otherwise.
+func (g *GPGKey) GetCanEncryptStorage() bool {
+	if g == nil || g.CanEncryptStorage == nil {
+		return false
+	}
+	return *g.CanEncryptStorage
+}
+
+// GetCanSign returns the CanSign field if it's non-nil, zero value otherwise.
+func (g *GPGKey) GetCanSign() bool {
+	if g == nil || g.CanSign == nil {
+		return false
+	}
+	return *g.CanSign
+}
+
+// GetCreatedAt returns the CreatedAt field if it's non-nil, zero value otherwise.
+func (g *GPGKey) GetCreatedAt() time.Time {
+	if g == nil || g.CreatedAt == nil {
+		return time.Time{}
+	}
+	return *g.CreatedAt
+}
+
+// GetExpiresAt returns the ExpiresAt field if it's non-nil, zero value otherwise.
+func (g *GPGKey) GetExpiresAt() time.Time {
+	if g == nil || g.ExpiresAt == nil {
+		return time.Time{}
+	}
+	return *g.ExpiresAt
+}
+
+// GetID returns the ID field if it's non-nil, zero value otherwise.
+func (g *GPGKey) GetID() int {
+	if g == nil || g.ID == nil {
+		return 0
+	}
+	return *g.ID
+}
+
+// GetKeyID returns the KeyID field if it's non-nil, zero value otherwise.
+func (g *GPGKey) GetKeyID() string {
+	if g == nil || g.KeyID == nil {
+		return ""
+	}
+	return *g.KeyID
+}
+
+// GetPrimaryKeyID returns the PrimaryKeyID field if it's non-nil, zero value otherwise.
+func (g *GPGKey) GetPrimaryKeyID() int {
+	if g == nil || g.PrimaryKeyID == nil {
+		return 0
+	}
+	return *g.PrimaryKeyID
+}
+
+// GetPublicKey returns the PublicKey field if it's non-nil, zero value otherwise.
+func (g *GPGKey) GetPublicKey() string {
+	if g == nil || g.PublicKey == nil {
+		return ""
+	}
+	return *g.PublicKey
+}
+
+// GetCreatedAt returns the CreatedAt field if it's non-nil, zero value otherwise.
+func (g *Grant) GetCreatedAt() Timestamp {
+	if g == nil || g.CreatedAt == nil {
+		return Timestamp{}
+	}
+	return *g.CreatedAt
+}
+
+// GetID returns the ID field if it's non-nil, zero value otherwise.
+func (g *Grant) GetID() int {
+	if g == nil || g.ID == nil {
+		return 0
+	}
+	return *g.ID
+}
+
+// GetUpdatedAt returns the UpdatedAt field if it's non-nil, zero value otherwise.
+func (g *Grant) GetUpdatedAt() Timestamp {
+	if g == nil || g.UpdatedAt == nil {
+		return Timestamp{}
+	}
+	return *g.UpdatedAt
+}
+
+// GetURL returns the URL field if it's non-nil, zero value otherwise.
+func (g *Grant) GetURL() string {
+	if g == nil || g.URL == nil {
+		return ""
+	}
+	return *g.URL
+}
+
+// GetActive returns the Active field if it's non-nil, zero value otherwise.
+func (h *Hook) GetActive() bool {
+	if h == nil || h.Active == nil {
+		return false
+	}
+	return *h.Active
+}
+
+// GetCreatedAt returns the CreatedAt field if it's non-nil, zero value otherwise.
+func (h *Hook) GetCreatedAt() time.Time {
+	if h == nil || h.CreatedAt == nil {
+		return time.Time{}
+	}
+	return *h.CreatedAt
+}
+
+// GetID returns the ID field if it's non-nil, zero value otherwise.
+func (h *Hook) GetID() int {
+	if h == nil || h.ID == nil {
+		return 0
+	}
+	return *h.ID
+}
+
+// GetName returns the Name field if it's non-nil, zero value otherwise.
+func (h *Hook) GetName() string {
+	if h == nil || h.Name == nil {
+		return ""
+	}
+	return *h.Name
+}
+
+// GetUpdatedAt returns the UpdatedAt field if it's non-nil, zero value otherwise.
+func (h *Hook) GetUpdatedAt() time.Time {
+	if h == nil || h.UpdatedAt == nil {
+		return time.Time{}
+	}
+	return *h.UpdatedAt
+}
+
+// GetURL returns the URL field if it's non-nil, zero value otherwise.
+func (h *Hook) GetURL() string {
+	if h == nil || h.URL == nil {
+		return ""
+	}
+	return *h.URL
+}
+
+// GetAuthorsCount returns the AuthorsCount field if it's non-nil, zero value otherwise.
+func (i *Import) GetAuthorsCount() int {
+	if i == nil || i.AuthorsCount == nil {
+		return 0
+	}
+	return *i.AuthorsCount
+}
+
+// GetAuthorsURL returns the AuthorsURL field if it's non-nil, zero value otherwise.
+func (i *Import) GetAuthorsURL() string {
+	if i == nil || i.AuthorsURL == nil {
+		return ""
+	}
+	return *i.AuthorsURL
+}
+
+// GetCommitCount returns the CommitCount field if it's non-nil, zero value otherwise.
+func (i *Import) GetCommitCount() int {
+	if i == nil || i.CommitCount == nil {
+		return 0
+	}
+	return *i.CommitCount
+}
+
+// GetFailedStep returns the FailedStep field if it's non-nil, zero value otherwise.
+func (i *Import) GetFailedStep() string {
+	if i == nil || i.FailedStep == nil {
+		return ""
+	}
+	return *i.FailedStep
+}
+
+// GetHasLargeFiles returns the HasLargeFiles field if it's non-nil, zero value otherwise.
+func (i *Import) GetHasLargeFiles() bool {
+	if i == nil || i.HasLargeFiles == nil {
+		return false
+	}
+	return *i.HasLargeFiles
+}
+
+// GetHTMLURL returns the HTMLURL field if it's non-nil, zero value otherwise.
+func (i *Import) GetHTMLURL() string {
+	if i == nil || i.HTMLURL == nil {
+		return ""
+	}
+	return *i.HTMLURL
+}
+
+// GetHumanName returns the HumanName field if it's non-nil, zero value otherwise.
+func (i *Import) GetHumanName() string {
+	if i == nil || i.HumanName == nil {
+		return ""
+	}
+	return *i.HumanName
+}
+
+// GetLargeFilesCount returns the LargeFilesCount field if it's non-nil, zero value otherwise.
+func (i *Import) GetLargeFilesCount() int {
+	if i == nil || i.LargeFilesCount == nil {
+		return 0
+	}
+	return *i.LargeFilesCount
+}
+
+// GetLargeFilesSize returns the LargeFilesSize field if it's non-nil, zero value otherwise.
+func (i *Import) GetLargeFilesSize() int {
+	if i == nil || i.LargeFilesSize == nil {
+		return 0
+	}
+	return *i.LargeFilesSize
+}
+
+// GetMessage returns the Message field if it's non-nil, zero value otherwise.
+func (i *Import) GetMessage() string {
+	if i == nil || i.Message == nil {
+		return ""
+	}
+	return *i.Message
+}
+
+// GetPercent returns the Percent field if it's non-nil, zero value otherwise.
+func (i *Import) GetPercent() int {
+	if i == nil || i.Percent == nil {
+		return 0
+	}
+	return *i.Percent
+}
+
+// GetPushPercent returns the PushPercent field if it's non-nil, zero value otherwise.
+func (i *Import) GetPushPercent() int {
+	if i == nil || i.PushPercent == nil {
+		return 0
+	}
+	return *i.PushPercent
+}
+
+// GetRepositoryURL returns the RepositoryURL field if it's non-nil, zero value otherwise.
+func (i *Import) GetRepositoryURL() string {
+	if i == nil || i.RepositoryURL == nil {
+		return ""
+	}
+	return *i.RepositoryURL
+}
+
+// GetStatus returns the Status field if it's non-nil, zero value otherwise.
+func (i *Import) GetStatus() string {
+	if i == nil || i.Status == nil {
+		return ""
+	}
+	return *i.Status
+}
+
+// GetStatusText returns the StatusText field if it's non-nil, zero value otherwise.
+func (i *Import) GetStatusText() string {
+	if i == nil || i.StatusText == nil {
+		return ""
+	}
+	return *i.StatusText
+}
+
+// GetTFVCProject returns the TFVCProject field if it's non-nil, zero value otherwise.
+func (i *Import) GetTFVCProject() string {
+	if i == nil || i.TFVCProject == nil {
+		return ""
+	}
+	return *i.TFVCProject
+}
+
+// GetURL returns the URL field if it's non-nil, zero value otherwise.
+func (i *Import) GetURL() string {
+	if i == nil || i.URL == nil {
+		return ""
+	}
+	return *i.URL
+}
+
+// GetUseLFS returns the UseLFS field if it's non-nil, zero value otherwise.
+func (i *Import) GetUseLFS() string {
+	if i == nil || i.UseLFS == nil {
+		return ""
+	}
+	return *i.UseLFS
+}
+
+// GetVCS returns the VCS field if it's non-nil, zero value otherwise.
+func (i *Import) GetVCS() string {
+	if i == nil || i.VCS == nil {
+		return ""
+	}
+	return *i.VCS
+}
+
+// GetVCSPassword returns the VCSPassword field if it's non-nil, zero value otherwise.
+func (i *Import) GetVCSPassword() string {
+	if i == nil || i.VCSPassword == nil {
+		return ""
+	}
+	return *i.VCSPassword
+}
+
+// GetVCSURL returns the VCSURL field if it's non-nil, zero value otherwise.
+func (i *Import) GetVCSURL() string {
+	if i == nil || i.VCSURL == nil {
+		return ""
+	}
+	return *i.VCSURL
+}
+
+// GetVCSUsername returns the VCSUsername field if it's non-nil, zero value otherwise.
+func (i *Import) GetVCSUsername() string {
+	if i == nil || i.VCSUsername == nil {
+		return ""
+	}
+	return *i.VCSUsername
+}
+
+// GetAccessTokensURL returns the AccessTokensURL field if it's non-nil, zero value otherwise.
+func (i *Installation) GetAccessTokensURL() string {
+	if i == nil || i.AccessTokensURL == nil {
+		return ""
+	}
+	return *i.AccessTokensURL
+}
+
+// GetID returns the ID field if it's non-nil, zero value otherwise.
+func (i *Installation) GetID() int {
+	if i == nil || i.ID == nil {
+		return 0
+	}
+	return *i.ID
+}
+
+// GetRepositoriesURL returns the RepositoriesURL field if it's non-nil, zero value otherwise.
+func (i *Installation) GetRepositoriesURL() string {
+	if i == nil || i.RepositoriesURL == nil {
+		return ""
+	}
+	return *i.RepositoriesURL
+}
+
+// GetAction returns the Action field if it's non-nil, zero value otherwise.
+func (i *IntegrationInstallationEvent) GetAction() string {
+	if i == nil || i.Action == nil {
+		return ""
+	}
+	return *i.Action
+}
+
+// GetAction returns the Action field if it's non-nil, zero value otherwise.
+func (i *IntegrationInstallationRepositoriesEvent) GetAction() string {
+	if i == nil || i.Action == nil {
+		return ""
+	}
+	return *i.Action
+}
+
+// GetCreatedAt returns the CreatedAt field if it's non-nil, zero value otherwise.
+func (i *Invitation) GetCreatedAt() time.Time {
+	if i == nil || i.CreatedAt == nil {
+		return time.Time{}
+	}
+	return *i.CreatedAt
+}
+
+// GetEmail returns the Email field if it's non-nil, zero value otherwise.
+func (i *Invitation) GetEmail() string {
+	if i == nil || i.Email == nil {
+		return ""
+	}
+	return *i.Email
+}
+
+// GetID returns the ID field if it's non-nil, zero value otherwise.
+func (i *Invitation) GetID() int {
+	if i == nil || i.ID == nil {
+		return 0
+	}
+	return *i.ID
+}
+
+// GetLogin returns the Login field if it's non-nil, zero value otherwise.
+func (i *Invitation) GetLogin() string {
+	if i == nil || i.Login == nil {
+		return ""
+	}
+	return *i.Login
+}
+
+// GetRole returns the Role field if it's non-nil, zero value otherwise.
+func (i *Invitation) GetRole() string {
+	if i == nil || i.Role == nil {
+		return ""
+	}
+	return *i.Role
+}
+
+// GetBody returns the Body field if it's non-nil, zero value otherwise.
+func (i *Issue) GetBody() string {
+	if i == nil || i.Body == nil {
+		return ""
+	}
+	return *i.Body
+}
+
+// GetClosedAt returns the ClosedAt field if it's non-nil, zero value otherwise.
+func (i *Issue) GetClosedAt() time.Time {
+	if i == nil || i.ClosedAt == nil {
+		return time.Time{}
+	}
+	return *i.ClosedAt
+}
+
+// GetComments returns the Comments field if it's non-nil, zero value otherwise.
+func (i *Issue) GetComments() int {
+	if i == nil || i.Comments == nil {
+		return 0
+	}
+	return *i.Comments
+}
+
+// GetCreatedAt returns the CreatedAt field if it's non-nil, zero value otherwise.
+func (i *Issue) GetCreatedAt() time.Time {
+	if i == nil || i.CreatedAt == nil {
+		return time.Time{}
+	}
+	return *i.CreatedAt
+}
+
+// GetHTMLURL returns the HTMLURL field if it's non-nil, zero value otherwise.
+func (i *Issue) GetHTMLURL() string {
+	if i == nil || i.HTMLURL == nil {
+		return ""
+	}
+	return *i.HTMLURL
+}
+
+// GetID returns the ID field if it's non-nil, zero value otherwise.
+func (i *Issue) GetID() int {
+	if i == nil || i.ID == nil {
+		return 0
+	}
+	return *i.ID
+}
+
+// GetLocked returns the Locked field if it's non-nil, zero value otherwise.
+func (i *Issue) GetLocked() bool {
+	if i == nil || i.Locked == nil {
+		return false
+	}
+	return *i.Locked
+}
+
+// GetNumber returns the Number field if it's non-nil, zero value otherwise.
+func (i *Issue) GetNumber() int {
+	if i == nil || i.Number == nil {
+		return 0
+	}
+	return *i.Number
+}
+
+// GetState returns the State field if it's non-nil, zero value otherwise.
+func (i *Issue) GetState() string {
+	if i == nil || i.State == nil {
+		return ""
+	}
+	return *i.State
+}
+
+// GetTitle returns the Title field if it's non-nil, zero value otherwise.
+func (i *Issue) GetTitle() string {
+	if i == nil || i.Title == nil {
+		return ""
+	}
+	return *i.Title
+}
+
+// GetUpdatedAt returns the UpdatedAt field if it's non-nil, zero value otherwise.
+func (i *Issue) GetUpdatedAt() time.Time {
+	if i == nil || i.UpdatedAt == nil {
+		return time.Time{}
+	}
+	return *i.UpdatedAt
+}
+
+// GetURL returns the URL field if it's non-nil, zero value otherwise.
+func (i *Issue) GetURL() string {
+	if i == nil || i.URL == nil {
+		return ""
+	}
+	return *i.URL
+}
+
+// GetBody returns the Body field if it's non-nil, zero value otherwise.
+func (i *IssueComment) GetBody() string {
+	if i == nil || i.Body == nil {
+		return ""
+	}
+	return *i.Body
+}
+
+// GetCreatedAt returns the CreatedAt field if it's non-nil, zero value otherwise.
+func (i *IssueComment) GetCreatedAt() time.Time {
+	if i == nil || i.CreatedAt == nil {
+		return time.Time{}
+	}
+	return *i.CreatedAt
+}
+
+// GetHTMLURL returns the HTMLURL field if it's non-nil, zero value otherwise.
+func (i *IssueComment) GetHTMLURL() string {
+	if i == nil || i.HTMLURL == nil {
+		return ""
+	}
+	return *i.HTMLURL
+}
+
+// GetID returns the ID field if it's non-nil, zero value otherwise.
+func (i *IssueComment) GetID() int {
+	if i == nil || i.ID == nil {
+		return 0
+	}
+	return *i.ID
+}
+
+// GetIssueURL returns the IssueURL field if it's non-nil, zero value otherwise.
+func (i *IssueComment) GetIssueURL() string {
+	if i == nil || i.IssueURL == nil {
+		return ""
+	}
+	return *i.IssueURL
+}
+
+// GetUpdatedAt returns the UpdatedAt field if it's non-nil, zero value otherwise.
+func (i *IssueComment) GetUpdatedAt() time.Time {
+	if i == nil || i.UpdatedAt == nil {
+		return time.Time{}
+	}
+	return *i.UpdatedAt
+}
+
+// GetURL returns the URL field if it's non-nil, zero value otherwise.
+func (i *IssueComment) GetURL() string {
+	if i == nil || i.URL == nil {
+		return ""
+	}
+	return *i.URL
+}
+
+// GetAction returns the Action field if it's non-nil, zero value otherwise.
+func (i *IssueCommentEvent) GetAction() string {
+	if i == nil || i.Action == nil {
+		return ""
+	}
+	return *i.Action
+}
+
+// GetCommitID returns the CommitID field if it's non-nil, zero value otherwise.
+func (i *IssueEvent) GetCommitID() string {
+	if i == nil || i.CommitID == nil {
+		return ""
+	}
+	return *i.CommitID
+}
+
+// GetCreatedAt returns the CreatedAt field if it's non-nil, zero value otherwise.
+func (i *IssueEvent) GetCreatedAt() time.Time {
+	if i == nil || i.CreatedAt == nil {
+		return time.Time{}
+	}
+	return *i.CreatedAt
+}
+
+// GetEvent returns the Event field if it's non-nil, zero value otherwise.
+func (i *IssueEvent) GetEvent() string {
+	if i == nil || i.Event == nil {
+		return ""
+	}
+	return *i.Event
+}
+
+// GetID returns the ID field if it's non-nil, zero value otherwise.
+func (i *IssueEvent) GetID() int {
+	if i == nil || i.ID == nil {
+		return 0
+	}
+	return *i.ID
+}
+
+// GetURL returns the URL field if it's non-nil, zero value otherwise.
+func (i *IssueEvent) GetURL() string {
+	if i == nil || i.URL == nil {
+		return ""
+	}
+	return *i.URL
+}
+
+// GetAssignee returns the Assignee field if it's non-nil, zero value otherwise.
+func (i *IssueRequest) GetAssignee() string {
+	if i == nil || i.Assignee == nil {
+		return ""
+	}
+	return *i.Assignee
+}
+
+// GetAssignees returns the Assignees field if it's non-nil, zero value otherwise.
+func (i *IssueRequest) GetAssignees() []string {
+	if i == nil || i.Assignees == nil {
+		return nil
+	}
+	return *i.Assignees
+}
+
+// GetBody returns the Body field if it's non-nil, zero value otherwise.
+func (i *IssueRequest) GetBody() string {
+	if i == nil || i.Body == nil {
+		return ""
+	}
+	return *i.Body
+}
+
+// GetLabels returns the Labels field if it's non-nil, zero value otherwise.
+func (i *IssueRequest) GetLabels() []string {
+	if i == nil || i.Labels == nil {
+		return nil
+	}
+	return *i.Labels
+}
+
+// GetMilestone returns the Milestone field if it's non-nil, zero value otherwise.
+func (i *IssueRequest) GetMilestone() int {
+	if i == nil || i.Milestone == nil {
+		return 0
+	}
+	return *i.Milestone
+}
+
+// GetState returns the State field if it's non-nil, zero value otherwise.
+func (i *IssueRequest) GetState() string {
+	if i == nil || i.State == nil {
+		return ""
+	}
+	return *i.State
+}
+
+// GetTitle returns the Title field if it's non-nil, zero value otherwise.
+func (i *IssueRequest) GetTitle() string {
+	if i == nil || i.Title == nil {
+		return ""
+	}
+	return *i.Title
+}
+
+// GetAction returns the Action field if it's non-nil, zero value otherwise.
+func (i *IssuesEvent) GetAction() string {
+	if i == nil || i.Action == nil {
+		return ""
+	}
+	return *i.Action
+}
+
+// GetIncompleteResults returns the IncompleteResults field if it's non-nil, zero value otherwise.
+func (i *IssuesSearchResult) GetIncompleteResults() bool {
+	if i == nil || i.IncompleteResults == nil {
+		return false
+	}
+	return *i.IncompleteResults
+}
+
+// GetTotal returns the Total field if it's non-nil, zero value otherwise.
+func (i *IssuesSearchResult) GetTotal() int {
+	if i == nil || i.Total == nil {
+		return 0
+	}
+	return *i.Total
+}
+
+// GetID returns the ID field if it's non-nil, zero value otherwise.
+func (k *Key) GetID() int {
+	if k == nil || k.ID == nil {
+		return 0
+	}
+	return *k.ID
+}
+
+// GetKey returns the Key field if it's non-nil, zero value otherwise.
+func (k *Key) GetKey() string {
+	if k == nil || k.Key == nil {
+		return ""
+	}
+	return *k.Key
+}
+
+// GetReadOnly returns the ReadOnly field if it's non-nil, zero value otherwise.
+func (k *Key) GetReadOnly() bool {
+	if k == nil || k.ReadOnly == nil {
+		return false
+	}
+	return *k.ReadOnly
+}
+
+// GetTitle returns the Title field if it's non-nil, zero value otherwise.
+func (k *Key) GetTitle() string {
+	if k == nil || k.Title == nil {
+		return ""
+	}
+	return *k.Title
+}
+
+// GetURL returns the URL field if it's non-nil, zero value otherwise.
+func (k *Key) GetURL() string {
+	if k == nil || k.URL == nil {
+		return ""
+	}
+	return *k.URL
+}
+
+// GetColor returns the Color field if it's non-nil, zero value otherwise.
+func (l *Label) GetColor() string {
+	if l == nil || l.Color == nil {
+		return ""
+	}
+	return *l.Color
+}
+
+// GetName returns the Name field if it's non-nil, zero value otherwise.
+func (l *Label) GetName() string {
+	if l == nil || l.Name == nil {
+		return ""
+	}
+	return *l.Name
+}
+
+// GetURL returns the URL field if it's non-nil, zero value otherwise.
+func (l *Label) GetURL() string {
+	if l == nil || l.URL == nil {
+		return ""
+	}
+	return *l.URL
+}
+
+// GetAction returns the Action field if it's non-nil, zero value otherwise.
+func (l *LabelEvent) GetAction() string {
+	if l == nil || l.Action == nil {
+		return ""
+	}
+	return *l.Action
+}
+
+// GetOID returns the OID field if it's non-nil, zero value otherwise.
+func (l *LargeFile) GetOID() string {
+	if l == nil || l.OID == nil {
+		return ""
+	}
+	return *l.OID
+}
+
+// GetPath returns the Path field if it's non-nil, zero value otherwise.
+func (l *LargeFile) GetPath() string {
+	if l == nil || l.Path == nil {
+		return ""
+	}
+	return *l.Path
+}
+
+// GetRefName returns the RefName field if it's non-nil, zero value otherwise.
+func (l *LargeFile) GetRefName() string {
+	if l == nil || l.RefName == nil {
+		return ""
+	}
+	return *l.RefName
+}
+
+// GetSize returns the Size field if it's non-nil, zero value otherwise.
+func (l *LargeFile) GetSize() int {
+	if l == nil || l.Size == nil {
+		return 0
+	}
+	return *l.Size
+}
+
+// GetBody returns the Body field if it's non-nil, zero value otherwise.
+func (l *License) GetBody() string {
+	if l == nil || l.Body == nil {
+		return ""
+	}
+	return *l.Body
+}
+
+// GetConditions returns the Conditions field if it's non-nil, zero value otherwise.
+func (l *License) GetConditions() []string {
+	if l == nil || l.Conditions == nil {
+		return nil
+	}
+	return *l.Conditions
+}
+
+// GetDescription returns the Description field if it's non-nil, zero value otherwise.
+func (l *License) GetDescription() string {
+	if l == nil || l.Description == nil {
+		return ""
+	}
+	return *l.Description
+}
+
+// GetFeatured returns the Featured field if it's non-nil, zero value otherwise.
+func (l *License) GetFeatured() bool {
+	if l == nil || l.Featured == nil {
+		return false
+	}
+	return *l.Featured
+}
+
+// GetHTMLURL returns the HTMLURL field if it's non-nil, zero value otherwise.
+func (l *License) GetHTMLURL() string {
+	if l == nil || l.HTMLURL == nil {
+		return ""
+	}
+	return *l.HTMLURL
+}
+
+// GetImplementation returns the Implementation field if it's non-nil, zero value otherwise.
+func (l *License) GetImplementation() string {
+	if l == nil || l.Implementation == nil {
+		return ""
+	}
+	return *l.Implementation
+}
+
+// GetKey returns the Key field if it's non-nil, zero value otherwise.
+func (l *License) GetKey() string {
+	if l == nil || l.Key == nil {
+		return ""
+	}
+	return *l.Key
+}
+
+// GetLimitations returns the Limitations field if it's non-nil, zero value otherwise.
+func (l *License) GetLimitations() []string {
+	if l == nil || l.Limitations == nil {
+		return nil
+	}
+	return *l.Limitations
+}
+
+// GetName returns the Name field if it's non-nil, zero value otherwise.
+func (l *License) GetName() string {
+	if l == nil || l.Name == nil {
+		return ""
+	}
+	return *l.Name
+}
+
+// GetPermissions returns the Permissions field if it's non-nil, zero value otherwise.
+func (l *License) GetPermissions() []string {
+	if l == nil || l.Permissions == nil {
+		return nil
+	}
+	return *l.Permissions
+}
+
+// GetSPDXID returns the SPDXID field if it's non-nil, zero value otherwise.
+func (l *License) GetSPDXID() string {
+	if l == nil || l.SPDXID == nil {
+		return ""
+	}
+	return *l.SPDXID
+}
+
+// GetURL returns the URL field if it's non-nil, zero value otherwise.
+func (l *License) GetURL() string {
+	if l == nil || l.URL == nil {
+		return ""
+	}
+	return *l.URL
+}
+
+// GetContext returns the Context field if it's non-nil, zero value otherwise.
+func (m *markdownRequest) GetContext() string {
+	if m == nil || m.Context == nil {
+		return ""
+	}
+	return *m.Context
+}
+
+// GetMode returns the Mode field if it's non-nil, zero value otherwise.
+func (m *markdownRequest) GetMode() string {
+	if m == nil || m.Mode == nil {
+		return ""
+	}
+	return *m.Mode
+}
+
+// GetText returns the Text field if it's non-nil, zero value otherwise.
+func (m *markdownRequest) GetText() string {
+	if m == nil || m.Text == nil {
+		return ""
+	}
+	return *m.Text
+}
+
+// GetText returns the Text field if it's non-nil, zero value otherwise.
+func (m *Match) GetText() string {
+	if m == nil || m.Text == nil {
+		return ""
+	}
+	return *m.Text
+}
+
+// GetAction returns the Action field if it's non-nil, zero value otherwise.
+func (m *MemberEvent) GetAction() string {
+	if m == nil || m.Action == nil {
+		return ""
+	}
+	return *m.Action
+}
+
+// GetOrganizationURL returns the OrganizationURL field if it's non-nil, zero value otherwise.
+func (m *Membership) GetOrganizationURL() string {
+	if m == nil || m.OrganizationURL == nil {
+		return ""
+	}
+	return *m.OrganizationURL
+}
+
+// GetRole returns the Role field if it's non-nil, zero value otherwise.
+func (m *Membership) GetRole() string {
+	if m == nil || m.Role == nil {
+		return ""
+	}
+	return *m.Role
+}
+
+// GetState returns the State field if it's non-nil, zero value otherwise.
+func (m *Membership) GetState() string {
+	if m == nil || m.State == nil {
+		return ""
+	}
+	return *m.State
+}
+
+// GetURL returns the URL field if it's non-nil, zero value otherwise.
+func (m *Membership) GetURL() string {
+	if m == nil || m.URL == nil {
+		return ""
+	}
+	return *m.URL
+}
+
+// GetAction returns the Action field if it's non-nil, zero value otherwise.
+func (m *MembershipEvent) GetAction() string {
+	if m == nil || m.Action == nil {
+		return ""
+	}
+	return *m.Action
+}
+
+// GetScope returns the Scope field if it's non-nil, zero value otherwise.
+func (m *MembershipEvent) GetScope() string {
+	if m == nil || m.Scope == nil {
+		return ""
+	}
+	return *m.Scope
+}
+
+// GetCreatedAt returns the CreatedAt field if it's non-nil, zero value otherwise.
+func (m *Migration) GetCreatedAt() string {
+	if m == nil || m.CreatedAt == nil {
+		return ""
+	}
+	return *m.CreatedAt
+}
+
+// GetExcludeAttachments returns the ExcludeAttachments field if it's non-nil, zero value otherwise.
+func (m *Migration) GetExcludeAttachments() bool {
+	if m == nil || m.ExcludeAttachments == nil {
+		return false
+	}
+	return *m.ExcludeAttachments
+}
+
+// GetGUID returns the GUID field if it's non-nil, zero value otherwise.
+func (m *Migration) GetGUID() string {
+	if m == nil || m.GUID == nil {
+		return ""
+	}
+	return *m.GUID
+}
+
+// GetID returns the ID field if it's non-nil, zero value otherwise.
+func (m *Migration) GetID() int {
+	if m == nil || m.ID == nil {
+		return 0
+	}
+	return *m.ID
+}
+
+// GetLockRepositories returns the LockRepositories field if it's non-nil, zero value otherwise.
+func (m *Migration) GetLockRepositories() bool {
+	if m == nil || m.LockRepositories == nil {
+		return false
+	}
+	return *m.LockRepositories
+}
+
+// GetState returns the State field if it's non-nil, zero value otherwise.
+func (m *Migration) GetState() string {
+	if m == nil || m.State == nil {
+		return ""
+	}
+	return *m.State
+}
+
+// GetUpdatedAt returns the UpdatedAt field if it's non-nil, zero value otherwise.
+func (m *Migration) GetUpdatedAt() string {
+	if m == nil || m.UpdatedAt == nil {
+		return ""
+	}
+	return *m.UpdatedAt
+}
+
+// GetURL returns the URL field if it's non-nil, zero value otherwise.
+func (m *Migration) GetURL() string {
+	if m == nil || m.URL == nil {
+		return ""
+	}
+	return *m.URL
+}
+
+// GetClosedAt returns the ClosedAt field if it's non-nil, zero value otherwise.
+func (m *Milestone) GetClosedAt() time.Time {
+	if m == nil || m.ClosedAt == nil {
+		return time.Time{}
+	}
+	return *m.ClosedAt
+}
+
+// GetClosedIssues returns the ClosedIssues field if it's non-nil, zero value otherwise.
+func (m *Milestone) GetClosedIssues() int {
+	if m == nil || m.ClosedIssues == nil {
+		return 0
+	}
+	return *m.ClosedIssues
+}
+
+// GetCreatedAt returns the CreatedAt field if it's non-nil, zero value otherwise.
+func (m *Milestone) GetCreatedAt() time.Time {
+	if m == nil || m.CreatedAt == nil {
+		return time.Time{}
+	}
+	return *m.CreatedAt
+}
+
+// GetDescription returns the Description field if it's non-nil, zero value otherwise.
+func (m *Milestone) GetDescription() string {
+	if m == nil || m.Description == nil {
+		return ""
+	}
+	return *m.Description
+}
+
+// GetDueOn returns the DueOn field if it's non-nil, zero value otherwise.
+func (m *Milestone) GetDueOn() time.Time {
+	if m == nil || m.DueOn == nil {
+		return time.Time{}
+	}
+	return *m.DueOn
+}
+
+// GetHTMLURL returns the HTMLURL field if it's non-nil, zero value otherwise.
+func (m *Milestone) GetHTMLURL() string {
+	if m == nil || m.HTMLURL == nil {
+		return ""
+	}
+	return *m.HTMLURL
+}
+
+// GetID returns the ID field if it's non-nil, zero value otherwise.
+func (m *Milestone) GetID() int {
+	if m == nil || m.ID == nil {
+		return 0
+	}
+	return *m.ID
+}
+
+// GetLabelsURL returns the LabelsURL field if it's non-nil, zero value otherwise.
+func (m *Milestone) GetLabelsURL() string {
+	if m == nil || m.LabelsURL == nil {
+		return ""
+	}
+	return *m.LabelsURL
+}
+
+// GetNumber returns the Number field if it's non-nil, zero value otherwise.
+func (m *Milestone) GetNumber() int {
+	if m == nil || m.Number == nil {
+		return 0
+	}
+	return *m.Number
+}
+
+// GetOpenIssues returns the OpenIssues field if it's non-nil, zero value otherwise.
+func (m *Milestone) GetOpenIssues() int {
+	if m == nil || m.OpenIssues == nil {
+		return 0
+	}
+	return *m.OpenIssues
+}
+
+// GetState returns the State field if it's non-nil, zero value otherwise.
+func (m *Milestone) GetState() string {
+	if m == nil || m.State == nil {
+		return ""
+	}
+	return *m.State
+}
+
+// GetTitle returns the Title field if it's non-nil, zero value otherwise.
+func (m *Milestone) GetTitle() string {
+	if m == nil || m.Title == nil {
+		return ""
+	}
+	return *m.Title
+}
+
+// GetUpdatedAt returns the UpdatedAt field if it's non-nil, zero value otherwise.
+func (m *Milestone) GetUpdatedAt() time.Time {
+	if m == nil || m.UpdatedAt == nil {
+		return time.Time{}
+	}
+	return *m.UpdatedAt
+}
+
+// GetURL returns the URL field if it's non-nil, zero value otherwise.
+func (m *Milestone) GetURL() string {
+	if m == nil || m.URL == nil {
+		return ""
+	}
+	return *m.URL
+}
+
+// GetAction returns the Action field if it's non-nil, zero value otherwise.
+func (m *MilestoneEvent) GetAction() string {
+	if m == nil || m.Action == nil {
+		return ""
+	}
+	return *m.Action
+}
+
+// GetBase returns the Base field if it's non-nil, zero value otherwise.
+func (n *NewPullRequest) GetBase() string {
+	if n == nil || n.Base == nil {
+		return ""
+	}
+	return *n.Base
+}
+
+// GetBody returns the Body field if it's non-nil, zero value otherwise.
+func (n *NewPullRequest) GetBody() string {
+	if n == nil || n.Body == nil {
+		return ""
+	}
+	return *n.Body
+}
+
+// GetHead returns the Head field if it's non-nil, zero value otherwise.
+func (n *NewPullRequest) GetHead() string {
+	if n == nil || n.Head == nil {
+		return ""
+	}
+	return *n.Head
+}
+
+// GetIssue returns the Issue field if it's non-nil, zero value otherwise.
+func (n *NewPullRequest) GetIssue() int {
+	if n == nil || n.Issue == nil {
+		return 0
+	}
+	return *n.Issue
+}
+
+// GetTitle returns the Title field if it's non-nil, zero value otherwise.
+func (n *NewPullRequest) GetTitle() string {
+	if n == nil || n.Title == nil {
+		return ""
+	}
+	return *n.Title
+}
+
+// GetID returns the ID field if it's non-nil, zero value otherwise.
+func (n *Notification) GetID() string {
+	if n == nil || n.ID == nil {
+		return ""
+	}
+	return *n.ID
+}
+
+// GetLastReadAt returns the LastReadAt field if it's non-nil, zero value otherwise.
+func (n *Notification) GetLastReadAt() time.Time {
+	if n == nil || n.LastReadAt == nil {
+		return time.Time{}
+	}
+	return *n.LastReadAt
+}
+
+// GetReason returns the Reason field if it's non-nil, zero value otherwise.
+func (n *Notification) GetReason() string {
+	if n == nil || n.Reason == nil {
+		return ""
+	}
+	return *n.Reason
+}
+
+// GetUnread returns the Unread field if it's non-nil, zero value otherwise.
+func (n *Notification) GetUnread() bool {
+	if n == nil || n.Unread == nil {
+		return false
+	}
+	return *n.Unread
+}
+
+// GetUpdatedAt returns the UpdatedAt field if it's non-nil, zero value otherwise.
+func (n *Notification) GetUpdatedAt() time.Time {
+	if n == nil || n.UpdatedAt == nil {
+		return time.Time{}
+	}
+	return *n.UpdatedAt
+}
+
+// GetURL returns the URL field if it's non-nil, zero value otherwise.
+func (n *Notification) GetURL() string {
+	if n == nil || n.URL == nil {
+		return ""
+	}
+	return *n.URL
+}
+
+// GetLatestCommentURL returns the LatestCommentURL field if it's non-nil, zero value otherwise.
+func (n *NotificationSubject) GetLatestCommentURL() string {
+	if n == nil || n.LatestCommentURL == nil {
+		return ""
+	}
+	return *n.LatestCommentURL
+}
+
+// GetTitle returns the Title field if it's non-nil, zero value otherwise.
+func (n *NotificationSubject) GetTitle() string {
+	if n == nil || n.Title == nil {
+		return ""
+	}
+	return *n.Title
+}
+
+// GetType returns the Type field if it's non-nil, zero value otherwise.
+func (n *NotificationSubject) GetType() string {
+	if n == nil || n.Type == nil {
+		return ""
+	}
+	return *n.Type
+}
+
+// GetURL returns the URL field if it's non-nil, zero value otherwise.
+func (n *NotificationSubject) GetURL() string {
+	if n == nil || n.URL == nil {
+		return ""
+	}
+	return *n.URL
+}
+
+// GetAvatarURL returns the AvatarURL field if it's non-nil, zero value otherwise.
+func (o *Organization) GetAvatarURL() string {
+	if o == nil || o.AvatarURL == nil {
+		return ""
+	}
+	return *o.AvatarURL
+}
+
+// GetBillingEmail returns the BillingEmail field if it's non-nil, zero value otherwise.
+func (o *Organization) GetBillingEmail() string {
+	if o == nil || o.BillingEmail == nil {
+		return ""
+	}
+	return *o.BillingEmail
+}
+
+// GetBlog returns the Blog field if it's non-nil, zero value otherwise.
+func (o *Organization) GetBlog() string {
+	if o == nil || o.Blog == nil {
+		return ""
+	}
+	return *o.Blog
+}
+
+// GetCollaborators returns the Collaborators field if it's non-nil, zero value otherwise.
+func (o *Organization) GetCollaborators() int {
+	if o == nil || o.Collaborators == nil {
+		return 0
+	}
+	return *o.Collaborators
+}
+
+// GetCompany returns the Company field if it's non-nil, zero value otherwise.
+func (o *Organization) GetCompany() string {
+	if o == nil || o.Company == nil {
+		return ""
+	}
+	return *o.Company
+}
+
+// GetCreatedAt returns the CreatedAt field if it's non-nil, zero value otherwise.
+func (o *Organization) GetCreatedAt() time.Time {
+	if o == nil || o.CreatedAt == nil {
+		return time.Time{}
+	}
+	return *o.CreatedAt
+}
+
+// GetDescription returns the Description field if it's non-nil, zero value otherwise.
+func (o *Organization) GetDescription() string {
+	if o == nil || o.Description == nil {
+		return ""
+	}
+	return *o.Description
+}
+
+// GetDiskUsage returns the DiskUsage field if it's non-nil, zero value otherwise.
+func (o *Organization) GetDiskUsage() int {
+	if o == nil || o.DiskUsage == nil {
+		return 0
+	}
+	return *o.DiskUsage
+}
+
+// GetEmail returns the Email field if it's non-nil, zero value otherwise.
+func (o *Organization) GetEmail() string {
+	if o == nil || o.Email == nil {
+		return ""
+	}
+	return *o.Email
+}
+
+// GetEventsURL returns the EventsURL field if it's non-nil, zero value otherwise.
+func (o *Organization) GetEventsURL() string {
+	if o == nil || o.EventsURL == nil {
+		return ""
+	}
+	return *o.EventsURL
+}
+
+// GetFollowers returns the Followers field if it's non-nil, zero value otherwise.
+func (o *Organization) GetFollowers() int {
+	if o == nil || o.Followers == nil {
+		return 0
+	}
+	return *o.Followers
+}
+
+// GetFollowing returns the Following field if it's non-nil, zero value otherwise.
+func (o *Organization) GetFollowing() int {
+	if o == nil || o.Following == nil {
+		return 0
+	}
+	return *o.Following
+}
+
+// GetHooksURL returns the HooksURL field if it's non-nil, zero value otherwise.
+func (o *Organization) GetHooksURL() string {
+	if o == nil || o.HooksURL == nil {
+		return ""
+	}
+	return *o.HooksURL
+}
+
+// GetHTMLURL returns the HTMLURL field if it's non-nil, zero value otherwise.
+func (o *Organization) GetHTMLURL() string {
+	if o == nil || o.HTMLURL == nil {
+		return ""
+	}
+	return *o.HTMLURL
+}
+
+// GetID returns the ID field if it's non-nil, zero value otherwise.
+func (o *Organization) GetID() int {
+	if o == nil || o.ID == nil {
+		return 0
+	}
+	return *o.ID
+}
+
+// GetIssuesURL returns the IssuesURL field if it's non-nil, zero value otherwise.
+func (o *Organization) GetIssuesURL() string {
+	if o == nil || o.IssuesURL == nil {
+		return ""
+	}
+	return *o.IssuesURL
+}
+
+// GetLocation returns the Location field if it's non-nil, zero value otherwise.
+func (o *Organization) GetLocation() string {
+	if o == nil || o.Location == nil {
+		return ""
+	}
+	return *o.Location
+}
+
+// GetLogin returns the Login field if it's non-nil, zero value otherwise.
+func (o *Organization) GetLogin() string {
+	if o == nil || o.Login == nil {
+		return ""
+	}
+	return *o.Login
+}
+
+// GetMembersURL returns the MembersURL field if it's non-nil, zero value otherwise.
+func (o *Organization) GetMembersURL() string {
+	if o == nil || o.MembersURL == nil {
+		return ""
+	}
+	return *o.MembersURL
+}
+
+// GetName returns the Name field if it's non-nil, zero value otherwise.
+func (o *Organization) GetName() string {
+	if o == nil || o.Name == nil {
+		return ""
+	}
+	return *o.Name
+}
+
+// GetOwnedPrivateRepos returns the OwnedPrivateRepos field if it's non-nil, zero value otherwise.
+func (o *Organization) GetOwnedPrivateRepos() int {
+	if o == nil || o.OwnedPrivateRepos == nil {
+		return 0
+	}
+	return *o.OwnedPrivateRepos
+}
+
+// GetPrivateGists returns the PrivateGists field if it's non-nil, zero value otherwise.
+func (o *Organization) GetPrivateGists() int {
+	if o == nil || o.PrivateGists == nil {
+		return 0
+	}
+	return *o.PrivateGists
+}
+
+// GetPublicGists returns the PublicGists field if it's non-nil, zero value otherwise.
+func (o *Organization) GetPublicGists() int {
+	if o == nil || o.PublicGists == nil {
+		return 0
+	}
+	return *o.PublicGists
+}
+
+// GetPublicMembersURL returns the PublicMembersURL field if it's non-nil, zero value otherwise.
+func (o *Organization) GetPublicMembersURL() string {
+	if o == nil || o.PublicMembersURL == nil {
+		return ""
+	}
+	return *o.PublicMembersURL
+}
+
+// GetPublicRepos returns the PublicRepos field if it's non-nil, zero value otherwise.
+func (o *Organization) GetPublicRepos() int {
+	if o == nil || o.PublicRepos == nil {
+		return 0
+	}
+	return *o.PublicRepos
+}
+
+// GetReposURL returns the ReposURL field if it's non-nil, zero value otherwise.
+func (o *Organization) GetReposURL() string {
+	if o == nil || o.ReposURL == nil {
+		return ""
+	}
+	return *o.ReposURL
+}
+
+// GetTotalPrivateRepos returns the TotalPrivateRepos field if it's non-nil, zero value otherwise.
+func (o *Organization) GetTotalPrivateRepos() int {
+	if o == nil || o.TotalPrivateRepos == nil {
+		return 0
+	}
+	return *o.TotalPrivateRepos
+}
+
+// GetType returns the Type field if it's non-nil, zero value otherwise.
+func (o *Organization) GetType() string {
+	if o == nil || o.Type == nil {
+		return ""
+	}
+	return *o.Type
+}
+
+// GetUpdatedAt returns the UpdatedAt field if it's non-nil, zero value otherwise.
+func (o *Organization) GetUpdatedAt() time.Time {
+	if o == nil || o.UpdatedAt == nil {
+		return time.Time{}
+	}
+	return *o.UpdatedAt
+}
+
+// GetURL returns the URL field if it's non-nil, zero value otherwise.
+func (o *Organization) GetURL() string {
+	if o == nil || o.URL == nil {
+		return ""
+	}
+	return *o.URL
+}
+
+// GetAction returns the Action field if it's non-nil, zero value otherwise.
+func (o *OrganizationEvent) GetAction() string {
+	if o == nil || o.Action == nil {
+		return ""
+	}
+	return *o.Action
+}
+
+// GetAction returns the Action field if it's non-nil, zero value otherwise.
+func (p *Page) GetAction() string {
+	if p == nil || p.Action == nil {
+		return ""
+	}
+	return *p.Action
+}
+
+// GetHTMLURL returns the HTMLURL field if it's non-nil, zero value otherwise.
+func (p *Page) GetHTMLURL() string {
+	if p == nil || p.HTMLURL == nil {
+		return ""
+	}
+	return *p.HTMLURL
+}
+
+// GetPageName returns the PageName field if it's non-nil, zero value otherwise.
+func (p *Page) GetPageName() string {
+	if p == nil || p.PageName == nil {
+		return ""
+	}
+	return *p.PageName
+}
+
+// GetSHA returns the SHA field if it's non-nil, zero value otherwise.
+func (p *Page) GetSHA() string {
+	if p == nil || p.SHA == nil {
+		return ""
+	}
+	return *p.SHA
+}
+
+// GetSummary returns the Summary field if it's non-nil, zero value otherwise.
+func (p *Page) GetSummary() string { + if p == nil || p.Summary == nil { + return "" + } + return *p.Summary +} + +// GetTitle returns the Title field if it's non-nil, zero value otherwise. +func (p *Page) GetTitle() string { + if p == nil || p.Title == nil { + return "" + } + return *p.Title +} + +// GetID returns the ID field if it's non-nil, zero value otherwise. +func (p *PageBuildEvent) GetID() int { + if p == nil || p.ID == nil { + return 0 + } + return *p.ID +} + +// GetCNAME returns the CNAME field if it's non-nil, zero value otherwise. +func (p *Pages) GetCNAME() string { + if p == nil || p.CNAME == nil { + return "" + } + return *p.CNAME +} + +// GetCustom404 returns the Custom404 field if it's non-nil, zero value otherwise. +func (p *Pages) GetCustom404() bool { + if p == nil || p.Custom404 == nil { + return false + } + return *p.Custom404 +} + +// GetHTMLURL returns the HTMLURL field if it's non-nil, zero value otherwise. +func (p *Pages) GetHTMLURL() string { + if p == nil || p.HTMLURL == nil { + return "" + } + return *p.HTMLURL +} + +// GetStatus returns the Status field if it's non-nil, zero value otherwise. +func (p *Pages) GetStatus() string { + if p == nil || p.Status == nil { + return "" + } + return *p.Status +} + +// GetURL returns the URL field if it's non-nil, zero value otherwise. +func (p *Pages) GetURL() string { + if p == nil || p.URL == nil { + return "" + } + return *p.URL +} + +// GetCommit returns the Commit field if it's non-nil, zero value otherwise. +func (p *PagesBuild) GetCommit() string { + if p == nil || p.Commit == nil { + return "" + } + return *p.Commit +} + +// GetCreatedAt returns the CreatedAt field if it's non-nil, zero value otherwise. +func (p *PagesBuild) GetCreatedAt() Timestamp { + if p == nil || p.CreatedAt == nil { + return Timestamp{} + } + return *p.CreatedAt +} + +// GetDuration returns the Duration field if it's non-nil, zero value otherwise. +func (p *PagesBuild) GetDuration() int { + if p == nil || p.Duration == nil { + return 0 + } + return *p.Duration +} + +// GetStatus returns the Status field if it's non-nil, zero value otherwise. +func (p *PagesBuild) GetStatus() string { + if p == nil || p.Status == nil { + return "" + } + return *p.Status +} + +// GetUpdatedAt returns the UpdatedAt field if it's non-nil, zero value otherwise. +func (p *PagesBuild) GetUpdatedAt() Timestamp { + if p == nil || p.UpdatedAt == nil { + return Timestamp{} + } + return *p.UpdatedAt +} + +// GetURL returns the URL field if it's non-nil, zero value otherwise. +func (p *PagesBuild) GetURL() string { + if p == nil || p.URL == nil { + return "" + } + return *p.URL +} + +// GetMessage returns the Message field if it's non-nil, zero value otherwise. +func (p *PagesError) GetMessage() string { + if p == nil || p.Message == nil { + return "" + } + return *p.Message +} + +// GetHookID returns the HookID field if it's non-nil, zero value otherwise. +func (p *PingEvent) GetHookID() int { + if p == nil || p.HookID == nil { + return 0 + } + return *p.HookID +} + +// GetZen returns the Zen field if it's non-nil, zero value otherwise. +func (p *PingEvent) GetZen() string { + if p == nil || p.Zen == nil { + return "" + } + return *p.Zen +} + +// GetCollaborators returns the Collaborators field if it's non-nil, zero value otherwise. +func (p *Plan) GetCollaborators() int { + if p == nil || p.Collaborators == nil { + return 0 + } + return *p.Collaborators +} + +// GetName returns the Name field if it's non-nil, zero value otherwise. 
+func (p *Plan) GetName() string { + if p == nil || p.Name == nil { + return "" + } + return *p.Name +} + +// GetPrivateRepos returns the PrivateRepos field if it's non-nil, zero value otherwise. +func (p *Plan) GetPrivateRepos() int { + if p == nil || p.PrivateRepos == nil { + return 0 + } + return *p.PrivateRepos +} + +// GetSpace returns the Space field if it's non-nil, zero value otherwise. +func (p *Plan) GetSpace() int { + if p == nil || p.Space == nil { + return 0 + } + return *p.Space +} + +// GetBody returns the Body field if it's non-nil, zero value otherwise. +func (p *Project) GetBody() string { + if p == nil || p.Body == nil { + return "" + } + return *p.Body +} + +// GetCreatedAt returns the CreatedAt field if it's non-nil, zero value otherwise. +func (p *Project) GetCreatedAt() Timestamp { + if p == nil || p.CreatedAt == nil { + return Timestamp{} + } + return *p.CreatedAt +} + +// GetID returns the ID field if it's non-nil, zero value otherwise. +func (p *Project) GetID() int { + if p == nil || p.ID == nil { + return 0 + } + return *p.ID +} + +// GetName returns the Name field if it's non-nil, zero value otherwise. +func (p *Project) GetName() string { + if p == nil || p.Name == nil { + return "" + } + return *p.Name +} + +// GetNumber returns the Number field if it's non-nil, zero value otherwise. +func (p *Project) GetNumber() int { + if p == nil || p.Number == nil { + return 0 + } + return *p.Number +} + +// GetOwnerURL returns the OwnerURL field if it's non-nil, zero value otherwise. +func (p *Project) GetOwnerURL() string { + if p == nil || p.OwnerURL == nil { + return "" + } + return *p.OwnerURL +} + +// GetUpdatedAt returns the UpdatedAt field if it's non-nil, zero value otherwise. +func (p *Project) GetUpdatedAt() Timestamp { + if p == nil || p.UpdatedAt == nil { + return Timestamp{} + } + return *p.UpdatedAt +} + +// GetURL returns the URL field if it's non-nil, zero value otherwise. +func (p *Project) GetURL() string { + if p == nil || p.URL == nil { + return "" + } + return *p.URL +} + +// GetColumnURL returns the ColumnURL field if it's non-nil, zero value otherwise. +func (p *ProjectCard) GetColumnURL() string { + if p == nil || p.ColumnURL == nil { + return "" + } + return *p.ColumnURL +} + +// GetContentURL returns the ContentURL field if it's non-nil, zero value otherwise. +func (p *ProjectCard) GetContentURL() string { + if p == nil || p.ContentURL == nil { + return "" + } + return *p.ContentURL +} + +// GetCreatedAt returns the CreatedAt field if it's non-nil, zero value otherwise. +func (p *ProjectCard) GetCreatedAt() Timestamp { + if p == nil || p.CreatedAt == nil { + return Timestamp{} + } + return *p.CreatedAt +} + +// GetID returns the ID field if it's non-nil, zero value otherwise. +func (p *ProjectCard) GetID() int { + if p == nil || p.ID == nil { + return 0 + } + return *p.ID +} + +// GetNote returns the Note field if it's non-nil, zero value otherwise. +func (p *ProjectCard) GetNote() string { + if p == nil || p.Note == nil { + return "" + } + return *p.Note +} + +// GetUpdatedAt returns the UpdatedAt field if it's non-nil, zero value otherwise. +func (p *ProjectCard) GetUpdatedAt() Timestamp { + if p == nil || p.UpdatedAt == nil { + return Timestamp{} + } + return *p.UpdatedAt +} + +// GetAction returns the Action field if it's non-nil, zero value otherwise. 
+func (p *ProjectCardEvent) GetAction() string { + if p == nil || p.Action == nil { + return "" + } + return *p.Action +} + +// GetAfterID returns the AfterID field if it's non-nil, zero value otherwise. +func (p *ProjectCardEvent) GetAfterID() int { + if p == nil || p.AfterID == nil { + return 0 + } + return *p.AfterID +} + +// GetCreatedAt returns the CreatedAt field if it's non-nil, zero value otherwise. +func (p *ProjectColumn) GetCreatedAt() Timestamp { + if p == nil || p.CreatedAt == nil { + return Timestamp{} + } + return *p.CreatedAt +} + +// GetID returns the ID field if it's non-nil, zero value otherwise. +func (p *ProjectColumn) GetID() int { + if p == nil || p.ID == nil { + return 0 + } + return *p.ID +} + +// GetName returns the Name field if it's non-nil, zero value otherwise. +func (p *ProjectColumn) GetName() string { + if p == nil || p.Name == nil { + return "" + } + return *p.Name +} + +// GetProjectURL returns the ProjectURL field if it's non-nil, zero value otherwise. +func (p *ProjectColumn) GetProjectURL() string { + if p == nil || p.ProjectURL == nil { + return "" + } + return *p.ProjectURL +} + +// GetUpdatedAt returns the UpdatedAt field if it's non-nil, zero value otherwise. +func (p *ProjectColumn) GetUpdatedAt() Timestamp { + if p == nil || p.UpdatedAt == nil { + return Timestamp{} + } + return *p.UpdatedAt +} + +// GetAction returns the Action field if it's non-nil, zero value otherwise. +func (p *ProjectColumnEvent) GetAction() string { + if p == nil || p.Action == nil { + return "" + } + return *p.Action +} + +// GetAfterID returns the AfterID field if it's non-nil, zero value otherwise. +func (p *ProjectColumnEvent) GetAfterID() int { + if p == nil || p.AfterID == nil { + return 0 + } + return *p.AfterID +} + +// GetAction returns the Action field if it's non-nil, zero value otherwise. +func (p *ProjectEvent) GetAction() string { + if p == nil || p.Action == nil { + return "" + } + return *p.Action +} + +// GetAdditions returns the Additions field if it's non-nil, zero value otherwise. +func (p *PullRequest) GetAdditions() int { + if p == nil || p.Additions == nil { + return 0 + } + return *p.Additions +} + +// GetBody returns the Body field if it's non-nil, zero value otherwise. +func (p *PullRequest) GetBody() string { + if p == nil || p.Body == nil { + return "" + } + return *p.Body +} + +// GetChangedFiles returns the ChangedFiles field if it's non-nil, zero value otherwise. +func (p *PullRequest) GetChangedFiles() int { + if p == nil || p.ChangedFiles == nil { + return 0 + } + return *p.ChangedFiles +} + +// GetClosedAt returns the ClosedAt field if it's non-nil, zero value otherwise. +func (p *PullRequest) GetClosedAt() time.Time { + if p == nil || p.ClosedAt == nil { + return time.Time{} + } + return *p.ClosedAt +} + +// GetComments returns the Comments field if it's non-nil, zero value otherwise. +func (p *PullRequest) GetComments() int { + if p == nil || p.Comments == nil { + return 0 + } + return *p.Comments +} + +// GetCommits returns the Commits field if it's non-nil, zero value otherwise. +func (p *PullRequest) GetCommits() int { + if p == nil || p.Commits == nil { + return 0 + } + return *p.Commits +} + +// GetCreatedAt returns the CreatedAt field if it's non-nil, zero value otherwise. +func (p *PullRequest) GetCreatedAt() time.Time { + if p == nil || p.CreatedAt == nil { + return time.Time{} + } + return *p.CreatedAt +} + +// GetDeletions returns the Deletions field if it's non-nil, zero value otherwise. 
+func (p *PullRequest) GetDeletions() int { + if p == nil || p.Deletions == nil { + return 0 + } + return *p.Deletions +} + +// GetDiffURL returns the DiffURL field if it's non-nil, zero value otherwise. +func (p *PullRequest) GetDiffURL() string { + if p == nil || p.DiffURL == nil { + return "" + } + return *p.DiffURL +} + +// GetHTMLURL returns the HTMLURL field if it's non-nil, zero value otherwise. +func (p *PullRequest) GetHTMLURL() string { + if p == nil || p.HTMLURL == nil { + return "" + } + return *p.HTMLURL +} + +// GetID returns the ID field if it's non-nil, zero value otherwise. +func (p *PullRequest) GetID() int { + if p == nil || p.ID == nil { + return 0 + } + return *p.ID +} + +// GetIssueURL returns the IssueURL field if it's non-nil, zero value otherwise. +func (p *PullRequest) GetIssueURL() string { + if p == nil || p.IssueURL == nil { + return "" + } + return *p.IssueURL +} + +// GetMergeable returns the Mergeable field if it's non-nil, zero value otherwise. +func (p *PullRequest) GetMergeable() bool { + if p == nil || p.Mergeable == nil { + return false + } + return *p.Mergeable +} + +// GetMerged returns the Merged field if it's non-nil, zero value otherwise. +func (p *PullRequest) GetMerged() bool { + if p == nil || p.Merged == nil { + return false + } + return *p.Merged +} + +// GetMergedAt returns the MergedAt field if it's non-nil, zero value otherwise. +func (p *PullRequest) GetMergedAt() time.Time { + if p == nil || p.MergedAt == nil { + return time.Time{} + } + return *p.MergedAt +} + +// GetNumber returns the Number field if it's non-nil, zero value otherwise. +func (p *PullRequest) GetNumber() int { + if p == nil || p.Number == nil { + return 0 + } + return *p.Number +} + +// GetPatchURL returns the PatchURL field if it's non-nil, zero value otherwise. +func (p *PullRequest) GetPatchURL() string { + if p == nil || p.PatchURL == nil { + return "" + } + return *p.PatchURL +} + +// GetReviewCommentsURL returns the ReviewCommentsURL field if it's non-nil, zero value otherwise. +func (p *PullRequest) GetReviewCommentsURL() string { + if p == nil || p.ReviewCommentsURL == nil { + return "" + } + return *p.ReviewCommentsURL +} + +// GetReviewCommentURL returns the ReviewCommentURL field if it's non-nil, zero value otherwise. +func (p *PullRequest) GetReviewCommentURL() string { + if p == nil || p.ReviewCommentURL == nil { + return "" + } + return *p.ReviewCommentURL +} + +// GetState returns the State field if it's non-nil, zero value otherwise. +func (p *PullRequest) GetState() string { + if p == nil || p.State == nil { + return "" + } + return *p.State +} + +// GetStatusesURL returns the StatusesURL field if it's non-nil, zero value otherwise. +func (p *PullRequest) GetStatusesURL() string { + if p == nil || p.StatusesURL == nil { + return "" + } + return *p.StatusesURL +} + +// GetTitle returns the Title field if it's non-nil, zero value otherwise. +func (p *PullRequest) GetTitle() string { + if p == nil || p.Title == nil { + return "" + } + return *p.Title +} + +// GetUpdatedAt returns the UpdatedAt field if it's non-nil, zero value otherwise. +func (p *PullRequest) GetUpdatedAt() time.Time { + if p == nil || p.UpdatedAt == nil { + return time.Time{} + } + return *p.UpdatedAt +} + +// GetURL returns the URL field if it's non-nil, zero value otherwise. +func (p *PullRequest) GetURL() string { + if p == nil || p.URL == nil { + return "" + } + return *p.URL +} + +// GetLabel returns the Label field if it's non-nil, zero value otherwise. 
+func (p *PullRequestBranch) GetLabel() string { + if p == nil || p.Label == nil { + return "" + } + return *p.Label +} + +// GetRef returns the Ref field if it's non-nil, zero value otherwise. +func (p *PullRequestBranch) GetRef() string { + if p == nil || p.Ref == nil { + return "" + } + return *p.Ref +} + +// GetSHA returns the SHA field if it's non-nil, zero value otherwise. +func (p *PullRequestBranch) GetSHA() string { + if p == nil || p.SHA == nil { + return "" + } + return *p.SHA +} + +// GetBody returns the Body field if it's non-nil, zero value otherwise. +func (p *PullRequestComment) GetBody() string { + if p == nil || p.Body == nil { + return "" + } + return *p.Body +} + +// GetCommitID returns the CommitID field if it's non-nil, zero value otherwise. +func (p *PullRequestComment) GetCommitID() string { + if p == nil || p.CommitID == nil { + return "" + } + return *p.CommitID +} + +// GetCreatedAt returns the CreatedAt field if it's non-nil, zero value otherwise. +func (p *PullRequestComment) GetCreatedAt() time.Time { + if p == nil || p.CreatedAt == nil { + return time.Time{} + } + return *p.CreatedAt +} + +// GetDiffHunk returns the DiffHunk field if it's non-nil, zero value otherwise. +func (p *PullRequestComment) GetDiffHunk() string { + if p == nil || p.DiffHunk == nil { + return "" + } + return *p.DiffHunk +} + +// GetHTMLURL returns the HTMLURL field if it's non-nil, zero value otherwise. +func (p *PullRequestComment) GetHTMLURL() string { + if p == nil || p.HTMLURL == nil { + return "" + } + return *p.HTMLURL +} + +// GetID returns the ID field if it's non-nil, zero value otherwise. +func (p *PullRequestComment) GetID() int { + if p == nil || p.ID == nil { + return 0 + } + return *p.ID +} + +// GetInReplyTo returns the InReplyTo field if it's non-nil, zero value otherwise. +func (p *PullRequestComment) GetInReplyTo() int { + if p == nil || p.InReplyTo == nil { + return 0 + } + return *p.InReplyTo +} + +// GetOriginalCommitID returns the OriginalCommitID field if it's non-nil, zero value otherwise. +func (p *PullRequestComment) GetOriginalCommitID() string { + if p == nil || p.OriginalCommitID == nil { + return "" + } + return *p.OriginalCommitID +} + +// GetOriginalPosition returns the OriginalPosition field if it's non-nil, zero value otherwise. +func (p *PullRequestComment) GetOriginalPosition() int { + if p == nil || p.OriginalPosition == nil { + return 0 + } + return *p.OriginalPosition +} + +// GetPath returns the Path field if it's non-nil, zero value otherwise. +func (p *PullRequestComment) GetPath() string { + if p == nil || p.Path == nil { + return "" + } + return *p.Path +} + +// GetPosition returns the Position field if it's non-nil, zero value otherwise. +func (p *PullRequestComment) GetPosition() int { + if p == nil || p.Position == nil { + return 0 + } + return *p.Position +} + +// GetPullRequestURL returns the PullRequestURL field if it's non-nil, zero value otherwise. +func (p *PullRequestComment) GetPullRequestURL() string { + if p == nil || p.PullRequestURL == nil { + return "" + } + return *p.PullRequestURL +} + +// GetUpdatedAt returns the UpdatedAt field if it's non-nil, zero value otherwise. +func (p *PullRequestComment) GetUpdatedAt() time.Time { + if p == nil || p.UpdatedAt == nil { + return time.Time{} + } + return *p.UpdatedAt +} + +// GetURL returns the URL field if it's non-nil, zero value otherwise. 
+func (p *PullRequestComment) GetURL() string { + if p == nil || p.URL == nil { + return "" + } + return *p.URL +} + +// GetAction returns the Action field if it's non-nil, zero value otherwise. +func (p *PullRequestEvent) GetAction() string { + if p == nil || p.Action == nil { + return "" + } + return *p.Action +} + +// GetNumber returns the Number field if it's non-nil, zero value otherwise. +func (p *PullRequestEvent) GetNumber() int { + if p == nil || p.Number == nil { + return 0 + } + return *p.Number +} + +// GetDiffURL returns the DiffURL field if it's non-nil, zero value otherwise. +func (p *PullRequestLinks) GetDiffURL() string { + if p == nil || p.DiffURL == nil { + return "" + } + return *p.DiffURL +} + +// GetHTMLURL returns the HTMLURL field if it's non-nil, zero value otherwise. +func (p *PullRequestLinks) GetHTMLURL() string { + if p == nil || p.HTMLURL == nil { + return "" + } + return *p.HTMLURL +} + +// GetPatchURL returns the PatchURL field if it's non-nil, zero value otherwise. +func (p *PullRequestLinks) GetPatchURL() string { + if p == nil || p.PatchURL == nil { + return "" + } + return *p.PatchURL +} + +// GetURL returns the URL field if it's non-nil, zero value otherwise. +func (p *PullRequestLinks) GetURL() string { + if p == nil || p.URL == nil { + return "" + } + return *p.URL +} + +// GetMerged returns the Merged field if it's non-nil, zero value otherwise. +func (p *PullRequestMergeResult) GetMerged() bool { + if p == nil || p.Merged == nil { + return false + } + return *p.Merged +} + +// GetMessage returns the Message field if it's non-nil, zero value otherwise. +func (p *PullRequestMergeResult) GetMessage() string { + if p == nil || p.Message == nil { + return "" + } + return *p.Message +} + +// GetSHA returns the SHA field if it's non-nil, zero value otherwise. +func (p *PullRequestMergeResult) GetSHA() string { + if p == nil || p.SHA == nil { + return "" + } + return *p.SHA +} + +// GetBody returns the Body field if it's non-nil, zero value otherwise. +func (p *PullRequestReview) GetBody() string { + if p == nil || p.Body == nil { + return "" + } + return *p.Body +} + +// GetCommitID returns the CommitID field if it's non-nil, zero value otherwise. +func (p *PullRequestReview) GetCommitID() string { + if p == nil || p.CommitID == nil { + return "" + } + return *p.CommitID +} + +// GetHTMLURL returns the HTMLURL field if it's non-nil, zero value otherwise. +func (p *PullRequestReview) GetHTMLURL() string { + if p == nil || p.HTMLURL == nil { + return "" + } + return *p.HTMLURL +} + +// GetID returns the ID field if it's non-nil, zero value otherwise. +func (p *PullRequestReview) GetID() int { + if p == nil || p.ID == nil { + return 0 + } + return *p.ID +} + +// GetPullRequestURL returns the PullRequestURL field if it's non-nil, zero value otherwise. +func (p *PullRequestReview) GetPullRequestURL() string { + if p == nil || p.PullRequestURL == nil { + return "" + } + return *p.PullRequestURL +} + +// GetState returns the State field if it's non-nil, zero value otherwise. +func (p *PullRequestReview) GetState() string { + if p == nil || p.State == nil { + return "" + } + return *p.State +} + +// GetSubmittedAt returns the SubmittedAt field if it's non-nil, zero value otherwise. +func (p *PullRequestReview) GetSubmittedAt() time.Time { + if p == nil || p.SubmittedAt == nil { + return time.Time{} + } + return *p.SubmittedAt +} + +// GetAction returns the Action field if it's non-nil, zero value otherwise. 
+func (p *PullRequestReviewCommentEvent) GetAction() string { + if p == nil || p.Action == nil { + return "" + } + return *p.Action +} + +// GetMessage returns the Message field if it's non-nil, zero value otherwise. +func (p *PullRequestReviewDismissalRequest) GetMessage() string { + if p == nil || p.Message == nil { + return "" + } + return *p.Message +} + +// GetAction returns the Action field if it's non-nil, zero value otherwise. +func (p *PullRequestReviewEvent) GetAction() string { + if p == nil || p.Action == nil { + return "" + } + return *p.Action +} + +// GetBody returns the Body field if it's non-nil, zero value otherwise. +func (p *PullRequestReviewRequest) GetBody() string { + if p == nil || p.Body == nil { + return "" + } + return *p.Body +} + +// GetEvent returns the Event field if it's non-nil, zero value otherwise. +func (p *PullRequestReviewRequest) GetEvent() string { + if p == nil || p.Event == nil { + return "" + } + return *p.Event +} + +// GetBase returns the Base field if it's non-nil, zero value otherwise. +func (p *pullRequestUpdate) GetBase() string { + if p == nil || p.Base == nil { + return "" + } + return *p.Base +} + +// GetBody returns the Body field if it's non-nil, zero value otherwise. +func (p *pullRequestUpdate) GetBody() string { + if p == nil || p.Body == nil { + return "" + } + return *p.Body +} + +// GetState returns the State field if it's non-nil, zero value otherwise. +func (p *pullRequestUpdate) GetState() string { + if p == nil || p.State == nil { + return "" + } + return *p.State +} + +// GetTitle returns the Title field if it's non-nil, zero value otherwise. +func (p *pullRequestUpdate) GetTitle() string { + if p == nil || p.Title == nil { + return "" + } + return *p.Title +} + +// GetCommits returns the Commits field if it's non-nil, zero value otherwise. +func (p *PunchCard) GetCommits() int { + if p == nil || p.Commits == nil { + return 0 + } + return *p.Commits +} + +// GetDay returns the Day field if it's non-nil, zero value otherwise. +func (p *PunchCard) GetDay() int { + if p == nil || p.Day == nil { + return 0 + } + return *p.Day +} + +// GetHour returns the Hour field if it's non-nil, zero value otherwise. +func (p *PunchCard) GetHour() int { + if p == nil || p.Hour == nil { + return 0 + } + return *p.Hour +} + +// GetAfter returns the After field if it's non-nil, zero value otherwise. +func (p *PushEvent) GetAfter() string { + if p == nil || p.After == nil { + return "" + } + return *p.After +} + +// GetBaseRef returns the BaseRef field if it's non-nil, zero value otherwise. +func (p *PushEvent) GetBaseRef() string { + if p == nil || p.BaseRef == nil { + return "" + } + return *p.BaseRef +} + +// GetBefore returns the Before field if it's non-nil, zero value otherwise. +func (p *PushEvent) GetBefore() string { + if p == nil || p.Before == nil { + return "" + } + return *p.Before +} + +// GetCompare returns the Compare field if it's non-nil, zero value otherwise. +func (p *PushEvent) GetCompare() string { + if p == nil || p.Compare == nil { + return "" + } + return *p.Compare +} + +// GetCreated returns the Created field if it's non-nil, zero value otherwise. +func (p *PushEvent) GetCreated() bool { + if p == nil || p.Created == nil { + return false + } + return *p.Created +} + +// GetDeleted returns the Deleted field if it's non-nil, zero value otherwise. 
+func (p *PushEvent) GetDeleted() bool { + if p == nil || p.Deleted == nil { + return false + } + return *p.Deleted +} + +// GetDistinctSize returns the DistinctSize field if it's non-nil, zero value otherwise. +func (p *PushEvent) GetDistinctSize() int { + if p == nil || p.DistinctSize == nil { + return 0 + } + return *p.DistinctSize +} + +// GetForced returns the Forced field if it's non-nil, zero value otherwise. +func (p *PushEvent) GetForced() bool { + if p == nil || p.Forced == nil { + return false + } + return *p.Forced +} + +// GetHead returns the Head field if it's non-nil, zero value otherwise. +func (p *PushEvent) GetHead() string { + if p == nil || p.Head == nil { + return "" + } + return *p.Head +} + +// GetPushID returns the PushID field if it's non-nil, zero value otherwise. +func (p *PushEvent) GetPushID() int { + if p == nil || p.PushID == nil { + return 0 + } + return *p.PushID +} + +// GetRef returns the Ref field if it's non-nil, zero value otherwise. +func (p *PushEvent) GetRef() string { + if p == nil || p.Ref == nil { + return "" + } + return *p.Ref +} + +// GetSize returns the Size field if it's non-nil, zero value otherwise. +func (p *PushEvent) GetSize() int { + if p == nil || p.Size == nil { + return 0 + } + return *p.Size +} + +// GetDistinct returns the Distinct field if it's non-nil, zero value otherwise. +func (p *PushEventCommit) GetDistinct() bool { + if p == nil || p.Distinct == nil { + return false + } + return *p.Distinct +} + +// GetID returns the ID field if it's non-nil, zero value otherwise. +func (p *PushEventCommit) GetID() string { + if p == nil || p.ID == nil { + return "" + } + return *p.ID +} + +// GetMessage returns the Message field if it's non-nil, zero value otherwise. +func (p *PushEventCommit) GetMessage() string { + if p == nil || p.Message == nil { + return "" + } + return *p.Message +} + +// GetSHA returns the SHA field if it's non-nil, zero value otherwise. +func (p *PushEventCommit) GetSHA() string { + if p == nil || p.SHA == nil { + return "" + } + return *p.SHA +} + +// GetTimestamp returns the Timestamp field if it's non-nil, zero value otherwise. +func (p *PushEventCommit) GetTimestamp() Timestamp { + if p == nil || p.Timestamp == nil { + return Timestamp{} + } + return *p.Timestamp +} + +// GetTreeID returns the TreeID field if it's non-nil, zero value otherwise. +func (p *PushEventCommit) GetTreeID() string { + if p == nil || p.TreeID == nil { + return "" + } + return *p.TreeID +} + +// GetURL returns the URL field if it's non-nil, zero value otherwise. +func (p *PushEventCommit) GetURL() string { + if p == nil || p.URL == nil { + return "" + } + return *p.URL +} + +// GetEmail returns the Email field if it's non-nil, zero value otherwise. +func (p *PushEventRepoOwner) GetEmail() string { + if p == nil || p.Email == nil { + return "" + } + return *p.Email +} + +// GetName returns the Name field if it's non-nil, zero value otherwise. +func (p *PushEventRepoOwner) GetName() string { + if p == nil || p.Name == nil { + return "" + } + return *p.Name +} + +// GetCloneURL returns the CloneURL field if it's non-nil, zero value otherwise. +func (p *PushEventRepository) GetCloneURL() string { + if p == nil || p.CloneURL == nil { + return "" + } + return *p.CloneURL +} + +// GetCreatedAt returns the CreatedAt field if it's non-nil, zero value otherwise. 
+func (p *PushEventRepository) GetCreatedAt() Timestamp { + if p == nil || p.CreatedAt == nil { + return Timestamp{} + } + return *p.CreatedAt +} + +// GetDefaultBranch returns the DefaultBranch field if it's non-nil, zero value otherwise. +func (p *PushEventRepository) GetDefaultBranch() string { + if p == nil || p.DefaultBranch == nil { + return "" + } + return *p.DefaultBranch +} + +// GetDescription returns the Description field if it's non-nil, zero value otherwise. +func (p *PushEventRepository) GetDescription() string { + if p == nil || p.Description == nil { + return "" + } + return *p.Description +} + +// GetFork returns the Fork field if it's non-nil, zero value otherwise. +func (p *PushEventRepository) GetFork() bool { + if p == nil || p.Fork == nil { + return false + } + return *p.Fork +} + +// GetForksCount returns the ForksCount field if it's non-nil, zero value otherwise. +func (p *PushEventRepository) GetForksCount() int { + if p == nil || p.ForksCount == nil { + return 0 + } + return *p.ForksCount +} + +// GetFullName returns the FullName field if it's non-nil, zero value otherwise. +func (p *PushEventRepository) GetFullName() string { + if p == nil || p.FullName == nil { + return "" + } + return *p.FullName +} + +// GetGitURL returns the GitURL field if it's non-nil, zero value otherwise. +func (p *PushEventRepository) GetGitURL() string { + if p == nil || p.GitURL == nil { + return "" + } + return *p.GitURL +} + +// GetHasDownloads returns the HasDownloads field if it's non-nil, zero value otherwise. +func (p *PushEventRepository) GetHasDownloads() bool { + if p == nil || p.HasDownloads == nil { + return false + } + return *p.HasDownloads +} + +// GetHasIssues returns the HasIssues field if it's non-nil, zero value otherwise. +func (p *PushEventRepository) GetHasIssues() bool { + if p == nil || p.HasIssues == nil { + return false + } + return *p.HasIssues +} + +// GetHasPages returns the HasPages field if it's non-nil, zero value otherwise. +func (p *PushEventRepository) GetHasPages() bool { + if p == nil || p.HasPages == nil { + return false + } + return *p.HasPages +} + +// GetHasWiki returns the HasWiki field if it's non-nil, zero value otherwise. +func (p *PushEventRepository) GetHasWiki() bool { + if p == nil || p.HasWiki == nil { + return false + } + return *p.HasWiki +} + +// GetHomepage returns the Homepage field if it's non-nil, zero value otherwise. +func (p *PushEventRepository) GetHomepage() string { + if p == nil || p.Homepage == nil { + return "" + } + return *p.Homepage +} + +// GetHTMLURL returns the HTMLURL field if it's non-nil, zero value otherwise. +func (p *PushEventRepository) GetHTMLURL() string { + if p == nil || p.HTMLURL == nil { + return "" + } + return *p.HTMLURL +} + +// GetID returns the ID field if it's non-nil, zero value otherwise. +func (p *PushEventRepository) GetID() int { + if p == nil || p.ID == nil { + return 0 + } + return *p.ID +} + +// GetLanguage returns the Language field if it's non-nil, zero value otherwise. +func (p *PushEventRepository) GetLanguage() string { + if p == nil || p.Language == nil { + return "" + } + return *p.Language +} + +// GetMasterBranch returns the MasterBranch field if it's non-nil, zero value otherwise. +func (p *PushEventRepository) GetMasterBranch() string { + if p == nil || p.MasterBranch == nil { + return "" + } + return *p.MasterBranch +} + +// GetName returns the Name field if it's non-nil, zero value otherwise. 
+func (p *PushEventRepository) GetName() string { + if p == nil || p.Name == nil { + return "" + } + return *p.Name +} + +// GetOpenIssuesCount returns the OpenIssuesCount field if it's non-nil, zero value otherwise. +func (p *PushEventRepository) GetOpenIssuesCount() int { + if p == nil || p.OpenIssuesCount == nil { + return 0 + } + return *p.OpenIssuesCount +} + +// GetOrganization returns the Organization field if it's non-nil, zero value otherwise. +func (p *PushEventRepository) GetOrganization() string { + if p == nil || p.Organization == nil { + return "" + } + return *p.Organization +} + +// GetPrivate returns the Private field if it's non-nil, zero value otherwise. +func (p *PushEventRepository) GetPrivate() bool { + if p == nil || p.Private == nil { + return false + } + return *p.Private +} + +// GetPushedAt returns the PushedAt field if it's non-nil, zero value otherwise. +func (p *PushEventRepository) GetPushedAt() Timestamp { + if p == nil || p.PushedAt == nil { + return Timestamp{} + } + return *p.PushedAt +} + +// GetSize returns the Size field if it's non-nil, zero value otherwise. +func (p *PushEventRepository) GetSize() int { + if p == nil || p.Size == nil { + return 0 + } + return *p.Size +} + +// GetSSHURL returns the SSHURL field if it's non-nil, zero value otherwise. +func (p *PushEventRepository) GetSSHURL() string { + if p == nil || p.SSHURL == nil { + return "" + } + return *p.SSHURL +} + +// GetStargazersCount returns the StargazersCount field if it's non-nil, zero value otherwise. +func (p *PushEventRepository) GetStargazersCount() int { + if p == nil || p.StargazersCount == nil { + return 0 + } + return *p.StargazersCount +} + +// GetStatusesURL returns the StatusesURL field if it's non-nil, zero value otherwise. +func (p *PushEventRepository) GetStatusesURL() string { + if p == nil || p.StatusesURL == nil { + return "" + } + return *p.StatusesURL +} + +// GetSVNURL returns the SVNURL field if it's non-nil, zero value otherwise. +func (p *PushEventRepository) GetSVNURL() string { + if p == nil || p.SVNURL == nil { + return "" + } + return *p.SVNURL +} + +// GetUpdatedAt returns the UpdatedAt field if it's non-nil, zero value otherwise. +func (p *PushEventRepository) GetUpdatedAt() Timestamp { + if p == nil || p.UpdatedAt == nil { + return Timestamp{} + } + return *p.UpdatedAt +} + +// GetURL returns the URL field if it's non-nil, zero value otherwise. +func (p *PushEventRepository) GetURL() string { + if p == nil || p.URL == nil { + return "" + } + return *p.URL +} + +// GetWatchersCount returns the WatchersCount field if it's non-nil, zero value otherwise. +func (p *PushEventRepository) GetWatchersCount() int { + if p == nil || p.WatchersCount == nil { + return 0 + } + return *p.WatchersCount +} + +// GetContent returns the Content field if it's non-nil, zero value otherwise. +func (r *Reaction) GetContent() string { + if r == nil || r.Content == nil { + return "" + } + return *r.Content +} + +// GetID returns the ID field if it's non-nil, zero value otherwise. +func (r *Reaction) GetID() int { + if r == nil || r.ID == nil { + return 0 + } + return *r.ID +} + +// GetConfused returns the Confused field if it's non-nil, zero value otherwise. +func (r *Reactions) GetConfused() int { + if r == nil || r.Confused == nil { + return 0 + } + return *r.Confused +} + +// GetHeart returns the Heart field if it's non-nil, zero value otherwise. 
+func (r *Reactions) GetHeart() int { + if r == nil || r.Heart == nil { + return 0 + } + return *r.Heart +} + +// GetHooray returns the Hooray field if it's non-nil, zero value otherwise. +func (r *Reactions) GetHooray() int { + if r == nil || r.Hooray == nil { + return 0 + } + return *r.Hooray +} + +// GetLaugh returns the Laugh field if it's non-nil, zero value otherwise. +func (r *Reactions) GetLaugh() int { + if r == nil || r.Laugh == nil { + return 0 + } + return *r.Laugh +} + +// GetMinusOne returns the MinusOne field if it's non-nil, zero value otherwise. +func (r *Reactions) GetMinusOne() int { + if r == nil || r.MinusOne == nil { + return 0 + } + return *r.MinusOne +} + +// GetPlusOne returns the PlusOne field if it's non-nil, zero value otherwise. +func (r *Reactions) GetPlusOne() int { + if r == nil || r.PlusOne == nil { + return 0 + } + return *r.PlusOne +} + +// GetTotalCount returns the TotalCount field if it's non-nil, zero value otherwise. +func (r *Reactions) GetTotalCount() int { + if r == nil || r.TotalCount == nil { + return 0 + } + return *r.TotalCount +} + +// GetURL returns the URL field if it's non-nil, zero value otherwise. +func (r *Reactions) GetURL() string { + if r == nil || r.URL == nil { + return "" + } + return *r.URL +} + +// GetRef returns the Ref field if it's non-nil, zero value otherwise. +func (r *Reference) GetRef() string { + if r == nil || r.Ref == nil { + return "" + } + return *r.Ref +} + +// GetURL returns the URL field if it's non-nil, zero value otherwise. +func (r *Reference) GetURL() string { + if r == nil || r.URL == nil { + return "" + } + return *r.URL +} + +// GetBrowserDownloadURL returns the BrowserDownloadURL field if it's non-nil, zero value otherwise. +func (r *ReleaseAsset) GetBrowserDownloadURL() string { + if r == nil || r.BrowserDownloadURL == nil { + return "" + } + return *r.BrowserDownloadURL +} + +// GetContentType returns the ContentType field if it's non-nil, zero value otherwise. +func (r *ReleaseAsset) GetContentType() string { + if r == nil || r.ContentType == nil { + return "" + } + return *r.ContentType +} + +// GetCreatedAt returns the CreatedAt field if it's non-nil, zero value otherwise. +func (r *ReleaseAsset) GetCreatedAt() Timestamp { + if r == nil || r.CreatedAt == nil { + return Timestamp{} + } + return *r.CreatedAt +} + +// GetDownloadCount returns the DownloadCount field if it's non-nil, zero value otherwise. +func (r *ReleaseAsset) GetDownloadCount() int { + if r == nil || r.DownloadCount == nil { + return 0 + } + return *r.DownloadCount +} + +// GetID returns the ID field if it's non-nil, zero value otherwise. +func (r *ReleaseAsset) GetID() int { + if r == nil || r.ID == nil { + return 0 + } + return *r.ID +} + +// GetLabel returns the Label field if it's non-nil, zero value otherwise. +func (r *ReleaseAsset) GetLabel() string { + if r == nil || r.Label == nil { + return "" + } + return *r.Label +} + +// GetName returns the Name field if it's non-nil, zero value otherwise. +func (r *ReleaseAsset) GetName() string { + if r == nil || r.Name == nil { + return "" + } + return *r.Name +} + +// GetSize returns the Size field if it's non-nil, zero value otherwise. +func (r *ReleaseAsset) GetSize() int { + if r == nil || r.Size == nil { + return 0 + } + return *r.Size +} + +// GetState returns the State field if it's non-nil, zero value otherwise. 
+func (r *ReleaseAsset) GetState() string { + if r == nil || r.State == nil { + return "" + } + return *r.State +} + +// GetUpdatedAt returns the UpdatedAt field if it's non-nil, zero value otherwise. +func (r *ReleaseAsset) GetUpdatedAt() Timestamp { + if r == nil || r.UpdatedAt == nil { + return Timestamp{} + } + return *r.UpdatedAt +} + +// GetURL returns the URL field if it's non-nil, zero value otherwise. +func (r *ReleaseAsset) GetURL() string { + if r == nil || r.URL == nil { + return "" + } + return *r.URL +} + +// GetAction returns the Action field if it's non-nil, zero value otherwise. +func (r *ReleaseEvent) GetAction() string { + if r == nil || r.Action == nil { + return "" + } + return *r.Action +} + +// GetFrom returns the From field if it's non-nil, zero value otherwise. +func (r *Rename) GetFrom() string { + if r == nil || r.From == nil { + return "" + } + return *r.From +} + +// GetTo returns the To field if it's non-nil, zero value otherwise. +func (r *Rename) GetTo() string { + if r == nil || r.To == nil { + return "" + } + return *r.To +} + +// GetIncompleteResults returns the IncompleteResults field if it's non-nil, zero value otherwise. +func (r *RepositoriesSearchResult) GetIncompleteResults() bool { + if r == nil || r.IncompleteResults == nil { + return false + } + return *r.IncompleteResults +} + +// GetTotal returns the Total field if it's non-nil, zero value otherwise. +func (r *RepositoriesSearchResult) GetTotal() int { + if r == nil || r.Total == nil { + return 0 + } + return *r.Total +} + +// GetAllowMergeCommit returns the AllowMergeCommit field if it's non-nil, zero value otherwise. +func (r *Repository) GetAllowMergeCommit() bool { + if r == nil || r.AllowMergeCommit == nil { + return false + } + return *r.AllowMergeCommit +} + +// GetAllowRebaseMerge returns the AllowRebaseMerge field if it's non-nil, zero value otherwise. +func (r *Repository) GetAllowRebaseMerge() bool { + if r == nil || r.AllowRebaseMerge == nil { + return false + } + return *r.AllowRebaseMerge +} + +// GetAllowSquashMerge returns the AllowSquashMerge field if it's non-nil, zero value otherwise. +func (r *Repository) GetAllowSquashMerge() bool { + if r == nil || r.AllowSquashMerge == nil { + return false + } + return *r.AllowSquashMerge +} + +// GetArchiveURL returns the ArchiveURL field if it's non-nil, zero value otherwise. +func (r *Repository) GetArchiveURL() string { + if r == nil || r.ArchiveURL == nil { + return "" + } + return *r.ArchiveURL +} + +// GetAssigneesURL returns the AssigneesURL field if it's non-nil, zero value otherwise. +func (r *Repository) GetAssigneesURL() string { + if r == nil || r.AssigneesURL == nil { + return "" + } + return *r.AssigneesURL +} + +// GetAutoInit returns the AutoInit field if it's non-nil, zero value otherwise. +func (r *Repository) GetAutoInit() bool { + if r == nil || r.AutoInit == nil { + return false + } + return *r.AutoInit +} + +// GetBlobsURL returns the BlobsURL field if it's non-nil, zero value otherwise. +func (r *Repository) GetBlobsURL() string { + if r == nil || r.BlobsURL == nil { + return "" + } + return *r.BlobsURL +} + +// GetBranchesURL returns the BranchesURL field if it's non-nil, zero value otherwise. +func (r *Repository) GetBranchesURL() string { + if r == nil || r.BranchesURL == nil { + return "" + } + return *r.BranchesURL +} + +// GetCloneURL returns the CloneURL field if it's non-nil, zero value otherwise. 
+func (r *Repository) GetCloneURL() string { + if r == nil || r.CloneURL == nil { + return "" + } + return *r.CloneURL +} + +// GetCollaboratorsURL returns the CollaboratorsURL field if it's non-nil, zero value otherwise. +func (r *Repository) GetCollaboratorsURL() string { + if r == nil || r.CollaboratorsURL == nil { + return "" + } + return *r.CollaboratorsURL +} + +// GetCommentsURL returns the CommentsURL field if it's non-nil, zero value otherwise. +func (r *Repository) GetCommentsURL() string { + if r == nil || r.CommentsURL == nil { + return "" + } + return *r.CommentsURL +} + +// GetCommitsURL returns the CommitsURL field if it's non-nil, zero value otherwise. +func (r *Repository) GetCommitsURL() string { + if r == nil || r.CommitsURL == nil { + return "" + } + return *r.CommitsURL +} + +// GetCompareURL returns the CompareURL field if it's non-nil, zero value otherwise. +func (r *Repository) GetCompareURL() string { + if r == nil || r.CompareURL == nil { + return "" + } + return *r.CompareURL +} + +// GetContentsURL returns the ContentsURL field if it's non-nil, zero value otherwise. +func (r *Repository) GetContentsURL() string { + if r == nil || r.ContentsURL == nil { + return "" + } + return *r.ContentsURL +} + +// GetContributorsURL returns the ContributorsURL field if it's non-nil, zero value otherwise. +func (r *Repository) GetContributorsURL() string { + if r == nil || r.ContributorsURL == nil { + return "" + } + return *r.ContributorsURL +} + +// GetCreatedAt returns the CreatedAt field if it's non-nil, zero value otherwise. +func (r *Repository) GetCreatedAt() Timestamp { + if r == nil || r.CreatedAt == nil { + return Timestamp{} + } + return *r.CreatedAt +} + +// GetDefaultBranch returns the DefaultBranch field if it's non-nil, zero value otherwise. +func (r *Repository) GetDefaultBranch() string { + if r == nil || r.DefaultBranch == nil { + return "" + } + return *r.DefaultBranch +} + +// GetDeploymentsURL returns the DeploymentsURL field if it's non-nil, zero value otherwise. +func (r *Repository) GetDeploymentsURL() string { + if r == nil || r.DeploymentsURL == nil { + return "" + } + return *r.DeploymentsURL +} + +// GetDescription returns the Description field if it's non-nil, zero value otherwise. +func (r *Repository) GetDescription() string { + if r == nil || r.Description == nil { + return "" + } + return *r.Description +} + +// GetDownloadsURL returns the DownloadsURL field if it's non-nil, zero value otherwise. +func (r *Repository) GetDownloadsURL() string { + if r == nil || r.DownloadsURL == nil { + return "" + } + return *r.DownloadsURL +} + +// GetEventsURL returns the EventsURL field if it's non-nil, zero value otherwise. +func (r *Repository) GetEventsURL() string { + if r == nil || r.EventsURL == nil { + return "" + } + return *r.EventsURL +} + +// GetFork returns the Fork field if it's non-nil, zero value otherwise. +func (r *Repository) GetFork() bool { + if r == nil || r.Fork == nil { + return false + } + return *r.Fork +} + +// GetForksCount returns the ForksCount field if it's non-nil, zero value otherwise. +func (r *Repository) GetForksCount() int { + if r == nil || r.ForksCount == nil { + return 0 + } + return *r.ForksCount +} + +// GetForksURL returns the ForksURL field if it's non-nil, zero value otherwise. +func (r *Repository) GetForksURL() string { + if r == nil || r.ForksURL == nil { + return "" + } + return *r.ForksURL +} + +// GetFullName returns the FullName field if it's non-nil, zero value otherwise. 
+func (r *Repository) GetFullName() string { + if r == nil || r.FullName == nil { + return "" + } + return *r.FullName +} + +// GetGitCommitsURL returns the GitCommitsURL field if it's non-nil, zero value otherwise. +func (r *Repository) GetGitCommitsURL() string { + if r == nil || r.GitCommitsURL == nil { + return "" + } + return *r.GitCommitsURL +} + +// GetGitignoreTemplate returns the GitignoreTemplate field if it's non-nil, zero value otherwise. +func (r *Repository) GetGitignoreTemplate() string { + if r == nil || r.GitignoreTemplate == nil { + return "" + } + return *r.GitignoreTemplate +} + +// GetGitRefsURL returns the GitRefsURL field if it's non-nil, zero value otherwise. +func (r *Repository) GetGitRefsURL() string { + if r == nil || r.GitRefsURL == nil { + return "" + } + return *r.GitRefsURL +} + +// GetGitTagsURL returns the GitTagsURL field if it's non-nil, zero value otherwise. +func (r *Repository) GetGitTagsURL() string { + if r == nil || r.GitTagsURL == nil { + return "" + } + return *r.GitTagsURL +} + +// GetGitURL returns the GitURL field if it's non-nil, zero value otherwise. +func (r *Repository) GetGitURL() string { + if r == nil || r.GitURL == nil { + return "" + } + return *r.GitURL +} + +// GetHasDownloads returns the HasDownloads field if it's non-nil, zero value otherwise. +func (r *Repository) GetHasDownloads() bool { + if r == nil || r.HasDownloads == nil { + return false + } + return *r.HasDownloads +} + +// GetHasIssues returns the HasIssues field if it's non-nil, zero value otherwise. +func (r *Repository) GetHasIssues() bool { + if r == nil || r.HasIssues == nil { + return false + } + return *r.HasIssues +} + +// GetHasPages returns the HasPages field if it's non-nil, zero value otherwise. +func (r *Repository) GetHasPages() bool { + if r == nil || r.HasPages == nil { + return false + } + return *r.HasPages +} + +// GetHasWiki returns the HasWiki field if it's non-nil, zero value otherwise. +func (r *Repository) GetHasWiki() bool { + if r == nil || r.HasWiki == nil { + return false + } + return *r.HasWiki +} + +// GetHomepage returns the Homepage field if it's non-nil, zero value otherwise. +func (r *Repository) GetHomepage() string { + if r == nil || r.Homepage == nil { + return "" + } + return *r.Homepage +} + +// GetHooksURL returns the HooksURL field if it's non-nil, zero value otherwise. +func (r *Repository) GetHooksURL() string { + if r == nil || r.HooksURL == nil { + return "" + } + return *r.HooksURL +} + +// GetHTMLURL returns the HTMLURL field if it's non-nil, zero value otherwise. +func (r *Repository) GetHTMLURL() string { + if r == nil || r.HTMLURL == nil { + return "" + } + return *r.HTMLURL +} + +// GetID returns the ID field if it's non-nil, zero value otherwise. +func (r *Repository) GetID() int { + if r == nil || r.ID == nil { + return 0 + } + return *r.ID +} + +// GetIssueCommentURL returns the IssueCommentURL field if it's non-nil, zero value otherwise. +func (r *Repository) GetIssueCommentURL() string { + if r == nil || r.IssueCommentURL == nil { + return "" + } + return *r.IssueCommentURL +} + +// GetIssueEventsURL returns the IssueEventsURL field if it's non-nil, zero value otherwise. +func (r *Repository) GetIssueEventsURL() string { + if r == nil || r.IssueEventsURL == nil { + return "" + } + return *r.IssueEventsURL +} + +// GetIssuesURL returns the IssuesURL field if it's non-nil, zero value otherwise. 
+func (r *Repository) GetIssuesURL() string { + if r == nil || r.IssuesURL == nil { + return "" + } + return *r.IssuesURL +} + +// GetKeysURL returns the KeysURL field if it's non-nil, zero value otherwise. +func (r *Repository) GetKeysURL() string { + if r == nil || r.KeysURL == nil { + return "" + } + return *r.KeysURL +} + +// GetLabelsURL returns the LabelsURL field if it's non-nil, zero value otherwise. +func (r *Repository) GetLabelsURL() string { + if r == nil || r.LabelsURL == nil { + return "" + } + return *r.LabelsURL +} + +// GetLanguage returns the Language field if it's non-nil, zero value otherwise. +func (r *Repository) GetLanguage() string { + if r == nil || r.Language == nil { + return "" + } + return *r.Language +} + +// GetLanguagesURL returns the LanguagesURL field if it's non-nil, zero value otherwise. +func (r *Repository) GetLanguagesURL() string { + if r == nil || r.LanguagesURL == nil { + return "" + } + return *r.LanguagesURL +} + +// GetLicenseTemplate returns the LicenseTemplate field if it's non-nil, zero value otherwise. +func (r *Repository) GetLicenseTemplate() string { + if r == nil || r.LicenseTemplate == nil { + return "" + } + return *r.LicenseTemplate +} + +// GetMasterBranch returns the MasterBranch field if it's non-nil, zero value otherwise. +func (r *Repository) GetMasterBranch() string { + if r == nil || r.MasterBranch == nil { + return "" + } + return *r.MasterBranch +} + +// GetMergesURL returns the MergesURL field if it's non-nil, zero value otherwise. +func (r *Repository) GetMergesURL() string { + if r == nil || r.MergesURL == nil { + return "" + } + return *r.MergesURL +} + +// GetMilestonesURL returns the MilestonesURL field if it's non-nil, zero value otherwise. +func (r *Repository) GetMilestonesURL() string { + if r == nil || r.MilestonesURL == nil { + return "" + } + return *r.MilestonesURL +} + +// GetMirrorURL returns the MirrorURL field if it's non-nil, zero value otherwise. +func (r *Repository) GetMirrorURL() string { + if r == nil || r.MirrorURL == nil { + return "" + } + return *r.MirrorURL +} + +// GetName returns the Name field if it's non-nil, zero value otherwise. +func (r *Repository) GetName() string { + if r == nil || r.Name == nil { + return "" + } + return *r.Name +} + +// GetNetworkCount returns the NetworkCount field if it's non-nil, zero value otherwise. +func (r *Repository) GetNetworkCount() int { + if r == nil || r.NetworkCount == nil { + return 0 + } + return *r.NetworkCount +} + +// GetNotificationsURL returns the NotificationsURL field if it's non-nil, zero value otherwise. +func (r *Repository) GetNotificationsURL() string { + if r == nil || r.NotificationsURL == nil { + return "" + } + return *r.NotificationsURL +} + +// GetOpenIssuesCount returns the OpenIssuesCount field if it's non-nil, zero value otherwise. +func (r *Repository) GetOpenIssuesCount() int { + if r == nil || r.OpenIssuesCount == nil { + return 0 + } + return *r.OpenIssuesCount +} + +// GetPermissions returns the Permissions field if it's non-nil, zero value otherwise. +func (r *Repository) GetPermissions() map[string]bool { + if r == nil || r.Permissions == nil { + return map[string]bool{} + } + return *r.Permissions +} + +// GetPrivate returns the Private field if it's non-nil, zero value otherwise. +func (r *Repository) GetPrivate() bool { + if r == nil || r.Private == nil { + return false + } + return *r.Private +} + +// GetPullsURL returns the PullsURL field if it's non-nil, zero value otherwise. 
+func (r *Repository) GetPullsURL() string {
+	if r == nil || r.PullsURL == nil {
+		return ""
+	}
+	return *r.PullsURL
+}
+
+// GetPushedAt returns the PushedAt field if it's non-nil, zero value otherwise.
+func (r *Repository) GetPushedAt() Timestamp {
+	if r == nil || r.PushedAt == nil {
+		return Timestamp{}
+	}
+	return *r.PushedAt
+}
+
+// GetReleasesURL returns the ReleasesURL field if it's non-nil, zero value otherwise.
+func (r *Repository) GetReleasesURL() string {
+	if r == nil || r.ReleasesURL == nil {
+		return ""
+	}
+	return *r.ReleasesURL
+}
+
+// GetSize returns the Size field if it's non-nil, zero value otherwise.
+func (r *Repository) GetSize() int {
+	if r == nil || r.Size == nil {
+		return 0
+	}
+	return *r.Size
+}
+
+// GetSSHURL returns the SSHURL field if it's non-nil, zero value otherwise.
+func (r *Repository) GetSSHURL() string {
+	if r == nil || r.SSHURL == nil {
+		return ""
+	}
+	return *r.SSHURL
+}
+
+// GetStargazersCount returns the StargazersCount field if it's non-nil, zero value otherwise.
+func (r *Repository) GetStargazersCount() int {
+	if r == nil || r.StargazersCount == nil {
+		return 0
+	}
+	return *r.StargazersCount
+}
+
+// GetStargazersURL returns the StargazersURL field if it's non-nil, zero value otherwise.
+func (r *Repository) GetStargazersURL() string {
+	if r == nil || r.StargazersURL == nil {
+		return ""
+	}
+	return *r.StargazersURL
+}
+
+// GetStatusesURL returns the StatusesURL field if it's non-nil, zero value otherwise.
+func (r *Repository) GetStatusesURL() string {
+	if r == nil || r.StatusesURL == nil {
+		return ""
+	}
+	return *r.StatusesURL
+}
+
+// GetSubscribersCount returns the SubscribersCount field if it's non-nil, zero value otherwise.
+func (r *Repository) GetSubscribersCount() int {
+	if r == nil || r.SubscribersCount == nil {
+		return 0
+	}
+	return *r.SubscribersCount
+}
+
+// GetSubscribersURL returns the SubscribersURL field if it's non-nil, zero value otherwise.
+func (r *Repository) GetSubscribersURL() string {
+	if r == nil || r.SubscribersURL == nil {
+		return ""
+	}
+	return *r.SubscribersURL
+}
+
+// GetSubscriptionURL returns the SubscriptionURL field if it's non-nil, zero value otherwise.
+func (r *Repository) GetSubscriptionURL() string {
+	if r == nil || r.SubscriptionURL == nil {
+		return ""
+	}
+	return *r.SubscriptionURL
+}
+
+// GetSVNURL returns the SVNURL field if it's non-nil, zero value otherwise.
+func (r *Repository) GetSVNURL() string {
+	if r == nil || r.SVNURL == nil {
+		return ""
+	}
+	return *r.SVNURL
+}
+
+// GetTagsURL returns the TagsURL field if it's non-nil, zero value otherwise.
+func (r *Repository) GetTagsURL() string {
+	if r == nil || r.TagsURL == nil {
+		return ""
+	}
+	return *r.TagsURL
+}
+
+// GetTeamID returns the TeamID field if it's non-nil, zero value otherwise.
+func (r *Repository) GetTeamID() int {
+	if r == nil || r.TeamID == nil {
+		return 0
+	}
+	return *r.TeamID
+}
+
+// GetTeamsURL returns the TeamsURL field if it's non-nil, zero value otherwise.
+func (r *Repository) GetTeamsURL() string {
+	if r == nil || r.TeamsURL == nil {
+		return ""
+	}
+	return *r.TeamsURL
+}
+
+// GetTreesURL returns the TreesURL field if it's non-nil, zero value otherwise.
+func (r *Repository) GetTreesURL() string {
+	if r == nil || r.TreesURL == nil {
+		return ""
+	}
+	return *r.TreesURL
+}
+
+// GetUpdatedAt returns the UpdatedAt field if it's non-nil, zero value otherwise.
+func (r *Repository) GetUpdatedAt() Timestamp {
+	if r == nil || r.UpdatedAt == nil {
+		return Timestamp{}
+	}
+	return *r.UpdatedAt
+}
+
+// GetURL returns the URL field if it's non-nil, zero value otherwise.
+func (r *Repository) GetURL() string {
+	if r == nil || r.URL == nil {
+		return ""
+	}
+	return *r.URL
+}
+
+// GetWatchersCount returns the WatchersCount field if it's non-nil, zero value otherwise.
+func (r *Repository) GetWatchersCount() int {
+	if r == nil || r.WatchersCount == nil {
+		return 0
+	}
+	return *r.WatchersCount
+}
+
+// GetBody returns the Body field if it's non-nil, zero value otherwise.
+func (r *RepositoryComment) GetBody() string {
+	if r == nil || r.Body == nil {
+		return ""
+	}
+	return *r.Body
+}
+
+// GetCommitID returns the CommitID field if it's non-nil, zero value otherwise.
+func (r *RepositoryComment) GetCommitID() string {
+	if r == nil || r.CommitID == nil {
+		return ""
+	}
+	return *r.CommitID
+}
+
+// GetCreatedAt returns the CreatedAt field if it's non-nil, zero value otherwise.
+func (r *RepositoryComment) GetCreatedAt() time.Time {
+	if r == nil || r.CreatedAt == nil {
+		return time.Time{}
+	}
+	return *r.CreatedAt
+}
+
+// GetHTMLURL returns the HTMLURL field if it's non-nil, zero value otherwise.
+func (r *RepositoryComment) GetHTMLURL() string {
+	if r == nil || r.HTMLURL == nil {
+		return ""
+	}
+	return *r.HTMLURL
+}
+
+// GetID returns the ID field if it's non-nil, zero value otherwise.
+func (r *RepositoryComment) GetID() int {
+	if r == nil || r.ID == nil {
+		return 0
+	}
+	return *r.ID
+}
+
+// GetPath returns the Path field if it's non-nil, zero value otherwise.
+func (r *RepositoryComment) GetPath() string {
+	if r == nil || r.Path == nil {
+		return ""
+	}
+	return *r.Path
+}
+
+// GetPosition returns the Position field if it's non-nil, zero value otherwise.
+func (r *RepositoryComment) GetPosition() int {
+	if r == nil || r.Position == nil {
+		return 0
+	}
+	return *r.Position
+}
+
+// GetUpdatedAt returns the UpdatedAt field if it's non-nil, zero value otherwise.
+func (r *RepositoryComment) GetUpdatedAt() time.Time {
+	if r == nil || r.UpdatedAt == nil {
+		return time.Time{}
+	}
+	return *r.UpdatedAt
+}
+
+// GetURL returns the URL field if it's non-nil, zero value otherwise.
+func (r *RepositoryComment) GetURL() string {
+	if r == nil || r.URL == nil {
+		return ""
+	}
+	return *r.URL
+}
+
+// GetCommentsURL returns the CommentsURL field if it's non-nil, zero value otherwise.
+func (r *RepositoryCommit) GetCommentsURL() string {
+	if r == nil || r.CommentsURL == nil {
+		return ""
+	}
+	return *r.CommentsURL
+}
+
+// GetHTMLURL returns the HTMLURL field if it's non-nil, zero value otherwise.
+func (r *RepositoryCommit) GetHTMLURL() string {
+	if r == nil || r.HTMLURL == nil {
+		return ""
+	}
+	return *r.HTMLURL
+}
+
+// GetSHA returns the SHA field if it's non-nil, zero value otherwise.
+func (r *RepositoryCommit) GetSHA() string {
+	if r == nil || r.SHA == nil {
+		return ""
+	}
+	return *r.SHA
+}
+
+// GetURL returns the URL field if it's non-nil, zero value otherwise.
+func (r *RepositoryCommit) GetURL() string {
+	if r == nil || r.URL == nil {
+		return ""
+	}
+	return *r.URL
+}
+
+// GetDownloadURL returns the DownloadURL field if it's non-nil, zero value otherwise.
+func (r *RepositoryContent) GetDownloadURL() string {
+	if r == nil || r.DownloadURL == nil {
+		return ""
+	}
+	return *r.DownloadURL
+}
+
+// GetEncoding returns the Encoding field if it's non-nil, zero value otherwise.
+func (r *RepositoryContent) GetEncoding() string {
+	if r == nil || r.Encoding == nil {
+		return ""
+	}
+	return *r.Encoding
+}
+
+// GetGitURL returns the GitURL field if it's non-nil, zero value otherwise.
+func (r *RepositoryContent) GetGitURL() string {
+	if r == nil || r.GitURL == nil {
+		return ""
+	}
+	return *r.GitURL
+}
+
+// GetHTMLURL returns the HTMLURL field if it's non-nil, zero value otherwise.
+func (r *RepositoryContent) GetHTMLURL() string {
+	if r == nil || r.HTMLURL == nil {
+		return ""
+	}
+	return *r.HTMLURL
+}
+
+// GetName returns the Name field if it's non-nil, zero value otherwise.
+func (r *RepositoryContent) GetName() string {
+	if r == nil || r.Name == nil {
+		return ""
+	}
+	return *r.Name
+}
+
+// GetPath returns the Path field if it's non-nil, zero value otherwise.
+func (r *RepositoryContent) GetPath() string {
+	if r == nil || r.Path == nil {
+		return ""
+	}
+	return *r.Path
+}
+
+// GetSHA returns the SHA field if it's non-nil, zero value otherwise.
+func (r *RepositoryContent) GetSHA() string {
+	if r == nil || r.SHA == nil {
+		return ""
+	}
+	return *r.SHA
+}
+
+// GetSize returns the Size field if it's non-nil, zero value otherwise.
+func (r *RepositoryContent) GetSize() int {
+	if r == nil || r.Size == nil {
+		return 0
+	}
+	return *r.Size
+}
+
+// GetType returns the Type field if it's non-nil, zero value otherwise.
+func (r *RepositoryContent) GetType() string {
+	if r == nil || r.Type == nil {
+		return ""
+	}
+	return *r.Type
+}
+
+// GetURL returns the URL field if it's non-nil, zero value otherwise.
+func (r *RepositoryContent) GetURL() string {
+	if r == nil || r.URL == nil {
+		return ""
+	}
+	return *r.URL
+}
+
+// GetBranch returns the Branch field if it's non-nil, zero value otherwise.
+func (r *RepositoryContentFileOptions) GetBranch() string {
+	if r == nil || r.Branch == nil {
+		return ""
+	}
+	return *r.Branch
+}
+
+// GetMessage returns the Message field if it's non-nil, zero value otherwise.
+func (r *RepositoryContentFileOptions) GetMessage() string {
+	if r == nil || r.Message == nil {
+		return ""
+	}
+	return *r.Message
+}
+
+// GetSHA returns the SHA field if it's non-nil, zero value otherwise.
+func (r *RepositoryContentFileOptions) GetSHA() string {
+	if r == nil || r.SHA == nil {
+		return ""
+	}
+	return *r.SHA
+}
+
+// GetAction returns the Action field if it's non-nil, zero value otherwise.
+func (r *RepositoryEvent) GetAction() string {
+	if r == nil || r.Action == nil {
+		return ""
+	}
+	return *r.Action
+}
+
+// GetCreatedAt returns the CreatedAt field if it's non-nil, zero value otherwise.
+func (r *RepositoryInvitation) GetCreatedAt() Timestamp {
+	if r == nil || r.CreatedAt == nil {
+		return Timestamp{}
+	}
+	return *r.CreatedAt
+}
+
+// GetHTMLURL returns the HTMLURL field if it's non-nil, zero value otherwise.
+func (r *RepositoryInvitation) GetHTMLURL() string {
+	if r == nil || r.HTMLURL == nil {
+		return ""
+	}
+	return *r.HTMLURL
+}
+
+// GetID returns the ID field if it's non-nil, zero value otherwise.
+func (r *RepositoryInvitation) GetID() int {
+	if r == nil || r.ID == nil {
+		return 0
+	}
+	return *r.ID
+}
+
+// GetPermissions returns the Permissions field if it's non-nil, zero value otherwise.
+func (r *RepositoryInvitation) GetPermissions() string {
+	if r == nil || r.Permissions == nil {
+		return ""
+	}
+	return *r.Permissions
+}
+
+// GetURL returns the URL field if it's non-nil, zero value otherwise.
+func (r *RepositoryInvitation) GetURL() string {
+	if r == nil || r.URL == nil {
+		return ""
+	}
+	return *r.URL
+}
+
+// GetContent returns the Content field if it's non-nil, zero value otherwise.
+func (r *RepositoryLicense) GetContent() string {
+	if r == nil || r.Content == nil {
+		return ""
+	}
+	return *r.Content
+}
+
+// GetDownloadURL returns the DownloadURL field if it's non-nil, zero value otherwise.
+func (r *RepositoryLicense) GetDownloadURL() string {
+	if r == nil || r.DownloadURL == nil {
+		return ""
+	}
+	return *r.DownloadURL
+}
+
+// GetEncoding returns the Encoding field if it's non-nil, zero value otherwise.
+func (r *RepositoryLicense) GetEncoding() string {
+	if r == nil || r.Encoding == nil {
+		return ""
+	}
+	return *r.Encoding
+}
+
+// GetGitURL returns the GitURL field if it's non-nil, zero value otherwise.
+func (r *RepositoryLicense) GetGitURL() string {
+	if r == nil || r.GitURL == nil {
+		return ""
+	}
+	return *r.GitURL
+}
+
+// GetHTMLURL returns the HTMLURL field if it's non-nil, zero value otherwise.
+func (r *RepositoryLicense) GetHTMLURL() string {
+	if r == nil || r.HTMLURL == nil {
+		return ""
+	}
+	return *r.HTMLURL
+}
+
+// GetName returns the Name field if it's non-nil, zero value otherwise.
+func (r *RepositoryLicense) GetName() string {
+	if r == nil || r.Name == nil {
+		return ""
+	}
+	return *r.Name
+}
+
+// GetPath returns the Path field if it's non-nil, zero value otherwise.
+func (r *RepositoryLicense) GetPath() string {
+	if r == nil || r.Path == nil {
+		return ""
+	}
+	return *r.Path
+}
+
+// GetSHA returns the SHA field if it's non-nil, zero value otherwise.
+func (r *RepositoryLicense) GetSHA() string {
+	if r == nil || r.SHA == nil {
+		return ""
+	}
+	return *r.SHA
+}
+
+// GetSize returns the Size field if it's non-nil, zero value otherwise.
+func (r *RepositoryLicense) GetSize() int {
+	if r == nil || r.Size == nil {
+		return 0
+	}
+	return *r.Size
+}
+
+// GetType returns the Type field if it's non-nil, zero value otherwise.
+func (r *RepositoryLicense) GetType() string {
+	if r == nil || r.Type == nil {
+		return ""
+	}
+	return *r.Type
+}
+
+// GetURL returns the URL field if it's non-nil, zero value otherwise.
+func (r *RepositoryLicense) GetURL() string {
+	if r == nil || r.URL == nil {
+		return ""
+	}
+	return *r.URL
+}
+
+// GetBase returns the Base field if it's non-nil, zero value otherwise.
+func (r *RepositoryMergeRequest) GetBase() string {
+	if r == nil || r.Base == nil {
+		return ""
+	}
+	return *r.Base
+}
+
+// GetCommitMessage returns the CommitMessage field if it's non-nil, zero value otherwise.
+func (r *RepositoryMergeRequest) GetCommitMessage() string {
+	if r == nil || r.CommitMessage == nil {
+		return ""
+	}
+	return *r.CommitMessage
+}
+
+// GetHead returns the Head field if it's non-nil, zero value otherwise.
+func (r *RepositoryMergeRequest) GetHead() string {
+	if r == nil || r.Head == nil {
+		return ""
+	}
+	return *r.Head
+}
+
+// GetPermission returns the Permission field if it's non-nil, zero value otherwise.
+func (r *RepositoryPermissionLevel) GetPermission() string {
+	if r == nil || r.Permission == nil {
+		return ""
+	}
+	return *r.Permission
+}
+
+// GetAssetsURL returns the AssetsURL field if it's non-nil, zero value otherwise.
+func (r *RepositoryRelease) GetAssetsURL() string {
+	if r == nil || r.AssetsURL == nil {
+		return ""
+	}
+	return *r.AssetsURL
+}
+
+// GetBody returns the Body field if it's non-nil, zero value otherwise.
+func (r *RepositoryRelease) GetBody() string {
+	if r == nil || r.Body == nil {
+		return ""
+	}
+	return *r.Body
+}
+
+// GetCreatedAt returns the CreatedAt field if it's non-nil, zero value otherwise.
+func (r *RepositoryRelease) GetCreatedAt() Timestamp {
+	if r == nil || r.CreatedAt == nil {
+		return Timestamp{}
+	}
+	return *r.CreatedAt
+}
+
+// GetDraft returns the Draft field if it's non-nil, zero value otherwise.
+func (r *RepositoryRelease) GetDraft() bool {
+	if r == nil || r.Draft == nil {
+		return false
+	}
+	return *r.Draft
+}
+
+// GetHTMLURL returns the HTMLURL field if it's non-nil, zero value otherwise.
+func (r *RepositoryRelease) GetHTMLURL() string {
+	if r == nil || r.HTMLURL == nil {
+		return ""
+	}
+	return *r.HTMLURL
+}
+
+// GetID returns the ID field if it's non-nil, zero value otherwise.
+func (r *RepositoryRelease) GetID() int {
+	if r == nil || r.ID == nil {
+		return 0
+	}
+	return *r.ID
+}
+
+// GetName returns the Name field if it's non-nil, zero value otherwise.
+func (r *RepositoryRelease) GetName() string {
+	if r == nil || r.Name == nil {
+		return ""
+	}
+	return *r.Name
+}
+
+// GetPrerelease returns the Prerelease field if it's non-nil, zero value otherwise.
+func (r *RepositoryRelease) GetPrerelease() bool {
+	if r == nil || r.Prerelease == nil {
+		return false
+	}
+	return *r.Prerelease
+}
+
+// GetPublishedAt returns the PublishedAt field if it's non-nil, zero value otherwise.
+func (r *RepositoryRelease) GetPublishedAt() Timestamp {
+	if r == nil || r.PublishedAt == nil {
+		return Timestamp{}
+	}
+	return *r.PublishedAt
+}
+
+// GetTagName returns the TagName field if it's non-nil, zero value otherwise.
+func (r *RepositoryRelease) GetTagName() string {
+	if r == nil || r.TagName == nil {
+		return ""
+	}
+	return *r.TagName
+}
+
+// GetTarballURL returns the TarballURL field if it's non-nil, zero value otherwise.
+func (r *RepositoryRelease) GetTarballURL() string {
+	if r == nil || r.TarballURL == nil {
+		return ""
+	}
+	return *r.TarballURL
+}
+
+// GetTargetCommitish returns the TargetCommitish field if it's non-nil, zero value otherwise.
+func (r *RepositoryRelease) GetTargetCommitish() string {
+	if r == nil || r.TargetCommitish == nil {
+		return ""
+	}
+	return *r.TargetCommitish
+}
+
+// GetUploadURL returns the UploadURL field if it's non-nil, zero value otherwise.
+func (r *RepositoryRelease) GetUploadURL() string {
+	if r == nil || r.UploadURL == nil {
+		return ""
+	}
+	return *r.UploadURL
+}
+
+// GetURL returns the URL field if it's non-nil, zero value otherwise.
+func (r *RepositoryRelease) GetURL() string {
+	if r == nil || r.URL == nil {
+		return ""
+	}
+	return *r.URL
+}
+
+// GetZipballURL returns the ZipballURL field if it's non-nil, zero value otherwise.
+func (r *RepositoryRelease) GetZipballURL() string {
+	if r == nil || r.ZipballURL == nil {
+		return ""
+	}
+	return *r.ZipballURL
+}
+
+// GetName returns the Name field if it's non-nil, zero value otherwise.
+func (r *RepositoryTag) GetName() string {
+	if r == nil || r.Name == nil {
+		return ""
+	}
+	return *r.Name
+}
+
+// GetTarballURL returns the TarballURL field if it's non-nil, zero value otherwise.
+func (r *RepositoryTag) GetTarballURL() string {
+	if r == nil || r.TarballURL == nil {
+		return ""
+	}
+	return *r.TarballURL
+}
+
+// GetZipballURL returns the ZipballURL field if it's non-nil, zero value otherwise.
+func (r *RepositoryTag) GetZipballURL() string {
+	if r == nil || r.ZipballURL == nil {
+		return ""
+	}
+	return *r.ZipballURL
+}
+
+// GetContext returns the Context field if it's non-nil, zero value otherwise.
+func (r *RepoStatus) GetContext() string {
+	if r == nil || r.Context == nil {
+		return ""
+	}
+	return *r.Context
+}
+
+// GetCreatedAt returns the CreatedAt field if it's non-nil, zero value otherwise.
+func (r *RepoStatus) GetCreatedAt() time.Time {
+	if r == nil || r.CreatedAt == nil {
+		return time.Time{}
+	}
+	return *r.CreatedAt
+}
+
+// GetDescription returns the Description field if it's non-nil, zero value otherwise.
+func (r *RepoStatus) GetDescription() string {
+	if r == nil || r.Description == nil {
+		return ""
+	}
+	return *r.Description
+}
+
+// GetID returns the ID field if it's non-nil, zero value otherwise.
+func (r *RepoStatus) GetID() int {
+	if r == nil || r.ID == nil {
+		return 0
+	}
+	return *r.ID
+}
+
+// GetState returns the State field if it's non-nil, zero value otherwise.
+func (r *RepoStatus) GetState() string {
+	if r == nil || r.State == nil {
+		return ""
+	}
+	return *r.State
+}
+
+// GetTargetURL returns the TargetURL field if it's non-nil, zero value otherwise.
+func (r *RepoStatus) GetTargetURL() string {
+	if r == nil || r.TargetURL == nil {
+		return ""
+	}
+	return *r.TargetURL
+}
+
+// GetUpdatedAt returns the UpdatedAt field if it's non-nil, zero value otherwise.
+func (r *RepoStatus) GetUpdatedAt() time.Time {
+	if r == nil || r.UpdatedAt == nil {
+		return time.Time{}
+	}
+	return *r.UpdatedAt
+}
+
+// GetURL returns the URL field if it's non-nil, zero value otherwise.
+func (r *RepoStatus) GetURL() string {
+	if r == nil || r.URL == nil {
+		return ""
+	}
+	return *r.URL
+}
+
+// GetName returns the Name field if it's non-nil, zero value otherwise.
+func (s *ServiceHook) GetName() string {
+	if s == nil || s.Name == nil {
+		return ""
+	}
+	return *s.Name
+}
+
+// GetPayload returns the Payload field if it's non-nil, zero value otherwise.
+func (s *SignatureVerification) GetPayload() string {
+	if s == nil || s.Payload == nil {
+		return ""
+	}
+	return *s.Payload
+}
+
+// GetReason returns the Reason field if it's non-nil, zero value otherwise.
+func (s *SignatureVerification) GetReason() string {
+	if s == nil || s.Reason == nil {
+		return ""
+	}
+	return *s.Reason
+}
+
+// GetSignature returns the Signature field if it's non-nil, zero value otherwise.
+func (s *SignatureVerification) GetSignature() string {
+	if s == nil || s.Signature == nil {
+		return ""
+	}
+	return *s.Signature
+}
+
+// GetVerified returns the Verified field if it's non-nil, zero value otherwise.
+func (s *SignatureVerification) GetVerified() bool {
+	if s == nil || s.Verified == nil {
+		return false
+	}
+	return *s.Verified
+}
+
+// GetID returns the ID field if it's non-nil, zero value otherwise.
+func (s *Source) GetID() int {
+	if s == nil || s.ID == nil {
+		return 0
+	}
+	return *s.ID
+}
+
+// GetURL returns the URL field if it's non-nil, zero value otherwise.
+func (s *Source) GetURL() string {
+	if s == nil || s.URL == nil {
+		return ""
+	}
+	return *s.URL
+}
+
+// GetEmail returns the Email field if it's non-nil, zero value otherwise.
+func (s *SourceImportAuthor) GetEmail() string {
+	if s == nil || s.Email == nil {
+		return ""
+	}
+	return *s.Email
+}
+
+// GetID returns the ID field if it's non-nil, zero value otherwise.
+func (s *SourceImportAuthor) GetID() int {
+	if s == nil || s.ID == nil {
+		return 0
+	}
+	return *s.ID
+}
+
+// GetImportURL returns the ImportURL field if it's non-nil, zero value otherwise.
+func (s *SourceImportAuthor) GetImportURL() string {
+	if s == nil || s.ImportURL == nil {
+		return ""
+	}
+	return *s.ImportURL
+}
+
+// GetName returns the Name field if it's non-nil, zero value otherwise.
+func (s *SourceImportAuthor) GetName() string {
+	if s == nil || s.Name == nil {
+		return ""
+	}
+	return *s.Name
+}
+
+// GetRemoteID returns the RemoteID field if it's non-nil, zero value otherwise.
+func (s *SourceImportAuthor) GetRemoteID() string {
+	if s == nil || s.RemoteID == nil {
+		return ""
+	}
+	return *s.RemoteID
+}
+
+// GetRemoteName returns the RemoteName field if it's non-nil, zero value otherwise.
+func (s *SourceImportAuthor) GetRemoteName() string {
+	if s == nil || s.RemoteName == nil {
+		return ""
+	}
+	return *s.RemoteName
+}
+
+// GetURL returns the URL field if it's non-nil, zero value otherwise.
+func (s *SourceImportAuthor) GetURL() string {
+	if s == nil || s.URL == nil {
+		return ""
+	}
+	return *s.URL
+}
+
+// GetStarredAt returns the StarredAt field if it's non-nil, zero value otherwise.
+func (s *Stargazer) GetStarredAt() Timestamp {
+	if s == nil || s.StarredAt == nil {
+		return Timestamp{}
+	}
+	return *s.StarredAt
+}
+
+// GetStarredAt returns the StarredAt field if it's non-nil, zero value otherwise.
+func (s *StarredRepository) GetStarredAt() Timestamp {
+	if s == nil || s.StarredAt == nil {
+		return Timestamp{}
+	}
+	return *s.StarredAt
+}
+
+// GetExcludeAttachments returns the ExcludeAttachments field if it's non-nil, zero value otherwise.
+func (s *startMigration) GetExcludeAttachments() bool {
+	if s == nil || s.ExcludeAttachments == nil {
+		return false
+	}
+	return *s.ExcludeAttachments
+}
+
+// GetLockRepositories returns the LockRepositories field if it's non-nil, zero value otherwise.
+func (s *startMigration) GetLockRepositories() bool {
+	if s == nil || s.LockRepositories == nil {
+		return false
+	}
+	return *s.LockRepositories
+}
+
+// GetContext returns the Context field if it's non-nil, zero value otherwise.
+func (s *StatusEvent) GetContext() string {
+	if s == nil || s.Context == nil {
+		return ""
+	}
+	return *s.Context
+}
+
+// GetCreatedAt returns the CreatedAt field if it's non-nil, zero value otherwise.
+func (s *StatusEvent) GetCreatedAt() Timestamp {
+	if s == nil || s.CreatedAt == nil {
+		return Timestamp{}
+	}
+	return *s.CreatedAt
+}
+
+// GetDescription returns the Description field if it's non-nil, zero value otherwise.
+func (s *StatusEvent) GetDescription() string {
+	if s == nil || s.Description == nil {
+		return ""
+	}
+	return *s.Description
+}
+
+// GetID returns the ID field if it's non-nil, zero value otherwise.
+func (s *StatusEvent) GetID() int {
+	if s == nil || s.ID == nil {
+		return 0
+	}
+	return *s.ID
+}
+
+// GetName returns the Name field if it's non-nil, zero value otherwise.
+func (s *StatusEvent) GetName() string {
+	if s == nil || s.Name == nil {
+		return ""
+	}
+	return *s.Name
+}
+
+// GetSHA returns the SHA field if it's non-nil, zero value otherwise.
+func (s *StatusEvent) GetSHA() string {
+	if s == nil || s.SHA == nil {
+		return ""
+	}
+	return *s.SHA
+}
+
+// GetState returns the State field if it's non-nil, zero value otherwise.
+func (s *StatusEvent) GetState() string {
+	if s == nil || s.State == nil {
+		return ""
+	}
+	return *s.State
+}
+
+// GetTargetURL returns the TargetURL field if it's non-nil, zero value otherwise.
+func (s *StatusEvent) GetTargetURL() string {
+	if s == nil || s.TargetURL == nil {
+		return ""
+	}
+	return *s.TargetURL
+}
+
+// GetUpdatedAt returns the UpdatedAt field if it's non-nil, zero value otherwise.
+func (s *StatusEvent) GetUpdatedAt() Timestamp {
+	if s == nil || s.UpdatedAt == nil {
+		return Timestamp{}
+	}
+	return *s.UpdatedAt
+}
+
+// GetCreatedAt returns the CreatedAt field if it's non-nil, zero value otherwise.
+func (s *Subscription) GetCreatedAt() Timestamp {
+	if s == nil || s.CreatedAt == nil {
+		return Timestamp{}
+	}
+	return *s.CreatedAt
+}
+
+// GetIgnored returns the Ignored field if it's non-nil, zero value otherwise.
+func (s *Subscription) GetIgnored() bool {
+	if s == nil || s.Ignored == nil {
+		return false
+	}
+	return *s.Ignored
+}
+
+// GetReason returns the Reason field if it's non-nil, zero value otherwise.
+func (s *Subscription) GetReason() string {
+	if s == nil || s.Reason == nil {
+		return ""
+	}
+	return *s.Reason
+}
+
+// GetRepositoryURL returns the RepositoryURL field if it's non-nil, zero value otherwise.
+func (s *Subscription) GetRepositoryURL() string {
+	if s == nil || s.RepositoryURL == nil {
+		return ""
+	}
+	return *s.RepositoryURL
+}
+
+// GetSubscribed returns the Subscribed field if it's non-nil, zero value otherwise.
+func (s *Subscription) GetSubscribed() bool {
+	if s == nil || s.Subscribed == nil {
+		return false
+	}
+	return *s.Subscribed
+}
+
+// GetThreadURL returns the ThreadURL field if it's non-nil, zero value otherwise.
+func (s *Subscription) GetThreadURL() string {
+	if s == nil || s.ThreadURL == nil {
+		return ""
+	}
+	return *s.ThreadURL
+}
+
+// GetURL returns the URL field if it's non-nil, zero value otherwise.
+func (s *Subscription) GetURL() string {
+	if s == nil || s.URL == nil {
+		return ""
+	}
+	return *s.URL
+}
+
+// GetMessage returns the Message field if it's non-nil, zero value otherwise.
+func (t *Tag) GetMessage() string {
+	if t == nil || t.Message == nil {
+		return ""
+	}
+	return *t.Message
+}
+
+// GetSHA returns the SHA field if it's non-nil, zero value otherwise.
+func (t *Tag) GetSHA() string {
+	if t == nil || t.SHA == nil {
+		return ""
+	}
+	return *t.SHA
+}
+
+// GetTag returns the Tag field if it's non-nil, zero value otherwise.
+func (t *Tag) GetTag() string {
+	if t == nil || t.Tag == nil {
+		return ""
+	}
+	return *t.Tag
+}
+
+// GetURL returns the URL field if it's non-nil, zero value otherwise.
+func (t *Tag) GetURL() string {
+	if t == nil || t.URL == nil {
+		return ""
+	}
+	return *t.URL
+}
+
+// GetDescription returns the Description field if it's non-nil, zero value otherwise.
+func (t *Team) GetDescription() string {
+	if t == nil || t.Description == nil {
+		return ""
+	}
+	return *t.Description
+}
+
+// GetID returns the ID field if it's non-nil, zero value otherwise.
+func (t *Team) GetID() int {
+	if t == nil || t.ID == nil {
+		return 0
+	}
+	return *t.ID
+}
+
+// GetMembersCount returns the MembersCount field if it's non-nil, zero value otherwise.
+func (t *Team) GetMembersCount() int {
+	if t == nil || t.MembersCount == nil {
+		return 0
+	}
+	return *t.MembersCount
+}
+
+// GetMembersURL returns the MembersURL field if it's non-nil, zero value otherwise.
+func (t *Team) GetMembersURL() string {
+	if t == nil || t.MembersURL == nil {
+		return ""
+	}
+	return *t.MembersURL
+}
+
+// GetName returns the Name field if it's non-nil, zero value otherwise.
+func (t *Team) GetName() string {
+	if t == nil || t.Name == nil {
+		return ""
+	}
+	return *t.Name
+}
+
+// GetPermission returns the Permission field if it's non-nil, zero value otherwise.
+func (t *Team) GetPermission() string {
+	if t == nil || t.Permission == nil {
+		return ""
+	}
+	return *t.Permission
+}
+
+// GetPrivacy returns the Privacy field if it's non-nil, zero value otherwise.
+func (t *Team) GetPrivacy() string {
+	if t == nil || t.Privacy == nil {
+		return ""
+	}
+	return *t.Privacy
+}
+
+// GetReposCount returns the ReposCount field if it's non-nil, zero value otherwise.
+func (t *Team) GetReposCount() int {
+	if t == nil || t.ReposCount == nil {
+		return 0
+	}
+	return *t.ReposCount
+}
+
+// GetRepositoriesURL returns the RepositoriesURL field if it's non-nil, zero value otherwise.
+func (t *Team) GetRepositoriesURL() string {
+	if t == nil || t.RepositoriesURL == nil {
+		return ""
+	}
+	return *t.RepositoriesURL
+}
+
+// GetSlug returns the Slug field if it's non-nil, zero value otherwise.
+func (t *Team) GetSlug() string {
+	if t == nil || t.Slug == nil {
+		return ""
+	}
+	return *t.Slug
+}
+
+// GetURL returns the URL field if it's non-nil, zero value otherwise.
+func (t *Team) GetURL() string {
+	if t == nil || t.URL == nil {
+		return ""
+	}
+	return *t.URL
+}
+
+// GetDescription returns the Description field if it's non-nil, zero value otherwise.
+func (t *TeamLDAPMapping) GetDescription() string {
+	if t == nil || t.Description == nil {
+		return ""
+	}
+	return *t.Description
+}
+
+// GetID returns the ID field if it's non-nil, zero value otherwise.
+func (t *TeamLDAPMapping) GetID() int {
+	if t == nil || t.ID == nil {
+		return 0
+	}
+	return *t.ID
+}
+
+// GetLDAPDN returns the LDAPDN field if it's non-nil, zero value otherwise.
+func (t *TeamLDAPMapping) GetLDAPDN() string {
+	if t == nil || t.LDAPDN == nil {
+		return ""
+	}
+	return *t.LDAPDN
+}
+
+// GetMembersURL returns the MembersURL field if it's non-nil, zero value otherwise.
+func (t *TeamLDAPMapping) GetMembersURL() string {
+	if t == nil || t.MembersURL == nil {
+		return ""
+	}
+	return *t.MembersURL
+}
+
+// GetName returns the Name field if it's non-nil, zero value otherwise.
+func (t *TeamLDAPMapping) GetName() string {
+	if t == nil || t.Name == nil {
+		return ""
+	}
+	return *t.Name
+}
+
+// GetPermission returns the Permission field if it's non-nil, zero value otherwise.
+func (t *TeamLDAPMapping) GetPermission() string {
+	if t == nil || t.Permission == nil {
+		return ""
+	}
+	return *t.Permission
+}
+
+// GetPrivacy returns the Privacy field if it's non-nil, zero value otherwise.
+func (t *TeamLDAPMapping) GetPrivacy() string {
+	if t == nil || t.Privacy == nil {
+		return ""
+	}
+	return *t.Privacy
+}
+
+// GetRepositoriesURL returns the RepositoriesURL field if it's non-nil, zero value otherwise.
+func (t *TeamLDAPMapping) GetRepositoriesURL() string {
+	if t == nil || t.RepositoriesURL == nil {
+		return ""
+	}
+	return *t.RepositoriesURL
+}
+
+// GetSlug returns the Slug field if it's non-nil, zero value otherwise.
+func (t *TeamLDAPMapping) GetSlug() string {
+	if t == nil || t.Slug == nil {
+		return ""
+	}
+	return *t.Slug
+}
+
+// GetURL returns the URL field if it's non-nil, zero value otherwise.
+func (t *TeamLDAPMapping) GetURL() string {
+	if t == nil || t.URL == nil {
+		return ""
+	}
+	return *t.URL
+}
+
+// GetFragment returns the Fragment field if it's non-nil, zero value otherwise.
+func (t *TextMatch) GetFragment() string {
+	if t == nil || t.Fragment == nil {
+		return ""
+	}
+	return *t.Fragment
+}
+
+// GetObjectType returns the ObjectType field if it's non-nil, zero value otherwise.
+func (t *TextMatch) GetObjectType() string {
+	if t == nil || t.ObjectType == nil {
+		return ""
+	}
+	return *t.ObjectType
+}
+
+// GetObjectURL returns the ObjectURL field if it's non-nil, zero value otherwise.
+func (t *TextMatch) GetObjectURL() string {
+	if t == nil || t.ObjectURL == nil {
+		return ""
+	}
+	return *t.ObjectURL
+}
+
+// GetProperty returns the Property field if it's non-nil, zero value otherwise.
+func (t *TextMatch) GetProperty() string {
+	if t == nil || t.Property == nil {
+		return ""
+	}
+	return *t.Property
+}
+
+// GetCommitID returns the CommitID field if it's non-nil, zero value otherwise.
+func (t *Timeline) GetCommitID() string {
+	if t == nil || t.CommitID == nil {
+		return ""
+	}
+	return *t.CommitID
+}
+
+// GetCommitURL returns the CommitURL field if it's non-nil, zero value otherwise.
+func (t *Timeline) GetCommitURL() string {
+	if t == nil || t.CommitURL == nil {
+		return ""
+	}
+	return *t.CommitURL
+}
+
+// GetCreatedAt returns the CreatedAt field if it's non-nil, zero value otherwise.
+func (t *Timeline) GetCreatedAt() time.Time {
+	if t == nil || t.CreatedAt == nil {
+		return time.Time{}
+	}
+	return *t.CreatedAt
+}
+
+// GetEvent returns the Event field if it's non-nil, zero value otherwise.
+func (t *Timeline) GetEvent() string {
+	if t == nil || t.Event == nil {
+		return ""
+	}
+	return *t.Event
+}
+
+// GetID returns the ID field if it's non-nil, zero value otherwise.
+func (t *Timeline) GetID() int {
+	if t == nil || t.ID == nil {
+		return 0
+	}
+	return *t.ID
+}
+
+// GetURL returns the URL field if it's non-nil, zero value otherwise.
+func (t *Timeline) GetURL() string {
+	if t == nil || t.URL == nil {
+		return ""
+	}
+	return *t.URL
+}
+
+// GetCount returns the Count field if it's non-nil, zero value otherwise.
+func (t *TrafficClones) GetCount() int {
+	if t == nil || t.Count == nil {
+		return 0
+	}
+	return *t.Count
+}
+
+// GetUniques returns the Uniques field if it's non-nil, zero value otherwise.
+func (t *TrafficClones) GetUniques() int {
+	if t == nil || t.Uniques == nil {
+		return 0
+	}
+	return *t.Uniques
+}
+
+// GetCount returns the Count field if it's non-nil, zero value otherwise.
+func (t *TrafficData) GetCount() int {
+	if t == nil || t.Count == nil {
+		return 0
+	}
+	return *t.Count
+}
+
+// GetTimestamp returns the Timestamp field if it's non-nil, zero value otherwise.
+func (t *TrafficData) GetTimestamp() Timestamp {
+	if t == nil || t.Timestamp == nil {
+		return Timestamp{}
+	}
+	return *t.Timestamp
+}
+
+// GetUniques returns the Uniques field if it's non-nil, zero value otherwise.
+func (t *TrafficData) GetUniques() int {
+	if t == nil || t.Uniques == nil {
+		return 0
+	}
+	return *t.Uniques
+}
+
+// GetCount returns the Count field if it's non-nil, zero value otherwise.
+func (t *TrafficPath) GetCount() int {
+	if t == nil || t.Count == nil {
+		return 0
+	}
+	return *t.Count
+}
+
+// GetPath returns the Path field if it's non-nil, zero value otherwise.
+func (t *TrafficPath) GetPath() string {
+	if t == nil || t.Path == nil {
+		return ""
+	}
+	return *t.Path
+}
+
+// GetTitle returns the Title field if it's non-nil, zero value otherwise.
+func (t *TrafficPath) GetTitle() string {
+	if t == nil || t.Title == nil {
+		return ""
+	}
+	return *t.Title
+}
+
+// GetUniques returns the Uniques field if it's non-nil, zero value otherwise.
+func (t *TrafficPath) GetUniques() int {
+	if t == nil || t.Uniques == nil {
+		return 0
+	}
+	return *t.Uniques
+}
+
+// GetCount returns the Count field if it's non-nil, zero value otherwise.
+func (t *TrafficReferrer) GetCount() int {
+	if t == nil || t.Count == nil {
+		return 0
+	}
+	return *t.Count
+}
+
+// GetReferrer returns the Referrer field if it's non-nil, zero value otherwise.
+func (t *TrafficReferrer) GetReferrer() string {
+	if t == nil || t.Referrer == nil {
+		return ""
+	}
+	return *t.Referrer
+}
+
+// GetUniques returns the Uniques field if it's non-nil, zero value otherwise.
+func (t *TrafficReferrer) GetUniques() int {
+	if t == nil || t.Uniques == nil {
+		return 0
+	}
+	return *t.Uniques
+}
+
+// GetCount returns the Count field if it's non-nil, zero value otherwise.
+func (t *TrafficViews) GetCount() int {
+	if t == nil || t.Count == nil {
+		return 0
+	}
+	return *t.Count
+}
+
+// GetUniques returns the Uniques field if it's non-nil, zero value otherwise.
+func (t *TrafficViews) GetUniques() int {
+	if t == nil || t.Uniques == nil {
+		return 0
+	}
+	return *t.Uniques
+}
+
+// GetSHA returns the SHA field if it's non-nil, zero value otherwise.
+func (t *Tree) GetSHA() string {
+	if t == nil || t.SHA == nil {
+		return ""
+	}
+	return *t.SHA
+}
+
+// GetContent returns the Content field if it's non-nil, zero value otherwise.
+func (t *TreeEntry) GetContent() string {
+	if t == nil || t.Content == nil {
+		return ""
+	}
+	return *t.Content
+}
+
+// GetMode returns the Mode field if it's non-nil, zero value otherwise.
+func (t *TreeEntry) GetMode() string {
+	if t == nil || t.Mode == nil {
+		return ""
+	}
+	return *t.Mode
+}
+
+// GetPath returns the Path field if it's non-nil, zero value otherwise.
+func (t *TreeEntry) GetPath() string {
+	if t == nil || t.Path == nil {
+		return ""
+	}
+	return *t.Path
+}
+
+// GetSHA returns the SHA field if it's non-nil, zero value otherwise.
+func (t *TreeEntry) GetSHA() string {
+	if t == nil || t.SHA == nil {
+		return ""
+	}
+	return *t.SHA
+}
+
+// GetSize returns the Size field if it's non-nil, zero value otherwise.
+func (t *TreeEntry) GetSize() int {
+	if t == nil || t.Size == nil {
+		return 0
+	}
+	return *t.Size
+}
+
+// GetType returns the Type field if it's non-nil, zero value otherwise.
+func (t *TreeEntry) GetType() string {
+	if t == nil || t.Type == nil {
+		return ""
+	}
+	return *t.Type
+}
+
+// GetForce returns the Force field if it's non-nil, zero value otherwise.
+func (u *updateRefRequest) GetForce() bool {
+	if u == nil || u.Force == nil {
+		return false
+	}
+	return *u.Force
+}
+
+// GetSHA returns the SHA field if it's non-nil, zero value otherwise.
+func (u *updateRefRequest) GetSHA() string {
+	if u == nil || u.SHA == nil {
+		return ""
+	}
+	return *u.SHA
+}
+
+// GetAvatarURL returns the AvatarURL field if it's non-nil, zero value otherwise.
+func (u *User) GetAvatarURL() string {
+	if u == nil || u.AvatarURL == nil {
+		return ""
+	}
+	return *u.AvatarURL
+}
+
+// GetBio returns the Bio field if it's non-nil, zero value otherwise.
+func (u *User) GetBio() string {
+	if u == nil || u.Bio == nil {
+		return ""
+	}
+	return *u.Bio
+}
+
+// GetBlog returns the Blog field if it's non-nil, zero value otherwise.
+func (u *User) GetBlog() string {
+	if u == nil || u.Blog == nil {
+		return ""
+	}
+	return *u.Blog
+}
+
+// GetCollaborators returns the Collaborators field if it's non-nil, zero value otherwise.
+func (u *User) GetCollaborators() int {
+	if u == nil || u.Collaborators == nil {
+		return 0
+	}
+	return *u.Collaborators
+}
+
+// GetCompany returns the Company field if it's non-nil, zero value otherwise.
+func (u *User) GetCompany() string {
+	if u == nil || u.Company == nil {
+		return ""
+	}
+	return *u.Company
+}
+
+// GetCreatedAt returns the CreatedAt field if it's non-nil, zero value otherwise.
+func (u *User) GetCreatedAt() Timestamp {
+	if u == nil || u.CreatedAt == nil {
+		return Timestamp{}
+	}
+	return *u.CreatedAt
+}
+
+// GetDiskUsage returns the DiskUsage field if it's non-nil, zero value otherwise.
+func (u *User) GetDiskUsage() int {
+	if u == nil || u.DiskUsage == nil {
+		return 0
+	}
+	return *u.DiskUsage
+}
+
+// GetEmail returns the Email field if it's non-nil, zero value otherwise.
+func (u *User) GetEmail() string {
+	if u == nil || u.Email == nil {
+		return ""
+	}
+	return *u.Email
+}
+
+// GetEventsURL returns the EventsURL field if it's non-nil, zero value otherwise.
+func (u *User) GetEventsURL() string {
+	if u == nil || u.EventsURL == nil {
+		return ""
+	}
+	return *u.EventsURL
+}
+
+// GetFollowers returns the Followers field if it's non-nil, zero value otherwise.
+func (u *User) GetFollowers() int {
+	if u == nil || u.Followers == nil {
+		return 0
+	}
+	return *u.Followers
+}
+
+// GetFollowersURL returns the FollowersURL field if it's non-nil, zero value otherwise.
+func (u *User) GetFollowersURL() string {
+	if u == nil || u.FollowersURL == nil {
+		return ""
+	}
+	return *u.FollowersURL
+}
+
+// GetFollowing returns the Following field if it's non-nil, zero value otherwise.
+func (u *User) GetFollowing() int {
+	if u == nil || u.Following == nil {
+		return 0
+	}
+	return *u.Following
+}
+
+// GetFollowingURL returns the FollowingURL field if it's non-nil, zero value otherwise.
+func (u *User) GetFollowingURL() string {
+	if u == nil || u.FollowingURL == nil {
+		return ""
+	}
+	return *u.FollowingURL
+}
+
+// GetGistsURL returns the GistsURL field if it's non-nil, zero value otherwise.
+func (u *User) GetGistsURL() string {
+	if u == nil || u.GistsURL == nil {
+		return ""
+	}
+	return *u.GistsURL
+}
+
+// GetGravatarID returns the GravatarID field if it's non-nil, zero value otherwise.
+func (u *User) GetGravatarID() string {
+	if u == nil || u.GravatarID == nil {
+		return ""
+	}
+	return *u.GravatarID
+}
+
+// GetHireable returns the Hireable field if it's non-nil, zero value otherwise.
+func (u *User) GetHireable() bool {
+	if u == nil || u.Hireable == nil {
+		return false
+	}
+	return *u.Hireable
+}
+
+// GetHTMLURL returns the HTMLURL field if it's non-nil, zero value otherwise.
+func (u *User) GetHTMLURL() string {
+	if u == nil || u.HTMLURL == nil {
+		return ""
+	}
+	return *u.HTMLURL
+}
+
+// GetID returns the ID field if it's non-nil, zero value otherwise.
+func (u *User) GetID() int {
+	if u == nil || u.ID == nil {
+		return 0
+	}
+	return *u.ID
+}
+
+// GetLocation returns the Location field if it's non-nil, zero value otherwise.
+func (u *User) GetLocation() string {
+	if u == nil || u.Location == nil {
+		return ""
+	}
+	return *u.Location
+}
+
+// GetLogin returns the Login field if it's non-nil, zero value otherwise.
+func (u *User) GetLogin() string {
+	if u == nil || u.Login == nil {
+		return ""
+	}
+	return *u.Login
+}
+
+// GetName returns the Name field if it's non-nil, zero value otherwise.
+func (u *User) GetName() string {
+	if u == nil || u.Name == nil {
+		return ""
+	}
+	return *u.Name
+}
+
+// GetOrganizationsURL returns the OrganizationsURL field if it's non-nil, zero value otherwise.
+func (u *User) GetOrganizationsURL() string {
+	if u == nil || u.OrganizationsURL == nil {
+		return ""
+	}
+	return *u.OrganizationsURL
+}
+
+// GetOwnedPrivateRepos returns the OwnedPrivateRepos field if it's non-nil, zero value otherwise.
+func (u *User) GetOwnedPrivateRepos() int {
+	if u == nil || u.OwnedPrivateRepos == nil {
+		return 0
+	}
+	return *u.OwnedPrivateRepos
+}
+
+// GetPermissions returns the Permissions field if it's non-nil, zero value otherwise.
+func (u *User) GetPermissions() map[string]bool {
+	if u == nil || u.Permissions == nil {
+		return map[string]bool{}
+	}
+	return *u.Permissions
+}
+
+// GetPrivateGists returns the PrivateGists field if it's non-nil, zero value otherwise.
+func (u *User) GetPrivateGists() int {
+	if u == nil || u.PrivateGists == nil {
+		return 0
+	}
+	return *u.PrivateGists
+}
+
+// GetPublicGists returns the PublicGists field if it's non-nil, zero value otherwise.
+func (u *User) GetPublicGists() int {
+	if u == nil || u.PublicGists == nil {
+		return 0
+	}
+	return *u.PublicGists
+}
+
+// GetPublicRepos returns the PublicRepos field if it's non-nil, zero value otherwise.
+func (u *User) GetPublicRepos() int {
+	if u == nil || u.PublicRepos == nil {
+		return 0
+	}
+	return *u.PublicRepos
+}
+
+// GetReceivedEventsURL returns the ReceivedEventsURL field if it's non-nil, zero value otherwise.
+func (u *User) GetReceivedEventsURL() string {
+	if u == nil || u.ReceivedEventsURL == nil {
+		return ""
+	}
+	return *u.ReceivedEventsURL
+}
+
+// GetReposURL returns the ReposURL field if it's non-nil, zero value otherwise.
+func (u *User) GetReposURL() string {
+	if u == nil || u.ReposURL == nil {
+		return ""
+	}
+	return *u.ReposURL
+}
+
+// GetSiteAdmin returns the SiteAdmin field if it's non-nil, zero value otherwise.
+func (u *User) GetSiteAdmin() bool {
+	if u == nil || u.SiteAdmin == nil {
+		return false
+	}
+	return *u.SiteAdmin
+}
+
+// GetStarredURL returns the StarredURL field if it's non-nil, zero value otherwise.
+func (u *User) GetStarredURL() string {
+	if u == nil || u.StarredURL == nil {
+		return ""
+	}
+	return *u.StarredURL
+}
+
+// GetSubscriptionsURL returns the SubscriptionsURL field if it's non-nil, zero value otherwise.
+func (u *User) GetSubscriptionsURL() string {
+	if u == nil || u.SubscriptionsURL == nil {
+		return ""
+	}
+	return *u.SubscriptionsURL
+}
+
+// GetSuspendedAt returns the SuspendedAt field if it's non-nil, zero value otherwise.
+func (u *User) GetSuspendedAt() Timestamp {
+	if u == nil || u.SuspendedAt == nil {
+		return Timestamp{}
+	}
+	return *u.SuspendedAt
+}
+
+// GetTotalPrivateRepos returns the TotalPrivateRepos field if it's non-nil, zero value otherwise.
+func (u *User) GetTotalPrivateRepos() int {
+	if u == nil || u.TotalPrivateRepos == nil {
+		return 0
+	}
+	return *u.TotalPrivateRepos
+}
+
+// GetType returns the Type field if it's non-nil, zero value otherwise.
+func (u *User) GetType() string {
+	if u == nil || u.Type == nil {
+		return ""
+	}
+	return *u.Type
+}
+
+// GetUpdatedAt returns the UpdatedAt field if it's non-nil, zero value otherwise.
+func (u *User) GetUpdatedAt() Timestamp {
+	if u == nil || u.UpdatedAt == nil {
+		return Timestamp{}
+	}
+	return *u.UpdatedAt
+}
+
+// GetURL returns the URL field if it's non-nil, zero value otherwise.
+func (u *User) GetURL() string {
+	if u == nil || u.URL == nil {
+		return ""
+	}
+	return *u.URL
+}
+
+// GetEmail returns the Email field if it's non-nil, zero value otherwise.
+func (u *UserEmail) GetEmail() string {
+	if u == nil || u.Email == nil {
+		return ""
+	}
+	return *u.Email
+}
+
+// GetPrimary returns the Primary field if it's non-nil, zero value otherwise.
+func (u *UserEmail) GetPrimary() bool {
+	if u == nil || u.Primary == nil {
+		return false
+	}
+	return *u.Primary
+}
+
+// GetVerified returns the Verified field if it's non-nil, zero value otherwise.
+func (u *UserEmail) GetVerified() bool {
+	if u == nil || u.Verified == nil {
+		return false
+	}
+	return *u.Verified
+}
+
+// GetAvatarURL returns the AvatarURL field if it's non-nil, zero value otherwise.
+func (u *UserLDAPMapping) GetAvatarURL() string {
+	if u == nil || u.AvatarURL == nil {
+		return ""
+	}
+	return *u.AvatarURL
+}
+
+// GetEventsURL returns the EventsURL field if it's non-nil, zero value otherwise.
+func (u *UserLDAPMapping) GetEventsURL() string {
+	if u == nil || u.EventsURL == nil {
+		return ""
+	}
+	return *u.EventsURL
+}
+
+// GetFollowersURL returns the FollowersURL field if it's non-nil, zero value otherwise.
+func (u *UserLDAPMapping) GetFollowersURL() string {
+	if u == nil || u.FollowersURL == nil {
+		return ""
+	}
+	return *u.FollowersURL
+}
+
+// GetFollowingURL returns the FollowingURL field if it's non-nil, zero value otherwise.
+func (u *UserLDAPMapping) GetFollowingURL() string {
+	if u == nil || u.FollowingURL == nil {
+		return ""
+	}
+	return *u.FollowingURL
+}
+
+// GetGistsURL returns the GistsURL field if it's non-nil, zero value otherwise.
+func (u *UserLDAPMapping) GetGistsURL() string {
+	if u == nil || u.GistsURL == nil {
+		return ""
+	}
+	return *u.GistsURL
+}
+
+// GetGravatarID returns the GravatarID field if it's non-nil, zero value otherwise.
+func (u *UserLDAPMapping) GetGravatarID() string {
+	if u == nil || u.GravatarID == nil {
+		return ""
+	}
+	return *u.GravatarID
+}
+
+// GetID returns the ID field if it's non-nil, zero value otherwise.
+func (u *UserLDAPMapping) GetID() int {
+	if u == nil || u.ID == nil {
+		return 0
+	}
+	return *u.ID
+}
+
+// GetLDAPDN returns the LDAPDN field if it's non-nil, zero value otherwise.
+func (u *UserLDAPMapping) GetLDAPDN() string {
+	if u == nil || u.LDAPDN == nil {
+		return ""
+	}
+	return *u.LDAPDN
+}
+
+// GetLogin returns the Login field if it's non-nil, zero value otherwise.
+func (u *UserLDAPMapping) GetLogin() string {
+	if u == nil || u.Login == nil {
+		return ""
+	}
+	return *u.Login
+}
+
+// GetOrganizationsURL returns the OrganizationsURL field if it's non-nil, zero value otherwise.
+func (u *UserLDAPMapping) GetOrganizationsURL() string {
+	if u == nil || u.OrganizationsURL == nil {
+		return ""
+	}
+	return *u.OrganizationsURL
+}
+
+// GetReceivedEventsURL returns the ReceivedEventsURL field if it's non-nil, zero value otherwise.
+func (u *UserLDAPMapping) GetReceivedEventsURL() string {
+	if u == nil || u.ReceivedEventsURL == nil {
+		return ""
+	}
+	return *u.ReceivedEventsURL
+}
+
+// GetReposURL returns the ReposURL field if it's non-nil, zero value otherwise.
+func (u *UserLDAPMapping) GetReposURL() string {
+	if u == nil || u.ReposURL == nil {
+		return ""
+	}
+	return *u.ReposURL
+}
+
+// GetSiteAdmin returns the SiteAdmin field if it's non-nil, zero value otherwise.
+func (u *UserLDAPMapping) GetSiteAdmin() bool {
+	if u == nil || u.SiteAdmin == nil {
+		return false
+	}
+	return *u.SiteAdmin
+}
+
+// GetStarredURL returns the StarredURL field if it's non-nil, zero value otherwise.
+func (u *UserLDAPMapping) GetStarredURL() string {
+	if u == nil || u.StarredURL == nil {
+		return ""
+	}
+	return *u.StarredURL
+}
+
+// GetSubscriptionsURL returns the SubscriptionsURL field if it's non-nil, zero value otherwise.
+func (u *UserLDAPMapping) GetSubscriptionsURL() string {
+	if u == nil || u.SubscriptionsURL == nil {
+		return ""
+	}
+	return *u.SubscriptionsURL
+}
+
+// GetType returns the Type field if it's non-nil, zero value otherwise.
+func (u *UserLDAPMapping) GetType() string {
+	if u == nil || u.Type == nil {
+		return ""
+	}
+	return *u.Type
+}
+
+// GetURL returns the URL field if it's non-nil, zero value otherwise.
+func (u *UserLDAPMapping) GetURL() string {
+	if u == nil || u.URL == nil {
+		return ""
+	}
+	return *u.URL
+}
+
+// GetIncompleteResults returns the IncompleteResults field if it's non-nil, zero value otherwise.
+func (u *UsersSearchResult) GetIncompleteResults() bool {
+	if u == nil || u.IncompleteResults == nil {
+		return false
+	}
+	return *u.IncompleteResults
+}
+
+// GetTotal returns the Total field if it's non-nil, zero value otherwise.
+func (u *UsersSearchResult) GetTotal() int {
+	if u == nil || u.Total == nil {
+		return 0
+	}
+	return *u.Total
+}
+
+// GetAction returns the Action field if it's non-nil, zero value otherwise.
+func (w *WatchEvent) GetAction() string {
+	if w == nil || w.Action == nil {
+		return ""
+	}
+	return *w.Action
+}
+
+// GetEmail returns the Email field if it's non-nil, zero value otherwise.
+func (w *WebHookAuthor) GetEmail() string {
+	if w == nil || w.Email == nil {
+		return ""
+	}
+	return *w.Email
+}
+
+// GetName returns the Name field if it's non-nil, zero value otherwise.
+func (w *WebHookAuthor) GetName() string {
+	if w == nil || w.Name == nil {
+		return ""
+	}
+	return *w.Name
+}
+
+// GetUsername returns the Username field if it's non-nil, zero value otherwise.
+func (w *WebHookAuthor) GetUsername() string {
+	if w == nil || w.Username == nil {
+		return ""
+	}
+	return *w.Username
+}
+
+// GetDistinct returns the Distinct field if it's non-nil, zero value otherwise.
+func (w *WebHookCommit) GetDistinct() bool {
+	if w == nil || w.Distinct == nil {
+		return false
+	}
+	return *w.Distinct
+}
+
+// GetID returns the ID field if it's non-nil, zero value otherwise.
+func (w *WebHookCommit) GetID() string {
+	if w == nil || w.ID == nil {
+		return ""
+	}
+	return *w.ID
+}
+
+// GetMessage returns the Message field if it's non-nil, zero value otherwise.
+func (w *WebHookCommit) GetMessage() string {
+	if w == nil || w.Message == nil {
+		return ""
+	}
+	return *w.Message
+}
+
+// GetTimestamp returns the Timestamp field if it's non-nil, zero value otherwise.
+func (w *WebHookCommit) GetTimestamp() time.Time {
+	if w == nil || w.Timestamp == nil {
+		return time.Time{}
+	}
+	return *w.Timestamp
+}
+
+// GetAfter returns the After field if it's non-nil, zero value otherwise.
+func (w *WebHookPayload) GetAfter() string {
+	if w == nil || w.After == nil {
+		return ""
+	}
+	return *w.After
+}
+
+// GetBefore returns the Before field if it's non-nil, zero value otherwise.
+func (w *WebHookPayload) GetBefore() string {
+	if w == nil || w.Before == nil {
+		return ""
+	}
+	return *w.Before
+}
+
+// GetCompare returns the Compare field if it's non-nil, zero value otherwise.
+func (w *WebHookPayload) GetCompare() string {
+	if w == nil || w.Compare == nil {
+		return ""
+	}
+	return *w.Compare
+}
+
+// GetCreated returns the Created field if it's non-nil, zero value otherwise.
+func (w *WebHookPayload) GetCreated() bool {
+	if w == nil || w.Created == nil {
+		return false
+	}
+	return *w.Created
+}
+
+// GetDeleted returns the Deleted field if it's non-nil, zero value otherwise.
+func (w *WebHookPayload) GetDeleted() bool {
+	if w == nil || w.Deleted == nil {
+		return false
+	}
+	return *w.Deleted
+}
+
+// GetForced returns the Forced field if it's non-nil, zero value otherwise.
+func (w *WebHookPayload) GetForced() bool {
+	if w == nil || w.Forced == nil {
+		return false
+	}
+	return *w.Forced
+}
+
+// GetRef returns the Ref field if it's non-nil, zero value otherwise.
+func (w *WebHookPayload) GetRef() string {
+	if w == nil || w.Ref == nil {
+		return ""
+	}
+	return *w.Ref
+}
+
+// GetTotal returns the Total field if it's non-nil, zero value otherwise.
+func (w *WeeklyCommitActivity) GetTotal() int {
+	if w == nil || w.Total == nil {
+		return 0
+	}
+	return *w.Total
+}
+
+// GetWeek returns the Week field if it's non-nil, zero value otherwise.
+func (w *WeeklyCommitActivity) GetWeek() Timestamp {
+	if w == nil || w.Week == nil {
+		return Timestamp{}
+	}
+	return *w.Week
+}
+
+// GetAdditions returns the Additions field if it's non-nil, zero value otherwise.
+func (w *WeeklyStats) GetAdditions() int {
+	if w == nil || w.Additions == nil {
+		return 0
+	}
+	return *w.Additions
+}
+
+// GetCommits returns the Commits field if it's non-nil, zero value otherwise.
+func (w *WeeklyStats) GetCommits() int {
+	if w == nil || w.Commits == nil {
+		return 0
+	}
+	return *w.Commits
+}
+
+// GetDeletions returns the Deletions field if it's non-nil, zero value otherwise.
+func (w *WeeklyStats) GetDeletions() int {
+	if w == nil || w.Deletions == nil {
+		return 0
+	}
+	return *w.Deletions
+}
+
+// GetWeek returns the Week field if it's non-nil, zero value otherwise.
+func (w *WeeklyStats) GetWeek() Timestamp {
+	if w == nil || w.Week == nil {
+		return Timestamp{}
+	}
+	return *w.Week
+}
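The github.go diff that follows ties this generated file to its generator with a //go:generate directive. As a hedged sketch (the generated file's exact name is not visible in this diff), regeneration would look like:

// The directive added in the hunk below names the generator; running
// `go generate` in the vendored package directory re-runs it and
// rewrites the generated accessors file above.
//go:generate go run gen-accessors.go
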
diff --git a/vendor/github.com/google/go-github/github/github.go b/vendor/github.com/google/go-github/github/github.go
index 640aec788b..848814265e 100644
--- a/vendor/github.com/google/go-github/github/github.go
+++ b/vendor/github.com/google/go-github/github/github.go
@@ -3,10 +3,13 @@
 // Use of this source code is governed by a BSD-style
 // license that can be found in the LICENSE file.
 
+//go:generate go run gen-accessors.go
+
 package github
 
 import (
 	"bytes"
+	"context"
 	"encoding/json"
 	"errors"
 	"fmt"
@@ -24,12 +27,7 @@ import (
 )
 
 const (
-	// StatusUnprocessableEntity is the status code returned when sending a request with invalid fields.
-	StatusUnprocessableEntity = 422
-)
-
-const (
-	libraryVersion = "0.1"
+	libraryVersion = "4"
 	defaultBaseURL = "https://api.github.com/"
 	uploadBaseURL  = "https://uploads.github.com/"
 	userAgent      = "go-github/" + libraryVersion
@@ -39,8 +37,12 @@ const (
 	headerRateReset = "X-RateLimit-Reset"
 	headerOTP       = "X-GitHub-OTP"
 
-	mediaTypeV3      = "application/vnd.github.v3+json"
-	defaultMediaType = "application/octet-stream"
+	mediaTypeV3                = "application/vnd.github.v3+json"
+	defaultMediaType           = "application/octet-stream"
+	mediaTypeV3SHA             = "application/vnd.github.v3.sha"
+	mediaTypeV3Diff            = "application/vnd.github.v3.diff"
+	mediaTypeV3Patch           = "application/vnd.github.v3.patch"
+	mediaTypeOrgPermissionRepo = "application/vnd.github.v3.repository+json"
 
 	// Media Type values to access preview APIs
 
@@ -50,24 +52,60 @@ const (
 	// https://developer.github.com/changes/2014-12-09-new-attributes-for-stars-api/
 	mediaTypeStarringPreview = "application/vnd.github.v3.star+json"
 
-	// https://developer.github.com/changes/2015-06-24-api-enhancements-for-working-with-organization-permissions/
-	mediaTypeOrgPermissionPreview     = "application/vnd.github.ironman-preview+json"
-	mediaTypeOrgPermissionRepoPreview = "application/vnd.github.ironman-preview.repository+json"
-
 	// https://developer.github.com/changes/2015-11-11-protected-branches-api/
 	mediaTypeProtectedBranchesPreview = "application/vnd.github.loki-preview+json"
 
-	// https://developer.github.com/changes/2016-02-11-issue-locking-api/
-	mediaTypeIssueLockingPreview = "application/vnd.github.the-key-preview+json"
+	// https://help.github.com/enterprise/2.4/admin/guides/migrations/exporting-the-github-com-organization-s-repositories/
+	mediaTypeMigrationsPreview = "application/vnd.github.wyandotte-preview+json"
+
+	// https://developer.github.com/changes/2016-04-06-deployment-and-deployment-status-enhancements/
+	mediaTypeDeploymentStatusPreview = "application/vnd.github.ant-man-preview+json"
+
+	// https://developer.github.com/changes/2016-02-19-source-import-preview-api/
+	mediaTypeImportPreview = "application/vnd.github.barred-rock-preview"
+
+	// https://developer.github.com/changes/2016-05-12-reactions-api-preview/
+	mediaTypeReactionsPreview = "application/vnd.github.squirrel-girl-preview"
+
+	// https://developer.github.com/changes/2016-04-01-squash-api-preview/
+	// https://developer.github.com/changes/2016-09-26-pull-request-merge-api-update/
+	mediaTypeSquashPreview = "application/vnd.github.polaris-preview+json"
+
+	// https://developer.github.com/changes/2016-04-04-git-signing-api-preview/
+	mediaTypeGitSigningPreview = "application/vnd.github.cryptographer-preview+json"
+
+	// https://developer.github.com/changes/2016-05-23-timeline-preview-api/
+	mediaTypeTimelinePreview = "application/vnd.github.mockingbird-preview+json"
+
+	// https://developer.github.com/changes/2016-06-14-repository-invitations/
+	mediaTypeRepositoryInvitationsPreview = "application/vnd.github.swamp-thing-preview+json"
+
+	// https://developer.github.com/changes/2016-07-06-github-pages-preiew-api/
+	mediaTypePagesPreview = "application/vnd.github.mister-fantastic-preview+json"
+
+	// https://developer.github.com/changes/2016-09-14-projects-api/
+	mediaTypeProjectsPreview = "application/vnd.github.inertia-preview+json"
+
+	// https://developer.github.com/changes/2016-09-14-Integrations-Early-Access/
+	mediaTypeIntegrationPreview = "application/vnd.github.machine-man-preview+json"
+
+	// https://developer.github.com/changes/2016-11-28-preview-org-membership/
+	mediaTypeOrgMembershipPreview = "application/vnd.github.korra-preview+json"
+
+	// https://developer.github.com/changes/2017-01-05-commit-search-api/
+	mediaTypeCommitSearchPreview = "application/vnd.github.cloak-preview+json"
+
+	// https://developer.github.com/changes/2016-12-14-reviews-api/
+	mediaTypePullRequestReviewsPreview = "application/vnd.github.black-cat-preview+json"
 )
 
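Each preview constant above is an Accept header value that opts a single request in to a pre-release API. A hedged sketch of how a helper inside this package might apply one (the helper name and endpoint path are illustrative, not part of the vendored code):

// listReactionsRequest builds the request as usual, then replaces the
// default Accept header with the preview media type for this one call;
// stable endpoints keep the default mediaTypeV3 value.
func (c *Client) listReactionsRequest() (*http.Request, error) {
	req, err := c.NewRequest("GET", "repos/octocat/hello-world/issues/1/reactions", nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Accept", mediaTypeReactionsPreview)
	return req, nil
}
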
mediaTypeOrgMembershipPreview = "application/vnd.github.korra-preview+json" + + // https://developer.github.com/changes/2017-01-05-commit-search-api/ + mediaTypeCommitSearchPreview = "application/vnd.github.cloak-preview+json" + + // https://developer.github.com/changes/2016-12-14-reviews-api/ + mediaTypePullRequestReviewsPreview = "application/vnd.github.black-cat-preview+json" ) // A Client manages communication with the GitHub API. type Client struct { - // HTTP client used to communicate with the API. - client *http.Client + clientMu sync.Mutex // clientMu protects the client during calls that modify the CheckRedirect func. + client *http.Client // HTTP client used to communicate with the API. - // Base URL for API requests. Defaults to the public GitHub API, but can be - // set to a domain endpoint to use with GitHub Enterprise. BaseURL should + // Base URL for API requests. Defaults to the public GitHub API, but can be + // set to a domain endpoint to use with GitHub Enterprise. BaseURL should // always be specified with a trailing slash. BaseURL *url.URL @@ -77,21 +115,33 @@ type Client struct { // User agent used when communicating with the GitHub API. UserAgent string - rateMu sync.Mutex - rate Rate // Rate limit for the client as determined by the most recent API call. + rateMu sync.Mutex + rateLimits [categories]Rate // Rate limits for the client as determined by the most recent API calls. + + common service // Reuse a single struct instead of allocating one for each service on the heap. // Services used for talking to different parts of the GitHub API. - Activity *ActivityService - Gists *GistsService - Git *GitService - Gitignores *GitignoresService - Issues *IssuesService - Organizations *OrganizationsService - PullRequests *PullRequestsService - Repositories *RepositoriesService - Search *SearchService - Users *UsersService - Licenses *LicensesService + Activity *ActivityService + Admin *AdminService + Authorizations *AuthorizationsService + Gists *GistsService + Git *GitService + Gitignores *GitignoresService + Integrations *IntegrationsService + Issues *IssuesService + Organizations *OrganizationsService + Projects *ProjectsService + PullRequests *PullRequestsService + Repositories *RepositoriesService + Search *SearchService + Users *UsersService + Licenses *LicensesService + Migrations *MigrationService + Reactions *ReactionsService +} + +type service struct { + client *Client } // ListOptions specifies the optional parameters to various List methods that @@ -109,7 +159,23 @@ type UploadOptions struct { Name string `url:"name,omitempty"` } -// addOptions adds the parameters in opt as URL query parameters to s. opt +// RawType represents type of raw format of a request instead of JSON. +type RawType uint8 + +const ( + // Diff format. + Diff RawType = 1 + iota + // Patch format. + Patch +) + +// RawOptions specifies parameters when user wants to get raw format of +// a response instead of JSON. +type RawOptions struct { + Type RawType +} + +// addOptions adds the parameters in opt as URL query parameters to s. opt // must be a struct whose fields may contain "url" tags. func addOptions(s string, opt interface{}) (string, error) { v := reflect.ValueOf(opt) @@ -131,8 +197,8 @@ func addOptions(s string, opt interface{}) (string, error) { return u.String(), nil } -// NewClient returns a new GitHub API client. If a nil httpClient is -// provided, http.DefaultClient will be used. To use API methods which require +// NewClient returns a new GitHub API client. 
If a nil httpClient is +// provided, http.DefaultClient will be used. To use API methods which require // authentication, provide an http.Client that will perform the authentication // for you (such as that provided by the golang.org/x/oauth2 library). func NewClient(httpClient *http.Client) *Client { @@ -143,23 +209,30 @@ func NewClient(httpClient *http.Client) *Client { uploadURL, _ := url.Parse(uploadBaseURL) c := &Client{client: httpClient, BaseURL: baseURL, UserAgent: userAgent, UploadURL: uploadURL} - c.Activity = &ActivityService{client: c} - c.Gists = &GistsService{client: c} - c.Git = &GitService{client: c} - c.Gitignores = &GitignoresService{client: c} - c.Issues = &IssuesService{client: c} - c.Organizations = &OrganizationsService{client: c} - c.PullRequests = &PullRequestsService{client: c} - c.Repositories = &RepositoriesService{client: c} - c.Search = &SearchService{client: c} - c.Users = &UsersService{client: c} - c.Licenses = &LicensesService{client: c} + c.common.client = c + c.Activity = (*ActivityService)(&c.common) + c.Admin = (*AdminService)(&c.common) + c.Authorizations = (*AuthorizationsService)(&c.common) + c.Gists = (*GistsService)(&c.common) + c.Git = (*GitService)(&c.common) + c.Gitignores = (*GitignoresService)(&c.common) + c.Integrations = (*IntegrationsService)(&c.common) + c.Issues = (*IssuesService)(&c.common) + c.Licenses = (*LicensesService)(&c.common) + c.Migrations = (*MigrationService)(&c.common) + c.Organizations = (*OrganizationsService)(&c.common) + c.Projects = (*ProjectsService)(&c.common) + c.PullRequests = (*PullRequestsService)(&c.common) + c.Reactions = (*ReactionsService)(&c.common) + c.Repositories = (*RepositoriesService)(&c.common) + c.Search = (*SearchService)(&c.common) + c.Users = (*UsersService)(&c.common) return c } // NewRequest creates an API request. A relative URL can be provided in urlStr, // in which case it is resolved relative to the BaseURL of the Client. -// Relative URLs should always be specified without a preceding slash. If +// Relative URLs should always be specified without a preceding slash. If // specified, the value pointed to by body is JSON encoded and included as the // request body. func (c *Client) NewRequest(method, urlStr string, body interface{}) (*http.Request, error) { @@ -184,9 +257,12 @@ func (c *Client) NewRequest(method, urlStr string, body interface{}) (*http.Requ return nil, err } - req.Header.Add("Accept", mediaTypeV3) + if body != nil { + req.Header.Set("Content-Type", "application/json") + } + req.Header.Set("Accept", mediaTypeV3) if c.UserAgent != "" { - req.Header.Add("User-Agent", c.UserAgent) + req.Header.Set("User-Agent", c.UserAgent) } return req, nil } @@ -207,23 +283,23 @@ func (c *Client) NewUploadRequest(urlStr string, reader io.Reader, size int64, m } req.ContentLength = size - if len(mediaType) == 0 { + if mediaType == "" { mediaType = defaultMediaType } - req.Header.Add("Content-Type", mediaType) - req.Header.Add("Accept", mediaTypeV3) - req.Header.Add("User-Agent", c.UserAgent) + req.Header.Set("Content-Type", mediaType) + req.Header.Set("Accept", mediaTypeV3) + req.Header.Set("User-Agent", c.UserAgent) return req, nil } -// Response is a GitHub API response. This wraps the standard http.Response +// Response is a GitHub API response. This wraps the standard http.Response // returned from GitHub and provides convenient access to things like // pagination links. type Response struct { *http.Response // These fields provide the page values for paginating through a set of - // results. 
Any or all of these may be set to the zero value for + // results. Any or all of these may be set to the zero value for // responses that are not part of a paginated set, or for which there // are no additional pages. @@ -304,34 +380,56 @@ func parseRate(r *http.Response) Rate { return rate } -// Rate specifies the current rate limit for the client as determined by the -// most recent API call. If the client is used in a multi-user application, -// this rate may not always be up-to-date. Call RateLimits() to check the -// current rate. -func (c *Client) Rate() Rate { - c.rateMu.Lock() - rate := c.rate - c.rateMu.Unlock() - return rate -} - -// Do sends an API request and returns the API response. The API response is +// Do sends an API request and returns the API response. The API response is // JSON decoded and stored in the value pointed to by v, or returned as an -// error if an API error has occurred. If v implements the io.Writer +// error if an API error has occurred. If v implements the io.Writer // interface, the raw response body will be written to v, without attempting to -// first decode it. -func (c *Client) Do(req *http.Request, v interface{}) (*Response, error) { - resp, err := c.client.Do(req) - if err != nil { +// first decode it. If rate limit is exceeded and reset time is in the future, +// Do returns *RateLimitError immediately without making a network API call. +// +// The provided ctx must be non-nil. If it is canceled or times out, +// ctx.Err() will be returned. +func (c *Client) Do(ctx context.Context, req *http.Request, v interface{}) (*Response, error) { + req = req.WithContext(ctx) + + rateLimitCategory := category(req.URL.Path) + + // If we've hit rate limit, don't make further requests before Reset time. + if err := c.checkRateLimitBeforeDo(req, rateLimitCategory); err != nil { return nil, err } - defer resp.Body.Close() + resp, err := c.client.Do(req) + if err != nil { + // If we got an error, and the context has been canceled, + // the context's error is probably more useful. + select { + case <-ctx.Done(): + return nil, ctx.Err() + default: + } + + // If the error type is *url.Error, sanitize its URL before returning. + if e, ok := err.(*url.Error); ok { + if url, err := url.Parse(e.URL); err == nil { + e.URL = sanitizeURL(url).String() + return nil, e + } + } + + return nil, err + } + + defer func() { + // Drain up to 512 bytes and close the body to let the Transport reuse the connection + io.CopyN(ioutil.Discard, resp.Body, 512) + resp.Body.Close() + }() response := newResponse(resp) c.rateMu.Lock() - c.rate = response.Rate + c.rateLimits[rateLimitCategory] = response.Rate c.rateMu.Unlock() err = CheckResponse(resp) @@ -351,18 +449,57 @@ func (c *Client) Do(req *http.Request, v interface{}) (*Response, error) { } } } + return response, err } +// checkRateLimitBeforeDo does not make any network calls, but uses existing knowledge from +// current client state in order to quickly check if *RateLimitError can be immediately returned +// from Client.Do, and if so, returns it so that Client.Do can skip making a network API call unnecessarily. +// Otherwise it returns nil, and Client.Do should proceed normally. +func (c *Client) checkRateLimitBeforeDo(req *http.Request, rateLimitCategory rateLimitCategory) error { + c.rateMu.Lock() + rate := c.rateLimits[rateLimitCategory] + c.rateMu.Unlock() + if !rate.Reset.Time.IsZero() && rate.Remaining == 0 && time.Now().Before(rate.Reset.Time) { + // Create a fake response. 
+ resp := &http.Response{ + Status: http.StatusText(http.StatusForbidden), + StatusCode: http.StatusForbidden, + Request: req, + Header: make(http.Header), + Body: ioutil.NopCloser(strings.NewReader("")), + } + return &RateLimitError{ + Rate: rate, + Response: resp, + Message: fmt.Sprintf("API rate limit of %v still exceeded until %v, not making remote request.", rate.Limit, rate.Reset.Time), + } + } + + return nil +} + /* An ErrorResponse reports one or more errors caused by an API request. -GitHub API docs: http://developer.github.com/v3/#client-errors +GitHub API docs: https://developer.github.com/v3/#client-errors */ type ErrorResponse struct { Response *http.Response // HTTP response that caused this error Message string `json:"message"` // error message Errors []Error `json:"errors"` // more detail on individual errors + // Block is only populated on certain types of errors such as code 451. + // See https://developer.github.com/changes/2016-03-17-the-451-status-code-is-now-supported/ + // for more information. + Block *struct { + Reason string `json:"reason,omitempty"` + CreatedAt *Timestamp `json:"created_at,omitempty"` + } `json:"block,omitempty"` + // Most errors will also include a documentation_url field pointing + // to some content that might help you resolve the error, see + // https://developer.github.com/v3/#client-errors + DocumentationURL string `json:"documentation_url,omitempty"` } func (r *ErrorResponse) Error() string { @@ -372,7 +509,7 @@ func (r *ErrorResponse) Error() string { } // TwoFactorAuthError occurs when using HTTP Basic Authentication for a user -// that has two-factor authentication enabled. The request can be reattempted +// that has two-factor authentication enabled. The request can be reattempted // by providing a one-time password in the request. type TwoFactorAuthError ErrorResponse @@ -392,8 +529,38 @@ func (r *RateLimitError) Error() string { r.Response.StatusCode, r.Message, r.Rate.Reset.Time.Sub(time.Now())) } +// AcceptedError occurs when GitHub returns 202 Accepted response with an +// empty body, which means a job was scheduled on the GitHub side to process +// the information needed and cache it. +// Technically, 202 Accepted is not a real error, it's just used to +// indicate that results are not ready yet, but should be available soon. +// The request can be repeated after some time. +type AcceptedError struct{} + +func (*AcceptedError) Error() string { + return "job scheduled on GitHub side; try again later" +} + +// AbuseRateLimitError occurs when GitHub returns 403 Forbidden response with the +// "documentation_url" field value equal to "https://developer.github.com/v3#abuse-rate-limits". +type AbuseRateLimitError struct { + Response *http.Response // HTTP response that caused this error + Message string `json:"message"` // error message + + // RetryAfter is provided with some abuse rate limit errors. If present, + // it is the amount of time that the client should wait before retrying. + // Otherwise, the client should try again later (after an unspecified amount of time). + RetryAfter *time.Duration +} + +func (r *AbuseRateLimitError) Error() string { + return fmt.Sprintf("%v %v: %d %v", + r.Response.Request.Method, sanitizeURL(r.Response.Request.URL), + r.Response.StatusCode, r.Message) +} + // sanitizeURL redacts the client_secret parameter from the URL which may be -// exposed to the user, specifically in the ErrorResponse error message. +// exposed to the user. 
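+// For example, "?client_id=me&client_secret=abc123" is rewritten so that the
+// client_secret value no longer appears in error messages.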
func sanitizeURL(uri *url.URL) *url.URL {
	if uri == nil {
		return nil
	}
@@ -418,13 +585,17 @@ These are the possible validation error codes:
        the formatting of a field is invalid
    already_exists:
        another resource has the same value as this field
+    custom:
+        some resources return this (e.g. github.User.CreateKey()), additional
+        information is set in the Message field of the Error

-GitHub API docs: http://developer.github.com/v3/#client-errors
+GitHub API docs: https://developer.github.com/v3/#client-errors
*/
type Error struct {
	Resource string `json:"resource"` // resource on which the error occurred
	Field    string `json:"field"`    // field on which the error occurred
	Code     string `json:"code"`     // validation error code
+	Message string `json:"message"` // Message describing the error. Errors with Code == "custom" will always have this set.
}

func (e *Error) Error() string {
@@ -433,14 +604,19 @@
}

// CheckResponse checks the API response for errors, and returns them if
-// present.  A response is considered an error if it has a status code outside
-// the 200 range.  API error responses are expected to have either no response
-// body, or a JSON response body that maps to ErrorResponse. Any other
+// present. A response is considered an error if it has a status code outside
+// the 200 range or equal to 202 Accepted.
+// API error responses are expected to have either no response
+// body, or a JSON response body that maps to ErrorResponse. Any other
// response body will be silently ignored.
//
// The error type will be *RateLimitError for rate limit exceeded errors,
+// *AcceptedError for 202 Accepted status codes,
// and *TwoFactorAuthError for two-factor authentication errors.
func CheckResponse(r *http.Response) error {
+	if r.StatusCode == http.StatusAccepted {
+		return &AcceptedError{}
+	}
	if c := r.StatusCode; 200 <= c && c <= 299 {
		return nil
	}
@@ -458,6 +634,20 @@
			Response: errorResponse.Response,
			Message:  errorResponse.Message,
		}
+	case r.StatusCode == http.StatusForbidden && errorResponse.DocumentationURL == "https://developer.github.com/v3#abuse-rate-limits":
+		abuseRateLimitError := &AbuseRateLimitError{
+			Response: errorResponse.Response,
+			Message:  errorResponse.Message,
+		}
+		if v := r.Header["Retry-After"]; len(v) > 0 {
+			// According to GitHub support, the "Retry-After" header value will be
+			// an integer which represents the number of seconds that one should
+			// wait before resuming making requests.
+			retryAfterSeconds, _ := strconv.ParseInt(v[0], 10, 64) // Error handling is noop.
+			retryAfter := time.Duration(retryAfterSeconds) * time.Second
+			abuseRateLimitError.RetryAfter = &retryAfter
+		}
+		return abuseRateLimitError
	default:
		return errorResponse
	}
@@ -466,15 +656,15 @@
// parseBoolResponse determines the boolean result from a GitHub API response.
// Several GitHub API methods return boolean responses indicated by the HTTP
// status code in the response (true indicated by a 204, false indicated by a
-// 404).  This helper function will determine that result and hide the 404
-// error if present.  Any other error will be returned through as-is.
+// 404). This helper function will determine that result and hide the 404
+// error if present. Any other error will be returned through as-is.
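+//
+// A typical caller follows the pattern used by IsAssignee in
+// issues_assignees.go later in this diff (sketch):
+//
+//	resp, err := s.client.Do(ctx, req, nil)
+//	ok, err := parseBoolResponse(err)
+//	return ok, resp, err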
func parseBoolResponse(err error) (bool, error) {
	if err == nil {
		return true, nil
	}

	if err, ok := err.(*ErrorResponse); ok && err.Response.StatusCode == http.StatusNotFound {
-		// Simply false.  In this one case, we do not pass the error through.
+		// Simply false. In this one case, we do not pass the error through.
		return false, nil
	}

@@ -500,14 +690,16 @@
func (r Rate) String() string {
	return Stringify(r)
}

// RateLimits represents the rate limits for the current client.
type RateLimits struct {
-	// The rate limit for non-search API requests.  Unauthenticated
-	// requests are limited to 60 per hour.  Authenticated requests are
+	// The rate limit for non-search API requests. Unauthenticated
+	// requests are limited to 60 per hour. Authenticated requests are
	// limited to 5,000 per hour.
+	//
+	// GitHub API docs: https://developer.github.com/v3/#rate-limiting
	Core *Rate `json:"core"`

-	// The rate limit for search API requests.  Unauthenticated requests
-	// are limited to 5 requests per minutes.  Authenticated requests are
-	// limited to 20 per minute.
+	// The rate limit for search API requests. Unauthenticated requests
+	// are limited to 10 requests per minute. Authenticated requests are
+	// limited to 30 per minute.
	//
	// GitHub API docs: https://developer.github.com/v3/search/#rate-limit
	Search *Rate `json:"search"`
@@ -517,18 +709,27 @@
func (r RateLimits) String() string {
	return Stringify(r)
}

-// Deprecated: RateLimit is deprecated, use RateLimits instead.
-func (c *Client) RateLimit() (*Rate, *Response, error) {
-	limits, resp, err := c.RateLimits()
-	if limits == nil {
-		return nil, nil, err
-	}
+type rateLimitCategory uint8

-	return limits.Core, resp, err
+const (
+	coreCategory rateLimitCategory = iota
+	searchCategory
+
+	categories // An array of this length will be able to contain all rate limit categories.
+)
+
+// category returns the rate limit category of the endpoint, determined by Request.URL.Path.
+func category(path string) rateLimitCategory {
+	switch {
+	default:
+		return coreCategory
+	case strings.HasPrefix(path, "/search/"):
+		return searchCategory
+	}
}

// RateLimits returns the rate limits for the current client.
-func (c *Client) RateLimits() (*RateLimits, *Response, error) {
+func (c *Client) RateLimits(ctx context.Context) (*RateLimits, *Response, error) {
	req, err := c.NewRequest("GET", "rate_limit", nil)
	if err != nil {
		return nil, nil, err
@@ -537,12 +738,23 @@
	response := new(struct {
		Resources *RateLimits `json:"resources"`
	})
-	resp, err := c.Do(req, response)
+	resp, err := c.Do(ctx, req, response)
	if err != nil {
		return nil, nil, err
	}

-	return response.Resources, resp, err
+	if response.Resources != nil {
+		c.rateMu.Lock()
+		if response.Resources.Core != nil {
+			c.rateLimits[coreCategory] = *response.Resources.Core
+		}
+		if response.Resources.Search != nil {
+			c.rateLimits[searchCategory] = *response.Resources.Search
+		}
+		c.rateMu.Unlock()
+	}
+
+	return response.Resources, resp, nil
}

/*
@@ -558,7 +770,7 @@
that need to use a higher rate limit associated with your OAuth application.

This will append the querystring params client_id=xxx&client_secret=yyy to all
requests.

-See http://developer.github.com/v3/#unauthenticated-rate-limited-requests for
+See https://developer.github.com/v3/#unauthenticated-rate-limited-requests for
more information.
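+
+A minimal wiring sketch (the ClientID/ClientSecret fields and the Client()
+helper are assumptions based on the client_id/client_secret behavior described
+above; the values are placeholders):
+
+	t := &github.UnauthenticatedRateLimitedTransport{
+		ClientID:     "your-client-id",
+		ClientSecret: "your-client-secret",
+	}
+	client := github.NewClient(t.Client())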
*/ type UnauthenticatedRateLimitedTransport struct { @@ -612,7 +824,7 @@ func (t *UnauthenticatedRateLimitedTransport) transport() http.RoundTripper { } // BasicAuthTransport is an http.RoundTripper that authenticates all requests -// using HTTP Basic Authentication with the provided username and password. It +// using HTTP Basic Authentication with the provided username and password. It // additionally supports users who have two-factor authentication enabled on // their GitHub account. type BasicAuthTransport struct { @@ -630,7 +842,7 @@ func (t *BasicAuthTransport) RoundTrip(req *http.Request) (*http.Response, error req = cloneRequest(req) // per RoundTrip contract req.SetBasicAuth(t.Username, t.Password) if t.OTP != "" { - req.Header.Add(headerOTP, t.OTP) + req.Header.Set(headerOTP, t.OTP) } return t.transport().RoundTrip(req) } @@ -664,25 +876,12 @@ func cloneRequest(r *http.Request) *http.Request { // Bool is a helper routine that allocates a new bool value // to store v and returns a pointer to it. -func Bool(v bool) *bool { - p := new(bool) - *p = v - return p -} +func Bool(v bool) *bool { return &v } -// Int is a helper routine that allocates a new int32 value -// to store v and returns a pointer to it, but unlike Int32 -// its argument value is an int. -func Int(v int) *int { - p := new(int) - *p = v - return p -} +// Int is a helper routine that allocates a new int value +// to store v and returns a pointer to it. +func Int(v int) *int { return &v } // String is a helper routine that allocates a new string value // to store v and returns a pointer to it. -func String(v string) *string { - p := new(string) - *p = v - return p -} +func String(v string) *string { return &v } diff --git a/vendor/github.com/google/go-github/github/gitignore.go b/vendor/github.com/google/go-github/github/gitignore.go index 31d5902559..2f691bc323 100644 --- a/vendor/github.com/google/go-github/github/gitignore.go +++ b/vendor/github.com/google/go-github/github/gitignore.go @@ -5,15 +5,16 @@ package github -import "fmt" +import ( + "context" + "fmt" +) // GitignoresService provides access to the gitignore related functions in the // GitHub API. // -// GitHub API docs: http://developer.github.com/v3/gitignore/ -type GitignoresService struct { - client *Client -} +// GitHub API docs: https://developer.github.com/v3/gitignore/ +type GitignoresService service // Gitignore represents a .gitignore file as returned by the GitHub API. type Gitignore struct { @@ -27,26 +28,26 @@ func (g Gitignore) String() string { // List all available Gitignore templates. // -// http://developer.github.com/v3/gitignore/#listing-available-templates -func (s GitignoresService) List() ([]string, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/gitignore/#listing-available-templates +func (s GitignoresService) List(ctx context.Context) ([]string, *Response, error) { req, err := s.client.NewRequest("GET", "gitignore/templates", nil) if err != nil { return nil, nil, err } - availableTemplates := new([]string) - resp, err := s.client.Do(req, availableTemplates) + var availableTemplates []string + resp, err := s.client.Do(ctx, req, &availableTemplates) if err != nil { return nil, resp, err } - return *availableTemplates, resp, err + return availableTemplates, resp, nil } // Get a Gitignore by name. 
// -// http://developer.github.com/v3/gitignore/#get-a-single-template -func (s GitignoresService) Get(name string) (*Gitignore, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/gitignore/#get-a-single-template +func (s GitignoresService) Get(ctx context.Context, name string) (*Gitignore, *Response, error) { u := fmt.Sprintf("gitignore/templates/%v", name) req, err := s.client.NewRequest("GET", u, nil) if err != nil { @@ -54,10 +55,10 @@ func (s GitignoresService) Get(name string) (*Gitignore, *Response, error) { } gitignore := new(Gitignore) - resp, err := s.client.Do(req, gitignore) + resp, err := s.client.Do(ctx, req, gitignore) if err != nil { return nil, resp, err } - return gitignore, resp, err + return gitignore, resp, nil } diff --git a/vendor/github.com/google/go-github/github/integration.go b/vendor/github.com/google/go-github/github/integration.go new file mode 100644 index 0000000000..6d74e44f00 --- /dev/null +++ b/vendor/github.com/google/go-github/github/integration.go @@ -0,0 +1,40 @@ +// Copyright 2016 The go-github AUTHORS. All rights reserved. +// +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package github + +import "context" + +// IntegrationsService provides access to the installation related functions +// in the GitHub API. +// +// GitHub API docs: https://developer.github.com/v3/integrations/ +type IntegrationsService service + +// ListInstallations lists the installations that the current integration has. +// +// GitHub API docs: https://developer.github.com/v3/integrations/#find-installations +func (s *IntegrationsService) ListInstallations(ctx context.Context, opt *ListOptions) ([]*Installation, *Response, error) { + u, err := addOptions("integration/installations", opt) + if err != nil { + return nil, nil, err + } + + req, err := s.client.NewRequest("GET", u, nil) + if err != nil { + return nil, nil, err + } + + // TODO: remove custom Accept header when this API fully launches. + req.Header.Set("Accept", mediaTypeIntegrationPreview) + + var i []*Installation + resp, err := s.client.Do(ctx, req, &i) + if err != nil { + return nil, resp, err + } + + return i, resp, nil +} diff --git a/vendor/github.com/google/go-github/github/integration_installation.go b/vendor/github.com/google/go-github/github/integration_installation.go new file mode 100644 index 0000000000..933106400b --- /dev/null +++ b/vendor/github.com/google/go-github/github/integration_installation.go @@ -0,0 +1,48 @@ +// Copyright 2016 The go-github AUTHORS. All rights reserved. +// +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package github + +import "context" + +// Installation represents a GitHub integration installation. +type Installation struct { + ID *int `json:"id,omitempty"` + Account *User `json:"account,omitempty"` + AccessTokensURL *string `json:"access_tokens_url,omitempty"` + RepositoriesURL *string `json:"repositories_url,omitempty"` +} + +func (i Installation) String() string { + return Stringify(i) +} + +// ListRepos lists the repositories that the current installation has access to. 
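+//
+// Results are paginated. A sketch of walking every page using the ListOptions
+// and Response types from this package (the NextPage field is an assumption
+// from the Response pagination docs; the page size is arbitrary):
+//
+//	opt := &github.ListOptions{PerPage: 50}
+//	var all []*github.Repository
+//	for {
+//		repos, resp, err := client.Integrations.ListRepos(ctx, opt)
+//		if err != nil {
+//			break
+//		}
+//		all = append(all, repos...)
+//		if resp.NextPage == 0 {
+//			break
+//		}
+//		opt.Page = resp.NextPage
+//	}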
+// +// GitHub API docs: https://developer.github.com/v3/integrations/installations/#list-repositories +func (s *IntegrationsService) ListRepos(ctx context.Context, opt *ListOptions) ([]*Repository, *Response, error) { + u, err := addOptions("installation/repositories", opt) + if err != nil { + return nil, nil, err + } + + req, err := s.client.NewRequest("GET", u, nil) + if err != nil { + return nil, nil, err + } + + // TODO: remove custom Accept header when this API fully launches. + req.Header.Set("Accept", mediaTypeIntegrationPreview) + + var r struct { + Repositories []*Repository `json:"repositories"` + } + resp, err := s.client.Do(ctx, req, &r) + if err != nil { + return nil, resp, err + } + + return r.Repositories, resp, nil +} diff --git a/vendor/github.com/google/go-github/github/issues.go b/vendor/github.com/google/go-github/github/issues.go index 58a3a69816..b437d5063a 100644 --- a/vendor/github.com/google/go-github/github/issues.go +++ b/vendor/github.com/google/go-github/github/issues.go @@ -6,6 +6,7 @@ package github import ( + "context" "fmt" "time" ) @@ -13,15 +14,20 @@ import ( // IssuesService handles communication with the issue related // methods of the GitHub API. // -// GitHub API docs: http://developer.github.com/v3/issues/ -type IssuesService struct { - client *Client -} +// GitHub API docs: https://developer.github.com/v3/issues/ +type IssuesService service // Issue represents a GitHub issue on a repository. +// +// Note: As far as the GitHub API is concerned, every pull request is an issue, +// but not every issue is a pull request. Some endpoints, events, and webhooks +// may also return pull requests via this struct. If PullRequestLinks is nil, +// this is an issue, and if PullRequestLinks is not nil, this is a pull request. type Issue struct { + ID *int `json:"id,omitempty"` Number *int `json:"number,omitempty"` State *string `json:"state,omitempty"` + Locked *bool `json:"locked,omitempty"` Title *string `json:"title,omitempty"` Body *string `json:"body,omitempty"` User *User `json:"user,omitempty"` @@ -31,11 +37,14 @@ type Issue struct { ClosedAt *time.Time `json:"closed_at,omitempty"` CreatedAt *time.Time `json:"created_at,omitempty"` UpdatedAt *time.Time `json:"updated_at,omitempty"` + ClosedBy *User `json:"closed_by,omitempty"` URL *string `json:"url,omitempty"` HTMLURL *string `json:"html_url,omitempty"` Milestone *Milestone `json:"milestone,omitempty"` PullRequestLinks *PullRequestLinks `json:"pull_request,omitempty"` Repository *Repository `json:"repository,omitempty"` + Reactions *Reactions `json:"reactions,omitempty"` + Assignees []*User `json:"assignees,omitempty"` // TextMatches is only populated from search results that request text matches // See: search.go and https://developer.github.com/v3/search/#text-match-metadata @@ -56,27 +65,28 @@ type IssueRequest struct { Assignee *string `json:"assignee,omitempty"` State *string `json:"state,omitempty"` Milestone *int `json:"milestone,omitempty"` + Assignees *[]string `json:"assignees,omitempty"` } // IssueListOptions specifies the optional parameters to the IssuesService.List // and IssuesService.ListByOrg methods. type IssueListOptions struct { - // Filter specifies which issues to list. Possible values are: assigned, - // created, mentioned, subscribed, all. Default is "assigned". + // Filter specifies which issues to list. Possible values are: assigned, + // created, mentioned, subscribed, all. Default is "assigned". 
Filter string `url:"filter,omitempty"` - // State filters issues based on their state. Possible values are: open, - // closed, all. Default is "open". + // State filters issues based on their state. Possible values are: open, + // closed, all. Default is "open". State string `url:"state,omitempty"` // Labels filters issues based on their label. Labels []string `url:"labels,comma,omitempty"` - // Sort specifies how to sort issues. Possible values are: created, updated, - // and comments. Default value is "created". + // Sort specifies how to sort issues. Possible values are: created, updated, + // and comments. Default value is "created". Sort string `url:"sort,omitempty"` - // Direction in which to sort issues. Possible values are: asc, desc. + // Direction in which to sort issues. Possible values are: asc, desc. // Default is "desc". Direction string `url:"direction,omitempty"` @@ -95,32 +105,32 @@ type PullRequestLinks struct { PatchURL *string `json:"patch_url,omitempty"` } -// List the issues for the authenticated user. If all is true, list issues +// List the issues for the authenticated user. If all is true, list issues // across all the user's visible repositories including owned, member, and // organization repositories; if false, list only owned and member // repositories. // -// GitHub API docs: http://developer.github.com/v3/issues/#list-issues -func (s *IssuesService) List(all bool, opt *IssueListOptions) ([]Issue, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/issues/#list-issues +func (s *IssuesService) List(ctx context.Context, all bool, opt *IssueListOptions) ([]*Issue, *Response, error) { var u string if all { u = "issues" } else { u = "user/issues" } - return s.listIssues(u, opt) + return s.listIssues(ctx, u, opt) } // ListByOrg fetches the issues in the specified organization for the // authenticated user. // -// GitHub API docs: http://developer.github.com/v3/issues/#list-issues -func (s *IssuesService) ListByOrg(org string, opt *IssueListOptions) ([]Issue, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/issues/#list-issues +func (s *IssuesService) ListByOrg(ctx context.Context, org string, opt *IssueListOptions) ([]*Issue, *Response, error) { u := fmt.Sprintf("orgs/%v/issues", org) - return s.listIssues(u, opt) + return s.listIssues(ctx, u, opt) } -func (s *IssuesService) listIssues(u string, opt *IssueListOptions) ([]Issue, *Response, error) { +func (s *IssuesService) listIssues(ctx context.Context, u string, opt *IssueListOptions) ([]*Issue, *Response, error) { u, err := addOptions(u, opt) if err != nil { return nil, nil, err @@ -131,28 +141,31 @@ func (s *IssuesService) listIssues(u string, opt *IssueListOptions) ([]Issue, *R return nil, nil, err } - issues := new([]Issue) - resp, err := s.client.Do(req, issues) + // TODO: remove custom Accept header when this API fully launches. + req.Header.Set("Accept", mediaTypeReactionsPreview) + + var issues []*Issue + resp, err := s.client.Do(ctx, req, &issues) if err != nil { return nil, resp, err } - return *issues, resp, err + return issues, resp, nil } // IssueListByRepoOptions specifies the optional parameters to the // IssuesService.ListByRepo method. type IssueListByRepoOptions struct { - // Milestone limits issues for the specified milestone. Possible values are + // Milestone limits issues for the specified milestone. Possible values are // a milestone number, "none" for issues with no milestone, "*" for issues // with any milestone. 
Milestone string `url:"milestone,omitempty"` - // State filters issues based on their state. Possible values are: open, - // closed, all. Default is "open". + // State filters issues based on their state. Possible values are: open, + // closed, all. Default is "open". State string `url:"state,omitempty"` - // Assignee filters issues based on their assignee. Possible values are a + // Assignee filters issues based on their assignee. Possible values are a // user name, "none" for issues that are not assigned, "*" for issues with // any assigned user. Assignee string `url:"assignee,omitempty"` @@ -166,11 +179,11 @@ type IssueListByRepoOptions struct { // Labels filters issues based on their label. Labels []string `url:"labels,omitempty,comma"` - // Sort specifies how to sort issues. Possible values are: created, updated, - // and comments. Default value is "created". + // Sort specifies how to sort issues. Possible values are: created, updated, + // and comments. Default value is "created". Sort string `url:"sort,omitempty"` - // Direction in which to sort issues. Possible values are: asc, desc. + // Direction in which to sort issues. Possible values are: asc, desc. // Default is "desc". Direction string `url:"direction,omitempty"` @@ -182,8 +195,8 @@ type IssueListByRepoOptions struct { // ListByRepo lists the issues for the specified repository. // -// GitHub API docs: http://developer.github.com/v3/issues/#list-issues-for-a-repository -func (s *IssuesService) ListByRepo(owner string, repo string, opt *IssueListByRepoOptions) ([]Issue, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/issues/#list-issues-for-a-repository +func (s *IssuesService) ListByRepo(ctx context.Context, owner string, repo string, opt *IssueListByRepoOptions) ([]*Issue, *Response, error) { u := fmt.Sprintf("repos/%v/%v/issues", owner, repo) u, err := addOptions(u, opt) if err != nil { @@ -195,38 +208,44 @@ func (s *IssuesService) ListByRepo(owner string, repo string, opt *IssueListByRe return nil, nil, err } - issues := new([]Issue) - resp, err := s.client.Do(req, issues) + // TODO: remove custom Accept header when this API fully launches. + req.Header.Set("Accept", mediaTypeReactionsPreview) + + var issues []*Issue + resp, err := s.client.Do(ctx, req, &issues) if err != nil { return nil, resp, err } - return *issues, resp, err + return issues, resp, nil } // Get a single issue. // -// GitHub API docs: http://developer.github.com/v3/issues/#get-a-single-issue -func (s *IssuesService) Get(owner string, repo string, number int) (*Issue, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/issues/#get-a-single-issue +func (s *IssuesService) Get(ctx context.Context, owner string, repo string, number int) (*Issue, *Response, error) { u := fmt.Sprintf("repos/%v/%v/issues/%d", owner, repo, number) req, err := s.client.NewRequest("GET", u, nil) if err != nil { return nil, nil, err } + // TODO: remove custom Accept header when this API fully launches. + req.Header.Set("Accept", mediaTypeReactionsPreview) + issue := new(Issue) - resp, err := s.client.Do(req, issue) + resp, err := s.client.Do(ctx, req, issue) if err != nil { return nil, resp, err } - return issue, resp, err + return issue, resp, nil } // Create a new issue on the specified repository. 
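+//
+// A sketch using the pointer helpers defined in this package (the Title and
+// Body fields are assumptions about IssueRequest; the strings are
+// placeholders):
+//
+//	input := &github.IssueRequest{
+//		Title: github.String("found a bug"),
+//		Body:  github.String("steps to reproduce: ..."),
+//	}
+//	issue, _, err := client.Issues.Create(ctx, "owner", "repo", input)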
// -// GitHub API docs: http://developer.github.com/v3/issues/#create-an-issue -func (s *IssuesService) Create(owner string, repo string, issue *IssueRequest) (*Issue, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/issues/#create-an-issue +func (s *IssuesService) Create(ctx context.Context, owner string, repo string, issue *IssueRequest) (*Issue, *Response, error) { u := fmt.Sprintf("repos/%v/%v/issues", owner, repo) req, err := s.client.NewRequest("POST", u, issue) if err != nil { @@ -234,18 +253,18 @@ func (s *IssuesService) Create(owner string, repo string, issue *IssueRequest) ( } i := new(Issue) - resp, err := s.client.Do(req, i) + resp, err := s.client.Do(ctx, req, i) if err != nil { return nil, resp, err } - return i, resp, err + return i, resp, nil } // Edit an issue. // -// GitHub API docs: http://developer.github.com/v3/issues/#edit-an-issue -func (s *IssuesService) Edit(owner string, repo string, number int, issue *IssueRequest) (*Issue, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/issues/#edit-an-issue +func (s *IssuesService) Edit(ctx context.Context, owner string, repo string, number int, issue *IssueRequest) (*Issue, *Response, error) { u := fmt.Sprintf("repos/%v/%v/issues/%d", owner, repo, number) req, err := s.client.NewRequest("PATCH", u, issue) if err != nil { @@ -253,42 +272,36 @@ func (s *IssuesService) Edit(owner string, repo string, number int, issue *Issue } i := new(Issue) - resp, err := s.client.Do(req, i) + resp, err := s.client.Do(ctx, req, i) if err != nil { return nil, resp, err } - return i, resp, err + return i, resp, nil } // Lock an issue's conversation. // // GitHub API docs: https://developer.github.com/v3/issues/#lock-an-issue -func (s *IssuesService) Lock(owner string, repo string, number int) (*Response, error) { +func (s *IssuesService) Lock(ctx context.Context, owner string, repo string, number int) (*Response, error) { u := fmt.Sprintf("repos/%v/%v/issues/%d/lock", owner, repo, number) req, err := s.client.NewRequest("PUT", u, nil) if err != nil { return nil, err } - // TODO: remove custom Accept header when this API fully launches. - req.Header.Set("Accept", mediaTypeIssueLockingPreview) - - return s.client.Do(req, nil) + return s.client.Do(ctx, req, nil) } // Unlock an issue's conversation. // // GitHub API docs: https://developer.github.com/v3/issues/#unlock-an-issue -func (s *IssuesService) Unlock(owner string, repo string, number int) (*Response, error) { +func (s *IssuesService) Unlock(ctx context.Context, owner string, repo string, number int) (*Response, error) { u := fmt.Sprintf("repos/%v/%v/issues/%d/lock", owner, repo, number) req, err := s.client.NewRequest("DELETE", u, nil) if err != nil { return nil, err } - // TODO: remove custom Accept header when this API fully launches. - req.Header.Set("Accept", mediaTypeIssueLockingPreview) - - return s.client.Do(req, nil) + return s.client.Do(ctx, req, nil) } diff --git a/vendor/github.com/google/go-github/github/issues_assignees.go b/vendor/github.com/google/go-github/github/issues_assignees.go index 6338c22eca..9cb366f50a 100644 --- a/vendor/github.com/google/go-github/github/issues_assignees.go +++ b/vendor/github.com/google/go-github/github/issues_assignees.go @@ -5,13 +5,16 @@ package github -import "fmt" +import ( + "context" + "fmt" +) // ListAssignees fetches all available assignees (owners and collaborators) to // which issues may be assigned. 
// -// GitHub API docs: http://developer.github.com/v3/issues/assignees/#list-assignees -func (s *IssuesService) ListAssignees(owner string, repo string, opt *ListOptions) ([]User, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/issues/assignees/#list-assignees +func (s *IssuesService) ListAssignees(ctx context.Context, owner, repo string, opt *ListOptions) ([]*User, *Response, error) { u := fmt.Sprintf("repos/%v/%v/assignees", owner, repo) u, err := addOptions(u, opt) if err != nil { @@ -22,25 +25,61 @@ func (s *IssuesService) ListAssignees(owner string, repo string, opt *ListOption if err != nil { return nil, nil, err } - assignees := new([]User) - resp, err := s.client.Do(req, assignees) + var assignees []*User + resp, err := s.client.Do(ctx, req, &assignees) if err != nil { return nil, resp, err } - return *assignees, resp, err + return assignees, resp, nil } // IsAssignee checks if a user is an assignee for the specified repository. // -// GitHub API docs: http://developer.github.com/v3/issues/assignees/#check-assignee -func (s *IssuesService) IsAssignee(owner string, repo string, user string) (bool, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/issues/assignees/#check-assignee +func (s *IssuesService) IsAssignee(ctx context.Context, owner, repo, user string) (bool, *Response, error) { u := fmt.Sprintf("repos/%v/%v/assignees/%v", owner, repo, user) req, err := s.client.NewRequest("GET", u, nil) if err != nil { return false, nil, err } - resp, err := s.client.Do(req, nil) + resp, err := s.client.Do(ctx, req, nil) assignee, err := parseBoolResponse(err) return assignee, resp, err } + +// AddAssignees adds the provided GitHub users as assignees to the issue. +// +// GitHub API docs: https://developer.github.com/v3/issues/assignees/#add-assignees-to-an-issue +func (s *IssuesService) AddAssignees(ctx context.Context, owner, repo string, number int, assignees []string) (*Issue, *Response, error) { + users := &struct { + Assignees []string `json:"assignees,omitempty"` + }{Assignees: assignees} + u := fmt.Sprintf("repos/%v/%v/issues/%v/assignees", owner, repo, number) + req, err := s.client.NewRequest("POST", u, users) + if err != nil { + return nil, nil, err + } + + issue := &Issue{} + resp, err := s.client.Do(ctx, req, issue) + return issue, resp, err +} + +// RemoveAssignees removes the provided GitHub users as assignees from the issue. 
+// +// GitHub API docs: https://developer.github.com/v3/issues/assignees/#remove-assignees-from-an-issue +func (s *IssuesService) RemoveAssignees(ctx context.Context, owner, repo string, number int, assignees []string) (*Issue, *Response, error) { + users := &struct { + Assignees []string `json:"assignees,omitempty"` + }{Assignees: assignees} + u := fmt.Sprintf("repos/%v/%v/issues/%v/assignees", owner, repo, number) + req, err := s.client.NewRequest("DELETE", u, users) + if err != nil { + return nil, nil, err + } + + issue := &Issue{} + resp, err := s.client.Do(ctx, req, issue) + return issue, resp, err +} diff --git a/vendor/github.com/google/go-github/github/issues_comments.go b/vendor/github.com/google/go-github/github/issues_comments.go index db48e144f6..fd72657cd4 100644 --- a/vendor/github.com/google/go-github/github/issues_comments.go +++ b/vendor/github.com/google/go-github/github/issues_comments.go @@ -6,6 +6,7 @@ package github import ( + "context" "fmt" "time" ) @@ -15,6 +16,7 @@ type IssueComment struct { ID *int `json:"id,omitempty"` Body *string `json:"body,omitempty"` User *User `json:"user,omitempty"` + Reactions *Reactions `json:"reactions,omitempty"` CreatedAt *time.Time `json:"created_at,omitempty"` UpdatedAt *time.Time `json:"updated_at,omitempty"` URL *string `json:"url,omitempty"` @@ -29,10 +31,10 @@ func (i IssueComment) String() string { // IssueListCommentsOptions specifies the optional parameters to the // IssuesService.ListComments method. type IssueListCommentsOptions struct { - // Sort specifies how to sort comments. Possible values are: created, updated. + // Sort specifies how to sort comments. Possible values are: created, updated. Sort string `url:"sort,omitempty"` - // Direction in which to sort comments. Possible values are: asc, desc. + // Direction in which to sort comments. Possible values are: asc, desc. Direction string `url:"direction,omitempty"` // Since filters comments by time. @@ -41,11 +43,11 @@ type IssueListCommentsOptions struct { ListOptions } -// ListComments lists all comments on the specified issue. Specifying an issue +// ListComments lists all comments on the specified issue. Specifying an issue // number of 0 will return all comments on all issues for the repository. // -// GitHub API docs: http://developer.github.com/v3/issues/comments/#list-comments-on-an-issue -func (s *IssuesService) ListComments(owner string, repo string, number int, opt *IssueListCommentsOptions) ([]IssueComment, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/issues/comments/#list-comments-on-an-issue +func (s *IssuesService) ListComments(ctx context.Context, owner string, repo string, number int, opt *IssueListCommentsOptions) ([]*IssueComment, *Response, error) { var u string if number == 0 { u = fmt.Sprintf("repos/%v/%v/issues/comments", owner, repo) @@ -61,78 +63,86 @@ func (s *IssuesService) ListComments(owner string, repo string, number int, opt if err != nil { return nil, nil, err } - comments := new([]IssueComment) - resp, err := s.client.Do(req, comments) + + // TODO: remove custom Accept header when this API fully launches. + req.Header.Set("Accept", mediaTypeReactionsPreview) + + var comments []*IssueComment + resp, err := s.client.Do(ctx, req, &comments) if err != nil { return nil, resp, err } - return *comments, resp, err + return comments, resp, nil } // GetComment fetches the specified issue comment. 
// -// GitHub API docs: http://developer.github.com/v3/issues/comments/#get-a-single-comment -func (s *IssuesService) GetComment(owner string, repo string, id int) (*IssueComment, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/issues/comments/#get-a-single-comment +func (s *IssuesService) GetComment(ctx context.Context, owner string, repo string, id int) (*IssueComment, *Response, error) { u := fmt.Sprintf("repos/%v/%v/issues/comments/%d", owner, repo, id) req, err := s.client.NewRequest("GET", u, nil) if err != nil { return nil, nil, err } + + // TODO: remove custom Accept header when this API fully launches. + req.Header.Set("Accept", mediaTypeReactionsPreview) + comment := new(IssueComment) - resp, err := s.client.Do(req, comment) + resp, err := s.client.Do(ctx, req, comment) if err != nil { return nil, resp, err } - return comment, resp, err + return comment, resp, nil } // CreateComment creates a new comment on the specified issue. // -// GitHub API docs: http://developer.github.com/v3/issues/comments/#create-a-comment -func (s *IssuesService) CreateComment(owner string, repo string, number int, comment *IssueComment) (*IssueComment, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/issues/comments/#create-a-comment +func (s *IssuesService) CreateComment(ctx context.Context, owner string, repo string, number int, comment *IssueComment) (*IssueComment, *Response, error) { u := fmt.Sprintf("repos/%v/%v/issues/%d/comments", owner, repo, number) req, err := s.client.NewRequest("POST", u, comment) if err != nil { return nil, nil, err } c := new(IssueComment) - resp, err := s.client.Do(req, c) + resp, err := s.client.Do(ctx, req, c) if err != nil { return nil, resp, err } - return c, resp, err + return c, resp, nil } // EditComment updates an issue comment. // -// GitHub API docs: http://developer.github.com/v3/issues/comments/#edit-a-comment -func (s *IssuesService) EditComment(owner string, repo string, id int, comment *IssueComment) (*IssueComment, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/issues/comments/#edit-a-comment +func (s *IssuesService) EditComment(ctx context.Context, owner string, repo string, id int, comment *IssueComment) (*IssueComment, *Response, error) { u := fmt.Sprintf("repos/%v/%v/issues/comments/%d", owner, repo, id) req, err := s.client.NewRequest("PATCH", u, comment) if err != nil { return nil, nil, err } c := new(IssueComment) - resp, err := s.client.Do(req, c) + resp, err := s.client.Do(ctx, req, c) if err != nil { return nil, resp, err } - return c, resp, err + return c, resp, nil } // DeleteComment deletes an issue comment. 
// -// GitHub API docs: http://developer.github.com/v3/issues/comments/#delete-a-comment -func (s *IssuesService) DeleteComment(owner string, repo string, id int) (*Response, error) { +// GitHub API docs: https://developer.github.com/v3/issues/comments/#delete-a-comment +func (s *IssuesService) DeleteComment(ctx context.Context, owner string, repo string, id int) (*Response, error) { u := fmt.Sprintf("repos/%v/%v/issues/comments/%d", owner, repo, id) req, err := s.client.NewRequest("DELETE", u, nil) if err != nil { return nil, err } - return s.client.Do(req, nil) + return s.client.Do(ctx, req, nil) } diff --git a/vendor/github.com/google/go-github/github/issues_events.go b/vendor/github.com/google/go-github/github/issues_events.go index 9062d4da1e..bede41901f 100644 --- a/vendor/github.com/google/go-github/github/issues_events.go +++ b/vendor/github.com/google/go-github/github/issues_events.go @@ -6,6 +6,7 @@ package github import ( + "context" "fmt" "time" ) @@ -18,7 +19,7 @@ type IssueEvent struct { // The User that generated this event. Actor *User `json:"actor,omitempty"` - // Event identifies the actual type of Event that occurred. Possible + // Event identifies the actual type of Event that occurred. Possible // values are: // // closed @@ -73,7 +74,7 @@ type IssueEvent struct { // ListIssueEvents lists events for the specified issue. // // GitHub API docs: https://developer.github.com/v3/issues/events/#list-events-for-an-issue -func (s *IssuesService) ListIssueEvents(owner, repo string, number int, opt *ListOptions) ([]IssueEvent, *Response, error) { +func (s *IssuesService) ListIssueEvents(ctx context.Context, owner, repo string, number int, opt *ListOptions) ([]*IssueEvent, *Response, error) { u := fmt.Sprintf("repos/%v/%v/issues/%v/events", owner, repo, number) u, err := addOptions(u, opt) if err != nil { @@ -85,19 +86,19 @@ func (s *IssuesService) ListIssueEvents(owner, repo string, number int, opt *Lis return nil, nil, err } - var events []IssueEvent - resp, err := s.client.Do(req, &events) + var events []*IssueEvent + resp, err := s.client.Do(ctx, req, &events) if err != nil { return nil, resp, err } - return events, resp, err + return events, resp, nil } // ListRepositoryEvents lists events for the specified repository. // // GitHub API docs: https://developer.github.com/v3/issues/events/#list-events-for-a-repository -func (s *IssuesService) ListRepositoryEvents(owner, repo string, opt *ListOptions) ([]IssueEvent, *Response, error) { +func (s *IssuesService) ListRepositoryEvents(ctx context.Context, owner, repo string, opt *ListOptions) ([]*IssueEvent, *Response, error) { u := fmt.Sprintf("repos/%v/%v/issues/events", owner, repo) u, err := addOptions(u, opt) if err != nil { @@ -109,19 +110,19 @@ func (s *IssuesService) ListRepositoryEvents(owner, repo string, opt *ListOption return nil, nil, err } - var events []IssueEvent - resp, err := s.client.Do(req, &events) + var events []*IssueEvent + resp, err := s.client.Do(ctx, req, &events) if err != nil { return nil, resp, err } - return events, resp, err + return events, resp, nil } // GetEvent returns the specified issue event. 
// // GitHub API docs: https://developer.github.com/v3/issues/events/#get-a-single-event -func (s *IssuesService) GetEvent(owner, repo string, id int) (*IssueEvent, *Response, error) { +func (s *IssuesService) GetEvent(ctx context.Context, owner, repo string, id int) (*IssueEvent, *Response, error) { u := fmt.Sprintf("repos/%v/%v/issues/events/%v", owner, repo, id) req, err := s.client.NewRequest("GET", u, nil) @@ -130,12 +131,12 @@ func (s *IssuesService) GetEvent(owner, repo string, id int) (*IssueEvent, *Resp } event := new(IssueEvent) - resp, err := s.client.Do(req, event) + resp, err := s.client.Do(ctx, req, event) if err != nil { return nil, resp, err } - return event, resp, err + return event, resp, nil } // Rename contains details for 'renamed' events. diff --git a/vendor/github.com/google/go-github/github/issues_labels.go b/vendor/github.com/google/go-github/github/issues_labels.go index 88f9f3ff96..5c0b821c31 100644 --- a/vendor/github.com/google/go-github/github/issues_labels.go +++ b/vendor/github.com/google/go-github/github/issues_labels.go @@ -5,7 +5,10 @@ package github -import "fmt" +import ( + "context" + "fmt" +) // Label represents a GitHub label on an Issue type Label struct { @@ -20,8 +23,8 @@ func (l Label) String() string { // ListLabels lists all labels for a repository. // -// GitHub API docs: http://developer.github.com/v3/issues/labels/#list-all-labels-for-this-repository -func (s *IssuesService) ListLabels(owner string, repo string, opt *ListOptions) ([]Label, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/issues/labels/#list-all-labels-for-this-repository +func (s *IssuesService) ListLabels(ctx context.Context, owner string, repo string, opt *ListOptions) ([]*Label, *Response, error) { u := fmt.Sprintf("repos/%v/%v/labels", owner, repo) u, err := addOptions(u, opt) if err != nil { @@ -33,19 +36,19 @@ func (s *IssuesService) ListLabels(owner string, repo string, opt *ListOptions) return nil, nil, err } - labels := new([]Label) - resp, err := s.client.Do(req, labels) + var labels []*Label + resp, err := s.client.Do(ctx, req, &labels) if err != nil { return nil, resp, err } - return *labels, resp, err + return labels, resp, nil } // GetLabel gets a single label. // -// GitHub API docs: http://developer.github.com/v3/issues/labels/#get-a-single-label -func (s *IssuesService) GetLabel(owner string, repo string, name string) (*Label, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/issues/labels/#get-a-single-label +func (s *IssuesService) GetLabel(ctx context.Context, owner string, repo string, name string) (*Label, *Response, error) { u := fmt.Sprintf("repos/%v/%v/labels/%v", owner, repo, name) req, err := s.client.NewRequest("GET", u, nil) if err != nil { @@ -53,18 +56,18 @@ func (s *IssuesService) GetLabel(owner string, repo string, name string) (*Label } label := new(Label) - resp, err := s.client.Do(req, label) + resp, err := s.client.Do(ctx, req, label) if err != nil { return nil, resp, err } - return label, resp, err + return label, resp, nil } // CreateLabel creates a new label on the specified repository. 
// -// GitHub API docs: http://developer.github.com/v3/issues/labels/#create-a-label -func (s *IssuesService) CreateLabel(owner string, repo string, label *Label) (*Label, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/issues/labels/#create-a-label +func (s *IssuesService) CreateLabel(ctx context.Context, owner string, repo string, label *Label) (*Label, *Response, error) { u := fmt.Sprintf("repos/%v/%v/labels", owner, repo) req, err := s.client.NewRequest("POST", u, label) if err != nil { @@ -72,18 +75,18 @@ func (s *IssuesService) CreateLabel(owner string, repo string, label *Label) (*L } l := new(Label) - resp, err := s.client.Do(req, l) + resp, err := s.client.Do(ctx, req, l) if err != nil { return nil, resp, err } - return l, resp, err + return l, resp, nil } // EditLabel edits a label. // -// GitHub API docs: http://developer.github.com/v3/issues/labels/#update-a-label -func (s *IssuesService) EditLabel(owner string, repo string, name string, label *Label) (*Label, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/issues/labels/#update-a-label +func (s *IssuesService) EditLabel(ctx context.Context, owner string, repo string, name string, label *Label) (*Label, *Response, error) { u := fmt.Sprintf("repos/%v/%v/labels/%v", owner, repo, name) req, err := s.client.NewRequest("PATCH", u, label) if err != nil { @@ -91,30 +94,30 @@ func (s *IssuesService) EditLabel(owner string, repo string, name string, label } l := new(Label) - resp, err := s.client.Do(req, l) + resp, err := s.client.Do(ctx, req, l) if err != nil { return nil, resp, err } - return l, resp, err + return l, resp, nil } // DeleteLabel deletes a label. // -// GitHub API docs: http://developer.github.com/v3/issues/labels/#delete-a-label -func (s *IssuesService) DeleteLabel(owner string, repo string, name string) (*Response, error) { +// GitHub API docs: https://developer.github.com/v3/issues/labels/#delete-a-label +func (s *IssuesService) DeleteLabel(ctx context.Context, owner string, repo string, name string) (*Response, error) { u := fmt.Sprintf("repos/%v/%v/labels/%v", owner, repo, name) req, err := s.client.NewRequest("DELETE", u, nil) if err != nil { return nil, err } - return s.client.Do(req, nil) + return s.client.Do(ctx, req, nil) } // ListLabelsByIssue lists all labels for an issue. // -// GitHub API docs: http://developer.github.com/v3/issues/labels/#list-all-labels-for-this-repository -func (s *IssuesService) ListLabelsByIssue(owner string, repo string, number int, opt *ListOptions) ([]Label, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/issues/labels/#list-labels-on-an-issue +func (s *IssuesService) ListLabelsByIssue(ctx context.Context, owner string, repo string, number int, opt *ListOptions) ([]*Label, *Response, error) { u := fmt.Sprintf("repos/%v/%v/issues/%d/labels", owner, repo, number) u, err := addOptions(u, opt) if err != nil { @@ -126,81 +129,81 @@ func (s *IssuesService) ListLabelsByIssue(owner string, repo string, number int, return nil, nil, err } - labels := new([]Label) - resp, err := s.client.Do(req, labels) + var labels []*Label + resp, err := s.client.Do(ctx, req, &labels) if err != nil { return nil, resp, err } - return *labels, resp, err + return labels, resp, nil } // AddLabelsToIssue adds labels to an issue. 
// -// GitHub API docs: http://developer.github.com/v3/issues/labels/#list-all-labels-for-this-repository -func (s *IssuesService) AddLabelsToIssue(owner string, repo string, number int, labels []string) ([]Label, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/issues/labels/#add-labels-to-an-issue +func (s *IssuesService) AddLabelsToIssue(ctx context.Context, owner string, repo string, number int, labels []string) ([]*Label, *Response, error) { u := fmt.Sprintf("repos/%v/%v/issues/%d/labels", owner, repo, number) req, err := s.client.NewRequest("POST", u, labels) if err != nil { return nil, nil, err } - l := new([]Label) - resp, err := s.client.Do(req, l) + var l []*Label + resp, err := s.client.Do(ctx, req, &l) if err != nil { return nil, resp, err } - return *l, resp, err + return l, resp, nil } // RemoveLabelForIssue removes a label for an issue. // -// GitHub API docs: http://developer.github.com/v3/issues/labels/#remove-a-label-from-an-issue -func (s *IssuesService) RemoveLabelForIssue(owner string, repo string, number int, label string) (*Response, error) { +// GitHub API docs: https://developer.github.com/v3/issues/labels/#remove-a-label-from-an-issue +func (s *IssuesService) RemoveLabelForIssue(ctx context.Context, owner string, repo string, number int, label string) (*Response, error) { u := fmt.Sprintf("repos/%v/%v/issues/%d/labels/%v", owner, repo, number, label) req, err := s.client.NewRequest("DELETE", u, nil) if err != nil { return nil, err } - return s.client.Do(req, nil) + return s.client.Do(ctx, req, nil) } // ReplaceLabelsForIssue replaces all labels for an issue. // -// GitHub API docs: http://developer.github.com/v3/issues/labels/#replace-all-labels-for-an-issue -func (s *IssuesService) ReplaceLabelsForIssue(owner string, repo string, number int, labels []string) ([]Label, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/issues/labels/#replace-all-labels-for-an-issue +func (s *IssuesService) ReplaceLabelsForIssue(ctx context.Context, owner string, repo string, number int, labels []string) ([]*Label, *Response, error) { u := fmt.Sprintf("repos/%v/%v/issues/%d/labels", owner, repo, number) req, err := s.client.NewRequest("PUT", u, labels) if err != nil { return nil, nil, err } - l := new([]Label) - resp, err := s.client.Do(req, l) + var l []*Label + resp, err := s.client.Do(ctx, req, &l) if err != nil { return nil, resp, err } - return *l, resp, err + return l, resp, nil } // RemoveLabelsForIssue removes all labels for an issue. // -// GitHub API docs: http://developer.github.com/v3/issues/labels/#remove-all-labels-from-an-issue -func (s *IssuesService) RemoveLabelsForIssue(owner string, repo string, number int) (*Response, error) { +// GitHub API docs: https://developer.github.com/v3/issues/labels/#remove-all-labels-from-an-issue +func (s *IssuesService) RemoveLabelsForIssue(ctx context.Context, owner string, repo string, number int) (*Response, error) { u := fmt.Sprintf("repos/%v/%v/issues/%d/labels", owner, repo, number) req, err := s.client.NewRequest("DELETE", u, nil) if err != nil { return nil, err } - return s.client.Do(req, nil) + return s.client.Do(ctx, req, nil) } // ListLabelsForMilestone lists labels for every issue in a milestone. 
// -// GitHub API docs: http://developer.github.com/v3/issues/labels/#get-labels-for-every-issue-in-a-milestone -func (s *IssuesService) ListLabelsForMilestone(owner string, repo string, number int, opt *ListOptions) ([]Label, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/issues/labels/#get-labels-for-every-issue-in-a-milestone +func (s *IssuesService) ListLabelsForMilestone(ctx context.Context, owner string, repo string, number int, opt *ListOptions) ([]*Label, *Response, error) { u := fmt.Sprintf("repos/%v/%v/milestones/%d/labels", owner, repo, number) u, err := addOptions(u, opt) if err != nil { @@ -212,11 +215,11 @@ func (s *IssuesService) ListLabelsForMilestone(owner string, repo string, number return nil, nil, err } - labels := new([]Label) - resp, err := s.client.Do(req, labels) + var labels []*Label + resp, err := s.client.Do(ctx, req, &labels) if err != nil { return nil, resp, err } - return *labels, resp, err + return labels, resp, nil } diff --git a/vendor/github.com/google/go-github/github/issues_milestones.go b/vendor/github.com/google/go-github/github/issues_milestones.go index cbd79200e1..bc89816a64 100644 --- a/vendor/github.com/google/go-github/github/issues_milestones.go +++ b/vendor/github.com/google/go-github/github/issues_milestones.go @@ -6,13 +6,17 @@ package github import ( + "context" "fmt" "time" ) -// Milestone represents a Github repository milestone. +// Milestone represents a GitHub repository milestone. type Milestone struct { URL *string `json:"url,omitempty"` + HTMLURL *string `json:"html_url,omitempty"` + LabelsURL *string `json:"labels_url,omitempty"` + ID *int `json:"id,omitempty"` Number *int `json:"number,omitempty"` State *string `json:"state,omitempty"` Title *string `json:"title,omitempty"` @@ -22,6 +26,7 @@ type Milestone struct { ClosedIssues *int `json:"closed_issues,omitempty"` CreatedAt *time.Time `json:"created_at,omitempty"` UpdatedAt *time.Time `json:"updated_at,omitempty"` + ClosedAt *time.Time `json:"closed_at,omitempty"` DueOn *time.Time `json:"due_on,omitempty"` } @@ -43,12 +48,14 @@ type MilestoneListOptions struct { // Direction in which to sort milestones. Possible values are: asc, desc. // Default is "asc". Direction string `url:"direction,omitempty"` + + ListOptions } // ListMilestones lists all milestones for a repository. // // GitHub API docs: https://developer.github.com/v3/issues/milestones/#list-milestones-for-a-repository -func (s *IssuesService) ListMilestones(owner string, repo string, opt *MilestoneListOptions) ([]Milestone, *Response, error) { +func (s *IssuesService) ListMilestones(ctx context.Context, owner string, repo string, opt *MilestoneListOptions) ([]*Milestone, *Response, error) { u := fmt.Sprintf("repos/%v/%v/milestones", owner, repo) u, err := addOptions(u, opt) if err != nil { @@ -60,19 +67,19 @@ func (s *IssuesService) ListMilestones(owner string, repo string, opt *Milestone return nil, nil, err } - milestones := new([]Milestone) - resp, err := s.client.Do(req, milestones) + var milestones []*Milestone + resp, err := s.client.Do(ctx, req, &milestones) if err != nil { return nil, resp, err } - return *milestones, resp, err + return milestones, resp, nil } // GetMilestone gets a single milestone. 
// // GitHub API docs: https://developer.github.com/v3/issues/milestones/#get-a-single-milestone -func (s *IssuesService) GetMilestone(owner string, repo string, number int) (*Milestone, *Response, error) { +func (s *IssuesService) GetMilestone(ctx context.Context, owner string, repo string, number int) (*Milestone, *Response, error) { u := fmt.Sprintf("repos/%v/%v/milestones/%d", owner, repo, number) req, err := s.client.NewRequest("GET", u, nil) if err != nil { @@ -80,18 +87,18 @@ func (s *IssuesService) GetMilestone(owner string, repo string, number int) (*Mi } milestone := new(Milestone) - resp, err := s.client.Do(req, milestone) + resp, err := s.client.Do(ctx, req, milestone) if err != nil { return nil, resp, err } - return milestone, resp, err + return milestone, resp, nil } // CreateMilestone creates a new milestone on the specified repository. // // GitHub API docs: https://developer.github.com/v3/issues/milestones/#create-a-milestone -func (s *IssuesService) CreateMilestone(owner string, repo string, milestone *Milestone) (*Milestone, *Response, error) { +func (s *IssuesService) CreateMilestone(ctx context.Context, owner string, repo string, milestone *Milestone) (*Milestone, *Response, error) { u := fmt.Sprintf("repos/%v/%v/milestones", owner, repo) req, err := s.client.NewRequest("POST", u, milestone) if err != nil { @@ -99,18 +106,18 @@ func (s *IssuesService) CreateMilestone(owner string, repo string, milestone *Mi } m := new(Milestone) - resp, err := s.client.Do(req, m) + resp, err := s.client.Do(ctx, req, m) if err != nil { return nil, resp, err } - return m, resp, err + return m, resp, nil } // EditMilestone edits a milestone. // // GitHub API docs: https://developer.github.com/v3/issues/milestones/#update-a-milestone -func (s *IssuesService) EditMilestone(owner string, repo string, number int, milestone *Milestone) (*Milestone, *Response, error) { +func (s *IssuesService) EditMilestone(ctx context.Context, owner string, repo string, number int, milestone *Milestone) (*Milestone, *Response, error) { u := fmt.Sprintf("repos/%v/%v/milestones/%d", owner, repo, number) req, err := s.client.NewRequest("PATCH", u, milestone) if err != nil { @@ -118,23 +125,23 @@ func (s *IssuesService) EditMilestone(owner string, repo string, number int, mil } m := new(Milestone) - resp, err := s.client.Do(req, m) + resp, err := s.client.Do(ctx, req, m) if err != nil { return nil, resp, err } - return m, resp, err + return m, resp, nil } // DeleteMilestone deletes a milestone. // // GitHub API docs: https://developer.github.com/v3/issues/milestones/#delete-a-milestone -func (s *IssuesService) DeleteMilestone(owner string, repo string, number int) (*Response, error) { +func (s *IssuesService) DeleteMilestone(ctx context.Context, owner string, repo string, number int) (*Response, error) { u := fmt.Sprintf("repos/%v/%v/milestones/%d", owner, repo, number) req, err := s.client.NewRequest("DELETE", u, nil) if err != nil { return nil, err } - return s.client.Do(req, nil) + return s.client.Do(ctx, req, nil) } diff --git a/vendor/github.com/google/go-github/github/issues_timeline.go b/vendor/github.com/google/go-github/github/issues_timeline.go new file mode 100644 index 0000000000..bc0b108990 --- /dev/null +++ b/vendor/github.com/google/go-github/github/issues_timeline.go @@ -0,0 +1,149 @@ +// Copyright 2016 The go-github AUTHORS. All rights reserved. +// +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. 
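Note for reviewers: the label and milestone methods above now take a leading context.Context and return slices of pointers ([]*Label, []*Milestone) instead of slices of values. A minimal caller sketch under the new signatures; the owner/repo values are illustrative and the unauthenticated client is only suitable for public data:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/google/go-github/github"
)

func main() {
	ctx := context.Background()
	client := github.NewClient(nil) // unauthenticated client; lower rate limits apply

	// ListLabels now takes a context and returns []*Label rather than []Label.
	labels, resp, err := client.Issues.ListLabels(ctx, "octocat", "Hello-World", &github.ListOptions{PerPage: 50})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("got %d labels; next page: %d\n", len(labels), resp.NextPage)
	for _, l := range labels {
		if l.Name != nil {
			fmt.Println(*l.Name)
		}
	}
}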
+ +package github + +import ( + "context" + "fmt" + "time" +) + +// Timeline represents an event that occurred around an Issue or Pull Request. +// +// It is similar to an IssueEvent but may contain more information. +// GitHub API docs: https://developer.github.com/v3/issues/timeline/ +type Timeline struct { + ID *int `json:"id,omitempty"` + URL *string `json:"url,omitempty"` + CommitURL *string `json:"commit_url,omitempty"` + + // The User object that generated the event. + Actor *User `json:"actor,omitempty"` + + // Event identifies the actual type of Event that occurred. Possible values + // are: + // + // assigned + // The issue was assigned to the assignee. + // + // closed + // The issue was closed by the actor. When the commit_id is present, it + // identifies the commit that closed the issue using "closes / fixes #NN" + // syntax. + // + // commented + // A comment was added to the issue. + // + // committed + // A commit was added to the pull request's 'HEAD' branch. Only provided + // for pull requests. + // + // cross-referenced + // The issue was referenced from another issue. The 'source' attribute + // contains the 'id', 'actor', and 'url' of the reference's source. + // + // demilestoned + // The issue was removed from a milestone. + // + // head_ref_deleted + // The pull request's branch was deleted. + // + // head_ref_restored + // The pull request's branch was restored. + // + // labeled + // A label was added to the issue. + // + // locked + // The issue was locked by the actor. + // + // mentioned + // The actor was @mentioned in an issue body. + // + // merged + // The issue was merged by the actor. The 'commit_id' attribute is the + // SHA1 of the HEAD commit that was merged. + // + // milestoned + // The issue was added to a milestone. + // + // referenced + // The issue was referenced from a commit message. The 'commit_id' + // attribute is the commit SHA1 of where that happened. + // + // renamed + // The issue title was changed. + // + // reopened + // The issue was reopened by the actor. + // + // subscribed + // The actor subscribed to receive notifications for an issue. + // + // unassigned + // The assignee was unassigned from the issue. + // + // unlabeled + // A label was removed from the issue. + // + // unlocked + // The issue was unlocked by the actor. + // + // unsubscribed + // The actor unsubscribed to stop receiving notifications for an issue. + // + Event *string `json:"event,omitempty"` + + // The string SHA of a commit that referenced this Issue or Pull Request. + CommitID *string `json:"commit_id,omitempty"` + // The timestamp indicating when the event occurred. + CreatedAt *time.Time `json:"created_at,omitempty"` + // The Label object including `name` and `color` attributes. Only provided for + // 'labeled' and 'unlabeled' events. + Label *Label `json:"label,omitempty"` + // The User object which was assigned to (or unassigned from) this Issue or + // Pull Request. Only provided for 'assigned' and 'unassigned' events. + Assignee *User `json:"assignee,omitempty"` + // The Milestone object including a 'title' attribute. + // Only provided for 'milestoned' and 'demilestoned' events. + Milestone *Milestone `json:"milestone,omitempty"` + // The 'id', 'actor', and 'url' for the source of a reference from another issue. + // Only provided for 'cross-referenced' events. + Source *Source `json:"source,omitempty"` + // An object containing rename details including 'from' and 'to' attributes. + // Only provided for 'renamed' events. 
+ Rename *Rename `json:"rename,omitempty"` +} + +// Source represents a reference's source. +type Source struct { + ID *int `json:"id,omitempty"` + URL *string `json:"url,omitempty"` + Actor *User `json:"actor,omitempty"` +} + +// ListIssueTimeline lists events for the specified issue. +// +// GitHub API docs: https://developer.github.com/v3/issues/timeline/#list-events-for-an-issue +func (s *IssuesService) ListIssueTimeline(ctx context.Context, owner, repo string, number int, opt *ListOptions) ([]*Timeline, *Response, error) { + u := fmt.Sprintf("repos/%v/%v/issues/%v/timeline", owner, repo, number) + u, err := addOptions(u, opt) + if err != nil { + return nil, nil, err + } + + req, err := s.client.NewRequest("GET", u, nil) + if err != nil { + return nil, nil, err + } + + // TODO: remove custom Accept header when this API fully launches. + req.Header.Set("Accept", mediaTypeTimelinePreview) + + var events []*Timeline + resp, err := s.client.Do(ctx, req, &events) + return events, resp, err +} diff --git a/vendor/github.com/google/go-github/github/licenses.go b/vendor/github.com/google/go-github/github/licenses.go index fb2fb5af2c..e9cd1777af 100644 --- a/vendor/github.com/google/go-github/github/licenses.go +++ b/vendor/github.com/google/go-github/github/licenses.go @@ -5,14 +5,36 @@ package github -import "fmt" +import ( + "context" + "fmt" +) // LicensesService handles communication with the license related // methods of the GitHub API. // -// GitHub API docs: http://developer.github.com/v3/pulls/ -type LicensesService struct { - client *Client +// GitHub API docs: https://developer.github.com/v3/licenses/ +type LicensesService service + +// RepositoryLicense represents the license for a repository. +type RepositoryLicense struct { + Name *string `json:"name,omitempty"` + Path *string `json:"path,omitempty"` + + SHA *string `json:"sha,omitempty"` + Size *int `json:"size,omitempty"` + URL *string `json:"url,omitempty"` + HTMLURL *string `json:"html_url,omitempty"` + GitURL *string `json:"git_url,omitempty"` + DownloadURL *string `json:"download_url,omitempty"` + Type *string `json:"type,omitempty"` + Content *string `json:"content,omitempty"` + Encoding *string `json:"encoding,omitempty"` + License *License `json:"license,omitempty"` +} + +func (l RepositoryLicense) String() string { + return Stringify(l) } // License represents an open source license. @@ -21,14 +43,14 @@ type License struct { Name *string `json:"name,omitempty"` URL *string `json:"url,omitempty"` + SPDXID *string `json:"spdx_id,omitempty"` HTMLURL *string `json:"html_url,omitempty"` Featured *bool `json:"featured,omitempty"` Description *string `json:"description,omitempty"` - Category *string `json:"category,omitempty"` Implementation *string `json:"implementation,omitempty"` - Required *[]string `json:"required,omitempty"` - Permitted *[]string `json:"permitted,omitempty"` - Forbidden *[]string `json:"forbidden,omitempty"` + Permissions *[]string `json:"permissions,omitempty"` + Conditions *[]string `json:"conditions,omitempty"` + Limitations *[]string `json:"limitations,omitempty"` Body *string `json:"body,omitempty"` } @@ -39,7 +61,7 @@ func (l License) String() string { // List popular open source licenses. 
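Note for reviewers: ListIssueTimeline above is a preview API, and the method sets the required preview Accept header internally, so callers only pass the context and issue coordinates. A small sketch with illustrative owner/repo/issue values:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/google/go-github/github"
)

func main() {
	ctx := context.Background()
	client := github.NewClient(nil)

	// The timeline preview Accept header is set inside ListIssueTimeline itself.
	events, _, err := client.Issues.ListIssueTimeline(ctx, "octocat", "Hello-World", 1, nil)
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range events {
		if e.Event != nil {
			fmt.Println(*e.Event)
		}
	}
}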
// // GitHub API docs: https://developer.github.com/v3/licenses/#list-all-licenses -func (s *LicensesService) List() ([]License, *Response, error) { +func (s *LicensesService) List(ctx context.Context) ([]*License, *Response, error) { req, err := s.client.NewRequest("GET", "licenses", nil) if err != nil { return nil, nil, err @@ -48,19 +70,19 @@ func (s *LicensesService) List() ([]License, *Response, error) { // TODO: remove custom Accept header when this API fully launches req.Header.Set("Accept", mediaTypeLicensesPreview) - licenses := new([]License) - resp, err := s.client.Do(req, licenses) + var licenses []*License + resp, err := s.client.Do(ctx, req, &licenses) if err != nil { return nil, resp, err } - return *licenses, resp, err + return licenses, resp, nil } // Get extended metadata for one license. // // GitHub API docs: https://developer.github.com/v3/licenses/#get-an-individual-license -func (s *LicensesService) Get(licenseName string) (*License, *Response, error) { +func (s *LicensesService) Get(ctx context.Context, licenseName string) (*License, *Response, error) { u := fmt.Sprintf("licenses/%s", licenseName) req, err := s.client.NewRequest("GET", u, nil) @@ -72,10 +94,10 @@ func (s *LicensesService) Get(licenseName string) (*License, *Response, error) { req.Header.Set("Accept", mediaTypeLicensesPreview) license := new(License) - resp, err := s.client.Do(req, license) + resp, err := s.client.Do(ctx, req, license) if err != nil { return nil, resp, err } - return license, resp, err + return license, resp, nil } diff --git a/vendor/github.com/google/go-github/github/messages.go b/vendor/github.com/google/go-github/github/messages.go new file mode 100644 index 0000000000..a7ec65fba2 --- /dev/null +++ b/vendor/github.com/google/go-github/github/messages.go @@ -0,0 +1,198 @@ +// Copyright 2016 The go-github AUTHORS. All rights reserved. +// +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// This file provides functions for validating payloads from GitHub Webhooks. +// GitHub API docs: https://developer.github.com/webhooks/securing/#validating-payloads-from-github + +package github + +import ( + "crypto/hmac" + "crypto/sha1" + "crypto/sha256" + "crypto/sha512" + "encoding/hex" + "encoding/json" + "errors" + "fmt" + "hash" + "io/ioutil" + "net/http" + "strings" +) + +const ( + // sha1Prefix is the prefix used by GitHub before the HMAC hexdigest. + sha1Prefix = "sha1" + // sha256Prefix and sha512Prefix are provided for future compatibility. + sha256Prefix = "sha256" + sha512Prefix = "sha512" + // signatureHeader is the GitHub header key used to pass the HMAC hexdigest. + signatureHeader = "X-Hub-Signature" + // eventTypeHeader is the GitHub header key used to pass the event type. + eventTypeHeader = "X-Github-Event" +) + +var ( + // eventTypeMapping maps webhooks types to their corresponding go-github struct types. 
+ eventTypeMapping = map[string]string{ + "commit_comment": "CommitCommentEvent", + "create": "CreateEvent", + "delete": "DeleteEvent", + "deployment": "DeploymentEvent", + "deployment_status": "DeploymentStatusEvent", + "fork": "ForkEvent", + "gollum": "GollumEvent", + "integration_installation": "IntegrationInstallationEvent", + "integration_installation_repositories": "IntegrationInstallationRepositoriesEvent", + "issue_comment": "IssueCommentEvent", + "issues": "IssuesEvent", + "label": "LabelEvent", + "member": "MemberEvent", + "membership": "MembershipEvent", + "milestone": "MilestoneEvent", + "organization": "OrganizationEvent", + "page_build": "PageBuildEvent", + "ping": "PingEvent", + "project": "ProjectEvent", + "project_card": "ProjectCardEvent", + "project_column": "ProjectColumnEvent", + "public": "PublicEvent", + "pull_request_review": "PullRequestReviewEvent", + "pull_request_review_comment": "PullRequestReviewCommentEvent", + "pull_request": "PullRequestEvent", + "push": "PushEvent", + "repository": "RepositoryEvent", + "release": "ReleaseEvent", + "status": "StatusEvent", + "team_add": "TeamAddEvent", + "watch": "WatchEvent", + } +) + +// genMAC generates the HMAC signature for a message provided the secret key +// and hashFunc. +func genMAC(message, key []byte, hashFunc func() hash.Hash) []byte { + mac := hmac.New(hashFunc, key) + mac.Write(message) + return mac.Sum(nil) +} + +// checkMAC reports whether messageMAC is a valid HMAC tag for message. +func checkMAC(message, messageMAC, key []byte, hashFunc func() hash.Hash) bool { + expectedMAC := genMAC(message, key, hashFunc) + return hmac.Equal(messageMAC, expectedMAC) +} + +// messageMAC returns the hex-decoded HMAC tag from the signature and its +// corresponding hash function. +func messageMAC(signature string) ([]byte, func() hash.Hash, error) { + if signature == "" { + return nil, nil, errors.New("missing signature") + } + sigParts := strings.SplitN(signature, "=", 2) + if len(sigParts) != 2 { + return nil, nil, fmt.Errorf("error parsing signature %q", signature) + } + + var hashFunc func() hash.Hash + switch sigParts[0] { + case sha1Prefix: + hashFunc = sha1.New + case sha256Prefix: + hashFunc = sha256.New + case sha512Prefix: + hashFunc = sha512.New + default: + return nil, nil, fmt.Errorf("unknown hash type prefix: %q", sigParts[0]) + } + + buf, err := hex.DecodeString(sigParts[1]) + if err != nil { + return nil, nil, fmt.Errorf("error decoding signature %q: %v", signature, err) + } + return buf, hashFunc, nil +} + +// ValidatePayload validates an incoming GitHub Webhook event request +// and returns the (JSON) payload. +// secretKey is the GitHub Webhook secret message. +// +// Example usage: +// +// func (s *GitHubEventMonitor) ServeHTTP(w http.ResponseWriter, r *http.Request) { +// payload, err := github.ValidatePayload(r, s.webhookSecretKey) +// if err != nil { ... } +// // Process payload... +// } +// +func ValidatePayload(r *http.Request, secretKey []byte) (payload []byte, err error) { + payload, err = ioutil.ReadAll(r.Body) + if err != nil { + return nil, err + } + + sig := r.Header.Get(signatureHeader) + if err := validateSignature(sig, payload, secretKey); err != nil { + return nil, err + } + return payload, nil +} + +// validateSignature validates the signature for the given payload. +// signature is the GitHub hash signature delivered in the X-Hub-Signature header. +// payload is the JSON payload sent by GitHub Webhooks. +// secretKey is the GitHub Webhook secret message. 
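Note for reviewers: ValidatePayload above combines with WebHookType and ParseWebHook (both defined a little further down in this file) into a complete webhook receiver. A sketch assuming an illustrative secret and listen address:

package main

import (
	"log"
	"net/http"

	"github.com/google/go-github/github"
)

var webhookSecret = []byte("my-webhook-secret") // illustrative; must match the hook's configured secret

func handleWebhook(w http.ResponseWriter, r *http.Request) {
	// Reject any request whose X-Hub-Signature does not match the payload HMAC.
	payload, err := github.ValidatePayload(r, webhookSecret)
	if err != nil {
		http.Error(w, "invalid signature", http.StatusForbidden)
		return
	}
	event, err := github.ParseWebHook(github.WebHookType(r), payload)
	if err != nil {
		http.Error(w, "unrecognized event type", http.StatusBadRequest)
		return
	}
	switch event.(type) {
	case *github.PushEvent:
		log.Println("received a push event")
	case *github.PingEvent:
		log.Println("received a ping event")
	default:
		log.Println("event received and ignored")
	}
}

func main() {
	http.HandleFunc("/webhook", handleWebhook)
	log.Fatal(http.ListenAndServe(":8080", nil))
}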
+// +// GitHub API docs: https://developer.github.com/webhooks/securing/#validating-payloads-from-github +func validateSignature(signature string, payload, secretKey []byte) error { + messageMAC, hashFunc, err := messageMAC(signature) + if err != nil { + return err + } + if !checkMAC(payload, messageMAC, secretKey, hashFunc) { + return errors.New("payload signature check failed") + } + return nil +} + +// WebHookType returns the event type of webhook request r. +func WebHookType(r *http.Request) string { + return r.Header.Get(eventTypeHeader) +} + +// ParseWebHook parses the event payload. For recognized event types, a +// value of the corresponding struct type will be returned (as returned +// by Event.ParsePayload()). An error will be returned for unrecognized event +// types. +// +// Example usage: +// +// func (s *GitHubEventMonitor) ServeHTTP(w http.ResponseWriter, r *http.Request) { +// payload, err := github.ValidatePayload(r, s.webhookSecretKey) +// if err != nil { ... } +// event, err := github.ParseWebHook(github.WebHookType(r), payload) +// if err != nil { ... } +// switch event := event.(type) { +// case *github.CommitCommentEvent: +// processCommitCommentEvent(event) +// case *github.CreateEvent: +// processCreateEvent(event) +// ... +// } +// } +// +func ParseWebHook(messageType string, payload []byte) (interface{}, error) { + eventType, ok := eventTypeMapping[messageType] + if !ok { + return nil, fmt.Errorf("unknown X-Github-Event in message: %v", messageType) + } + + event := Event{ + Type: &eventType, + RawPayload: (*json.RawMessage)(&payload), + } + return event.ParsePayload() +} diff --git a/vendor/github.com/google/go-github/github/migrations.go b/vendor/github.com/google/go-github/github/migrations.go new file mode 100644 index 0000000000..6793269cd3 --- /dev/null +++ b/vendor/github.com/google/go-github/github/migrations.go @@ -0,0 +1,224 @@ +// Copyright 2016 The go-github AUTHORS. All rights reserved. +// +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package github + +import ( + "context" + "errors" + "fmt" + "net/http" + "strings" +) + +// MigrationService provides access to the migration related functions +// in the GitHub API. +// +// GitHub API docs: https://developer.github.com/v3/migration/ +type MigrationService service + +// Migration represents a GitHub migration (archival). +type Migration struct { + ID *int `json:"id,omitempty"` + GUID *string `json:"guid,omitempty"` + // State is the current state of a migration. + // Possible values are: + // "pending" which means the migration hasn't started yet, + // "exporting" which means the migration is in progress, + // "exported" which means the migration finished successfully, or + // "failed" which means the migration failed. + State *string `json:"state,omitempty"` + // LockRepositories indicates whether repositories are locked (to prevent + // manipulation) while migrating data. + LockRepositories *bool `json:"lock_repositories,omitempty"` + // ExcludeAttachments indicates whether attachments should be excluded from + // the migration (to reduce migration archive file size). 
+ ExcludeAttachments *bool `json:"exclude_attachments,omitempty"` + URL *string `json:"url,omitempty"` + CreatedAt *string `json:"created_at,omitempty"` + UpdatedAt *string `json:"updated_at,omitempty"` + Repositories []*Repository `json:"repositories,omitempty"` +} + +func (m Migration) String() string { + return Stringify(m) +} + +// MigrationOptions specifies the optional parameters to Migration methods. +type MigrationOptions struct { + // LockRepositories indicates whether repositories should be locked (to prevent + // manipulation) while migrating data. + LockRepositories bool + + // ExcludeAttachments indicates whether attachments should be excluded from + // the migration (to reduce migration archive file size). + ExcludeAttachments bool +} + +// startMigration represents the body of a StartMigration request. +type startMigration struct { + // Repositories is a slice of repository names to migrate. + Repositories []string `json:"repositories,omitempty"` + + // LockRepositories indicates whether repositories should be locked (to prevent + // manipulation) while migrating data. + LockRepositories *bool `json:"lock_repositories,omitempty"` + + // ExcludeAttachments indicates whether attachments should be excluded from + // the migration (to reduce migration archive file size). + ExcludeAttachments *bool `json:"exclude_attachments,omitempty"` +} + +// StartMigration starts the generation of a migration archive. +// repos is a slice of repository names to migrate. +// +// GitHub API docs: https://developer.github.com/v3/migration/migrations/#start-a-migration +func (s *MigrationService) StartMigration(ctx context.Context, org string, repos []string, opt *MigrationOptions) (*Migration, *Response, error) { + u := fmt.Sprintf("orgs/%v/migrations", org) + + body := &startMigration{Repositories: repos} + if opt != nil { + body.LockRepositories = Bool(opt.LockRepositories) + body.ExcludeAttachments = Bool(opt.ExcludeAttachments) + } + + req, err := s.client.NewRequest("POST", u, body) + if err != nil { + return nil, nil, err + } + + // TODO: remove custom Accept header when this API fully launches. + req.Header.Set("Accept", mediaTypeMigrationsPreview) + + m := &Migration{} + resp, err := s.client.Do(ctx, req, m) + if err != nil { + return nil, resp, err + } + + return m, resp, nil +} + +// ListMigrations lists the most recent migrations. +// +// GitHub API docs: https://developer.github.com/v3/migration/migrations/#get-a-list-of-migrations +func (s *MigrationService) ListMigrations(ctx context.Context, org string) ([]*Migration, *Response, error) { + u := fmt.Sprintf("orgs/%v/migrations", org) + + req, err := s.client.NewRequest("GET", u, nil) + if err != nil { + return nil, nil, err + } + + // TODO: remove custom Accept header when this API fully launches. + req.Header.Set("Accept", mediaTypeMigrationsPreview) + + var m []*Migration + resp, err := s.client.Do(ctx, req, &m) + if err != nil { + return nil, resp, err + } + + return m, resp, nil +} + +// MigrationStatus gets the status of a specific migration archive. +// id is the migration ID. +// +// GitHub API docs: https://developer.github.com/v3/migration/migrations/#get-the-status-of-a-migration +func (s *MigrationService) MigrationStatus(ctx context.Context, org string, id int) (*Migration, *Response, error) { + u := fmt.Sprintf("orgs/%v/migrations/%v", org, id) + + req, err := s.client.NewRequest("GET", u, nil) + if err != nil { + return nil, nil, err + } + + // TODO: remove custom Accept header when this API fully launches. 
+ req.Header.Set("Accept", mediaTypeMigrationsPreview) + + m := &Migration{} + resp, err := s.client.Do(ctx, req, m) + if err != nil { + return nil, resp, err + } + + return m, resp, nil +} + +// MigrationArchiveURL fetches a migration archive URL. +// id is the migration ID. +// +// GitHub API docs: https://developer.github.com/v3/migration/migrations/#download-a-migration-archive +func (s *MigrationService) MigrationArchiveURL(ctx context.Context, org string, id int) (url string, err error) { + u := fmt.Sprintf("orgs/%v/migrations/%v/archive", org, id) + + req, err := s.client.NewRequest("GET", u, nil) + if err != nil { + return "", err + } + + // TODO: remove custom Accept header when this API fully launches. + req.Header.Set("Accept", mediaTypeMigrationsPreview) + + s.client.clientMu.Lock() + defer s.client.clientMu.Unlock() + + // Disable the redirect mechanism because AWS fails if the GitHub auth token is provided. + var loc string + saveRedirect := s.client.client.CheckRedirect + s.client.client.CheckRedirect = func(req *http.Request, via []*http.Request) error { + loc = req.URL.String() + return errors.New("disable redirect") + } + defer func() { s.client.client.CheckRedirect = saveRedirect }() + + _, err = s.client.Do(ctx, req, nil) // expect error from disable redirect + if err == nil { + return "", errors.New("expected redirect, none provided") + } + if !strings.Contains(err.Error(), "disable redirect") { + return "", err + } + return loc, nil +} + +// DeleteMigration deletes a previous migration archive. +// id is the migration ID. +// +// GitHub API docs: https://developer.github.com/v3/migration/migrations/#delete-a-migration-archive +func (s *MigrationService) DeleteMigration(ctx context.Context, org string, id int) (*Response, error) { + u := fmt.Sprintf("orgs/%v/migrations/%v/archive", org, id) + + req, err := s.client.NewRequest("DELETE", u, nil) + if err != nil { + return nil, err + } + + // TODO: remove custom Accept header when this API fully launches. + req.Header.Set("Accept", mediaTypeMigrationsPreview) + + return s.client.Do(ctx, req, nil) +} + +// UnlockRepo unlocks a repository that was locked for migration. +// id is the migration ID. +// You should unlock each migrated repository and delete them when the migration +// is complete and you no longer need the source data. +// +// GitHub API docs: https://developer.github.com/v3/migration/migrations/#unlock-a-repository +func (s *MigrationService) UnlockRepo(ctx context.Context, org string, id int, repo string) (*Response, error) { + u := fmt.Sprintf("orgs/%v/migrations/%v/repos/%v/lock", org, id, repo) + + req, err := s.client.NewRequest("DELETE", u, nil) + if err != nil { + return nil, err + } + + // TODO: remove custom Accept header when this API fully launches. + req.Header.Set("Accept", mediaTypeMigrationsPreview) + + return s.client.Do(ctx, req, nil) +} diff --git a/vendor/github.com/google/go-github/github/migrations_source_import.go b/vendor/github.com/google/go-github/github/migrations_source_import.go new file mode 100644 index 0000000000..aa45a5a364 --- /dev/null +++ b/vendor/github.com/google/go-github/github/migrations_source_import.go @@ -0,0 +1,329 @@ +// Copyright 2016 The go-github AUTHORS. All rights reserved. +// +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package github + +import ( + "context" + "fmt" +) + +// Import represents a repository import request. +type Import struct { + // The URL of the originating repository. 
+	VCSURL *string `json:"vcs_url,omitempty"`
+	// The originating VCS type. Can be one of 'subversion', 'git',
+	// 'mercurial', or 'tfvc'. Without this parameter, the import job will
+	// take additional time to detect the VCS type before beginning the
+	// import. This detection step will be reflected in the response.
+	VCS *string `json:"vcs,omitempty"`
+	// VCSUsername and VCSPassword are only used for StartImport calls that
+	// are importing a password-protected repository.
+	VCSUsername *string `json:"vcs_username,omitempty"`
+	VCSPassword *string `json:"vcs_password,omitempty"`
+	// For a tfvc import, the name of the project that is being imported.
+	TFVCProject *string `json:"tfvc_project,omitempty"`
+
+	// LFS related fields that may be preset in the Import Progress response
+
+	// Describes whether the import has been opted in or out of using Git
+	// LFS. The value can be 'opt_in', 'opt_out', or 'undecided' if no
+	// action has been taken.
+	UseLFS *string `json:"use_lfs,omitempty"`
+	// Describes whether files larger than 100MB were found during the
+	// importing step.
+	HasLargeFiles *bool `json:"has_large_files,omitempty"`
+	// The total size in gigabytes of files larger than 100MB found in the
+	// originating repository.
+	LargeFilesSize *int `json:"large_files_size,omitempty"`
+	// The total number of files larger than 100MB found in the originating
+	// repository. To see a list of these files, call LargeFiles.
+	LargeFilesCount *int `json:"large_files_count,omitempty"`
+
+	// Identifies the current status of an import. An import that does not
+	// have errors will progress through these steps:
+	//
+	//     detecting - the "detection" step of the import is in progress
+	//         because the request did not include a VCS parameter. The
+	//         import is identifying the type of source control present at
+	//         the URL.
+	//     importing - the "raw" step of the import is in progress. This is
+	//         where commit data is fetched from the original repository.
+	//         The import progress response will include CommitCount (the
+	//         total number of raw commits that will be imported) and
+	//         Percent (0 - 100, the current progress through the import).
+	//     mapping - the "rewrite" step of the import is in progress. This
+	//         is where SVN branches are converted to Git branches, and
+	//         where author updates are applied. The import progress
+	//         response does not include progress information.
+	//     pushing - the "push" step of the import is in progress. This is
+	//         where the importer updates the repository on GitHub. The
+	//         import progress response will include PushPercent, which is
+	//         the percent value reported by git push when it is "Writing
+	//         objects".
+	//     complete - the import is complete, and the repository is ready
+	//         on GitHub.
+	//
+	// If there are problems, you will see one of these in the status field:
+	//
+	//     auth_failed - the import requires authentication in order to
+	//         connect to the original repository. Make an UpdateImport
+	//         request, and include VCSUsername and VCSPassword.
+	//     error - the import encountered an error. The import progress
+	//         response will include the FailedStep and an error message.
+	//         Contact GitHub support for more information.
+	//     detection_needs_auth - the importer requires authentication for
+	//         the originating repository to continue detection. Make an
+	//         UpdateImport request, and include VCSUsername and
+	//         VCSPassword.
+	//     detection_found_nothing - the importer didn't recognize any
+	//         source control at the URL.
+ // detection_found_multiple - the importer found several projects + // or repositories at the provided URL. When this is the case, + // the Import Progress response will also include a + // ProjectChoices field with the possible project choices as + // values. Make an UpdateImport request, and include VCS and + // (if applicable) TFVCProject. + Status *string `json:"status,omitempty"` + CommitCount *int `json:"commit_count,omitempty"` + StatusText *string `json:"status_text,omitempty"` + AuthorsCount *int `json:"authors_count,omitempty"` + Percent *int `json:"percent,omitempty"` + PushPercent *int `json:"push_percent,omitempty"` + URL *string `json:"url,omitempty"` + HTMLURL *string `json:"html_url,omitempty"` + AuthorsURL *string `json:"authors_url,omitempty"` + RepositoryURL *string `json:"repository_url,omitempty"` + Message *string `json:"message,omitempty"` + FailedStep *string `json:"failed_step,omitempty"` + + // Human readable display name, provided when the Import appears as + // part of ProjectChoices. + HumanName *string `json:"human_name,omitempty"` + + // When the importer finds several projects or repositories at the + // provided URLs, this will identify the available choices. Call + // UpdateImport with the selected Import value. + ProjectChoices []Import `json:"project_choices,omitempty"` +} + +func (i Import) String() string { + return Stringify(i) +} + +// SourceImportAuthor identifies an author imported from a source repository. +// +// GitHub API docs: https://developer.github.com/v3/migration/source_imports/#get-commit-authors +type SourceImportAuthor struct { + ID *int `json:"id,omitempty"` + RemoteID *string `json:"remote_id,omitempty"` + RemoteName *string `json:"remote_name,omitempty"` + Email *string `json:"email,omitempty"` + Name *string `json:"name,omitempty"` + URL *string `json:"url,omitempty"` + ImportURL *string `json:"import_url,omitempty"` +} + +func (a SourceImportAuthor) String() string { + return Stringify(a) +} + +// LargeFile identifies a file larger than 100MB found during a repository import. +// +// GitHub API docs: https://developer.github.com/v3/migration/source_imports/#get-large-files +type LargeFile struct { + RefName *string `json:"ref_name,omitempty"` + Path *string `json:"path,omitempty"` + OID *string `json:"oid,omitempty"` + Size *int `json:"size,omitempty"` +} + +func (f LargeFile) String() string { + return Stringify(f) +} + +// StartImport initiates a repository import. +// +// GitHub API docs: https://developer.github.com/v3/migration/source_imports/#start-an-import +func (s *MigrationService) StartImport(ctx context.Context, owner, repo string, in *Import) (*Import, *Response, error) { + u := fmt.Sprintf("repos/%v/%v/import", owner, repo) + req, err := s.client.NewRequest("PUT", u, in) + if err != nil { + return nil, nil, err + } + + // TODO: remove custom Accept header when this API fully launches + req.Header.Set("Accept", mediaTypeImportPreview) + + out := new(Import) + resp, err := s.client.Do(ctx, req, out) + if err != nil { + return nil, resp, err + } + + return out, resp, nil +} + +// ImportProgress queries for the status and progress of an ongoing repository import. 
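Note for reviewers: StartImport above and ImportProgress (whose body follows) drive the source-import flow described in the Import status comment. A sketch with illustrative org, repo, and source URL; a real import needs an authenticated client:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/google/go-github/github"
)

func main() {
	ctx := context.Background()
	client := github.NewClient(nil) // a real import requires authentication

	// Kick off the import; Status then progresses through detecting,
	// importing, mapping, pushing, and finally complete.
	imp, _, err := client.Migrations.StartImport(ctx, "my-org", "my-repo", &github.Import{
		VCS:    github.String("git"),
		VCSURL: github.String("https://svn.example.com/old-repo"),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("started:", imp)

	// Later, poll until Status reports "complete" or one of the error states.
	imp, _, err = client.Migrations.ImportProgress(ctx, "my-org", "my-repo")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("progress:", imp)
}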
+//
+// GitHub API docs: https://developer.github.com/v3/migration/source_imports/#get-import-progress
+func (s *MigrationService) ImportProgress(ctx context.Context, owner, repo string) (*Import, *Response, error) {
+	u := fmt.Sprintf("repos/%v/%v/import", owner, repo)
+	req, err := s.client.NewRequest("GET", u, nil)
+	if err != nil {
+		return nil, nil, err
+	}
+
+	// TODO: remove custom Accept header when this API fully launches
+	req.Header.Set("Accept", mediaTypeImportPreview)
+
+	out := new(Import)
+	resp, err := s.client.Do(ctx, req, out)
+	if err != nil {
+		return nil, resp, err
+	}
+
+	return out, resp, nil
+}
+
+// UpdateImport updates an existing repository import.
+//
+// GitHub API docs: https://developer.github.com/v3/migration/source_imports/#update-existing-import
+func (s *MigrationService) UpdateImport(ctx context.Context, owner, repo string, in *Import) (*Import, *Response, error) {
+	u := fmt.Sprintf("repos/%v/%v/import", owner, repo)
+	req, err := s.client.NewRequest("PATCH", u, in)
+	if err != nil {
+		return nil, nil, err
+	}
+
+	// TODO: remove custom Accept header when this API fully launches
+	req.Header.Set("Accept", mediaTypeImportPreview)
+
+	out := new(Import)
+	resp, err := s.client.Do(ctx, req, out)
+	if err != nil {
+		return nil, resp, err
+	}
+
+	return out, resp, nil
+}
+
+// CommitAuthors gets the authors mapped from the original repository.
+//
+// Each type of source control system represents authors in a different way.
+// For example, a Git commit author has a display name and an email address,
+// but a Subversion commit author just has a username. The GitHub Importer will
+// make the author information valid, but the author might not be correct. For
+// example, it will change the bare Subversion username "hubot" into something
+// like "hubot <hubot@12341234-abab-fefe-8787-fedcba987654>".
+//
+// This method and MapCommitAuthor allow you to provide correct Git author
+// information.
+//
+// GitHub API docs: https://developer.github.com/v3/migration/source_imports/#get-commit-authors
+func (s *MigrationService) CommitAuthors(ctx context.Context, owner, repo string) ([]*SourceImportAuthor, *Response, error) {
+	u := fmt.Sprintf("repos/%v/%v/import/authors", owner, repo)
+	req, err := s.client.NewRequest("GET", u, nil)
+	if err != nil {
+		return nil, nil, err
+	}
+
+	// TODO: remove custom Accept header when this API fully launches
+	req.Header.Set("Accept", mediaTypeImportPreview)
+
+	var authors []*SourceImportAuthor
+	resp, err := s.client.Do(ctx, req, &authors)
+	if err != nil {
+		return nil, resp, err
+	}
+
+	return authors, resp, nil
+}
+
+// MapCommitAuthor updates an author's identity for the import. Your
+// application can continue updating authors any time before you push new
+// commits to the repository.
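Note for reviewers: CommitAuthors above pairs with MapCommitAuthor (defined next) to correct the author identities the importer synthesizes. The remote name, real name, and email below are illustrative:

package main

import (
	"context"
	"log"

	"github.com/google/go-github/github"
)

func main() {
	ctx := context.Background()
	client := github.NewClient(nil) // requires authentication in practice

	authors, _, err := client.Migrations.CommitAuthors(ctx, "my-org", "my-repo")
	if err != nil {
		log.Fatal(err)
	}
	for _, a := range authors {
		if a.RemoteName == nil || a.ID == nil || *a.RemoteName != "hubot" {
			continue
		}
		// Replace the synthesized identity with a real name and email.
		_, _, err := client.Migrations.MapCommitAuthor(ctx, "my-org", "my-repo", *a.ID, &github.SourceImportAuthor{
			Name:  github.String("Hubot"),
			Email: github.String("hubot@example.com"),
		})
		if err != nil {
			log.Fatal(err)
		}
	}
}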
+// +// GitHub API docs: https://developer.github.com/v3/migration/source_imports/#map-a-commit-author +func (s *MigrationService) MapCommitAuthor(ctx context.Context, owner, repo string, id int, author *SourceImportAuthor) (*SourceImportAuthor, *Response, error) { + u := fmt.Sprintf("repos/%v/%v/import/authors/%v", owner, repo, id) + req, err := s.client.NewRequest("PATCH", u, author) + if err != nil { + return nil, nil, err + } + + // TODO: remove custom Accept header when this API fully launches + req.Header.Set("Accept", mediaTypeImportPreview) + + out := new(SourceImportAuthor) + resp, err := s.client.Do(ctx, req, out) + if err != nil { + return nil, resp, err + } + + return out, resp, nil +} + +// SetLFSPreference sets whether imported repositories should use Git LFS for +// files larger than 100MB. Only the UseLFS field on the provided Import is +// used. +// +// GitHub API docs: https://developer.github.com/v3/migration/source_imports/#set-git-lfs-preference +func (s *MigrationService) SetLFSPreference(ctx context.Context, owner, repo string, in *Import) (*Import, *Response, error) { + u := fmt.Sprintf("repos/%v/%v/import/lfs", owner, repo) + req, err := s.client.NewRequest("PATCH", u, in) + if err != nil { + return nil, nil, err + } + + // TODO: remove custom Accept header when this API fully launches + req.Header.Set("Accept", mediaTypeImportPreview) + + out := new(Import) + resp, err := s.client.Do(ctx, req, out) + if err != nil { + return nil, resp, err + } + + return out, resp, nil +} + +// LargeFiles lists files larger than 100MB found during the import. +// +// GitHub API docs: https://developer.github.com/v3/migration/source_imports/#get-large-files +func (s *MigrationService) LargeFiles(ctx context.Context, owner, repo string) ([]*LargeFile, *Response, error) { + u := fmt.Sprintf("repos/%v/%v/import/large_files", owner, repo) + req, err := s.client.NewRequest("GET", u, nil) + if err != nil { + return nil, nil, err + } + + // TODO: remove custom Accept header when this API fully launches + req.Header.Set("Accept", mediaTypeImportPreview) + + var files []*LargeFile + resp, err := s.client.Do(ctx, req, &files) + if err != nil { + return nil, resp, err + } + + return files, resp, nil +} + +// CancelImport stops an import for a repository. +// +// GitHub API docs: https://developer.github.com/v3/migration/source_imports/#cancel-an-import +func (s *MigrationService) CancelImport(ctx context.Context, owner, repo string) (*Response, error) { + u := fmt.Sprintf("repos/%v/%v/import", owner, repo) + req, err := s.client.NewRequest("DELETE", u, nil) + if err != nil { + return nil, err + } + + // TODO: remove custom Accept header when this API fully launches + req.Header.Set("Accept", mediaTypeImportPreview) + + return s.client.Do(ctx, req, nil) +} diff --git a/vendor/github.com/google/go-github/github/misc.go b/vendor/github.com/google/go-github/github/misc.go index 66e7f5239d..42d0d30339 100644 --- a/vendor/github.com/google/go-github/github/misc.go +++ b/vendor/github.com/google/go-github/github/misc.go @@ -7,13 +7,14 @@ package github import ( "bytes" + "context" "fmt" "net/url" ) // MarkdownOptions specifies optional parameters to the Markdown method. type MarkdownOptions struct { - // Mode identifies the rendering mode. Possible values are: + // Mode identifies the rendering mode. Possible values are: // markdown - render a document as plain Markdown, just like // README files are rendered. // @@ -25,7 +26,7 @@ type MarkdownOptions struct { // Default is "markdown". 
Mode string - // Context identifies the repository context. Only taken into account + // Context identifies the repository context. Only taken into account // when rendering as "gfm". Context string } @@ -39,7 +40,7 @@ type markdownRequest struct { // Markdown renders an arbitrary Markdown document. // // GitHub API docs: https://developer.github.com/v3/markdown/ -func (c *Client) Markdown(text string, opt *MarkdownOptions) (string, *Response, error) { +func (c *Client) Markdown(ctx context.Context, text string, opt *MarkdownOptions) (string, *Response, error) { request := &markdownRequest{Text: String(text)} if opt != nil { if opt.Mode != "" { @@ -56,7 +57,7 @@ func (c *Client) Markdown(text string, opt *MarkdownOptions) (string, *Response, } buf := new(bytes.Buffer) - resp, err := c.Do(req, buf) + resp, err := c.Do(ctx, req, buf) if err != nil { return "", resp, err } @@ -67,14 +68,14 @@ func (c *Client) Markdown(text string, opt *MarkdownOptions) (string, *Response, // ListEmojis returns the emojis available to use on GitHub. // // GitHub API docs: https://developer.github.com/v3/emojis/ -func (c *Client) ListEmojis() (map[string]string, *Response, error) { +func (c *Client) ListEmojis(ctx context.Context) (map[string]string, *Response, error) { req, err := c.NewRequest("GET", "emojis", nil) if err != nil { return nil, nil, err } var emoji map[string]string - resp, err := c.Do(req, &emoji) + resp, err := c.Do(ctx, req, &emoji) if err != nil { return nil, resp, err } @@ -109,14 +110,14 @@ type APIMeta struct { // endpoint provides information about that installation. // // GitHub API docs: https://developer.github.com/v3/meta/ -func (c *Client) APIMeta() (*APIMeta, *Response, error) { +func (c *Client) APIMeta(ctx context.Context) (*APIMeta, *Response, error) { req, err := c.NewRequest("GET", "meta", nil) if err != nil { return nil, nil, err } meta := new(APIMeta) - resp, err := c.Do(req, meta) + resp, err := c.Do(ctx, req, meta) if err != nil { return nil, resp, err } @@ -125,8 +126,8 @@ func (c *Client) APIMeta() (*APIMeta, *Response, error) { } // Octocat returns an ASCII art octocat with the specified message in a speech -// bubble. If message is empty, a random zen phrase is used. -func (c *Client) Octocat(message string) (string, *Response, error) { +// bubble. If message is empty, a random zen phrase is used. +func (c *Client) Octocat(ctx context.Context, message string) (string, *Response, error) { u := "octocat" if message != "" { u = fmt.Sprintf("%s?s=%s", u, url.QueryEscape(message)) @@ -138,7 +139,7 @@ func (c *Client) Octocat(message string) (string, *Response, error) { } buf := new(bytes.Buffer) - resp, err := c.Do(req, buf) + resp, err := c.Do(ctx, req, buf) if err != nil { return "", resp, err } @@ -149,14 +150,14 @@ func (c *Client) Octocat(message string) (string, *Response, error) { // Zen returns a random line from The Zen of GitHub. // // see also: http://warpspire.com/posts/taste/ -func (c *Client) Zen() (string, *Response, error) { +func (c *Client) Zen(ctx context.Context) (string, *Response, error) { req, err := c.NewRequest("GET", "zen", nil) if err != nil { return "", nil, err } buf := new(bytes.Buffer) - resp, err := c.Do(req, buf) + resp, err := c.Do(ctx, req, buf) if err != nil { return "", resp, err } @@ -180,18 +181,18 @@ func (s *ServiceHook) String() string { // ListServiceHooks lists all of the available service hooks. 
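Note for reviewers: Client.Markdown above now threads a context too; rendering in "gfm" mode with a repository Context is what resolves references like #12 into links. The text and repository below are illustrative:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/google/go-github/github"
)

func main() {
	ctx := context.Background()
	client := github.NewClient(nil)

	// Render GitHub Flavored Markdown in the context of a repository so
	// that issue references such as #12 become links.
	html, _, err := client.Markdown(ctx, "Fixes #12 :tada:", &github.MarkdownOptions{
		Mode:    "gfm",
		Context: "google/go-github",
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(html)
}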
// // GitHub API docs: https://developer.github.com/webhooks/#services -func (c *Client) ListServiceHooks() ([]ServiceHook, *Response, error) { +func (c *Client) ListServiceHooks(ctx context.Context) ([]*ServiceHook, *Response, error) { u := "hooks" req, err := c.NewRequest("GET", u, nil) if err != nil { return nil, nil, err } - hooks := new([]ServiceHook) - resp, err := c.Do(req, hooks) + var hooks []*ServiceHook + resp, err := c.Do(ctx, req, &hooks) if err != nil { return nil, resp, err } - return *hooks, resp, err + return hooks, resp, nil } diff --git a/vendor/github.com/google/go-github/github/orgs.go b/vendor/github.com/google/go-github/github/orgs.go index 7596873cbb..8b126f00f1 100644 --- a/vendor/github.com/google/go-github/github/orgs.go +++ b/vendor/github.com/google/go-github/github/orgs.go @@ -6,6 +6,7 @@ package github import ( + "context" "fmt" "time" ) @@ -13,10 +14,8 @@ import ( // OrganizationsService provides access to the organization related functions // in the GitHub API. // -// GitHub API docs: http://developer.github.com/v3/orgs/ -type OrganizationsService struct { - client *Client -} +// GitHub API docs: https://developer.github.com/v3/orgs/ +type OrganizationsService service // Organization represents a GitHub organization account. type Organization struct { @@ -29,6 +28,7 @@ type Organization struct { Blog *string `json:"blog,omitempty"` Location *string `json:"location,omitempty"` Email *string `json:"email,omitempty"` + Description *string `json:"description,omitempty"` PublicRepos *int `json:"public_repos,omitempty"` PublicGists *int `json:"public_gists,omitempty"` Followers *int `json:"followers,omitempty"` @@ -47,6 +47,8 @@ type Organization struct { // API URLs URL *string `json:"url,omitempty"` EventsURL *string `json:"events_url,omitempty"` + HooksURL *string `json:"hooks_url,omitempty"` + IssuesURL *string `json:"issues_url,omitempty"` MembersURL *string `json:"members_url,omitempty"` PublicMembersURL *string `json:"public_members_url,omitempty"` ReposURL *string `json:"repos_url,omitempty"` @@ -56,7 +58,7 @@ func (o Organization) String() string { return Stringify(o) } -// Plan represents the payment plan for an account. See plans at https://github.com/plans. +// Plan represents the payment plan for an account. See plans at https://github.com/plans. type Plan struct { Name *string `json:"name,omitempty"` Space *int `json:"space,omitempty"` @@ -68,11 +70,46 @@ func (p Plan) String() string { return Stringify(p) } -// List the organizations for a user. Passing the empty string will list +// OrganizationsListOptions specifies the optional parameters to the +// OrganizationsService.ListAll method. +type OrganizationsListOptions struct { + // Since filters Organizations by ID. + Since int `url:"since,omitempty"` + + ListOptions +} + +// ListAll lists all organizations, in the order that they were created on GitHub. +// +// Note: Pagination is powered exclusively by the since parameter. To continue +// listing the next set of organizations, use the ID of the last-returned organization +// as the opts.Since parameter for the next call. 
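Note for reviewers: as the ListAll comment above says, pagination is driven entirely by Since, so a full walk feeds the ID of the last organization seen back into the options. A sketch that assumes the usual ID and Login fields on Organization:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/google/go-github/github"
)

func main() {
	ctx := context.Background()
	client := github.NewClient(nil)

	opt := &github.OrganizationsListOptions{}
	for {
		orgs, _, err := client.Organizations.ListAll(ctx, opt)
		if err != nil {
			log.Fatal(err)
		}
		if len(orgs) == 0 {
			break // no organizations left
		}
		for _, org := range orgs {
			if org.Login != nil {
				fmt.Println(*org.Login)
			}
		}
		last := orgs[len(orgs)-1]
		if last.ID == nil {
			break
		}
		opt.Since = *last.ID // continue after the last organization seen
	}
}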
+// +// GitHub API docs: https://developer.github.com/v3/orgs/#list-all-organizations +func (s *OrganizationsService) ListAll(ctx context.Context, opt *OrganizationsListOptions) ([]*Organization, *Response, error) { + u, err := addOptions("organizations", opt) + if err != nil { + return nil, nil, err + } + + req, err := s.client.NewRequest("GET", u, nil) + if err != nil { + return nil, nil, err + } + + orgs := []*Organization{} + resp, err := s.client.Do(ctx, req, &orgs) + if err != nil { + return nil, resp, err + } + return orgs, resp, nil +} + +// List the organizations for a user. Passing the empty string will list // organizations for the authenticated user. // -// GitHub API docs: http://developer.github.com/v3/orgs/#list-user-organizations -func (s *OrganizationsService) List(user string, opt *ListOptions) ([]Organization, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/orgs/#list-user-organizations +func (s *OrganizationsService) List(ctx context.Context, user string, opt *ListOptions) ([]*Organization, *Response, error) { var u string if user != "" { u = fmt.Sprintf("users/%v/orgs", user) @@ -89,19 +126,19 @@ func (s *OrganizationsService) List(user string, opt *ListOptions) ([]Organizati return nil, nil, err } - orgs := new([]Organization) - resp, err := s.client.Do(req, orgs) + var orgs []*Organization + resp, err := s.client.Do(ctx, req, &orgs) if err != nil { return nil, resp, err } - return *orgs, resp, err + return orgs, resp, nil } // Get fetches an organization by name. // -// GitHub API docs: http://developer.github.com/v3/orgs/#get-an-organization -func (s *OrganizationsService) Get(org string) (*Organization, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/orgs/#get-an-organization +func (s *OrganizationsService) Get(ctx context.Context, org string) (*Organization, *Response, error) { u := fmt.Sprintf("orgs/%v", org) req, err := s.client.NewRequest("GET", u, nil) if err != nil { @@ -109,18 +146,18 @@ func (s *OrganizationsService) Get(org string) (*Organization, *Response, error) } organization := new(Organization) - resp, err := s.client.Do(req, organization) + resp, err := s.client.Do(ctx, req, organization) if err != nil { return nil, resp, err } - return organization, resp, err + return organization, resp, nil } // Edit an organization. 
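Note for reviewers: Edit (whose body follows) issues a PATCH, so only the fields set on the passed Organization change; that includes the new Description field added above. Org name and text are illustrative:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/google/go-github/github"
)

func main() {
	ctx := context.Background()
	client := github.NewClient(nil) // editing requires an authenticated owner

	org, _, err := client.Organizations.Edit(ctx, "my-org", &github.Organization{
		Description: github.String("Tooling and infrastructure"),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("updated:", org)
}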
// -// GitHub API docs: http://developer.github.com/v3/orgs/#edit-an-organization -func (s *OrganizationsService) Edit(name string, org *Organization) (*Organization, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/orgs/#edit-an-organization +func (s *OrganizationsService) Edit(ctx context.Context, name string, org *Organization) (*Organization, *Response, error) { u := fmt.Sprintf("orgs/%v", name) req, err := s.client.NewRequest("PATCH", u, org) if err != nil { @@ -128,10 +165,10 @@ func (s *OrganizationsService) Edit(name string, org *Organization) (*Organizati } o := new(Organization) - resp, err := s.client.Do(req, o) + resp, err := s.client.Do(ctx, req, o) if err != nil { return nil, resp, err } - return o, resp, err + return o, resp, nil } diff --git a/vendor/github.com/google/go-github/github/orgs_hooks.go b/vendor/github.com/google/go-github/github/orgs_hooks.go index 3e7ad40ff2..4fc692e0f6 100644 --- a/vendor/github.com/google/go-github/github/orgs_hooks.go +++ b/vendor/github.com/google/go-github/github/orgs_hooks.go @@ -5,12 +5,15 @@ package github -import "fmt" +import ( + "context" + "fmt" +) // ListHooks lists all Hooks for the specified organization. // // GitHub API docs: https://developer.github.com/v3/orgs/hooks/#list-hooks -func (s *OrganizationsService) ListHooks(org string, opt *ListOptions) ([]Hook, *Response, error) { +func (s *OrganizationsService) ListHooks(ctx context.Context, org string, opt *ListOptions) ([]*Hook, *Response, error) { u := fmt.Sprintf("orgs/%v/hooks", org) u, err := addOptions(u, opt) if err != nil { @@ -22,26 +25,26 @@ func (s *OrganizationsService) ListHooks(org string, opt *ListOptions) ([]Hook, return nil, nil, err } - hooks := new([]Hook) - resp, err := s.client.Do(req, hooks) + var hooks []*Hook + resp, err := s.client.Do(ctx, req, &hooks) if err != nil { return nil, resp, err } - return *hooks, resp, err + return hooks, resp, nil } // GetHook returns a single specified Hook. // // GitHub API docs: https://developer.github.com/v3/orgs/hooks/#get-single-hook -func (s *OrganizationsService) GetHook(org string, id int) (*Hook, *Response, error) { +func (s *OrganizationsService) GetHook(ctx context.Context, org string, id int) (*Hook, *Response, error) { u := fmt.Sprintf("orgs/%v/hooks/%d", org, id) req, err := s.client.NewRequest("GET", u, nil) if err != nil { return nil, nil, err } hook := new(Hook) - resp, err := s.client.Do(req, hook) + resp, err := s.client.Do(ctx, req, hook) return hook, resp, err } @@ -49,7 +52,7 @@ func (s *OrganizationsService) GetHook(org string, id int) (*Hook, *Response, er // Name and Config are required fields. // // GitHub API docs: https://developer.github.com/v3/orgs/hooks/#create-a-hook -func (s *OrganizationsService) CreateHook(org string, hook *Hook) (*Hook, *Response, error) { +func (s *OrganizationsService) CreateHook(ctx context.Context, org string, hook *Hook) (*Hook, *Response, error) { u := fmt.Sprintf("orgs/%v/hooks", org) req, err := s.client.NewRequest("POST", u, hook) if err != nil { @@ -57,48 +60,48 @@ func (s *OrganizationsService) CreateHook(org string, hook *Hook) (*Hook, *Respo } h := new(Hook) - resp, err := s.client.Do(req, h) + resp, err := s.client.Do(ctx, req, h) if err != nil { return nil, resp, err } - return h, resp, err + return h, resp, nil } // EditHook updates a specified Hook. 
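Note for reviewers: CreateHook above plus PingHook (defined below) give the usual create-then-verify flow for organization webhooks. This sketch assumes the standard Hook fields (Name, Active, Events, Config), which are not shown in this hunk; the payload URL and events are illustrative:

package main

import (
	"context"
	"log"

	"github.com/google/go-github/github"
)

func main() {
	ctx := context.Background()
	client := github.NewClient(nil) // managing hooks requires admin credentials

	hook := &github.Hook{
		Name:   github.String("web"), // Name and Config are the required fields
		Active: github.Bool(true),
		Events: []string{"push", "pull_request"},
		Config: map[string]interface{}{
			"url":          "https://example.com/webhook",
			"content_type": "json",
		},
	}
	created, _, err := client.Organizations.CreateHook(ctx, "my-org", hook)
	if err != nil {
		log.Fatal(err)
	}
	if created.ID != nil {
		// Ask GitHub to send a ping event so the receiver can be verified.
		if _, err := client.Organizations.PingHook(ctx, "my-org", *created.ID); err != nil {
			log.Fatal(err)
		}
	}
}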
// // GitHub API docs: https://developer.github.com/v3/orgs/hooks/#edit-a-hook -func (s *OrganizationsService) EditHook(org string, id int, hook *Hook) (*Hook, *Response, error) { +func (s *OrganizationsService) EditHook(ctx context.Context, org string, id int, hook *Hook) (*Hook, *Response, error) { u := fmt.Sprintf("orgs/%v/hooks/%d", org, id) req, err := s.client.NewRequest("PATCH", u, hook) if err != nil { return nil, nil, err } h := new(Hook) - resp, err := s.client.Do(req, h) + resp, err := s.client.Do(ctx, req, h) return h, resp, err } // PingHook triggers a 'ping' event to be sent to the Hook. // // GitHub API docs: https://developer.github.com/v3/orgs/hooks/#ping-a-hook -func (s *OrganizationsService) PingHook(org string, id int) (*Response, error) { +func (s *OrganizationsService) PingHook(ctx context.Context, org string, id int) (*Response, error) { u := fmt.Sprintf("orgs/%v/hooks/%d/pings", org, id) req, err := s.client.NewRequest("POST", u, nil) if err != nil { return nil, err } - return s.client.Do(req, nil) + return s.client.Do(ctx, req, nil) } // DeleteHook deletes a specified Hook. // // GitHub API docs: https://developer.github.com/v3/orgs/hooks/#delete-a-hook -func (s *OrganizationsService) DeleteHook(org string, id int) (*Response, error) { +func (s *OrganizationsService) DeleteHook(ctx context.Context, org string, id int) (*Response, error) { u := fmt.Sprintf("orgs/%v/hooks/%d", org, id) req, err := s.client.NewRequest("DELETE", u, nil) if err != nil { return nil, err } - return s.client.Do(req, nil) + return s.client.Do(ctx, req, nil) } diff --git a/vendor/github.com/google/go-github/github/orgs_members.go b/vendor/github.com/google/go-github/github/orgs_members.go index 01a9ba9b61..58fb019153 100644 --- a/vendor/github.com/google/go-github/github/orgs_members.go +++ b/vendor/github.com/google/go-github/github/orgs_members.go @@ -5,7 +5,10 @@ package github -import "fmt" +import ( + "context" + "fmt" +) // Membership represents the status of a user's membership in an organization or team. type Membership struct { @@ -48,8 +51,8 @@ type ListMembersOptions struct { // organization), list only publicly visible members. PublicOnly bool `url:"-"` - // Filter members returned in the list. Possible values are: - // 2fa_disabled, all. Default is "all". + // Filter members returned in the list. Possible values are: + // 2fa_disabled, all. Default is "all". Filter string `url:"filter,omitempty"` // Role filters members returned by their role in the organization. @@ -64,12 +67,12 @@ type ListMembersOptions struct { ListOptions } -// ListMembers lists the members for an organization. If the authenticated +// ListMembers lists the members for an organization. If the authenticated // user is an owner of the organization, this will return both concealed and // public members, otherwise it will only return public members. 
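All of the list endpoints in these files page their results through the embedded ListOptions, and the returned Response reports the next page index. A sketch of the usual drain loop (same assumed `client`/`ctx`; NextPage is zero on the final page):

```go
opt := &github.ListMembersOptions{
	ListOptions: github.ListOptions{PerPage: 100},
}
var all []*github.User
for {
	members, resp, err := client.Organizations.ListMembers(ctx, "my-org", opt)
	if err != nil {
		log.Fatal(err)
	}
	all = append(all, members...)
	if resp.NextPage == 0 {
		break
	}
	opt.Page = resp.NextPage
}
```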
// -// GitHub API docs: http://developer.github.com/v3/orgs/members/#members-list -func (s *OrganizationsService) ListMembers(org string, opt *ListMembersOptions) ([]User, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/orgs/members/#members-list +func (s *OrganizationsService) ListMembers(ctx context.Context, org string, opt *ListMembersOptions) ([]*User, *Response, error) { var u string if opt != nil && opt.PublicOnly { u = fmt.Sprintf("orgs/%v/public_members", org) @@ -86,87 +89,83 @@ func (s *OrganizationsService) ListMembers(org string, opt *ListMembersOptions) return nil, nil, err } - if opt != nil && opt.Role != "" { - req.Header.Set("Accept", mediaTypeOrgPermissionPreview) - } - - members := new([]User) - resp, err := s.client.Do(req, members) + var members []*User + resp, err := s.client.Do(ctx, req, &members) if err != nil { return nil, resp, err } - return *members, resp, err + return members, resp, nil } // IsMember checks if a user is a member of an organization. // -// GitHub API docs: http://developer.github.com/v3/orgs/members/#check-membership -func (s *OrganizationsService) IsMember(org, user string) (bool, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/orgs/members/#check-membership +func (s *OrganizationsService) IsMember(ctx context.Context, org, user string) (bool, *Response, error) { u := fmt.Sprintf("orgs/%v/members/%v", org, user) req, err := s.client.NewRequest("GET", u, nil) if err != nil { return false, nil, err } - resp, err := s.client.Do(req, nil) + resp, err := s.client.Do(ctx, req, nil) member, err := parseBoolResponse(err) return member, resp, err } // IsPublicMember checks if a user is a public member of an organization. // -// GitHub API docs: http://developer.github.com/v3/orgs/members/#check-public-membership -func (s *OrganizationsService) IsPublicMember(org, user string) (bool, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/orgs/members/#check-public-membership +func (s *OrganizationsService) IsPublicMember(ctx context.Context, org, user string) (bool, *Response, error) { u := fmt.Sprintf("orgs/%v/public_members/%v", org, user) req, err := s.client.NewRequest("GET", u, nil) if err != nil { return false, nil, err } - resp, err := s.client.Do(req, nil) + resp, err := s.client.Do(ctx, req, nil) member, err := parseBoolResponse(err) return member, resp, err } // RemoveMember removes a user from all teams of an organization. // -// GitHub API docs: http://developer.github.com/v3/orgs/members/#remove-a-member -func (s *OrganizationsService) RemoveMember(org, user string) (*Response, error) { +// GitHub API docs: https://developer.github.com/v3/orgs/members/#remove-a-member +func (s *OrganizationsService) RemoveMember(ctx context.Context, org, user string) (*Response, error) { u := fmt.Sprintf("orgs/%v/members/%v", org, user) req, err := s.client.NewRequest("DELETE", u, nil) if err != nil { return nil, err } - return s.client.Do(req, nil) + return s.client.Do(ctx, req, nil) } // PublicizeMembership publicizes a user's membership in an organization. (A // user cannot publicize the membership for another user.) 
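The membership checks translate HTTP status into a boolean via parseBoolResponse: a 204 yields `true, nil`, a 404 yields `false, nil`, and anything else surfaces as an error. A sketch (same assumed `client`/`ctx`):

```go
isMember, _, err := client.Organizations.IsMember(ctx, "my-org", "octocat")
if err != nil {
	log.Fatal(err) // a transport failure or unexpected status, not a "no"
}
if isMember {
	fmt.Println("octocat is a member of my-org")
}
```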
// -// GitHub API docs: http://developer.github.com/v3/orgs/members/#publicize-a-users-membership -func (s *OrganizationsService) PublicizeMembership(org, user string) (*Response, error) { +// GitHub API docs: https://developer.github.com/v3/orgs/members/#publicize-a-users-membership +func (s *OrganizationsService) PublicizeMembership(ctx context.Context, org, user string) (*Response, error) { u := fmt.Sprintf("orgs/%v/public_members/%v", org, user) req, err := s.client.NewRequest("PUT", u, nil) if err != nil { return nil, err } - return s.client.Do(req, nil) + return s.client.Do(ctx, req, nil) } // ConcealMembership conceals a user's membership in an organization. // -// GitHub API docs: http://developer.github.com/v3/orgs/members/#conceal-a-users-membership -func (s *OrganizationsService) ConcealMembership(org, user string) (*Response, error) { +// GitHub API docs: https://developer.github.com/v3/orgs/members/#conceal-a-users-membership +func (s *OrganizationsService) ConcealMembership(ctx context.Context, org, user string) (*Response, error) { u := fmt.Sprintf("orgs/%v/public_members/%v", org, user) req, err := s.client.NewRequest("DELETE", u, nil) if err != nil { return nil, err } - return s.client.Do(req, nil) + return s.client.Do(ctx, req, nil) } // ListOrgMembershipsOptions specifies optional parameters to the @@ -182,7 +181,7 @@ type ListOrgMembershipsOptions struct { // ListOrgMemberships lists the organization memberships for the authenticated user. // // GitHub API docs: https://developer.github.com/v3/orgs/members/#list-your-organization-memberships -func (s *OrganizationsService) ListOrgMemberships(opt *ListOrgMembershipsOptions) ([]Membership, *Response, error) { +func (s *OrganizationsService) ListOrgMemberships(ctx context.Context, opt *ListOrgMembershipsOptions) ([]*Membership, *Response, error) { u := "user/memberships/orgs" u, err := addOptions(u, opt) if err != nil { @@ -194,22 +193,23 @@ func (s *OrganizationsService) ListOrgMemberships(opt *ListOrgMembershipsOptions return nil, nil, err } - var memberships []Membership - resp, err := s.client.Do(req, &memberships) + var memberships []*Membership + resp, err := s.client.Do(ctx, req, &memberships) if err != nil { return nil, resp, err } - return memberships, resp, err + return memberships, resp, nil } // GetOrgMembership gets the membership for a user in a specified organization. // Passing an empty string for user will get the membership for the // authenticated user. 
// -// GitHub API docs: https://developer.github.com/v3/orgs/members/#get-organization-membership -// GitHub API docs: https://developer.github.com/v3/orgs/members/#get-your-organization-membership -func (s *OrganizationsService) GetOrgMembership(user, org string) (*Membership, *Response, error) { +// GitHub API docs: +// https://developer.github.com/v3/orgs/members/#get-organization-membership +// https://developer.github.com/v3/orgs/members/#get-your-organization-membership +func (s *OrganizationsService) GetOrgMembership(ctx context.Context, user, org string) (*Membership, *Response, error) { var u string if user != "" { u = fmt.Sprintf("orgs/%v/memberships/%v", org, user) @@ -223,12 +223,12 @@ func (s *OrganizationsService) GetOrgMembership(user, org string) (*Membership, } membership := new(Membership) - resp, err := s.client.Do(req, membership) + resp, err := s.client.Do(ctx, req, membership) if err != nil { return nil, resp, err } - return membership, resp, err + return membership, resp, nil } // EditOrgMembership edits the membership for user in specified organization. @@ -237,7 +237,7 @@ func (s *OrganizationsService) GetOrgMembership(user, org string) (*Membership, // // GitHub API docs: https://developer.github.com/v3/orgs/members/#add-or-update-organization-membership // GitHub API docs: https://developer.github.com/v3/orgs/members/#edit-your-organization-membership -func (s *OrganizationsService) EditOrgMembership(user, org string, membership *Membership) (*Membership, *Response, error) { +func (s *OrganizationsService) EditOrgMembership(ctx context.Context, user, org string, membership *Membership) (*Membership, *Response, error) { var u, method string if user != "" { u = fmt.Sprintf("orgs/%v/memberships/%v", org, user) @@ -253,24 +253,50 @@ func (s *OrganizationsService) EditOrgMembership(user, org string, membership *M } m := new(Membership) - resp, err := s.client.Do(req, m) + resp, err := s.client.Do(ctx, req, m) if err != nil { return nil, resp, err } - return m, resp, err + return m, resp, nil } -// RemoveOrgMembership removes user from the specified organization. If the +// RemoveOrgMembership removes user from the specified organization. If the // user has been invited to the organization, this will cancel their invitation. // // GitHub API docs: https://developer.github.com/v3/orgs/members/#remove-organization-membership -func (s *OrganizationsService) RemoveOrgMembership(user, org string) (*Response, error) { +func (s *OrganizationsService) RemoveOrgMembership(ctx context.Context, user, org string) (*Response, error) { u := fmt.Sprintf("orgs/%v/memberships/%v", org, user) req, err := s.client.NewRequest("DELETE", u, nil) if err != nil { return nil, err } - return s.client.Do(req, nil) + return s.client.Do(ctx, req, nil) +} + +// ListPendingOrgInvitations returns a list of pending invitations. +// +// GitHub API docs: https://developer.github.com/v3/orgs/members/#list-pending-organization-invitations +func (s *OrganizationsService) ListPendingOrgInvitations(ctx context.Context, org int, opt *ListOptions) ([]*Invitation, *Response, error) { + u := fmt.Sprintf("orgs/%v/invitations", org) + u, err := addOptions(u, opt) + if err != nil { + return nil, nil, err + } + + req, err := s.client.NewRequest("GET", u, nil) + if err != nil { + return nil, nil, err + } + + // TODO: remove custom Accept header when this API fully launches. 
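+ // (Preview endpoints such as this invitations API are opted into by
+ // sending GitHub's custom preview media type in the Accept header.)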
+ req.Header.Set("Accept", mediaTypeOrgMembershipPreview) + + var pendingInvitations []*Invitation + resp, err := s.client.Do(ctx, req, &pendingInvitations) + if err != nil { + return nil, resp, err + } + return pendingInvitations, resp, nil } diff --git a/vendor/github.com/google/go-github/github/orgs_outside_collaborators.go b/vendor/github.com/google/go-github/github/orgs_outside_collaborators.go new file mode 100644 index 0000000000..10bc6f0600 --- /dev/null +++ b/vendor/github.com/google/go-github/github/orgs_outside_collaborators.go @@ -0,0 +1,53 @@ +// Copyright 2017 The go-github AUTHORS. All rights reserved. +// +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package github + +import ( + "context" + "fmt" +) + +// ListOutsideCollaboratorsOptions specifies optional parameters to the +// OrganizationsService.ListOutsideCollaborators method. +type ListOutsideCollaboratorsOptions struct { + // Filter outside collaborators returned in the list. Possible values are: + // 2fa_disabled, all. Default is "all". + Filter string `url:"filter,omitempty"` + + ListOptions +} + +// ListOutsideCollaborators lists outside collaborators of organization's repositories. +// This will only work if the authenticated +// user is an owner of the organization. +// +// Warning: The API may change without advance notice during the preview period. +// Preview features are not supported for production use. +// +// GitHub API docs: https://developer.github.com/v3/orgs/outside_collaborators/#list-outside-collaborators +func (s *OrganizationsService) ListOutsideCollaborators(ctx context.Context, org string, opt *ListOutsideCollaboratorsOptions) ([]*User, *Response, error) { + u := fmt.Sprintf("orgs/%v/outside_collaborators", org) + u, err := addOptions(u, opt) + if err != nil { + return nil, nil, err + } + + req, err := s.client.NewRequest("GET", u, nil) + if err != nil { + return nil, nil, err + } + + // TODO: remove custom Accept header when this API fully launches. + req.Header.Set("Accept", mediaTypeOrgMembershipPreview) + + var members []*User + resp, err := s.client.Do(ctx, req, &members) + if err != nil { + return nil, resp, err + } + + return members, resp, nil +} diff --git a/vendor/github.com/google/go-github/github/orgs_teams.go b/vendor/github.com/google/go-github/github/orgs_teams.go index ddcaa24f46..5bdd66dd21 100644 --- a/vendor/github.com/google/go-github/github/orgs_teams.go +++ b/vendor/github.com/google/go-github/github/orgs_teams.go @@ -5,9 +5,13 @@ package github -import "fmt" +import ( + "context" + "fmt" + "time" +) -// Team represents a team within a GitHub organization. Teams are used to +// Team represents a team within a GitHub organization. Teams are used to // manage access to an organization's repositories. type Team struct { ID *int `json:"id,omitempty"` @@ -17,9 +21,9 @@ type Team struct { Slug *string `json:"slug,omitempty"` // Permission is deprecated when creating or editing a team in an org - // using the new GitHub permission model. It no longer identifies the + // using the new GitHub permission model. It no longer identifies the // permission a team has on its repos, but only specifies the default - // permission a repo is initially added with. Avoid confusion by + // permission a repo is initially added with. Avoid confusion by // specifying a permission value when calling AddTeamRepo. 
Permission *string `json:"permission,omitempty"` @@ -41,10 +45,25 @@ func (t Team) String() string { return Stringify(t) } +// Invitation represents a team member's invitation status. +type Invitation struct { + ID *int `json:"id,omitempty"` + Login *string `json:"login,omitempty"` + Email *string `json:"email,omitempty"` + // Role can be one of the values - 'direct_member', 'admin', 'billing_manager', 'hiring_manager', or 'reinstate'. + Role *string `json:"role,omitempty"` + CreatedAt *time.Time `json:"created_at,omitempty"` + Inviter *User `json:"inviter,omitempty"` +} + +func (i Invitation) String() string { + return Stringify(i) +} + // ListTeams lists all of the teams for an organization. // -// GitHub API docs: http://developer.github.com/v3/orgs/teams/#list-teams -func (s *OrganizationsService) ListTeams(org string, opt *ListOptions) ([]Team, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/orgs/teams/#list-teams +func (s *OrganizationsService) ListTeams(ctx context.Context, org string, opt *ListOptions) ([]*Team, *Response, error) { u := fmt.Sprintf("orgs/%v/teams", org) u, err := addOptions(u, opt) if err != nil { @@ -56,19 +75,19 @@ func (s *OrganizationsService) ListTeams(org string, opt *ListOptions) ([]Team, return nil, nil, err } - teams := new([]Team) - resp, err := s.client.Do(req, teams) + var teams []*Team + resp, err := s.client.Do(ctx, req, &teams) if err != nil { return nil, resp, err } - return *teams, resp, err + return teams, resp, nil } // GetTeam fetches a team by ID. // -// GitHub API docs: http://developer.github.com/v3/orgs/teams/#get-team -func (s *OrganizationsService) GetTeam(team int) (*Team, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/orgs/teams/#get-team +func (s *OrganizationsService) GetTeam(ctx context.Context, team int) (*Team, *Response, error) { u := fmt.Sprintf("teams/%v", team) req, err := s.client.NewRequest("GET", u, nil) if err != nil { @@ -76,78 +95,70 @@ func (s *OrganizationsService) GetTeam(team int) (*Team, *Response, error) { } t := new(Team) - resp, err := s.client.Do(req, t) + resp, err := s.client.Do(ctx, req, t) if err != nil { return nil, resp, err } - return t, resp, err + return t, resp, nil } // CreateTeam creates a new team within an organization. // -// GitHub API docs: http://developer.github.com/v3/orgs/teams/#create-team -func (s *OrganizationsService) CreateTeam(org string, team *Team) (*Team, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/orgs/teams/#create-team +func (s *OrganizationsService) CreateTeam(ctx context.Context, org string, team *Team) (*Team, *Response, error) { u := fmt.Sprintf("orgs/%v/teams", org) req, err := s.client.NewRequest("POST", u, team) if err != nil { return nil, nil, err } - if team.Privacy != nil { - req.Header.Set("Accept", mediaTypeOrgPermissionPreview) - } - t := new(Team) - resp, err := s.client.Do(req, t) + resp, err := s.client.Do(ctx, req, t) if err != nil { return nil, resp, err } - return t, resp, err + return t, resp, nil } // EditTeam edits a team. 
// -// GitHub API docs: http://developer.github.com/v3/orgs/teams/#edit-team -func (s *OrganizationsService) EditTeam(id int, team *Team) (*Team, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/orgs/teams/#edit-team +func (s *OrganizationsService) EditTeam(ctx context.Context, id int, team *Team) (*Team, *Response, error) { u := fmt.Sprintf("teams/%v", id) req, err := s.client.NewRequest("PATCH", u, team) if err != nil { return nil, nil, err } - if team.Privacy != nil { - req.Header.Set("Accept", mediaTypeOrgPermissionPreview) - } - t := new(Team) - resp, err := s.client.Do(req, t) + resp, err := s.client.Do(ctx, req, t) if err != nil { return nil, resp, err } - return t, resp, err + return t, resp, nil } // DeleteTeam deletes a team. // -// GitHub API docs: http://developer.github.com/v3/orgs/teams/#delete-team -func (s *OrganizationsService) DeleteTeam(team int) (*Response, error) { +// GitHub API docs: https://developer.github.com/v3/orgs/teams/#delete-team +func (s *OrganizationsService) DeleteTeam(ctx context.Context, team int) (*Response, error) { u := fmt.Sprintf("teams/%v", team) req, err := s.client.NewRequest("DELETE", u, nil) if err != nil { return nil, err } - return s.client.Do(req, nil) + return s.client.Do(ctx, req, nil) } // OrganizationListTeamMembersOptions specifies the optional parameters to the // OrganizationsService.ListTeamMembers method. type OrganizationListTeamMembersOptions struct { - // Role filters members returned by their role in the team. Possible - // values are "all", "member", "maintainer". Default is "all". + // Role filters members returned by their role in the team. Possible + // values are "all", "member", "maintainer". Default is "all". Role string `url:"role,omitempty"` ListOptions @@ -156,8 +167,8 @@ type OrganizationListTeamMembersOptions struct { // ListTeamMembers lists all of the users who are members of the specified // team. // -// GitHub API docs: http://developer.github.com/v3/orgs/teams/#list-team-members -func (s *OrganizationsService) ListTeamMembers(team int, opt *OrganizationListTeamMembersOptions) ([]User, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/orgs/teams/#list-team-members +func (s *OrganizationsService) ListTeamMembers(ctx context.Context, team int, opt *OrganizationListTeamMembersOptions) ([]*User, *Response, error) { u := fmt.Sprintf("teams/%v/members", team) u, err := addOptions(u, opt) if err != nil { @@ -169,38 +180,34 @@ func (s *OrganizationsService) ListTeamMembers(team int, opt *OrganizationListTe return nil, nil, err } - if opt != nil && opt.Role != "" { - req.Header.Set("Accept", mediaTypeOrgPermissionPreview) - } - - members := new([]User) - resp, err := s.client.Do(req, members) + var members []*User + resp, err := s.client.Do(ctx, req, &members) if err != nil { return nil, resp, err } - return *members, resp, err + return members, resp, nil } // IsTeamMember checks if a user is a member of the specified team. 
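Team creation and edits take a *Team whose unset pointer fields are simply omitted from the JSON body. A sketch creating a closed team and then renaming it (assumed `client`/`ctx`; names are placeholders):

```go
team, _, err := client.Organizations.CreateTeam(ctx, "my-org", &github.Team{
	Name:    github.String("release-crew"),
	Privacy: github.String("closed"), // visible to all org members
})
if err != nil {
	log.Fatal(err)
}

team, _, err = client.Organizations.EditTeam(ctx, *team.ID, &github.Team{
	Name: github.String("release-engineering"),
})
if err != nil {
	log.Fatal(err)
}
```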
// -// GitHub API docs: http://developer.github.com/v3/orgs/teams/#get-team-member -func (s *OrganizationsService) IsTeamMember(team int, user string) (bool, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/orgs/teams/#get-team-member +func (s *OrganizationsService) IsTeamMember(ctx context.Context, team int, user string) (bool, *Response, error) { u := fmt.Sprintf("teams/%v/members/%v", team, user) req, err := s.client.NewRequest("GET", u, nil) if err != nil { return false, nil, err } - resp, err := s.client.Do(req, nil) + resp, err := s.client.Do(ctx, req, nil) member, err := parseBoolResponse(err) return member, resp, err } // ListTeamRepos lists the repositories that the specified team has access to. // -// GitHub API docs: http://developer.github.com/v3/orgs/teams/#list-team-repos -func (s *OrganizationsService) ListTeamRepos(team int, opt *ListOptions) ([]Repository, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/orgs/teams/#list-team-repos +func (s *OrganizationsService) ListTeamRepos(ctx context.Context, team int, opt *ListOptions) ([]*Repository, *Response, error) { u := fmt.Sprintf("teams/%v/repos", team) u, err := addOptions(u, opt) if err != nil { @@ -212,36 +219,36 @@ func (s *OrganizationsService) ListTeamRepos(team int, opt *ListOptions) ([]Repo return nil, nil, err } - repos := new([]Repository) - resp, err := s.client.Do(req, repos) + var repos []*Repository + resp, err := s.client.Do(ctx, req, &repos) if err != nil { return nil, resp, err } - return *repos, resp, err + return repos, resp, nil } -// IsTeamRepo checks if a team manages the specified repository. If the +// IsTeamRepo checks if a team manages the specified repository. If the // repository is managed by team, a Repository is returned which includes the // permissions team has for that repo. // -// GitHub API docs: http://developer.github.com/v3/orgs/teams/#get-team-repo -func (s *OrganizationsService) IsTeamRepo(team int, owner string, repo string) (*Repository, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/orgs/teams/#check-if-a-team-manages-a-repository +func (s *OrganizationsService) IsTeamRepo(ctx context.Context, team int, owner string, repo string) (*Repository, *Response, error) { u := fmt.Sprintf("teams/%v/repos/%v/%v", team, owner, repo) req, err := s.client.NewRequest("GET", u, nil) if err != nil { return nil, nil, err } - req.Header.Set("Accept", mediaTypeOrgPermissionRepoPreview) + req.Header.Set("Accept", mediaTypeOrgPermissionRepo) repository := new(Repository) - resp, err := s.client.Do(req, repository) + resp, err := s.client.Do(ctx, req, repository) if err != nil { return nil, resp, err } - return repository, resp, err + return repository, resp, nil } // OrganizationAddTeamRepoOptions specifies the optional parameters to the @@ -257,43 +264,39 @@ type OrganizationAddTeamRepoOptions struct { Permission string `json:"permission,omitempty"` } -// AddTeamRepo adds a repository to be managed by the specified team. The +// AddTeamRepo adds a repository to be managed by the specified team. The // specified repository must be owned by the organization to which the team // belongs, or a direct fork of a repository owned by the organization. 
//
-// GitHub API docs: http://developer.github.com/v3/orgs/teams/#add-team-repo
-func (s *OrganizationsService) AddTeamRepo(team int, owner string, repo string, opt *OrganizationAddTeamRepoOptions) (*Response, error) {
+// GitHub API docs: https://developer.github.com/v3/orgs/teams/#add-team-repo
+func (s *OrganizationsService) AddTeamRepo(ctx context.Context, team int, owner string, repo string, opt *OrganizationAddTeamRepoOptions) (*Response, error) {
 u := fmt.Sprintf("teams/%v/repos/%v/%v", team, owner, repo)
 req, err := s.client.NewRequest("PUT", u, opt)
 if err != nil {
 return nil, err
 }
- if opt != nil {
- req.Header.Set("Accept", mediaTypeOrgPermissionPreview)
- }
-
- return s.client.Do(req, nil)
+ return s.client.Do(ctx, req, nil)
 }
 // RemoveTeamRepo removes a repository from being managed by the specified
-// team. Note that this does not delete the repository, it just removes it
+// team. Note that this does not delete the repository, it just removes it
 // from the team.
 //
-// GitHub API docs: http://developer.github.com/v3/orgs/teams/#remove-team-repo
-func (s *OrganizationsService) RemoveTeamRepo(team int, owner string, repo string) (*Response, error) {
+// GitHub API docs: https://developer.github.com/v3/orgs/teams/#remove-team-repo
+func (s *OrganizationsService) RemoveTeamRepo(ctx context.Context, team int, owner string, repo string) (*Response, error) {
 u := fmt.Sprintf("teams/%v/repos/%v/%v", team, owner, repo)
 req, err := s.client.NewRequest("DELETE", u, nil)
 if err != nil {
 return nil, err
 }
- return s.client.Do(req, nil)
+ return s.client.Do(ctx, req, nil)
 }
 // ListUserTeams lists a user's teams.
 // GitHub API docs: https://developer.github.com/v3/orgs/teams/#list-user-teams
-func (s *OrganizationsService) ListUserTeams(opt *ListOptions) ([]Team, *Response, error) {
+func (s *OrganizationsService) ListUserTeams(ctx context.Context, opt *ListOptions) ([]*Team, *Response, error) {
 u := "user/teams"
 u, err := addOptions(u, opt)
 if err != nil {
@@ -305,19 +308,19 @@ func (s *OrganizationsService) ListUserTeams(opt *ListOptions) ([]Team, *Respons
 return nil, nil, err
 }
- teams := new([]Team)
- resp, err := s.client.Do(req, teams)
+ var teams []*Team
+ resp, err := s.client.Do(ctx, req, &teams)
 if err != nil {
 return nil, resp, err
 }
- return *teams, resp, err
+ return teams, resp, nil
 }
 // GetTeamMembership returns the membership status for a user in a team.
 //
 // GitHub API docs: https://developer.github.com/v3/orgs/teams/#get-team-membership
-func (s *OrganizationsService) GetTeamMembership(team int, user string) (*Membership, *Response, error) {
+func (s *OrganizationsService) GetTeamMembership(ctx context.Context, team int, user string) (*Membership, *Response, error) {
 u := fmt.Sprintf("teams/%v/memberships/%v", team, user)
 req, err := s.client.NewRequest("GET", u, nil)
 if err != nil {
@@ -325,18 +328,18 @@ func (s *OrganizationsService) GetTeamMembership(team int, user string) (*Member
 }
 t := new(Membership)
- resp, err := s.client.Do(req, t)
+ resp, err := s.client.Do(ctx, req, t)
 if err != nil {
 return nil, resp, err
 }
- return t, resp, err
+ return t, resp, nil
 }
 // OrganizationAddTeamMembershipOptions specifies the optional
 // parameters to the OrganizationsService.AddTeamMembership method.
 type OrganizationAddTeamMembershipOptions struct {
- // Role specifies the role the user should have in the team. Possible
+ // Role specifies the role the user should have in the team. Possible
 // values are:
 // member - a normal member of the team
 // maintainer - a team maintainer. Able to add/remove other team
 // members, promote other team members to team
 // maintainer, and edit the team's name and description
@@ -365,35 +368,60 @@ type OrganizationAddTeamMembershipOptions struct {
 // added as a member of the team.
 //
 // GitHub API docs: https://developer.github.com/v3/orgs/teams/#add-team-membership
-func (s *OrganizationsService) AddTeamMembership(team int, user string, opt *OrganizationAddTeamMembershipOptions) (*Membership, *Response, error) {
+func (s *OrganizationsService) AddTeamMembership(ctx context.Context, team int, user string, opt *OrganizationAddTeamMembershipOptions) (*Membership, *Response, error) {
 u := fmt.Sprintf("teams/%v/memberships/%v", team, user)
 req, err := s.client.NewRequest("PUT", u, opt)
 if err != nil {
 return nil, nil, err
 }
- if opt != nil {
- req.Header.Set("Accept", mediaTypeOrgPermissionPreview)
- }
-
 t := new(Membership)
- resp, err := s.client.Do(req, t)
+ resp, err := s.client.Do(ctx, req, t)
 if err != nil {
 return nil, resp, err
 }
- return t, resp, err
+ return t, resp, nil
 }
 // RemoveTeamMembership removes a user from a team.
 //
 // GitHub API docs: https://developer.github.com/v3/orgs/teams/#remove-team-membership
-func (s *OrganizationsService) RemoveTeamMembership(team int, user string) (*Response, error) {
+func (s *OrganizationsService) RemoveTeamMembership(ctx context.Context, team int, user string) (*Response, error) {
 u := fmt.Sprintf("teams/%v/memberships/%v", team, user)
 req, err := s.client.NewRequest("DELETE", u, nil)
 if err != nil {
 return nil, err
 }
- return s.client.Do(req, nil)
+ return s.client.Do(ctx, req, nil)
+}
+
+// ListPendingTeamInvitations gets the pending invitation list for a team.
+// Warning: The API may change without advance notice during the preview period.
+// Preview features are not supported for production use.
+//
+// GitHub API docs: https://developer.github.com/v3/orgs/teams/#list-pending-team-invitations
+func (s *OrganizationsService) ListPendingTeamInvitations(ctx context.Context, team int, opt *ListOptions) ([]*Invitation, *Response, error) {
+ u := fmt.Sprintf("teams/%v/invitations", team)
+ u, err := addOptions(u, opt)
+ if err != nil {
+ return nil, nil, err
+ }
+
+ req, err := s.client.NewRequest("GET", u, nil)
+ if err != nil {
+ return nil, nil, err
+ }
+
+ // TODO: remove custom Accept header when this API fully launches.
+ req.Header.Set("Accept", mediaTypeOrgMembershipPreview)
+
+ var pendingInvitations []*Invitation
+ resp, err := s.client.Do(ctx, req, &pendingInvitations)
+ if err != nil {
+ return nil, resp, err
+ }
+
+ return pendingInvitations, resp, nil
}
diff --git a/vendor/github.com/google/go-github/github/projects.go b/vendor/github.com/google/go-github/github/projects.go
new file mode 100644
index 0000000000..58b638eb8c
--- /dev/null
+++ b/vendor/github.com/google/go-github/github/projects.go
@@ -0,0 +1,420 @@
+// Copyright 2016 The go-github AUTHORS. All rights reserved.
+//
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package github
+
+import (
+ "context"
+ "fmt"
+)
+
+// ProjectsService provides access to the projects functions in the
+// GitHub API.
+//
+// GitHub API docs: https://developer.github.com/v3/projects/
+type ProjectsService service
+
+// Project represents a GitHub Project.
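Returning briefly to the team-membership helpers above, a hedged sketch of inviting a user with a non-default role (assumed `client`/`ctx`; the team ID and login are placeholders):

```go
m, _, err := client.Organizations.AddTeamMembership(ctx, 42, "octocat",
	&github.OrganizationAddTeamMembershipOptions{Role: "maintainer"})
if err != nil {
	log.Fatal(err)
}
// State is typically "pending" until the invited user accepts.
fmt.Println(*m.Role, *m.State)
```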
+type Project struct { + ID *int `json:"id,omitempty"` + URL *string `json:"url,omitempty"` + OwnerURL *string `json:"owner_url,omitempty"` + Name *string `json:"name,omitempty"` + Body *string `json:"body,omitempty"` + Number *int `json:"number,omitempty"` + CreatedAt *Timestamp `json:"created_at,omitempty"` + UpdatedAt *Timestamp `json:"updated_at,omitempty"` + + // The User object that generated the project. + Creator *User `json:"creator,omitempty"` +} + +func (p Project) String() string { + return Stringify(p) +} + +// GetProject gets a GitHub Project for a repo. +// +// GitHub API docs: https://developer.github.com/v3/projects/#get-a-project +func (s *ProjectsService) GetProject(ctx context.Context, id int) (*Project, *Response, error) { + u := fmt.Sprintf("projects/%v", id) + req, err := s.client.NewRequest("GET", u, nil) + if err != nil { + return nil, nil, err + } + + // TODO: remove custom Accept header when this API fully launches. + req.Header.Set("Accept", mediaTypeProjectsPreview) + + project := &Project{} + resp, err := s.client.Do(ctx, req, project) + if err != nil { + return nil, resp, err + } + + return project, resp, nil +} + +// ProjectOptions specifies the parameters to the +// RepositoriesService.CreateProject and +// ProjectsService.UpdateProject methods. +type ProjectOptions struct { + // The name of the project. (Required for creation; optional for update.) + Name string `json:"name,omitempty"` + // The body of the project. (Optional.) + Body string `json:"body,omitempty"` +} + +// UpdateProject updates a repository project. +// +// GitHub API docs: https://developer.github.com/v3/projects/#update-a-project +func (s *ProjectsService) UpdateProject(ctx context.Context, id int, opt *ProjectOptions) (*Project, *Response, error) { + u := fmt.Sprintf("projects/%v", id) + req, err := s.client.NewRequest("PATCH", u, opt) + if err != nil { + return nil, nil, err + } + + // TODO: remove custom Accept header when this API fully launches. + req.Header.Set("Accept", mediaTypeProjectsPreview) + + project := &Project{} + resp, err := s.client.Do(ctx, req, project) + if err != nil { + return nil, resp, err + } + + return project, resp, nil +} + +// DeleteProject deletes a GitHub Project from a repository. +// +// GitHub API docs: https://developer.github.com/v3/projects/#delete-a-project +func (s *ProjectsService) DeleteProject(ctx context.Context, id int) (*Response, error) { + u := fmt.Sprintf("projects/%v", id) + req, err := s.client.NewRequest("DELETE", u, nil) + if err != nil { + return nil, err + } + + // TODO: remove custom Accept header when this API fully launches. + req.Header.Set("Accept", mediaTypeProjectsPreview) + + return s.client.Do(ctx, req, nil) +} + +// ProjectColumn represents a column of a GitHub Project. +// +// GitHub API docs: https://developer.github.com/v3/repos/projects/ +type ProjectColumn struct { + ID *int `json:"id,omitempty"` + Name *string `json:"name,omitempty"` + ProjectURL *string `json:"project_url,omitempty"` + CreatedAt *Timestamp `json:"created_at,omitempty"` + UpdatedAt *Timestamp `json:"updated_at,omitempty"` +} + +// ListProjectColumns lists the columns of a GitHub Project for a repo. 
+// +// GitHub API docs: https://developer.github.com/v3/projects/columns/#list-project-columns +func (s *ProjectsService) ListProjectColumns(ctx context.Context, projectID int, opt *ListOptions) ([]*ProjectColumn, *Response, error) { + u := fmt.Sprintf("projects/%v/columns", projectID) + u, err := addOptions(u, opt) + if err != nil { + return nil, nil, err + } + + req, err := s.client.NewRequest("GET", u, nil) + if err != nil { + return nil, nil, err + } + + // TODO: remove custom Accept header when this API fully launches. + req.Header.Set("Accept", mediaTypeProjectsPreview) + + columns := []*ProjectColumn{} + resp, err := s.client.Do(ctx, req, &columns) + if err != nil { + return nil, resp, err + } + + return columns, resp, nil +} + +// GetProjectColumn gets a column of a GitHub Project for a repo. +// +// GitHub API docs: https://developer.github.com/v3/projects/columns/#get-a-project-column +func (s *ProjectsService) GetProjectColumn(ctx context.Context, id int) (*ProjectColumn, *Response, error) { + u := fmt.Sprintf("projects/columns/%v", id) + req, err := s.client.NewRequest("GET", u, nil) + if err != nil { + return nil, nil, err + } + + // TODO: remove custom Accept header when this API fully launches. + req.Header.Set("Accept", mediaTypeProjectsPreview) + + column := &ProjectColumn{} + resp, err := s.client.Do(ctx, req, column) + if err != nil { + return nil, resp, err + } + + return column, resp, nil +} + +// ProjectColumnOptions specifies the parameters to the +// ProjectsService.CreateProjectColumn and +// ProjectsService.UpdateProjectColumn methods. +type ProjectColumnOptions struct { + // The name of the project column. (Required for creation and update.) + Name string `json:"name"` +} + +// CreateProjectColumn creates a column for the specified (by number) project. +// +// GitHub API docs: https://developer.github.com/v3/projects/columns/#create-a-project-column +func (s *ProjectsService) CreateProjectColumn(ctx context.Context, projectID int, opt *ProjectColumnOptions) (*ProjectColumn, *Response, error) { + u := fmt.Sprintf("projects/%v/columns", projectID) + req, err := s.client.NewRequest("POST", u, opt) + if err != nil { + return nil, nil, err + } + + // TODO: remove custom Accept header when this API fully launches. + req.Header.Set("Accept", mediaTypeProjectsPreview) + + column := &ProjectColumn{} + resp, err := s.client.Do(ctx, req, column) + if err != nil { + return nil, resp, err + } + + return column, resp, nil +} + +// UpdateProjectColumn updates a column of a GitHub Project. +// +// GitHub API docs: https://developer.github.com/v3/projects/columns/#update-a-project-column +func (s *ProjectsService) UpdateProjectColumn(ctx context.Context, columnID int, opt *ProjectColumnOptions) (*ProjectColumn, *Response, error) { + u := fmt.Sprintf("projects/columns/%v", columnID) + req, err := s.client.NewRequest("PATCH", u, opt) + if err != nil { + return nil, nil, err + } + + // TODO: remove custom Accept header when this API fully launches. + req.Header.Set("Accept", mediaTypeProjectsPreview) + + column := &ProjectColumn{} + resp, err := s.client.Do(ctx, req, column) + if err != nil { + return nil, resp, err + } + + return column, resp, nil +} + +// DeleteProjectColumn deletes a column from a GitHub Project. 
+//
+// GitHub API docs: https://developer.github.com/v3/projects/columns/#delete-a-project-column
+func (s *ProjectsService) DeleteProjectColumn(ctx context.Context, columnID int) (*Response, error) {
+ u := fmt.Sprintf("projects/columns/%v", columnID)
+ req, err := s.client.NewRequest("DELETE", u, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ // TODO: remove custom Accept header when this API fully launches.
+ req.Header.Set("Accept", mediaTypeProjectsPreview)
+
+ return s.client.Do(ctx, req, nil)
+}
+
+// ProjectColumnMoveOptions specifies the parameters to the
+// ProjectsService.MoveProjectColumn method.
+type ProjectColumnMoveOptions struct {
+ // Position can be one of "first", "last", or "after:<column-id>", where
+ // <column-id> is the ID of a column in the same project. (Required.)
+ Position string `json:"position"`
+}
+
+// MoveProjectColumn moves a column within a GitHub Project.
+//
+// GitHub API docs: https://developer.github.com/v3/projects/columns/#move-a-project-column
+func (s *ProjectsService) MoveProjectColumn(ctx context.Context, columnID int, opt *ProjectColumnMoveOptions) (*Response, error) {
+ u := fmt.Sprintf("projects/columns/%v/moves", columnID)
+ req, err := s.client.NewRequest("POST", u, opt)
+ if err != nil {
+ return nil, err
+ }
+
+ // TODO: remove custom Accept header when this API fully launches.
+ req.Header.Set("Accept", mediaTypeProjectsPreview)
+
+ return s.client.Do(ctx, req, nil)
+}
+
+// ProjectCard represents a card in a column of a GitHub Project.
+//
+// GitHub API docs: https://developer.github.com/v3/repos/projects/
+type ProjectCard struct {
+ ColumnURL *string `json:"column_url,omitempty"`
+ ContentURL *string `json:"content_url,omitempty"`
+ ID *int `json:"id,omitempty"`
+ Note *string `json:"note,omitempty"`
+ CreatedAt *Timestamp `json:"created_at,omitempty"`
+ UpdatedAt *Timestamp `json:"updated_at,omitempty"`
+}
+
+// ListProjectCards lists the cards in a column of a GitHub Project.
+//
+// GitHub API docs: https://developer.github.com/v3/projects/cards/#list-project-cards
+func (s *ProjectsService) ListProjectCards(ctx context.Context, columnID int, opt *ListOptions) ([]*ProjectCard, *Response, error) {
+ u := fmt.Sprintf("projects/columns/%v/cards", columnID)
+ u, err := addOptions(u, opt)
+ if err != nil {
+ return nil, nil, err
+ }
+
+ req, err := s.client.NewRequest("GET", u, nil)
+ if err != nil {
+ return nil, nil, err
+ }
+
+ // TODO: remove custom Accept header when this API fully launches.
+ req.Header.Set("Accept", mediaTypeProjectsPreview)
+
+ cards := []*ProjectCard{}
+ resp, err := s.client.Do(ctx, req, &cards)
+ if err != nil {
+ return nil, resp, err
+ }
+
+ return cards, resp, nil
+}
+
+// GetProjectCard gets a card in a column of a GitHub Project.
+//
+// GitHub API docs: https://developer.github.com/v3/projects/cards/#get-a-project-card
+func (s *ProjectsService) GetProjectCard(ctx context.Context, columnID int) (*ProjectCard, *Response, error) {
+ u := fmt.Sprintf("projects/columns/cards/%v", columnID)
+ req, err := s.client.NewRequest("GET", u, nil)
+ if err != nil {
+ return nil, nil, err
+ }
+
+ // TODO: remove custom Accept header when this API fully launches.
+ req.Header.Set("Accept", mediaTypeProjectsPreview)
+
+ card := &ProjectCard{}
+ resp, err := s.client.Do(ctx, req, card)
+ if err != nil {
+ return nil, resp, err
+ }
+
+ return card, resp, nil
+}
+
+// ProjectCardOptions specifies the parameters to the
+// ProjectsService.CreateProjectCard and
+// ProjectsService.UpdateProjectCard methods.
+type ProjectCardOptions struct {
+ // The note of the card. Note and ContentID are mutually exclusive.
+ Note string `json:"note,omitempty"`
+ // The ID (not Number) of the Issue or Pull Request to associate with this card.
+ // Note and ContentID are mutually exclusive.
+ ContentID int `json:"content_id,omitempty"`
+ // The type of content to associate with this card. Possible values are: "Issue", "PullRequest".
+ ContentType string `json:"content_type,omitempty"`
+}
+
+// CreateProjectCard creates a card in the specified column of a GitHub Project.
+//
+// GitHub API docs: https://developer.github.com/v3/projects/cards/#create-a-project-card
+func (s *ProjectsService) CreateProjectCard(ctx context.Context, columnID int, opt *ProjectCardOptions) (*ProjectCard, *Response, error) {
+ u := fmt.Sprintf("projects/columns/%v/cards", columnID)
+ req, err := s.client.NewRequest("POST", u, opt)
+ if err != nil {
+ return nil, nil, err
+ }
+
+ // TODO: remove custom Accept header when this API fully launches.
+ req.Header.Set("Accept", mediaTypeProjectsPreview)
+
+ card := &ProjectCard{}
+ resp, err := s.client.Do(ctx, req, card)
+ if err != nil {
+ return nil, resp, err
+ }
+
+ return card, resp, nil
+}
+
+// UpdateProjectCard updates a card of a GitHub Project.
+//
+// GitHub API docs: https://developer.github.com/v3/projects/cards/#update-a-project-card
+func (s *ProjectsService) UpdateProjectCard(ctx context.Context, cardID int, opt *ProjectCardOptions) (*ProjectCard, *Response, error) {
+ u := fmt.Sprintf("projects/columns/cards/%v", cardID)
+ req, err := s.client.NewRequest("PATCH", u, opt)
+ if err != nil {
+ return nil, nil, err
+ }
+
+ // TODO: remove custom Accept header when this API fully launches.
+ req.Header.Set("Accept", mediaTypeProjectsPreview)
+
+ card := &ProjectCard{}
+ resp, err := s.client.Do(ctx, req, card)
+ if err != nil {
+ return nil, resp, err
+ }
+
+ return card, resp, nil
+}
+
+// DeleteProjectCard deletes a card from a GitHub Project.
+//
+// GitHub API docs: https://developer.github.com/v3/projects/cards/#delete-a-project-card
+func (s *ProjectsService) DeleteProjectCard(ctx context.Context, cardID int) (*Response, error) {
+ u := fmt.Sprintf("projects/columns/cards/%v", cardID)
+ req, err := s.client.NewRequest("DELETE", u, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ // TODO: remove custom Accept header when this API fully launches.
+ req.Header.Set("Accept", mediaTypeProjectsPreview)
+
+ return s.client.Do(ctx, req, nil)
+}
+
+// ProjectCardMoveOptions specifies the parameters to the
+// ProjectsService.MoveProjectCard method.
+type ProjectCardMoveOptions struct {
+ // Position can be one of "top", "bottom", or "after:<card-id>", where
+ // <card-id> is the ID of a card in the same project.
+ Position string `json:"position"`
+ // ColumnID is the ID of a column in the same project. Note that ColumnID
+ // is required when using Position "after:<card-id>" when that card is in
+ // another column; otherwise it is optional.
+ ColumnID int `json:"column_id,omitempty"`
+}
+
+// MoveProjectCard moves a card within a GitHub Project.
+//
+// GitHub API docs: https://developer.github.com/v3/projects/cards/#move-a-project-card
+func (s *ProjectsService) MoveProjectCard(ctx context.Context, cardID int, opt *ProjectCardMoveOptions) (*Response, error) {
+ u := fmt.Sprintf("projects/columns/cards/%v/moves", cardID)
+ req, err := s.client.NewRequest("POST", u, opt)
+ if err != nil {
+ return nil, err
+ }
+
+ // TODO: remove custom Accept header when this API fully launches.
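+ // A valid move request might be &ProjectCardMoveOptions{Position: "after:123",
+ // ColumnID: 456}, which places the card directly below card 123 in column 456.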
+ req.Header.Set("Accept", mediaTypeProjectsPreview) + + return s.client.Do(ctx, req, nil) +} diff --git a/vendor/github.com/google/go-github/github/pulls.go b/vendor/github.com/google/go-github/github/pulls.go index 71cf2e2484..38b90f8796 100644 --- a/vendor/github.com/google/go-github/github/pulls.go +++ b/vendor/github.com/google/go-github/github/pulls.go @@ -6,6 +6,8 @@ package github import ( + "bytes" + "context" "fmt" "time" ) @@ -13,36 +15,40 @@ import ( // PullRequestsService handles communication with the pull request related // methods of the GitHub API. // -// GitHub API docs: http://developer.github.com/v3/pulls/ -type PullRequestsService struct { - client *Client -} +// GitHub API docs: https://developer.github.com/v3/pulls/ +type PullRequestsService service // PullRequest represents a GitHub pull request on a repository. type PullRequest struct { - Number *int `json:"number,omitempty"` - State *string `json:"state,omitempty"` - Title *string `json:"title,omitempty"` - Body *string `json:"body,omitempty"` - CreatedAt *time.Time `json:"created_at,omitempty"` - UpdatedAt *time.Time `json:"updated_at,omitempty"` - ClosedAt *time.Time `json:"closed_at,omitempty"` - MergedAt *time.Time `json:"merged_at,omitempty"` - User *User `json:"user,omitempty"` - Merged *bool `json:"merged,omitempty"` - Mergeable *bool `json:"mergeable,omitempty"` - MergedBy *User `json:"merged_by,omitempty"` - Comments *int `json:"comments,omitempty"` - Commits *int `json:"commits,omitempty"` - Additions *int `json:"additions,omitempty"` - Deletions *int `json:"deletions,omitempty"` - ChangedFiles *int `json:"changed_files,omitempty"` - URL *string `json:"url,omitempty"` - HTMLURL *string `json:"html_url,omitempty"` - IssueURL *string `json:"issue_url,omitempty"` - StatusesURL *string `json:"statuses_url,omitempty"` - DiffURL *string `json:"diff_url,omitempty"` - PatchURL *string `json:"patch_url,omitempty"` + ID *int `json:"id,omitempty"` + Number *int `json:"number,omitempty"` + State *string `json:"state,omitempty"` + Title *string `json:"title,omitempty"` + Body *string `json:"body,omitempty"` + CreatedAt *time.Time `json:"created_at,omitempty"` + UpdatedAt *time.Time `json:"updated_at,omitempty"` + ClosedAt *time.Time `json:"closed_at,omitempty"` + MergedAt *time.Time `json:"merged_at,omitempty"` + User *User `json:"user,omitempty"` + Merged *bool `json:"merged,omitempty"` + Mergeable *bool `json:"mergeable,omitempty"` + MergedBy *User `json:"merged_by,omitempty"` + Comments *int `json:"comments,omitempty"` + Commits *int `json:"commits,omitempty"` + Additions *int `json:"additions,omitempty"` + Deletions *int `json:"deletions,omitempty"` + ChangedFiles *int `json:"changed_files,omitempty"` + URL *string `json:"url,omitempty"` + HTMLURL *string `json:"html_url,omitempty"` + IssueURL *string `json:"issue_url,omitempty"` + StatusesURL *string `json:"statuses_url,omitempty"` + DiffURL *string `json:"diff_url,omitempty"` + PatchURL *string `json:"patch_url,omitempty"` + ReviewCommentsURL *string `json:"review_comments_url,omitempty"` + ReviewCommentURL *string `json:"review_comment_url,omitempty"` + Assignee *User `json:"assignee,omitempty"` + Assignees []*User `json:"assignees,omitempty"` + Milestone *Milestone `json:"milestone,omitempty"` Head *PullRequestBranch `json:"head,omitempty"` Base *PullRequestBranch `json:"base,omitempty"` @@ -64,8 +70,8 @@ type PullRequestBranch struct { // PullRequestListOptions specifies the optional parameters to the // PullRequestsService.List method. 
type PullRequestListOptions struct { - // State filters pull requests based on their state. Possible values are: - // open, closed. Default is "open". + // State filters pull requests based on their state. Possible values are: + // open, closed. Default is "open". State string `url:"state,omitempty"` // Head filters pull requests by head user and branch name in the format of: @@ -89,8 +95,8 @@ type PullRequestListOptions struct { // List the pull requests for the specified repository. // -// GitHub API docs: http://developer.github.com/v3/pulls/#list-pull-requests -func (s *PullRequestsService) List(owner string, repo string, opt *PullRequestListOptions) ([]PullRequest, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/pulls/#list-pull-requests +func (s *PullRequestsService) List(ctx context.Context, owner string, repo string, opt *PullRequestListOptions) ([]*PullRequest, *Response, error) { u := fmt.Sprintf("repos/%v/%v/pulls", owner, repo) u, err := addOptions(u, opt) if err != nil { @@ -102,19 +108,19 @@ func (s *PullRequestsService) List(owner string, repo string, opt *PullRequestLi return nil, nil, err } - pulls := new([]PullRequest) - resp, err := s.client.Do(req, pulls) + var pulls []*PullRequest + resp, err := s.client.Do(ctx, req, &pulls) if err != nil { return nil, resp, err } - return *pulls, resp, err + return pulls, resp, nil } // Get a single pull request. // // GitHub API docs: https://developer.github.com/v3/pulls/#get-a-single-pull-request -func (s *PullRequestsService) Get(owner string, repo string, number int) (*PullRequest, *Response, error) { +func (s *PullRequestsService) Get(ctx context.Context, owner string, repo string, number int) (*PullRequest, *Response, error) { u := fmt.Sprintf("repos/%v/%v/pulls/%d", owner, repo, number) req, err := s.client.NewRequest("GET", u, nil) if err != nil { @@ -122,12 +128,38 @@ func (s *PullRequestsService) Get(owner string, repo string, number int) (*PullR } pull := new(PullRequest) - resp, err := s.client.Do(req, pull) + resp, err := s.client.Do(ctx, req, pull) if err != nil { return nil, resp, err } - return pull, resp, err + return pull, resp, nil +} + +// GetRaw gets raw (diff or patch) format of a pull request. +func (s *PullRequestsService) GetRaw(ctx context.Context, owner string, repo string, number int, opt RawOptions) (string, *Response, error) { + u := fmt.Sprintf("repos/%v/%v/pulls/%d", owner, repo, number) + req, err := s.client.NewRequest("GET", u, nil) + if err != nil { + return "", nil, err + } + + switch opt.Type { + case Diff: + req.Header.Set("Accept", mediaTypeV3Diff) + case Patch: + req.Header.Set("Accept", mediaTypeV3Patch) + default: + return "", nil, fmt.Errorf("unsupported raw type %d", opt.Type) + } + + ret := new(bytes.Buffer) + resp, err := s.client.Do(ctx, req, ret) + if err != nil { + return "", resp, err + } + + return ret.String(), resp, nil } // NewPullRequest represents a new pull request to be created. @@ -142,7 +174,7 @@ type NewPullRequest struct { // Create a new pull request on the specified repository. 
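GetRaw above picks the response format from RawOptions rather than decoding JSON. A sketch fetching a unified diff (assumed `client`/`ctx`; owner, repo, and PR number are placeholders):

```go
raw, _, err := client.PullRequests.GetRaw(ctx, "my-org", "my-repo", 42,
	github.RawOptions{Type: github.Diff})
if err != nil {
	log.Fatal(err)
}
fmt.Print(raw) // the pull request as a unified diff; github.Patch selects patch format
```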
// // GitHub API docs: https://developer.github.com/v3/pulls/#create-a-pull-request -func (s *PullRequestsService) Create(owner string, repo string, pull *NewPullRequest) (*PullRequest, *Response, error) { +func (s *PullRequestsService) Create(ctx context.Context, owner string, repo string, pull *NewPullRequest) (*PullRequest, *Response, error) { u := fmt.Sprintf("repos/%v/%v/pulls", owner, repo) req, err := s.client.NewRequest("POST", u, pull) if err != nil { @@ -150,37 +182,62 @@ func (s *PullRequestsService) Create(owner string, repo string, pull *NewPullReq } p := new(PullRequest) - resp, err := s.client.Do(req, p) + resp, err := s.client.Do(ctx, req, p) if err != nil { return nil, resp, err } - return p, resp, err + return p, resp, nil +} + +type pullRequestUpdate struct { + Title *string `json:"title,omitempty"` + Body *string `json:"body,omitempty"` + State *string `json:"state,omitempty"` + Base *string `json:"base,omitempty"` } // Edit a pull request. +// pull must not be nil. +// +// The following fields are editable: Title, Body, State, and Base.Ref. +// Base.Ref updates the base branch of the pull request. // // GitHub API docs: https://developer.github.com/v3/pulls/#update-a-pull-request -func (s *PullRequestsService) Edit(owner string, repo string, number int, pull *PullRequest) (*PullRequest, *Response, error) { +func (s *PullRequestsService) Edit(ctx context.Context, owner string, repo string, number int, pull *PullRequest) (*PullRequest, *Response, error) { + if pull == nil { + return nil, nil, fmt.Errorf("pull must be provided") + } + u := fmt.Sprintf("repos/%v/%v/pulls/%d", owner, repo, number) - req, err := s.client.NewRequest("PATCH", u, pull) + + update := &pullRequestUpdate{ + Title: pull.Title, + Body: pull.Body, + State: pull.State, + } + if pull.Base != nil { + update.Base = pull.Base.Ref + } + + req, err := s.client.NewRequest("PATCH", u, update) if err != nil { return nil, nil, err } p := new(PullRequest) - resp, err := s.client.Do(req, p) + resp, err := s.client.Do(ctx, req, p) if err != nil { return nil, resp, err } - return p, resp, err + return p, resp, nil } // ListCommits lists the commits in a pull request. // // GitHub API docs: https://developer.github.com/v3/pulls/#list-commits-on-a-pull-request -func (s *PullRequestsService) ListCommits(owner string, repo string, number int, opt *ListOptions) ([]RepositoryCommit, *Response, error) { +func (s *PullRequestsService) ListCommits(ctx context.Context, owner string, repo string, number int, opt *ListOptions) ([]*RepositoryCommit, *Response, error) { u := fmt.Sprintf("repos/%v/%v/pulls/%d/commits", owner, repo, number) u, err := addOptions(u, opt) if err != nil { @@ -192,19 +249,19 @@ func (s *PullRequestsService) ListCommits(owner string, repo string, number int, return nil, nil, err } - commits := new([]RepositoryCommit) - resp, err := s.client.Do(req, commits) + var commits []*RepositoryCommit + resp, err := s.client.Do(ctx, req, &commits) if err != nil { return nil, resp, err } - return *commits, resp, err + return commits, resp, nil } // ListFiles lists the files in a pull request. 
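Edit now whitelists what it sends: per pullRequestUpdate above, only Title, Body, State, and Base.Ref reach the PATCH body. A sketch retargeting a pull request's base branch (assumed `client`/`ctx`; names are placeholders):

```go
pr, _, err := client.PullRequests.Edit(ctx, "my-org", "my-repo", 42, &github.PullRequest{
	Base: &github.PullRequestBranch{Ref: github.String("release-1.0")},
})
if err != nil {
	log.Fatal(err)
}
fmt.Println(*pr.Base.Ref)
```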
// // GitHub API docs: https://developer.github.com/v3/pulls/#list-pull-requests-files -func (s *PullRequestsService) ListFiles(owner string, repo string, number int, opt *ListOptions) ([]CommitFile, *Response, error) { +func (s *PullRequestsService) ListFiles(ctx context.Context, owner string, repo string, number int, opt *ListOptions) ([]*CommitFile, *Response, error) { u := fmt.Sprintf("repos/%v/%v/pulls/%d/files", owner, repo, number) u, err := addOptions(u, opt) if err != nil { @@ -216,26 +273,26 @@ func (s *PullRequestsService) ListFiles(owner string, repo string, number int, o return nil, nil, err } - commitFiles := new([]CommitFile) - resp, err := s.client.Do(req, commitFiles) + var commitFiles []*CommitFile + resp, err := s.client.Do(ctx, req, &commitFiles) if err != nil { return nil, resp, err } - return *commitFiles, resp, err + return commitFiles, resp, nil } // IsMerged checks if a pull request has been merged. // // GitHub API docs: https://developer.github.com/v3/pulls/#get-if-a-pull-request-has-been-merged -func (s *PullRequestsService) IsMerged(owner string, repo string, number int) (bool, *Response, error) { +func (s *PullRequestsService) IsMerged(ctx context.Context, owner string, repo string, number int) (bool, *Response, error) { u := fmt.Sprintf("repos/%v/%v/pulls/%d/merge", owner, repo, number) req, err := s.client.NewRequest("GET", u, nil) if err != nil { return false, nil, err } - resp, err := s.client.Do(req, nil) + resp, err := s.client.Do(ctx, req, nil) merged, err := parseBoolResponse(err) return merged, resp, err } @@ -247,29 +304,48 @@ type PullRequestMergeResult struct { Message *string `json:"message,omitempty"` } +// PullRequestOptions lets you define how a pull request will be merged. +type PullRequestOptions struct { + CommitTitle string // Extra detail to append to automatic commit message. (Optional.) + SHA string // SHA that pull request head must match to allow merge. (Optional.) + + // The merge method to use. Possible values include: "merge", "squash", and "rebase" with the default being merge. (Optional.) + MergeMethod string +} + type pullRequestMergeRequest struct { - CommitMessage *string `json:"commit_message"` + CommitMessage string `json:"commit_message"` + CommitTitle string `json:"commit_title,omitempty"` + MergeMethod string `json:"merge_method,omitempty"` + SHA string `json:"sha,omitempty"` } // Merge a pull request (Merge Button™). +// commitMessage is the title for the automatic commit message. // // GitHub API docs: https://developer.github.com/v3/pulls/#merge-a-pull-request-merge-buttontrade -func (s *PullRequestsService) Merge(owner string, repo string, number int, commitMessage string) (*PullRequestMergeResult, *Response, error) { +func (s *PullRequestsService) Merge(ctx context.Context, owner string, repo string, number int, commitMessage string, options *PullRequestOptions) (*PullRequestMergeResult, *Response, error) { u := fmt.Sprintf("repos/%v/%v/pulls/%d/merge", owner, repo, number) - req, err := s.client.NewRequest("PUT", u, &pullRequestMergeRequest{ - CommitMessage: &commitMessage, - }) - + pullRequestBody := &pullRequestMergeRequest{CommitMessage: commitMessage} + if options != nil { + pullRequestBody.CommitTitle = options.CommitTitle + pullRequestBody.MergeMethod = options.MergeMethod + pullRequestBody.SHA = options.SHA + } + req, err := s.client.NewRequest("PUT", u, pullRequestBody) if err != nil { return nil, nil, err } + // TODO: This header will be unnecessary when the API is no longer in preview. 
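+ // (The preview media type sent below is what enables the non-default
+ // "squash" and "rebase" values of PullRequestOptions.MergeMethod.)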
+ req.Header.Set("Accept", mediaTypeSquashPreview) + mergeResult := new(PullRequestMergeResult) - resp, err := s.client.Do(req, mergeResult) + resp, err := s.client.Do(ctx, req, mergeResult) if err != nil { return nil, resp, err } - return mergeResult, resp, err + return mergeResult, resp, nil } diff --git a/vendor/github.com/google/go-github/github/pulls_comments.go b/vendor/github.com/google/go-github/github/pulls_comments.go index 10d2a64906..bc0bc2d4a2 100644 --- a/vendor/github.com/google/go-github/github/pulls_comments.go +++ b/vendor/github.com/google/go-github/github/pulls_comments.go @@ -6,6 +6,7 @@ package github import ( + "context" "fmt" "time" ) @@ -13,6 +14,7 @@ import ( // PullRequestComment represents a comment left on a pull request. type PullRequestComment struct { ID *int `json:"id,omitempty"` + InReplyTo *int `json:"in_reply_to,omitempty"` Body *string `json:"body,omitempty"` Path *string `json:"path,omitempty"` DiffHunk *string `json:"diff_hunk,omitempty"` @@ -21,6 +23,7 @@ type PullRequestComment struct { CommitID *string `json:"commit_id,omitempty"` OriginalCommitID *string `json:"original_commit_id,omitempty"` User *User `json:"user,omitempty"` + Reactions *Reactions `json:"reactions,omitempty"` CreatedAt *time.Time `json:"created_at,omitempty"` UpdatedAt *time.Time `json:"updated_at,omitempty"` URL *string `json:"url,omitempty"` @@ -35,10 +38,10 @@ func (p PullRequestComment) String() string { // PullRequestListCommentsOptions specifies the optional parameters to the // PullRequestsService.ListComments method. type PullRequestListCommentsOptions struct { - // Sort specifies how to sort comments. Possible values are: created, updated. + // Sort specifies how to sort comments. Possible values are: created, updated. Sort string `url:"sort,omitempty"` - // Direction in which to sort comments. Possible values are: asc, desc. + // Direction in which to sort comments. Possible values are: asc, desc. Direction string `url:"direction,omitempty"` // Since filters comments by time. @@ -47,12 +50,12 @@ type PullRequestListCommentsOptions struct { ListOptions } -// ListComments lists all comments on the specified pull request. Specifying a +// ListComments lists all comments on the specified pull request. Specifying a // pull request number of 0 will return all comments on all pull requests for // the repository. // // GitHub API docs: https://developer.github.com/v3/pulls/comments/#list-comments-on-a-pull-request -func (s *PullRequestsService) ListComments(owner string, repo string, number int, opt *PullRequestListCommentsOptions) ([]PullRequestComment, *Response, error) { +func (s *PullRequestsService) ListComments(ctx context.Context, owner string, repo string, number int, opt *PullRequestListCommentsOptions) ([]*PullRequestComment, *Response, error) { var u string if number == 0 { u = fmt.Sprintf("repos/%v/%v/pulls/comments", owner, repo) @@ -69,38 +72,44 @@ func (s *PullRequestsService) ListComments(owner string, repo string, number int return nil, nil, err } - comments := new([]PullRequestComment) - resp, err := s.client.Do(req, comments) + // TODO: remove custom Accept header when this API fully launches. + req.Header.Set("Accept", mediaTypeReactionsPreview) + + var comments []*PullRequestComment + resp, err := s.client.Do(ctx, req, &comments) if err != nil { return nil, resp, err } - return *comments, resp, err + return comments, resp, nil } // GetComment fetches the specified pull request comment. 
// // GitHub API docs: https://developer.github.com/v3/pulls/comments/#get-a-single-comment -func (s *PullRequestsService) GetComment(owner string, repo string, number int) (*PullRequestComment, *Response, error) { +func (s *PullRequestsService) GetComment(ctx context.Context, owner string, repo string, number int) (*PullRequestComment, *Response, error) { u := fmt.Sprintf("repos/%v/%v/pulls/comments/%d", owner, repo, number) req, err := s.client.NewRequest("GET", u, nil) if err != nil { return nil, nil, err } + // TODO: remove custom Accept header when this API fully launches. + req.Header.Set("Accept", mediaTypeReactionsPreview) + comment := new(PullRequestComment) - resp, err := s.client.Do(req, comment) + resp, err := s.client.Do(ctx, req, comment) if err != nil { return nil, resp, err } - return comment, resp, err + return comment, resp, nil } // CreateComment creates a new comment on the specified pull request. // // GitHub API docs: https://developer.github.com/v3/pulls/comments/#create-a-comment -func (s *PullRequestsService) CreateComment(owner string, repo string, number int, comment *PullRequestComment) (*PullRequestComment, *Response, error) { +func (s *PullRequestsService) CreateComment(ctx context.Context, owner string, repo string, number int, comment *PullRequestComment) (*PullRequestComment, *Response, error) { u := fmt.Sprintf("repos/%v/%v/pulls/%d/comments", owner, repo, number) req, err := s.client.NewRequest("POST", u, comment) if err != nil { @@ -108,18 +117,18 @@ func (s *PullRequestsService) CreateComment(owner string, repo string, number in } c := new(PullRequestComment) - resp, err := s.client.Do(req, c) + resp, err := s.client.Do(ctx, req, c) if err != nil { return nil, resp, err } - return c, resp, err + return c, resp, nil } // EditComment updates a pull request comment. // // GitHub API docs: https://developer.github.com/v3/pulls/comments/#edit-a-comment -func (s *PullRequestsService) EditComment(owner string, repo string, number int, comment *PullRequestComment) (*PullRequestComment, *Response, error) { +func (s *PullRequestsService) EditComment(ctx context.Context, owner string, repo string, number int, comment *PullRequestComment) (*PullRequestComment, *Response, error) { u := fmt.Sprintf("repos/%v/%v/pulls/comments/%d", owner, repo, number) req, err := s.client.NewRequest("PATCH", u, comment) if err != nil { @@ -127,22 +136,22 @@ func (s *PullRequestsService) EditComment(owner string, repo string, number int, } c := new(PullRequestComment) - resp, err := s.client.Do(req, c) + resp, err := s.client.Do(ctx, req, c) if err != nil { return nil, resp, err } - return c, resp, err + return c, resp, nil } // DeleteComment deletes a pull request comment. 
// // GitHub API docs: https://developer.github.com/v3/pulls/comments/#delete-a-comment -func (s *PullRequestsService) DeleteComment(owner string, repo string, number int) (*Response, error) { +func (s *PullRequestsService) DeleteComment(ctx context.Context, owner string, repo string, number int) (*Response, error) { u := fmt.Sprintf("repos/%v/%v/pulls/comments/%d", owner, repo, number) req, err := s.client.NewRequest("DELETE", u, nil) if err != nil { return nil, err } - return s.client.Do(req, nil) + return s.client.Do(ctx, req, nil) } diff --git a/vendor/github.com/google/go-github/github/pulls_reviewers.go b/vendor/github.com/google/go-github/github/pulls_reviewers.go new file mode 100644 index 0000000000..efa3888964 --- /dev/null +++ b/vendor/github.com/google/go-github/github/pulls_reviewers.go @@ -0,0 +1,84 @@ +// Copyright 2017 The go-github AUTHORS. All rights reserved. +// +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package github + +import ( + "context" + "fmt" +) + +// RequestReviewers creates a review request for the provided GitHub users for the specified pull request. +// +// GitHub API docs: https://developer.github.com/v3/pulls/review_requests/#create-a-review-request +func (s *PullRequestsService) RequestReviewers(ctx context.Context, owner, repo string, number int, logins []string) (*PullRequest, *Response, error) { + u := fmt.Sprintf("repos/%s/%s/pulls/%d/requested_reviewers", owner, repo, number) + + reviewers := struct { + Reviewers []string `json:"reviewers,omitempty"` + }{ + Reviewers: logins, + } + req, err := s.client.NewRequest("POST", u, &reviewers) + if err != nil { + return nil, nil, err + } + + // TODO: remove custom Accept header when this API fully launches + req.Header.Set("Accept", mediaTypePullRequestReviewsPreview) + + r := new(PullRequest) + resp, err := s.client.Do(ctx, req, r) + if err != nil { + return nil, resp, err + } + + return r, resp, nil +} + +// ListReviewers lists users whose reviews have been requested on the specified pull request. +// +// GitHub API docs: https://developer.github.com/v3/pulls/review_requests/#list-review-requests +func (s *PullRequestsService) ListReviewers(ctx context.Context, owner, repo string, number int) ([]*User, *Response, error) { + u := fmt.Sprintf("repos/%v/%v/pulls/%d/requested_reviewers", owner, repo, number) + + req, err := s.client.NewRequest("GET", u, nil) + if err != nil { + return nil, nil, err + } + + // TODO: remove custom Accept header when this API fully launches + req.Header.Set("Accept", mediaTypePullRequestReviewsPreview) + + var users []*User + resp, err := s.client.Do(ctx, req, &users) + if err != nil { + return nil, resp, err + } + + return users, resp, nil +} + +// RemoveReviewers removes the review request for the provided GitHub users for the specified pull request. 
+// +// GitHub API docs: https://developer.github.com/v3/pulls/review_requests/#delete-a-review-request +func (s *PullRequestsService) RemoveReviewers(ctx context.Context, owner, repo string, number int, logins []string) (*Response, error) { + u := fmt.Sprintf("repos/%s/%s/pulls/%d/requested_reviewers", owner, repo, number) + + reviewers := struct { + Reviewers []string `json:"reviewers,omitempty"` + }{ + Reviewers: logins, + } + req, err := s.client.NewRequest("DELETE", u, &reviewers) + if err != nil { + return nil, err + } + + // TODO: remove custom Accept header when this API fully launches + req.Header.Set("Accept", mediaTypePullRequestReviewsPreview) + + return s.client.Do(ctx, req, reviewers) +} diff --git a/vendor/github.com/google/go-github/github/pulls_reviews.go b/vendor/github.com/google/go-github/github/pulls_reviews.go new file mode 100644 index 0000000000..c27b6a8c47 --- /dev/null +++ b/vendor/github.com/google/go-github/github/pulls_reviews.go @@ -0,0 +1,248 @@ +// Copyright 2016 The go-github AUTHORS. All rights reserved. +// +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package github + +import ( + "context" + "fmt" + "time" +) + +// PullRequestReview represents a review of a pull request. +type PullRequestReview struct { + ID *int `json:"id,omitempty"` + User *User `json:"user,omitempty"` + Body *string `json:"body,omitempty"` + SubmittedAt *time.Time `json:"submitted_at,omitempty"` + CommitID *string `json:"commit_id,omitempty"` + HTMLURL *string `json:"html_url,omitempty"` + PullRequestURL *string `json:"pull_request_url,omitempty"` + State *string `json:"state,omitempty"` +} + +func (p PullRequestReview) String() string { + return Stringify(p) +} + +// DraftReviewComment represents a comment part of the review. +type DraftReviewComment struct { + Path *string `json:"path,omitempty"` + Position *int `json:"position,omitempty"` + Body *string `json:"body,omitempty"` +} + +func (c DraftReviewComment) String() string { + return Stringify(c) +} + +// PullRequestReviewRequest represents a request to create a review. +type PullRequestReviewRequest struct { + Body *string `json:"body,omitempty"` + Event *string `json:"event,omitempty"` + Comments []*DraftReviewComment `json:"comments,omitempty"` +} + +func (r PullRequestReviewRequest) String() string { + return Stringify(r) +} + +// PullRequestReviewDismissalRequest represents a request to dismiss a review. +type PullRequestReviewDismissalRequest struct { + Message *string `json:"message,omitempty"` +} + +func (r PullRequestReviewDismissalRequest) String() string { + return Stringify(r) +} + +// ListReviews lists all reviews on the specified pull request. +// +// TODO: Follow up with GitHub support about an issue with this method's +// returned error format and remove this comment once it's fixed. 
+// Read more about it here - https://github.com/google/go-github/issues/540 +// +// GitHub API docs: https://developer.github.com/v3/pulls/reviews/#list-reviews-on-a-pull-request +func (s *PullRequestsService) ListReviews(ctx context.Context, owner, repo string, number int) ([]*PullRequestReview, *Response, error) { + u := fmt.Sprintf("repos/%v/%v/pulls/%d/reviews", owner, repo, number) + + req, err := s.client.NewRequest("GET", u, nil) + if err != nil { + return nil, nil, err + } + + // TODO: remove custom Accept header when this API fully launches + req.Header.Set("Accept", mediaTypePullRequestReviewsPreview) + + var reviews []*PullRequestReview + resp, err := s.client.Do(ctx, req, &reviews) + if err != nil { + return nil, resp, err + } + + return reviews, resp, nil +} + +// GetReview fetches the specified pull request review. +// +// TODO: Follow up with GitHub support about an issue with this method's +// returned error format and remove this comment once it's fixed. +// Read more about it here - https://github.com/google/go-github/issues/540 +// +// GitHub API docs: https://developer.github.com/v3/pulls/reviews/#get-a-single-review +func (s *PullRequestsService) GetReview(ctx context.Context, owner, repo string, number, reviewID int) (*PullRequestReview, *Response, error) { + u := fmt.Sprintf("repos/%v/%v/pulls/%d/reviews/%d", owner, repo, number, reviewID) + + req, err := s.client.NewRequest("GET", u, nil) + if err != nil { + return nil, nil, err + } + + // TODO: remove custom Accept header when this API fully launches + req.Header.Set("Accept", mediaTypePullRequestReviewsPreview) + + review := new(PullRequestReview) + resp, err := s.client.Do(ctx, req, review) + if err != nil { + return nil, resp, err + } + + return review, resp, nil +} + +// DeletePendingReview deletes the specified pull request pending review. +// +// TODO: Follow up with GitHub support about an issue with this method's +// returned error format and remove this comment once it's fixed. +// Read more about it here - https://github.com/google/go-github/issues/540 +// +// GitHub API docs: https://developer.github.com/v3/pulls/reviews/#delete-a-pending-review +func (s *PullRequestsService) DeletePendingReview(ctx context.Context, owner, repo string, number, reviewID int) (*PullRequestReview, *Response, error) { + u := fmt.Sprintf("repos/%v/%v/pulls/%d/reviews/%d", owner, repo, number, reviewID) + + req, err := s.client.NewRequest("DELETE", u, nil) + if err != nil { + return nil, nil, err + } + + // TODO: remove custom Accept header when this API fully launches + req.Header.Set("Accept", mediaTypePullRequestReviewsPreview) + + review := new(PullRequestReview) + resp, err := s.client.Do(ctx, req, review) + if err != nil { + return nil, resp, err + } + + return review, resp, nil +} + +// ListReviewComments lists all the comments for the specified review. +// +// TODO: Follow up with GitHub support about an issue with this method's +// returned error format and remove this comment once it's fixed. 
+// Read more about it here - https://github.com/google/go-github/issues/540 +// +// GitHub API docs: https://developer.github.com/v3/pulls/reviews/#get-a-single-reviews-comments +func (s *PullRequestsService) ListReviewComments(ctx context.Context, owner, repo string, number, reviewID int) ([]*PullRequestComment, *Response, error) { + u := fmt.Sprintf("repos/%v/%v/pulls/%d/reviews/%d/comments", owner, repo, number, reviewID) + + req, err := s.client.NewRequest("GET", u, nil) + if err != nil { + return nil, nil, err + } + + // TODO: remove custom Accept header when this API fully launches + req.Header.Set("Accept", mediaTypePullRequestReviewsPreview) + + var comments []*PullRequestComment + resp, err := s.client.Do(ctx, req, &comments) + if err != nil { + return nil, resp, err + } + + return comments, resp, nil +} + +// CreateReview creates a new review on the specified pull request. +// +// TODO: Follow up with GitHub support about an issue with this method's +// returned error format and remove this comment once it's fixed. +// Read more about it here - https://github.com/google/go-github/issues/540 +// +// GitHub API docs: https://developer.github.com/v3/pulls/reviews/#create-a-pull-request-review +func (s *PullRequestsService) CreateReview(ctx context.Context, owner, repo string, number int, review *PullRequestReviewRequest) (*PullRequestReview, *Response, error) { + u := fmt.Sprintf("repos/%v/%v/pulls/%d/reviews", owner, repo, number) + + req, err := s.client.NewRequest("POST", u, review) + if err != nil { + return nil, nil, err + } + + // TODO: remove custom Accept header when this API fully launches + req.Header.Set("Accept", mediaTypePullRequestReviewsPreview) + + r := new(PullRequestReview) + resp, err := s.client.Do(ctx, req, r) + if err != nil { + return nil, resp, err + } + + return r, resp, nil +} + +// SubmitReview submits a specified review on the specified pull request. +// +// TODO: Follow up with GitHub support about an issue with this method's +// returned error format and remove this comment once it's fixed. +// Read more about it here - https://github.com/google/go-github/issues/540 +// +// GitHub API docs: https://developer.github.com/v3/pulls/reviews/#submit-a-pull-request-review +func (s *PullRequestsService) SubmitReview(ctx context.Context, owner, repo string, number, reviewID int, review *PullRequestReviewRequest) (*PullRequestReview, *Response, error) { + u := fmt.Sprintf("repos/%v/%v/pulls/%d/reviews/%d/events", owner, repo, number, reviewID) + + req, err := s.client.NewRequest("POST", u, review) + if err != nil { + return nil, nil, err + } + + // TODO: remove custom Accept header when this API fully launches + req.Header.Set("Accept", mediaTypePullRequestReviewsPreview) + + r := new(PullRequestReview) + resp, err := s.client.Do(ctx, req, r) + if err != nil { + return nil, resp, err + } + + return r, resp, nil +} + +// DismissReview dismisses a specified review on the specified pull request. +// +// TODO: Follow up with GitHub support about an issue with this method's +// returned error format and remove this comment once it's fixed. 
+// Read more about it here - https://github.com/google/go-github/issues/540 +// +// GitHub API docs: https://developer.github.com/v3/pulls/reviews/#dismiss-a-pull-request-review +func (s *PullRequestsService) DismissReview(ctx context.Context, owner, repo string, number, reviewID int, review *PullRequestReviewDismissalRequest) (*PullRequestReview, *Response, error) { + u := fmt.Sprintf("repos/%v/%v/pulls/%d/reviews/%d/dismissals", owner, repo, number, reviewID) + + req, err := s.client.NewRequest("PUT", u, review) + if err != nil { + return nil, nil, err + } + + // TODO: remove custom Accept header when this API fully launches + req.Header.Set("Accept", mediaTypePullRequestReviewsPreview) + + r := new(PullRequestReview) + resp, err := s.client.Do(ctx, req, r) + if err != nil { + return nil, resp, err + } + + return r, resp, nil +} diff --git a/vendor/github.com/google/go-github/github/reactions.go b/vendor/github.com/google/go-github/github/reactions.go new file mode 100644 index 0000000000..739413d716 --- /dev/null +++ b/vendor/github.com/google/go-github/github/reactions.go @@ -0,0 +1,273 @@ +// Copyright 2016 The go-github AUTHORS. All rights reserved. +// +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package github + +import ( + "context" + "fmt" +) + +// ReactionsService provides access to the reactions-related functions in the +// GitHub API. +// +// GitHub API docs: https://developer.github.com/v3/reactions/ +type ReactionsService service + +// Reaction represents a GitHub reaction. +type Reaction struct { + // ID is the Reaction ID. + ID *int `json:"id,omitempty"` + User *User `json:"user,omitempty"` + // Content is the type of reaction. + // Possible values are: + // "+1", "-1", "laugh", "confused", "heart", "hooray". + Content *string `json:"content,omitempty"` +} + +// Reactions represents a summary of GitHub reactions. +type Reactions struct { + TotalCount *int `json:"total_count,omitempty"` + PlusOne *int `json:"+1,omitempty"` + MinusOne *int `json:"-1,omitempty"` + Laugh *int `json:"laugh,omitempty"` + Confused *int `json:"confused,omitempty"` + Heart *int `json:"heart,omitempty"` + Hooray *int `json:"hooray,omitempty"` + URL *string `json:"url,omitempty"` +} + +func (r Reaction) String() string { + return Stringify(r) +} + +// ListCommentReactions lists the reactions for a commit comment. +// +// GitHub API docs: https://developer.github.com/v3/reactions/#list-reactions-for-a-commit-comment +func (s *ReactionsService) ListCommentReactions(ctx context.Context, owner, repo string, id int, opt *ListOptions) ([]*Reaction, *Response, error) { + u := fmt.Sprintf("repos/%v/%v/comments/%v/reactions", owner, repo, id) + u, err := addOptions(u, opt) + if err != nil { + return nil, nil, err + } + + req, err := s.client.NewRequest("GET", u, nil) + if err != nil { + return nil, nil, err + } + + // TODO: remove custom Accept header when this API fully launches. + req.Header.Set("Accept", mediaTypeReactionsPreview) + + var m []*Reaction + resp, err := s.client.Do(ctx, req, &m) + if err != nil { + return nil, resp, err + } + + return m, resp, nil +} + +// CreateCommentReaction creates a reaction for a commit comment. +// Note that if you have already created a reaction of type content, the +// previously created reaction will be returned with Status: 200 OK. 
+// +// GitHub API docs: https://developer.github.com/v3/reactions/#create-reaction-for-a-commit-comment +func (s ReactionsService) CreateCommentReaction(ctx context.Context, owner, repo string, id int, content string) (*Reaction, *Response, error) { + u := fmt.Sprintf("repos/%v/%v/comments/%v/reactions", owner, repo, id) + + body := &Reaction{Content: String(content)} + req, err := s.client.NewRequest("POST", u, body) + if err != nil { + return nil, nil, err + } + + // TODO: remove custom Accept header when this API fully launches. + req.Header.Set("Accept", mediaTypeReactionsPreview) + + m := &Reaction{} + resp, err := s.client.Do(ctx, req, m) + if err != nil { + return nil, resp, err + } + + return m, resp, nil +} + +// ListIssueReactions lists the reactions for an issue. +// +// GitHub API docs: https://developer.github.com/v3/reactions/#list-reactions-for-an-issue +func (s *ReactionsService) ListIssueReactions(ctx context.Context, owner, repo string, number int, opt *ListOptions) ([]*Reaction, *Response, error) { + u := fmt.Sprintf("repos/%v/%v/issues/%v/reactions", owner, repo, number) + u, err := addOptions(u, opt) + if err != nil { + return nil, nil, err + } + + req, err := s.client.NewRequest("GET", u, nil) + if err != nil { + return nil, nil, err + } + + // TODO: remove custom Accept header when this API fully launches. + req.Header.Set("Accept", mediaTypeReactionsPreview) + + var m []*Reaction + resp, err := s.client.Do(ctx, req, &m) + if err != nil { + return nil, resp, err + } + + return m, resp, nil +} + +// CreateIssueReaction creates a reaction for an issue. +// Note that if you have already created a reaction of type content, the +// previously created reaction will be returned with Status: 200 OK. +// +// GitHub API docs: https://developer.github.com/v3/reactions/#create-reaction-for-an-issue +func (s ReactionsService) CreateIssueReaction(ctx context.Context, owner, repo string, number int, content string) (*Reaction, *Response, error) { + u := fmt.Sprintf("repos/%v/%v/issues/%v/reactions", owner, repo, number) + + body := &Reaction{Content: String(content)} + req, err := s.client.NewRequest("POST", u, body) + if err != nil { + return nil, nil, err + } + + // TODO: remove custom Accept header when this API fully launches. + req.Header.Set("Accept", mediaTypeReactionsPreview) + + m := &Reaction{} + resp, err := s.client.Do(ctx, req, m) + if err != nil { + return nil, resp, err + } + + return m, resp, nil +} + +// ListIssueCommentReactions lists the reactions for an issue comment. +// +// GitHub API docs: https://developer.github.com/v3/reactions/#list-reactions-for-an-issue-comment +func (s *ReactionsService) ListIssueCommentReactions(ctx context.Context, owner, repo string, id int, opt *ListOptions) ([]*Reaction, *Response, error) { + u := fmt.Sprintf("repos/%v/%v/issues/comments/%v/reactions", owner, repo, id) + u, err := addOptions(u, opt) + if err != nil { + return nil, nil, err + } + + req, err := s.client.NewRequest("GET", u, nil) + if err != nil { + return nil, nil, err + } + + // TODO: remove custom Accept header when this API fully launches. + req.Header.Set("Accept", mediaTypeReactionsPreview) + + var m []*Reaction + resp, err := s.client.Do(ctx, req, &m) + if err != nil { + return nil, resp, err + } + + return m, resp, nil +} + +// CreateIssueCommentReaction creates a reaction for an issue comment. +// Note that if you have already created a reaction of type content, the +// previously created reaction will be returned with Status: 200 OK. 
+// +// GitHub API docs: https://developer.github.com/v3/reactions/#create-reaction-for-an-issue-comment +func (s ReactionsService) CreateIssueCommentReaction(ctx context.Context, owner, repo string, id int, content string) (*Reaction, *Response, error) { + u := fmt.Sprintf("repos/%v/%v/issues/comments/%v/reactions", owner, repo, id) + + body := &Reaction{Content: String(content)} + req, err := s.client.NewRequest("POST", u, body) + if err != nil { + return nil, nil, err + } + + // TODO: remove custom Accept header when this API fully launches. + req.Header.Set("Accept", mediaTypeReactionsPreview) + + m := &Reaction{} + resp, err := s.client.Do(ctx, req, m) + if err != nil { + return nil, resp, err + } + + return m, resp, nil +} + +// ListPullRequestCommentReactions lists the reactions for a pull request review comment. +// +// GitHub API docs: https://developer.github.com/v3/reactions/#list-reactions-for-an-issue-comment +func (s *ReactionsService) ListPullRequestCommentReactions(ctx context.Context, owner, repo string, id int, opt *ListOptions) ([]*Reaction, *Response, error) { + u := fmt.Sprintf("repos/%v/%v/pulls/comments/%v/reactions", owner, repo, id) + u, err := addOptions(u, opt) + if err != nil { + return nil, nil, err + } + + req, err := s.client.NewRequest("GET", u, nil) + if err != nil { + return nil, nil, err + } + + // TODO: remove custom Accept header when this API fully launches. + req.Header.Set("Accept", mediaTypeReactionsPreview) + + var m []*Reaction + resp, err := s.client.Do(ctx, req, &m) + if err != nil { + return nil, resp, err + } + + return m, resp, nil +} + +// CreatePullRequestCommentReaction creates a reaction for a pull request review comment. +// Note that if you have already created a reaction of type content, the +// previously created reaction will be returned with Status: 200 OK. +// +// GitHub API docs: https://developer.github.com/v3/reactions/#create-reaction-for-an-issue-comment +func (s ReactionsService) CreatePullRequestCommentReaction(ctx context.Context, owner, repo string, id int, content string) (*Reaction, *Response, error) { + u := fmt.Sprintf("repos/%v/%v/pulls/comments/%v/reactions", owner, repo, id) + + body := &Reaction{Content: String(content)} + req, err := s.client.NewRequest("POST", u, body) + if err != nil { + return nil, nil, err + } + + // TODO: remove custom Accept header when this API fully launches. + req.Header.Set("Accept", mediaTypeReactionsPreview) + + m := &Reaction{} + resp, err := s.client.Do(ctx, req, m) + if err != nil { + return nil, resp, err + } + + return m, resp, nil +} + +// DeleteReaction deletes a reaction. +// +// GitHub API docs: https://developer.github.com/v3/reaction/reactions/#delete-a-reaction-archive +func (s *ReactionsService) DeleteReaction(ctx context.Context, id int) (*Response, error) { + u := fmt.Sprintf("reactions/%v", id) + + req, err := s.client.NewRequest("DELETE", u, nil) + if err != nil { + return nil, err + } + + // TODO: remove custom Accept header when this API fully launches. 
+ req.Header.Set("Accept", mediaTypeReactionsPreview) + + return s.client.Do(ctx, req, nil) +} diff --git a/vendor/github.com/google/go-github/github/repos.go b/vendor/github.com/google/go-github/github/repos.go index cba160ce66..22dc42de95 100644 --- a/vendor/github.com/google/go-github/github/repos.go +++ b/vendor/github.com/google/go-github/github/repos.go @@ -5,15 +5,17 @@ package github -import "fmt" +import ( + "context" + "fmt" + "strings" +) // RepositoriesService handles communication with the repository related // methods of the GitHub API. // -// GitHub API docs: http://developer.github.com/v3/repos/ -type RepositoriesService struct { - client *Client -} +// GitHub API docs: https://developer.github.com/v3/repos/ +type RepositoriesService service // Repository represents a GitHub repository. type Repository struct { @@ -48,15 +50,22 @@ type Repository struct { Source *Repository `json:"source,omitempty"` Organization *Organization `json:"organization,omitempty"` Permissions *map[string]bool `json:"permissions,omitempty"` + AllowRebaseMerge *bool `json:"allow_rebase_merge,omitempty"` + AllowSquashMerge *bool `json:"allow_squash_merge,omitempty"` + AllowMergeCommit *bool `json:"allow_merge_commit,omitempty"` // Only provided when using RepositoriesService.Get while in preview License *License `json:"license,omitempty"` // Additional mutable fields when creating and editing a repository - Private *bool `json:"private"` - HasIssues *bool `json:"has_issues"` - HasWiki *bool `json:"has_wiki"` - HasDownloads *bool `json:"has_downloads"` + Private *bool `json:"private"` + HasIssues *bool `json:"has_issues"` + HasWiki *bool `json:"has_wiki"` + HasPages *bool `json:"has_pages"` + HasDownloads *bool `json:"has_downloads"` + LicenseTemplate *string `json:"license_template,omitempty"` + GitignoreTemplate *string `json:"gitignore_template,omitempty"` + // Creating an organization repository. Required for non-owners. TeamID *int `json:"team_id"` @@ -72,6 +81,7 @@ type Repository struct { CompareURL *string `json:"compare_url,omitempty"` ContentsURL *string `json:"contents_url,omitempty"` ContributorsURL *string `json:"contributors_url,omitempty"` + DeploymentsURL *string `json:"deployments_url,omitempty"` DownloadsURL *string `json:"downloads_url,omitempty"` EventsURL *string `json:"events_url,omitempty"` ForksURL *string `json:"forks_url,omitempty"` @@ -110,26 +120,43 @@ func (r Repository) String() string { // RepositoryListOptions specifies the optional parameters to the // RepositoriesService.List method. type RepositoryListOptions struct { - // Type of repositories to list. Possible values are: all, owner, public, - // private, member. Default is "all". + // Visibility of repositories to list. Can be one of all, public, or private. + // Default: all + Visibility string `url:"visibility,omitempty"` + + // List repos of given affiliation[s]. + // Comma-separated list of values. Can include: + // * owner: Repositories that are owned by the authenticated user. + // * collaborator: Repositories that the user has been added to as a + // collaborator. + // * organization_member: Repositories that the user has access to through + // being a member of an organization. This includes every repository on + // every team that the user is on. + // Default: owner,collaborator,organization_member + Affiliation string `url:"affiliation,omitempty"` + + // Type of repositories to list. + // Can be one of all, owner, public, private, member. 
Default: all + // Will cause a 422 error if used in the same request as visibility or + // affiliation. Type string `url:"type,omitempty"` - // How to sort the repository list. Possible values are: created, updated, - // pushed, full_name. Default is "full_name". + // How to sort the repository list. Can be one of created, updated, pushed, + // full_name. Default: full_name Sort string `url:"sort,omitempty"` - // Direction in which to sort repositories. Possible values are: asc, desc. - // Default is "asc" when sort is "full_name", otherwise default is "desc". + // Direction in which to sort repositories. Can be one of asc or desc. + // Default: when using full_name: asc; otherwise desc Direction string `url:"direction,omitempty"` ListOptions } -// List the repositories for a user. Passing the empty string will list +// List the repositories for a user. Passing the empty string will list // repositories for the authenticated user. // -// GitHub API docs: http://developer.github.com/v3/repos/#list-user-repositories -func (s *RepositoriesService) List(user string, opt *RepositoryListOptions) ([]Repository, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/repos/#list-user-repositories +func (s *RepositoriesService) List(ctx context.Context, user string, opt *RepositoryListOptions) ([]*Repository, *Response, error) { var u string if user != "" { u = fmt.Sprintf("users/%v/repos", user) @@ -149,20 +176,20 @@ func (s *RepositoriesService) List(user string, opt *RepositoryListOptions) ([]R // TODO: remove custom Accept header when license support fully launches req.Header.Set("Accept", mediaTypeLicensesPreview) - repos := new([]Repository) - resp, err := s.client.Do(req, repos) + var repos []*Repository + resp, err := s.client.Do(ctx, req, &repos) if err != nil { return nil, resp, err } - return *repos, resp, err + return repos, resp, nil } // RepositoryListByOrgOptions specifies the optional parameters to the // RepositoriesService.ListByOrg method. type RepositoryListByOrgOptions struct { - // Type of repositories to list. Possible values are: all, public, private, - // forks, sources, member. Default is "all". + // Type of repositories to list. Possible values are: all, public, private, + // forks, sources, member. Default is "all". Type string `url:"type,omitempty"` ListOptions @@ -170,8 +197,8 @@ type RepositoryListByOrgOptions struct { // ListByOrg lists the repositories for an organization. 
// -// GitHub API docs: http://developer.github.com/v3/repos/#list-organization-repositories -func (s *RepositoriesService) ListByOrg(org string, opt *RepositoryListByOrgOptions) ([]Repository, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/repos/#list-organization-repositories +func (s *RepositoriesService) ListByOrg(ctx context.Context, org string, opt *RepositoryListByOrgOptions) ([]*Repository, *Response, error) { u := fmt.Sprintf("orgs/%v/repos", org) u, err := addOptions(u, opt) if err != nil { @@ -186,13 +213,13 @@ func (s *RepositoriesService) ListByOrg(org string, opt *RepositoryListByOrgOpti // TODO: remove custom Accept header when license support fully launches req.Header.Set("Accept", mediaTypeLicensesPreview) - repos := new([]Repository) - resp, err := s.client.Do(req, repos) + var repos []*Repository + resp, err := s.client.Do(ctx, req, &repos) if err != nil { return nil, resp, err } - return *repos, resp, err + return repos, resp, nil } // RepositoryListAllOptions specifies the optional parameters to the @@ -206,8 +233,8 @@ type RepositoryListAllOptions struct { // ListAll lists all GitHub repositories in the order that they were created. // -// GitHub API docs: http://developer.github.com/v3/repos/#list-all-public-repositories -func (s *RepositoriesService) ListAll(opt *RepositoryListAllOptions) ([]Repository, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/repos/#list-all-public-repositories +func (s *RepositoriesService) ListAll(ctx context.Context, opt *RepositoryListAllOptions) ([]*Repository, *Response, error) { u, err := addOptions("repositories", opt) if err != nil { return nil, nil, err @@ -218,21 +245,21 @@ func (s *RepositoriesService) ListAll(opt *RepositoryListAllOptions) ([]Reposito return nil, nil, err } - repos := new([]Repository) - resp, err := s.client.Do(req, repos) + var repos []*Repository + resp, err := s.client.Do(ctx, req, &repos) if err != nil { return nil, resp, err } - return *repos, resp, err + return repos, resp, nil } -// Create a new repository. If an organization is specified, the new -// repository will be created under that org. If the empty string is +// Create a new repository. If an organization is specified, the new +// repository will be created under that org. If the empty string is // specified, it will be created for the authenticated user. // -// GitHub API docs: http://developer.github.com/v3/repos/#create -func (s *RepositoriesService) Create(org string, repo *Repository) (*Repository, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/repos/#create +func (s *RepositoriesService) Create(ctx context.Context, org string, repo *Repository) (*Repository, *Response, error) { var u string if org != "" { u = fmt.Sprintf("orgs/%v/repos", org) @@ -246,67 +273,94 @@ func (s *RepositoriesService) Create(org string, repo *Repository) (*Repository, } r := new(Repository) - resp, err := s.client.Do(req, r) + resp, err := s.client.Do(ctx, req, r) if err != nil { return nil, resp, err } - return r, resp, err + return r, resp, nil } // Get fetches a repository. 
// -// GitHub API docs: http://developer.github.com/v3/repos/#get -func (s *RepositoriesService) Get(owner, repo string) (*Repository, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/repos/#get +func (s *RepositoriesService) Get(ctx context.Context, owner, repo string) (*Repository, *Response, error) { u := fmt.Sprintf("repos/%v/%v", owner, repo) req, err := s.client.NewRequest("GET", u, nil) if err != nil { return nil, nil, err } + // TODO: remove custom Accept header when the license support fully launches + // https://developer.github.com/v3/licenses/#get-a-repositorys-license + acceptHeaders := []string{mediaTypeLicensesPreview, mediaTypeSquashPreview} + req.Header.Set("Accept", strings.Join(acceptHeaders, ", ")) + + repository := new(Repository) + resp, err := s.client.Do(ctx, req, repository) + if err != nil { + return nil, resp, err + } + + return repository, resp, nil +} + +// GetByID fetches a repository. +// +// Note: GetByID uses the undocumented GitHub API endpoint /repositories/:id. +func (s *RepositoriesService) GetByID(ctx context.Context, id int) (*Repository, *Response, error) { + u := fmt.Sprintf("repositories/%d", id) + req, err := s.client.NewRequest("GET", u, nil) + if err != nil { + return nil, nil, err + } + // TODO: remove custom Accept header when the license support fully launches // https://developer.github.com/v3/licenses/#get-a-repositorys-license req.Header.Set("Accept", mediaTypeLicensesPreview) repository := new(Repository) - resp, err := s.client.Do(req, repository) + resp, err := s.client.Do(ctx, req, repository) if err != nil { return nil, resp, err } - return repository, resp, err + return repository, resp, nil } // Edit updates a repository. // -// GitHub API docs: http://developer.github.com/v3/repos/#edit -func (s *RepositoriesService) Edit(owner, repo string, repository *Repository) (*Repository, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/repos/#edit +func (s *RepositoriesService) Edit(ctx context.Context, owner, repo string, repository *Repository) (*Repository, *Response, error) { u := fmt.Sprintf("repos/%v/%v", owner, repo) req, err := s.client.NewRequest("PATCH", u, repository) if err != nil { return nil, nil, err } + // TODO: Remove this preview header after API is fully vetted. + req.Header.Set("Accept", mediaTypeSquashPreview) + r := new(Repository) - resp, err := s.client.Do(req, r) + resp, err := s.client.Do(ctx, req, r) if err != nil { return nil, resp, err } - return r, resp, err + return r, resp, nil } // Delete a repository. // // GitHub API docs: https://developer.github.com/v3/repos/#delete-a-repository -func (s *RepositoriesService) Delete(owner, repo string) (*Response, error) { +func (s *RepositoriesService) Delete(ctx context.Context, owner, repo string) (*Response, error) { u := fmt.Sprintf("repos/%v/%v", owner, repo) req, err := s.client.NewRequest("DELETE", u, nil) if err != nil { return nil, err } - return s.client.Do(req, nil) + return s.client.Do(ctx, req, nil) } // Contributor represents a repository contributor @@ -342,8 +396,8 @@ type ListContributorsOptions struct { // ListContributors lists contributors for a repository. 
// -// GitHub API docs: http://developer.github.com/v3/repos/#list-contributors -func (s *RepositoriesService) ListContributors(owner string, repository string, opt *ListContributorsOptions) ([]Contributor, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/repos/#list-contributors +func (s *RepositoriesService) ListContributors(ctx context.Context, owner string, repository string, opt *ListContributorsOptions) ([]*Contributor, *Response, error) { u := fmt.Sprintf("repos/%v/%v/contributors", owner, repository) u, err := addOptions(u, opt) if err != nil { @@ -355,13 +409,13 @@ func (s *RepositoriesService) ListContributors(owner string, repository string, return nil, nil, err } - contributor := new([]Contributor) - resp, err := s.client.Do(req, contributor) + var contributor []*Contributor + resp, err := s.client.Do(ctx, req, &contributor) if err != nil { return nil, nil, err } - return *contributor, resp, err + return contributor, resp, nil } // ListLanguages lists languages for the specified repository. The returned map @@ -373,8 +427,8 @@ func (s *RepositoriesService) ListContributors(owner string, repository string, // "Python": 7769 // } // -// GitHub API Docs: http://developer.github.com/v3/repos/#list-languages -func (s *RepositoriesService) ListLanguages(owner string, repo string) (map[string]int, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/repos/#list-languages +func (s *RepositoriesService) ListLanguages(ctx context.Context, owner string, repo string) (map[string]int, *Response, error) { u := fmt.Sprintf("repos/%v/%v/languages", owner, repo) req, err := s.client.NewRequest("GET", u, nil) if err != nil { @@ -382,18 +436,18 @@ func (s *RepositoriesService) ListLanguages(owner string, repo string) (map[stri } languages := make(map[string]int) - resp, err := s.client.Do(req, &languages) + resp, err := s.client.Do(ctx, req, &languages) if err != nil { return nil, resp, err } - return languages, resp, err + return languages, resp, nil } // ListTeams lists the teams for the specified repository. // // GitHub API docs: https://developer.github.com/v3/repos/#list-teams -func (s *RepositoriesService) ListTeams(owner string, repo string, opt *ListOptions) ([]Team, *Response, error) { +func (s *RepositoriesService) ListTeams(ctx context.Context, owner string, repo string, opt *ListOptions) ([]*Team, *Response, error) { u := fmt.Sprintf("repos/%v/%v/teams", owner, repo) u, err := addOptions(u, opt) if err != nil { @@ -405,13 +459,13 @@ func (s *RepositoriesService) ListTeams(owner string, repo string, opt *ListOpti return nil, nil, err } - teams := new([]Team) - resp, err := s.client.Do(req, teams) + var teams []*Team + resp, err := s.client.Do(ctx, req, &teams) if err != nil { return nil, resp, err } - return *teams, resp, err + return teams, resp, nil } // RepositoryTag represents a repository tag. @@ -425,7 +479,7 @@ type RepositoryTag struct { // ListTags lists tags for the specified repository. 
// // GitHub API docs: https://developer.github.com/v3/repos/#list-tags -func (s *RepositoriesService) ListTags(owner string, repo string, opt *ListOptions) ([]RepositoryTag, *Response, error) { +func (s *RepositoriesService) ListTags(ctx context.Context, owner string, repo string, opt *ListOptions) ([]*RepositoryTag, *Response, error) { u := fmt.Sprintf("repos/%v/%v/tags", owner, repo) u, err := addOptions(u, opt) if err != nil { @@ -437,44 +491,77 @@ func (s *RepositoriesService) ListTags(owner string, repo string, opt *ListOptio return nil, nil, err } - tags := new([]RepositoryTag) - resp, err := s.client.Do(req, tags) + var tags []*RepositoryTag + resp, err := s.client.Do(ctx, req, &tags) if err != nil { return nil, resp, err } - return *tags, resp, err + return tags, resp, nil } // Branch represents a repository branch type Branch struct { - Name *string `json:"name,omitempty"` - Commit *Commit `json:"commit,omitempty"` - Protection *Protection `json:"protection,omitempty"` + Name *string `json:"name,omitempty"` + Commit *RepositoryCommit `json:"commit,omitempty"` + Protected *bool `json:"protected,omitempty"` } -// Protection represents a repository branch's protection +// Protection represents a repository branch's protection. type Protection struct { - Enabled *bool `json:"enabled,omitempty"` - RequiredStatusChecks *RequiredStatusChecks `json:"required_status_checks,omitempty"` + RequiredStatusChecks *RequiredStatusChecks `json:"required_status_checks"` + RequiredPullRequestReviews *RequiredPullRequestReviews `json:"required_pull_request_reviews"` + Restrictions *BranchRestrictions `json:"restrictions"` } -// RequiredStatusChecks represents the protection status of a individual branch +// ProtectionRequest represents a request to create/edit a branch's protection. +type ProtectionRequest struct { + RequiredStatusChecks *RequiredStatusChecks `json:"required_status_checks"` + RequiredPullRequestReviews *RequiredPullRequestReviews `json:"required_pull_request_reviews"` + Restrictions *BranchRestrictionsRequest `json:"restrictions"` +} + +// RequiredStatusChecks represents the protection status of a individual branch. type RequiredStatusChecks struct { - // Who required status checks apply to. - // Possible values are: - // off - // non_admins - // everyone - EnforcementLevel *string `json:"enforcement_level,omitempty"` - // The list of status checks which are required - Contexts *[]string `json:"contexts,omitempty"` + // Enforce required status checks for repository administrators. (Required.) + IncludeAdmins bool `json:"include_admins"` + // Require branches to be up to date before merging. (Required.) + Strict bool `json:"strict"` + // The list of status checks to require in order to merge into this + // branch. (Required; use []string{} instead of nil for empty list.) + Contexts []string `json:"contexts"` +} + +// RequiredPullRequestReviews represents the protection configuration for pull requests. +type RequiredPullRequestReviews struct { + // Enforce pull request reviews for repository administrators. (Required.) + IncludeAdmins bool `json:"include_admins"` +} + +// BranchRestrictions represents the restriction that only certain users or +// teams may push to a branch. +type BranchRestrictions struct { + // The list of user logins with push access. + Users []*User `json:"users"` + // The list of team slugs with push access. 
+ Teams []*Team `json:"teams"` +} + +// BranchRestrictionsRequest represents the request to create/edit the +// restriction that only certain users or teams may push to a branch. It is +// separate from BranchRestrictions above because the request structure is +// different from the response structure. +type BranchRestrictionsRequest struct { + // The list of user logins with push access. (Required; use []string{} instead of nil for empty list.) + Users []string `json:"users"` + // The list of team slugs with push access. (Required; use []string{} instead of nil for empty list.) + Teams []string `json:"teams"` } // ListBranches lists branches for the specified repository. // -// GitHub API docs: http://developer.github.com/v3/repos/#list-branches -func (s *RepositoriesService) ListBranches(owner string, repo string, opt *ListOptions) ([]Branch, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/repos/#list-branches +func (s *RepositoriesService) ListBranches(ctx context.Context, owner string, repo string, opt *ListOptions) ([]*Branch, *Response, error) { u := fmt.Sprintf("repos/%v/%v/branches", owner, repo) u, err := addOptions(u, opt) if err != nil { @@ -486,55 +573,115 @@ func (s *RepositoriesService) ListBranches(owner string, repo string, opt *ListO return nil, nil, err } + // TODO: remove custom Accept header when this API fully launches req.Header.Set("Accept", mediaTypeProtectedBranchesPreview) - branches := new([]Branch) - resp, err := s.client.Do(req, branches) + var branches []*Branch + resp, err := s.client.Do(ctx, req, &branches) if err != nil { return nil, resp, err } - return *branches, resp, err + return branches, resp, nil } // GetBranch gets the specified branch for a repository. // // GitHub API docs: https://developer.github.com/v3/repos/#get-branch -func (s *RepositoriesService) GetBranch(owner, repo, branch string) (*Branch, *Response, error) { +func (s *RepositoriesService) GetBranch(ctx context.Context, owner, repo, branch string) (*Branch, *Response, error) { u := fmt.Sprintf("repos/%v/%v/branches/%v", owner, repo, branch) req, err := s.client.NewRequest("GET", u, nil) if err != nil { return nil, nil, err } + // TODO: remove custom Accept header when this API fully launches req.Header.Set("Accept", mediaTypeProtectedBranchesPreview) b := new(Branch) - resp, err := s.client.Do(req, b) + resp, err := s.client.Do(ctx, req, b) if err != nil { return nil, resp, err } - return b, resp, err + return b, resp, nil } -// EditBranch edits the branch (currently only Branch Protection) +// GetBranchProtection gets the protection of a given branch. 
// -// GitHub API docs: https://developer.github.com/v3/repos/#enabling-and-disabling-branch-protection -func (s *RepositoriesService) EditBranch(owner, repo, branchName string, branch *Branch) (*Branch, *Response, error) { - u := fmt.Sprintf("repos/%v/%v/branches/%v", owner, repo, branchName) - req, err := s.client.NewRequest("PATCH", u, branch) +// GitHub API docs: https://developer.github.com/v3/repos/branches/#get-branch-protection +func (s *RepositoriesService) GetBranchProtection(ctx context.Context, owner, repo, branch string) (*Protection, *Response, error) { + u := fmt.Sprintf("repos/%v/%v/branches/%v/protection", owner, repo, branch) + req, err := s.client.NewRequest("GET", u, nil) if err != nil { return nil, nil, err } + // TODO: remove custom Accept header when this API fully launches req.Header.Set("Accept", mediaTypeProtectedBranchesPreview) - b := new(Branch) - resp, err := s.client.Do(req, b) + p := new(Protection) + resp, err := s.client.Do(ctx, req, p) if err != nil { return nil, resp, err } - return b, resp, err + return p, resp, nil +} + +// UpdateBranchProtection updates the protection of a given branch. +// +// GitHub API docs: https://developer.github.com/v3/repos/branches/#update-branch-protection +func (s *RepositoriesService) UpdateBranchProtection(ctx context.Context, owner, repo, branch string, preq *ProtectionRequest) (*Protection, *Response, error) { + u := fmt.Sprintf("repos/%v/%v/branches/%v/protection", owner, repo, branch) + req, err := s.client.NewRequest("PUT", u, preq) + if err != nil { + return nil, nil, err + } + + // TODO: remove custom Accept header when this API fully launches + req.Header.Set("Accept", mediaTypeProtectedBranchesPreview) + + p := new(Protection) + resp, err := s.client.Do(ctx, req, p) + if err != nil { + return nil, resp, err + } + + return p, resp, nil +} + +// RemoveBranchProtection removes the protection of a given branch. +// +// GitHub API docs: https://developer.github.com/v3/repos/branches/#remove-branch-protection +func (s *RepositoriesService) RemoveBranchProtection(ctx context.Context, owner, repo, branch string) (*Response, error) { + u := fmt.Sprintf("repos/%v/%v/branches/%v/protection", owner, repo, branch) + req, err := s.client.NewRequest("DELETE", u, nil) + if err != nil { + return nil, err + } + + // TODO: remove custom Accept header when this API fully launches + req.Header.Set("Accept", mediaTypeProtectedBranchesPreview) + + return s.client.Do(ctx, req, nil) +} + +// License gets the contents of a repository's license if one is detected. 
+// +// GitHub API docs: https://developer.github.com/v3/licenses/#get-the-contents-of-a-repositorys-license +func (s *RepositoriesService) License(ctx context.Context, owner, repo string) (*RepositoryLicense, *Response, error) { + u := fmt.Sprintf("repos/%v/%v/license", owner, repo) + req, err := s.client.NewRequest("GET", u, nil) + if err != nil { + return nil, nil, err + } + + r := &RepositoryLicense{} + resp, err := s.client.Do(ctx, req, r) + if err != nil { + return nil, resp, err + } + + return r, resp, nil } diff --git a/vendor/github.com/google/go-github/github/repos_collaborators.go b/vendor/github.com/google/go-github/github/repos_collaborators.go index 61dc4ef206..ba89b6064a 100644 --- a/vendor/github.com/google/go-github/github/repos_collaborators.go +++ b/vendor/github.com/google/go-github/github/repos_collaborators.go @@ -5,12 +5,15 @@ package github -import "fmt" +import ( + "context" + "fmt" +) -// ListCollaborators lists the Github users that have access to the repository. +// ListCollaborators lists the GitHub users that have access to the repository. // -// GitHub API docs: http://developer.github.com/v3/repos/collaborators/#list -func (s *RepositoriesService) ListCollaborators(owner, repo string, opt *ListOptions) ([]User, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/repos/collaborators/#list +func (s *RepositoriesService) ListCollaborators(ctx context.Context, owner, repo string, opt *ListOptions) ([]*User, *Response, error) { u := fmt.Sprintf("repos/%v/%v/collaborators", owner, repo) u, err := addOptions(u, opt) if err != nil { @@ -22,35 +25,62 @@ func (s *RepositoriesService) ListCollaborators(owner, repo string, opt *ListOpt return nil, nil, err } - req.Header.Set("Accept", mediaTypeOrgPermissionPreview) - - users := new([]User) - resp, err := s.client.Do(req, users) + var users []*User + resp, err := s.client.Do(ctx, req, &users) if err != nil { return nil, resp, err } - return *users, resp, err + return users, resp, nil } -// IsCollaborator checks whether the specified Github user has collaborator +// IsCollaborator checks whether the specified GitHub user has collaborator // access to the given repo. // Note: This will return false if the user is not a collaborator OR the user // is not a GitHub user. // -// GitHub API docs: http://developer.github.com/v3/repos/collaborators/#get -func (s *RepositoriesService) IsCollaborator(owner, repo, user string) (bool, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/repos/collaborators/#get +func (s *RepositoriesService) IsCollaborator(ctx context.Context, owner, repo, user string) (bool, *Response, error) { u := fmt.Sprintf("repos/%v/%v/collaborators/%v", owner, repo, user) req, err := s.client.NewRequest("GET", u, nil) if err != nil { return false, nil, err } - resp, err := s.client.Do(req, nil) + resp, err := s.client.Do(ctx, req, nil) isCollab, err := parseBoolResponse(err) return isCollab, resp, err } +// RepositoryPermissionLevel represents the permission level an organization +// member has for a given repository. +type RepositoryPermissionLevel struct { + // Possible values: "admin", "write", "read", "none" + Permission *string `json:"permission,omitempty"` + + User *User `json:"user,omitempty"` +} + +// GetPermissionLevel retrieves the specific permission level a collaborator has for a given repository. 
+// GitHub API docs: https://developer.github.com/v3/repos/collaborators/#review-a-users-permission-level +func (s *RepositoriesService) GetPermissionLevel(ctx context.Context, owner, repo, user string) (*RepositoryPermissionLevel, *Response, error) { + u := fmt.Sprintf("repos/%v/%v/collaborators/%v/permission", owner, repo, user) + req, err := s.client.NewRequest("GET", u, nil) + if err != nil { + return nil, nil, err + } + + // TODO: remove custom Accept header when this API fully launches. + req.Header.Set("Accept", mediaTypeOrgMembershipPreview) + + rpl := new(RepositoryPermissionLevel) + resp, err := s.client.Do(ctx, req, rpl) + if err != nil { + return nil, resp, err + } + return rpl, resp, nil +} + // RepositoryAddCollaboratorOptions specifies the optional parameters to the // RepositoriesService.AddCollaborator method. type RepositoryAddCollaboratorOptions struct { @@ -60,36 +90,35 @@ type RepositoryAddCollaboratorOptions struct { // push - team members can pull and push, but not administer this repository // admin - team members can pull, push and administer this repository // - // Default value is "pull". This option is only valid for organization-owned repositories. + // Default value is "push". This option is only valid for organization-owned repositories. Permission string `json:"permission,omitempty"` } -// AddCollaborator adds the specified Github user as collaborator to the given repo. +// AddCollaborator adds the specified GitHub user as collaborator to the given repo. // -// GitHub API docs: http://developer.github.com/v3/repos/collaborators/#add-collaborator -func (s *RepositoriesService) AddCollaborator(owner, repo, user string, opt *RepositoryAddCollaboratorOptions) (*Response, error) { +// GitHub API docs: https://developer.github.com/v3/repos/collaborators/#add-user-as-a-collaborator +func (s *RepositoriesService) AddCollaborator(ctx context.Context, owner, repo, user string, opt *RepositoryAddCollaboratorOptions) (*Response, error) { u := fmt.Sprintf("repos/%v/%v/collaborators/%v", owner, repo, user) req, err := s.client.NewRequest("PUT", u, opt) if err != nil { return nil, err } - if opt != nil { - req.Header.Set("Accept", mediaTypeOrgPermissionPreview) - } + // TODO: remove custom Accept header when this API fully launches. + req.Header.Set("Accept", mediaTypeRepositoryInvitationsPreview) - return s.client.Do(req, nil) + return s.client.Do(ctx, req, nil) } -// RemoveCollaborator removes the specified Github user as collaborator from the given repo. +// RemoveCollaborator removes the specified GitHub user as collaborator from the given repo. // Note: Does not return error if a valid user that is not a collaborator is removed. 
// -// GitHub API docs: http://developer.github.com/v3/repos/collaborators/#remove-collaborator -func (s *RepositoriesService) RemoveCollaborator(owner, repo, user string) (*Response, error) { +// GitHub API docs: https://developer.github.com/v3/repos/collaborators/#remove-collaborator +func (s *RepositoriesService) RemoveCollaborator(ctx context.Context, owner, repo, user string) (*Response, error) { u := fmt.Sprintf("repos/%v/%v/collaborators/%v", owner, repo, user) req, err := s.client.NewRequest("DELETE", u, nil) if err != nil { return nil, err } - return s.client.Do(req, nil) + return s.client.Do(ctx, req, nil) } diff --git a/vendor/github.com/google/go-github/github/repos_comments.go b/vendor/github.com/google/go-github/github/repos_comments.go index 2d090bb749..4830ee2206 100644 --- a/vendor/github.com/google/go-github/github/repos_comments.go +++ b/vendor/github.com/google/go-github/github/repos_comments.go @@ -6,6 +6,7 @@ package github import ( + "context" "fmt" "time" ) @@ -17,6 +18,7 @@ type RepositoryComment struct { ID *int `json:"id,omitempty"` CommitID *string `json:"commit_id,omitempty"` User *User `json:"user,omitempty"` + Reactions *Reactions `json:"reactions,omitempty"` CreatedAt *time.Time `json:"created_at,omitempty"` UpdatedAt *time.Time `json:"updated_at,omitempty"` @@ -33,8 +35,8 @@ func (r RepositoryComment) String() string { // ListComments lists all the comments for the repository. // -// GitHub API docs: http://developer.github.com/v3/repos/comments/#list-commit-comments-for-a-repository -func (s *RepositoriesService) ListComments(owner, repo string, opt *ListOptions) ([]RepositoryComment, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/repos/comments/#list-commit-comments-for-a-repository +func (s *RepositoriesService) ListComments(ctx context.Context, owner, repo string, opt *ListOptions) ([]*RepositoryComment, *Response, error) { u := fmt.Sprintf("repos/%v/%v/comments", owner, repo) u, err := addOptions(u, opt) if err != nil { @@ -46,19 +48,22 @@ func (s *RepositoriesService) ListComments(owner, repo string, opt *ListOptions) return nil, nil, err } - comments := new([]RepositoryComment) - resp, err := s.client.Do(req, comments) + // TODO: remove custom Accept header when this API fully launches. + req.Header.Set("Accept", mediaTypeReactionsPreview) + + var comments []*RepositoryComment + resp, err := s.client.Do(ctx, req, &comments) if err != nil { return nil, resp, err } - return *comments, resp, err + return comments, resp, nil } // ListCommitComments lists all the comments for a given commit SHA. // -// GitHub API docs: http://developer.github.com/v3/repos/comments/#list-comments-for-a-single-commit -func (s *RepositoriesService) ListCommitComments(owner, repo, sha string, opt *ListOptions) ([]RepositoryComment, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/repos/comments/#list-comments-for-a-single-commit +func (s *RepositoriesService) ListCommitComments(ctx context.Context, owner, repo, sha string, opt *ListOptions) ([]*RepositoryComment, *Response, error) { u := fmt.Sprintf("repos/%v/%v/commits/%v/comments", owner, repo, sha) u, err := addOptions(u, opt) if err != nil { @@ -70,20 +75,23 @@ func (s *RepositoriesService) ListCommitComments(owner, repo, sha string, opt *L return nil, nil, err } - comments := new([]RepositoryComment) - resp, err := s.client.Do(req, comments) + // TODO: remove custom Accept header when this API fully launches. 
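+	// (Preview media types like this one opt the request in to fields,
+	// here the "reactions" field on comments, that GitHub has not yet
+	// promoted to the stable v3 media type.)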
+ req.Header.Set("Accept", mediaTypeReactionsPreview) + + var comments []*RepositoryComment + resp, err := s.client.Do(ctx, req, &comments) if err != nil { return nil, resp, err } - return *comments, resp, err + return comments, resp, nil } // CreateComment creates a comment for the given commit. // Note: GitHub allows for comments to be created for non-existing files and positions. // -// GitHub API docs: http://developer.github.com/v3/repos/comments/#create-a-commit-comment -func (s *RepositoriesService) CreateComment(owner, repo, sha string, comment *RepositoryComment) (*RepositoryComment, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/repos/comments/#create-a-commit-comment +func (s *RepositoriesService) CreateComment(ctx context.Context, owner, repo, sha string, comment *RepositoryComment) (*RepositoryComment, *Response, error) { u := fmt.Sprintf("repos/%v/%v/commits/%v/comments", owner, repo, sha) req, err := s.client.NewRequest("POST", u, comment) if err != nil { @@ -91,37 +99,40 @@ func (s *RepositoriesService) CreateComment(owner, repo, sha string, comment *Re } c := new(RepositoryComment) - resp, err := s.client.Do(req, c) + resp, err := s.client.Do(ctx, req, c) if err != nil { return nil, resp, err } - return c, resp, err + return c, resp, nil } // GetComment gets a single comment from a repository. // -// GitHub API docs: http://developer.github.com/v3/repos/comments/#get-a-single-commit-comment -func (s *RepositoriesService) GetComment(owner, repo string, id int) (*RepositoryComment, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/repos/comments/#get-a-single-commit-comment +func (s *RepositoriesService) GetComment(ctx context.Context, owner, repo string, id int) (*RepositoryComment, *Response, error) { u := fmt.Sprintf("repos/%v/%v/comments/%v", owner, repo, id) req, err := s.client.NewRequest("GET", u, nil) if err != nil { return nil, nil, err } + // TODO: remove custom Accept header when this API fully launches. + req.Header.Set("Accept", mediaTypeReactionsPreview) + c := new(RepositoryComment) - resp, err := s.client.Do(req, c) + resp, err := s.client.Do(ctx, req, c) if err != nil { return nil, resp, err } - return c, resp, err + return c, resp, nil } // UpdateComment updates the body of a single comment. // -// GitHub API docs: http://developer.github.com/v3/repos/comments/#update-a-commit-comment -func (s *RepositoriesService) UpdateComment(owner, repo string, id int, comment *RepositoryComment) (*RepositoryComment, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/repos/comments/#update-a-commit-comment +func (s *RepositoriesService) UpdateComment(ctx context.Context, owner, repo string, id int, comment *RepositoryComment) (*RepositoryComment, *Response, error) { u := fmt.Sprintf("repos/%v/%v/comments/%v", owner, repo, id) req, err := s.client.NewRequest("PATCH", u, comment) if err != nil { @@ -129,22 +140,22 @@ func (s *RepositoriesService) UpdateComment(owner, repo string, id int, comment } c := new(RepositoryComment) - resp, err := s.client.Do(req, c) + resp, err := s.client.Do(ctx, req, c) if err != nil { return nil, resp, err } - return c, resp, err + return c, resp, nil } // DeleteComment deletes a single comment from a repository. 
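+// A sketch of the full lifecycle ending in deletion (client and ctx
+// assumed as before; Body is the comment text field defined upstream, and
+// "6dcb09b" stands in for a real commit SHA):
+//
+//	c := &github.RepositoryComment{Body: github.String("nice commit!")}
+//	created, _, err := client.Repositories.CreateComment(ctx, "owner", "repo", "6dcb09b", c)
+//	if err == nil {
+//		_, err = client.Repositories.DeleteComment(ctx, "owner", "repo", *created.ID)
+//	}
+//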
// -// GitHub API docs: http://developer.github.com/v3/repos/comments/#delete-a-commit-comment -func (s *RepositoriesService) DeleteComment(owner, repo string, id int) (*Response, error) { +// GitHub API docs: https://developer.github.com/v3/repos/comments/#delete-a-commit-comment +func (s *RepositoriesService) DeleteComment(ctx context.Context, owner, repo string, id int) (*Response, error) { u := fmt.Sprintf("repos/%v/%v/comments/%v", owner, repo, id) req, err := s.client.NewRequest("DELETE", u, nil) if err != nil { return nil, err } - return s.client.Do(req, nil) + return s.client.Do(ctx, req, nil) } diff --git a/vendor/github.com/google/go-github/github/repos_commits.go b/vendor/github.com/google/go-github/github/repos_commits.go index 6401cb4ab8..e516f1afd0 100644 --- a/vendor/github.com/google/go-github/github/repos_commits.go +++ b/vendor/github.com/google/go-github/github/repos_commits.go @@ -6,6 +6,8 @@ package github import ( + "bytes" + "context" "fmt" "time" ) @@ -14,13 +16,14 @@ import ( // Note that it's wrapping a Commit, so author/committer information is in two places, // but contain different details about them: in RepositoryCommit "github details", in Commit - "git details". type RepositoryCommit struct { - SHA *string `json:"sha,omitempty"` - Commit *Commit `json:"commit,omitempty"` - Author *User `json:"author,omitempty"` - Committer *User `json:"committer,omitempty"` - Parents []Commit `json:"parents,omitempty"` - Message *string `json:"message,omitempty"` - HTMLURL *string `json:"html_url,omitempty"` + SHA *string `json:"sha,omitempty"` + Commit *Commit `json:"commit,omitempty"` + Author *User `json:"author,omitempty"` + Committer *User `json:"committer,omitempty"` + Parents []Commit `json:"parents,omitempty"` + HTMLURL *string `json:"html_url,omitempty"` + URL *string `json:"url,omitempty"` + CommentsURL *string `json:"comments_url,omitempty"` // Details about how many changes were made in this commit. Only filled in during GetCommit! Stats *CommitStats `json:"stats,omitempty"` @@ -32,7 +35,7 @@ func (r RepositoryCommit) String() string { return Stringify(r) } -// CommitStats represents the number of additions / deletions from a file in a given RepositoryCommit. +// CommitStats represents the number of additions / deletions from a file in a given RepositoryCommit or GistCommit. type CommitStats struct { Additions *int `json:"additions,omitempty"` Deletions *int `json:"deletions,omitempty"` @@ -45,13 +48,16 @@ func (c CommitStats) String() string { // CommitFile represents a file modified in a commit. type CommitFile struct { - SHA *string `json:"sha,omitempty"` - Filename *string `json:"filename,omitempty"` - Additions *int `json:"additions,omitempty"` - Deletions *int `json:"deletions,omitempty"` - Changes *int `json:"changes,omitempty"` - Status *string `json:"status,omitempty"` - Patch *string `json:"patch,omitempty"` + SHA *string `json:"sha,omitempty"` + Filename *string `json:"filename,omitempty"` + Additions *int `json:"additions,omitempty"` + Deletions *int `json:"deletions,omitempty"` + Changes *int `json:"changes,omitempty"` + Status *string `json:"status,omitempty"` + Patch *string `json:"patch,omitempty"` + BlobURL *string `json:"blob_url,omitempty"` + RawURL *string `json:"raw_url,omitempty"` + ContentsURL *string `json:"contents_url,omitempty"` } func (c CommitFile) String() string { @@ -102,8 +108,8 @@ type CommitsListOptions struct { // ListCommits lists the commits of a repository. 
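+// For example, fetching one page of commits that touch a single path (a
+// sketch; Path and ListOptions are CommitsListOptions fields defined
+// upstream, outside this hunk):
+//
+//	opt := &github.CommitsListOptions{
+//		Path:        "README.md",
+//		ListOptions: github.ListOptions{PerPage: 30},
+//	}
+//	commits, _, err := client.Repositories.ListCommits(ctx, "owner", "repo", opt)
+//	if err != nil {
+//		log.Fatal(err)
+//	}
+//	for _, c := range commits {
+//		fmt.Println(*c.SHA)
+//	}
+//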
// -// GitHub API docs: http://developer.github.com/v3/repos/commits/#list -func (s *RepositoriesService) ListCommits(owner, repo string, opt *CommitsListOptions) ([]RepositoryCommit, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/repos/commits/#list +func (s *RepositoriesService) ListCommits(ctx context.Context, owner, repo string, opt *CommitsListOptions) ([]*RepositoryCommit, *Response, error) { u := fmt.Sprintf("repos/%v/%v/commits", owner, repo) u, err := addOptions(u, opt) if err != nil { @@ -115,21 +121,21 @@ func (s *RepositoriesService) ListCommits(owner, repo string, opt *CommitsListOp return nil, nil, err } - commits := new([]RepositoryCommit) - resp, err := s.client.Do(req, commits) + var commits []*RepositoryCommit + resp, err := s.client.Do(ctx, req, &commits) if err != nil { return nil, resp, err } - return *commits, resp, err + return commits, resp, nil } // GetCommit fetches the specified commit, including all details about it. // todo: support media formats - https://github.com/google/go-github/issues/6 // -// GitHub API docs: http://developer.github.com/v3/repos/commits/#get-a-single-commit -// See also: http://developer.github.com//v3/git/commits/#get-a-single-commit provides the same functionality -func (s *RepositoriesService) GetCommit(owner, repo, sha string) (*RepositoryCommit, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/repos/commits/#get-a-single-commit +// See also: https://developer.github.com//v3/git/commits/#get-a-single-commit provides the same functionality +func (s *RepositoriesService) GetCommit(ctx context.Context, owner, repo, sha string) (*RepositoryCommit, *Response, error) { u := fmt.Sprintf("repos/%v/%v/commits/%v", owner, repo, sha) req, err := s.client.NewRequest("GET", u, nil) @@ -137,20 +143,49 @@ func (s *RepositoriesService) GetCommit(owner, repo, sha string) (*RepositoryCom return nil, nil, err } + // TODO: remove custom Accept header when this API fully launches. + req.Header.Set("Accept", mediaTypeGitSigningPreview) + commit := new(RepositoryCommit) - resp, err := s.client.Do(req, commit) + resp, err := s.client.Do(ctx, req, commit) if err != nil { return nil, resp, err } - return commit, resp, err + return commit, resp, nil +} + +// GetCommitSHA1 gets the SHA-1 of a commit reference. If a last-known SHA1 is +// supplied and no new commits have occurred, a 304 Unmodified response is returned. +// +// GitHub API docs: https://developer.github.com/v3/repos/commits/#get-the-sha-1-of-a-commit-reference +func (s *RepositoriesService) GetCommitSHA1(ctx context.Context, owner, repo, ref, lastSHA string) (string, *Response, error) { + u := fmt.Sprintf("repos/%v/%v/commits/%v", owner, repo, ref) + + req, err := s.client.NewRequest("GET", u, nil) + if err != nil { + return "", nil, err + } + if lastSHA != "" { + req.Header.Set("If-None-Match", `"`+lastSHA+`"`) + } + + req.Header.Set("Accept", mediaTypeV3SHA) + + var buf bytes.Buffer + resp, err := s.client.Do(ctx, req, &buf) + if err != nil { + return "", resp, err + } + + return buf.String(), resp, nil } // CompareCommits compares a range of commits with each other. 
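+// For example (a sketch; the Commits and Files fields of CommitsComparison
+// are defined upstream, outside this hunk):
+//
+//	comp, _, err := client.Repositories.CompareCommits(ctx, "owner", "repo", "master", "topic")
+//	if err != nil {
+//		log.Fatal(err)
+//	}
+//	fmt.Printf("%d commits, %d files changed\n", len(comp.Commits), len(comp.Files))
+//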
// todo: support media formats - https://github.com/google/go-github/issues/6 // -// GitHub API docs: http://developer.github.com/v3/repos/commits/index.html#compare-two-commits -func (s *RepositoriesService) CompareCommits(owner, repo string, base, head string) (*CommitsComparison, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/repos/commits/index.html#compare-two-commits +func (s *RepositoriesService) CompareCommits(ctx context.Context, owner, repo string, base, head string) (*CommitsComparison, *Response, error) { u := fmt.Sprintf("repos/%v/%v/compare/%v...%v", owner, repo, base, head) req, err := s.client.NewRequest("GET", u, nil) @@ -159,10 +194,10 @@ func (s *RepositoriesService) CompareCommits(owner, repo string, base, head stri } comp := new(CommitsComparison) - resp, err := s.client.Do(req, comp) + resp, err := s.client.Do(ctx, req, comp) if err != nil { return nil, resp, err } - return comp, resp, err + return comp, resp, nil } diff --git a/vendor/github.com/google/go-github/github/repos_contents.go b/vendor/github.com/google/go-github/github/repos_contents.go index 6202f19cd0..fa9fd55607 100644 --- a/vendor/github.com/google/go-github/github/repos_contents.go +++ b/vendor/github.com/google/go-github/github/repos_contents.go @@ -4,14 +4,14 @@ // license that can be found in the LICENSE file. // Repository contents API methods. -// http://developer.github.com/v3/repos/contents/ +// GitHub API docs: https://developer.github.com/v3/repos/contents/ package github import ( + "context" "encoding/base64" "encoding/json" - "errors" "fmt" "io" "net/http" @@ -21,11 +21,14 @@ import ( // RepositoryContent represents a file or directory in a github repository. type RepositoryContent struct { - Type *string `json:"type,omitempty"` - Encoding *string `json:"encoding,omitempty"` - Size *int `json:"size,omitempty"` - Name *string `json:"name,omitempty"` - Path *string `json:"path,omitempty"` + Type *string `json:"type,omitempty"` + Encoding *string `json:"encoding,omitempty"` + Size *int `json:"size,omitempty"` + Name *string `json:"name,omitempty"` + Path *string `json:"path,omitempty"` + // Content contains the actual file content, which may be encoded. + // Callers should call GetContent which will decode the content if + // necessary. Content *string `json:"content,omitempty"` SHA *string `json:"sha,omitempty"` URL *string `json:"url,omitempty"` @@ -56,26 +59,36 @@ type RepositoryContentGetOptions struct { Ref string `url:"ref,omitempty"` } +// String converts RepositoryContent to a string. It's primarily for testing. func (r RepositoryContent) String() string { return Stringify(r) } -// Decode decodes the file content if it is base64 encoded. -func (r *RepositoryContent) Decode() ([]byte, error) { - if *r.Encoding != "base64" { - return nil, errors.New("cannot decode non-base64") +// GetContent returns the content of r, decoding it if necessary. +func (r *RepositoryContent) GetContent() (string, error) { + var encoding string + if r.Encoding != nil { + encoding = *r.Encoding } - o, err := base64.StdEncoding.DecodeString(*r.Content) - if err != nil { - return nil, err + + switch encoding { + case "base64": + c, err := base64.StdEncoding.DecodeString(*r.Content) + return string(c), err + case "": + if r.Content == nil { + return "", nil + } + return *r.Content, nil + default: + return "", fmt.Errorf("unsupported content encoding: %v", encoding) } - return o, nil } // GetReadme gets the Readme file for the repository. 
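+// Combined with GetContent above, fetching and decoding the README is a
+// short sketch (client and ctx assumed as before):
+//
+//	readme, _, err := client.Repositories.GetReadme(ctx, "owner", "repo", nil)
+//	if err != nil {
+//		log.Fatal(err)
+//	}
+//	text, err := readme.GetContent() // decodes base64 content transparently
+//	if err == nil {
+//		fmt.Println(text)
+//	}
+//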
// -// GitHub API docs: http://developer.github.com/v3/repos/contents/#get-the-readme -func (s *RepositoriesService) GetReadme(owner, repo string, opt *RepositoryContentGetOptions) (*RepositoryContent, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/repos/contents/#get-the-readme +func (s *RepositoriesService) GetReadme(ctx context.Context, owner, repo string, opt *RepositoryContentGetOptions) (*RepositoryContent, *Response, error) { u := fmt.Sprintf("repos/%v/%v/readme", owner, repo) u, err := addOptions(u, opt) if err != nil { @@ -86,21 +99,21 @@ func (s *RepositoriesService) GetReadme(owner, repo string, opt *RepositoryConte return nil, nil, err } readme := new(RepositoryContent) - resp, err := s.client.Do(req, readme) + resp, err := s.client.Do(ctx, req, readme) if err != nil { return nil, resp, err } - return readme, resp, err + return readme, resp, nil } // DownloadContents returns an io.ReadCloser that reads the contents of the // specified file. This function will work with files of any size, as opposed // to GetContents which is limited to 1 Mb files. It is the caller's // responsibility to close the ReadCloser. -func (s *RepositoriesService) DownloadContents(owner, repo, filepath string, opt *RepositoryContentGetOptions) (io.ReadCloser, error) { +func (s *RepositoriesService) DownloadContents(ctx context.Context, owner, repo, filepath string, opt *RepositoryContentGetOptions) (io.ReadCloser, error) { dir := path.Dir(filepath) filename := path.Base(filepath) - _, dirContents, _, err := s.GetContents(owner, repo, dir, opt) + _, dirContents, _, err := s.GetContents(ctx, owner, repo, dir, opt) if err != nil { return nil, err } @@ -126,11 +139,9 @@ func (s *RepositoriesService) DownloadContents(owner, repo, filepath string, opt // as possible, both result types will be returned but only one will contain a // value and the other will be nil. // -// GitHub API docs: http://developer.github.com/v3/repos/contents/#get-contents -func (s *RepositoriesService) GetContents(owner, repo, path string, opt *RepositoryContentGetOptions) (fileContent *RepositoryContent, directoryContent []*RepositoryContent, resp *Response, err error) { - // escape characters not allowed in URL path. This actually escapes a - // lot more, but seems to be harmless. - escapedPath := url.QueryEscape(path) +// GitHub API docs: https://developer.github.com/v3/repos/contents/#get-contents +func (s *RepositoriesService) GetContents(ctx context.Context, owner, repo, path string, opt *RepositoryContentGetOptions) (fileContent *RepositoryContent, directoryContent []*RepositoryContent, resp *Response, err error) { + escapedPath := (&url.URL{Path: path}).String() u := fmt.Sprintf("repos/%s/%s/contents/%s", owner, repo, escapedPath) u, err = addOptions(u, opt) if err != nil { @@ -141,7 +152,7 @@ func (s *RepositoriesService) GetContents(owner, repo, path string, opt *Reposit return nil, nil, nil, err } var rawJSON json.RawMessage - resp, err = s.client.Do(req, &rawJSON) + resp, err = s.client.Do(ctx, req, &rawJSON) if err != nil { return nil, nil, resp, err } @@ -159,55 +170,55 @@ func (s *RepositoriesService) GetContents(owner, repo, path string, opt *Reposit // CreateFile creates a new file in a repository at the given path and returns // the commit and file metadata. 
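+// A sketch (Message, Content and Branch are RepositoryContentFileOptions
+// fields defined upstream; Content holds raw bytes, which encoding/json
+// marshals as base64 for the API):
+//
+//	opt := &github.RepositoryContentFileOptions{
+//		Message: github.String("add hello.txt"),
+//		Content: []byte("hello, world\n"),
+//		Branch:  github.String("master"),
+//	}
+//	if _, _, err := client.Repositories.CreateFile(ctx, "owner", "repo", "hello.txt", opt); err != nil {
+//		log.Fatal(err)
+//	}
+//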
// -// GitHub API docs: http://developer.github.com/v3/repos/contents/#create-a-file -func (s *RepositoriesService) CreateFile(owner, repo, path string, opt *RepositoryContentFileOptions) (*RepositoryContentResponse, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/repos/contents/#create-a-file +func (s *RepositoriesService) CreateFile(ctx context.Context, owner, repo, path string, opt *RepositoryContentFileOptions) (*RepositoryContentResponse, *Response, error) { u := fmt.Sprintf("repos/%s/%s/contents/%s", owner, repo, path) req, err := s.client.NewRequest("PUT", u, opt) if err != nil { return nil, nil, err } createResponse := new(RepositoryContentResponse) - resp, err := s.client.Do(req, createResponse) + resp, err := s.client.Do(ctx, req, createResponse) if err != nil { return nil, resp, err } - return createResponse, resp, err + return createResponse, resp, nil } // UpdateFile updates a file in a repository at the given path and returns the // commit and file metadata. Requires the blob SHA of the file being updated. // -// GitHub API docs: http://developer.github.com/v3/repos/contents/#update-a-file -func (s *RepositoriesService) UpdateFile(owner, repo, path string, opt *RepositoryContentFileOptions) (*RepositoryContentResponse, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/repos/contents/#update-a-file +func (s *RepositoriesService) UpdateFile(ctx context.Context, owner, repo, path string, opt *RepositoryContentFileOptions) (*RepositoryContentResponse, *Response, error) { u := fmt.Sprintf("repos/%s/%s/contents/%s", owner, repo, path) req, err := s.client.NewRequest("PUT", u, opt) if err != nil { return nil, nil, err } updateResponse := new(RepositoryContentResponse) - resp, err := s.client.Do(req, updateResponse) + resp, err := s.client.Do(ctx, req, updateResponse) if err != nil { return nil, resp, err } - return updateResponse, resp, err + return updateResponse, resp, nil } // DeleteFile deletes a file from a repository and returns the commit. // Requires the blob SHA of the file to be deleted. // -// GitHub API docs: http://developer.github.com/v3/repos/contents/#delete-a-file -func (s *RepositoriesService) DeleteFile(owner, repo, path string, opt *RepositoryContentFileOptions) (*RepositoryContentResponse, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/repos/contents/#delete-a-file +func (s *RepositoriesService) DeleteFile(ctx context.Context, owner, repo, path string, opt *RepositoryContentFileOptions) (*RepositoryContentResponse, *Response, error) { u := fmt.Sprintf("repos/%s/%s/contents/%s", owner, repo, path) req, err := s.client.NewRequest("DELETE", u, opt) if err != nil { return nil, nil, err } deleteResponse := new(RepositoryContentResponse) - resp, err := s.client.Do(req, deleteResponse) + resp, err := s.client.Do(ctx, req, deleteResponse) if err != nil { return nil, resp, err } - return deleteResponse, resp, err + return deleteResponse, resp, nil } // archiveFormat is used to define the archive type when calling GetArchiveLink. @@ -225,8 +236,8 @@ const ( // repository. The archiveFormat can be specified by either the github.Tarball // or github.Zipball constant. 
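+// For example (a sketch; client and ctx assumed as before):
+//
+//	u, _, err := client.Repositories.GetArchiveLink(ctx, "owner", "repo", github.Tarball, nil)
+//	if err != nil {
+//		log.Fatal(err)
+//	}
+//	fmt.Println(u) // short-lived redirect target; fetch it to download the archive
+//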
// -// GitHub API docs: http://developer.github.com/v3/repos/contents/#get-archive-link -func (s *RepositoriesService) GetArchiveLink(owner, repo string, archiveformat archiveFormat, opt *RepositoryContentGetOptions) (*url.URL, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/repos/contents/#get-archive-link +func (s *RepositoriesService) GetArchiveLink(ctx context.Context, owner, repo string, archiveformat archiveFormat, opt *RepositoryContentGetOptions) (*url.URL, *Response, error) { u := fmt.Sprintf("repos/%s/%s/%s", owner, repo, archiveformat) if opt != nil && opt.Ref != "" { u += fmt.Sprintf("/%s", opt.Ref) @@ -238,12 +249,16 @@ func (s *RepositoriesService) GetArchiveLink(owner, repo string, archiveformat a var resp *http.Response // Use http.DefaultTransport if no custom Transport is configured if s.client.client.Transport == nil { - resp, err = http.DefaultTransport.RoundTrip(req) + resp, err = http.DefaultTransport.RoundTrip(req.WithContext(ctx)) } else { - resp, err = s.client.client.Transport.RoundTrip(req) + resp, err = s.client.client.Transport.RoundTrip(req.WithContext(ctx)) } - if err != nil || resp.StatusCode != http.StatusFound { - return nil, newResponse(resp), err + if err != nil { + return nil, nil, err + } + resp.Body.Close() + if resp.StatusCode != http.StatusFound { + return nil, newResponse(resp), fmt.Errorf("unexpected status code: %s", resp.Status) } parsedURL, err := url.Parse(resp.Header.Get("Location")) return parsedURL, newResponse(resp), err diff --git a/vendor/github.com/google/go-github/github/repos_deployments.go b/vendor/github.com/google/go-github/github/repos_deployments.go index fea43de9ea..9054ca9472 100644 --- a/vendor/github.com/google/go-github/github/repos_deployments.go +++ b/vendor/github.com/google/go-github/github/repos_deployments.go @@ -6,6 +6,7 @@ package github import ( + "context" "encoding/json" "fmt" ) @@ -29,13 +30,15 @@ type Deployment struct { // DeploymentRequest represents a deployment request type DeploymentRequest struct { - Ref *string `json:"ref,omitempty"` - Task *string `json:"task,omitempty"` - AutoMerge *bool `json:"auto_merge,omitempty"` - RequiredContexts *[]string `json:"required_contexts,omitempty"` - Payload *string `json:"payload,omitempty"` - Environment *string `json:"environment,omitempty"` - Description *string `json:"description,omitempty"` + Ref *string `json:"ref,omitempty"` + Task *string `json:"task,omitempty"` + AutoMerge *bool `json:"auto_merge,omitempty"` + RequiredContexts *[]string `json:"required_contexts,omitempty"` + Payload *string `json:"payload,omitempty"` + Environment *string `json:"environment,omitempty"` + Description *string `json:"description,omitempty"` + TransientEnvironment *bool `json:"transient_environment,omitempty"` + ProductionEnvironment *bool `json:"production_environment,omitempty"` } // DeploymentsListOptions specifies the optional parameters to the @@ -59,7 +62,7 @@ type DeploymentsListOptions struct { // ListDeployments lists the deployments of a repository. 
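+// For example, listing deployments for a single environment (a sketch;
+// the Environment filter is a DeploymentsListOptions field defined
+// upstream, outside this hunk):
+//
+//	opt := &github.DeploymentsListOptions{Environment: "production"}
+//	deployments, _, err := client.Repositories.ListDeployments(ctx, "owner", "repo", opt)
+//	if err != nil {
+//		log.Fatal(err)
+//	}
+//	fmt.Printf("%d deployments\n", len(deployments))
+//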
// // GitHub API docs: https://developer.github.com/v3/repos/deployments/#list-deployments -func (s *RepositoriesService) ListDeployments(owner, repo string, opt *DeploymentsListOptions) ([]Deployment, *Response, error) { +func (s *RepositoriesService) ListDeployments(ctx context.Context, owner, repo string, opt *DeploymentsListOptions) ([]*Deployment, *Response, error) { u := fmt.Sprintf("repos/%v/%v/deployments", owner, repo) u, err := addOptions(u, opt) if err != nil { @@ -71,19 +74,39 @@ func (s *RepositoriesService) ListDeployments(owner, repo string, opt *Deploymen return nil, nil, err } - deployments := new([]Deployment) - resp, err := s.client.Do(req, deployments) + var deployments []*Deployment + resp, err := s.client.Do(ctx, req, &deployments) if err != nil { return nil, resp, err } - return *deployments, resp, err + return deployments, resp, nil +} + +// GetDeployment returns a single deployment of a repository. +// +// GitHub API docs: https://developer.github.com/v3/repos/deployments/#get-a-single-deployment +func (s *RepositoriesService) GetDeployment(ctx context.Context, owner, repo string, deploymentID int) (*Deployment, *Response, error) { + u := fmt.Sprintf("repos/%v/%v/deployments/%v", owner, repo, deploymentID) + + req, err := s.client.NewRequest("GET", u, nil) + if err != nil { + return nil, nil, err + } + + deployment := new(Deployment) + resp, err := s.client.Do(ctx, req, deployment) + if err != nil { + return nil, resp, err + } + + return deployment, resp, nil } // CreateDeployment creates a new deployment for a repository. // // GitHub API docs: https://developer.github.com/v3/repos/deployments/#create-a-deployment -func (s *RepositoriesService) CreateDeployment(owner, repo string, request *DeploymentRequest) (*Deployment, *Response, error) { +func (s *RepositoriesService) CreateDeployment(ctx context.Context, owner, repo string, request *DeploymentRequest) (*Deployment, *Response, error) { u := fmt.Sprintf("repos/%v/%v/deployments", owner, repo) req, err := s.client.NewRequest("POST", u, request) @@ -91,19 +114,24 @@ func (s *RepositoriesService) CreateDeployment(owner, repo string, request *Depl return nil, nil, err } + // TODO: remove custom Accept header when deployment support fully launches + req.Header.Set("Accept", mediaTypeDeploymentStatusPreview) + d := new(Deployment) - resp, err := s.client.Do(req, d) + resp, err := s.client.Do(ctx, req, d) if err != nil { return nil, resp, err } - return d, resp, err + return d, resp, nil } // DeploymentStatus represents the status of a // particular deployment. type DeploymentStatus struct { - ID *int `json:"id,omitempty"` + ID *int `json:"id,omitempty"` + // State is the deployment state. + // Possible values are: "pending", "success", "failure", "error", "inactive". State *string `json:"state,omitempty"` Creator *User `json:"creator,omitempty"` Description *string `json:"description,omitempty"` @@ -116,15 +144,17 @@ type DeploymentStatus struct { // DeploymentStatusRequest represents a deployment request type DeploymentStatusRequest struct { - State *string `json:"state,omitempty"` - TargetURL *string `json:"target_url,omitempty"` - Description *string `json:"description,omitempty"` + State *string `json:"state,omitempty"` + LogURL *string `json:"log_url,omitempty"` + Description *string `json:"description,omitempty"` + EnvironmentURL *string `json:"environment_url,omitempty"` + AutoInactive *bool `json:"auto_inactive,omitempty"` } // ListDeploymentStatuses lists the statuses of a given deployment of a repository. 
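+// Statuses are reported against an existing deployment; a sketch using
+// the request type above (deploymentID assumed obtained from
+// CreateDeployment or ListDeployments):
+//
+//	req := &github.DeploymentStatusRequest{
+//		State:       github.String("success"),
+//		Description: github.String("deployed to production"),
+//	}
+//	if _, _, err := client.Repositories.CreateDeploymentStatus(ctx, "owner", "repo", deploymentID, req); err != nil {
+//		log.Fatal(err)
+//	}
+//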
// // GitHub API docs: https://developer.github.com/v3/repos/deployments/#list-deployment-statuses -func (s *RepositoriesService) ListDeploymentStatuses(owner, repo string, deployment int, opt *ListOptions) ([]DeploymentStatus, *Response, error) { +func (s *RepositoriesService) ListDeploymentStatuses(ctx context.Context, owner, repo string, deployment int, opt *ListOptions) ([]*DeploymentStatus, *Response, error) { u := fmt.Sprintf("repos/%v/%v/deployments/%v/statuses", owner, repo, deployment) u, err := addOptions(u, opt) if err != nil { @@ -136,19 +166,42 @@ func (s *RepositoriesService) ListDeploymentStatuses(owner, repo string, deploym return nil, nil, err } - statuses := new([]DeploymentStatus) - resp, err := s.client.Do(req, statuses) + var statuses []*DeploymentStatus + resp, err := s.client.Do(ctx, req, &statuses) if err != nil { return nil, resp, err } - return *statuses, resp, err + return statuses, resp, nil +} + +// GetDeploymentStatus returns a single deployment status of a repository. +// +// GitHub API docs: https://developer.github.com/v3/repos/deployments/#get-a-single-deployment-status +func (s *RepositoriesService) GetDeploymentStatus(ctx context.Context, owner, repo string, deploymentID, deploymentStatusID int) (*DeploymentStatus, *Response, error) { + u := fmt.Sprintf("repos/%v/%v/deployments/%v/statuses/%v", owner, repo, deploymentID, deploymentStatusID) + + req, err := s.client.NewRequest("GET", u, nil) + if err != nil { + return nil, nil, err + } + + // TODO: remove custom Accept header when deployment support fully launches + req.Header.Set("Accept", mediaTypeDeploymentStatusPreview) + + d := new(DeploymentStatus) + resp, err := s.client.Do(ctx, req, d) + if err != nil { + return nil, resp, err + } + + return d, resp, nil } // CreateDeploymentStatus creates a new status for a deployment. // // GitHub API docs: https://developer.github.com/v3/repos/deployments/#create-a-deployment-status -func (s *RepositoriesService) CreateDeploymentStatus(owner, repo string, deployment int, request *DeploymentStatusRequest) (*DeploymentStatus, *Response, error) { +func (s *RepositoriesService) CreateDeploymentStatus(ctx context.Context, owner, repo string, deployment int, request *DeploymentStatusRequest) (*DeploymentStatus, *Response, error) { u := fmt.Sprintf("repos/%v/%v/deployments/%v/statuses", owner, repo, deployment) req, err := s.client.NewRequest("POST", u, request) @@ -156,11 +209,14 @@ func (s *RepositoriesService) CreateDeploymentStatus(owner, repo string, deploym return nil, nil, err } + // TODO: remove custom Accept header when deployment support fully launches + req.Header.Set("Accept", mediaTypeDeploymentStatusPreview) + d := new(DeploymentStatus) - resp, err := s.client.Do(req, d) + resp, err := s.client.Do(ctx, req, d) if err != nil { return nil, resp, err } - return d, resp, err + return d, resp, nil } diff --git a/vendor/github.com/google/go-github/github/repos_forks.go b/vendor/github.com/google/go-github/github/repos_forks.go index 1fec8292c1..6b5e4eabba 100644 --- a/vendor/github.com/google/go-github/github/repos_forks.go +++ b/vendor/github.com/google/go-github/github/repos_forks.go @@ -5,13 +5,16 @@ package github -import "fmt" +import ( + "context" + "fmt" +) // RepositoryListForksOptions specifies the optional parameters to the // RepositoriesService.ListForks method. type RepositoryListForksOptions struct { - // How to sort the forks list. Possible values are: newest, oldest, - // watchers. Default is "newest". + // How to sort the forks list. 
Possible values are: newest, oldest, + // watchers. Default is "newest". Sort string `url:"sort,omitempty"` ListOptions @@ -19,8 +22,8 @@ type RepositoryListForksOptions struct { // ListForks lists the forks of the specified repository. // -// GitHub API docs: http://developer.github.com/v3/repos/forks/#list-forks -func (s *RepositoriesService) ListForks(owner, repo string, opt *RepositoryListForksOptions) ([]Repository, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/repos/forks/#list-forks +func (s *RepositoriesService) ListForks(ctx context.Context, owner, repo string, opt *RepositoryListForksOptions) ([]*Repository, *Response, error) { u := fmt.Sprintf("repos/%v/%v/forks", owner, repo) u, err := addOptions(u, opt) if err != nil { @@ -32,13 +35,13 @@ func (s *RepositoriesService) ListForks(owner, repo string, opt *RepositoryListF return nil, nil, err } - repos := new([]Repository) - resp, err := s.client.Do(req, repos) + var repos []*Repository + resp, err := s.client.Do(ctx, req, &repos) if err != nil { return nil, resp, err } - return *repos, resp, err + return repos, resp, nil } // RepositoryCreateForkOptions specifies the optional parameters to the @@ -50,8 +53,14 @@ type RepositoryCreateForkOptions struct { // CreateFork creates a fork of the specified repository. // -// GitHub API docs: http://developer.github.com/v3/repos/forks/#list-forks -func (s *RepositoriesService) CreateFork(owner, repo string, opt *RepositoryCreateForkOptions) (*Repository, *Response, error) { +// This method might return an *AcceptedError and a status code of +// 202. This is because this is the status that GitHub returns to signify that +// it is now computing creating the fork in a background task. +// A follow up request, after a delay of a second or so, should result +// in a successful request. +// +// GitHub API docs: https://developer.github.com/v3/repos/forks/#create-a-fork +func (s *RepositoriesService) CreateFork(ctx context.Context, owner, repo string, opt *RepositoryCreateForkOptions) (*Repository, *Response, error) { u := fmt.Sprintf("repos/%v/%v/forks", owner, repo) u, err := addOptions(u, opt) if err != nil { @@ -64,10 +73,10 @@ func (s *RepositoriesService) CreateFork(owner, repo string, opt *RepositoryCrea } fork := new(Repository) - resp, err := s.client.Do(req, fork) + resp, err := s.client.Do(ctx, req, fork) if err != nil { return nil, resp, err } - return fork, resp, err + return fork, resp, nil } diff --git a/vendor/github.com/google/go-github/github/repos_hooks.go b/vendor/github.com/google/go-github/github/repos_hooks.go index 4370c16069..67ce96ac34 100644 --- a/vendor/github.com/google/go-github/github/repos_hooks.go +++ b/vendor/github.com/google/go-github/github/repos_hooks.go @@ -6,14 +6,15 @@ package github import ( + "context" "fmt" "time" ) // WebHookPayload represents the data that is received from GitHub when a push -// event hook is triggered. The format of these payloads pre-date most of the +// event hook is triggered. The format of these payloads pre-date most of the // GitHub v3 API, so there are lots of minor incompatibilities with the types -// defined in the rest of the API. Therefore, several types are duplicated +// defined in the rest of the API. Therefore, several types are duplicated // here to account for these differences. 
// // GitHub API docs: https://help.github.com/articles/post-receive-hooks @@ -55,7 +56,7 @@ func (w WebHookCommit) String() string { } // WebHookAuthor represents the author or committer of a commit, as specified -// in a WebHookCommit. The commit author may not correspond to a GitHub User. +// in a WebHookCommit. The commit author may not correspond to a GitHub User. type WebHookAuthor struct { Email *string `json:"email,omitempty"` Name *string `json:"name,omitempty"` @@ -85,8 +86,8 @@ func (h Hook) String() string { // CreateHook creates a Hook for the specified repository. // Name and Config are required fields. // -// GitHub API docs: http://developer.github.com/v3/repos/hooks/#create-a-hook -func (s *RepositoriesService) CreateHook(owner, repo string, hook *Hook) (*Hook, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/repos/hooks/#create-a-hook +func (s *RepositoriesService) CreateHook(ctx context.Context, owner, repo string, hook *Hook) (*Hook, *Response, error) { u := fmt.Sprintf("repos/%v/%v/hooks", owner, repo) req, err := s.client.NewRequest("POST", u, hook) if err != nil { @@ -94,18 +95,18 @@ func (s *RepositoriesService) CreateHook(owner, repo string, hook *Hook) (*Hook, } h := new(Hook) - resp, err := s.client.Do(req, h) + resp, err := s.client.Do(ctx, req, h) if err != nil { return nil, resp, err } - return h, resp, err + return h, resp, nil } // ListHooks lists all Hooks for the specified repository. // -// GitHub API docs: http://developer.github.com/v3/repos/hooks/#list -func (s *RepositoriesService) ListHooks(owner, repo string, opt *ListOptions) ([]Hook, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/repos/hooks/#list +func (s *RepositoriesService) ListHooks(ctx context.Context, owner, repo string, opt *ListOptions) ([]*Hook, *Response, error) { u := fmt.Sprintf("repos/%v/%v/hooks", owner, repo) u, err := addOptions(u, opt) if err != nil { @@ -117,80 +118,75 @@ func (s *RepositoriesService) ListHooks(owner, repo string, opt *ListOptions) ([ return nil, nil, err } - hooks := new([]Hook) - resp, err := s.client.Do(req, hooks) + var hooks []*Hook + resp, err := s.client.Do(ctx, req, &hooks) if err != nil { return nil, resp, err } - return *hooks, resp, err + return hooks, resp, nil } // GetHook returns a single specified Hook. // -// GitHub API docs: http://developer.github.com/v3/repos/hooks/#get-single-hook -func (s *RepositoriesService) GetHook(owner, repo string, id int) (*Hook, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/repos/hooks/#get-single-hook +func (s *RepositoriesService) GetHook(ctx context.Context, owner, repo string, id int) (*Hook, *Response, error) { u := fmt.Sprintf("repos/%v/%v/hooks/%d", owner, repo, id) req, err := s.client.NewRequest("GET", u, nil) if err != nil { return nil, nil, err } hook := new(Hook) - resp, err := s.client.Do(req, hook) + resp, err := s.client.Do(ctx, req, hook) return hook, resp, err } // EditHook updates a specified Hook. 
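+// A sketch of creating and then editing a hook (client and ctx assumed;
+// the Hook fields used here are defined upstream, outside this hunk):
+//
+//	hook := &github.Hook{
+//		Name:   github.String("web"),
+//		Active: github.Bool(true),
+//		Events: []string{"push"},
+//		Config: map[string]interface{}{"url": "https://example.com/hook", "content_type": "json"},
+//	}
+//	created, _, err := client.Repositories.CreateHook(ctx, "owner", "repo", hook)
+//	if err == nil {
+//		created.Events = []string{"push", "pull_request"}
+//		_, _, err = client.Repositories.EditHook(ctx, "owner", "repo", *created.ID, created)
+//	}
+//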
// -// GitHub API docs: http://developer.github.com/v3/repos/hooks/#edit-a-hook -func (s *RepositoriesService) EditHook(owner, repo string, id int, hook *Hook) (*Hook, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/repos/hooks/#edit-a-hook +func (s *RepositoriesService) EditHook(ctx context.Context, owner, repo string, id int, hook *Hook) (*Hook, *Response, error) { u := fmt.Sprintf("repos/%v/%v/hooks/%d", owner, repo, id) req, err := s.client.NewRequest("PATCH", u, hook) if err != nil { return nil, nil, err } h := new(Hook) - resp, err := s.client.Do(req, h) + resp, err := s.client.Do(ctx, req, h) return h, resp, err } // DeleteHook deletes a specified Hook. // -// GitHub API docs: http://developer.github.com/v3/repos/hooks/#delete-a-hook -func (s *RepositoriesService) DeleteHook(owner, repo string, id int) (*Response, error) { +// GitHub API docs: https://developer.github.com/v3/repos/hooks/#delete-a-hook +func (s *RepositoriesService) DeleteHook(ctx context.Context, owner, repo string, id int) (*Response, error) { u := fmt.Sprintf("repos/%v/%v/hooks/%d", owner, repo, id) req, err := s.client.NewRequest("DELETE", u, nil) if err != nil { return nil, err } - return s.client.Do(req, nil) + return s.client.Do(ctx, req, nil) } // PingHook triggers a 'ping' event to be sent to the Hook. // // GitHub API docs: https://developer.github.com/v3/repos/hooks/#ping-a-hook -func (s *RepositoriesService) PingHook(owner, repo string, id int) (*Response, error) { +func (s *RepositoriesService) PingHook(ctx context.Context, owner, repo string, id int) (*Response, error) { u := fmt.Sprintf("repos/%v/%v/hooks/%d/pings", owner, repo, id) req, err := s.client.NewRequest("POST", u, nil) if err != nil { return nil, err } - return s.client.Do(req, nil) + return s.client.Do(ctx, req, nil) } // TestHook triggers a test Hook by github. // -// GitHub API docs: http://developer.github.com/v3/repos/hooks/#test-a-push-hook -func (s *RepositoriesService) TestHook(owner, repo string, id int) (*Response, error) { +// GitHub API docs: https://developer.github.com/v3/repos/hooks/#test-a-push-hook +func (s *RepositoriesService) TestHook(ctx context.Context, owner, repo string, id int) (*Response, error) { u := fmt.Sprintf("repos/%v/%v/hooks/%d/tests", owner, repo, id) req, err := s.client.NewRequest("POST", u, nil) if err != nil { return nil, err } - return s.client.Do(req, nil) -} - -// ListServiceHooks is deprecated. Use Client.ListServiceHooks instead. -func (s *RepositoriesService) ListServiceHooks() ([]ServiceHook, *Response, error) { - return s.client.ListServiceHooks() + return s.client.Do(ctx, req, nil) } diff --git a/vendor/github.com/google/go-github/github/repos_invitations.go b/vendor/github.com/google/go-github/github/repos_invitations.go new file mode 100644 index 0000000000..a803a12da1 --- /dev/null +++ b/vendor/github.com/google/go-github/github/repos_invitations.go @@ -0,0 +1,94 @@ +// Copyright 2016 The go-github AUTHORS. All rights reserved. +// +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package github + +import ( + "context" + "fmt" +) + +// RepositoryInvitation represents an invitation to collaborate on a repo. +type RepositoryInvitation struct { + ID *int `json:"id,omitempty"` + Repo *Repository `json:"repository,omitempty"` + Invitee *User `json:"invitee,omitempty"` + Inviter *User `json:"inviter,omitempty"` + + // Permissions represents the permissions that the associated user will have + // on the repository. 
Possible values are: "read", "write", "admin". + Permissions *string `json:"permissions,omitempty"` + CreatedAt *Timestamp `json:"created_at,omitempty"` + URL *string `json:"url,omitempty"` + HTMLURL *string `json:"html_url,omitempty"` +} + +// ListInvitations lists all currently-open repository invitations. +// +// GitHub API docs: https://developer.github.com/v3/repos/invitations/#list-invitations-for-a-repository +func (s *RepositoriesService) ListInvitations(ctx context.Context, owner, repo string, opt *ListOptions) ([]*RepositoryInvitation, *Response, error) { + u := fmt.Sprintf("repos/%v/%v/invitations", owner, repo) + u, err := addOptions(u, opt) + if err != nil { + return nil, nil, err + } + + req, err := s.client.NewRequest("GET", u, nil) + if err != nil { + return nil, nil, err + } + + // TODO: remove custom Accept header when this API fully launches. + req.Header.Set("Accept", mediaTypeRepositoryInvitationsPreview) + + invites := []*RepositoryInvitation{} + resp, err := s.client.Do(ctx, req, &invites) + if err != nil { + return nil, resp, err + } + + return invites, resp, nil +} + +// DeleteInvitation deletes a repository invitation. +// +// GitHub API docs: https://developer.github.com/v3/repos/invitations/#delete-a-repository-invitation +func (s *RepositoriesService) DeleteInvitation(ctx context.Context, owner, repo string, invitationID int) (*Response, error) { + u := fmt.Sprintf("repos/%v/%v/invitations/%v", owner, repo, invitationID) + req, err := s.client.NewRequest("DELETE", u, nil) + if err != nil { + return nil, err + } + + // TODO: remove custom Accept header when this API fully launches. + req.Header.Set("Accept", mediaTypeRepositoryInvitationsPreview) + + return s.client.Do(ctx, req, nil) +} + +// UpdateInvitation updates the permissions associated with a repository +// invitation. +// +// permissions represents the permissions that the associated user will have +// on the repository. Possible values are: "read", "write", "admin". +// +// GitHub API docs: https://developer.github.com/v3/repos/invitations/#update-a-repository-invitation +func (s *RepositoriesService) UpdateInvitation(ctx context.Context, owner, repo string, invitationID int, permissions string) (*RepositoryInvitation, *Response, error) { + opts := &struct { + Permissions string `json:"permissions"` + }{Permissions: permissions} + u := fmt.Sprintf("repos/%v/%v/invitations/%v", owner, repo, invitationID) + req, err := s.client.NewRequest("PATCH", u, opts) + if err != nil { + return nil, nil, err + } + + // TODO: remove custom Accept header when this API fully launches. + req.Header.Set("Accept", mediaTypeRepositoryInvitationsPreview) + + invite := &RepositoryInvitation{} + resp, err := s.client.Do(ctx, req, invite) + return invite, resp, err +} diff --git a/vendor/github.com/google/go-github/github/repos_keys.go b/vendor/github.com/google/go-github/github/repos_keys.go index 0d12ec9a71..f5a865813a 100644 --- a/vendor/github.com/google/go-github/github/repos_keys.go +++ b/vendor/github.com/google/go-github/github/repos_keys.go @@ -5,14 +5,17 @@ package github -import "fmt" +import ( + "context" + "fmt" +) // The Key type is defined in users_keys.go // ListKeys lists the deploy keys for a repository. 
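+// For example, registering a read-only deploy key (a sketch; Title, Key
+// and ReadOnly are fields of the Key type in users_keys.go, and the key
+// material shown is a placeholder):
+//
+//	key := &github.Key{
+//		Title:    github.String("ci deploy key"),
+//		Key:      github.String("ssh-rsa AAAA..."),
+//		ReadOnly: github.Bool(true),
+//	}
+//	if _, _, err := client.Repositories.CreateKey(ctx, "owner", "repo", key); err != nil {
+//		log.Fatal(err)
+//	}
+//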
// -// GitHub API docs: http://developer.github.com/v3/repos/keys/#list -func (s *RepositoriesService) ListKeys(owner string, repo string, opt *ListOptions) ([]Key, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/repos/keys/#list +func (s *RepositoriesService) ListKeys(ctx context.Context, owner string, repo string, opt *ListOptions) ([]*Key, *Response, error) { u := fmt.Sprintf("repos/%v/%v/keys", owner, repo) u, err := addOptions(u, opt) if err != nil { @@ -24,19 +27,19 @@ func (s *RepositoriesService) ListKeys(owner string, repo string, opt *ListOptio return nil, nil, err } - keys := new([]Key) - resp, err := s.client.Do(req, keys) + var keys []*Key + resp, err := s.client.Do(ctx, req, &keys) if err != nil { return nil, resp, err } - return *keys, resp, err + return keys, resp, nil } // GetKey fetches a single deploy key. // -// GitHub API docs: http://developer.github.com/v3/repos/keys/#get -func (s *RepositoriesService) GetKey(owner string, repo string, id int) (*Key, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/repos/keys/#get +func (s *RepositoriesService) GetKey(ctx context.Context, owner string, repo string, id int) (*Key, *Response, error) { u := fmt.Sprintf("repos/%v/%v/keys/%v", owner, repo, id) req, err := s.client.NewRequest("GET", u, nil) @@ -45,18 +48,18 @@ func (s *RepositoriesService) GetKey(owner string, repo string, id int) (*Key, * } key := new(Key) - resp, err := s.client.Do(req, key) + resp, err := s.client.Do(ctx, req, key) if err != nil { return nil, resp, err } - return key, resp, err + return key, resp, nil } // CreateKey adds a deploy key for a repository. // -// GitHub API docs: http://developer.github.com/v3/repos/keys/#create -func (s *RepositoriesService) CreateKey(owner string, repo string, key *Key) (*Key, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/repos/keys/#create +func (s *RepositoriesService) CreateKey(ctx context.Context, owner string, repo string, key *Key) (*Key, *Response, error) { u := fmt.Sprintf("repos/%v/%v/keys", owner, repo) req, err := s.client.NewRequest("POST", u, key) @@ -65,18 +68,18 @@ func (s *RepositoriesService) CreateKey(owner string, repo string, key *Key) (*K } k := new(Key) - resp, err := s.client.Do(req, k) + resp, err := s.client.Do(ctx, req, k) if err != nil { return nil, resp, err } - return k, resp, err + return k, resp, nil } // EditKey edits a deploy key. // -// GitHub API docs: http://developer.github.com/v3/repos/keys/#edit -func (s *RepositoriesService) EditKey(owner string, repo string, id int, key *Key) (*Key, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/repos/keys/#edit +func (s *RepositoriesService) EditKey(ctx context.Context, owner string, repo string, id int, key *Key) (*Key, *Response, error) { u := fmt.Sprintf("repos/%v/%v/keys/%v", owner, repo, id) req, err := s.client.NewRequest("PATCH", u, key) @@ -85,18 +88,18 @@ func (s *RepositoriesService) EditKey(owner string, repo string, id int, key *Ke } k := new(Key) - resp, err := s.client.Do(req, k) + resp, err := s.client.Do(ctx, req, k) if err != nil { return nil, resp, err } - return k, resp, err + return k, resp, nil } // DeleteKey deletes a deploy key. 
// -// GitHub API docs: http://developer.github.com/v3/repos/keys/#delete -func (s *RepositoriesService) DeleteKey(owner string, repo string, id int) (*Response, error) { +// GitHub API docs: https://developer.github.com/v3/repos/keys/#delete +func (s *RepositoriesService) DeleteKey(ctx context.Context, owner string, repo string, id int) (*Response, error) { u := fmt.Sprintf("repos/%v/%v/keys/%v", owner, repo, id) req, err := s.client.NewRequest("DELETE", u, nil) @@ -104,5 +107,5 @@ func (s *RepositoriesService) DeleteKey(owner string, repo string, id int) (*Res return nil, err } - return s.client.Do(req, nil) + return s.client.Do(ctx, req, nil) } diff --git a/vendor/github.com/google/go-github/github/repos_merging.go b/vendor/github.com/google/go-github/github/repos_merging.go index 31f8313ea7..04383c1ae3 100644 --- a/vendor/github.com/google/go-github/github/repos_merging.go +++ b/vendor/github.com/google/go-github/github/repos_merging.go @@ -6,6 +6,7 @@ package github import ( + "context" "fmt" ) @@ -20,7 +21,7 @@ type RepositoryMergeRequest struct { // Merge a branch in the specified repository. // // GitHub API docs: https://developer.github.com/v3/repos/merging/#perform-a-merge -func (s *RepositoriesService) Merge(owner, repo string, request *RepositoryMergeRequest) (*RepositoryCommit, *Response, error) { +func (s *RepositoriesService) Merge(ctx context.Context, owner, repo string, request *RepositoryMergeRequest) (*RepositoryCommit, *Response, error) { u := fmt.Sprintf("repos/%v/%v/merges", owner, repo) req, err := s.client.NewRequest("POST", u, request) if err != nil { @@ -28,10 +29,10 @@ func (s *RepositoriesService) Merge(owner, repo string, request *RepositoryMerge } commit := new(RepositoryCommit) - resp, err := s.client.Do(req, commit) + resp, err := s.client.Do(ctx, req, commit) if err != nil { return nil, resp, err } - return commit, resp, err + return commit, resp, nil } diff --git a/vendor/github.com/google/go-github/github/repos_pages.go b/vendor/github.com/google/go-github/github/repos_pages.go index 2384eaf6be..3d19b43db5 100644 --- a/vendor/github.com/google/go-github/github/repos_pages.go +++ b/vendor/github.com/google/go-github/github/repos_pages.go @@ -5,7 +5,10 @@ package github -import "fmt" +import ( + "context" + "fmt" +) // Pages represents a GitHub Pages site configuration. type Pages struct { @@ -13,6 +16,7 @@ type Pages struct { Status *string `json:"status,omitempty"` CNAME *string `json:"cname,omitempty"` Custom404 *bool `json:"custom_404,omitempty"` + HTMLURL *string `json:"html_url,omitempty"` } // PagesError represents a build error for a GitHub Pages site. @@ -29,51 +33,54 @@ type PagesBuild struct { Commit *string `json:"commit,omitempty"` Duration *int `json:"duration,omitempty"` CreatedAt *Timestamp `json:"created_at,omitempty"` - UpdatedAt *Timestamp `json:"created_at,omitempty"` + UpdatedAt *Timestamp `json:"updated_at,omitempty"` } // GetPagesInfo fetches information about a GitHub Pages site. // // GitHub API docs: https://developer.github.com/v3/repos/pages/#get-information-about-a-pages-site -func (s *RepositoriesService) GetPagesInfo(owner string, repo string) (*Pages, *Response, error) { +func (s *RepositoriesService) GetPagesInfo(ctx context.Context, owner, repo string) (*Pages, *Response, error) { u := fmt.Sprintf("repos/%v/%v/pages", owner, repo) req, err := s.client.NewRequest("GET", u, nil) if err != nil { return nil, nil, err } + // TODO: remove custom Accept header when this API fully launches. 
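+	// (As with the other preview opt-ins in this package, requests sent
+	// without this header fall back to the stable v3 media type and may
+	// omit preview-only fields.)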
+ req.Header.Set("Accept", mediaTypePagesPreview) + site := new(Pages) - resp, err := s.client.Do(req, site) + resp, err := s.client.Do(ctx, req, site) if err != nil { return nil, resp, err } - return site, resp, err + return site, resp, nil } // ListPagesBuilds lists the builds for a GitHub Pages site. // // GitHub API docs: https://developer.github.com/v3/repos/pages/#list-pages-builds -func (s *RepositoriesService) ListPagesBuilds(owner string, repo string) ([]PagesBuild, *Response, error) { +func (s *RepositoriesService) ListPagesBuilds(ctx context.Context, owner, repo string) ([]*PagesBuild, *Response, error) { u := fmt.Sprintf("repos/%v/%v/pages/builds", owner, repo) req, err := s.client.NewRequest("GET", u, nil) if err != nil { return nil, nil, err } - var pages []PagesBuild - resp, err := s.client.Do(req, &pages) + var pages []*PagesBuild + resp, err := s.client.Do(ctx, req, &pages) if err != nil { return nil, resp, err } - return pages, resp, err + return pages, resp, nil } // GetLatestPagesBuild fetches the latest build information for a GitHub pages site. // // GitHub API docs: https://developer.github.com/v3/repos/pages/#list-latest-pages-build -func (s *RepositoriesService) GetLatestPagesBuild(owner string, repo string) (*PagesBuild, *Response, error) { +func (s *RepositoriesService) GetLatestPagesBuild(ctx context.Context, owner, repo string) (*PagesBuild, *Response, error) { u := fmt.Sprintf("repos/%v/%v/pages/builds/latest", owner, repo) req, err := s.client.NewRequest("GET", u, nil) if err != nil { @@ -81,10 +88,51 @@ func (s *RepositoriesService) GetLatestPagesBuild(owner string, repo string) (*P } build := new(PagesBuild) - resp, err := s.client.Do(req, build) + resp, err := s.client.Do(ctx, req, build) if err != nil { return nil, resp, err } - return build, resp, err + return build, resp, nil +} + +// GetPageBuild fetches the specific build information for a GitHub pages site. +// +// GitHub API docs: https://developer.github.com/v3/repos/pages/#list-a-specific-pages-build +func (s *RepositoriesService) GetPageBuild(ctx context.Context, owner, repo string, id int) (*PagesBuild, *Response, error) { + u := fmt.Sprintf("repos/%v/%v/pages/builds/%v", owner, repo, id) + req, err := s.client.NewRequest("GET", u, nil) + if err != nil { + return nil, nil, err + } + + build := new(PagesBuild) + resp, err := s.client.Do(ctx, req, build) + if err != nil { + return nil, resp, err + } + + return build, resp, nil +} + +// RequestPageBuild requests a build of a GitHub Pages site without needing to push new commit. +// +// GitHub API docs: https://developer.github.com/v3/repos/pages/#request-a-page-build +func (s *RepositoriesService) RequestPageBuild(ctx context.Context, owner, repo string) (*PagesBuild, *Response, error) { + u := fmt.Sprintf("repos/%v/%v/pages/builds", owner, repo) + req, err := s.client.NewRequest("POST", u, nil) + if err != nil { + return nil, nil, err + } + + // TODO: remove custom Accept header when this API fully launches. + req.Header.Set("Accept", mediaTypePagesPreview) + + build := new(PagesBuild) + resp, err := s.client.Do(ctx, req, build) + if err != nil { + return nil, resp, err + } + + return build, resp, nil } diff --git a/vendor/github.com/google/go-github/github/repos_projects.go b/vendor/github.com/google/go-github/github/repos_projects.go new file mode 100644 index 0000000000..9e1a4dbb2d --- /dev/null +++ b/vendor/github.com/google/go-github/github/repos_projects.go @@ -0,0 +1,60 @@ +// Copyright 2016 The go-github AUTHORS. All rights reserved. 
+// +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package github + +import ( + "context" + "fmt" +) + +// ListProjects lists the projects for a repo. +// +// GitHub API docs: https://developer.github.com/v3/projects/#list-repository-projects +func (s *RepositoriesService) ListProjects(ctx context.Context, owner, repo string, opt *ListOptions) ([]*Project, *Response, error) { + u := fmt.Sprintf("repos/%v/%v/projects", owner, repo) + u, err := addOptions(u, opt) + if err != nil { + return nil, nil, err + } + + req, err := s.client.NewRequest("GET", u, nil) + if err != nil { + return nil, nil, err + } + + // TODO: remove custom Accept header when this API fully launches. + req.Header.Set("Accept", mediaTypeProjectsPreview) + + projects := []*Project{} + resp, err := s.client.Do(ctx, req, &projects) + if err != nil { + return nil, resp, err + } + + return projects, resp, nil +} + +// CreateProject creates a GitHub Project for the specified repository. +// +// GitHub API docs: https://developer.github.com/v3/projects/#create-a-repository-project +func (s *RepositoriesService) CreateProject(ctx context.Context, owner, repo string, opt *ProjectOptions) (*Project, *Response, error) { + u := fmt.Sprintf("repos/%v/%v/projects", owner, repo) + req, err := s.client.NewRequest("POST", u, opt) + if err != nil { + return nil, nil, err + } + + // TODO: remove custom Accept header when this API fully launches. + req.Header.Set("Accept", mediaTypeProjectsPreview) + + project := &Project{} + resp, err := s.client.Do(ctx, req, project) + if err != nil { + return nil, resp, err + } + + return project, resp, nil +} diff --git a/vendor/github.com/google/go-github/github/repos_releases.go b/vendor/github.com/google/go-github/github/repos_releases.go index 9f6133dcd1..49fec8324e 100644 --- a/vendor/github.com/google/go-github/github/repos_releases.go +++ b/vendor/github.com/google/go-github/github/repos_releases.go @@ -6,12 +6,15 @@ package github import ( + "context" "errors" "fmt" "io" "mime" + "net/http" "os" "path/filepath" + "strings" ) // RepositoryRelease represents a GitHub release in a repository. @@ -32,14 +35,14 @@ type RepositoryRelease struct { UploadURL *string `json:"upload_url,omitempty"` ZipballURL *string `json:"zipball_url,omitempty"` TarballURL *string `json:"tarball_url,omitempty"` - Author *CommitAuthor `json:"author,omitempty"` + Author *User `json:"author,omitempty"` } func (r RepositoryRelease) String() string { return Stringify(r) } -// ReleaseAsset represents a Github release asset in a repository. +// ReleaseAsset represents a GitHub release asset in a repository. type ReleaseAsset struct { ID *int `json:"id,omitempty"` URL *string `json:"url,omitempty"` @@ -61,8 +64,8 @@ func (r ReleaseAsset) String() string { // ListReleases lists the releases for a repository. 
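For the new repository-projects file below, a usage sketch: the `ProjectOptions` fields `Name` and `Body` are assumed from the library (they are not shown in this diff), and the owner/repo values are hypothetical. The preview Accept header is set inside the method:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/google/go-github/github"
)

func main() {
	ctx := context.Background()
	client := github.NewClient(nil) // project creation requires an authenticated client

	opt := &github.ProjectOptions{
		Name: "Release tracking",           // hypothetical project name
		Body: "Tasks for the next release", // hypothetical description
	}
	project, _, err := client.Repositories.CreateProject(ctx, "example-owner", "example-repo", opt)
	if err != nil {
		log.Fatal(err)
	}
	if project.Name != nil {
		fmt.Println("created project:", *project.Name)
	}
}
```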
// -// GitHub API docs: http://developer.github.com/v3/repos/releases/#list-releases-for-a-repository -func (s *RepositoriesService) ListReleases(owner, repo string, opt *ListOptions) ([]RepositoryRelease, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/repos/releases/#list-releases-for-a-repository +func (s *RepositoriesService) ListReleases(ctx context.Context, owner, repo string, opt *ListOptions) ([]*RepositoryRelease, *Response, error) { u := fmt.Sprintf("repos/%s/%s/releases", owner, repo) u, err := addOptions(u, opt) if err != nil { @@ -74,56 +77,56 @@ func (s *RepositoriesService) ListReleases(owner, repo string, opt *ListOptions) return nil, nil, err } - releases := new([]RepositoryRelease) - resp, err := s.client.Do(req, releases) + var releases []*RepositoryRelease + resp, err := s.client.Do(ctx, req, &releases) if err != nil { return nil, resp, err } - return *releases, resp, err + return releases, resp, nil } // GetRelease fetches a single release. // -// GitHub API docs: http://developer.github.com/v3/repos/releases/#get-a-single-release -func (s *RepositoriesService) GetRelease(owner, repo string, id int) (*RepositoryRelease, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/repos/releases/#get-a-single-release +func (s *RepositoriesService) GetRelease(ctx context.Context, owner, repo string, id int) (*RepositoryRelease, *Response, error) { u := fmt.Sprintf("repos/%s/%s/releases/%d", owner, repo, id) - return s.getSingleRelease(u) + return s.getSingleRelease(ctx, u) } // GetLatestRelease fetches the latest published release for the repository. // // GitHub API docs: https://developer.github.com/v3/repos/releases/#get-the-latest-release -func (s *RepositoriesService) GetLatestRelease(owner, repo string) (*RepositoryRelease, *Response, error) { +func (s *RepositoriesService) GetLatestRelease(ctx context.Context, owner, repo string) (*RepositoryRelease, *Response, error) { u := fmt.Sprintf("repos/%s/%s/releases/latest", owner, repo) - return s.getSingleRelease(u) + return s.getSingleRelease(ctx, u) } // GetReleaseByTag fetches a release with the specified tag. // // GitHub API docs: https://developer.github.com/v3/repos/releases/#get-a-release-by-tag-name -func (s *RepositoriesService) GetReleaseByTag(owner, repo, tag string) (*RepositoryRelease, *Response, error) { +func (s *RepositoriesService) GetReleaseByTag(ctx context.Context, owner, repo, tag string) (*RepositoryRelease, *Response, error) { u := fmt.Sprintf("repos/%s/%s/releases/tags/%s", owner, repo, tag) - return s.getSingleRelease(u) + return s.getSingleRelease(ctx, u) } -func (s *RepositoriesService) getSingleRelease(url string) (*RepositoryRelease, *Response, error) { +func (s *RepositoriesService) getSingleRelease(ctx context.Context, url string) (*RepositoryRelease, *Response, error) { req, err := s.client.NewRequest("GET", url, nil) if err != nil { return nil, nil, err } release := new(RepositoryRelease) - resp, err := s.client.Do(req, release) + resp, err := s.client.Do(ctx, req, release) if err != nil { return nil, resp, err } - return release, resp, err + return release, resp, nil } // CreateRelease adds a new release for a repository. 
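The three single-release getters (`GetRelease`, `GetLatestRelease`, `GetReleaseByTag`) all funnel through the shared `getSingleRelease` helper. A sketch of the most common one; `TagName` is a standard `RepositoryRelease` field, though not shown in this hunk:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/google/go-github/github"
)

func main() {
	ctx := context.Background()
	client := github.NewClient(nil)

	rel, _, err := client.Repositories.GetLatestRelease(ctx, "hashicorp", "terraform")
	if err != nil {
		log.Fatal(err)
	}
	if rel.TagName != nil {
		fmt.Println("latest release:", *rel.TagName)
	}
}
```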
// -// GitHub API docs : http://developer.github.com/v3/repos/releases/#create-a-release -func (s *RepositoriesService) CreateRelease(owner, repo string, release *RepositoryRelease) (*RepositoryRelease, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/repos/releases/#create-a-release +func (s *RepositoriesService) CreateRelease(ctx context.Context, owner, repo string, release *RepositoryRelease) (*RepositoryRelease, *Response, error) { u := fmt.Sprintf("repos/%s/%s/releases", owner, repo) req, err := s.client.NewRequest("POST", u, release) @@ -132,17 +135,17 @@ func (s *RepositoriesService) CreateRelease(owner, repo string, release *Reposit } r := new(RepositoryRelease) - resp, err := s.client.Do(req, r) + resp, err := s.client.Do(ctx, req, r) if err != nil { return nil, resp, err } - return r, resp, err + return r, resp, nil } // EditRelease edits a repository release. // -// GitHub API docs : http://developer.github.com/v3/repos/releases/#edit-a-release -func (s *RepositoriesService) EditRelease(owner, repo string, id int, release *RepositoryRelease) (*RepositoryRelease, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/repos/releases/#edit-a-release +func (s *RepositoriesService) EditRelease(ctx context.Context, owner, repo string, id int, release *RepositoryRelease) (*RepositoryRelease, *Response, error) { u := fmt.Sprintf("repos/%s/%s/releases/%d", owner, repo, id) req, err := s.client.NewRequest("PATCH", u, release) @@ -151,30 +154,30 @@ func (s *RepositoriesService) EditRelease(owner, repo string, id int, release *R } r := new(RepositoryRelease) - resp, err := s.client.Do(req, r) + resp, err := s.client.Do(ctx, req, r) if err != nil { return nil, resp, err } - return r, resp, err + return r, resp, nil } // DeleteRelease delete a single release from a repository. // -// GitHub API docs : http://developer.github.com/v3/repos/releases/#delete-a-release -func (s *RepositoriesService) DeleteRelease(owner, repo string, id int) (*Response, error) { +// GitHub API docs: https://developer.github.com/v3/repos/releases/#delete-a-release +func (s *RepositoriesService) DeleteRelease(ctx context.Context, owner, repo string, id int) (*Response, error) { u := fmt.Sprintf("repos/%s/%s/releases/%d", owner, repo, id) req, err := s.client.NewRequest("DELETE", u, nil) if err != nil { return nil, err } - return s.client.Do(req, nil) + return s.client.Do(ctx, req, nil) } // ListReleaseAssets lists the release's assets. // -// GitHub API docs : http://developer.github.com/v3/repos/releases/#list-assets-for-a-release -func (s *RepositoriesService) ListReleaseAssets(owner, repo string, id int, opt *ListOptions) ([]ReleaseAsset, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/repos/releases/#list-assets-for-a-release +func (s *RepositoriesService) ListReleaseAssets(ctx context.Context, owner, repo string, id int, opt *ListOptions) ([]*ReleaseAsset, *Response, error) { u := fmt.Sprintf("repos/%s/%s/releases/%d/assets", owner, repo, id) u, err := addOptions(u, opt) if err != nil { @@ -186,18 +189,18 @@ func (s *RepositoriesService) ListReleaseAssets(owner, repo string, id int, opt return nil, nil, err } - assets := new([]ReleaseAsset) - resp, err := s.client.Do(req, assets) + var assets []*ReleaseAsset + resp, err := s.client.Do(ctx, req, &assets) if err != nil { - return nil, resp, nil + return nil, resp, err } - return *assets, resp, err + return assets, resp, nil } // GetReleaseAsset fetches a single release asset. 
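A sketch of `CreateRelease` with the new signature. The owner/repo values are hypothetical, and the `RepositoryRelease` fields used here (`TagName`, `Name`, `Body`, `Draft`, `Prerelease`) are assumed from the library rather than shown in this diff:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/google/go-github/github"
)

func main() {
	ctx := context.Background()
	client := github.NewClient(nil) // creating releases requires push access

	release := &github.RepositoryRelease{
		TagName:    github.String("v0.9.2"),
		Name:       github.String("0.9.2"),
		Body:       github.String("See CHANGELOG.md for details."),
		Draft:      github.Bool(true), // publish later, once assets are attached
		Prerelease: github.Bool(false),
	}
	r, _, err := client.Repositories.CreateRelease(ctx, "example-owner", "example-repo", release)
	if err != nil {
		log.Fatal(err)
	}
	if r.ID != nil {
		fmt.Println("created release:", *r.ID)
	}
}
```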
// -// GitHub API docs : http://developer.github.com/v3/repos/releases/#get-a-single-release-asset -func (s *RepositoriesService) GetReleaseAsset(owner, repo string, id int) (*ReleaseAsset, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/repos/releases/#get-a-single-release-asset +func (s *RepositoriesService) GetReleaseAsset(ctx context.Context, owner, repo string, id int) (*ReleaseAsset, *Response, error) { u := fmt.Sprintf("repos/%s/%s/releases/assets/%d", owner, repo, id) req, err := s.client.NewRequest("GET", u, nil) @@ -206,40 +209,61 @@ func (s *RepositoriesService) GetReleaseAsset(owner, repo string, id int) (*Rele } asset := new(ReleaseAsset) - resp, err := s.client.Do(req, asset) + resp, err := s.client.Do(ctx, req, asset) if err != nil { - return nil, resp, nil + return nil, resp, err } - return asset, resp, err + return asset, resp, nil } -// DownloadReleaseAsset downloads a release asset. +// DownloadReleaseAsset downloads a release asset or returns a redirect URL. // // DownloadReleaseAsset returns an io.ReadCloser that reads the contents of the // specified release asset. It is the caller's responsibility to close the ReadCloser. +// If a redirect is returned, the redirect URL will be returned as a string instead +// of the io.ReadCloser. Exactly one of rc and redirectURL will be zero. // -// GitHub API docs : http://developer.github.com/v3/repos/releases/#get-a-single-release-asset -func (s *RepositoriesService) DownloadReleaseAsset(owner, repo string, id int) (io.ReadCloser, error) { +// GitHub API docs: https://developer.github.com/v3/repos/releases/#get-a-single-release-asset +func (s *RepositoriesService) DownloadReleaseAsset(ctx context.Context, owner, repo string, id int) (rc io.ReadCloser, redirectURL string, err error) { u := fmt.Sprintf("repos/%s/%s/releases/assets/%d", owner, repo, id) req, err := s.client.NewRequest("GET", u, nil) if err != nil { - return nil, err + return nil, "", err } req.Header.Set("Accept", defaultMediaType) - resp, err := s.client.client.Do(req) + s.client.clientMu.Lock() + defer s.client.clientMu.Unlock() + + var loc string + saveRedirect := s.client.client.CheckRedirect + s.client.client.CheckRedirect = func(req *http.Request, via []*http.Request) error { + loc = req.URL.String() + return errors.New("disable redirect") + } + defer func() { s.client.client.CheckRedirect = saveRedirect }() + + resp, err := s.client.client.Do(req.WithContext(ctx)) if err != nil { - return nil, err + if !strings.Contains(err.Error(), "disable redirect") { + return nil, "", err + } + return nil, loc, nil // Intentionally return no error with valid redirect URL. } - return resp.Body, nil + if err := CheckResponse(resp); err != nil { + resp.Body.Close() + return nil, "", err + } + + return resp.Body, "", nil } // EditReleaseAsset edits a repository release asset. 
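The reworked `DownloadReleaseAsset` below is the subtlest hunk in this file: it temporarily swaps the client's `CheckRedirect` hook so a CDN redirect surfaces as a URL instead of being followed with API credentials attached. Callers must handle both outcomes; a sketch with a hypothetical asset ID and destination file:

```go
package main

import (
	"context"
	"io"
	"log"
	"net/http"
	"os"

	"github.com/google/go-github/github"
)

func main() {
	ctx := context.Background()
	client := github.NewClient(nil)

	out, err := os.Create("asset.bin") // hypothetical destination
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()

	// On success, exactly one of rc and redirectURL is set.
	rc, redirectURL, err := client.Repositories.DownloadReleaseAsset(ctx, "example-owner", "example-repo", 1234)
	if err != nil {
		log.Fatal(err)
	}
	if rc != nil {
		// GitHub served the bytes directly.
		defer rc.Close()
		if _, err := io.Copy(out, rc); err != nil {
			log.Fatal(err)
		}
		return
	}
	// Follow the redirect with a plain HTTP client so API credentials
	// are not forwarded to the CDN host.
	req, err := http.NewRequest("GET", redirectURL, nil)
	if err != nil {
		log.Fatal(err)
	}
	resp, err := http.DefaultClient.Do(req.WithContext(ctx))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	if _, err := io.Copy(out, resp.Body); err != nil {
		log.Fatal(err)
	}
}
```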
// -// GitHub API docs : http://developer.github.com/v3/repos/releases/#edit-a-release-asset -func (s *RepositoriesService) EditReleaseAsset(owner, repo string, id int, release *ReleaseAsset) (*ReleaseAsset, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/repos/releases/#edit-a-release-asset +func (s *RepositoriesService) EditReleaseAsset(ctx context.Context, owner, repo string, id int, release *ReleaseAsset) (*ReleaseAsset, *Response, error) { u := fmt.Sprintf("repos/%s/%s/releases/assets/%d", owner, repo, id) req, err := s.client.NewRequest("PATCH", u, release) @@ -248,31 +272,31 @@ func (s *RepositoriesService) EditReleaseAsset(owner, repo string, id int, relea } asset := new(ReleaseAsset) - resp, err := s.client.Do(req, asset) + resp, err := s.client.Do(ctx, req, asset) if err != nil { return nil, resp, err } - return asset, resp, err + return asset, resp, nil } // DeleteReleaseAsset delete a single release asset from a repository. // -// GitHub API docs : http://developer.github.com/v3/repos/releases/#delete-a-release-asset -func (s *RepositoriesService) DeleteReleaseAsset(owner, repo string, id int) (*Response, error) { +// GitHub API docs: https://developer.github.com/v3/repos/releases/#delete-a-release-asset +func (s *RepositoriesService) DeleteReleaseAsset(ctx context.Context, owner, repo string, id int) (*Response, error) { u := fmt.Sprintf("repos/%s/%s/releases/assets/%d", owner, repo, id) req, err := s.client.NewRequest("DELETE", u, nil) if err != nil { return nil, err } - return s.client.Do(req, nil) + return s.client.Do(ctx, req, nil) } // UploadReleaseAsset creates an asset by uploading a file into a release repository. // To upload assets that cannot be represented by an os.File, call NewUploadRequest directly. // -// GitHub API docs : http://developer.github.com/v3/repos/releases/#upload-a-release-asset -func (s *RepositoriesService) UploadReleaseAsset(owner, repo string, id int, opt *UploadOptions, file *os.File) (*ReleaseAsset, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/repos/releases/#upload-a-release-asset +func (s *RepositoriesService) UploadReleaseAsset(ctx context.Context, owner, repo string, id int, opt *UploadOptions, file *os.File) (*ReleaseAsset, *Response, error) { u := fmt.Sprintf("repos/%s/%s/releases/%d/assets", owner, repo, id) u, err := addOptions(u, opt) if err != nil { @@ -294,9 +318,9 @@ func (s *RepositoriesService) UploadReleaseAsset(owner, repo string, id int, opt } asset := new(ReleaseAsset) - resp, err := s.client.Do(req, asset) + resp, err := s.client.Do(ctx, req, asset) if err != nil { return nil, resp, err } - return asset, resp, err + return asset, resp, nil } diff --git a/vendor/github.com/google/go-github/github/repos_stats.go b/vendor/github.com/google/go-github/github/repos_stats.go index 3474b550da..30fc7bd340 100644 --- a/vendor/github.com/google/go-github/github/repos_stats.go +++ b/vendor/github.com/google/go-github/github/repos_stats.go @@ -6,6 +6,7 @@ package github import ( + "context" "fmt" "time" ) @@ -39,26 +40,26 @@ func (w WeeklyStats) String() string { // deletions and commit counts. // // If this is the first time these statistics are requested for the given -// repository, this method will return a non-nil error and a status code of -// 202. This is because this is the status that github returns to signify that +// repository, this method will return an *AcceptedError and a status code of +// 202. 
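A sketch of `UploadReleaseAsset`, which takes an `*os.File` directly. The `UploadOptions.Name` field is assumed from the library (it is not shown in this diff), and the path and release ID are hypothetical:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"path/filepath"

	"github.com/google/go-github/github"
)

func main() {
	ctx := context.Background()
	client := github.NewClient(nil) // uploading assets requires an authenticated client

	f, err := os.Open("dist/terraform_0.9.2_linux_amd64.zip") // hypothetical artifact
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	opt := &github.UploadOptions{Name: filepath.Base(f.Name())}
	asset, _, err := client.Repositories.UploadReleaseAsset(ctx, "example-owner", "example-repo", 1234, opt, f)
	if err != nil {
		log.Fatal(err)
	}
	if asset.Name != nil {
		fmt.Println("uploaded:", *asset.Name)
	}
}
```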
This is because this is the status that GitHub returns to signify that // it is now computing the requested statistics. A follow up request, after a // delay of a second or so, should result in a successful request. // -// GitHub API Docs: https://developer.github.com/v3/repos/statistics/#contributors -func (s *RepositoriesService) ListContributorsStats(owner, repo string) ([]ContributorStats, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/repos/statistics/#contributors +func (s *RepositoriesService) ListContributorsStats(ctx context.Context, owner, repo string) ([]*ContributorStats, *Response, error) { u := fmt.Sprintf("repos/%v/%v/stats/contributors", owner, repo) req, err := s.client.NewRequest("GET", u, nil) if err != nil { return nil, nil, err } - var contributorStats []ContributorStats - resp, err := s.client.Do(req, &contributorStats) + var contributorStats []*ContributorStats + resp, err := s.client.Do(ctx, req, &contributorStats) if err != nil { return nil, resp, err } - return contributorStats, resp, err + return contributorStats, resp, nil } // WeeklyCommitActivity represents the weekly commit activity for a repository. @@ -78,34 +79,40 @@ func (w WeeklyCommitActivity) String() string { // starting on Sunday. // // If this is the first time these statistics are requested for the given -// repository, this method will return a non-nil error and a status code of -// 202. This is because this is the status that github returns to signify that +// repository, this method will return an *AcceptedError and a status code of +// 202. This is because this is the status that GitHub returns to signify that // it is now computing the requested statistics. A follow up request, after a // delay of a second or so, should result in a successful request. // -// GitHub API Docs: https://developer.github.com/v3/repos/statistics/#commit-activity -func (s *RepositoriesService) ListCommitActivity(owner, repo string) ([]WeeklyCommitActivity, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/repos/statistics/#commit-activity +func (s *RepositoriesService) ListCommitActivity(ctx context.Context, owner, repo string) ([]*WeeklyCommitActivity, *Response, error) { u := fmt.Sprintf("repos/%v/%v/stats/commit_activity", owner, repo) req, err := s.client.NewRequest("GET", u, nil) if err != nil { return nil, nil, err } - var weeklyCommitActivity []WeeklyCommitActivity - resp, err := s.client.Do(req, &weeklyCommitActivity) + var weeklyCommitActivity []*WeeklyCommitActivity + resp, err := s.client.Do(ctx, req, &weeklyCommitActivity) if err != nil { return nil, resp, err } - return weeklyCommitActivity, resp, err + return weeklyCommitActivity, resp, nil } // ListCodeFrequency returns a weekly aggregate of the number of additions and -// deletions pushed to a repository. Returned WeeklyStats will contain +// deletions pushed to a repository. Returned WeeklyStats will contain // additions and deletions, but not total commits. // -// GitHub API Docs: https://developer.github.com/v3/repos/statistics/#code-frequency -func (s *RepositoriesService) ListCodeFrequency(owner, repo string) ([]WeeklyStats, *Response, error) { +// If this is the first time these statistics are requested for the given +// repository, this method will return an *AcceptedError and a status code of +// 202. This is because this is the status that GitHub returns to signify that +// it is now computing the requested statistics. 
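Since the statistics endpoints now document returning an `*AcceptedError` on the 202 "still computing" response, the caller is expected to retry after a short delay. A sketch of that loop, with hypothetical owner/repo values:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/google/go-github/github"
)

func main() {
	ctx := context.Background()
	client := github.NewClient(nil)

	for attempt := 0; attempt < 5; attempt++ {
		stats, _, err := client.Repositories.ListContributorsStats(ctx, "example-owner", "example-repo")
		if _, ok := err.(*github.AcceptedError); ok {
			// 202: GitHub is still computing the stats; try again shortly.
			time.Sleep(2 * time.Second)
			continue
		}
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("contributors:", len(stats))
		return
	}
	log.Fatal("stats not ready after retries")
}
```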
A follow up request, after a +// delay of a second or so, should result in a successful request. +// +// GitHub API docs: https://developer.github.com/v3/repos/statistics/#code-frequency +func (s *RepositoriesService) ListCodeFrequency(ctx context.Context, owner, repo string) ([]*WeeklyStats, *Response, error) { u := fmt.Sprintf("repos/%v/%v/stats/code_frequency", owner, repo) req, err := s.client.NewRequest("GET", u, nil) if err != nil { @@ -113,15 +120,15 @@ func (s *RepositoriesService) ListCodeFrequency(owner, repo string) ([]WeeklySta } var weeks [][]int - resp, err := s.client.Do(req, &weeks) + resp, err := s.client.Do(ctx, req, &weeks) // convert int slices into WeeklyStats - var stats []WeeklyStats + var stats []*WeeklyStats for _, week := range weeks { if len(week) != 3 { continue } - stat := WeeklyStats{ + stat := &WeeklyStats{ Week: &Timestamp{time.Unix(int64(week[0]), 0)}, Additions: Int(week[1]), Deletions: Int(week[2]), @@ -152,14 +159,13 @@ func (r RepositoryParticipation) String() string { // The array order is oldest week (index 0) to most recent week. // // If this is the first time these statistics are requested for the given -// repository, this method will return a non-nil error and a status code -// of 202. This is because this is the status that github returns to -// signify that it is now computing the requested statistics. A follow -// up request, after a delay of a second or so, should result in a -// successful request. +// repository, this method will return an *AcceptedError and a status code of +// 202. This is because this is the status that GitHub returns to signify that +// it is now computing the requested statistics. A follow up request, after a +// delay of a second or so, should result in a successful request. // -// GitHub API Docs: https://developer.github.com/v3/repos/statistics/#participation -func (s *RepositoriesService) ListParticipation(owner, repo string) (*RepositoryParticipation, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/repos/statistics/#participation +func (s *RepositoriesService) ListParticipation(ctx context.Context, owner, repo string) (*RepositoryParticipation, *Response, error) { u := fmt.Sprintf("repos/%v/%v/stats/participation", owner, repo) req, err := s.client.NewRequest("GET", u, nil) if err != nil { @@ -167,12 +173,12 @@ func (s *RepositoriesService) ListParticipation(owner, repo string) (*Repository } participation := new(RepositoryParticipation) - resp, err := s.client.Do(req, participation) + resp, err := s.client.Do(ctx, req, participation) if err != nil { return nil, resp, err } - return participation, resp, err + return participation, resp, nil } // PunchCard represents the number of commits made during a given hour of a @@ -185,8 +191,14 @@ type PunchCard struct { // ListPunchCard returns the number of commits per hour in each day. // -// GitHub API Docs: https://developer.github.com/v3/repos/statistics/#punch-card -func (s *RepositoriesService) ListPunchCard(owner, repo string) ([]PunchCard, *Response, error) { +// If this is the first time these statistics are requested for the given +// repository, this method will return an *AcceptedError and a status code of +// 202. This is because this is the status that GitHub returns to signify that +// it is now computing the requested statistics. A follow up request, after a +// delay of a second or so, should result in a successful request. 
+// +// GitHub API docs: https://developer.github.com/v3/repos/statistics/#punch-card +func (s *RepositoriesService) ListPunchCard(ctx context.Context, owner, repo string) ([]*PunchCard, *Response, error) { u := fmt.Sprintf("repos/%v/%v/stats/punch_card", owner, repo) req, err := s.client.NewRequest("GET", u, nil) if err != nil { @@ -194,15 +206,15 @@ func (s *RepositoriesService) ListPunchCard(owner, repo string) ([]PunchCard, *R } var results [][]int - resp, err := s.client.Do(req, &results) + resp, err := s.client.Do(ctx, req, &results) // convert int slices into Punchcards - var cards []PunchCard + var cards []*PunchCard for _, result := range results { if len(result) != 3 { continue } - card := PunchCard{ + card := &PunchCard{ Day: Int(result[0]), Hour: Int(result[1]), Commits: Int(result[2]), diff --git a/vendor/github.com/google/go-github/github/repos_statuses.go b/vendor/github.com/google/go-github/github/repos_statuses.go index 7a6ee7c630..6db501076c 100644 --- a/vendor/github.com/google/go-github/github/repos_statuses.go +++ b/vendor/github.com/google/go-github/github/repos_statuses.go @@ -6,6 +6,7 @@ package github import ( + "context" "fmt" "time" ) @@ -15,11 +16,11 @@ type RepoStatus struct { ID *int `json:"id,omitempty"` URL *string `json:"url,omitempty"` - // State is the current state of the repository. Possible values are: + // State is the current state of the repository. Possible values are: // pending, success, error, or failure. State *string `json:"state,omitempty"` - // TargetURL is the URL of the page representing this status. It will be + // TargetURL is the URL of the page representing this status. It will be // linked from the GitHub UI to allow users to see the source of the status. TargetURL *string `json:"target_url,omitempty"` @@ -39,10 +40,10 @@ func (r RepoStatus) String() string { } // ListStatuses lists the statuses of a repository at the specified -// reference. ref can be a SHA, a branch name, or a tag name. +// reference. ref can be a SHA, a branch name, or a tag name. // -// GitHub API docs: http://developer.github.com/v3/repos/statuses/#list-statuses-for-a-specific-ref -func (s *RepositoriesService) ListStatuses(owner, repo, ref string, opt *ListOptions) ([]RepoStatus, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/repos/statuses/#list-statuses-for-a-specific-ref +func (s *RepositoriesService) ListStatuses(ctx context.Context, owner, repo, ref string, opt *ListOptions) ([]*RepoStatus, *Response, error) { u := fmt.Sprintf("repos/%v/%v/commits/%v/statuses", owner, repo, ref) u, err := addOptions(u, opt) if err != nil { @@ -54,20 +55,20 @@ func (s *RepositoriesService) ListStatuses(owner, repo, ref string, opt *ListOpt return nil, nil, err } - statuses := new([]RepoStatus) - resp, err := s.client.Do(req, statuses) + var statuses []*RepoStatus + resp, err := s.client.Do(ctx, req, &statuses) if err != nil { return nil, resp, err } - return *statuses, resp, err + return statuses, resp, nil } // CreateStatus creates a new status for a repository at the specified -// reference. Ref can be a SHA, a branch name, or a tag name. +// reference. Ref can be a SHA, a branch name, or a tag name. 
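`ListPunchCard` above decodes GitHub's raw `[][]int` payload into `*PunchCard` values, one per (day, hour) cell. A sketch of consuming the result, with hypothetical owner/repo values:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/google/go-github/github"
)

func main() {
	ctx := context.Background()
	client := github.NewClient(nil)

	cards, _, err := client.Repositories.ListPunchCard(ctx, "example-owner", "example-repo")
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range cards {
		// Day 0 is Sunday; Hour is 0-23 in UTC per the punch-card endpoint.
		if c.Commits != nil && *c.Commits > 0 {
			fmt.Printf("day %d, hour %02d: %d commits\n", *c.Day, *c.Hour, *c.Commits)
		}
	}
}
```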
// -// GitHub API docs: http://developer.github.com/v3/repos/statuses/#create-a-status -func (s *RepositoriesService) CreateStatus(owner, repo, ref string, status *RepoStatus) (*RepoStatus, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/repos/statuses/#create-a-status +func (s *RepositoriesService) CreateStatus(ctx context.Context, owner, repo, ref string, status *RepoStatus) (*RepoStatus, *Response, error) { u := fmt.Sprintf("repos/%v/%v/statuses/%v", owner, repo, ref) req, err := s.client.NewRequest("POST", u, status) if err != nil { @@ -75,17 +76,17 @@ func (s *RepositoriesService) CreateStatus(owner, repo, ref string, status *Repo } repoStatus := new(RepoStatus) - resp, err := s.client.Do(req, repoStatus) + resp, err := s.client.Do(ctx, req, repoStatus) if err != nil { return nil, resp, err } - return repoStatus, resp, err + return repoStatus, resp, nil } // CombinedStatus represents the combined status of a repository at a particular reference. type CombinedStatus struct { - // State is the combined state of the repository. Possible values are: + // State is the combined state of the repository. Possible values are: // failure, pending, or success. State *string `json:"state,omitempty"` @@ -103,10 +104,10 @@ func (s CombinedStatus) String() string { } // GetCombinedStatus returns the combined status of a repository at the specified -// reference. ref can be a SHA, a branch name, or a tag name. +// reference. ref can be a SHA, a branch name, or a tag name. // // GitHub API docs: https://developer.github.com/v3/repos/statuses/#get-the-combined-status-for-a-specific-ref -func (s *RepositoriesService) GetCombinedStatus(owner, repo, ref string, opt *ListOptions) (*CombinedStatus, *Response, error) { +func (s *RepositoriesService) GetCombinedStatus(ctx context.Context, owner, repo, ref string, opt *ListOptions) (*CombinedStatus, *Response, error) { u := fmt.Sprintf("repos/%v/%v/commits/%v/status", owner, repo, ref) u, err := addOptions(u, opt) if err != nil { @@ -119,10 +120,10 @@ func (s *RepositoriesService) GetCombinedStatus(owner, repo, ref string, opt *Li } status := new(CombinedStatus) - resp, err := s.client.Do(req, status) + resp, err := s.client.Do(ctx, req, status) if err != nil { return nil, resp, err } - return status, resp, err + return status, resp, nil } diff --git a/vendor/github.com/google/go-github/github/repos_traffic.go b/vendor/github.com/google/go-github/github/repos_traffic.go new file mode 100644 index 0000000000..fb1c97648a --- /dev/null +++ b/vendor/github.com/google/go-github/github/repos_traffic.go @@ -0,0 +1,141 @@ +// Copyright 2016 The go-github AUTHORS. All rights reserved. +// +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package github + +import ( + "context" + "fmt" +) + +// TrafficReferrer represent information about traffic from a referrer . +type TrafficReferrer struct { + Referrer *string `json:"referrer,omitempty"` + Count *int `json:"count,omitempty"` + Uniques *int `json:"uniques,omitempty"` +} + +// TrafficPath represent information about the traffic on a path of the repo. +type TrafficPath struct { + Path *string `json:"path,omitempty"` + Title *string `json:"title,omitempty"` + Count *int `json:"count,omitempty"` + Uniques *int `json:"uniques,omitempty"` +} + +// TrafficData represent information about a specific timestamp in views or clones list. 
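A sketch tying `CreateStatus` to `GetCombinedStatus`: report a CI result on a commit, then read the rolled-up state across all contexts. The CI URL and context name are hypothetical, and the `Description`/`Context` fields of `RepoStatus` are assumed from the library (this diff only shows `State` and `TargetURL`):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/google/go-github/github"
)

func main() {
	ctx := context.Background()
	client := github.NewClient(nil) // creating statuses requires repo:status scope
	sha := "0123456789abcdef0123456789abcdef01234567" // hypothetical commit

	status := &github.RepoStatus{
		State:       github.String("success"), // pending|success|error|failure
		TargetURL:   github.String("https://ci.example.com/builds/123"),
		Description: github.String("All tests passed"),
		Context:     github.String("ci/example"),
	}
	if _, _, err := client.Repositories.CreateStatus(ctx, "example-owner", "example-repo", sha, status); err != nil {
		log.Fatal(err)
	}

	// The combined status rolls up every context reported on the ref.
	combined, _, err := client.Repositories.GetCombinedStatus(ctx, "example-owner", "example-repo", sha, nil)
	if err != nil {
		log.Fatal(err)
	}
	if combined.State != nil {
		fmt.Println("combined state:", *combined.State)
	}
}
```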
+type TrafficData struct { + Timestamp *Timestamp `json:"timestamp,omitempty"` + Count *int `json:"count,omitempty"` + Uniques *int `json:"uniques,omitempty"` +} + +// TrafficViews represent information about the number of views in the last 14 days. +type TrafficViews struct { + Views []*TrafficData `json:"views,omitempty"` + Count *int `json:"count,omitempty"` + Uniques *int `json:"uniques,omitempty"` +} + +// TrafficClones represent information about the number of clones in the last 14 days. +type TrafficClones struct { + Clones []*TrafficData `json:"clones,omitempty"` + Count *int `json:"count,omitempty"` + Uniques *int `json:"uniques,omitempty"` +} + +// TrafficBreakdownOptions specifies the parameters to methods that support breakdown per day or week. +// Can be one of: day, week. Default: day. +type TrafficBreakdownOptions struct { + Per string `url:"per,omitempty"` +} + +// ListTrafficReferrers list the top 10 referrers over the last 14 days. +// +// GitHub API docs: https://developer.github.com/v3/repos/traffic/#list-referrers +func (s *RepositoriesService) ListTrafficReferrers(ctx context.Context, owner, repo string) ([]*TrafficReferrer, *Response, error) { + u := fmt.Sprintf("repos/%v/%v/traffic/popular/referrers", owner, repo) + + req, err := s.client.NewRequest("GET", u, nil) + if err != nil { + return nil, nil, err + } + + var trafficReferrers []*TrafficReferrer + resp, err := s.client.Do(ctx, req, &trafficReferrers) + if err != nil { + return nil, resp, err + } + + return trafficReferrers, resp, nil +} + +// ListTrafficPaths list the top 10 popular content over the last 14 days. +// +// GitHub API docs: https://developer.github.com/v3/repos/traffic/#list-paths +func (s *RepositoriesService) ListTrafficPaths(ctx context.Context, owner, repo string) ([]*TrafficPath, *Response, error) { + u := fmt.Sprintf("repos/%v/%v/traffic/popular/paths", owner, repo) + + req, err := s.client.NewRequest("GET", u, nil) + if err != nil { + return nil, nil, err + } + + var paths []*TrafficPath + resp, err := s.client.Do(ctx, req, &paths) + if err != nil { + return nil, resp, err + } + + return paths, resp, nil +} + +// ListTrafficViews get total number of views for the last 14 days and breaks it down either per day or week. +// +// GitHub API docs: https://developer.github.com/v3/repos/traffic/#views +func (s *RepositoriesService) ListTrafficViews(ctx context.Context, owner, repo string, opt *TrafficBreakdownOptions) (*TrafficViews, *Response, error) { + u := fmt.Sprintf("repos/%v/%v/traffic/views", owner, repo) + u, err := addOptions(u, opt) + if err != nil { + return nil, nil, err + } + + req, err := s.client.NewRequest("GET", u, nil) + if err != nil { + return nil, nil, err + } + + trafficViews := new(TrafficViews) + resp, err := s.client.Do(ctx, req, &trafficViews) + if err != nil { + return nil, resp, err + } + + return trafficViews, resp, nil +} + +// ListTrafficClones get total number of clones for the last 14 days and breaks it down either per day or week for the last 14 days. 
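A sketch of the new traffic API, using `TrafficBreakdownOptions` to get per-week buckets. The owner/repo values are hypothetical, and these endpoints require push access to the repository:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/google/go-github/github"
)

func main() {
	ctx := context.Background()
	client := github.NewClient(nil) // traffic endpoints need push access, so authenticate in practice

	opt := &github.TrafficBreakdownOptions{Per: "week"} // "day" (default) or "week"
	views, _, err := client.Repositories.ListTrafficViews(ctx, "example-owner", "example-repo", opt)
	if err != nil {
		log.Fatal(err)
	}
	for _, v := range views.Views {
		if v.Timestamp != nil && v.Count != nil && v.Uniques != nil {
			fmt.Printf("%v: %d views, %d unique\n", v.Timestamp, *v.Count, *v.Uniques)
		}
	}
}
```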
+// +// GitHub API docs: https://developer.github.com/v3/repos/traffic/#views +func (s *RepositoriesService) ListTrafficClones(ctx context.Context, owner, repo string, opt *TrafficBreakdownOptions) (*TrafficClones, *Response, error) { + u := fmt.Sprintf("repos/%v/%v/traffic/clones", owner, repo) + u, err := addOptions(u, opt) + if err != nil { + return nil, nil, err + } + + req, err := s.client.NewRequest("GET", u, nil) + if err != nil { + return nil, nil, err + } + + trafficClones := new(TrafficClones) + resp, err := s.client.Do(ctx, req, &trafficClones) + if err != nil { + return nil, resp, err + } + + return trafficClones, resp, nil +} diff --git a/vendor/github.com/google/go-github/github/search.go b/vendor/github.com/google/go-github/github/search.go index d9e9b419a8..0fdaad9192 100644 --- a/vendor/github.com/google/go-github/github/search.go +++ b/vendor/github.com/google/go-github/github/search.go @@ -6,6 +6,7 @@ package github import ( + "context" "fmt" qs "github.com/google/go-querystring/query" @@ -14,15 +15,14 @@ import ( // SearchService provides access to the search related functions // in the GitHub API. // -// GitHub API docs: http://developer.github.com/v3/search/ -type SearchService struct { - client *Client -} +// GitHub API docs: https://developer.github.com/v3/search/ +type SearchService service // SearchOptions specifies optional parameters to the SearchService methods. type SearchOptions struct { - // How to sort the search results. Possible values are: + // How to sort the search results. Possible values are: // - for repositories: stars, fork, updated + // - for commits: author-date, committer-date // - for code: indexed // - for issues: comments, created, updated // - for users: followers, repositories, joined @@ -42,46 +42,80 @@ type SearchOptions struct { // RepositoriesSearchResult represents the result of a repositories search. type RepositoriesSearchResult struct { - Total *int `json:"total_count,omitempty"` - Repositories []Repository `json:"items,omitempty"` + Total *int `json:"total_count,omitempty"` + IncompleteResults *bool `json:"incomplete_results,omitempty"` + Repositories []Repository `json:"items,omitempty"` } // Repositories searches repositories via various criteria. // -// GitHub API docs: http://developer.github.com/v3/search/#search-repositories -func (s *SearchService) Repositories(query string, opt *SearchOptions) (*RepositoriesSearchResult, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/search/#search-repositories +func (s *SearchService) Repositories(ctx context.Context, query string, opt *SearchOptions) (*RepositoriesSearchResult, *Response, error) { result := new(RepositoriesSearchResult) - resp, err := s.search("repositories", query, opt, result) + resp, err := s.search(ctx, "repositories", query, opt, result) + return result, resp, err +} + +// CommitsSearchResult represents the result of a commits search. +type CommitsSearchResult struct { + Total *int `json:"total_count,omitempty"` + IncompleteResults *bool `json:"incomplete_results,omitempty"` + Commits []*CommitResult `json:"items,omitempty"` +} + +// CommitResult represents a commit object as returned in commit search endpoint response. 
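The search hunks below convert `SearchService` to the shared `service` type and add the `IncompleteResults` flag to every result struct. A repository-search sketch; the query string is illustrative, and `Repository.FullName` is assumed from the library (not shown here):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/google/go-github/github"
)

func main() {
	ctx := context.Background()
	client := github.NewClient(nil)

	opt := &github.SearchOptions{
		Sort:        "stars",
		Order:       "desc",
		ListOptions: github.ListOptions{PerPage: 10},
	}
	result, _, err := client.Search.Repositories(ctx, "terraform language:go", opt)
	if err != nil {
		log.Fatal(err)
	}
	if result.IncompleteResults != nil && *result.IncompleteResults {
		// The new field: the query timed out server-side and results are partial.
		fmt.Println("warning: partial results")
	}
	for _, r := range result.Repositories {
		if r.FullName != nil {
			fmt.Println(*r.FullName)
		}
	}
}
```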
+type CommitResult struct { + Hash *string `json:"hash,omitempty"` + Message *string `json:"message,omitempty"` + AuthorID *int `json:"author_id,omitempty"` + AuthorName *string `json:"author_name,omitempty"` + AuthorEmail *string `json:"author_email,omitempty"` + AuthorDate *Timestamp `json:"author_date,omitempty"` + CommitterID *int `json:"committer_id,omitempty"` + CommitterName *string `json:"committer_name,omitempty"` + CommitterEmail *string `json:"committer_email,omitempty"` + CommitterDate *Timestamp `json:"committer_date,omitempty"` + Repository *Repository `json:"repository,omitempty"` +} + +// Commits searches commits via various criteria. +// +// GitHub API docs: https://developer.github.com/v3/search/#search-commits +func (s *SearchService) Commits(ctx context.Context, query string, opt *SearchOptions) (*CommitsSearchResult, *Response, error) { + result := new(CommitsSearchResult) + resp, err := s.search(ctx, "commits", query, opt, result) return result, resp, err } // IssuesSearchResult represents the result of an issues search. type IssuesSearchResult struct { - Total *int `json:"total_count,omitempty"` - Issues []Issue `json:"items,omitempty"` + Total *int `json:"total_count,omitempty"` + IncompleteResults *bool `json:"incomplete_results,omitempty"` + Issues []Issue `json:"items,omitempty"` } // Issues searches issues via various criteria. // -// GitHub API docs: http://developer.github.com/v3/search/#search-issues -func (s *SearchService) Issues(query string, opt *SearchOptions) (*IssuesSearchResult, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/search/#search-issues +func (s *SearchService) Issues(ctx context.Context, query string, opt *SearchOptions) (*IssuesSearchResult, *Response, error) { result := new(IssuesSearchResult) - resp, err := s.search("issues", query, opt, result) + resp, err := s.search(ctx, "issues", query, opt, result) return result, resp, err } -// UsersSearchResult represents the result of an issues search. +// UsersSearchResult represents the result of a users search. type UsersSearchResult struct { - Total *int `json:"total_count,omitempty"` - Users []User `json:"items,omitempty"` + Total *int `json:"total_count,omitempty"` + IncompleteResults *bool `json:"incomplete_results,omitempty"` + Users []User `json:"items,omitempty"` } // Users searches users via various criteria. // -// GitHub API docs: http://developer.github.com/v3/search/#search-users -func (s *SearchService) Users(query string, opt *SearchOptions) (*UsersSearchResult, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/search/#search-users +func (s *SearchService) Users(ctx context.Context, query string, opt *SearchOptions) (*UsersSearchResult, *Response, error) { result := new(UsersSearchResult) - resp, err := s.search("users", query, opt, result) + resp, err := s.search(ctx, "users", query, opt, result) return result, resp, err } @@ -104,10 +138,11 @@ func (tm TextMatch) String() string { return Stringify(tm) } -// CodeSearchResult represents the result of an code search. +// CodeSearchResult represents the result of a code search. type CodeSearchResult struct { - Total *int `json:"total_count,omitempty"` - CodeResults []CodeResult `json:"items,omitempty"` + Total *int `json:"total_count,omitempty"` + IncompleteResults *bool `json:"incomplete_results,omitempty"` + CodeResults []CodeResult `json:"items,omitempty"` } // CodeResult represents a single search result. 
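A sketch of the brand-new commit search. The query is illustrative; note the method sets the `mediaTypeCommitSearchPreview` Accept header internally, since the endpoint is still in preview:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/google/go-github/github"
)

func main() {
	ctx := context.Background()
	client := github.NewClient(nil)

	// "committer-date" is one of the commit-specific sort values added above.
	opt := &github.SearchOptions{Sort: "committer-date", Order: "desc"}
	res, _, err := client.Search.Commits(ctx, "repo:hashicorp/terraform fix", opt)
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range res.Commits {
		if c.Message != nil {
			fmt.Println(*c.Message)
		}
	}
}
```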
@@ -126,21 +161,21 @@ func (c CodeResult) String() string { // Code searches code via various criteria. // -// GitHub API docs: http://developer.github.com/v3/search/#search-code -func (s *SearchService) Code(query string, opt *SearchOptions) (*CodeSearchResult, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/search/#search-code +func (s *SearchService) Code(ctx context.Context, query string, opt *SearchOptions) (*CodeSearchResult, *Response, error) { result := new(CodeSearchResult) - resp, err := s.search("code", query, opt, result) + resp, err := s.search(ctx, "code", query, opt, result) return result, resp, err } // Helper function that executes search queries against different -// GitHub search types (repositories, code, issues, users) -func (s *SearchService) search(searchType string, query string, opt *SearchOptions, result interface{}) (*Response, error) { +// GitHub search types (repositories, commits, code, issues, users) +func (s *SearchService) search(ctx context.Context, searchType string, query string, opt *SearchOptions, result interface{}) (*Response, error) { params, err := qs.Values(opt) if err != nil { return nil, err } - params.Add("q", query) + params.Set("q", query) u := fmt.Sprintf("search/%s?%s", searchType, params.Encode()) req, err := s.client.NewRequest("GET", u, nil) @@ -148,11 +183,16 @@ func (s *SearchService) search(searchType string, query string, opt *SearchOptio return nil, err } - if opt.TextMatch { + switch { + case searchType == "commits": + // Accept header for search commits preview endpoint + // TODO: remove custom Accept header when this API fully launches. + req.Header.Set("Accept", mediaTypeCommitSearchPreview) + case opt != nil && opt.TextMatch: // Accept header defaults to "application/vnd.github.v3+json" // We change it here to fetch back text-match metadata req.Header.Set("Accept", "application/vnd.github.v3.text-match+json") } - return s.client.Do(req, result) + return s.client.Do(ctx, req, result) } diff --git a/vendor/github.com/google/go-github/github/strings.go b/vendor/github.com/google/go-github/github/strings.go index 38577236c3..431e1cc6c1 100644 --- a/vendor/github.com/google/go-github/github/strings.go +++ b/vendor/github.com/google/go-github/github/strings.go @@ -16,7 +16,7 @@ import ( var timestampType = reflect.TypeOf(Timestamp{}) // Stringify attempts to create a reasonable string representation of types in -// the GitHub library. It does things like resolve pointers to their values +// the GitHub library. It does things like resolve pointers to their values // and omits struct fields with nil values. func Stringify(message interface{}) string { var buf bytes.Buffer diff --git a/vendor/github.com/google/go-github/github/users.go b/vendor/github.com/google/go-github/github/users.go index d8c74e2d61..d74439c7b0 100644 --- a/vendor/github.com/google/go-github/github/users.go +++ b/vendor/github.com/google/go-github/github/users.go @@ -5,15 +5,16 @@ package github -import "fmt" +import ( + "context" + "fmt" +) // UsersService handles communication with the user related // methods of the GitHub API. // -// GitHub API docs: http://developer.github.com/v3/users/ -type UsersService struct { - client *Client -} +// GitHub API docs: https://developer.github.com/v3/users/ +type UsersService service // User represents a GitHub user. type User struct { @@ -70,11 +71,11 @@ func (u User) String() string { return Stringify(u) } -// Get fetches a user. 
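A code-search sketch showing the `TextMatch` option, which (per the helper below) swaps the Accept header to fetch match fragments. The `TextMatches`/`Fragment` fields on `CodeResult` are assumed from the library, not shown in this diff, and the query is hypothetical:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/google/go-github/github"
)

func main() {
	ctx := context.Background()
	client := github.NewClient(nil)

	opt := &github.SearchOptions{TextMatch: true}
	res, _, err := client.Search.Code(ctx, "ResourceProvider repo:hashicorp/terraform language:go", opt)
	if err != nil {
		log.Fatal(err)
	}
	for _, cr := range res.CodeResults {
		if cr.Path != nil {
			fmt.Println(*cr.Path)
		}
		for _, tm := range cr.TextMatches {
			if tm.Fragment != nil {
				fmt.Println("  match:", *tm.Fragment)
			}
		}
	}
}
```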
Passing the empty string will fetch the authenticated +// Get fetches a user. Passing the empty string will fetch the authenticated // user. // -// GitHub API docs: http://developer.github.com/v3/users/#get-a-single-user -func (s *UsersService) Get(user string) (*User, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/users/#get-a-single-user +func (s *UsersService) Get(ctx context.Context, user string) (*User, *Response, error) { var u string if user != "" { u = fmt.Sprintf("users/%v", user) @@ -87,18 +88,37 @@ func (s *UsersService) Get(user string) (*User, *Response, error) { } uResp := new(User) - resp, err := s.client.Do(req, uResp) + resp, err := s.client.Do(ctx, req, uResp) if err != nil { return nil, resp, err } - return uResp, resp, err + return uResp, resp, nil +} + +// GetByID fetches a user. +// +// Note: GetByID uses the undocumented GitHub API endpoint /user/:id. +func (s *UsersService) GetByID(ctx context.Context, id int) (*User, *Response, error) { + u := fmt.Sprintf("user/%d", id) + req, err := s.client.NewRequest("GET", u, nil) + if err != nil { + return nil, nil, err + } + + user := new(User) + resp, err := s.client.Do(ctx, req, user) + if err != nil { + return nil, resp, err + } + + return user, resp, nil } // Edit the authenticated user. // -// GitHub API docs: http://developer.github.com/v3/users/#update-the-authenticated-user -func (s *UsersService) Edit(user *User) (*User, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/users/#update-the-authenticated-user +func (s *UsersService) Edit(ctx context.Context, user *User) (*User, *Response, error) { u := "user" req, err := s.client.NewRequest("PATCH", u, user) if err != nil { @@ -106,12 +126,12 @@ func (s *UsersService) Edit(user *User) (*User, *Response, error) { } uResp := new(User) - resp, err := s.client.Do(req, uResp) + resp, err := s.client.Do(ctx, req, uResp) if err != nil { return nil, resp, err } - return uResp, resp, err + return uResp, resp, nil } // UserListOptions specifies optional parameters to the UsersService.ListAll @@ -119,12 +139,16 @@ func (s *UsersService) Edit(user *User) (*User, *Response, error) { type UserListOptions struct { // ID of the last user seen Since int `url:"since,omitempty"` + + ListOptions } // ListAll lists all GitHub users. // -// GitHub API docs: http://developer.github.com/v3/users/#get-all-users -func (s *UsersService) ListAll(opt *UserListOptions) ([]User, *Response, error) { +// To paginate through all users, populate 'Since' with the ID of the last user. +// +// GitHub API docs: https://developer.github.com/v3/users/#get-all-users +func (s *UsersService) ListAll(ctx context.Context, opt *UserListOptions) ([]*User, *Response, error) { u, err := addOptions("users", opt) if err != nil { return nil, nil, err @@ -135,11 +159,67 @@ func (s *UsersService) ListAll(opt *UserListOptions) ([]User, *Response, error) return nil, nil, err } - users := new([]User) - resp, err := s.client.Do(req, users) + var users []*User + resp, err := s.client.Do(ctx, req, &users) if err != nil { return nil, resp, err } - return *users, resp, err + return users, resp, nil +} + +// ListInvitations lists all currently-open repository invitations for the +// authenticated user. 
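`UserListOptions.Since` is a cursor, not a page number: it holds the ID of the last user already seen, as the updated `ListAll` doc comment says. A sketch of that pagination pattern, capped for illustration since the full user list is enormous; `User.Login` and `User.ID` are standard fields assumed from the library:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/google/go-github/github"
)

func main() {
	ctx := context.Background()
	client := github.NewClient(nil)

	opt := &github.UserListOptions{} // Since == 0 starts from the first user
	for page := 0; page < 3; page++ { // capped for illustration
		users, _, err := client.Users.ListAll(ctx, opt)
		if err != nil {
			log.Fatal(err)
		}
		if len(users) == 0 {
			return
		}
		for _, u := range users {
			if u.Login != nil {
				fmt.Println(*u.Login)
			}
		}
		last := users[len(users)-1]
		if last.ID == nil {
			return
		}
		opt.Since = *last.ID // advance the cursor
	}
}
```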
+// +// GitHub API docs: https://developer.github.com/v3/repos/invitations/#list-a-users-repository-invitations +func (s *UsersService) ListInvitations(ctx context.Context) ([]*RepositoryInvitation, *Response, error) { + req, err := s.client.NewRequest("GET", "user/repository_invitations", nil) + if err != nil { + return nil, nil, err + } + + // TODO: remove custom Accept header when this API fully launches. + req.Header.Set("Accept", mediaTypeRepositoryInvitationsPreview) + + invites := []*RepositoryInvitation{} + resp, err := s.client.Do(ctx, req, &invites) + if err != nil { + return nil, resp, err + } + + return invites, resp, nil +} + +// AcceptInvitation accepts the currently-open repository invitation for the +// authenticated user. +// +// GitHub API docs: https://developer.github.com/v3/repos/invitations/#accept-a-repository-invitation +func (s *UsersService) AcceptInvitation(ctx context.Context, invitationID int) (*Response, error) { + u := fmt.Sprintf("user/repository_invitations/%v", invitationID) + req, err := s.client.NewRequest("PATCH", u, nil) + if err != nil { + return nil, err + } + + // TODO: remove custom Accept header when this API fully launches. + req.Header.Set("Accept", mediaTypeRepositoryInvitationsPreview) + + return s.client.Do(ctx, req, nil) +} + +// DeclineInvitation declines the currently-open repository invitation for the +// authenticated user. +// +// GitHub API docs: https://developer.github.com/v3/repos/invitations/#decline-a-repository-invitation +func (s *UsersService) DeclineInvitation(ctx context.Context, invitationID int) (*Response, error) { + u := fmt.Sprintf("user/repository_invitations/%v", invitationID) + req, err := s.client.NewRequest("DELETE", u, nil) + if err != nil { + return nil, err + } + + // TODO: remove custom Accept header when this API fully launches. + req.Header.Set("Accept", mediaTypeRepositoryInvitationsPreview) + + return s.client.Do(ctx, req, nil) } diff --git a/vendor/github.com/google/go-github/github/users_administration.go b/vendor/github.com/google/go-github/github/users_administration.go index dc1dcb8949..e042398d8c 100644 --- a/vendor/github.com/google/go-github/github/users_administration.go +++ b/vendor/github.com/google/go-github/github/users_administration.go @@ -5,12 +5,15 @@ package github -import "fmt" +import ( + "context" + "fmt" +) // PromoteSiteAdmin promotes a user to a site administrator of a GitHub Enterprise instance. // // GitHub API docs: https://developer.github.com/v3/users/administration/#promote-an-ordinary-user-to-a-site-administrator -func (s *UsersService) PromoteSiteAdmin(user string) (*Response, error) { +func (s *UsersService) PromoteSiteAdmin(ctx context.Context, user string) (*Response, error) { u := fmt.Sprintf("users/%v/site_admin", user) req, err := s.client.NewRequest("PUT", u, nil) @@ -18,13 +21,13 @@ func (s *UsersService) PromoteSiteAdmin(user string) (*Response, error) { return nil, err } - return s.client.Do(req, nil) + return s.client.Do(ctx, req, nil) } // DemoteSiteAdmin demotes a user from site administrator of a GitHub Enterprise instance. 
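A sketch combining the new invitation methods: list the authenticated user's open repository invitations and accept each one. `RepositoryInvitation.ID` is assumed from the library (the type is defined elsewhere in the package); the preview Accept header is handled inside each method:

```go
package main

import (
	"context"
	"log"

	"github.com/google/go-github/github"
)

func main() {
	ctx := context.Background()
	client := github.NewClient(nil) // must be authenticated to see invitations

	invites, _, err := client.Users.ListInvitations(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, inv := range invites {
		if inv.ID == nil {
			continue
		}
		// AcceptInvitation issues the PATCH shown above;
		// DeclineInvitation would issue the DELETE instead.
		if _, err := client.Users.AcceptInvitation(ctx, *inv.ID); err != nil {
			log.Fatal(err)
		}
	}
}
```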
// // GitHub API docs: https://developer.github.com/v3/users/administration/#demote-a-site-administrator-to-an-ordinary-user -func (s *UsersService) DemoteSiteAdmin(user string) (*Response, error) { +func (s *UsersService) DemoteSiteAdmin(ctx context.Context, user string) (*Response, error) { u := fmt.Sprintf("users/%v/site_admin", user) req, err := s.client.NewRequest("DELETE", u, nil) @@ -32,13 +35,13 @@ func (s *UsersService) DemoteSiteAdmin(user string) (*Response, error) { return nil, err } - return s.client.Do(req, nil) + return s.client.Do(ctx, req, nil) } // Suspend a user on a GitHub Enterprise instance. // // GitHub API docs: https://developer.github.com/v3/users/administration/#suspend-a-user -func (s *UsersService) Suspend(user string) (*Response, error) { +func (s *UsersService) Suspend(ctx context.Context, user string) (*Response, error) { u := fmt.Sprintf("users/%v/suspended", user) req, err := s.client.NewRequest("PUT", u, nil) @@ -46,13 +49,13 @@ func (s *UsersService) Suspend(user string) (*Response, error) { return nil, err } - return s.client.Do(req, nil) + return s.client.Do(ctx, req, nil) } // Unsuspend a user on a GitHub Enterprise instance. // // GitHub API docs: https://developer.github.com/v3/users/administration/#unsuspend-a-user -func (s *UsersService) Unsuspend(user string) (*Response, error) { +func (s *UsersService) Unsuspend(ctx context.Context, user string) (*Response, error) { u := fmt.Sprintf("users/%v/suspended", user) req, err := s.client.NewRequest("DELETE", u, nil) @@ -60,5 +63,5 @@ func (s *UsersService) Unsuspend(user string) (*Response, error) { return nil, err } - return s.client.Do(req, nil) + return s.client.Do(ctx, req, nil) } diff --git a/vendor/github.com/google/go-github/github/users_emails.go b/vendor/github.com/google/go-github/github/users_emails.go index 755319123b..0bbd4627e3 100644 --- a/vendor/github.com/google/go-github/github/users_emails.go +++ b/vendor/github.com/google/go-github/github/users_emails.go @@ -5,6 +5,8 @@ package github +import "context" + // UserEmail represents user's email address type UserEmail struct { Email *string `json:"email,omitempty"` @@ -14,8 +16,8 @@ type UserEmail struct { // ListEmails lists all email addresses for the authenticated user. // -// GitHub API docs: http://developer.github.com/v3/users/emails/#list-email-addresses-for-a-user -func (s *UsersService) ListEmails(opt *ListOptions) ([]UserEmail, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/users/emails/#list-email-addresses-for-a-user +func (s *UsersService) ListEmails(ctx context.Context, opt *ListOptions) ([]*UserEmail, *Response, error) { u := "user/emails" u, err := addOptions(u, opt) if err != nil { @@ -27,43 +29,43 @@ func (s *UsersService) ListEmails(opt *ListOptions) ([]UserEmail, *Response, err return nil, nil, err } - emails := new([]UserEmail) - resp, err := s.client.Do(req, emails) + var emails []*UserEmail + resp, err := s.client.Do(ctx, req, &emails) if err != nil { return nil, resp, err } - return *emails, resp, err + return emails, resp, nil } // AddEmails adds email addresses of the authenticated user. 
// -// GitHub API docs: http://developer.github.com/v3/users/emails/#add-email-addresses -func (s *UsersService) AddEmails(emails []string) ([]UserEmail, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/users/emails/#add-email-addresses +func (s *UsersService) AddEmails(ctx context.Context, emails []string) ([]*UserEmail, *Response, error) { u := "user/emails" req, err := s.client.NewRequest("POST", u, emails) if err != nil { return nil, nil, err } - e := new([]UserEmail) - resp, err := s.client.Do(req, e) + var e []*UserEmail + resp, err := s.client.Do(ctx, req, &e) if err != nil { return nil, resp, err } - return *e, resp, err + return e, resp, nil } // DeleteEmails deletes email addresses from authenticated user. // -// GitHub API docs: http://developer.github.com/v3/users/emails/#delete-email-addresses -func (s *UsersService) DeleteEmails(emails []string) (*Response, error) { +// GitHub API docs: https://developer.github.com/v3/users/emails/#delete-email-addresses +func (s *UsersService) DeleteEmails(ctx context.Context, emails []string) (*Response, error) { u := "user/emails" req, err := s.client.NewRequest("DELETE", u, emails) if err != nil { return nil, err } - return s.client.Do(req, nil) + return s.client.Do(ctx, req, nil) } diff --git a/vendor/github.com/google/go-github/github/users_followers.go b/vendor/github.com/google/go-github/github/users_followers.go index 7ecbed9fdf..c2224096a6 100644 --- a/vendor/github.com/google/go-github/github/users_followers.go +++ b/vendor/github.com/google/go-github/github/users_followers.go @@ -5,13 +5,16 @@ package github -import "fmt" +import ( + "context" + "fmt" +) -// ListFollowers lists the followers for a user. Passing the empty string will +// ListFollowers lists the followers for a user. Passing the empty string will // fetch followers for the authenticated user. // -// GitHub API docs: http://developer.github.com/v3/users/followers/#list-followers-of-a-user -func (s *UsersService) ListFollowers(user string, opt *ListOptions) ([]User, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/users/followers/#list-followers-of-a-user +func (s *UsersService) ListFollowers(ctx context.Context, user string, opt *ListOptions) ([]*User, *Response, error) { var u string if user != "" { u = fmt.Sprintf("users/%v/followers", user) @@ -28,20 +31,20 @@ func (s *UsersService) ListFollowers(user string, opt *ListOptions) ([]User, *Re return nil, nil, err } - users := new([]User) - resp, err := s.client.Do(req, users) + var users []*User + resp, err := s.client.Do(ctx, req, &users) if err != nil { return nil, resp, err } - return *users, resp, err + return users, resp, nil } -// ListFollowing lists the people that a user is following. Passing the empty +// ListFollowing lists the people that a user is following. Passing the empty // string will list people the authenticated user is following. 
// -// GitHub API docs: http://developer.github.com/v3/users/followers/#list-users-followed-by-another-user -func (s *UsersService) ListFollowing(user string, opt *ListOptions) ([]User, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/users/followers/#list-users-followed-by-another-user +func (s *UsersService) ListFollowing(ctx context.Context, user string, opt *ListOptions) ([]*User, *Response, error) { var u string if user != "" { u = fmt.Sprintf("users/%v/following", user) @@ -58,20 +61,20 @@ func (s *UsersService) ListFollowing(user string, opt *ListOptions) ([]User, *Re return nil, nil, err } - users := new([]User) - resp, err := s.client.Do(req, users) + var users []*User + resp, err := s.client.Do(ctx, req, &users) if err != nil { return nil, resp, err } - return *users, resp, err + return users, resp, nil } -// IsFollowing checks if "user" is following "target". Passing the empty +// IsFollowing checks if "user" is following "target". Passing the empty // string for "user" will check if the authenticated user is following "target". // -// GitHub API docs: http://developer.github.com/v3/users/followers/#check-if-you-are-following-a-user -func (s *UsersService) IsFollowing(user, target string) (bool, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/users/followers/#check-if-you-are-following-a-user +func (s *UsersService) IsFollowing(ctx context.Context, user, target string) (bool, *Response, error) { var u string if user != "" { u = fmt.Sprintf("users/%v/following/%v", user, target) @@ -84,33 +87,33 @@ func (s *UsersService) IsFollowing(user, target string) (bool, *Response, error) return false, nil, err } - resp, err := s.client.Do(req, nil) + resp, err := s.client.Do(ctx, req, nil) following, err := parseBoolResponse(err) return following, resp, err } // Follow will cause the authenticated user to follow the specified user. // -// GitHub API docs: http://developer.github.com/v3/users/followers/#follow-a-user -func (s *UsersService) Follow(user string) (*Response, error) { +// GitHub API docs: https://developer.github.com/v3/users/followers/#follow-a-user +func (s *UsersService) Follow(ctx context.Context, user string) (*Response, error) { u := fmt.Sprintf("user/following/%v", user) req, err := s.client.NewRequest("PUT", u, nil) if err != nil { return nil, err } - return s.client.Do(req, nil) + return s.client.Do(ctx, req, nil) } // Unfollow will cause the authenticated user to unfollow the specified user. // -// GitHub API docs: http://developer.github.com/v3/users/followers/#unfollow-a-user -func (s *UsersService) Unfollow(user string) (*Response, error) { +// GitHub API docs: https://developer.github.com/v3/users/followers/#unfollow-a-user +func (s *UsersService) Unfollow(ctx context.Context, user string) (*Response, error) { u := fmt.Sprintf("user/following/%v", user) req, err := s.client.NewRequest("DELETE", u, nil) if err != nil { return nil, err } - return s.client.Do(req, nil) + return s.client.Do(ctx, req, nil) } diff --git a/vendor/github.com/google/go-github/github/users_gpg_keys.go b/vendor/github.com/google/go-github/github/users_gpg_keys.go new file mode 100644 index 0000000000..35cce02092 --- /dev/null +++ b/vendor/github.com/google/go-github/github/users_gpg_keys.go @@ -0,0 +1,128 @@ +// Copyright 2016 The go-github AUTHORS. All rights reserved. +// +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. 
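A sketch of the follower methods just above: `IsFollowing` maps the 204/404 response to a bool via `parseBoolResponse`, and an empty `user` means the authenticated user. The target username is hypothetical:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/google/go-github/github"
)

func main() {
	ctx := context.Background()
	client := github.NewClient(nil) // Follow/Unfollow require authentication

	// "" checks whether the authenticated user follows the target.
	following, _, err := client.Users.IsFollowing(ctx, "", "example-user")
	if err != nil {
		log.Fatal(err)
	}
	if !following {
		if _, err := client.Users.Follow(ctx, "example-user"); err != nil {
			log.Fatal(err)
		}
		fmt.Println("now following example-user")
	}
}
```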
+ +package github + +import ( + "context" + "fmt" + "time" +) + +// GPGKey represents a GitHub user's public GPG key used to verify GPG signed commits and tags. +// +// https://developer.github.com/changes/2016-04-04-git-signing-api-preview/ +type GPGKey struct { + ID *int `json:"id,omitempty"` + PrimaryKeyID *int `json:"primary_key_id,omitempty"` + KeyID *string `json:"key_id,omitempty"` + PublicKey *string `json:"public_key,omitempty"` + Emails []GPGEmail `json:"emails,omitempty"` + Subkeys []GPGKey `json:"subkeys,omitempty"` + CanSign *bool `json:"can_sign,omitempty"` + CanEncryptComms *bool `json:"can_encrypt_comms,omitempty"` + CanEncryptStorage *bool `json:"can_encrypt_storage,omitempty"` + CanCertify *bool `json:"can_certify,omitempty"` + CreatedAt *time.Time `json:"created_at,omitempty"` + ExpiresAt *time.Time `json:"expires_at,omitempty"` +} + +// String stringifies a GPGKey. +func (k GPGKey) String() string { + return Stringify(k) +} + +// GPGEmail represents an email address associated with a GPG key. +type GPGEmail struct { + Email *string `json:"email,omitempty"` + Verified *bool `json:"verified,omitempty"` +} + +// ListGPGKeys lists the current user's GPG keys. It requires authentication +// via Basic Auth or via OAuth with at least read:gpg_key scope. +// +// GitHub API docs: https://developer.github.com/v3/users/gpg_keys/#list-your-gpg-keys +func (s *UsersService) ListGPGKeys(ctx context.Context) ([]*GPGKey, *Response, error) { + req, err := s.client.NewRequest("GET", "user/gpg_keys", nil) + if err != nil { + return nil, nil, err + } + + // TODO: remove custom Accept header when this API fully launches. + req.Header.Set("Accept", mediaTypeGitSigningPreview) + + var keys []*GPGKey + resp, err := s.client.Do(ctx, req, &keys) + if err != nil { + return nil, resp, err + } + + return keys, resp, nil +} + +// GetGPGKey gets extended details for a single GPG key. It requires authentication +// via Basic Auth or via OAuth with at least read:gpg_key scope. +// +// GitHub API docs: https://developer.github.com/v3/users/gpg_keys/#get-a-single-gpg-key +func (s *UsersService) GetGPGKey(ctx context.Context, id int) (*GPGKey, *Response, error) { + u := fmt.Sprintf("user/gpg_keys/%v", id) + req, err := s.client.NewRequest("GET", u, nil) + if err != nil { + return nil, nil, err + } + + // TODO: remove custom Accept header when this API fully launches. + req.Header.Set("Accept", mediaTypeGitSigningPreview) + + key := &GPGKey{} + resp, err := s.client.Do(ctx, req, key) + if err != nil { + return nil, resp, err + } + + return key, resp, nil +} + +// CreateGPGKey creates a GPG key. It requires authentication via Basic Auth +// or OAuth with at least write:gpg_key scope. +// +// GitHub API docs: https://developer.github.com/v3/users/gpg_keys/#create-a-gpg-key +func (s *UsersService) CreateGPGKey(ctx context.Context, armoredPublicKey string) (*GPGKey, *Response, error) { + gpgKey := &struct { + ArmoredPublicKey string `json:"armored_public_key"` + }{ArmoredPublicKey: armoredPublicKey} + req, err := s.client.NewRequest("POST", "user/gpg_keys", gpgKey) + if err != nil { + return nil, nil, err + } + + // TODO: remove custom Accept header when this API fully launches. + req.Header.Set("Accept", mediaTypeGitSigningPreview) + + key := &GPGKey{} + resp, err := s.client.Do(ctx, req, key) + if err != nil { + return nil, resp, err + } + + return key, resp, nil +} + +// DeleteGPGKey deletes a GPG key. It requires authentication via Basic Auth or +// via OAuth with at least admin:gpg_key scope.
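Since `users_gpg_keys.go` is a brand-new file in this vendor drop, a short usage sketch may help. It assumes a `*github.Client` authenticated with at least the `write:gpg_key` OAuth scope and an ASCII-armored public key in `armored`; neither is shown in the diff:

```go
package main

import (
	"context"
	"fmt"

	"github.com/google/go-github/github"
)

// listAndCreateGPGKeys is illustrative only; client and armored are
// assumed inputs, as described above.
func listAndCreateGPGKeys(ctx context.Context, client *github.Client, armored string) error {
	// CreateGPGKey wraps the armored key in the expected JSON envelope.
	key, _, err := client.Users.CreateGPGKey(ctx, armored)
	if err != nil {
		return err
	}
	if key.KeyID != nil {
		fmt.Println("created GPG key", *key.KeyID)
	}

	// ListGPGKeys returns the authenticated user's keys, subkeys included.
	keys, _, err := client.Users.ListGPGKeys(ctx)
	if err != nil {
		return err
	}
	fmt.Printf("%d GPG keys on the account\n", len(keys))
	return nil
}
```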
+// +// GitHub API docs: https://developer.github.com/v3/users/gpg_keys/#delete-a-gpg-key +func (s *UsersService) DeleteGPGKey(ctx context.Context, id int) (*Response, error) { + u := fmt.Sprintf("user/gpg_keys/%v", id) + req, err := s.client.NewRequest("DELETE", u, nil) + if err != nil { + return nil, err + } + + // TODO: remove custom Accept header when this API fully launches. + req.Header.Set("Accept", mediaTypeGitSigningPreview) + + return s.client.Do(ctx, req, nil) +} diff --git a/vendor/github.com/google/go-github/github/users_keys.go b/vendor/github.com/google/go-github/github/users_keys.go index dcbd773774..97ed4b8611 100644 --- a/vendor/github.com/google/go-github/github/users_keys.go +++ b/vendor/github.com/google/go-github/github/users_keys.go @@ -5,25 +5,29 @@ package github -import "fmt" +import ( + "context" + "fmt" +) // Key represents a public SSH key used to authenticate a user or deploy script. type Key struct { - ID *int `json:"id,omitempty"` - Key *string `json:"key,omitempty"` - URL *string `json:"url,omitempty"` - Title *string `json:"title,omitempty"` + ID *int `json:"id,omitempty"` + Key *string `json:"key,omitempty"` + URL *string `json:"url,omitempty"` + Title *string `json:"title,omitempty"` + ReadOnly *bool `json:"read_only,omitempty"` } func (k Key) String() string { return Stringify(k) } -// ListKeys lists the verified public keys for a user. Passing the empty +// ListKeys lists the verified public keys for a user. Passing the empty // string will fetch keys for the authenticated user. // -// GitHub API docs: http://developer.github.com/v3/users/keys/#list-public-keys-for-a-user -func (s *UsersService) ListKeys(user string, opt *ListOptions) ([]Key, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/users/keys/#list-public-keys-for-a-user +func (s *UsersService) ListKeys(ctx context.Context, user string, opt *ListOptions) ([]*Key, *Response, error) { var u string if user != "" { u = fmt.Sprintf("users/%v/keys", user) @@ -40,19 +44,19 @@ func (s *UsersService) ListKeys(user string, opt *ListOptions) ([]Key, *Response return nil, nil, err } - keys := new([]Key) - resp, err := s.client.Do(req, keys) + var keys []*Key + resp, err := s.client.Do(ctx, req, &keys) if err != nil { return nil, resp, err } - return *keys, resp, err + return keys, resp, nil } // GetKey fetches a single public key. // -// GitHub API docs: http://developer.github.com/v3/users/keys/#get-a-single-public-key -func (s *UsersService) GetKey(id int) (*Key, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/users/keys/#get-a-single-public-key +func (s *UsersService) GetKey(ctx context.Context, id int) (*Key, *Response, error) { u := fmt.Sprintf("user/keys/%v", id) req, err := s.client.NewRequest("GET", u, nil) @@ -61,18 +65,18 @@ func (s *UsersService) GetKey(id int) (*Key, *Response, error) { } key := new(Key) - resp, err := s.client.Do(req, key) + resp, err := s.client.Do(ctx, req, key) if err != nil { return nil, resp, err } - return key, resp, err + return key, resp, nil } // CreateKey adds a public key for the authenticated user. 
// -// GitHub API docs: http://developer.github.com/v3/users/keys/#create-a-public-key -func (s *UsersService) CreateKey(key *Key) (*Key, *Response, error) { +// GitHub API docs: https://developer.github.com/v3/users/keys/#create-a-public-key +func (s *UsersService) CreateKey(ctx context.Context, key *Key) (*Key, *Response, error) { u := "user/keys" req, err := s.client.NewRequest("POST", u, key) @@ -81,18 +85,18 @@ func (s *UsersService) CreateKey(key *Key) (*Key, *Response, error) { } k := new(Key) - resp, err := s.client.Do(req, k) + resp, err := s.client.Do(ctx, req, k) if err != nil { return nil, resp, err } - return k, resp, err + return k, resp, nil } // DeleteKey deletes a public key. // -// GitHub API docs: http://developer.github.com/v3/users/keys/#delete-a-public-key -func (s *UsersService) DeleteKey(id int) (*Response, error) { +// GitHub API docs: https://developer.github.com/v3/users/keys/#delete-a-public-key +func (s *UsersService) DeleteKey(ctx context.Context, id int) (*Response, error) { u := fmt.Sprintf("user/keys/%v", id) req, err := s.client.NewRequest("DELETE", u, nil) @@ -100,5 +104,5 @@ func (s *UsersService) DeleteKey(id int) (*Response, error) { return nil, err } - return s.client.Do(req, nil) + return s.client.Do(ctx, req, nil) } diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/blockstorage/extensions/volumeactions/requests.go b/vendor/github.com/gophercloud/gophercloud/openstack/blockstorage/extensions/volumeactions/requests.go index 1aff4942ae..e3c7df3044 100644 --- a/vendor/github.com/gophercloud/gophercloud/openstack/blockstorage/extensions/volumeactions/requests.go +++ b/vendor/github.com/gophercloud/gophercloud/openstack/blockstorage/extensions/volumeactions/requests.go @@ -214,3 +214,40 @@ func ExtendSize(client *gophercloud.ServiceClient, id string, opts ExtendSizeOpt }) return } + +// UploadImageOptsBuilder allows extensions to add additional parameters to the +// UploadImage request. +type UploadImageOptsBuilder interface { + ToVolumeUploadImageMap() (map[string]interface{}, error) +} + +// UploadImageOpts contains options for uploading a Volume to image storage. +type UploadImageOpts struct { + // Container format, may be bare, ovf, ova, etc. + ContainerFormat string `json:"container_format,omitempty"` + // Disk format, may be raw, qcow2, vhd, vdi, vmdk, etc. + DiskFormat string `json:"disk_format,omitempty"` + // The name of the image that will be stored in glance + ImageName string `json:"image_name,omitempty"` + // Force image creation, usable if the volume is attached to an instance + Force bool `json:"force,omitempty"` +} + +// ToVolumeUploadImageMap assembles a request body based on the contents of a +// UploadImageOpts.
+func (opts UploadImageOpts) ToVolumeUploadImageMap() (map[string]interface{}, error) { + return gophercloud.BuildRequestBody(opts, "os-volume_upload_image") +} + +// UploadImage will upload an image based on the values in UploadImageOptsBuilder +func UploadImage(client *gophercloud.ServiceClient, id string, opts UploadImageOptsBuilder) (r UploadImageResult) { + b, err := opts.ToVolumeUploadImageMap() + if err != nil { + r.Err = err + return + } + _, r.Err = client.Post(uploadURL(client, id), b, nil, &gophercloud.RequestOpts{ + OkCodes: []int{202}, + }) + return +} diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/blockstorage/extensions/volumeactions/results.go b/vendor/github.com/gophercloud/gophercloud/openstack/blockstorage/extensions/volumeactions/results.go index b5695b7654..634b04d8d6 100644 --- a/vendor/github.com/gophercloud/gophercloud/openstack/blockstorage/extensions/volumeactions/results.go +++ b/vendor/github.com/gophercloud/gophercloud/openstack/blockstorage/extensions/volumeactions/results.go @@ -17,6 +17,11 @@ type DetachResult struct { gophercloud.ErrResult } +// UploadImageResult contains the response body and error from a UploadImage request. +type UploadImageResult struct { + gophercloud.ErrResult +} + // ReserveResult contains the response body and error from a Get request. type ReserveResult struct { gophercloud.ErrResult diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/blockstorage/extensions/volumeactions/urls.go b/vendor/github.com/gophercloud/gophercloud/openstack/blockstorage/extensions/volumeactions/urls.go index a172549bf9..5efd2b25c0 100644 --- a/vendor/github.com/gophercloud/gophercloud/openstack/blockstorage/extensions/volumeactions/urls.go +++ b/vendor/github.com/gophercloud/gophercloud/openstack/blockstorage/extensions/volumeactions/urls.go @@ -14,6 +14,10 @@ func detachURL(c *gophercloud.ServiceClient, id string) string { return attachURL(c, id) } +func uploadURL(c *gophercloud.ServiceClient, id string) string { + return attachURL(c, id) +} + func reserveURL(c *gophercloud.ServiceClient, id string) string { return attachURL(c, id) } diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/client.go b/vendor/github.com/gophercloud/gophercloud/openstack/client.go index 6e61944a12..2d30cc60ad 100644 --- a/vendor/github.com/gophercloud/gophercloud/openstack/client.go +++ b/vendor/github.com/gophercloud/gophercloud/openstack/client.go @@ -310,6 +310,19 @@ func NewDBV1(client *gophercloud.ProviderClient, eo gophercloud.EndpointOpts) (* return &gophercloud.ServiceClient{ProviderClient: client, Endpoint: url}, nil } +// NewDNSV2 creates a ServiceClient that may be used to access the v2 DNS service. +func NewDNSV2(client *gophercloud.ProviderClient, eo gophercloud.EndpointOpts) (*gophercloud.ServiceClient, error) { + eo.ApplyDefaults("dns") + url, err := client.EndpointLocator(eo) + if err != nil { + return nil, err + } + return &gophercloud.ServiceClient{ + ProviderClient: client, + Endpoint: url, + ResourceBase: url + "v2/"}, nil +} + // NewImageServiceV2 creates a ServiceClient that may be used to access the v2 image service.
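The new `UploadImage` action posts to the same action URL as attach/detach and reports success only through the embedded `ErrResult` (a 202 means Cinder accepted the upload). A sketch of driving it, assuming `blockClient` is an already-authenticated Block Storage `ServiceClient` and `volumeID` names an existing volume:

```go
package main

import (
	"fmt"

	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack/blockstorage/extensions/volumeactions"
)

// uploadVolumeImage is a sketch; blockClient and volumeID are assumed inputs.
func uploadVolumeImage(blockClient *gophercloud.ServiceClient, volumeID string) error {
	opts := volumeactions.UploadImageOpts{
		ImageName:       "volume-backup", // name the image will get in Glance
		ContainerFormat: "bare",
		DiskFormat:      "qcow2",
		Force:           true, // permit upload even while the volume is attached
	}
	// UploadImageResult only wraps ErrResult, so ExtractErr is the way to
	// surface any API error.
	if err := volumeactions.UploadImage(blockClient, volumeID, opts).ExtractErr(); err != nil {
		return err
	}
	fmt.Println("image upload accepted")
	return nil
}
```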
func NewImageServiceV2(client *gophercloud.ProviderClient, eo gophercloud.EndpointOpts) (*gophercloud.ServiceClient, error) { eo.ApplyDefaults("image") diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/availabilityzones/results.go b/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/availabilityzones/results.go new file mode 100644 index 0000000000..96a6a50b3d --- /dev/null +++ b/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/availabilityzones/results.go @@ -0,0 +1,12 @@ +package availabilityzones + +// ServerExt is an extension to the base Server object +type ServerExt struct { + // AvailabilityZone is the availability zone the server is in. + AvailabilityZone string `json:"OS-EXT-AZ:availability_zone"` +} + +// UnmarshalJSON is a deliberate no-op that overrides the default unmarshaling +func (r *ServerExt) UnmarshalJSON(b []byte) error { + return nil +} diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/flavors/requests.go b/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/flavors/requests.go index ef133ff809..d5d571c3d6 100644 --- a/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/flavors/requests.go +++ b/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/flavors/requests.go @@ -54,6 +54,47 @@ func ListDetail(client *gophercloud.ServiceClient, opts ListOptsBuilder) paginat }) } +type CreateOptsBuilder interface { + ToFlavorCreateMap() (map[string]interface{}, error) +} + +// CreateOpts is passed to Create to create a flavor +// Source: +// https://github.com/openstack/nova/blob/stable/newton/nova/api/openstack/compute/schemas/flavor_manage.py#L20 +type CreateOpts struct { + Name string `json:"name" required:"true"` + // memory size, in MBs + RAM int `json:"ram" required:"true"` + VCPUs int `json:"vcpus" required:"true"` + // disk size, in GBs + Disk *int `json:"disk" required:"true"` + ID string `json:"id,omitempty"` + // non-zero, positive + Swap *int `json:"swap,omitempty"` + RxTxFactor float64 `json:"rxtx_factor,omitempty"` + IsPublic *bool `json:"os-flavor-access:is_public,omitempty"` + // ephemeral disk size, in GBs, non-zero, positive + Ephemeral *int `json:"OS-FLV-EXT-DATA:ephemeral,omitempty"` +} + +// ToFlavorCreateMap satisfies the CreateOptsBuilder interface +func (opts *CreateOpts) ToFlavorCreateMap() (map[string]interface{}, error) { + return gophercloud.BuildRequestBody(opts, "flavor") +} + +// Create a flavor +func Create(client *gophercloud.ServiceClient, opts CreateOptsBuilder) (r CreateResult) { + b, err := opts.ToFlavorCreateMap() + if err != nil { + r.Err = err + return + } + _, r.Err = client.Post(createURL(client), b, &r.Body, &gophercloud.RequestOpts{ + OkCodes: []int{200, 201}, + }) + return +} + // Get instructs OpenStack to provide details on a single flavor, identified by its ID. // Use ExtractFlavor to convert its result into a Flavor. func Get(client *gophercloud.ServiceClient, id string) (r GetResult) { diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/flavors/results.go b/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/flavors/results.go index a49de0da7c..18b8434055 100644 --- a/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/flavors/results.go +++ b/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/flavors/results.go @@ -8,13 +8,21 @@ import ( "github.com/gophercloud/gophercloud/pagination" ) -// GetResult temporarily holds the response from a Get call.
-type GetResult struct { +type commonResult struct { gophercloud.Result } -// Extract provides access to the individual Flavor returned by the Get function. -func (r GetResult) Extract() (*Flavor, error) { +type CreateResult struct { + commonResult +} + +// GetResult temporarily holds the response from a Get call. +type GetResult struct { + commonResult +} + +// Extract provides access to the individual Flavor returned by the Get and Create functions. +func (r commonResult) Extract() (*Flavor, error) { var s struct { Flavor *Flavor `json:"flavor"` } @@ -40,41 +48,32 @@ type Flavor struct { VCPUs int `json:"vcpus"` } -func (f *Flavor) UnmarshalJSON(b []byte) error { - var flavor struct { - ID string `json:"id"` - Disk int `json:"disk"` - RAM int `json:"ram"` - Name string `json:"name"` - RxTxFactor float64 `json:"rxtx_factor"` - Swap interface{} `json:"swap"` - VCPUs int `json:"vcpus"` +func (r *Flavor) UnmarshalJSON(b []byte) error { + type tmp Flavor + var s struct { + tmp + Swap interface{} `json:"swap"` } - err := json.Unmarshal(b, &flavor) + err := json.Unmarshal(b, &s) if err != nil { return err } - f.ID = flavor.ID - f.Disk = flavor.Disk - f.RAM = flavor.RAM - f.Name = flavor.Name - f.RxTxFactor = flavor.RxTxFactor - f.VCPUs = flavor.VCPUs + *r = Flavor(s.tmp) - switch t := flavor.Swap.(type) { + switch t := s.Swap.(type) { case float64: - f.Swap = int(t) + r.Swap = int(t) case string: switch t { case "": - f.Swap = 0 + r.Swap = 0 default: swap, err := strconv.ParseFloat(t, 64) if err != nil { return err } - f.Swap = int(swap) + r.Swap = int(swap) } } diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/flavors/urls.go b/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/flavors/urls.go index ee0dfdbe39..2fc21796f7 100644 --- a/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/flavors/urls.go +++ b/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/flavors/urls.go @@ -11,3 +11,7 @@ func getURL(client *gophercloud.ServiceClient, id string) string { func listURL(client *gophercloud.ServiceClient) string { return client.ServiceURL("flavors", "detail") } + +func createURL(client *gophercloud.ServiceClient) string { + return client.ServiceURL("flavors") +} diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/images/results.go b/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/images/results.go index a55b8f160a..f9ebc69e98 100644 --- a/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/images/results.go +++ b/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/images/results.go @@ -45,8 +45,8 @@ type Image struct { Status string Updated string - - Metadata map[string]string + + Metadata map[string]interface{} } // ImagePage contains a single page of results from a List operation. 
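With `CreateResult` and `GetResult` now sharing `commonResult`, a single `Extract` decodes the `flavor` envelope for both calls, and the rewritten `UnmarshalJSON` folds the API's polymorphic `swap` field (number, numeric string, or empty string) into a plain `int`. A sketch of the new `Create` entry point; `computeClient` is assumed to be an authenticated Compute v2 `ServiceClient` with the admin rights that flavor management requires:

```go
package main

import (
	"fmt"

	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack/compute/v2/flavors"
)

// createFlavor is a sketch of the new admin-only Create call.
func createFlavor(computeClient *gophercloud.ServiceClient) (*flavors.Flavor, error) {
	disk := 20 // GB; a pointer field so that an explicit zero is representable
	opts := flavors.CreateOpts{
		Name:  "m1.custom",
		RAM:   4096, // MB
		VCPUs: 2,
		Disk:  &disk,
	}
	// ToFlavorCreateMap has a pointer receiver, so pass &opts.
	flavor, err := flavors.Create(computeClient, &opts).Extract()
	if err != nil {
		return nil, err
	}
	fmt.Printf("created flavor %s (swap=%d)\n", flavor.ID, flavor.Swap)
	return flavor, nil
}
```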
diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/servers/requests.go b/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/servers/requests.go index c79a6e6f6b..9618637317 100644 --- a/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/servers/requests.go +++ b/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/servers/requests.go @@ -401,11 +401,10 @@ type RebuildOptsBuilder interface { // operation type RebuildOpts struct { // The server's admin password - AdminPass string `json:"adminPass" required:"true"` + AdminPass string `json:"adminPass,omitempty"` // The ID of the image you want your server to be provisioned on ImageID string `json:"imageRef"` ImageName string `json:"-"` - //ImageName string `json:"-"` // Name to set the server to Name string `json:"name,omitempty"` // AccessIPv4 [optional] provides a new IPv4 address for the instance. diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/servers/results.go b/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/servers/results.go index c121a6be7d..1ae1e91c78 100644 --- a/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/servers/results.go +++ b/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/servers/results.go @@ -19,11 +19,17 @@ type serverResult struct { // Extract interprets any serverResult as a Server, if possible. func (r serverResult) Extract() (*Server, error) { - var s struct { - Server *Server `json:"server"` - } + var s Server err := r.ExtractInto(&s) - return s.Server, err + return &s, err +} + +func (r serverResult) ExtractInto(v interface{}) error { + return r.Result.ExtractIntoStructPtr(v, "server") +} + +func ExtractServersInto(r pagination.Page, v interface{}) error { + return r.(ServerPage).Result.ExtractIntoSlicePtr(v, "servers") } // CreateResult temporarily contains the response from a Create call. @@ -221,11 +227,9 @@ func (r ServerPage) NextPageURL() (string, error) { // ExtractServers interprets the results of a single page from a List() call, producing a slice of Server entities. func ExtractServers(r pagination.Page) ([]Server, error) { - var s struct { - Servers []Server `json:"servers"` - } - err := (r.(ServerPage)).ExtractInto(&s) - return s.Servers, err + var s []Server + err := ExtractServersInto(r, &s) + return s, err } // MetadataResult contains the result of a call for (potentially) multiple key-value pairs. diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/imageservice/v2/images/requests.go b/vendor/github.com/gophercloud/gophercloud/openstack/imageservice/v2/images/requests.go index 32f09ee95a..044b5cb95f 100644 --- a/vendor/github.com/gophercloud/gophercloud/openstack/imageservice/v2/images/requests.go +++ b/vendor/github.com/gophercloud/gophercloud/openstack/imageservice/v2/images/requests.go @@ -99,7 +99,7 @@ type CreateOpts struct { // properties is a set of properties, if any, that // are associated with the image. 
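The new `ExtractInto`/`ExtractServersInto` helpers exist so callers can decode server payloads into their own types, typically the base `Server` composed with extension structs such as the `availabilityzones.ServerExt` added earlier in this diff (its no-op `UnmarshalJSON` keeps the composite type on the default field-by-field decoding path). A sketch of that composition; the `ServerWithAZ` name and `computeClient` are assumptions:

```go
package main

import (
	"fmt"

	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/availabilityzones"
	"github.com/gophercloud/gophercloud/openstack/compute/v2/servers"
)

// ServerWithAZ embeds the base Server plus the availability-zone extension;
// ExtractServersInto can decode the raw "servers" payload straight into it.
type ServerWithAZ struct {
	servers.Server
	availabilityzones.ServerExt
}

// listServersWithAZ is a sketch; computeClient is an assumed authenticated
// Compute v2 ServiceClient.
func listServersWithAZ(computeClient *gophercloud.ServiceClient) error {
	page, err := servers.List(computeClient, servers.ListOpts{}).AllPages()
	if err != nil {
		return err
	}
	var all []ServerWithAZ
	if err := servers.ExtractServersInto(page, &all); err != nil {
		return err
	}
	for _, s := range all {
		fmt.Printf("%s -> %s\n", s.Name, s.AvailabilityZone)
	}
	return nil
}
```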
- Properties map[string]string `json:"-,omitempty"` + Properties map[string]string `json:"-"` } // ToImageCreateMap assembles a request body based on the contents of diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/layer3/floatingips/requests.go b/vendor/github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/layer3/floatingips/requests.go index 21a3b266c2..83930874c5 100644 --- a/vendor/github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/layer3/floatingips/requests.go +++ b/vendor/github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/layer3/floatingips/requests.go @@ -21,6 +21,7 @@ type ListOpts struct { Marker string `q:"marker"` SortKey string `q:"sort_key"` SortDir string `q:"sort_dir"` + RouterID string `q:"router_id"` } // List returns a Pager which allows you to iterate over a collection of diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/layer3/floatingips/results.go b/vendor/github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/layer3/floatingips/results.go index 838ca2ca64..29d5b5662b 100644 --- a/vendor/github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/layer3/floatingips/results.go +++ b/vendor/github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/layer3/floatingips/results.go @@ -34,6 +34,9 @@ type FloatingIP struct { // The condition of the API resource. Status string `json:"status"` + + //The ID of the router used for this Floating-IP + RouterID string `json:"router_id"` } type commonResult struct { diff --git a/vendor/github.com/gophercloud/gophercloud/pagination/pager.go b/vendor/github.com/gophercloud/gophercloud/pagination/pager.go index 1b5192ad61..6f1609ef2e 100644 --- a/vendor/github.com/gophercloud/gophercloud/pagination/pager.go +++ b/vendor/github.com/gophercloud/gophercloud/pagination/pager.go @@ -145,27 +145,24 @@ func (p Pager) AllPages() (Page, error) { // Switch on the page body type. Recognized types are `map[string]interface{}`, // `[]byte`, and `[]interface{}`. - switch testPage.GetBody().(type) { + switch pb := testPage.GetBody().(type) { case map[string]interface{}: // key is the map key for the page body if the body type is `map[string]interface{}`. var key string // Iterate over the pages to concatenate the bodies. err = p.EachPage(func(page Page) (bool, error) { b := page.GetBody().(map[string]interface{}) - for k := range b { + for k, v := range b { // If it's a linked page, we don't want the `links`, we want the other one. if !strings.HasSuffix(k, "links") { - key = k + // check the field's type. we only want []interface{} (which is really []map[string]interface{}) + switch vt := v.(type) { + case []interface{}: + key = k + pagesSlice = append(pagesSlice, vt...) + } } } - switch keyType := b[key].(type) { - case map[string]interface{}: - pagesSlice = append(pagesSlice, keyType) - case []interface{}: - pagesSlice = append(pagesSlice, b[key].([]interface{})...) 
- default: - return false, fmt.Errorf("Unsupported page body type: %+v", keyType) - } return true, nil }) if err != nil { @@ -216,7 +213,7 @@ func (p Pager) AllPages() (Page, error) { default: err := gophercloud.ErrUnexpectedType{} err.Expected = "map[string]interface{}/[]byte/[]interface{}" - err.Actual = fmt.Sprintf("%v", reflect.TypeOf(testPage.GetBody())) + err.Actual = fmt.Sprintf("%T", pb) return nil, err } diff --git a/vendor/github.com/hashicorp/consul/acl/acl.go b/vendor/github.com/hashicorp/consul/acl/acl.go new file mode 100644 index 0000000000..3ade9d4055 --- /dev/null +++ b/vendor/github.com/hashicorp/consul/acl/acl.go @@ -0,0 +1,672 @@ +package acl + +import ( + "github.com/armon/go-radix" +) + +var ( + // allowAll is a singleton policy which allows all + // non-management actions + allowAll ACL + + // denyAll is a singleton policy which denies all actions + denyAll ACL + + // manageAll is a singleton policy which allows all + // actions, including management + manageAll ACL +) + +func init() { + // Setup the singletons + allowAll = &StaticACL{ + allowManage: false, + defaultAllow: true, + } + denyAll = &StaticACL{ + allowManage: false, + defaultAllow: false, + } + manageAll = &StaticACL{ + allowManage: true, + defaultAllow: true, + } +} + +// ACL is the interface for policy enforcement. +type ACL interface { + // ACLList checks for permission to list all the ACLs + ACLList() bool + + // ACLModify checks for permission to manipulate ACLs + ACLModify() bool + + // AgentRead checks for permission to read from agent endpoints for a + // given node. + AgentRead(string) bool + + // AgentWrite checks for permission to make changes via agent endpoints + // for a given node. + AgentWrite(string) bool + + // EventRead determines if a specific event can be queried. + EventRead(string) bool + + // EventWrite determines if a specific event may be fired. + EventWrite(string) bool + + // KeyRead checks for permission to read a given key + KeyRead(string) bool + + // KeyWrite checks for permission to write a given key + KeyWrite(string) bool + + // KeyWritePrefix checks for permission to write to an + // entire key prefix. This means there must be no sub-policies + // that deny a write. + KeyWritePrefix(string) bool + + // KeyringRead determines if the encryption keyring used in + // the gossip layer can be read. + KeyringRead() bool + + // KeyringWrite determines if the keyring can be manipulated + KeyringWrite() bool + + // NodeRead checks for permission to read (discover) a given node. + NodeRead(string) bool + + // NodeWrite checks for permission to create or update (register) a + // given node. + NodeWrite(string) bool + + // OperatorRead determines if the read-only Consul operator functions + // can be used. + OperatorRead() bool + + // OperatorWrite determines if the state-changing Consul operator + // functions can be used. + OperatorWrite() bool + + // PreparedQueryRead determines if a specific prepared query can be read + // to show its contents (this is not used for execution). + PreparedQueryRead(string) bool + + // PreparedQueryWrite determines if a specific prepared query can be + // created, modified, or deleted. + PreparedQueryWrite(string) bool + + // ServiceRead checks for permission to read a given service + ServiceRead(string) bool + + // ServiceWrite checks for permission to create or update a given + // service + ServiceWrite(string) bool + + // SessionRead checks for permission to read sessions for a given node.
+ SessionRead(string) bool + + // SessionWrite checks for permission to create sessions for a given + // node. + SessionWrite(string) bool + + // Snapshot checks for permission to take and restore snapshots. + Snapshot() bool +} + +// StaticACL is used to implement a base ACL policy. It either +// allows or denies all requests. This can be used as a parent +// ACL to act in a blacklist or whitelist mode. +type StaticACL struct { + allowManage bool + defaultAllow bool +} + +func (s *StaticACL) ACLList() bool { + return s.allowManage +} + +func (s *StaticACL) ACLModify() bool { + return s.allowManage +} + +func (s *StaticACL) AgentRead(string) bool { + return s.defaultAllow +} + +func (s *StaticACL) AgentWrite(string) bool { + return s.defaultAllow +} + +func (s *StaticACL) EventRead(string) bool { + return s.defaultAllow +} + +func (s *StaticACL) EventWrite(string) bool { + return s.defaultAllow +} + +func (s *StaticACL) KeyRead(string) bool { + return s.defaultAllow +} + +func (s *StaticACL) KeyWrite(string) bool { + return s.defaultAllow +} + +func (s *StaticACL) KeyWritePrefix(string) bool { + return s.defaultAllow +} + +func (s *StaticACL) KeyringRead() bool { + return s.defaultAllow +} + +func (s *StaticACL) KeyringWrite() bool { + return s.defaultAllow +} + +func (s *StaticACL) NodeRead(string) bool { + return s.defaultAllow +} + +func (s *StaticACL) NodeWrite(string) bool { + return s.defaultAllow +} + +func (s *StaticACL) OperatorRead() bool { + return s.defaultAllow +} + +func (s *StaticACL) OperatorWrite() bool { + return s.defaultAllow +} + +func (s *StaticACL) PreparedQueryRead(string) bool { + return s.defaultAllow +} + +func (s *StaticACL) PreparedQueryWrite(string) bool { + return s.defaultAllow +} + +func (s *StaticACL) ServiceRead(string) bool { + return s.defaultAllow +} + +func (s *StaticACL) ServiceWrite(string) bool { + return s.defaultAllow +} + +func (s *StaticACL) SessionRead(string) bool { + return s.defaultAllow +} + +func (s *StaticACL) SessionWrite(string) bool { + return s.defaultAllow +} + +func (s *StaticACL) Snapshot() bool { + return s.allowManage +} + +// AllowAll returns an ACL rule that allows all operations +func AllowAll() ACL { + return allowAll +} + +// DenyAll returns an ACL rule that denies all operations +func DenyAll() ACL { + return denyAll +} + +// ManageAll returns an ACL rule that can manage all resources +func ManageAll() ACL { + return manageAll +} + +// RootACL returns a possible ACL if the ID matches a root policy +func RootACL(id string) ACL { + switch id { + case "allow": + return allowAll + case "deny": + return denyAll + case "manage": + return manageAll + default: + return nil + } +} + +// PolicyACL is used to wrap a set of ACL policies to provide +// the ACL interface. +type PolicyACL struct { + // parent is used to resolve policy if we have + // no matching rule. + parent ACL + + // agentRules contains the agent policies + agentRules *radix.Tree + + // keyRules contains the key policies + keyRules *radix.Tree + + // nodeRules contains the node policies + nodeRules *radix.Tree + + // serviceRules contains the service policies + serviceRules *radix.Tree + + // sessionRules contains the session policies + sessionRules *radix.Tree + + // eventRules contains the user event policies + eventRules *radix.Tree + + // preparedQueryRules contains the prepared query policies + preparedQueryRules *radix.Tree + + // keyringRule contains the keyring policies. 
The keyring has + // a very simple yes/no without prefix matching, so here we + // don't need to use a radix tree. + keyringRule string + + // operatorRule contains the operator policies. + operatorRule string +} + +// New is used to construct a policy based ACL from a set of policies +// and a parent policy to resolve missing cases. +func New(parent ACL, policy *Policy) (*PolicyACL, error) { + p := &PolicyACL{ + parent: parent, + agentRules: radix.New(), + keyRules: radix.New(), + nodeRules: radix.New(), + serviceRules: radix.New(), + sessionRules: radix.New(), + eventRules: radix.New(), + preparedQueryRules: radix.New(), + } + + // Load the agent policy + for _, ap := range policy.Agents { + p.agentRules.Insert(ap.Node, ap.Policy) + } + + // Load the key policy + for _, kp := range policy.Keys { + p.keyRules.Insert(kp.Prefix, kp.Policy) + } + + // Load the node policy + for _, np := range policy.Nodes { + p.nodeRules.Insert(np.Name, np.Policy) + } + + // Load the service policy + for _, sp := range policy.Services { + p.serviceRules.Insert(sp.Name, sp.Policy) + } + + // Load the session policy + for _, sp := range policy.Sessions { + p.sessionRules.Insert(sp.Node, sp.Policy) + } + + // Load the event policy + for _, ep := range policy.Events { + p.eventRules.Insert(ep.Event, ep.Policy) + } + + // Load the prepared query policy + for _, pq := range policy.PreparedQueries { + p.preparedQueryRules.Insert(pq.Prefix, pq.Policy) + } + + // Load the keyring policy + p.keyringRule = policy.Keyring + + // Load the operator policy + p.operatorRule = policy.Operator + + return p, nil +} + +// ACLList checks if listing of ACLs is allowed +func (p *PolicyACL) ACLList() bool { + return p.parent.ACLList() +} + +// ACLModify checks if modification of ACLs is allowed +func (p *PolicyACL) ACLModify() bool { + return p.parent.ACLModify() +} + +// AgentRead checks for permission to read from agent endpoints for a given +// node. +func (p *PolicyACL) AgentRead(node string) bool { + // Check for an exact rule or catch-all + _, rule, ok := p.agentRules.LongestPrefix(node) + + if ok { + switch rule { + case PolicyRead, PolicyWrite: + return true + default: + return false + } + } + + // No matching rule, use the parent. + return p.parent.AgentRead(node) +} + +// AgentWrite checks for permission to make changes via agent endpoints for a +// given node. +func (p *PolicyACL) AgentWrite(node string) bool { + // Check for an exact rule or catch-all + _, rule, ok := p.agentRules.LongestPrefix(node) + + if ok { + switch rule { + case PolicyWrite: + return true + default: + return false + } + } + + // No matching rule, use the parent. + return p.parent.AgentWrite(node) +} + +// Snapshot checks if taking and restoring snapshots is allowed. +func (p *PolicyACL) Snapshot() bool { + return p.parent.Snapshot() +} + +// EventRead is used to determine if the policy allows for a +// specific user event to be read. +func (p *PolicyACL) EventRead(name string) bool { + // Longest-prefix match on event names + if _, rule, ok := p.eventRules.LongestPrefix(name); ok { + switch rule { + case PolicyRead, PolicyWrite: + return true + default: + return false + } + } + + // Nothing matched, use parent + return p.parent.EventRead(name) +} + +// EventWrite is used to determine if new events can be created +// (fired) by the policy. 
+func (p *PolicyACL) EventWrite(name string) bool { + // Longest-prefix match event names + if _, rule, ok := p.eventRules.LongestPrefix(name); ok { + return rule == PolicyWrite + } + + // No match, use parent + return p.parent.EventWrite(name) +} + +// KeyRead returns if a key is allowed to be read +func (p *PolicyACL) KeyRead(key string) bool { + // Look for a matching rule + _, rule, ok := p.keyRules.LongestPrefix(key) + if ok { + switch rule.(string) { + case PolicyRead, PolicyWrite: + return true + default: + return false + } + } + + // No matching rule, use the parent. + return p.parent.KeyRead(key) +} + +// KeyWrite returns if a key is allowed to be written +func (p *PolicyACL) KeyWrite(key string) bool { + // Look for a matching rule + _, rule, ok := p.keyRules.LongestPrefix(key) + if ok { + switch rule.(string) { + case PolicyWrite: + return true + default: + return false + } + } + + // No matching rule, use the parent. + return p.parent.KeyWrite(key) +} + +// KeyWritePrefix returns if a prefix is allowed to be written +func (p *PolicyACL) KeyWritePrefix(prefix string) bool { + // Look for a matching rule that denies + _, rule, ok := p.keyRules.LongestPrefix(prefix) + if ok && rule.(string) != PolicyWrite { + return false + } + + // Look if any of our children have a deny policy + deny := false + p.keyRules.WalkPrefix(prefix, func(path string, rule interface{}) bool { + // We have a rule to prevent a write in a sub-directory! + if rule.(string) != PolicyWrite { + deny = true + return true + } + return false + }) + + // Deny the write if any sub-rules may be violated + if deny { + return false + } + + // If we had a matching rule, done + if ok { + return true + } + + // No matching rule, use the parent. + return p.parent.KeyWritePrefix(prefix) +} + +// KeyringRead is used to determine if the keyring can be +// read by the current ACL token. +func (p *PolicyACL) KeyringRead() bool { + switch p.keyringRule { + case PolicyRead, PolicyWrite: + return true + case PolicyDeny: + return false + default: + return p.parent.KeyringRead() + } +} + +// KeyringWrite determines if the keyring can be manipulated. +func (p *PolicyACL) KeyringWrite() bool { + if p.keyringRule == PolicyWrite { + return true + } + return p.parent.KeyringWrite() +} + +// OperatorRead determines if the read-only operator functions are allowed. +func (p *PolicyACL) OperatorRead() bool { + switch p.operatorRule { + case PolicyRead, PolicyWrite: + return true + case PolicyDeny: + return false + default: + return p.parent.OperatorRead() + } +} + +// NodeRead checks if reading (discovery) of a node is allowed +func (p *PolicyACL) NodeRead(name string) bool { + // Check for an exact rule or catch-all + _, rule, ok := p.nodeRules.LongestPrefix(name) + + if ok { + switch rule { + case PolicyRead, PolicyWrite: + return true + default: + return false + } + } + + // No matching rule, use the parent. + return p.parent.NodeRead(name) +} + +// NodeWrite checks if writing (registering) a node is allowed +func (p *PolicyACL) NodeWrite(name string) bool { + // Check for an exact rule or catch-all + _, rule, ok := p.nodeRules.LongestPrefix(name) + + if ok { + switch rule { + case PolicyWrite: + return true + default: + return false + } + } + + // No matching rule, use the parent. + return p.parent.NodeWrite(name) +} + +// OperatorWrite determines if the state-changing operator functions are +// allowed. 
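All of these `PolicyACL` methods share one shape: take the longest matching prefix from the radix tree, allow or deny on that rule, and defer to the parent ACL when nothing matches; `KeyWritePrefix` additionally walks every rule under the prefix so a single sub-rule can veto the write. A small sketch of those semantics, using the `Parse` helper from the `policy.go` file vendored later in this diff (the rule strings are illustrative):

```go
package main

import (
	"fmt"

	"github.com/hashicorp/consul/acl"
)

func main() {
	// Ordinary Consul HCL rules: the most specific prefix wins.
	policy, err := acl.Parse(`
key "" { policy = "read" }
key "secret/" { policy = "deny" }
key "secret/public/" { policy = "write" }
`)
	if err != nil {
		panic(err)
	}

	// DenyAll as the parent: anything without a matching rule is refused.
	a, err := acl.New(acl.DenyAll(), policy)
	if err != nil {
		panic(err)
	}

	fmt.Println(a.KeyRead("app/config"))       // true: matches the "" rule
	fmt.Println(a.KeyRead("secret/db"))        // false: "secret/" denies
	fmt.Println(a.KeyWrite("secret/public/x")) // true: most specific rule wins
	fmt.Println(a.KeyWritePrefix("secret/"))   // false: a sub-rule forbids writes
}
```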
+func (p *PolicyACL) OperatorWrite() bool { + if p.operatorRule == PolicyWrite { + return true + } + return p.parent.OperatorWrite() +} + +// PreparedQueryRead checks if reading (listing) of a prepared query is +// allowed - this isn't execution, just listing its contents. +func (p *PolicyACL) PreparedQueryRead(prefix string) bool { + // Check for an exact rule or catch-all + _, rule, ok := p.preparedQueryRules.LongestPrefix(prefix) + + if ok { + switch rule { + case PolicyRead, PolicyWrite: + return true + default: + return false + } + } + + // No matching rule, use the parent. + return p.parent.PreparedQueryRead(prefix) +} + +// PreparedQueryWrite checks if writing (creating, updating, or deleting) of a +// prepared query is allowed. +func (p *PolicyACL) PreparedQueryWrite(prefix string) bool { + // Check for an exact rule or catch-all + _, rule, ok := p.preparedQueryRules.LongestPrefix(prefix) + + if ok { + switch rule { + case PolicyWrite: + return true + default: + return false + } + } + + // No matching rule, use the parent. + return p.parent.PreparedQueryWrite(prefix) +} + +// ServiceRead checks if reading (discovery) of a service is allowed +func (p *PolicyACL) ServiceRead(name string) bool { + // Check for an exact rule or catch-all + _, rule, ok := p.serviceRules.LongestPrefix(name) + + if ok { + switch rule { + case PolicyRead, PolicyWrite: + return true + default: + return false + } + } + + // No matching rule, use the parent. + return p.parent.ServiceRead(name) +} + +// ServiceWrite checks if writing (registering) a service is allowed +func (p *PolicyACL) ServiceWrite(name string) bool { + // Check for an exact rule or catch-all + _, rule, ok := p.serviceRules.LongestPrefix(name) + + if ok { + switch rule { + case PolicyWrite: + return true + default: + return false + } + } + + // No matching rule, use the parent. + return p.parent.ServiceWrite(name) +} + +// SessionRead checks for permission to read sessions for a given node. +func (p *PolicyACL) SessionRead(node string) bool { + // Check for an exact rule or catch-all + _, rule, ok := p.sessionRules.LongestPrefix(node) + + if ok { + switch rule { + case PolicyRead, PolicyWrite: + return true + default: + return false + } + } + + // No matching rule, use the parent. + return p.parent.SessionRead(node) +} + +// SessionWrite checks for permission to create sessions for a given node. +func (p *PolicyACL) SessionWrite(node string) bool { + // Check for an exact rule or catch-all + _, rule, ok := p.sessionRules.LongestPrefix(node) + + if ok { + switch rule { + case PolicyWrite: + return true + default: + return false + } + } + + // No matching rule, use the parent. 
+ return p.parent.SessionWrite(node) +} diff --git a/vendor/github.com/hashicorp/consul/acl/cache.go b/vendor/github.com/hashicorp/consul/acl/cache.go new file mode 100644 index 0000000000..0387f9fbe9 --- /dev/null +++ b/vendor/github.com/hashicorp/consul/acl/cache.go @@ -0,0 +1,177 @@ +package acl + +import ( + "crypto/md5" + "fmt" + + "github.com/hashicorp/golang-lru" +) + +// FaultFunc is a function used to fault in the parent and +// rules for an ACL given its ID +type FaultFunc func(id string) (string, string, error) + +// aclEntry allows us to store the ACL with its policy ID +type aclEntry struct { + ACL ACL + Parent string + RuleID string +} + +// Cache is used to implement policy and ACL caching +type Cache struct { + faultfn FaultFunc + aclCache *lru.TwoQueueCache // Cache id -> acl + policyCache *lru.TwoQueueCache // Cache policy -> acl + ruleCache *lru.TwoQueueCache // Cache rules -> policy +} + +// NewCache constructs a new policy and ACL cache of a given size +func NewCache(size int, faultfn FaultFunc) (*Cache, error) { + if size <= 0 { + return nil, fmt.Errorf("Must provide positive cache size") + } + + rc, err := lru.New2Q(size) + if err != nil { + return nil, err + } + + pc, err := lru.New2Q(size) + if err != nil { + return nil, err + } + + ac, err := lru.New2Q(size) + if err != nil { + return nil, err + } + + c := &Cache{ + faultfn: faultfn, + aclCache: ac, + policyCache: pc, + ruleCache: rc, + } + return c, nil +} + +// GetPolicy is used to get a potentially cached policy set. +// If not cached, it will be parsed, and then cached. +func (c *Cache) GetPolicy(rules string) (*Policy, error) { + return c.getPolicy(RuleID(rules), rules) +} + +// getPolicy is an internal method to get a cached policy, +// but it assumes a pre-computed ID +func (c *Cache) getPolicy(id, rules string) (*Policy, error) { + raw, ok := c.ruleCache.Get(id) + if ok { + return raw.(*Policy), nil + } + policy, err := Parse(rules) + if err != nil { + return nil, err + } + policy.ID = id + c.ruleCache.Add(id, policy) + return policy, nil +} + +// RuleID is used to generate an ID for a rule +func RuleID(rules string) string { + return fmt.Sprintf("%x", md5.Sum([]byte(rules))) +} + +// policyID returns the cache ID for a policy +func (c *Cache) policyID(parent, ruleID string) string { + return parent + ":" + ruleID +} + +// GetACLPolicy is used to get the potentially cached ACL +// policy. If not cached, it will be generated and then cached. +func (c *Cache) GetACLPolicy(id string) (string, *Policy, error) { + // Check for a cached acl + if raw, ok := c.aclCache.Get(id); ok { + cached := raw.(aclEntry) + if raw, ok := c.ruleCache.Get(cached.RuleID); ok { + return cached.Parent, raw.(*Policy), nil + } + } + + // Fault in the rules + parent, rules, err := c.faultfn(id) + if err != nil { + return "", nil, err + } + + // Get cached + policy, err := c.GetPolicy(rules) + return parent, policy, err +} + +// GetACL is used to get a potentially cached ACL policy. +// If not cached, it will be generated and then cached.
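The cache's single integration point is the `FaultFunc`, which must return the parent policy ID and the raw rules for a token that misses the cache; compiled ACLs, parsed policies, and rule text are then each cached in their own 2Q LRU. A sketch with an in-memory rules table standing in for Consul's state store (the table and token ID are illustrative):

```go
package main

import (
	"fmt"

	"github.com/hashicorp/consul/acl"
)

func main() {
	// Stand-in for the real rules source (Consul's state store).
	rules := map[string]string{
		"token-1": `key "secret/" { policy = "deny" }`,
	}

	faultFn := func(id string) (string, string, error) {
		r, ok := rules[id]
		if !ok {
			return "", "", fmt.Errorf("ACL not found: %s", id)
		}
		// "deny" names the deny-all root policy as the parent.
		return "deny", r, nil
	}

	cache, err := acl.NewCache(256, faultFn)
	if err != nil {
		panic(err)
	}

	// First use faults in, parses, compiles, and caches; later calls for
	// the same ID are served from the ACL cache.
	a, err := cache.GetACL("token-1")
	if err != nil {
		panic(err)
	}
	fmt.Println(a.KeyRead("secret/db")) // false: explicit deny rule
	fmt.Println(a.KeyRead("app/x"))     // false: deny-all parent
}
```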
+func (c *Cache) GetACL(id string) (ACL, error) { + // Look for the ACL directly + raw, ok := c.aclCache.Get(id) + if ok { + return raw.(aclEntry).ACL, nil + } + + // Get the rules + parentID, rules, err := c.faultfn(id) + if err != nil { + return nil, err + } + ruleID := RuleID(rules) + + // Check for a compiled ACL + policyID := c.policyID(parentID, ruleID) + var compiled ACL + if raw, ok := c.policyCache.Get(policyID); ok { + compiled = raw.(ACL) + } else { + // Get the policy + policy, err := c.getPolicy(ruleID, rules) + if err != nil { + return nil, err + } + + // Get the parent ACL + parent := RootACL(parentID) + if parent == nil { + parent, err = c.GetACL(parentID) + if err != nil { + return nil, err + } + } + + // Compile the ACL + acl, err := New(parent, policy) + if err != nil { + return nil, err + } + + // Cache the compiled ACL + c.policyCache.Add(policyID, acl) + compiled = acl + } + + // Cache and return the ACL + c.aclCache.Add(id, aclEntry{compiled, parentID, ruleID}) + return compiled, nil +} + +// ClearACL is used to clear the ACL cache if any +func (c *Cache) ClearACL(id string) { + c.aclCache.Remove(id) +} + +// Purge is used to clear all the ACL caches. The +// rule and policy caches are not purged, since they +// are content-hashed anyways. +func (c *Cache) Purge() { + c.aclCache.Purge() +} diff --git a/vendor/github.com/hashicorp/consul/acl/policy.go b/vendor/github.com/hashicorp/consul/acl/policy.go new file mode 100644 index 0000000000..f7781b81e7 --- /dev/null +++ b/vendor/github.com/hashicorp/consul/acl/policy.go @@ -0,0 +1,191 @@ +package acl + +import ( + "fmt" + + "github.com/hashicorp/hcl" +) + +const ( + PolicyDeny = "deny" + PolicyRead = "read" + PolicyWrite = "write" +) + +// Policy is used to represent the policy specified by +// an ACL configuration. +type Policy struct { + ID string `hcl:"-"` + Agents []*AgentPolicy `hcl:"agent,expand"` + Keys []*KeyPolicy `hcl:"key,expand"` + Nodes []*NodePolicy `hcl:"node,expand"` + Services []*ServicePolicy `hcl:"service,expand"` + Sessions []*SessionPolicy `hcl:"session,expand"` + Events []*EventPolicy `hcl:"event,expand"` + PreparedQueries []*PreparedQueryPolicy `hcl:"query,expand"` + Keyring string `hcl:"keyring"` + Operator string `hcl:"operator"` +} + +// AgentPolicy represents a policy for working with agent endpoints on nodes +// with specific name prefixes. +type AgentPolicy struct { + Node string `hcl:",key"` + Policy string +} + +func (a *AgentPolicy) GoString() string { + return fmt.Sprintf("%#v", *a) +} + +// KeyPolicy represents a policy for a key +type KeyPolicy struct { + Prefix string `hcl:",key"` + Policy string +} + +func (k *KeyPolicy) GoString() string { + return fmt.Sprintf("%#v", *k) +} + +// NodePolicy represents a policy for a node +type NodePolicy struct { + Name string `hcl:",key"` + Policy string +} + +func (n *NodePolicy) GoString() string { + return fmt.Sprintf("%#v", *n) +} + +// ServicePolicy represents a policy for a service +type ServicePolicy struct { + Name string `hcl:",key"` + Policy string +} + +func (s *ServicePolicy) GoString() string { + return fmt.Sprintf("%#v", *s) +} + +// SessionPolicy represents a policy for making sessions tied to specific node +// name prefixes. +type SessionPolicy struct { + Node string `hcl:",key"` + Policy string +} + +func (s *SessionPolicy) GoString() string { + return fmt.Sprintf("%#v", *s) +} + +// EventPolicy represents a user event policy. 
+type EventPolicy struct { + Event string `hcl:",key"` + Policy string +} + +func (e *EventPolicy) GoString() string { + return fmt.Sprintf("%#v", *e) +} + +// PreparedQueryPolicy represents a prepared query policy. +type PreparedQueryPolicy struct { + Prefix string `hcl:",key"` + Policy string +} + +func (p *PreparedQueryPolicy) GoString() string { + return fmt.Sprintf("%#v", *p) +} + +// isPolicyValid makes sure the given string matches one of the valid policies. +func isPolicyValid(policy string) bool { + switch policy { + case PolicyDeny: + return true + case PolicyRead: + return true + case PolicyWrite: + return true + default: + return false + } +} + +// Parse is used to parse the specified ACL rules into an +// intermediary set of policies, before being compiled into +// the ACL +func Parse(rules string) (*Policy, error) { + // Decode the rules + p := &Policy{} + if rules == "" { + // Hot path for empty rules + return p, nil + } + + if err := hcl.Decode(p, rules); err != nil { + return nil, fmt.Errorf("Failed to parse ACL rules: %v", err) + } + + // Validate the agent policy + for _, ap := range p.Agents { + if !isPolicyValid(ap.Policy) { + return nil, fmt.Errorf("Invalid agent policy: %#v", ap) + } + } + + // Validate the key policy + for _, kp := range p.Keys { + if !isPolicyValid(kp.Policy) { + return nil, fmt.Errorf("Invalid key policy: %#v", kp) + } + } + + // Validate the node policies + for _, np := range p.Nodes { + if !isPolicyValid(np.Policy) { + return nil, fmt.Errorf("Invalid node policy: %#v", np) + } + } + + // Validate the service policies + for _, sp := range p.Services { + if !isPolicyValid(sp.Policy) { + return nil, fmt.Errorf("Invalid service policy: %#v", sp) + } + } + + // Validate the session policies + for _, sp := range p.Sessions { + if !isPolicyValid(sp.Policy) { + return nil, fmt.Errorf("Invalid session policy: %#v", sp) + } + } + + // Validate the user event policies + for _, ep := range p.Events { + if !isPolicyValid(ep.Policy) { + return nil, fmt.Errorf("Invalid event policy: %#v", ep) + } + } + + // Validate the prepared query policies + for _, pq := range p.PreparedQueries { + if !isPolicyValid(pq.Policy) { + return nil, fmt.Errorf("Invalid query policy: %#v", pq) + } + } + + // Validate the keyring policy - this one is allowed to be empty + if p.Keyring != "" && !isPolicyValid(p.Keyring) { + return nil, fmt.Errorf("Invalid keyring policy: %#v", p.Keyring) + } + + // Validate the operator policy - this one is allowed to be empty + if p.Operator != "" && !isPolicyValid(p.Operator) { + return nil, fmt.Errorf("Invalid operator policy: %#v", p.Operator) + } + + return p, nil +} diff --git a/vendor/github.com/hashicorp/consul/consul/structs/operator.go b/vendor/github.com/hashicorp/consul/consul/structs/operator.go new file mode 100644 index 0000000000..d564400bf9 --- /dev/null +++ b/vendor/github.com/hashicorp/consul/consul/structs/operator.go @@ -0,0 +1,57 @@ +package structs + +import ( + "github.com/hashicorp/raft" +) + +// RaftServer has information about a server in the Raft configuration. +type RaftServer struct { + // ID is the unique ID for the server. These are currently the same + // as the address, but they will be changed to a real GUID in a future + // release of Consul. + ID raft.ServerID + + // Node is the node name of the server, as known by Consul, or this + // will be set to "(unknown)" otherwise. + Node string + + // Address is the IP:port of the server, used for Raft communications. 
+ Address raft.ServerAddress + + // Leader is true if this server is the current cluster leader. + Leader bool + + // Voter is true if this server has a vote in the cluster. This might + // be false if the server is staging and still coming online, or if + // it's a non-voting server, which will be added in a future release of + // Consul. + Voter bool +} + +// RaftConfigurationResponse is returned when querying for the current Raft +// configuration. +type RaftConfigurationResponse struct { + // Servers has the list of servers in the Raft configuration. + Servers []*RaftServer + + // Index has the Raft index of this configuration. + Index uint64 +} + +// RaftPeerByAddressRequest is used by the Operator endpoint to apply a Raft +// operation on a specific Raft peer by address in the form of "IP:port". +type RaftPeerByAddressRequest struct { + // Datacenter is the target this request is intended for. + Datacenter string + + // Address is the peer to remove, in the form "IP:port". + Address raft.ServerAddress + + // WriteRequest holds the ACL token to go along with this request. + WriteRequest +} + +// RequestDatacenter returns the datacenter for a given request. +func (op *RaftPeerByAddressRequest) RequestDatacenter() string { + return op.Datacenter +} diff --git a/vendor/github.com/hashicorp/consul/consul/structs/prepared_query.go b/vendor/github.com/hashicorp/consul/consul/structs/prepared_query.go new file mode 100644 index 0000000000..af535f010b --- /dev/null +++ b/vendor/github.com/hashicorp/consul/consul/structs/prepared_query.go @@ -0,0 +1,257 @@ +package structs + +// QueryDatacenterOptions sets options about how we fail over if there are no +// healthy nodes in the local datacenter. +type QueryDatacenterOptions struct { + // NearestN is set to the number of remote datacenters to try, based on + // network coordinates. + NearestN int + + // Datacenters is a fixed list of datacenters to try after NearestN. We + // never try a datacenter multiple times, so those are subtracted from + // this list before proceeding. + Datacenters []string +} + +// QueryDNSOptions controls settings when query results are served over DNS. +type QueryDNSOptions struct { + // TTL is the time to live for the served DNS results. + TTL string +} + +// ServiceQuery is used to query for a set of healthy nodes offering a specific +// service. +type ServiceQuery struct { + // Service is the service to query. + Service string + + // Failover controls what we do if there are no healthy nodes in the + // local datacenter. + Failover QueryDatacenterOptions + + // If OnlyPassing is true then we will only include nodes with passing + // health checks (critical AND warning checks will cause a node to be + // discarded) + OnlyPassing bool + + // Near allows the query to always prefer the node nearest the given + // node. If the node does not exist, results are returned in their + // normal randomly-shuffled order. Supplying the magic "_agent" value + // is supported to sort near the agent which initiated the request. + Near string + + // Tags are a set of required and/or disallowed tags. If a tag is in + // this list it must be present. If the tag is preceded with "!" then + // it is disallowed. + Tags []string + + // NodeMeta is a map of required node metadata fields. If a key/value + // pair is in this map it must be present on the node in order for the + // service entry to be returned.
+ NodeMeta map[string]string +} + +const ( + // QueryTemplateTypeNamePrefixMatch uses the Name field of the query as + // a prefix to select the template. + QueryTemplateTypeNamePrefixMatch = "name_prefix_match" +) + +// QueryTemplateOptions controls settings if this query is a template. +type QueryTemplateOptions struct { + // Type, if non-empty, means that this query is a template. This is + // set to one of the QueryTemplateType* constants above. + Type string + + // Regexp is an optional regular expression to use to parse the full + // name, once the prefix match has selected a template. This can be + // used to extract parts of the name and choose a service name, set + // tags, etc. + Regexp string +} + +// PreparedQuery defines a complete prepared query, and is the structure we +// maintain in the state store. +type PreparedQuery struct { + // ID is the UUID-based ID for the query, always generated by Consul. + ID string + + // Name is an optional friendly name for the query supplied by the + // user. NOTE - if this feature is used then it will reduce the security + // of any read ACL associated with this query/service since this name + // can be used to locate nodes without supplying any ACL. + Name string + + // Session is an optional session to tie this query's lifetime to. If + // this is omitted then the query will not expire. + Session string + + // Token is the ACL token used when the query was created, and it is + // used when a query is subsequently executed. This token, or a token + // with management privileges, must be used to change the query later. + Token string + + // Template is used to configure this query as a template, which will + // respond to queries based on the Name, and then will be rendered + // before it is executed. + Template QueryTemplateOptions + + // Service defines a service query (leaving things open for other types + // later). + Service ServiceQuery + + // DNS has options that control how the results of this query are + // served over DNS. + DNS QueryDNSOptions + + RaftIndex +} + +// GetACLPrefix returns the prefix to look up the prepared_query ACL policy for +// this query, and whether the prefix applies to this query. You always need to +// check the ok value before using the prefix. +func (pq *PreparedQuery) GetACLPrefix() (string, bool) { + if pq.Name != "" || pq.Template.Type != "" { + return pq.Name, true + } + + return "", false +} + +type PreparedQueries []*PreparedQuery + +type IndexedPreparedQueries struct { + Queries PreparedQueries + QueryMeta +} + +type PreparedQueryOp string + +const ( + PreparedQueryCreate PreparedQueryOp = "create" + PreparedQueryUpdate PreparedQueryOp = "update" + PreparedQueryDelete PreparedQueryOp = "delete" +) + +// PreparedQueryRequest is used to create or change prepared queries. +type PreparedQueryRequest struct { + // Datacenter is the target this request is intended for. + Datacenter string + + // Op is the operation to apply. + Op PreparedQueryOp + + // Query is the query itself. + Query *PreparedQuery + + // WriteRequest holds the ACL token to go along with this request. + WriteRequest +} + +// RequestDatacenter returns the datacenter for a given request. +func (q *PreparedQueryRequest) RequestDatacenter() string { + return q.Datacenter +} + +// PreparedQuerySpecificRequest is used to get information about a prepared +// query. +type PreparedQuerySpecificRequest struct { + // Datacenter is the target this request is intended for. + Datacenter string + + // QueryID is the ID of a query.
+	QueryID string
+
+	// QueryOptions (unfortunately named here) controls the consistency
+	// settings for the query lookup itself, as well as the service lookups.
+	QueryOptions
+}
+
+// RequestDatacenter returns the datacenter for a given request.
+func (q *PreparedQuerySpecificRequest) RequestDatacenter() string {
+	return q.Datacenter
+}
+
+// PreparedQueryExecuteRequest is used to execute a prepared query.
+type PreparedQueryExecuteRequest struct {
+	// Datacenter is the target this request is intended for.
+	Datacenter string
+
+	// QueryIDOrName is the ID of a query _or_ the name of one, either can
+	// be provided.
+	QueryIDOrName string
+
+	// Limit will trim the resulting list down to the given limit.
+	Limit int
+
+	// Source is used to sort the results relative to a given node using
+	// network coordinates.
+	Source QuerySource
+
+	// Agent is used to carry around a reference to the agent which initiated
+	// the execute request. Used to distance-sort relative to the local node.
+	Agent QuerySource
+
+	// QueryOptions (unfortunately named here) controls the consistency
+	// settings for the query lookup itself, as well as the service lookups.
+	QueryOptions
+}
+
+// RequestDatacenter returns the datacenter for a given request.
+func (q *PreparedQueryExecuteRequest) RequestDatacenter() string {
+	return q.Datacenter
+}
+
+// PreparedQueryExecuteRemoteRequest is used when running a local query in a
+// remote datacenter.
+type PreparedQueryExecuteRemoteRequest struct {
+	// Datacenter is the target this request is intended for.
+	Datacenter string
+
+	// Query is a copy of the query to execute. We have to ship the entire
+	// query over since it won't be present in the remote state store.
+	Query PreparedQuery
+
+	// Limit will trim the resulting list down to the given limit.
+	Limit int
+
+	// QueryOptions (unfortunately named here) controls the consistency
+	// settings for the service lookups.
+	QueryOptions
+}
+
+// RequestDatacenter returns the datacenter for a given request.
+func (q *PreparedQueryExecuteRemoteRequest) RequestDatacenter() string {
+	return q.Datacenter
+}
+
+// PreparedQueryExecuteResponse has the results of executing a query.
+type PreparedQueryExecuteResponse struct {
+	// Service is the service that was queried.
+	Service string
+
+	// Nodes has the nodes that were output by the query.
+	Nodes CheckServiceNodes
+
+	// DNS has the options for serving these results over DNS.
+	DNS QueryDNSOptions
+
+	// Datacenter is the datacenter that these results came from.
+	Datacenter string
+
+	// Failovers is a count of how many times we had to query a remote
+	// datacenter.
+	Failovers int
+
+	// QueryMeta has freshness information about the query.
+	QueryMeta
+}
+
+// PreparedQueryExplainResponse has the results when explaining a query.
+type PreparedQueryExplainResponse struct {
+	// Query has the fully-rendered query.
+	Query PreparedQuery
+
+	// QueryMeta has freshness information about the query.
+	QueryMeta
+}
diff --git a/vendor/github.com/hashicorp/consul/consul/structs/snapshot.go b/vendor/github.com/hashicorp/consul/consul/structs/snapshot.go
new file mode 100644
index 0000000000..3d65e317f0
--- /dev/null
+++ b/vendor/github.com/hashicorp/consul/consul/structs/snapshot.go
@@ -0,0 +1,40 @@
+package structs
+
+type SnapshotOp int
+
+const (
+	SnapshotSave SnapshotOp = iota
+	SnapshotRestore
+)
+
+// SnapshotRequest is used as a header for a snapshot RPC request.
This will
+// precede any streaming data that's part of the request and is JSON-encoded on
+// the wire.
+type SnapshotRequest struct {
+	// Datacenter is the target datacenter for this request. The request
+	// will be forwarded if necessary.
+	Datacenter string
+
+	// Token is the ACL token to use for the operation. If ACLs are enabled
+	// then all operations require a management token.
+	Token string
+
+	// If set, any follower can service the request. Results may be
+	// arbitrarily stale. Only applies to SnapshotSave.
+	AllowStale bool
+
+	// Op is the operation code for the RPC.
+	Op SnapshotOp
+}
+
+// SnapshotResponse is used as a header for a snapshot RPC response. This will
+// precede any streaming data that's part of the request and is JSON-encoded on
+// the wire.
+type SnapshotResponse struct {
+	// Error is the overall error status of the RPC request.
+	Error string
+
+	// QueryMeta has freshness information about the server that handled the
+	// request. It is only filled in for a SnapshotSave.
+	QueryMeta
+}
diff --git a/vendor/github.com/hashicorp/consul/consul/structs/structs.go b/vendor/github.com/hashicorp/consul/consul/structs/structs.go
new file mode 100644
index 0000000000..13c67b3d55
--- /dev/null
+++ b/vendor/github.com/hashicorp/consul/consul/structs/structs.go
@@ -0,0 +1,1041 @@
+package structs
+
+import (
+	"bytes"
+	"fmt"
+	"math/rand"
+	"reflect"
+	"regexp"
+	"strings"
+	"time"
+
+	"github.com/hashicorp/consul/acl"
+	"github.com/hashicorp/consul/types"
+	"github.com/hashicorp/go-msgpack/codec"
+	"github.com/hashicorp/serf/coordinate"
+)
+
+var (
+	ErrNoLeader  = fmt.Errorf("No cluster leader")
+	ErrNoDCPath  = fmt.Errorf("No path to datacenter")
+	ErrNoServers = fmt.Errorf("No known Consul servers")
+)
+
+type MessageType uint8
+
+// RaftIndex is used to track the index used while creating
+// or modifying a given struct type.
+type RaftIndex struct {
+	CreateIndex uint64
+	ModifyIndex uint64
+}
+
+const (
+	RegisterRequestType MessageType = iota
+	DeregisterRequestType
+	KVSRequestType
+	SessionRequestType
+	ACLRequestType
+	TombstoneRequestType
+	CoordinateBatchUpdateType
+	PreparedQueryRequestType
+	TxnRequestType
+)
+
+const (
+	// IgnoreUnknownTypeFlag is set along with a MessageType
+	// to indicate that the message type can be safely ignored
+	// if it is not recognized. This is for future proofing, so
+	// that new commands can be added in a way that won't cause
+	// old servers to crash when the FSM attempts to process them.
+	IgnoreUnknownTypeFlag MessageType = 128
+)
+
+const (
+	// HealthAny is special, and is used as a wild card,
+	// not as a specific state.
+	HealthAny      = "any"
+	HealthPassing  = "passing"
+	HealthWarning  = "warning"
+	HealthCritical = "critical"
+	HealthMaint    = "maintenance"
+)
+
+const (
+	// NodeMaint is the special key set by a node in maintenance mode.
+	NodeMaint = "_node_maintenance"
+
+	// ServiceMaintPrefix is the prefix for a service in maintenance mode.
+	ServiceMaintPrefix = "_service_maintenance:"
+)
+
+const (
+	// The meta key prefix reserved for Consul's internal use
+	metaKeyReservedPrefix = "consul-"
+
+	// The maximum number of metadata key pairs allowed to be registered
+	metaMaxKeyPairs = 64
+
+	// The maximum allowed length of a metadata key
+	metaKeyMaxLength = 128
+
+	// The maximum allowed length of a metadata value
+	metaValueMaxLength = 512
+)
+
+var (
+	// metaKeyFormat checks if a metadata key string is valid
+	metaKeyFormat = regexp.MustCompile(`^[a-zA-Z0-9_-]+$`).MatchString
+)
+
+func ValidStatus(s string) bool {
+	return s == HealthPassing ||
+		s == HealthWarning ||
+		s == HealthCritical
+}
+
+const (
+	// Client tokens have rules applied
+	ACLTypeClient = "client"
+
+	// Management tokens have an always allow policy.
+	// They are used for token management.
+	ACLTypeManagement = "management"
+)
+
+const (
+	// MaxLockDelay provides a maximum LockDelay value for
+	// a session. Any value above this will not be respected.
+	MaxLockDelay = 60 * time.Second
+)
+
+// RPCInfo is used to describe common information about a query
+type RPCInfo interface {
+	RequestDatacenter() string
+	IsRead() bool
+	AllowStaleRead() bool
+	ACLToken() string
+}
+
+// QueryOptions is used to specify various flags for read queries
+type QueryOptions struct {
+	// Token is the ACL token ID. If not provided, the 'anonymous'
+	// token is assumed for backwards compatibility.
+	Token string
+
+	// If set, wait until query exceeds given index. Must be provided
+	// with MaxQueryTime.
+	MinQueryIndex uint64
+
+	// Provided with MinQueryIndex to wait for change.
+	MaxQueryTime time.Duration
+
+	// If set, any follower can service the request. Results
+	// may be arbitrarily stale.
+	AllowStale bool
+
+	// If set, the leader must verify leadership prior to
+	// servicing the request. Prevents a stale read.
+	RequireConsistent bool
+}
+
+// QueryOptions only applies to reads, so IsRead is always true.
+func (q QueryOptions) IsRead() bool {
+	return true
+}
+
+func (q QueryOptions) AllowStaleRead() bool {
+	return q.AllowStale
+}
+
+func (q QueryOptions) ACLToken() string {
+	return q.Token
+}
+
+type WriteRequest struct {
+	// Token is the ACL token ID. If not provided, the 'anonymous'
+	// token is assumed for backwards compatibility.
+	Token string
+}
+
+// WriteRequest only applies to writes, so IsRead is always false.
+func (w WriteRequest) IsRead() bool {
+	return false
+}
+
+func (w WriteRequest) AllowStaleRead() bool {
+	return false
+}
+
+func (w WriteRequest) ACLToken() string {
+	return w.Token
+}
+
+// QueryMeta allows a query response to include potentially
+// useful metadata about a query
+type QueryMeta struct {
+	// This is the index associated with the read
+	Index uint64
+
+	// If AllowStale is used, this is time elapsed since
+	// last contact between the follower and leader. This
+	// can be used to gauge staleness.
+	LastContact time.Duration
+
+	// Used to indicate if there is a known leader node
+	KnownLeader bool
+}
+
+// RegisterRequest is used for the Catalog.Register endpoint
+// to register a node as providing a service. If no service
+// is provided, the node is registered.
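+//
+// As an illustrative sketch (not part of the vendored source), a minimal
+// registration of a node providing a service might look like:
+//
+//	req := RegisterRequest{
+//		Datacenter: "dc1",
+//		Node:       "node1",
+//		Address:    "127.0.0.1",
+//		Service:    &NodeService{ID: "redis", Service: "redis", Port: 6379},
+//	}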
+type RegisterRequest struct {
+	Datacenter      string
+	ID              types.NodeID
+	Node            string
+	Address         string
+	TaggedAddresses map[string]string
+	NodeMeta        map[string]string
+	Service         *NodeService
+	Check           *HealthCheck
+	Checks          HealthChecks
+	WriteRequest
+}
+
+func (r *RegisterRequest) RequestDatacenter() string {
+	return r.Datacenter
+}
+
+// ChangesNode returns true if the given register request changes the given
+// node, which can be nil. This only looks for changes to the node record itself,
+// not any of the health checks.
+func (r *RegisterRequest) ChangesNode(node *Node) bool {
+	// This means it's creating the node.
+	if node == nil {
+		return true
+	}
+
+	// Check if any of the node-level fields are being changed.
+	if r.ID != node.ID ||
+		r.Node != node.Node ||
+		r.Address != node.Address ||
+		!reflect.DeepEqual(r.TaggedAddresses, node.TaggedAddresses) ||
+		!reflect.DeepEqual(r.NodeMeta, node.Meta) {
+		return true
+	}
+
+	return false
+}
+
+// DeregisterRequest is used for the Catalog.Deregister endpoint
+// to deregister a node as providing a service. If no service is
+// provided the entire node is deregistered.
+type DeregisterRequest struct {
+	Datacenter string
+	Node       string
+	ServiceID  string
+	CheckID    types.CheckID
+	WriteRequest
+}
+
+func (r *DeregisterRequest) RequestDatacenter() string {
+	return r.Datacenter
+}
+
+// QuerySource is used to pass along information about the source node
+// in queries so that we can adjust the response based on its network
+// coordinates.
+type QuerySource struct {
+	Datacenter string
+	Node       string
+}
+
+// DCSpecificRequest is used to query about a specific DC
+type DCSpecificRequest struct {
+	Datacenter      string
+	NodeMetaFilters map[string]string
+	Source          QuerySource
+	QueryOptions
+}
+
+func (r *DCSpecificRequest) RequestDatacenter() string {
+	return r.Datacenter
+}
+
+// ServiceSpecificRequest is used to query about a specific service
+type ServiceSpecificRequest struct {
+	Datacenter      string
+	NodeMetaFilters map[string]string
+	ServiceName     string
+	ServiceTag      string
+	TagFilter       bool // Controls tag filtering
+	Source          QuerySource
+	QueryOptions
+}
+
+func (r *ServiceSpecificRequest) RequestDatacenter() string {
+	return r.Datacenter
+}
+
+// NodeSpecificRequest is used to request the information about a single node
+type NodeSpecificRequest struct {
+	Datacenter string
+	Node       string
+	QueryOptions
+}
+
+func (r *NodeSpecificRequest) RequestDatacenter() string {
+	return r.Datacenter
+}
+
+// ChecksInStateRequest is used to query for nodes in a state
+type ChecksInStateRequest struct {
+	Datacenter      string
+	NodeMetaFilters map[string]string
+	State           string
+	Source          QuerySource
+	QueryOptions
+}
+
+func (r *ChecksInStateRequest) RequestDatacenter() string {
+	return r.Datacenter
+}
+
+// Node is used to return information about a node
+type Node struct {
+	ID              types.NodeID
+	Node            string
+	Address         string
+	TaggedAddresses map[string]string
+	Meta            map[string]string
+
+	RaftIndex
+}
+type Nodes []*Node
+
+// ValidateMetadata validates a set of key/value pairs from the agent config
+func ValidateMetadata(meta map[string]string) error {
+	if len(meta) > metaMaxKeyPairs {
+		return fmt.Errorf("Node metadata cannot contain more than %d key/value pairs", metaMaxKeyPairs)
+	}
+
+	for key, value := range meta {
+		if err := validateMetaPair(key, value); err != nil {
+			return fmt.Errorf("Couldn't load metadata pair ('%s', '%s'): %s", key, value, err)
+		}
+	}
+
+	return nil
+}
+
+// validateMetaPair checks that the given key/value pair is in a valid format
+func
validateMetaPair(key, value string) error {
+	if key == "" {
+		return fmt.Errorf("Key cannot be blank")
+	}
+	if !metaKeyFormat(key) {
+		return fmt.Errorf("Key contains invalid characters")
+	}
+	if len(key) > metaKeyMaxLength {
+		return fmt.Errorf("Key is too long (limit: %d characters)", metaKeyMaxLength)
+	}
+	if strings.HasPrefix(key, metaKeyReservedPrefix) {
+		return fmt.Errorf("Key prefix '%s' is reserved for internal use", metaKeyReservedPrefix)
+	}
+	if len(value) > metaValueMaxLength {
+		return fmt.Errorf("Value is too long (limit: %d characters)", metaValueMaxLength)
+	}
+	return nil
+}
+
+// SatisfiesMetaFilters returns true if the metadata map contains the given filters
+func SatisfiesMetaFilters(meta map[string]string, filters map[string]string) bool {
+	for key, value := range filters {
+		if v, ok := meta[key]; !ok || v != value {
+			return false
+		}
+	}
+	return true
+}
+
+// Services is used to return information about provided services.
+// Maps service name to available tags
+type Services map[string][]string
+
+// ServiceNode represents a node that is part of a service. ID, Address,
+// TaggedAddresses, and NodeMeta are node-related fields that are always empty
+// in the state store and are filled in on the way out by parseServiceNodes().
+// This is also why PartialClone() skips them, because we know they are blank
+// already so it would be a waste of time to copy them.
+type ServiceNode struct {
+	ID                       types.NodeID
+	Node                     string
+	Address                  string
+	TaggedAddresses          map[string]string
+	NodeMeta                 map[string]string
+	ServiceID                string
+	ServiceName              string
+	ServiceTags              []string
+	ServiceAddress           string
+	ServicePort              int
+	ServiceEnableTagOverride bool
+
+	RaftIndex
+}
+
+// PartialClone() returns a clone of the given service node, minus the node-
+// related fields that get filled in later, Address and TaggedAddresses.
+func (s *ServiceNode) PartialClone() *ServiceNode {
+	tags := make([]string, len(s.ServiceTags))
+	copy(tags, s.ServiceTags)
+
+	return &ServiceNode{
+		// Skip ID, see above.
+		Node: s.Node,
+		// Skip Address, see above.
+		// Skip TaggedAddresses, see above.
+		ServiceID:                s.ServiceID,
+		ServiceName:              s.ServiceName,
+		ServiceTags:              tags,
+		ServiceAddress:           s.ServiceAddress,
+		ServicePort:              s.ServicePort,
+		ServiceEnableTagOverride: s.ServiceEnableTagOverride,
+		RaftIndex: RaftIndex{
+			CreateIndex: s.CreateIndex,
+			ModifyIndex: s.ModifyIndex,
+		},
+	}
+}
+
+// ToNodeService converts the given service node to a node service.
+func (s *ServiceNode) ToNodeService() *NodeService {
+	return &NodeService{
+		ID:                s.ServiceID,
+		Service:           s.ServiceName,
+		Tags:              s.ServiceTags,
+		Address:           s.ServiceAddress,
+		Port:              s.ServicePort,
+		EnableTagOverride: s.ServiceEnableTagOverride,
+		RaftIndex: RaftIndex{
+			CreateIndex: s.CreateIndex,
+			ModifyIndex: s.ModifyIndex,
+		},
+	}
+}
+
+type ServiceNodes []*ServiceNode
+
+// NodeService is a service provided by a node
+type NodeService struct {
+	ID                string
+	Service           string
+	Tags              []string
+	Address           string
+	Port              int
+	EnableTagOverride bool
+
+	RaftIndex
+}
+
+// IsSame checks if one NodeService is the same as another, without looking
+// at the Raft information (that's why we didn't call it IsEqual). This is
+// useful for seeing if an update would be idempotent for all the functional
+// parts of the structure.
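+//
+// For illustration (a sketch, not from the vendored source): two services
+// that differ only in their Raft indexes compare as the same, so applying
+// one over the other would be a no-op:
+//
+//	a := &NodeService{ID: "redis", Service: "redis", Port: 6379}
+//	b := &NodeService{ID: "redis", Service: "redis", Port: 6379,
+//		RaftIndex: RaftIndex{CreateIndex: 1, ModifyIndex: 2}}
+//	same := a.IsSame(b) // true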
+func (s *NodeService) IsSame(other *NodeService) bool { + if s.ID != other.ID || + s.Service != other.Service || + !reflect.DeepEqual(s.Tags, other.Tags) || + s.Address != other.Address || + s.Port != other.Port || + s.EnableTagOverride != other.EnableTagOverride { + return false + } + + return true +} + +// ToServiceNode converts the given node service to a service node. +func (s *NodeService) ToServiceNode(node string) *ServiceNode { + return &ServiceNode{ + // Skip ID, see ServiceNode definition. + Node: node, + // Skip Address, see ServiceNode definition. + // Skip TaggedAddresses, see ServiceNode definition. + ServiceID: s.ID, + ServiceName: s.Service, + ServiceTags: s.Tags, + ServiceAddress: s.Address, + ServicePort: s.Port, + ServiceEnableTagOverride: s.EnableTagOverride, + RaftIndex: RaftIndex{ + CreateIndex: s.CreateIndex, + ModifyIndex: s.ModifyIndex, + }, + } +} + +type NodeServices struct { + Node *Node + Services map[string]*NodeService +} + +// HealthCheck represents a single check on a given node +type HealthCheck struct { + Node string + CheckID types.CheckID // Unique per-node ID + Name string // Check name + Status string // The current check status + Notes string // Additional notes with the status + Output string // Holds output of script runs + ServiceID string // optional associated service + ServiceName string // optional service name + + RaftIndex +} + +// IsSame checks if one HealthCheck is the same as another, without looking +// at the Raft information (that's why we didn't call it IsEqual). This is +// useful for seeing if an update would be idempotent for all the functional +// parts of the structure. +func (c *HealthCheck) IsSame(other *HealthCheck) bool { + if c.Node != other.Node || + c.CheckID != other.CheckID || + c.Name != other.Name || + c.Status != other.Status || + c.Notes != other.Notes || + c.Output != other.Output || + c.ServiceID != other.ServiceID || + c.ServiceName != other.ServiceName { + return false + } + + return true +} + +// Clone returns a distinct clone of the HealthCheck. +func (c *HealthCheck) Clone() *HealthCheck { + clone := new(HealthCheck) + *clone = *c + return clone +} + +// HealthChecks is a collection of HealthCheck structs. +type HealthChecks []*HealthCheck + +// CheckServiceNode is used to provide the node, its service +// definition, as well as a HealthCheck that is associated. +type CheckServiceNode struct { + Node *Node + Service *NodeService + Checks HealthChecks +} +type CheckServiceNodes []CheckServiceNode + +// Shuffle does an in-place random shuffle using the Fisher-Yates algorithm. +func (nodes CheckServiceNodes) Shuffle() { + for i := len(nodes) - 1; i > 0; i-- { + j := rand.Int31n(int32(i + 1)) + nodes[i], nodes[j] = nodes[j], nodes[i] + } +} + +// Filter removes nodes that are failing health checks (and any non-passing +// check if that option is selected). Note that this returns the filtered +// results AND modifies the receiver for performance. +func (nodes CheckServiceNodes) Filter(onlyPassing bool) CheckServiceNodes { + n := len(nodes) +OUTER: + for i := 0; i < n; i++ { + node := nodes[i] + for _, check := range node.Checks { + if check.Status == HealthCritical || + (onlyPassing && check.Status != HealthPassing) { + nodes[i], nodes[n-1] = nodes[n-1], CheckServiceNode{} + n-- + i-- + continue OUTER + } + } + } + return nodes[:n] +} + +// NodeInfo is used to dump all associated information about +// a node. This is currently used for the UI only, as it is +// rather expensive to generate. 
+type NodeInfo struct { + ID types.NodeID + Node string + Address string + TaggedAddresses map[string]string + Meta map[string]string + Services []*NodeService + Checks HealthChecks +} + +// NodeDump is used to dump all the nodes with all their +// associated data. This is currently used for the UI only, +// as it is rather expensive to generate. +type NodeDump []*NodeInfo + +type IndexedNodes struct { + Nodes Nodes + QueryMeta +} + +type IndexedServices struct { + Services Services + QueryMeta +} + +type IndexedServiceNodes struct { + ServiceNodes ServiceNodes + QueryMeta +} + +type IndexedNodeServices struct { + NodeServices *NodeServices + QueryMeta +} + +type IndexedHealthChecks struct { + HealthChecks HealthChecks + QueryMeta +} + +type IndexedCheckServiceNodes struct { + Nodes CheckServiceNodes + QueryMeta +} + +type IndexedNodeDump struct { + Dump NodeDump + QueryMeta +} + +// DirEntry is used to represent a directory entry. This is +// used for values in our Key-Value store. +type DirEntry struct { + LockIndex uint64 + Key string + Flags uint64 + Value []byte + Session string `json:",omitempty"` + + RaftIndex +} + +// Returns a clone of the given directory entry. +func (d *DirEntry) Clone() *DirEntry { + return &DirEntry{ + LockIndex: d.LockIndex, + Key: d.Key, + Flags: d.Flags, + Value: d.Value, + Session: d.Session, + RaftIndex: RaftIndex{ + CreateIndex: d.CreateIndex, + ModifyIndex: d.ModifyIndex, + }, + } +} + +type DirEntries []*DirEntry + +type KVSOp string + +const ( + KVSSet KVSOp = "set" + KVSDelete = "delete" + KVSDeleteCAS = "delete-cas" // Delete with check-and-set + KVSDeleteTree = "delete-tree" + KVSCAS = "cas" // Check-and-set + KVSLock = "lock" // Lock a key + KVSUnlock = "unlock" // Unlock a key + + // The following operations are only available inside of atomic + // transactions via the Txn request. + KVSGet = "get" // Read the key during the transaction. + KVSGetTree = "get-tree" // Read all keys with the given prefix during the transaction. + KVSCheckSession = "check-session" // Check the session holds the key. + KVSCheckIndex = "check-index" // Check the modify index of the key. +) + +// IsWrite returns true if the given operation alters the state store. +func (op KVSOp) IsWrite() bool { + switch op { + case KVSGet, KVSGetTree, KVSCheckSession, KVSCheckIndex: + return false + + default: + return true + } +} + +// KVSRequest is used to operate on the Key-Value store +type KVSRequest struct { + Datacenter string + Op KVSOp // Which operation are we performing + DirEnt DirEntry // Which directory entry + WriteRequest +} + +func (r *KVSRequest) RequestDatacenter() string { + return r.Datacenter +} + +// KeyRequest is used to request a key, or key prefix +type KeyRequest struct { + Datacenter string + Key string + QueryOptions +} + +func (r *KeyRequest) RequestDatacenter() string { + return r.Datacenter +} + +// KeyListRequest is used to list keys +type KeyListRequest struct { + Datacenter string + Prefix string + Seperator string + QueryOptions +} + +func (r *KeyListRequest) RequestDatacenter() string { + return r.Datacenter +} + +type IndexedDirEntries struct { + Entries DirEntries + QueryMeta +} + +type IndexedKeyList struct { + Keys []string + QueryMeta +} + +type SessionBehavior string + +const ( + SessionKeysRelease SessionBehavior = "release" + SessionKeysDelete = "delete" +) + +const ( + SessionTTLMax = 24 * time.Hour + SessionTTLMultiplier = 2 +) + +// Session is used to represent an open session in the KV store. 
+// This is used to associate node checks with acquired locks.
+type Session struct {
+	ID        string
+	Name      string
+	Node      string
+	Checks    []types.CheckID
+	LockDelay time.Duration
+	Behavior  SessionBehavior // What to do when session is invalidated
+	TTL       string
+
+	RaftIndex
+}
+type Sessions []*Session
+
+type SessionOp string
+
+const (
+	SessionCreate  SessionOp = "create"
+	SessionDestroy           = "destroy"
+)
+
+// SessionRequest is used to operate on sessions
+type SessionRequest struct {
+	Datacenter string
+	Op         SessionOp // Which operation are we performing
+	Session    Session   // Which session
+	WriteRequest
+}
+
+func (r *SessionRequest) RequestDatacenter() string {
+	return r.Datacenter
+}
+
+// SessionSpecificRequest is used to request a session by ID
+type SessionSpecificRequest struct {
+	Datacenter string
+	Session    string
+	QueryOptions
+}
+
+func (r *SessionSpecificRequest) RequestDatacenter() string {
+	return r.Datacenter
+}
+
+type IndexedSessions struct {
+	Sessions Sessions
+	QueryMeta
+}
+
+// ACL is used to represent a token and its rules
+type ACL struct {
+	ID    string
+	Name  string
+	Type  string
+	Rules string
+
+	RaftIndex
+}
+type ACLs []*ACL
+
+type ACLOp string
+
+const (
+	ACLSet      ACLOp = "set"
+	ACLForceSet       = "force-set" // Deprecated, left for backwards compatibility
+	ACLDelete         = "delete"
+)
+
+// IsSame checks if one ACL is the same as another, without looking
+// at the Raft information (that's why we didn't call it IsEqual). This is
+// useful for seeing if an update would be idempotent for all the functional
+// parts of the structure.
+func (a *ACL) IsSame(other *ACL) bool {
+	if a.ID != other.ID ||
+		a.Name != other.Name ||
+		a.Type != other.Type ||
+		a.Rules != other.Rules {
+		return false
+	}
+
+	return true
+}
+
+// ACLRequest is used to create, update or delete an ACL
+type ACLRequest struct {
+	Datacenter string
+	Op         ACLOp
+	ACL        ACL
+	WriteRequest
+}
+
+func (r *ACLRequest) RequestDatacenter() string {
+	return r.Datacenter
+}
+
+// ACLRequests is a list of ACL change requests.
+type ACLRequests []*ACLRequest
+
+// ACLSpecificRequest is used to request an ACL by ID
+type ACLSpecificRequest struct {
+	Datacenter string
+	ACL        string
+	QueryOptions
+}
+
+func (r *ACLSpecificRequest) RequestDatacenter() string {
+	return r.Datacenter
+}
+
+// ACLPolicyRequest is used to request an ACL by ID, conditionally
+// filtering on an ID
+type ACLPolicyRequest struct {
+	Datacenter string
+	ACL        string
+	ETag       string
+	QueryOptions
+}
+
+func (r *ACLPolicyRequest) RequestDatacenter() string {
+	return r.Datacenter
+}
+
+type IndexedACLs struct {
+	ACLs ACLs
+	QueryMeta
+}
+
+type ACLPolicy struct {
+	ETag   string
+	Parent string
+	Policy *acl.Policy
+	TTL    time.Duration
+	QueryMeta
+}
+
+// ACLReplicationStatus provides information about the health of the ACL
+// replication system.
+type ACLReplicationStatus struct {
+	Enabled          bool
+	Running          bool
+	SourceDatacenter string
+	ReplicatedIndex  uint64
+	LastSuccess      time.Time
+	LastError        time.Time
+}
+
+// Coordinate stores a node name with its associated network coordinate.
+type Coordinate struct {
+	Node  string
+	Coord *coordinate.Coordinate
+}
+
+type Coordinates []*Coordinate
+
+// IndexedCoordinate is used to represent a single node's coordinate from the state
+// store.
+type IndexedCoordinate struct {
+	Coord *coordinate.Coordinate
+	QueryMeta
+}
+
+// IndexedCoordinates is used to represent a list of nodes and their
+// corresponding raw coordinates.
+type IndexedCoordinates struct { + Coordinates Coordinates + QueryMeta +} + +// DatacenterMap is used to represent a list of nodes with their raw coordinates, +// associated with a datacenter. +type DatacenterMap struct { + Datacenter string + Coordinates Coordinates +} + +// CoordinateUpdateRequest is used to update the network coordinate of a given +// node. +type CoordinateUpdateRequest struct { + Datacenter string + Node string + Coord *coordinate.Coordinate + WriteRequest +} + +// RequestDatacenter returns the datacenter for a given update request. +func (c *CoordinateUpdateRequest) RequestDatacenter() string { + return c.Datacenter +} + +// EventFireRequest is used to ask a server to fire +// a Serf event. It is a bit odd, since it doesn't depend on +// the catalog or leader. Any node can respond, so it's not quite +// like a standard write request. This is used only internally. +type EventFireRequest struct { + Datacenter string + Name string + Payload []byte + + // Not using WriteRequest so that any server can process + // the request. It is a bit unusual... + QueryOptions +} + +func (r *EventFireRequest) RequestDatacenter() string { + return r.Datacenter +} + +// EventFireResponse is used to respond to a fire request. +type EventFireResponse struct { + QueryMeta +} + +type TombstoneOp string + +const ( + TombstoneReap TombstoneOp = "reap" +) + +// TombstoneRequest is used to trigger a reaping of the tombstones +type TombstoneRequest struct { + Datacenter string + Op TombstoneOp + ReapIndex uint64 + WriteRequest +} + +func (r *TombstoneRequest) RequestDatacenter() string { + return r.Datacenter +} + +// msgpackHandle is a shared handle for encoding/decoding of structs +var msgpackHandle = &codec.MsgpackHandle{} + +// Decode is used to decode a MsgPack encoded object +func Decode(buf []byte, out interface{}) error { + return codec.NewDecoder(bytes.NewReader(buf), msgpackHandle).Decode(out) +} + +// Encode is used to encode a MsgPack object with type prefix +func Encode(t MessageType, msg interface{}) ([]byte, error) { + var buf bytes.Buffer + buf.WriteByte(uint8(t)) + err := codec.NewEncoder(&buf, msgpackHandle).Encode(msg) + return buf.Bytes(), err +} + +// CompoundResponse is an interface for gathering multiple responses. It is +// used in cross-datacenter RPC calls where more than 1 datacenter is +// expected to reply. +type CompoundResponse interface { + // Add adds a new response to the compound response + Add(interface{}) + + // New returns an empty response object which can be passed around by + // reference, and then passed to Add() later on. + New() interface{} +} + +type KeyringOp string + +const ( + KeyringList KeyringOp = "list" + KeyringInstall = "install" + KeyringUse = "use" + KeyringRemove = "remove" +) + +// KeyringRequest encapsulates a request to modify an encryption keyring. +// It can be used for install, remove, or use key type operations. +type KeyringRequest struct { + Operation KeyringOp + Key string + Datacenter string + Forwarded bool + RelayFactor uint8 + QueryOptions +} + +func (r *KeyringRequest) RequestDatacenter() string { + return r.Datacenter +} + +// KeyringResponse is a unified key response and can be used for install, +// remove, use, as well as listing key queries. +type KeyringResponse struct { + WAN bool + Datacenter string + Messages map[string]string `json:",omitempty"` + Keys map[string]int + NumNodes int + Error string `json:",omitempty"` +} + +// KeyringResponses holds multiple responses to keyring queries. 
Each +// datacenter replies independently, and KeyringResponses is used as a +// container for the set of all responses. +type KeyringResponses struct { + Responses []*KeyringResponse + QueryMeta +} + +func (r *KeyringResponses) Add(v interface{}) { + val := v.(*KeyringResponses) + r.Responses = append(r.Responses, val.Responses...) +} + +func (r *KeyringResponses) New() interface{} { + return new(KeyringResponses) +} diff --git a/vendor/github.com/hashicorp/consul/consul/structs/txn.go b/vendor/github.com/hashicorp/consul/consul/structs/txn.go new file mode 100644 index 0000000000..3f8035b97e --- /dev/null +++ b/vendor/github.com/hashicorp/consul/consul/structs/txn.go @@ -0,0 +1,85 @@ +package structs + +import ( + "fmt" +) + +// TxnKVOp is used to define a single operation on the KVS inside a +// transaction +type TxnKVOp struct { + Verb KVSOp + DirEnt DirEntry +} + +// TxnKVResult is used to define the result of a single operation on the KVS +// inside a transaction. +type TxnKVResult *DirEntry + +// TxnOp is used to define a single operation inside a transaction. Only one +// of the types should be filled out per entry. +type TxnOp struct { + KV *TxnKVOp +} + +// TxnOps is a list of operations within a transaction. +type TxnOps []*TxnOp + +// TxnRequest is used to apply multiple operations to the state store in a +// single transaction +type TxnRequest struct { + Datacenter string + Ops TxnOps + WriteRequest +} + +func (r *TxnRequest) RequestDatacenter() string { + return r.Datacenter +} + +// TxnReadRequest is used as a fast path for read-only transactions that don't +// modify the state store. +type TxnReadRequest struct { + Datacenter string + Ops TxnOps + QueryOptions +} + +func (r *TxnReadRequest) RequestDatacenter() string { + return r.Datacenter +} + +// TxnError is used to return information about an error for a specific +// operation. +type TxnError struct { + OpIndex int + What string +} + +// Error returns the string representation of an atomic error. +func (e TxnError) Error() string { + return fmt.Sprintf("op %d: %s", e.OpIndex, e.What) +} + +// TxnErrors is a list of TxnError entries. +type TxnErrors []*TxnError + +// TxnResult is used to define the result of a given operation inside a +// transaction. Only one of the types should be filled out per entry. +type TxnResult struct { + KV TxnKVResult +} + +// TxnResults is a list of TxnResult entries. +type TxnResults []*TxnResult + +// TxnResponse is the structure returned by a TxnRequest. +type TxnResponse struct { + Results TxnResults + Errors TxnErrors +} + +// TxnReadResponse is the structure returned by a TxnReadRequest. +type TxnReadResponse struct { + TxnResponse + QueryMeta +} diff --git a/vendor/github.com/hashicorp/consul/testutil/README.md b/vendor/github.com/hashicorp/consul/testutil/README.md new file mode 100644 index 0000000000..21eb01d2a7 --- /dev/null +++ b/vendor/github.com/hashicorp/consul/testutil/README.md @@ -0,0 +1,65 @@ +Consul Testing Utilities +======================== + +This package provides some generic helpers to facilitate testing in Consul. + +TestServer +========== + +TestServer is a harness for managing Consul agents and initializing them with +test data. Using it, you can form test clusters, create services, add health +checks, manipulate the K/V store, etc. This test harness is completely decoupled +from Consul's core and API client, meaning it can be easily imported and used in +external unit tests for various applications. 
It works by invoking the Consul
+CLI, which means the `consul` binary must be installed and available on the
+`$PATH`.
+
+The following is an example usage:
+
+```go
+package my_program
+
+import (
+	"testing"
+
+	"github.com/hashicorp/consul/consul/structs"
+	"github.com/hashicorp/consul/testutil"
+)
+
+func TestMain(t *testing.T) {
+	// Create a test Consul server
+	srv1 := testutil.NewTestServer(t)
+	defer srv1.Stop()
+
+	// Create a secondary server, passing in configuration
+	// to avoid bootstrapping as we are forming a cluster.
+	srv2 := testutil.NewTestServerConfig(t, func(c *testutil.TestServerConfig) {
+		c.Bootstrap = false
+	})
+	defer srv2.Stop()
+
+	// Join the servers together
+	srv1.JoinLAN(srv2.LANAddr)
+
+	// Create a test key/value pair
+	srv1.SetKV("foo", []byte("bar"))
+
+	// Create lots of test key/value pairs
+	srv1.PopulateKV(map[string][]byte{
+		"bar": []byte("123"),
+		"baz": []byte("456"),
+	})
+
+	// Create a service
+	srv1.AddService("redis", structs.HealthPassing, []string{"master"})
+
+	// Create a service check
+	srv1.AddCheck("service:redis", "redis", structs.HealthPassing)
+
+	// Create a node check
+	srv1.AddCheck("mem", "", structs.HealthCritical)
+
+	// The HTTPAddr field contains the address of the Consul
+	// API on the new test server instance.
+	println(srv1.HTTPAddr)
+}
+```
diff --git a/vendor/github.com/hashicorp/consul/testutil/server.go b/vendor/github.com/hashicorp/consul/testutil/server.go
new file mode 100644
index 0000000000..7daa21ed60
--- /dev/null
+++ b/vendor/github.com/hashicorp/consul/testutil/server.go
@@ -0,0 +1,528 @@
+package testutil
+
+// TestServer is a test helper. It uses a fork/exec model to create
+// a test Consul server instance in the background and initialize it
+// with some data and/or services. The test server can then be used
+// to run a unit test, and offers an easy API to tear itself down
+// when the test has completed. The only prerequisite is to have a consul
+// binary available on the $PATH.
+//
+// This package does not use Consul's official API client. This is
+// because we use TestServer to test the API client, which would
+// otherwise cause an import cycle.
+
+import (
+	"bytes"
+	"encoding/base64"
+	"encoding/json"
+	"fmt"
+	"io"
+	"io/ioutil"
+	"net"
+	"net/http"
+	"os"
+	"os/exec"
+	"strconv"
+	"strings"
+
+	"github.com/hashicorp/consul/consul/structs"
+	"github.com/hashicorp/go-cleanhttp"
+)
+
+// TestPerformanceConfig configures the performance parameters.
+type TestPerformanceConfig struct {
+	RaftMultiplier uint `json:"raft_multiplier,omitempty"`
+}
+
+// TestPortConfig configures the various ports used for services
+// provided by the Consul server.
+type TestPortConfig struct {
+	DNS     int `json:"dns,omitempty"`
+	HTTP    int `json:"http,omitempty"`
+	RPC     int `json:"rpc,omitempty"`
+	SerfLan int `json:"serf_lan,omitempty"`
+	SerfWan int `json:"serf_wan,omitempty"`
+	Server  int `json:"server,omitempty"`
+}
+
+// TestAddressConfig contains the bind addresses for various
+// components of the Consul server.
+type TestAddressConfig struct {
+	HTTP string `json:"http,omitempty"`
+}
+
+// TestServerConfig is the main server configuration struct.
+type TestServerConfig struct {
+	NodeName          string                 `json:"node_name"`
+	NodeMeta          map[string]string      `json:"node_meta,omitempty"`
+	Performance       *TestPerformanceConfig `json:"performance,omitempty"`
+	Bootstrap         bool                   `json:"bootstrap,omitempty"`
+	Server            bool                   `json:"server,omitempty"`
+	DataDir           string                 `json:"data_dir,omitempty"`
+	Datacenter        string                 `json:"datacenter,omitempty"`
+	DisableCheckpoint bool                   `json:"disable_update_check"`
+	LogLevel          string                 `json:"log_level,omitempty"`
+	Bind              string                 `json:"bind_addr,omitempty"`
+	Addresses         *TestAddressConfig     `json:"addresses,omitempty"`
+	Ports             *TestPortConfig        `json:"ports,omitempty"`
+	ACLMasterToken    string                 `json:"acl_master_token,omitempty"`
+	ACLDatacenter     string                 `json:"acl_datacenter,omitempty"`
+	ACLDefaultPolicy  string                 `json:"acl_default_policy,omitempty"`
+	Encrypt           string                 `json:"encrypt,omitempty"`
+	Stdout, Stderr    io.Writer              `json:"-"`
+	Args              []string               `json:"-"`
+}
+
+// ServerConfigCallback is a function interface which can be
+// passed to NewTestServerConfig to modify the server config.
+type ServerConfigCallback func(c *TestServerConfig)
+
+// defaultServerConfig returns a new TestServerConfig struct
+// with random ports assigned by the kernel for all listeners.
+func defaultServerConfig() *TestServerConfig {
+	return &TestServerConfig{
+		NodeName:          fmt.Sprintf("node%d", randomPort()),
+		DisableCheckpoint: true,
+		Performance: &TestPerformanceConfig{
+			RaftMultiplier: 1,
+		},
+		Bootstrap: true,
+		Server:    true,
+		LogLevel:  "debug",
+		Bind:      "127.0.0.1",
+		Addresses: &TestAddressConfig{},
+		Ports: &TestPortConfig{
+			DNS:     randomPort(),
+			HTTP:    randomPort(),
+			RPC:     randomPort(),
+			SerfLan: randomPort(),
+			SerfWan: randomPort(),
+			Server:  randomPort(),
+		},
+	}
+}
+
+// randomPort asks the kernel for a random port to use.
+func randomPort() int {
+	l, err := net.Listen("tcp", "127.0.0.1:0")
+	if err != nil {
+		panic(err)
+	}
+	defer l.Close()
+	return l.Addr().(*net.TCPAddr).Port
+}
+
+// TestService is used to serialize a service definition.
+type TestService struct {
+	ID      string   `json:",omitempty"`
+	Name    string   `json:",omitempty"`
+	Tags    []string `json:",omitempty"`
+	Address string   `json:",omitempty"`
+	Port    int      `json:",omitempty"`
+}
+
+// TestCheck is used to serialize a check definition.
+type TestCheck struct {
+	ID        string `json:",omitempty"`
+	Name      string `json:",omitempty"`
+	ServiceID string `json:",omitempty"`
+	TTL       string `json:",omitempty"`
+}
+
+// TestingT is an interface wrapper around *testing.T.
+type TestingT interface {
+	Logf(format string, args ...interface{})
+	Errorf(format string, args ...interface{})
+	Fatalf(format string, args ...interface{})
+	Fatal(args ...interface{})
+	Skip(args ...interface{})
+}
+
+// TestKVResponse is what we use to decode KV data.
+type TestKVResponse struct {
+	Value string
+}
+
+// TestServer is the main server wrapper struct.
+type TestServer struct {
+	cmd    *exec.Cmd
+	Config *TestServerConfig
+	t      TestingT
+
+	HTTPAddr string
+	LANAddr  string
+	WANAddr  string
+
+	HttpClient *http.Client
+}
+
+// NewTestServer is an easy helper method to create a new Consul
+// test server with the most basic configuration.
+func NewTestServer(t TestingT) *TestServer {
+	return NewTestServerConfig(t, nil)
+}
+
+// NewTestServerConfig creates a new TestServer, and makes a call to
+// an optional callback function to modify the configuration.
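+//
+// For illustration (a sketch, not part of the vendored source), a server
+// that joins an existing cluster rather than bootstrapping, with quieter
+// logging, might be created as:
+//
+//	srv := NewTestServerConfig(t, func(c *TestServerConfig) {
+//		c.Bootstrap = false
+//		c.LogLevel = "warn"
+//	})
+//	defer srv.Stop()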
+func NewTestServerConfig(t TestingT, cb ServerConfigCallback) *TestServer { + if path, err := exec.LookPath("consul"); err != nil || path == "" { + t.Fatal("consul not found on $PATH - download and install " + + "consul or skip this test") + } + + dataDir, err := ioutil.TempDir("", "consul") + if err != nil { + t.Fatalf("err: %s", err) + } + + configFile, err := ioutil.TempFile(dataDir, "config") + if err != nil { + defer os.RemoveAll(dataDir) + t.Fatalf("err: %s", err) + } + + consulConfig := defaultServerConfig() + consulConfig.DataDir = dataDir + + if cb != nil { + cb(consulConfig) + } + + configContent, err := json.Marshal(consulConfig) + if err != nil { + t.Fatalf("err: %s", err) + } + + if _, err := configFile.Write(configContent); err != nil { + t.Fatalf("err: %s", err) + } + configFile.Close() + + stdout := io.Writer(os.Stdout) + if consulConfig.Stdout != nil { + stdout = consulConfig.Stdout + } + + stderr := io.Writer(os.Stderr) + if consulConfig.Stderr != nil { + stderr = consulConfig.Stderr + } + + // Start the server + args := []string{"agent", "-config-file", configFile.Name()} + args = append(args, consulConfig.Args...) + cmd := exec.Command("consul", args...) + cmd.Stdout = stdout + cmd.Stderr = stderr + if err := cmd.Start(); err != nil { + t.Fatalf("err: %s", err) + } + + var httpAddr string + var client *http.Client + if strings.HasPrefix(consulConfig.Addresses.HTTP, "unix://") { + httpAddr = consulConfig.Addresses.HTTP + trans := cleanhttp.DefaultTransport() + trans.Dial = func(_, _ string) (net.Conn, error) { + return net.Dial("unix", httpAddr[7:]) + } + client = &http.Client{ + Transport: trans, + } + } else { + httpAddr = fmt.Sprintf("127.0.0.1:%d", consulConfig.Ports.HTTP) + client = cleanhttp.DefaultClient() + } + + server := &TestServer{ + Config: consulConfig, + cmd: cmd, + t: t, + + HTTPAddr: httpAddr, + LANAddr: fmt.Sprintf("127.0.0.1:%d", consulConfig.Ports.SerfLan), + WANAddr: fmt.Sprintf("127.0.0.1:%d", consulConfig.Ports.SerfWan), + + HttpClient: client, + } + + // Wait for the server to be ready + if consulConfig.Bootstrap { + server.waitForLeader() + } else { + server.waitForAPI() + } + + return server +} + +// Stop stops the test Consul server, and removes the Consul data +// directory once we are done. +func (s *TestServer) Stop() { + defer os.RemoveAll(s.Config.DataDir) + + if err := s.cmd.Process.Kill(); err != nil { + s.t.Errorf("err: %s", err) + } + + // wait for the process to exit to be sure that the data dir can be + // deleted on all platforms. + s.cmd.Wait() +} + +// waitForAPI waits for only the agent HTTP endpoint to start +// responding. This is an indication that the agent has started, +// but will likely return before a leader is elected. +func (s *TestServer) waitForAPI() { + WaitForResult(func() (bool, error) { + resp, err := s.HttpClient.Get(s.url("/v1/agent/self")) + if err != nil { + return false, err + } + defer resp.Body.Close() + if err := s.requireOK(resp); err != nil { + return false, err + } + return true, nil + }, func(err error) { + defer s.Stop() + s.t.Fatalf("err: %s", err) + }) +} + +// waitForLeader waits for the Consul server's HTTP API to become +// available, and then waits for a known leader and an index of +// 1 or more to be observed to confirm leader election is done. +// It then waits to ensure the anti-entropy sync has completed. +func (s *TestServer) waitForLeader() { + var index int64 + WaitForResult(func() (bool, error) { + // Query the API and check the status code. 
+ url := s.url(fmt.Sprintf("/v1/catalog/nodes?index=%d&wait=2s", index)) + resp, err := s.HttpClient.Get(url) + if err != nil { + return false, err + } + defer resp.Body.Close() + if err := s.requireOK(resp); err != nil { + return false, err + } + + // Ensure we have a leader and a node registration. + if leader := resp.Header.Get("X-Consul-KnownLeader"); leader != "true" { + return false, fmt.Errorf("Consul leader status: %#v", leader) + } + index, err = strconv.ParseInt(resp.Header.Get("X-Consul-Index"), 10, 64) + if err != nil { + return false, fmt.Errorf("Consul index was bad: %v", err) + } + if index == 0 { + return false, fmt.Errorf("Consul index is 0") + } + + // Watch for the anti-entropy sync to finish. + var parsed []map[string]interface{} + dec := json.NewDecoder(resp.Body) + if err := dec.Decode(&parsed); err != nil { + return false, err + } + if len(parsed) < 1 { + return false, fmt.Errorf("No nodes") + } + taggedAddresses, ok := parsed[0]["TaggedAddresses"].(map[string]interface{}) + if !ok { + return false, fmt.Errorf("Missing tagged addresses") + } + if _, ok := taggedAddresses["lan"]; !ok { + return false, fmt.Errorf("No lan tagged addresses") + } + return true, nil + }, func(err error) { + defer s.Stop() + s.t.Fatalf("err: %s", err) + }) +} + +// url is a helper function which takes a relative URL and +// makes it into a proper URL against the local Consul server. +func (s *TestServer) url(path string) string { + return fmt.Sprintf("http://127.0.0.1:%d%s", s.Config.Ports.HTTP, path) +} + +// requireOK checks the HTTP response code and ensures it is acceptable. +func (s *TestServer) requireOK(resp *http.Response) error { + if resp.StatusCode != 200 { + return fmt.Errorf("Bad status code: %d", resp.StatusCode) + } + return nil +} + +// put performs a new HTTP PUT request. +func (s *TestServer) put(path string, body io.Reader) *http.Response { + req, err := http.NewRequest("PUT", s.url(path), body) + if err != nil { + s.t.Fatalf("err: %s", err) + } + resp, err := s.HttpClient.Do(req) + if err != nil { + s.t.Fatalf("err: %s", err) + } + if err := s.requireOK(resp); err != nil { + defer resp.Body.Close() + s.t.Fatal(err) + } + return resp +} + +// get performs a new HTTP GET request. +func (s *TestServer) get(path string) *http.Response { + resp, err := s.HttpClient.Get(s.url(path)) + if err != nil { + s.t.Fatalf("err: %s", err) + } + if err := s.requireOK(resp); err != nil { + defer resp.Body.Close() + s.t.Fatal(err) + } + return resp +} + +// encodePayload returns a new io.Reader wrapping the encoded contents +// of the payload, suitable for passing directly to a new request. +func (s *TestServer) encodePayload(payload interface{}) io.Reader { + var encoded bytes.Buffer + enc := json.NewEncoder(&encoded) + if err := enc.Encode(payload); err != nil { + s.t.Fatalf("err: %s", err) + } + return &encoded +} + +// JoinLAN is used to join nodes within the same datacenter. +func (s *TestServer) JoinLAN(addr string) { + resp := s.get("/v1/agent/join/" + addr) + resp.Body.Close() +} + +// JoinWAN is used to join remote datacenters together. +func (s *TestServer) JoinWAN(addr string) { + resp := s.get("/v1/agent/join/" + addr + "?wan=1") + resp.Body.Close() +} + +// SetKV sets an individual key in the K/V store. 
+func (s *TestServer) SetKV(key string, val []byte) { + resp := s.put("/v1/kv/"+key, bytes.NewBuffer(val)) + resp.Body.Close() +} + +// GetKV retrieves a single key and returns its value +func (s *TestServer) GetKV(key string) []byte { + resp := s.get("/v1/kv/" + key) + defer resp.Body.Close() + + raw, err := ioutil.ReadAll(resp.Body) + if err != nil { + s.t.Fatalf("err: %s", err) + } + + var result []*TestKVResponse + if err := json.Unmarshal(raw, &result); err != nil { + s.t.Fatalf("err: %s", err) + } + if len(result) < 1 { + s.t.Fatalf("key does not exist: %s", key) + } + + v, err := base64.StdEncoding.DecodeString(result[0].Value) + if err != nil { + s.t.Fatalf("err: %s", err) + } + + return v +} + +// PopulateKV fills the Consul KV with data from a generic map. +func (s *TestServer) PopulateKV(data map[string][]byte) { + for k, v := range data { + s.SetKV(k, v) + } +} + +// ListKV returns a list of keys present in the KV store. This will list all +// keys under the given prefix recursively and return them as a slice. +func (s *TestServer) ListKV(prefix string) []string { + resp := s.get("/v1/kv/" + prefix + "?keys") + defer resp.Body.Close() + + raw, err := ioutil.ReadAll(resp.Body) + if err != nil { + s.t.Fatalf("err: %s", err) + } + + var result []string + if err := json.Unmarshal(raw, &result); err != nil { + s.t.Fatalf("err: %s", err) + } + return result +} + +// AddService adds a new service to the Consul instance. It also +// automatically adds a health check with the given status, which +// can be one of "passing", "warning", or "critical". +func (s *TestServer) AddService(name, status string, tags []string) { + svc := &TestService{ + Name: name, + Tags: tags, + } + payload := s.encodePayload(svc) + s.put("/v1/agent/service/register", payload) + + chkName := "service:" + name + chk := &TestCheck{ + Name: chkName, + ServiceID: name, + TTL: "10m", + } + payload = s.encodePayload(chk) + s.put("/v1/agent/check/register", payload) + + switch status { + case structs.HealthPassing: + s.put("/v1/agent/check/pass/"+chkName, nil) + case structs.HealthWarning: + s.put("/v1/agent/check/warn/"+chkName, nil) + case structs.HealthCritical: + s.put("/v1/agent/check/fail/"+chkName, nil) + default: + s.t.Fatalf("Unrecognized status: %s", status) + } +} + +// AddCheck adds a check to the Consul instance. If the serviceID is +// left empty (""), then the check will be associated with the node. +// The check status may be "passing", "warning", or "critical". 
+func (s *TestServer) AddCheck(name, serviceID, status string) { + chk := &TestCheck{ + ID: name, + Name: name, + TTL: "10m", + } + if serviceID != "" { + chk.ServiceID = serviceID + } + + payload := s.encodePayload(chk) + s.put("/v1/agent/check/register", payload) + + switch status { + case structs.HealthPassing: + s.put("/v1/agent/check/pass/"+name, nil) + case structs.HealthWarning: + s.put("/v1/agent/check/warn/"+name, nil) + case structs.HealthCritical: + s.put("/v1/agent/check/fail/"+name, nil) + default: + s.t.Fatalf("Unrecognized status: %s", status) + } +} diff --git a/vendor/github.com/hashicorp/consul/testutil/wait.go b/vendor/github.com/hashicorp/consul/testutil/wait.go new file mode 100644 index 0000000000..bd240796ff --- /dev/null +++ b/vendor/github.com/hashicorp/consul/testutil/wait.go @@ -0,0 +1,62 @@ +package testutil + +import ( + "fmt" + "testing" + "time" + + "github.com/hashicorp/consul/consul/structs" +) + +type testFn func() (bool, error) +type errorFn func(error) + +const ( + baseWait = 1 * time.Millisecond + maxWait = 100 * time.Millisecond +) + +func WaitForResult(try testFn, fail errorFn) { + var err error + wait := baseWait + for retries := 100; retries > 0; retries-- { + var success bool + success, err = try() + if success { + time.Sleep(25 * time.Millisecond) + return + } + + time.Sleep(wait) + wait *= 2 + if wait > maxWait { + wait = maxWait + } + } + fail(err) +} + +type rpcFn func(string, interface{}, interface{}) error + +func WaitForLeader(t *testing.T, rpc rpcFn, dc string) structs.IndexedNodes { + var out structs.IndexedNodes + WaitForResult(func() (bool, error) { + // Ensure we have a leader and a node registration. + args := &structs.DCSpecificRequest{ + Datacenter: dc, + } + if err := rpc("Catalog.ListNodes", args, &out); err != nil { + return false, fmt.Errorf("Catalog.ListNodes failed: %v", err) + } + if !out.QueryMeta.KnownLeader { + return false, fmt.Errorf("No leader") + } + if out.Index == 0 { + return false, fmt.Errorf("Consul index is 0") + } + return true, nil + }, func(err error) { + t.Fatalf("failed to find leader: %v", err) + }) + return out +} diff --git a/vendor/github.com/hashicorp/consul/types/README.md b/vendor/github.com/hashicorp/consul/types/README.md new file mode 100644 index 0000000000..da662f4a1c --- /dev/null +++ b/vendor/github.com/hashicorp/consul/types/README.md @@ -0,0 +1,39 @@ +# Consul `types` Package + +The Go language has a strong type system built into the language. The +`types` package corrals named types into a single package that is terminal in +`go`'s import graph. The `types` package should not have any downstream +dependencies. Each subsystem that defines its own set of types exists in its +own file, but all types are defined in the same package. + +# Why + +> Everything should be made as simple as possible, but not simpler. + +`string` is a useful container and underlying type for identifiers, however +the `string` type is effectively opaque to the compiler in terms of how a +given string is intended to be used. For instance, there is nothing +preventing the following from happening: + +```go +// `map` of Widgets, looked up by ID +var widgetLookup map[string]*Widget +// ... +var widgetID string = "widgetID" +w, found := widgetLookup[widgetID] + +// Bad! 
+var widgetName string = "name of widget"
+w, found := widgetLookup[widgetName]
+```
+
+but this class of problem is entirely preventable:
+
+```go
+type WidgetID string
+var widgetLookup map[WidgetID]*Widget
+
+var widgetName string = "name of widget"
+w, found := widgetLookup[widgetName] // compile error: cannot use widgetName (type string) as type WidgetID
+```
+
+TL;DR: intentions and idioms aren't statically checked by compilers. The
+`types` package uses Go's strong type system to prevent this class of bug.
diff --git a/vendor/github.com/hashicorp/consul/types/checks.go b/vendor/github.com/hashicorp/consul/types/checks.go
new file mode 100644
index 0000000000..25a136b4f4
--- /dev/null
+++ b/vendor/github.com/hashicorp/consul/types/checks.go
@@ -0,0 +1,5 @@
+package types
+
+// CheckID is a strongly typed string used to uniquely represent a Consul
+// Check on an Agent (a CheckID is not globally unique).
+type CheckID string
diff --git a/vendor/github.com/hashicorp/consul/types/node_id.go b/vendor/github.com/hashicorp/consul/types/node_id.go
new file mode 100644
index 0000000000..c0588ed421
--- /dev/null
+++ b/vendor/github.com/hashicorp/consul/types/node_id.go
@@ -0,0 +1,4 @@
+package types
+
+// NodeID is a unique identifier for a node across space and time.
+type NodeID string
diff --git a/vendor/github.com/hashicorp/go-sockaddr/LICENSE b/vendor/github.com/hashicorp/go-sockaddr/LICENSE
new file mode 100644
index 0000000000..a612ad9813
--- /dev/null
+++ b/vendor/github.com/hashicorp/go-sockaddr/LICENSE
@@ -0,0 +1,373 @@
+Mozilla Public License Version 2.0
+==================================
+
+1. Definitions
+--------------
+
+1.1. "Contributor"
+    means each individual or legal entity that creates, contributes to
+    the creation of, or owns Covered Software.
+
+1.2. "Contributor Version"
+    means the combination of the Contributions of others (if any) used
+    by a Contributor and that particular Contributor's Contribution.
+
+1.3. "Contribution"
+    means Covered Software of a particular Contributor.
+
+1.4. "Covered Software"
+    means Source Code Form to which the initial Contributor has attached
+    the notice in Exhibit A, the Executable Form of such Source Code
+    Form, and Modifications of such Source Code Form, in each case
+    including portions thereof.
+
+1.5. "Incompatible With Secondary Licenses"
+    means
+
+    (a) that the initial Contributor has attached the notice described
+        in Exhibit B to the Covered Software; or
+
+    (b) that the Covered Software was made available under the terms of
+        version 1.1 or earlier of the License, but not also under the
+        terms of a Secondary License.
+
+1.6. "Executable Form"
+    means any form of the work other than Source Code Form.
+
+1.7. "Larger Work"
+    means a work that combines Covered Software with other material, in
+    a separate file or files, that is not Covered Software.
+
+1.8. "License"
+    means this document.
+
+1.9. "Licensable"
+    means having the right to grant, to the maximum extent possible,
+    whether at the time of the initial grant or subsequently, any and
+    all of the rights conveyed by this License.
+
+1.10. "Modifications"
+    means any of the following:
+
+    (a) any file in Source Code Form that results from an addition to,
+        deletion from, or modification of the contents of Covered
+        Software; or
+
+    (b) any new file in Source Code Form that contains any Covered
+        Software.
+
+1.11.
"Patent Claims" of a Contributor + means any patent claim(s), including without limitation, method, + process, and apparatus claims, in any patent Licensable by such + Contributor that would be infringed, but for the grant of the + License, by the making, using, selling, offering for sale, having + made, import, or transfer of either its Contributions or its + Contributor Version. + +1.12. "Secondary License" + means either the GNU General Public License, Version 2.0, the GNU + Lesser General Public License, Version 2.1, the GNU Affero General + Public License, Version 3.0, or any later versions of those + licenses. + +1.13. "Source Code Form" + means the form of the work preferred for making modifications. + +1.14. "You" (or "Your") + means an individual or a legal entity exercising rights under this + License. For legal entities, "You" includes any entity that + controls, is controlled by, or is under common control with You. For + purposes of this definition, "control" means (a) the power, direct + or indirect, to cause the direction or management of such entity, + whether by contract or otherwise, or (b) ownership of more than + fifty percent (50%) of the outstanding shares or beneficial + ownership of such entity. + +2. License Grants and Conditions +-------------------------------- + +2.1. Grants + +Each Contributor hereby grants You a world-wide, royalty-free, +non-exclusive license: + +(a) under intellectual property rights (other than patent or trademark) + Licensable by such Contributor to use, reproduce, make available, + modify, display, perform, distribute, and otherwise exploit its + Contributions, either on an unmodified basis, with Modifications, or + as part of a Larger Work; and + +(b) under Patent Claims of such Contributor to make, use, sell, offer + for sale, have made, import, and otherwise transfer either its + Contributions or its Contributor Version. + +2.2. Effective Date + +The licenses granted in Section 2.1 with respect to any Contribution +become effective for each Contribution on the date the Contributor first +distributes such Contribution. + +2.3. Limitations on Grant Scope + +The licenses granted in this Section 2 are the only rights granted under +this License. No additional rights or licenses will be implied from the +distribution or licensing of Covered Software under this License. +Notwithstanding Section 2.1(b) above, no patent license is granted by a +Contributor: + +(a) for any code that a Contributor has removed from Covered Software; + or + +(b) for infringements caused by: (i) Your and any other third party's + modifications of Covered Software, or (ii) the combination of its + Contributions with other software (except as part of its Contributor + Version); or + +(c) under Patent Claims infringed by Covered Software in the absence of + its Contributions. + +This License does not grant any rights in the trademarks, service marks, +or logos of any Contributor (except as may be necessary to comply with +the notice requirements in Section 3.4). + +2.4. Subsequent Licenses + +No Contributor makes additional grants as a result of Your choice to +distribute the Covered Software under a subsequent version of this +License (see Section 10.2) or under the terms of a Secondary License (if +permitted under the terms of Section 3.3). + +2.5. Representation + +Each Contributor represents that the Contributor believes its +Contributions are its original creation(s) or it has sufficient rights +to grant the rights to its Contributions conveyed by this License. 
+ +2.6. Fair Use + +This License is not intended to limit any rights You have under +applicable copyright doctrines of fair use, fair dealing, or other +equivalents. + +2.7. Conditions + +Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted +in Section 2.1. + +3. Responsibilities +------------------- + +3.1. Distribution of Source Form + +All distribution of Covered Software in Source Code Form, including any +Modifications that You create or to which You contribute, must be under +the terms of this License. You must inform recipients that the Source +Code Form of the Covered Software is governed by the terms of this +License, and how they can obtain a copy of this License. You may not +attempt to alter or restrict the recipients' rights in the Source Code +Form. + +3.2. Distribution of Executable Form + +If You distribute Covered Software in Executable Form then: + +(a) such Covered Software must also be made available in Source Code + Form, as described in Section 3.1, and You must inform recipients of + the Executable Form how they can obtain a copy of such Source Code + Form by reasonable means in a timely manner, at a charge no more + than the cost of distribution to the recipient; and + +(b) You may distribute such Executable Form under the terms of this + License, or sublicense it under different terms, provided that the + license for the Executable Form does not attempt to limit or alter + the recipients' rights in the Source Code Form under this License. + +3.3. Distribution of a Larger Work + +You may create and distribute a Larger Work under terms of Your choice, +provided that You also comply with the requirements of this License for +the Covered Software. If the Larger Work is a combination of Covered +Software with a work governed by one or more Secondary Licenses, and the +Covered Software is not Incompatible With Secondary Licenses, this +License permits You to additionally distribute such Covered Software +under the terms of such Secondary License(s), so that the recipient of +the Larger Work may, at their option, further distribute the Covered +Software under the terms of either this License or such Secondary +License(s). + +3.4. Notices + +You may not remove or alter the substance of any license notices +(including copyright notices, patent notices, disclaimers of warranty, +or limitations of liability) contained within the Source Code Form of +the Covered Software, except that You may alter any license notices to +the extent required to remedy known factual inaccuracies. + +3.5. Application of Additional Terms + +You may choose to offer, and to charge a fee for, warranty, support, +indemnity or liability obligations to one or more recipients of Covered +Software. However, You may do so only on Your own behalf, and not on +behalf of any Contributor. You must make it absolutely clear that any +such warranty, support, indemnity, or liability obligation is offered by +You alone, and You hereby agree to indemnify every Contributor for any +liability incurred by such Contributor as a result of warranty, support, +indemnity or liability terms You offer. You may include additional +disclaimers of warranty and limitations of liability specific to any +jurisdiction. + +4. 
Inability to Comply Due to Statute or Regulation +--------------------------------------------------- + +If it is impossible for You to comply with any of the terms of this +License with respect to some or all of the Covered Software due to +statute, judicial order, or regulation then You must: (a) comply with +the terms of this License to the maximum extent possible; and (b) +describe the limitations and the code they affect. Such description must +be placed in a text file included with all distributions of the Covered +Software under this License. Except to the extent prohibited by statute +or regulation, such description must be sufficiently detailed for a +recipient of ordinary skill to be able to understand it. + +5. Termination +-------------- + +5.1. The rights granted under this License will terminate automatically +if You fail to comply with any of its terms. However, if You become +compliant, then the rights granted under this License from a particular +Contributor are reinstated (a) provisionally, unless and until such +Contributor explicitly and finally terminates Your grants, and (b) on an +ongoing basis, if such Contributor fails to notify You of the +non-compliance by some reasonable means prior to 60 days after You have +come back into compliance. Moreover, Your grants from a particular +Contributor are reinstated on an ongoing basis if such Contributor +notifies You of the non-compliance by some reasonable means, this is the +first time You have received notice of non-compliance with this License +from such Contributor, and You become compliant prior to 30 days after +Your receipt of the notice. + +5.2. If You initiate litigation against any entity by asserting a patent +infringement claim (excluding declaratory judgment actions, +counter-claims, and cross-claims) alleging that a Contributor Version +directly or indirectly infringes any patent, then the rights granted to +You by any and all Contributors for the Covered Software under Section +2.1 of this License shall terminate. + +5.3. In the event of termination under Sections 5.1 or 5.2 above, all +end user license agreements (excluding distributors and resellers) which +have been validly granted by You or Your distributors under this License +prior to termination shall survive termination. + +************************************************************************ +* * +* 6. Disclaimer of Warranty * +* ------------------------- * +* * +* Covered Software is provided under this License on an "as is" * +* basis, without warranty of any kind, either expressed, implied, or * +* statutory, including, without limitation, warranties that the * +* Covered Software is free of defects, merchantable, fit for a * +* particular purpose or non-infringing. The entire risk as to the * +* quality and performance of the Covered Software is with You. * +* Should any Covered Software prove defective in any respect, You * +* (not any Contributor) assume the cost of any necessary servicing, * +* repair, or correction. This disclaimer of warranty constitutes an * +* essential part of this License. No use of any Covered Software is * +* authorized under this License except under this disclaimer. * +* * +************************************************************************ + +************************************************************************ +* * +* 7. 
Limitation of Liability * +* -------------------------- * +* * +* Under no circumstances and under no legal theory, whether tort * +* (including negligence), contract, or otherwise, shall any * +* Contributor, or anyone who distributes Covered Software as * +* permitted above, be liable to You for any direct, indirect, * +* special, incidental, or consequential damages of any character * +* including, without limitation, damages for lost profits, loss of * +* goodwill, work stoppage, computer failure or malfunction, or any * +* and all other commercial damages or losses, even if such party * +* shall have been informed of the possibility of such damages. This * +* limitation of liability shall not apply to liability for death or * +* personal injury resulting from such party's negligence to the * +* extent applicable law prohibits such limitation. Some * +* jurisdictions do not allow the exclusion or limitation of * +* incidental or consequential damages, so this exclusion and * +* limitation may not apply to You. * +* * +************************************************************************ + +8. Litigation +------------- + +Any litigation relating to this License may be brought only in the +courts of a jurisdiction where the defendant maintains its principal +place of business and such litigation shall be governed by laws of that +jurisdiction, without reference to its conflict-of-law provisions. +Nothing in this Section shall prevent a party's ability to bring +cross-claims or counter-claims. + +9. Miscellaneous +---------------- + +This License represents the complete agreement concerning the subject +matter hereof. If any provision of this License is held to be +unenforceable, such provision shall be reformed only to the extent +necessary to make it enforceable. Any law or regulation which provides +that the language of a contract shall be construed against the drafter +shall not be used to construe this License against a Contributor. + +10. Versions of the License +--------------------------- + +10.1. New Versions + +Mozilla Foundation is the license steward. Except as provided in Section +10.3, no one other than the license steward has the right to modify or +publish new versions of this License. Each version will be given a +distinguishing version number. + +10.2. Effect of New Versions + +You may distribute the Covered Software under the terms of the version +of the License under which You originally received the Covered Software, +or under the terms of any subsequent version published by the license +steward. + +10.3. Modified Versions + +If you create software not governed by this License, and you want to +create a new license for such software, you may create and use a +modified version of this License if you rename the license and remove +any references to the name of the license steward (except to note that +such modified license differs from this License). + +10.4. Distributing Source Code Form that is Incompatible With Secondary +Licenses + +If You choose to distribute Source Code Form that is Incompatible With +Secondary Licenses under the terms of this version of the License, the +notice described in Exhibit B of this License must be attached. + +Exhibit A - Source Code Form License Notice +------------------------------------------- + + This Source Code Form is subject to the terms of the Mozilla Public + License, v. 2.0. If a copy of the MPL was not distributed with this + file, You can obtain one at http://mozilla.org/MPL/2.0/. 
+
+If it is not possible or desirable to put the notice in a particular
+file, then You may include the notice in a location (such as a LICENSE
+file in a relevant directory) where a recipient would be likely to look
+for such a notice.
+
+You may add additional accurate notices of copyright ownership.
+
+Exhibit B - "Incompatible With Secondary Licenses" Notice
+---------------------------------------------------------
+
+  This Source Code Form is "Incompatible With Secondary Licenses", as
+  defined by the Mozilla Public License, v. 2.0.
diff --git a/vendor/github.com/hashicorp/go-sockaddr/Makefile b/vendor/github.com/hashicorp/go-sockaddr/Makefile
new file mode 100644
index 0000000000..224135dc1e
--- /dev/null
+++ b/vendor/github.com/hashicorp/go-sockaddr/Makefile
@@ -0,0 +1,63 @@
+TOOLS= golang.org/x/tools/cover
+GOCOVER_TMPFILE?= $(GOCOVER_FILE).tmp
+GOCOVER_FILE?= .cover.out
+GOCOVERHTML?= coverage.html
+
+test:: $(GOCOVER_FILE)
+	@$(MAKE) -C cmd/sockaddr test
+
+cover:: coverage_report
+
+$(GOCOVER_FILE)::
+	@find . -type d ! -path '*cmd*' ! -path '*.git*' -print0 | xargs -0 -I % sh -ec "cd % && rm -f $(GOCOVER_TMPFILE) && go test -coverprofile=$(GOCOVER_TMPFILE)"
+
+	@echo 'mode: set' > $(GOCOVER_FILE)
+	@find . -type f ! -path '*cmd*' ! -path '*.git*' -name "$(GOCOVER_TMPFILE)" -print0 | xargs -0 -n1 cat $(GOCOVER_TMPFILE) | grep -v '^mode: ' >> ${PWD}/$(GOCOVER_FILE)
+
+$(GOCOVERHTML): $(GOCOVER_FILE)
+	go tool cover -html=$(GOCOVER_FILE) -o $(GOCOVERHTML)
+
+coverage_report:: $(GOCOVER_FILE)
+	go tool cover -html=$(GOCOVER_FILE)
+
+audit_tools::
+	@go get -u github.com/golang/lint/golint && echo "Installed golint:"
+	@go get -u github.com/fzipp/gocyclo && echo "Installed gocyclo:"
+	@go get -u github.com/remyoudompheng/go-misc/deadcode && echo "Installed deadcode:"
+	@go get -u github.com/client9/misspell/cmd/misspell && echo "Installed misspell:"
+	@go get -u github.com/gordonklaus/ineffassign && echo "Installed ineffassign:"
+
+audit::
+	deadcode
+	go tool vet -all *.go
+	go tool vet -shadow=true *.go
+	golint *.go
+	ineffassign .
+	gocyclo -over 65 *.go
+	misspell *.go
+
+clean::
+	rm -f $(GOCOVER_FILE) $(GOCOVERHTML)
+
+dev::
+	@go build
+	@make -B -C cmd/sockaddr sockaddr
+
+install::
+	@go install
+	@make -C cmd/sockaddr install
+
+doc::
+	echo Visit: http://127.0.0.1:6060/pkg/github.com/hashicorp/go-sockaddr/
+	godoc -http=:6060 -goroot $$GOROOT
+
+world::
+	@set -e; \
+	for os in solaris darwin freebsd linux windows; do \
+		for arch in amd64; do \
+			printf "Building on %s-%s\n" "$${os}" "$${arch}" ; \
+			env GOOS="$${os}" GOARCH="$${arch}" go build -o /dev/null; \
+		done; \
+	done
+
+	make -C cmd/sockaddr world
diff --git a/vendor/github.com/hashicorp/go-sockaddr/README.md b/vendor/github.com/hashicorp/go-sockaddr/README.md
new file mode 100644
index 0000000000..5273ee8998
--- /dev/null
+++ b/vendor/github.com/hashicorp/go-sockaddr/README.md
@@ -0,0 +1,118 @@
+# go-sockaddr
+
+## `sockaddr` Library
+
+Socket address convenience functions for Go. `go-sockaddr` is a convenience
+library that makes doing the right thing with IP addresses easy. `go-sockaddr`
+is loosely modeled after the UNIX `sockaddr_t` and creates a union of the family
+of `sockaddr_t` types (see below for an ascii diagram). Library documentation
+is available
+at
+[https://godoc.org/github.com/hashicorp/go-sockaddr](https://godoc.org/github.com/hashicorp/go-sockaddr).
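+
+As a minimal, illustrative sketch (not one of the upstream examples), the
+following program uses the `GetPrivateIP` helper described below to print the
+first private IP address on the default-route interface:
+
+```go
+package main
+
+import (
+	"fmt"
+
+	sockaddr "github.com/hashicorp/go-sockaddr"
+)
+
+func main() {
+	// GetPrivateIP returns the first RFC 6890 address with a default
+	// route, or an empty string (and a nil error) if none is found.
+	ip, err := sockaddr.GetPrivateIP()
+	if err != nil {
+		fmt.Println("error:", err)
+		return
+	}
+	fmt.Println("private IP:", ip)
+}
+```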
+The primary intent of the library was to make it possible to define heuristics
+for selecting the correct IP addresses when a configuration is evaluated at
+runtime. See
+the
+[docs](https://godoc.org/github.com/hashicorp/go-sockaddr),
+[`template` package](https://godoc.org/github.com/hashicorp/go-sockaddr/template),
+tests,
+and
+[CLI utility](https://github.com/hashicorp/go-sockaddr/tree/master/cmd/sockaddr)
+for details and hints as to how to use this library.
+
+For example, with this library it is possible to find an IP address that:
+
+* is attached to a default route
+  ([`GetDefaultInterfaces()`](https://godoc.org/github.com/hashicorp/go-sockaddr#GetDefaultInterfaces))
+* is contained within a CIDR block ([`IfByNetwork()`](https://godoc.org/github.com/hashicorp/go-sockaddr#IfByNetwork))
+* is an RFC1918 address
+  ([`IfByRFC("1918")`](https://godoc.org/github.com/hashicorp/go-sockaddr#IfByRFC))
+* is ordered
+  ([`OrderedIfAddrBy(args)`](https://godoc.org/github.com/hashicorp/go-sockaddr#OrderedIfAddrBy) where
+  `args` includes, but is not limited
+  to,
+  [`AscIfType`](https://godoc.org/github.com/hashicorp/go-sockaddr#AscIfType),
+  [`AscNetworkSize`](https://godoc.org/github.com/hashicorp/go-sockaddr#AscNetworkSize))
+* excludes all IPv6 addresses
+  ([`IfByType("^(IPv4)$")`](https://godoc.org/github.com/hashicorp/go-sockaddr#IfByType))
+* is larger than a `/32`
+  ([`IfByMaskSize(32)`](https://godoc.org/github.com/hashicorp/go-sockaddr#IfByMaskSize))
+* is not on a `down` interface
+  ([`ExcludeIfs("flags", "down")`](https://godoc.org/github.com/hashicorp/go-sockaddr#ExcludeIfs))
+* prefers an IPv6 address over an IPv4 address
+  ([`SortIfByType()`](https://godoc.org/github.com/hashicorp/go-sockaddr#SortIfByType) +
+  [`ReverseIfAddrs()`](https://godoc.org/github.com/hashicorp/go-sockaddr#ReverseIfAddrs)); and
+* excludes any IP in an RFC 6890 block
+  ([`IfByRFC("6890")`](https://godoc.org/github.com/hashicorp/go-sockaddr#IfByRFC))
+
+Or any combination or variation therein.
+
+There are also a few simple helper functions such as `GetPublicIP` and
+`GetPrivateIP` which both return strings and select the first public or private
+IP address on the default interface, respectively. Similarly, there is also a
+helper function called `GetInterfaceIP` which returns the first usable IP
+address on the named interface.
+
+## `sockaddr` CLI
+
+Given the possible complexity of the `sockaddr` library, there is a CLI utility
+that accompanies the library, also
+called
+[`sockaddr`](https://github.com/hashicorp/go-sockaddr/tree/master/cmd/sockaddr).
+The
+[`sockaddr`](https://github.com/hashicorp/go-sockaddr/tree/master/cmd/sockaddr)
+utility exposes nearly all of the functionality of the library and can be used
+either as an administrative tool or testing tool.
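+For example, an illustrative invocation that evaluates a template against the
+host's interfaces and prints the address of the first private interface (the
+same pipeline the `GetPrivateIP` helper uses) looks like:
+
+```text
+$ sockaddr eval -r '{{GetPrivateInterfaces | attr "address"}}'
+```
+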
To install
+the
+[`sockaddr`](https://github.com/hashicorp/go-sockaddr/tree/master/cmd/sockaddr)
+utility, run:
+
+```text
+$ go get -u github.com/hashicorp/go-sockaddr/cmd/sockaddr
+```
+
+If you're familiar with UNIX's `sockaddr` structs, the following diagram
+mapping the C `sockaddr` (top) to `go-sockaddr` structs (bottom) and
+interfaces will be helpful:
+
+```
++-------------------------------------------------------+
+|                                                       |
+|                       sockaddr                        |
+|                       SockAddr                        |
+|                                                       |
+| +--------------+ +----------------------------------+ |
+| | sockaddr_un  | |                                  | |
+| | SockAddrUnix | |          sockaddr_in{,6}         | |
+| +--------------+ |              IPAddr              | |
+|                  |                                  | |
+|                  | +-------------+ +--------------+ | |
+|                  | | sockaddr_in | | sockaddr_in6 | | |
+|                  | | IPv4Addr    | | IPv6Addr     | | |
+|                  | +-------------+ +--------------+ | |
+|                  |                                  | |
+|                  +----------------------------------+ |
+|                                                       |
++-------------------------------------------------------+
+```
+
+## Inspiration and Design
+
+There were many subtle inspirations that led to this design, but the most direct
+inspiration for the filtering syntax was
+OpenBSD's
+[`pf.conf(5)`](https://www.freebsd.org/cgi/man.cgi?query=pf.conf&apropos=0&sektion=0&arch=default&format=html#PARAMETERS) firewall
+syntax that lets you select the first IP address on a given named interface.
+The original problem stemmed from:
+
+* needing to create immutable images using [Packer](https://www.packer.io) that
+  ran the [Consul](https://www.consul.io) process (Consul can only use one IP
+  address at a time);
+* images that may or may not have multiple interfaces or IP addresses at
+  runtime; and
+* we didn't want to rely on configuration management to render out the correct
+  IP address if the VM image was being used in an auto-scaling group.
+
+Instead we needed some way to codify a heuristic that would correctly select the
+right IP address but the input parameters were not known when the image was
+created.
diff --git a/vendor/github.com/hashicorp/go-sockaddr/doc.go b/vendor/github.com/hashicorp/go-sockaddr/doc.go
new file mode 100644
index 0000000000..90671deb51
--- /dev/null
+++ b/vendor/github.com/hashicorp/go-sockaddr/doc.go
@@ -0,0 +1,5 @@
+/*
+Package sockaddr is a Go implementation of the UNIX socket family data types and
+related helper functions.
+*/
+package sockaddr
diff --git a/vendor/github.com/hashicorp/go-sockaddr/ifaddr.go b/vendor/github.com/hashicorp/go-sockaddr/ifaddr.go
new file mode 100644
index 0000000000..3e4ff9fca4
--- /dev/null
+++ b/vendor/github.com/hashicorp/go-sockaddr/ifaddr.go
@@ -0,0 +1,126 @@
+package sockaddr
+
+// ifAddrAttrMap is a map of the IfAddr type-specific attributes.
+var ifAddrAttrMap map[AttrName]func(IfAddr) string
+var ifAddrAttrs []AttrName
+
+func init() {
+	ifAddrAttrInit()
+}
+
+// GetPrivateIP returns a string with a single IP address that is part of RFC
+// 6890 and has a default route. If the system can't determine its IP address
+// or find an RFC 6890 IP address, an empty string will be returned instead.
+// This function is the `eval` equivalent of:
+//
+// ```
+// $ sockaddr eval -r '{{GetPrivateInterfaces | attr "address"}}'
+// ```
+func GetPrivateIP() (string, error) {
+	privateIfs, err := GetPrivateInterfaces()
+	if err != nil {
+		return "", err
+	}
+	if len(privateIfs) < 1 {
+		return "", nil
+	}
+
+	ifAddr := privateIfs[0]
+	ip := *ToIPAddr(ifAddr.SockAddr)
+	return ip.NetIP().String(), nil
+}
+
+// GetPublicIP returns a string with a single IP address that is NOT part of
+// RFC 6890 and has a default route. If the system can't determine its IP
+// address or find a non-RFC 6890 IP address, an empty string will be returned
+// instead. This function is the `eval` equivalent of:
+//
+// ```
+// $ sockaddr eval -r '{{GetPublicInterfaces | attr "address"}}'
+// ```
+func GetPublicIP() (string, error) {
+	publicIfs, err := GetPublicInterfaces()
+	if err != nil {
+		return "", err
+	} else if len(publicIfs) < 1 {
+		return "", nil
+	}
+
+	ifAddr := publicIfs[0]
+	ip := *ToIPAddr(ifAddr.SockAddr)
+	return ip.NetIP().String(), nil
+}
+
+// GetInterfaceIP returns a string with a single IP address sorted by the size
+// of the network (i.e. IP addresses with a smaller netmask, larger network
+// size, are sorted first). This function is the `eval` equivalent of:
+//
+// ```
+// $ sockaddr eval -r '{{GetAllInterfaces | include "name" <> | sort "type,size" | include "flag" "forwardable" | attr "address" }}'
+// ```
+func GetInterfaceIP(namedIfRE string) (string, error) {
+	ifAddrs, err := GetAllInterfaces()
+	if err != nil {
+		return "", err
+	}
+
+	ifAddrs, _, err = IfByName(namedIfRE, ifAddrs)
+	if err != nil {
+		return "", err
+	}
+
+	ifAddrs, _, err = IfByFlag("forwardable", ifAddrs)
+	if err != nil {
+		return "", err
+	}
+
+	ifAddrs, err = SortIfBy("+type,+size", ifAddrs)
+	if err != nil {
+		return "", err
+	}
+
+	if len(ifAddrs) == 0 {
+		return "", nil
+	}
+
+	ip := ToIPAddr(ifAddrs[0].SockAddr)
+	if ip == nil {
+		return "", nil
+	}
+
+	return IPAddrAttr(*ip, "address"), nil
+}
+
+// IfAddrAttrs returns a list of attributes supported by the IfAddr type
+func IfAddrAttrs() []AttrName {
+	return ifAddrAttrs
+}
+
+// IfAddrAttr returns a string representation of an attribute for the given
+// IfAddr.
+func IfAddrAttr(ifAddr IfAddr, attrName AttrName) string {
+	fn, found := ifAddrAttrMap[attrName]
+	if !found {
+		return ""
+	}
+
+	return fn(ifAddr)
+}
+
+// ifAddrAttrInit is called once at init()
+func ifAddrAttrInit() {
+	// Sorted for human readability
+	ifAddrAttrs = []AttrName{
+		"flags",
+		"name",
+	}
+
+	ifAddrAttrMap = map[AttrName]func(ifAddr IfAddr) string{
+		"flags": func(ifAddr IfAddr) string {
+			return ifAddr.Interface.Flags.String()
+		},
+		"name": func(ifAddr IfAddr) string {
+			return ifAddr.Interface.Name
+		},
+	}
+}
diff --git a/vendor/github.com/hashicorp/go-sockaddr/ifaddrs.go b/vendor/github.com/hashicorp/go-sockaddr/ifaddrs.go
new file mode 100644
index 0000000000..8233be2022
--- /dev/null
+++ b/vendor/github.com/hashicorp/go-sockaddr/ifaddrs.go
@@ -0,0 +1,969 @@
+package sockaddr
+
+import (
+	"errors"
+	"fmt"
+	"net"
+	"regexp"
+	"sort"
+	"strconv"
+	"strings"
+)
+
+// IfAddrs is a slice of IfAddr
+type IfAddrs []IfAddr
+
+func (ifs IfAddrs) Len() int { return len(ifs) }
+
+// CmpIfAddrFunc is the function signature that must be met to be used in the
+// OrderedIfAddrBy multiIfAddrSorter
+type CmpIfAddrFunc func(p1, p2 *IfAddr) int
+
+// multiIfAddrSorter implements the Sort interface, sorting the IfAddrs within.
+type multiIfAddrSorter struct {
+	ifAddrs IfAddrs
+	cmp     []CmpIfAddrFunc
+}
+
+// Sort sorts the argument slice according to the Cmp functions passed to
+// OrderedIfAddrBy.
+func (ms *multiIfAddrSorter) Sort(ifAddrs IfAddrs) {
+	ms.ifAddrs = ifAddrs
+	sort.Sort(ms)
+}
+
+// OrderedIfAddrBy sorts IfAddrs by the list of sort function pointers.
+func OrderedIfAddrBy(cmpFuncs ...CmpIfAddrFunc) *multiIfAddrSorter {
+	return &multiIfAddrSorter{
+		cmp: cmpFuncs,
+	}
+}
+
+// Len is part of sort.Interface.
+func (ms *multiIfAddrSorter) Len() int {
+	return len(ms.ifAddrs)
+}
+
+// Less is part of sort.Interface. It is implemented by looping along the Cmp()
+// functions until it finds a comparison that is either less than or greater
+// than. A return value of 0 defers sorting to the next function in the
+// multisorter (which means the results of sorting may be left in a
+// non-deterministic order).
+func (ms *multiIfAddrSorter) Less(i, j int) bool {
+	p, q := &ms.ifAddrs[i], &ms.ifAddrs[j]
+	// Try all but the last comparison.
+	var k int
+	for k = 0; k < len(ms.cmp)-1; k++ {
+		cmp := ms.cmp[k]
+		x := cmp(p, q)
+		switch x {
+		case -1:
+			// p < q, so we have a decision.
+			return true
+		case 1:
+			// p > q, so we have a decision.
+			return false
+		}
+		// p == q; try the next comparison.
+	}
+	// All comparisons to here said "equal", so just return whatever the
+	// final comparison reports.
+	switch ms.cmp[k](p, q) {
+	case -1:
+		return true
+	case 1:
+		return false
+	default:
+		// Still a tie: the relative order of i and j is undefined.
+		return false
+	}
+}
+
+// Swap is part of sort.Interface.
+func (ms *multiIfAddrSorter) Swap(i, j int) {
+	ms.ifAddrs[i], ms.ifAddrs[j] = ms.ifAddrs[j], ms.ifAddrs[i]
+}
+
+// AscIfAddress is a sorting function to sort IfAddrs by their respective
+// address type. Non-equal types are deferred in the sort.
+func AscIfAddress(p1Ptr, p2Ptr *IfAddr) int {
+	return AscAddress(&p1Ptr.SockAddr, &p2Ptr.SockAddr)
+}
+
+// AscIfName is a sorting function to sort IfAddrs by their interface names.
+func AscIfName(p1Ptr, p2Ptr *IfAddr) int {
+	return strings.Compare(p1Ptr.Name, p2Ptr.Name)
+}
+
+// AscIfNetworkSize is a sorting function to sort IfAddrs by their respective
+// network mask size.
+func AscIfNetworkSize(p1Ptr, p2Ptr *IfAddr) int {
+	return AscNetworkSize(&p1Ptr.SockAddr, &p2Ptr.SockAddr)
+}
+
+// AscIfPort is a sorting function to sort IfAddrs by their respective
+// port type. Non-equal types are deferred in the sort.
+func AscIfPort(p1Ptr, p2Ptr *IfAddr) int {
+	return AscPort(&p1Ptr.SockAddr, &p2Ptr.SockAddr)
+}
+
+// AscIfPrivate is a sorting function to sort IfAddrs by "private" values before
+// "public" values. Both IPv4 and IPv6 are compared against RFC6890 (RFC6890
+// includes, and is not limited to, RFC1918 and RFC6598 for IPv4, and IPv6
+// includes RFC4193).
+func AscIfPrivate(p1Ptr, p2Ptr *IfAddr) int {
+	return AscPrivate(&p1Ptr.SockAddr, &p2Ptr.SockAddr)
+}
+
+// AscIfType is a sorting function to sort IfAddrs by their respective address
+// type. Non-equal types are deferred in the sort.
+func AscIfType(p1Ptr, p2Ptr *IfAddr) int {
+	return AscType(&p1Ptr.SockAddr, &p2Ptr.SockAddr)
+}
+
+// DescIfAddress is identical to AscIfAddress but reverse ordered.
+func DescIfAddress(p1Ptr, p2Ptr *IfAddr) int {
+	return -1 * AscAddress(&p1Ptr.SockAddr, &p2Ptr.SockAddr)
+}
+
+// DescIfName is identical to AscIfName but reverse ordered.
+func DescIfName(p1Ptr, p2Ptr *IfAddr) int {
+	return -1 * strings.Compare(p1Ptr.Name, p2Ptr.Name)
+}
+
+// DescIfNetworkSize is identical to AscIfNetworkSize but reverse ordered.
+func DescIfNetworkSize(p1Ptr, p2Ptr *IfAddr) int {
+	return -1 * AscNetworkSize(&p1Ptr.SockAddr, &p2Ptr.SockAddr)
+}
+
+// DescIfPort is identical to AscIfPort but reverse ordered.
+func DescIfPort(p1Ptr, p2Ptr *IfAddr) int {
+	return -1 * AscPort(&p1Ptr.SockAddr, &p2Ptr.SockAddr)
+}
+
+// DescIfPrivate is identical to AscIfPrivate but reverse ordered.
+func DescIfPrivate(p1Ptr, p2Ptr *IfAddr) int {
+	return -1 * AscPrivate(&p1Ptr.SockAddr, &p2Ptr.SockAddr)
+}
+
+// DescIfType is identical to AscIfType but reverse ordered.
+func DescIfType(p1Ptr, p2Ptr *IfAddr) int {
+	return -1 * AscType(&p1Ptr.SockAddr, &p2Ptr.SockAddr)
+}
+
+// FilterIfByType filters IfAddrs and returns a list of the matching type
+func FilterIfByType(ifAddrs IfAddrs, type_ SockAddrType) (matchedIfs, excludedIfs IfAddrs) {
+	excludedIfs = make(IfAddrs, 0, len(ifAddrs))
+	matchedIfs = make(IfAddrs, 0, len(ifAddrs))
+
+	for _, ifAddr := range ifAddrs {
+		if ifAddr.SockAddr.Type()&type_ != 0 {
+			matchedIfs = append(matchedIfs, ifAddr)
+		} else {
+			excludedIfs = append(excludedIfs, ifAddr)
+		}
+	}
+	return matchedIfs, excludedIfs
+}
+
+// IfAttr forwards the selector to IfAddr.Attr() for resolution. If there is
+// more than one IfAddr, only the first IfAddr is used.
+func IfAttr(selectorName string, ifAddrs IfAddrs) (string, error) {
+	if len(ifAddrs) == 0 {
+		return "", nil
+	}
+
+	attrName := AttrName(strings.ToLower(selectorName))
+	attrVal, err := ifAddrs[0].Attr(attrName)
+	return attrVal, err
+}
+
+// GetAllInterfaces iterates over all available network interfaces, finds all
+// available IP addresses on each interface, converts them to
+// sockaddr.IPAddrs, and returns the result as an array of IfAddr.
+func GetAllInterfaces() (IfAddrs, error) {
+	ifs, err := net.Interfaces()
+	if err != nil {
+		return nil, err
+	}
+
+	ifAddrs := make(IfAddrs, 0, len(ifs))
+	for _, intf := range ifs {
+		addrs, err := intf.Addrs()
+		if err != nil {
+			return nil, err
+		}
+
+		for _, addr := range addrs {
+			var ipAddr IPAddr
+			ipAddr, err = NewIPAddr(addr.String())
+			if err != nil {
+				return IfAddrs{}, fmt.Errorf("unable to create an IP address from %q", addr.String())
+			}
+
+			ifAddr := IfAddr{
+				SockAddr:  ipAddr,
+				Interface: intf,
+			}
+			ifAddrs = append(ifAddrs, ifAddr)
+		}
+	}
+
+	return ifAddrs, nil
+}
+
+// GetDefaultInterfaces returns IfAddrs of the addresses attached to the default
+// route.
+func GetDefaultInterfaces() (IfAddrs, error) {
+	ri, err := NewRouteInfo()
+	if err != nil {
+		return nil, err
+	}
+
+	defaultIfName, err := ri.GetDefaultInterfaceName()
+	if err != nil {
+		return nil, err
+	}
+
+	var defaultIfs, ifAddrs IfAddrs
+	ifAddrs, err = GetAllInterfaces()
+	if err != nil {
+		return nil, err
+	}
+	for _, ifAddr := range ifAddrs {
+		if ifAddr.Name == defaultIfName {
+			defaultIfs = append(defaultIfs, ifAddr)
+		}
+	}
+
+	return defaultIfs, nil
+}
+
+// GetPrivateInterfaces returns an IfAddrs that are part of RFC 6890 and have a
+// default route. If the system can't determine its IP address or find an RFC
+// 6890 IP address, an empty IfAddrs will be returned instead. This function is
+// the `eval` equivalent of:
+//
+// ```
+// $ sockaddr eval -r '{{GetDefaultInterfaces | include "type" "ip" | include "flags" "forwardable|up" | sort "type,size" | include "RFC" "6890" }}'
+// ```
+func GetPrivateInterfaces() (IfAddrs, error) {
+	privateIfs, err := GetDefaultInterfaces()
+	if err != nil {
+		return IfAddrs{}, err
+	}
+	if len(privateIfs) == 0 {
+		return IfAddrs{}, nil
+	}
+
+	privateIfs, _ = FilterIfByType(privateIfs, TypeIP)
+	if len(privateIfs) == 0 {
+		return IfAddrs{}, nil
+	}
+
+	privateIfs, _, err = IfByFlag("forwardable|up", privateIfs)
+	if err != nil {
+		return IfAddrs{}, err
+	}
+	if len(privateIfs) == 0 {
+		return IfAddrs{}, nil
+	}
+
+	OrderedIfAddrBy(AscIfType, AscIfNetworkSize).Sort(privateIfs)
+
+	privateIfs, _, err = IfByRFC("6890", privateIfs)
+	if err != nil {
+		return IfAddrs{}, err
+	} else if len(privateIfs) == 0 {
+		return IfAddrs{}, nil
+	}
+
+	return privateIfs, nil
+}
+
+// GetPublicInterfaces returns an IfAddrs that are NOT part of RFC 6890 and have
+// a default route. If the system can't determine its IP address or find a
+// non-RFC 6890 IP address, an empty IfAddrs will be returned instead. This
+// function is the `eval` equivalent of:
+//
+// ```
+// $ sockaddr eval -r '{{GetDefaultInterfaces | include "type" "ip" | include "flags" "forwardable|up" | sort "type,size" | exclude "RFC" "6890" }}'
+// ```
+func GetPublicInterfaces() (IfAddrs, error) {
+	publicIfs, err := GetDefaultInterfaces()
+	if err != nil {
+		return IfAddrs{}, err
+	}
+	if len(publicIfs) == 0 {
+		return IfAddrs{}, nil
+	}
+
+	publicIfs, _ = FilterIfByType(publicIfs, TypeIP)
+	if len(publicIfs) == 0 {
+		return IfAddrs{}, nil
+	}
+
+	publicIfs, _, err = IfByFlag("forwardable|up", publicIfs)
+	if err != nil {
+		return IfAddrs{}, err
+	}
+	if len(publicIfs) == 0 {
+		return IfAddrs{}, nil
+	}
+
+	OrderedIfAddrBy(AscIfType, AscIfNetworkSize).Sort(publicIfs)
+
+	_, publicIfs, err = IfByRFC("6890", publicIfs)
+	if err != nil {
+		return IfAddrs{}, err
+	} else if len(publicIfs) == 0 {
+		return IfAddrs{}, nil
+	}
+
+	return publicIfs, nil
+}
+
+// IfByAddress returns a list of matched and non-matched IfAddrs, or an error if
+// the regexp fails to compile.
+func IfByAddress(inputRe string, ifAddrs IfAddrs) (matched, remainder IfAddrs, err error) {
+	re, err := regexp.Compile(inputRe)
+	if err != nil {
+		return nil, nil, fmt.Errorf("Unable to compile address regexp %+q: %v", inputRe, err)
+	}
+
+	matchedAddrs := make(IfAddrs, 0, len(ifAddrs))
+	excludedAddrs := make(IfAddrs, 0, len(ifAddrs))
+	for _, addr := range ifAddrs {
+		if re.MatchString(addr.SockAddr.String()) {
+			matchedAddrs = append(matchedAddrs, addr)
+		} else {
+			excludedAddrs = append(excludedAddrs, addr)
+		}
+	}
+
+	return matchedAddrs, excludedAddrs, nil
+}
+
+// IfByName returns a list of matched and non-matched IfAddrs, or an error if
+// the regexp fails to compile.
+func IfByName(inputRe string, ifAddrs IfAddrs) (matched, remainder IfAddrs, err error) {
+	re, err := regexp.Compile(inputRe)
+	if err != nil {
+		return nil, nil, fmt.Errorf("Unable to compile name regexp %+q: %v", inputRe, err)
+	}
+
+	matchedAddrs := make(IfAddrs, 0, len(ifAddrs))
+	excludedAddrs := make(IfAddrs, 0, len(ifAddrs))
+	for _, addr := range ifAddrs {
+		if re.MatchString(addr.Name) {
+			matchedAddrs = append(matchedAddrs, addr)
+		} else {
+			excludedAddrs = append(excludedAddrs, addr)
+		}
+	}
+
+	return matchedAddrs, excludedAddrs, nil
+}
+
+// IfByPort returns a list of matched and non-matched IfAddrs, or an error if
+// the regexp fails to compile.
+func IfByPort(inputRe string, ifAddrs IfAddrs) (matchedIfs, excludedIfs IfAddrs, err error) {
+	re, err := regexp.Compile(inputRe)
+	if err != nil {
+		return nil, nil, fmt.Errorf("Unable to compile port regexp %+q: %v", inputRe, err)
+	}
+
+	ipIfs, nonIfs := FilterIfByType(ifAddrs, TypeIP)
+	matchedIfs = make(IfAddrs, 0, len(ipIfs))
+	excludedIfs = append(IfAddrs(nil), nonIfs...)
+	for _, addr := range ipIfs {
+		ipAddr := ToIPAddr(addr.SockAddr)
+		if ipAddr == nil {
+			continue
+		}
+
+		port := strconv.FormatInt(int64((*ipAddr).IPPort()), 10)
+		if re.MatchString(port) {
+			matchedIfs = append(matchedIfs, addr)
+		} else {
+			excludedIfs = append(excludedIfs, addr)
+		}
+	}
+
+	return matchedIfs, excludedIfs, nil
+}
+
+// IfByRFC returns a list of matched and non-matched IfAddrs that contain the
+// relevant RFC-specified traits.
+func IfByRFC(selectorParam string, ifAddrs IfAddrs) (matched, remainder IfAddrs, err error) {
+	inputRFC, err := strconv.ParseUint(selectorParam, 10, 64)
+	if err != nil {
+		return IfAddrs{}, IfAddrs{}, fmt.Errorf("unable to parse RFC number %q: %v", selectorParam, err)
+	}
+
+	matchedIfAddrs := make(IfAddrs, 0, len(ifAddrs))
+	remainingIfAddrs := make(IfAddrs, 0, len(ifAddrs))
+
+	rfcNetMap := KnownRFCs()
+	rfcNets, ok := rfcNetMap[uint(inputRFC)]
+	if !ok {
+		return nil, nil, fmt.Errorf("unsupported RFC %d", inputRFC)
+	}
+
+	for _, ifAddr := range ifAddrs {
+		var contained bool
+		for _, rfcNet := range rfcNets {
+			if rfcNet.Contains(ifAddr.SockAddr) {
+				matchedIfAddrs = append(matchedIfAddrs, ifAddr)
+				contained = true
+				break
+			}
+		}
+		if !contained {
+			remainingIfAddrs = append(remainingIfAddrs, ifAddr)
+		}
+	}
+
+	return matchedIfAddrs, remainingIfAddrs, nil
+}
+
+// IfByRFCs returns a list of matched and non-matched IfAddrs that contain the
+// relevant RFC-specified traits. Multiple RFCs can be specified and separated
+// by the `|` symbol. No protection is taken to ensure an IfAddr does not end
+// up in both the included and excluded list.
+func IfByRFCs(selectorParam string, ifAddrs IfAddrs) (matched, remainder IfAddrs, err error) {
+	var includedIfs, excludedIfs IfAddrs
+	for _, rfcStr := range strings.Split(selectorParam, "|") {
+		includedRFCIfs, excludedRFCIfs, err := IfByRFC(rfcStr, ifAddrs)
+		if err != nil {
+			return IfAddrs{}, IfAddrs{}, fmt.Errorf("unable to lookup RFC number %q: %v", rfcStr, err)
+		}
+		includedIfs = append(includedIfs, includedRFCIfs...)
+		excludedIfs = append(excludedIfs, excludedRFCIfs...)
+	}
+
+	return includedIfs, excludedIfs, nil
+}
+
+// IfByMaskSize returns a list of matched and non-matched IfAddrs that have the
+// matching mask size.
+func IfByMaskSize(selectorParam string, ifAddrs IfAddrs) (matchedIfs, excludedIfs IfAddrs, err error) {
+	maskSize, err := strconv.ParseUint(selectorParam, 10, 64)
+	if err != nil {
+		return IfAddrs{}, IfAddrs{}, fmt.Errorf("invalid exclude size argument (%q): %v", selectorParam, err)
+	}
+
+	ipIfs, nonIfs := FilterIfByType(ifAddrs, TypeIP)
+	matchedIfs = make(IfAddrs, 0, len(ipIfs))
+	excludedIfs = append(IfAddrs(nil), nonIfs...)
+	for _, addr := range ipIfs {
+		ipAddr := ToIPAddr(addr.SockAddr)
+		if ipAddr == nil {
+			return IfAddrs{}, IfAddrs{}, fmt.Errorf("unable to filter mask sizes on non-IP type %s: %v", addr.SockAddr.Type().String(), addr.SockAddr.String())
+		}
+
+		switch {
+		case (*ipAddr).Type()&TypeIPv4 != 0 && maskSize > 32:
+			return IfAddrs{}, IfAddrs{}, fmt.Errorf("mask size out of bounds for IPv4 address: %d", maskSize)
+		case (*ipAddr).Type()&TypeIPv6 != 0 && maskSize > 128:
+			return IfAddrs{}, IfAddrs{}, fmt.Errorf("mask size out of bounds for IPv6 address: %d", maskSize)
+		}
+
+		if (*ipAddr).Maskbits() == int(maskSize) {
+			matchedIfs = append(matchedIfs, addr)
+		} else {
+			excludedIfs = append(excludedIfs, addr)
+		}
+	}
+
+	return matchedIfs, excludedIfs, nil
+}
+
+// IfByType returns a list of matching and non-matching IfAddr that match the
+// specified type. For instance:
+//
+// include "type" "IPv4,IPv6"
+//
+// will include any IfAddrs that is either an IPv4 or IPv6 address. Any
+// addresses on those interfaces that don't match will be included in the
+// remainder results.
+func IfByType(inputTypes string, ifAddrs IfAddrs) (matched, remainder IfAddrs, err error) {
+	matchingIfAddrs := make(IfAddrs, 0, len(ifAddrs))
+	remainingIfAddrs := make(IfAddrs, 0, len(ifAddrs))
+
+	ifTypes := strings.Split(strings.ToLower(inputTypes), "|")
+	for _, ifType := range ifTypes {
+		switch ifType {
+		case "ip", "ipv4", "ipv6", "unix":
+			// Valid types
+		default:
+			return nil, nil, fmt.Errorf("unsupported type %q %q", ifType, inputTypes)
+		}
+	}
+
+	for _, ifAddr := range ifAddrs {
+		// Append each IfAddr exactly once, even when multiple types
+		// are given.
+		var matched bool
+		for _, ifType := range ifTypes {
+			switch {
+			case ifType == "ip" && ifAddr.SockAddr.Type()&TypeIP != 0:
+				matched = true
+			case ifType == "ipv4" && ifAddr.SockAddr.Type()&TypeIPv4 != 0:
+				matched = true
+			case ifType == "ipv6" && ifAddr.SockAddr.Type()&TypeIPv6 != 0:
+				matched = true
+			case ifType == "unix" && ifAddr.SockAddr.Type()&TypeUnix != 0:
+				matched = true
+			}
+
+			if matched {
+				break
+			}
+		}
+
+		if matched {
+			matchingIfAddrs = append(matchingIfAddrs, ifAddr)
+		} else {
+			remainingIfAddrs = append(remainingIfAddrs, ifAddr)
+		}
+	}
+
+	return matchingIfAddrs, remainingIfAddrs, nil
+}
+
+// IfByFlag returns a list of matching and non-matching IfAddrs that match the
+// specified type. For instance:
+//
+// include "flag" "up,broadcast"
+//
+// will include any IfAddrs that have both the "up" and "broadcast" flags set.
+// Any addresses on those interfaces that don't match will be omitted from the
+// results.
+func IfByFlag(inputFlags string, ifAddrs IfAddrs) (matched, remainder IfAddrs, err error) {
+	matchedAddrs := make(IfAddrs, 0, len(ifAddrs))
+	excludedAddrs := make(IfAddrs, 0, len(ifAddrs))
+
+	var wantForwardable,
+		wantGlobalUnicast,
+		wantInterfaceLocalMulticast,
+		wantLinkLocalMulticast,
+		wantLinkLocalUnicast,
+		wantLoopback,
+		wantMulticast,
+		wantUnspecified bool
+	var ifFlags net.Flags
+	var checkFlags, checkAttrs bool
+	for _, flagName := range strings.Split(strings.ToLower(inputFlags), "|") {
+		switch flagName {
+		case "broadcast":
+			checkFlags = true
+			ifFlags = ifFlags | net.FlagBroadcast
+		case "down":
+			checkFlags = true
+			ifFlags = (ifFlags &^ net.FlagUp)
+		case "forwardable":
+			checkAttrs = true
+			wantForwardable = true
+		case "global unicast":
+			checkAttrs = true
+			wantGlobalUnicast = true
+		case "interface-local multicast":
+			checkAttrs = true
+			wantInterfaceLocalMulticast = true
+		case "link-local multicast":
+			checkAttrs = true
+			wantLinkLocalMulticast = true
+		case "link-local unicast":
+			checkAttrs = true
+			wantLinkLocalUnicast = true
+		case "loopback":
+			checkAttrs = true
+			checkFlags = true
+			ifFlags = ifFlags | net.FlagLoopback
+			wantLoopback = true
+		case "multicast":
+			checkAttrs = true
+			checkFlags = true
+			ifFlags = ifFlags | net.FlagMulticast
+			wantMulticast = true
+		case "point-to-point":
+			checkFlags = true
+			ifFlags = ifFlags | net.FlagPointToPoint
+		case "unspecified":
+			checkAttrs = true
+			wantUnspecified = true
+		case "up":
+			checkFlags = true
+			ifFlags = ifFlags | net.FlagUp
+		default:
+			return nil, nil, fmt.Errorf("Unknown interface flag: %+q", flagName)
+		}
+	}
+
+	for _, ifAddr := range ifAddrs {
+		var matched bool
+		if checkFlags && ifAddr.Interface.Flags&ifFlags == ifFlags {
+			matched = true
+		}
+		if checkAttrs {
+			if ip := ToIPAddr(ifAddr.SockAddr); ip != nil {
+				netIP := (*ip).NetIP()
+				switch {
+				case wantGlobalUnicast && netIP.IsGlobalUnicast():
+					matched = true
+				case wantInterfaceLocalMulticast && netIP.IsInterfaceLocalMulticast():
+					matched = true
+				case wantLinkLocalMulticast && netIP.IsLinkLocalMulticast():
+					matched = true
+				case wantLinkLocalUnicast && netIP.IsLinkLocalUnicast():
+					matched = true
+				case wantLoopback && netIP.IsLoopback():
+					matched = true
+				case wantMulticast && netIP.IsMulticast():
+					matched = true
+				case wantUnspecified && netIP.IsUnspecified():
+					matched = true
+				case wantForwardable && !IsRFC(ForwardingBlacklist, ifAddr.SockAddr):
+					matched = true
+				}
+			}
+		}
+		if matched {
+			matchedAddrs = append(matchedAddrs, ifAddr)
+		} else {
+			excludedAddrs = append(excludedAddrs, ifAddr)
+		}
+	}
+	return matchedAddrs, excludedAddrs, nil
+}
+
+// IfByNetwork returns an IfAddrs that are equal to or included within the
+// network passed in by selector.
+func IfByNetwork(selectorParam string, inputIfAddrs IfAddrs) (IfAddrs, IfAddrs, error) {
+	var includedIfs, excludedIfs IfAddrs
+	for _, netStr := range strings.Split(selectorParam, "|") {
+		netAddr, err := NewIPAddr(netStr)
+		if err != nil {
+			return nil, nil, fmt.Errorf("unable to create an IP address from %+q: %v", netStr, err)
+		}
+
+		for _, ifAddr := range inputIfAddrs {
+			if netAddr.Contains(ifAddr.SockAddr) {
+				includedIfs = append(includedIfs, ifAddr)
+			} else {
+				excludedIfs = append(excludedIfs, ifAddr)
+			}
+		}
+	}
+
+	return includedIfs, excludedIfs, nil
+}
+
+// IncludeIfs returns an IfAddrs based on the passed in selector.
+func IncludeIfs(selectorName, selectorParam string, inputIfAddrs IfAddrs) (IfAddrs, error) {
+	var includedIfs IfAddrs
+	var err error
+
+	switch strings.ToLower(selectorName) {
+	case "address":
+		includedIfs, _, err = IfByAddress(selectorParam, inputIfAddrs)
+	case "flag", "flags":
+		includedIfs, _, err = IfByFlag(selectorParam, inputIfAddrs)
+	case "name":
+		includedIfs, _, err = IfByName(selectorParam, inputIfAddrs)
+	case "network":
+		includedIfs, _, err = IfByNetwork(selectorParam, inputIfAddrs)
+	case "port":
+		includedIfs, _, err = IfByPort(selectorParam, inputIfAddrs)
+	case "rfc", "rfcs":
+		includedIfs, _, err = IfByRFCs(selectorParam, inputIfAddrs)
+	case "size":
+		includedIfs, _, err = IfByMaskSize(selectorParam, inputIfAddrs)
+	case "type":
+		includedIfs, _, err = IfByType(selectorParam, inputIfAddrs)
+	default:
+		return IfAddrs{}, fmt.Errorf("invalid include selector %q", selectorName)
+	}
+
+	if err != nil {
+		return IfAddrs{}, err
+	}
+
+	return includedIfs, nil
+}
+
+// ExcludeIfs returns an IfAddrs based on the passed in selector.
+func ExcludeIfs(selectorName, selectorParam string, inputIfAddrs IfAddrs) (IfAddrs, error) {
+	var excludedIfs IfAddrs
+	var err error
+
+	switch strings.ToLower(selectorName) {
+	case "address":
+		_, excludedIfs, err = IfByAddress(selectorParam, inputIfAddrs)
+	case "flag", "flags":
+		_, excludedIfs, err = IfByFlag(selectorParam, inputIfAddrs)
+	case "name":
+		_, excludedIfs, err = IfByName(selectorParam, inputIfAddrs)
+	case "network":
+		_, excludedIfs, err = IfByNetwork(selectorParam, inputIfAddrs)
+	case "port":
+		_, excludedIfs, err = IfByPort(selectorParam, inputIfAddrs)
+	case "rfc", "rfcs":
+		_, excludedIfs, err = IfByRFCs(selectorParam, inputIfAddrs)
+	case "size":
+		_, excludedIfs, err = IfByMaskSize(selectorParam, inputIfAddrs)
+	case "type":
+		_, excludedIfs, err = IfByType(selectorParam, inputIfAddrs)
+	default:
+		return IfAddrs{}, fmt.Errorf("invalid exclude selector %q", selectorName)
+	}
+
+	if err != nil {
+		return IfAddrs{}, err
+	}
+
+	return excludedIfs, nil
+}
+
+// SortIfBy returns an IfAddrs sorted based on the passed in selector. Multiple
+// sort clauses can be passed in as a comma delimited list without whitespace.
+func SortIfBy(selectorParam string, inputIfAddrs IfAddrs) (IfAddrs, error) {
+	sortedIfs := append(IfAddrs(nil), inputIfAddrs...)
+
+	clauses := strings.Split(selectorParam, ",")
+	sortFuncs := make([]CmpIfAddrFunc, len(clauses))
+
+	for i, clause := range clauses {
+		switch strings.TrimSpace(strings.ToLower(clause)) {
+		case "+address", "address":
+			// The "address" selector returns an array of IfAddrs
+			// ordered by the network address. IfAddrs that are not
+			// comparable will be at the end of the list and in a
+			// non-deterministic order.
+			sortFuncs[i] = AscIfAddress
+		case "-address":
+			sortFuncs[i] = DescIfAddress
+		case "+name", "name":
+			// The "name" selector returns an array of IfAddrs
+			// ordered by the interface name.
+			sortFuncs[i] = AscIfName
+		case "-name":
+			sortFuncs[i] = DescIfName
+		case "+port", "port":
+			// The "port" selector returns an array of IfAddrs
+			// ordered by the port, if included in the IfAddr.
+			// IfAddrs that are not comparable will be at the end of
+			// the list and in a non-deterministic order.
+			sortFuncs[i] = AscIfPort
+		case "-port":
+			sortFuncs[i] = DescIfPort
+		case "+private", "private":
+			// The "private" selector returns an array of IfAddrs
+			// ordered by private addresses first. IfAddrs that are
+			// not comparable will be at the end of the list and in
+			// a non-deterministic order.
+			sortFuncs[i] = AscIfPrivate
+		case "-private":
+			sortFuncs[i] = DescIfPrivate
+		case "+size", "size":
+			// The "size" selector returns an array of IfAddrs
+			// ordered by the size of the network mask, smaller mask
+			// (larger number of hosts per network) to largest
+			// (e.g. a /24 sorts before a /32).
+			sortFuncs[i] = AscIfNetworkSize
+		case "-size":
+			sortFuncs[i] = DescIfNetworkSize
+		case "+type", "type":
+			// The "type" selector returns an array of IfAddrs
+			// ordered by the type of the IfAddr. The sort order is
+			// Unix, IPv4, then IPv6.
+			sortFuncs[i] = AscIfType
+		case "-type":
+			sortFuncs[i] = DescIfType
+		default:
+			// Return an empty list for invalid sort types.
+			return IfAddrs{}, fmt.Errorf("unknown sort type: %q", clause)
+		}
+	}
+
+	OrderedIfAddrBy(sortFuncs...).Sort(sortedIfs)
+
+	return sortedIfs, nil
+}
+
+// UniqueIfAddrsBy creates a unique set of IfAddrs based on the matching
+// selector. UniqueIfAddrsBy assumes the input has already been sorted.
+func UniqueIfAddrsBy(selectorName string, inputIfAddrs IfAddrs) (IfAddrs, error) {
+	attrName := strings.ToLower(selectorName)
+
+	ifs := make(IfAddrs, 0, len(inputIfAddrs))
+	var lastMatch string
+	for _, ifAddr := range inputIfAddrs {
+		var out string
+		switch attrName {
+		case "address":
+			out = ifAddr.SockAddr.String()
+		case "name":
+			out = ifAddr.Name
+		default:
+			return nil, fmt.Errorf("unsupported unique constraint %+q", selectorName)
+		}
+
+		switch {
+		case lastMatch == "", lastMatch != out:
+			lastMatch = out
+			ifs = append(ifs, ifAddr)
+		case lastMatch == out:
+			continue
+		}
+	}
+
+	return ifs, nil
+}
+
+// JoinIfAddrs joins an IfAddrs and returns a string
+func JoinIfAddrs(selectorName string, joinStr string, inputIfAddrs IfAddrs) (string, error) {
+	outputs := make([]string, 0, len(inputIfAddrs))
+	attrName := AttrName(strings.ToLower(selectorName))
+
+	for _, ifAddr := range inputIfAddrs {
+		var attrVal string
+		var err error
+		attrVal, err = ifAddr.Attr(attrName)
+		if err != nil {
+			return "", err
+		}
+		outputs = append(outputs, attrVal)
+	}
+	return strings.Join(outputs, joinStr), nil
+}
+
+// LimitIfAddrs returns a slice of IfAddrs based on the specified limit.
+func LimitIfAddrs(lim uint, in IfAddrs) (IfAddrs, error) {
+	// Clamp the limit to the length of the array
+	if int(lim) > len(in) {
+		lim = uint(len(in))
+	}
+
+	return in[0:lim], nil
+}
+
+// OffsetIfAddrs returns a slice of IfAddrs based on the specified offset.
+func OffsetIfAddrs(off int, in IfAddrs) (IfAddrs, error) {
+	var end bool
+	if off < 0 {
+		end = true
+		off = off * -1
+	}
+
+	if off > len(in) {
+		return IfAddrs{}, fmt.Errorf("unable to seek past the end of the interface array: offset (%d) exceeds the number of interfaces (%d)", off, len(in))
+	}
+
+	if end {
+		return in[len(in)-off:], nil
+	}
+	return in[off:], nil
+}
+
+func (ifAddr IfAddr) String() string {
+	return fmt.Sprintf("%s %v", ifAddr.SockAddr, ifAddr.Interface)
+}
+
+// parseDefaultIfNameFromRoute parses standard route(8)'s output for the *BSDs
+// and Solaris.
+func parseDefaultIfNameFromRoute(routeOut string) (string, error) {
+	lines := strings.Split(routeOut, "\n")
+	for _, line := range lines {
+		kvs := strings.SplitN(line, ":", 2)
+		if len(kvs) != 2 {
+			continue
+		}
+
+		if strings.TrimSpace(kvs[0]) == "interface" {
+			ifName := strings.TrimSpace(kvs[1])
+			return ifName, nil
+		}
+	}
+
+	return "", errors.New("No default interface found")
+}
+
+// parseDefaultIfNameFromIPCmd parses the default interface from ip(8) for
+// Linux.
+func parseDefaultIfNameFromIPCmd(routeOut string) (string, error) {
+	lines := strings.Split(routeOut, "\n")
+	re := regexp.MustCompile(`[\s]+`)
+	for _, line := range lines {
+		kvs := re.Split(line, -1)
+		if len(kvs) < 5 {
+			continue
+		}
+
+		if kvs[0] == "default" &&
+			kvs[1] == "via" &&
+			kvs[3] == "dev" {
+			ifName := strings.TrimSpace(kvs[4])
+			return ifName, nil
+		}
+	}
+
+	return "", errors.New("No default interface found")
+}
+
+// parseDefaultIfNameWindows parses the default interface from `netstat -rn` and
+// `ipconfig` on Windows.
+func parseDefaultIfNameWindows(routeOut, ipconfigOut string) (string, error) {
+	defaultIPAddr, err := parseDefaultIPAddrWindowsRoute(routeOut)
+	if err != nil {
+		return "", err
+	}
+
+	ifName, err := parseDefaultIfNameWindowsIPConfig(defaultIPAddr, ipconfigOut)
+	if err != nil {
+		return "", err
+	}
+
+	return ifName, nil
+}
+
+// parseDefaultIPAddrWindowsRoute parses the IP address on the default interface
+// `netstat -rn`.
+//
+// NOTES(sean): Only IPv4 addresses are parsed at this time. If you have an
+// IPv6 connected host, submit an issue on github.com/hashicorp/go-sockaddr with
+// the output from `netstat -rn`, `ipconfig`, and version of Windows to see IPv6
+// support added.
+func parseDefaultIPAddrWindowsRoute(routeOut string) (string, error) {
+	lines := strings.Split(routeOut, "\n")
+	re := regexp.MustCompile(`[\s]+`)
+	for _, line := range lines {
+		kvs := re.Split(strings.TrimSpace(line), -1)
+		// The interface IP is the fourth field of the route entry.
+		if len(kvs) < 4 {
+			continue
+		}
+
+		if kvs[0] == "0.0.0.0" && kvs[1] == "0.0.0.0" {
+			defaultIPAddr := strings.TrimSpace(kvs[3])
+			return defaultIPAddr, nil
+		}
+	}
+
+	return "", errors.New("No IP on default interface found")
+}
+
+// parseDefaultIfNameWindowsIPConfig parses the output of `ipconfig` to find the
+// interface name forwarding traffic to the default gateway.
+func parseDefaultIfNameWindowsIPConfig(defaultIPAddr, routeOut string) (string, error) {
+	lines := strings.Split(routeOut, "\n")
+	ifNameRE := regexp.MustCompile(`^Ethernet adapter ([^\s:]+):`)
+	ipAddrRE := regexp.MustCompile(`^   IPv[46] Address\. \. \. \. \. \. \. \. \. \. \. : ([^\s]+)`)
+	var ifName string
+	for _, line := range lines {
+		switch ifNameMatches := ifNameRE.FindStringSubmatch(line); {
+		case len(ifNameMatches) > 1:
+			ifName = ifNameMatches[1]
+			continue
+		}
+
+		switch ipAddrMatches := ipAddrRE.FindStringSubmatch(line); {
+		case len(ipAddrMatches) > 1 && ipAddrMatches[1] == defaultIPAddr:
+			return ifName, nil
+		}
+	}
+
+	return "", errors.New("No default interface found with matching IP")
+}
diff --git a/vendor/github.com/hashicorp/go-sockaddr/ifattr.go b/vendor/github.com/hashicorp/go-sockaddr/ifattr.go
new file mode 100644
index 0000000000..6984cb4a35
--- /dev/null
+++ b/vendor/github.com/hashicorp/go-sockaddr/ifattr.go
@@ -0,0 +1,65 @@
+package sockaddr
+
+import (
+	"fmt"
+	"net"
+)
+
+// IfAddr is a union of a SockAddr and a net.Interface.
+type IfAddr struct {
+	SockAddr
+	net.Interface
+}
+
+// Attr returns the named attribute as a string
+func (ifAddr IfAddr) Attr(attrName AttrName) (string, error) {
+	val := IfAddrAttr(ifAddr, attrName)
+	if val != "" {
+		return val, nil
+	}
+
+	return Attr(ifAddr.SockAddr, attrName)
+}
+
+// Attr returns the named attribute as a string
+func Attr(sa SockAddr, attrName AttrName) (string, error) {
+	switch sockType := sa.Type(); {
+	case sockType&TypeIP != 0:
+		ip := *ToIPAddr(sa)
+		attrVal := IPAddrAttr(ip, attrName)
+		if attrVal != "" {
+			return attrVal, nil
+		}
+
+		if sockType == TypeIPv4 {
+			ipv4 := *ToIPv4Addr(sa)
+			attrVal := IPv4AddrAttr(ipv4, attrName)
+			if attrVal != "" {
+				return attrVal, nil
+			}
+		} else if sockType == TypeIPv6 {
+			ipv6 := *ToIPv6Addr(sa)
+			attrVal := IPv6AddrAttr(ipv6, attrName)
+			if attrVal != "" {
+				return attrVal, nil
+			}
+		}
+
+	case sockType == TypeUnix:
+		us := *ToUnixSock(sa)
+		attrVal := UnixSockAttr(us, attrName)
+		if attrVal != "" {
+			return attrVal, nil
+		}
+	}
+
+	// Non type-specific attributes
+	switch attrName {
+	case "string":
+		return sa.String(), nil
+	case "type":
+		return sa.Type().String(), nil
+	}
+
+	return "", fmt.Errorf("unsupported attribute name %q", attrName)
+}
diff --git a/vendor/github.com/hashicorp/go-sockaddr/ipaddr.go b/vendor/github.com/hashicorp/go-sockaddr/ipaddr.go
new file mode 100644
index 0000000000..b47d15c201
--- /dev/null
+++ b/vendor/github.com/hashicorp/go-sockaddr/ipaddr.go
@@ -0,0 +1,169 @@
+package sockaddr
+
+import (
+	"fmt"
+	"math/big"
+	"net"
+	"strings"
+)
+
+// Constants for the sizes of IPv3, IPv4, and IPv6 address types.
+const (
+	IPv3len = 6
+	IPv4len = 4
+	IPv6len = 16
+)
+
+// IPAddr is a generic IP address interface for IPv4 and IPv6 addresses,
+// networks, and socket endpoints.
+type IPAddr interface {
+	SockAddr
+	AddressBinString() string
+	AddressHexString() string
+	Cmp(SockAddr) int
+	CmpAddress(SockAddr) int
+	CmpPort(SockAddr) int
+	FirstUsable() IPAddr
+	Host() IPAddr
+	IPPort() IPPort
+	LastUsable() IPAddr
+	Maskbits() int
+	NetIP() *net.IP
+	NetIPMask() *net.IPMask
+	NetIPNet() *net.IPNet
+	Network() IPAddr
+	Octets() []int
+}
+
+// IPPort is the type for an IP port number for the TCP and UDP IP transports.
+type IPPort uint16
+
+// IPPrefixLen is a typed integer representing the prefix length for a given
+// IPAddr.
+type IPPrefixLen byte
+
+// ipAddrAttrMap is a map of the IPAddr type-specific attributes.
+var ipAddrAttrMap map[AttrName]func(IPAddr) string
+var ipAddrAttrs []AttrName
+
+func init() {
+	ipAddrInit()
+}
+
+// NewIPAddr creates a new IPAddr from a string. Returns nil if the string is
+// not an IPv4 or an IPv6 address.
+func NewIPAddr(addr string) (IPAddr, error) {
+	ipv4Addr, err := NewIPv4Addr(addr)
+	if err == nil {
+		return ipv4Addr, nil
+	}
+
+	ipv6Addr, err := NewIPv6Addr(addr)
+	if err == nil {
+		return ipv6Addr, nil
+	}
+
+	return nil, fmt.Errorf("invalid IPAddr %v", addr)
+}
+
+// IPAddrAttr returns a string representation of an attribute for the given
+// IPAddr.
+func IPAddrAttr(ip IPAddr, selector AttrName) string {
+	fn, found := ipAddrAttrMap[selector]
+	if !found {
+		return ""
+	}
+
+	return fn(ip)
+}
+
+// IPAttrs returns a list of attributes supported by the IPAddr type
+func IPAttrs() []AttrName {
+	return ipAddrAttrs
+}
+
+// MustIPAddr is a helper method that must return an IPAddr or panic on invalid
+// input.
+func MustIPAddr(addr string) IPAddr {
+	ip, err := NewIPAddr(addr)
+	if err != nil {
+		panic(fmt.Sprintf("Unable to create an IPAddr from %+q: %v", addr, err))
+	}
+	return ip
+}
+
+// ipAddrInit is called once at init()
+func ipAddrInit() {
+	// Sorted for human readability
+	ipAddrAttrs = []AttrName{
+		"host",
+		"address",
+		"port",
+		"netmask",
+		"network",
+		"mask_bits",
+		"binary",
+		"hex",
+		"first_usable",
+		"last_usable",
+		"octets",
+	}
+
+	ipAddrAttrMap = map[AttrName]func(ip IPAddr) string{
+		"address": func(ip IPAddr) string {
+			return ip.NetIP().String()
+		},
+		"binary": func(ip IPAddr) string {
+			return ip.AddressBinString()
+		},
+		"first_usable": func(ip IPAddr) string {
+			return ip.FirstUsable().String()
+		},
+		"hex": func(ip IPAddr) string {
+			return ip.AddressHexString()
+		},
+		"host": func(ip IPAddr) string {
+			return ip.Host().String()
+		},
+		"last_usable": func(ip IPAddr) string {
+			return ip.LastUsable().String()
+		},
+		"mask_bits": func(ip IPAddr) string {
+			return fmt.Sprintf("%d", ip.Maskbits())
+		},
+		"netmask": func(ip IPAddr) string {
+			switch v := ip.(type) {
+			case IPv4Addr:
+				ipv4Mask := IPv4Addr{
+					Address: IPv4Address(v.Mask),
+					Mask:    IPv4HostMask,
+				}
+				return ipv4Mask.String()
+			case IPv6Addr:
+				ipv6Mask := new(big.Int)
+				ipv6Mask.Set(v.Mask)
+				ipv6MaskAddr := IPv6Addr{
+					Address: IPv6Address(ipv6Mask),
+					Mask:    ipv6HostMask,
+				}
+				return ipv6MaskAddr.String()
+			default:
+				// Unsupported type: no netmask attribute.
+				return ""
+			}
+		},
+		"network": func(ip IPAddr) string {
+			return ip.Network().NetIP().String()
+		},
+		"octets": func(ip IPAddr) string {
+			octets := ip.Octets()
+			octetStrs := make([]string, 0, len(octets))
+			for _, octet := range octets {
+				octetStrs = append(octetStrs, fmt.Sprintf("%d", octet))
+			}
+			return strings.Join(octetStrs, " ")
+		},
+		"port": func(ip IPAddr) string {
+			return fmt.Sprintf("%d", ip.IPPort())
+		},
+	}
+}
diff --git a/vendor/github.com/hashicorp/go-sockaddr/ipaddrs.go b/vendor/github.com/hashicorp/go-sockaddr/ipaddrs.go
new file mode 100644
index 0000000000..6eeb7ddd2f
--- /dev/null
+++ b/vendor/github.com/hashicorp/go-sockaddr/ipaddrs.go
@@ -0,0 +1,98 @@
+package sockaddr
+
+import "bytes"
+
+type IPAddrs []IPAddr
+
+func (s IPAddrs) Len() int      { return len(s) }
+func (s IPAddrs) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
+
+// // SortIPAddrsByCmp is a type that satisfies sort.Interface and can be used
+// // by the routines in this package. The SortIPAddrsByCmp type is used to
+// // sort IPAddrs by Cmp()
+// type SortIPAddrsByCmp struct{ IPAddrs }
+
+// // Less reports whether the element with index i should sort before the
+// // element with index j.
+// func (s SortIPAddrsByCmp) Less(i, j int) bool {
+// 	// Sort by Type, then address, then port number.
+// 	return Less(s.IPAddrs[i], s.IPAddrs[j])
+// }
+
+// SortIPAddrsByNetworkSize is a type that satisfies sort.Interface and
+// can be used by the routines in this package. The
+// SortIPAddrsByNetworkSize type is used to sort IPAddrs by smallest
+// network (most specific to largest network).
+type SortIPAddrsByNetworkSize struct{ IPAddrs }
+
+// Less reports whether the element with index i should sort before the
+// element with index j.
+func (s SortIPAddrsByNetworkSize) Less(i, j int) bool {
+	// Sort masks with a larger binary value (i.e. fewer hosts per network
+	// prefix) after masks with a smaller value (larger number of hosts per
+	// prefix).
+	switch bytes.Compare([]byte(*s.IPAddrs[i].NetIPMask()), []byte(*s.IPAddrs[j].NetIPMask())) {
+	case 0:
+		// Fall through to the second test if the net.IPMasks are the
+		// same.
+		break
+	case 1:
+		return true
+	case -1:
+		return false
+	default:
+		panic("bad, m'kay?")
+	}
+
+	// Sort IPs based on the length (i.e. prefer IPv4 over IPv6).
+	iLen := len(*s.IPAddrs[i].NetIP())
+	jLen := len(*s.IPAddrs[j].NetIP())
+	if iLen != jLen {
+		return iLen > jLen
+	}
+
+	// Sort IPs based on their network address from lowest to highest.
+	switch bytes.Compare(s.IPAddrs[i].NetIPNet().IP, s.IPAddrs[j].NetIPNet().IP) {
+	case 0:
+		break
+	case 1:
+		return false
+	case -1:
+		return true
+	default:
+		panic("lol wut?")
+	}
+
+	// If a host does not have a port set, it always sorts after hosts
+	// that have a port (e.g. a host with a /32 and port number is more
+	// specific and should sort first over a host with a /32 but no port
+	// set).
+	if s.IPAddrs[i].IPPort() == 0 || s.IPAddrs[j].IPPort() == 0 {
+		return false
+	}
+	return s.IPAddrs[i].IPPort() < s.IPAddrs[j].IPPort()
+}
+
+// SortIPAddrsBySpecificMaskLen is a type that satisfies sort.Interface and
+// can be used by the routines in this package. The
+// SortIPAddrsBySpecificMaskLen type is used to sort IPAddrs by smallest
+// network (most specific to largest network).
+type SortIPAddrsBySpecificMaskLen struct{ IPAddrs }
+
+// Less reports whether the element with index i should sort before the
+// element with index j.
+func (s SortIPAddrsBySpecificMaskLen) Less(i, j int) bool {
+	return s.IPAddrs[i].Maskbits() > s.IPAddrs[j].Maskbits()
+}
+
+// SortIPAddrsByBroadMaskLen is a type that satisfies sort.Interface and can
+// be used by the routines in this package. The SortIPAddrsByBroadMaskLen
+// type is used to sort IPAddrs by largest network (i.e. largest subnets
+// first).
+type SortIPAddrsByBroadMaskLen struct{ IPAddrs }
+
+// Less reports whether the element with index i should sort before the
+// element with index j.
+func (s SortIPAddrsByBroadMaskLen) Less(i, j int) bool {
+	return s.IPAddrs[i].Maskbits() < s.IPAddrs[j].Maskbits()
+}
diff --git a/vendor/github.com/hashicorp/go-sockaddr/ipv4addr.go b/vendor/github.com/hashicorp/go-sockaddr/ipv4addr.go
new file mode 100644
index 0000000000..9f2616a69f
--- /dev/null
+++ b/vendor/github.com/hashicorp/go-sockaddr/ipv4addr.go
@@ -0,0 +1,515 @@
+package sockaddr
+
+import (
+	"encoding/binary"
+	"fmt"
+	"net"
+	"regexp"
+	"strconv"
+	"strings"
+)
+
+type (
+	// IPv4Address is a named type representing an IPv4 address.
+	IPv4Address uint32
+
+	// IPv4Network is a named type representing an IPv4 network.
+	IPv4Network uint32
+
+	// IPv4Mask is a named type representing an IPv4 network mask.
+	IPv4Mask uint32
+)
+
+// IPv4HostMask is a constant that represents a /32 IPv4 Address
+// (i.e. 255.255.255.255).
+const IPv4HostMask = IPv4Mask(0xffffffff)
+
+// ipv4AddrAttrMap is a map of the IPv4Addr type-specific attributes.
+var ipv4AddrAttrMap map[AttrName]func(IPv4Addr) string
+var ipv4AddrAttrs []AttrName
+var trailingHexNetmaskRE *regexp.Regexp
+
+// IPv4Addr implements a convenience wrapper around the union of Go's
+// built-in net.IP and net.IPNet types. In UNIX-speak, IPv4Addr implements
+// `sockaddr` when the address family is set to AF_INET
+// (i.e. `sockaddr_in`).
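+// Storing the address, mask, and port as plain unsigned integers (uint32
+// address and mask, uint16 port) keeps comparisons and containment checks
+// cheap integer operations.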
+type IPv4Addr struct { + IPAddr + Address IPv4Address + Mask IPv4Mask + Port IPPort +} + +func init() { + ipv4AddrInit() + trailingHexNetmaskRE = regexp.MustCompile(`/([0f]{8})$`) +} + +// NewIPv4Addr creates an IPv4Addr from a string. String can be in the form +// of either an IPv4:port (e.g. `1.2.3.4:80`, in which case the mask is +// assumed to be a `/32`), an IPv4 address (e.g. `1.2.3.4`, also with a `/32` +// mask), or an IPv4 CIDR (e.g. `1.2.3.4/24`, which has its IP port +// initialized to zero). ipv4Str can not be a hostname. +// +// NOTE: Many net.*() routines will initialize and return an IPv6 address. +// To create uint32 values from net.IP, always test to make sure the address +// returned can be converted to a 4 byte array using To4(). +func NewIPv4Addr(ipv4Str string) (IPv4Addr, error) { + // Strip off any bogus hex-encoded netmasks that will be mis-parsed by Go. In + // particular, clients with the Barracuda VPN client will see something like: + // `192.168.3.51/00ffffff` as their IP address. + if match := trailingHexNetmaskRE.FindStringIndex(ipv4Str); match != nil { + ipv4Str = ipv4Str[:match[0]] + } + + // Parse as an IPv4 CIDR + ipAddr, network, err := net.ParseCIDR(ipv4Str) + if err == nil { + ipv4 := ipAddr.To4() + if ipv4 == nil { + return IPv4Addr{}, fmt.Errorf("Unable to convert %s to an IPv4 address", ipv4Str) + } + + // If we see an IPv6 netmask, convert it to an IPv4 mask. + netmaskSepPos := strings.LastIndexByte(ipv4Str, '/') + if netmaskSepPos != -1 && netmaskSepPos+1 < len(ipv4Str) { + netMask, err := strconv.ParseUint(ipv4Str[netmaskSepPos+1:], 10, 8) + if err != nil { + return IPv4Addr{}, fmt.Errorf("Unable to convert %s to an IPv4 address: unable to parse CIDR netmask: %v", ipv4Str, err) + } else if netMask > 128 { + return IPv4Addr{}, fmt.Errorf("Unable to convert %s to an IPv4 address: invalid CIDR netmask", ipv4Str) + } + + if netMask >= 96 { + // Convert the IPv6 netmask to an IPv4 netmask + network.Mask = net.CIDRMask(int(netMask-96), IPv4len*8) + } + } + ipv4Addr := IPv4Addr{ + Address: IPv4Address(binary.BigEndian.Uint32(ipv4)), + Mask: IPv4Mask(binary.BigEndian.Uint32(network.Mask)), + } + return ipv4Addr, nil + } + + // Attempt to parse ipv4Str as a /32 host with a port number. + tcpAddr, err := net.ResolveTCPAddr("tcp4", ipv4Str) + if err == nil { + ipv4 := tcpAddr.IP.To4() + if ipv4 == nil { + return IPv4Addr{}, fmt.Errorf("Unable to resolve %+q as an IPv4 address", ipv4Str) + } + + ipv4Uint32 := binary.BigEndian.Uint32(ipv4) + ipv4Addr := IPv4Addr{ + Address: IPv4Address(ipv4Uint32), + Mask: IPv4HostMask, + Port: IPPort(tcpAddr.Port), + } + + return ipv4Addr, nil + } + + // Parse as a naked IPv4 address + ip := net.ParseIP(ipv4Str) + if ip != nil { + ipv4 := ip.To4() + if ipv4 == nil { + return IPv4Addr{}, fmt.Errorf("Unable to string convert %+q to an IPv4 address", ipv4Str) + } + + ipv4Uint32 := binary.BigEndian.Uint32(ipv4) + ipv4Addr := IPv4Addr{ + Address: IPv4Address(ipv4Uint32), + Mask: IPv4HostMask, + } + return ipv4Addr, nil + } + + return IPv4Addr{}, fmt.Errorf("Unable to parse %+q to an IPv4 address: %v", ipv4Str, err) +} + +// AddressBinString returns a string with the IPv4Addr's Address represented +// as a sequence of '0' and '1' characters. This method is useful for +// debugging or by operators who want to inspect an address. 
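+// For example, "127.0.0.1" renders as
+// "01111111000000000000000000000001".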
+func (ipv4 IPv4Addr) AddressBinString() string { + return fmt.Sprintf("%032s", strconv.FormatUint(uint64(ipv4.Address), 2)) +} + +// AddressHexString returns a string with the IPv4Addr address represented as +// a sequence of hex characters. This method is useful for debugging or by +// operators who want to inspect an address. +func (ipv4 IPv4Addr) AddressHexString() string { + return fmt.Sprintf("%08s", strconv.FormatUint(uint64(ipv4.Address), 16)) +} + +// Broadcast is an IPv4Addr-only method that returns the broadcast address of +// the network. +// +// NOTE: IPv6 only supports multicast, so this method only exists for +// IPv4Addr. +func (ipv4 IPv4Addr) Broadcast() IPAddr { + // Nothing should listen on a broadcast address. + return IPv4Addr{ + Address: IPv4Address(ipv4.BroadcastAddress()), + Mask: IPv4HostMask, + } +} + +// BroadcastAddress returns a IPv4Network of the IPv4Addr's broadcast +// address. +func (ipv4 IPv4Addr) BroadcastAddress() IPv4Network { + return IPv4Network(uint32(ipv4.Address)&uint32(ipv4.Mask) | ^uint32(ipv4.Mask)) +} + +// CmpAddress follows the Cmp() standard protocol and returns: +// +// - -1 If the receiver should sort first because its address is lower than arg +// - 0 if the SockAddr arg is equal to the receiving IPv4Addr or the argument is +// of a different type. +// - 1 If the argument should sort first. +func (ipv4 IPv4Addr) CmpAddress(sa SockAddr) int { + ipv4b, ok := sa.(IPv4Addr) + if !ok { + return sortDeferDecision + } + + switch { + case ipv4.Address == ipv4b.Address: + return sortDeferDecision + case ipv4.Address < ipv4b.Address: + return sortReceiverBeforeArg + default: + return sortArgBeforeReceiver + } +} + +// CmpPort follows the Cmp() standard protocol and returns: +// +// - -1 If the receiver should sort first because its port is lower than arg +// - 0 if the SockAddr arg's port number is equal to the receiving IPv4Addr, +// regardless of type. +// - 1 If the argument should sort first. +func (ipv4 IPv4Addr) CmpPort(sa SockAddr) int { + var saPort IPPort + switch v := sa.(type) { + case IPv4Addr: + saPort = v.Port + case IPv6Addr: + saPort = v.Port + default: + return sortDeferDecision + } + + switch { + case ipv4.Port == saPort: + return sortDeferDecision + case ipv4.Port < saPort: + return sortReceiverBeforeArg + default: + return sortArgBeforeReceiver + } +} + +// CmpRFC follows the Cmp() standard protocol and returns: +// +// - -1 If the receiver should sort first because it belongs to the RFC and its +// arg does not +// - 0 if the receiver and arg both belong to the same RFC or neither do. +// - 1 If the arg belongs to the RFC but receiver does not. +func (ipv4 IPv4Addr) CmpRFC(rfcNum uint, sa SockAddr) int { + recvInRFC := IsRFC(rfcNum, ipv4) + ipv4b, ok := sa.(IPv4Addr) + if !ok { + // If the receiver is part of the desired RFC and the SockAddr + // argument is not, return -1 so that the receiver sorts before + // the non-IPv4 SockAddr. Conversely, if the receiver is not + // part of the RFC, punt on sorting and leave it for the next + // sorter. + if recvInRFC { + return sortReceiverBeforeArg + } else { + return sortDeferDecision + } + } + + argInRFC := IsRFC(rfcNum, ipv4b) + switch { + case (recvInRFC && argInRFC), (!recvInRFC && !argInRFC): + // If a and b both belong to the RFC, or neither belong to + // rfcNum, defer sorting to the next sorter. 
+ return sortDeferDecision + case recvInRFC && !argInRFC: + return sortReceiverBeforeArg + default: + return sortArgBeforeReceiver + } +} + +// Contains returns true if the SockAddr is contained within the receiver. +func (ipv4 IPv4Addr) Contains(sa SockAddr) bool { + ipv4b, ok := sa.(IPv4Addr) + if !ok { + return false + } + + return ipv4.ContainsNetwork(ipv4b) +} + +// ContainsAddress returns true if the IPv4Address is contained within the +// receiver. +func (ipv4 IPv4Addr) ContainsAddress(x IPv4Address) bool { + return IPv4Address(ipv4.NetworkAddress()) <= x && + IPv4Address(ipv4.BroadcastAddress()) >= x +} + +// ContainsNetwork returns true if the network from IPv4Addr is contained +// within the receiver. +func (ipv4 IPv4Addr) ContainsNetwork(x IPv4Addr) bool { + return ipv4.NetworkAddress() <= x.NetworkAddress() && + ipv4.BroadcastAddress() >= x.BroadcastAddress() +} + +// DialPacketArgs returns the arguments required to be passed to +// net.DialUDP(). If the Mask of ipv4 is not a /32 or the Port is 0, +// DialPacketArgs() will fail. See Host() to create an IPv4Addr with its +// mask set to /32. +func (ipv4 IPv4Addr) DialPacketArgs() (network, dialArgs string) { + if ipv4.Mask != IPv4HostMask || ipv4.Port == 0 { + return "udp4", "" + } + return "udp4", fmt.Sprintf("%s:%d", ipv4.NetIP().String(), ipv4.Port) +} + +// DialStreamArgs returns the arguments required to be passed to +// net.DialTCP(). If the Mask of ipv4 is not a /32 or the Port is 0, +// DialStreamArgs() will fail. See Host() to create an IPv4Addr with its +// mask set to /32. +func (ipv4 IPv4Addr) DialStreamArgs() (network, dialArgs string) { + if ipv4.Mask != IPv4HostMask || ipv4.Port == 0 { + return "tcp4", "" + } + return "tcp4", fmt.Sprintf("%s:%d", ipv4.NetIP().String(), ipv4.Port) +} + +// Equal returns true if a SockAddr is equal to the receiving IPv4Addr. +func (ipv4 IPv4Addr) Equal(sa SockAddr) bool { + ipv4b, ok := sa.(IPv4Addr) + if !ok { + return false + } + + if ipv4.Port != ipv4b.Port { + return false + } + + if ipv4.Address != ipv4b.Address { + return false + } + + if ipv4.NetIPNet().String() != ipv4b.NetIPNet().String() { + return false + } + + return true +} + +// FirstUsable returns an IPv4Addr set to the first address following the +// network prefix. The first usable address in a network is normally the +// gateway and should not be used except by devices forwarding packets +// between two administratively distinct networks (i.e. a router). This +// function does not discriminate against first usable vs "first address that +// should be used." For example, FirstUsable() on "192.168.1.10/24" would +// return the address "192.168.1.1/24". +func (ipv4 IPv4Addr) FirstUsable() IPAddr { + addr := ipv4.NetworkAddress() + + // If /32, return the address itself. If /31 assume a point-to-point + // link and return the lower address. + if ipv4.Maskbits() < 31 { + addr++ + } + + return IPv4Addr{ + Address: IPv4Address(addr), + Mask: IPv4HostMask, + } +} + +// Host returns a copy of ipv4 with its mask set to /32 so that it can be +// used by DialPacketArgs(), DialStreamArgs(), ListenPacketArgs(), or +// ListenStreamArgs(). +func (ipv4 IPv4Addr) Host() IPAddr { + // Nothing should listen on a broadcast address. 
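+	// Keeping the port while forcing the mask to /32 yields an address
+	// usable with the Dial* and Listen* helpers above.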
+ return IPv4Addr{ + Address: ipv4.Address, + Mask: IPv4HostMask, + Port: ipv4.Port, + } +} + +// IPPort returns the Port number attached to the IPv4Addr +func (ipv4 IPv4Addr) IPPort() IPPort { + return ipv4.Port +} + +// LastUsable returns the last address before the broadcast address in a +// given network. +func (ipv4 IPv4Addr) LastUsable() IPAddr { + addr := ipv4.BroadcastAddress() + + // If /32, return the address itself. If /31 assume a point-to-point + // link and return the upper address. + if ipv4.Maskbits() < 31 { + addr-- + } + + return IPv4Addr{ + Address: IPv4Address(addr), + Mask: IPv4HostMask, + } +} + +// ListenPacketArgs returns the arguments required to be passed to +// net.ListenUDP(). If the Mask of ipv4 is not a /32, ListenPacketArgs() +// will fail. See Host() to create an IPv4Addr with its mask set to /32. +func (ipv4 IPv4Addr) ListenPacketArgs() (network, listenArgs string) { + if ipv4.Mask != IPv4HostMask { + return "udp4", "" + } + return "udp4", fmt.Sprintf("%s:%d", ipv4.NetIP().String(), ipv4.Port) +} + +// ListenStreamArgs returns the arguments required to be passed to +// net.ListenTCP(). If the Mask of ipv4 is not a /32, ListenStreamArgs() +// will fail. See Host() to create an IPv4Addr with its mask set to /32. +func (ipv4 IPv4Addr) ListenStreamArgs() (network, listenArgs string) { + if ipv4.Mask != IPv4HostMask { + return "tcp4", "" + } + return "tcp4", fmt.Sprintf("%s:%d", ipv4.NetIP().String(), ipv4.Port) +} + +// Maskbits returns the number of network mask bits in a given IPv4Addr. For +// example, the Maskbits() of "192.168.1.1/24" would return 24. +func (ipv4 IPv4Addr) Maskbits() int { + mask := make(net.IPMask, IPv4len) + binary.BigEndian.PutUint32(mask, uint32(ipv4.Mask)) + maskOnes, _ := mask.Size() + return maskOnes +} + +// MustIPv4Addr is a helper method that must return an IPv4Addr or panic on +// invalid input. +func MustIPv4Addr(addr string) IPv4Addr { + ipv4, err := NewIPv4Addr(addr) + if err != nil { + panic(fmt.Sprintf("Unable to create an IPv4Addr from %+q: %v", addr, err)) + } + return ipv4 +} + +// NetIP returns the address as a net.IP (address is always presized to +// IPv4). +func (ipv4 IPv4Addr) NetIP() *net.IP { + x := make(net.IP, IPv4len) + binary.BigEndian.PutUint32(x, uint32(ipv4.Address)) + return &x +} + +// NetIPMask create a new net.IPMask from the IPv4Addr. +func (ipv4 IPv4Addr) NetIPMask() *net.IPMask { + ipv4Mask := net.IPMask{} + ipv4Mask = make(net.IPMask, IPv4len) + binary.BigEndian.PutUint32(ipv4Mask, uint32(ipv4.Mask)) + return &ipv4Mask +} + +// NetIPNet create a new net.IPNet from the IPv4Addr. +func (ipv4 IPv4Addr) NetIPNet() *net.IPNet { + ipv4net := &net.IPNet{} + ipv4net.IP = make(net.IP, IPv4len) + binary.BigEndian.PutUint32(ipv4net.IP, uint32(ipv4.NetworkAddress())) + ipv4net.Mask = *ipv4.NetIPMask() + return ipv4net +} + +// Network returns the network prefix or network address for a given network. +func (ipv4 IPv4Addr) Network() IPAddr { + return IPv4Addr{ + Address: IPv4Address(ipv4.NetworkAddress()), + Mask: ipv4.Mask, + } +} + +// NetworkAddress returns an IPv4Network of the IPv4Addr's network address. +func (ipv4 IPv4Addr) NetworkAddress() IPv4Network { + return IPv4Network(uint32(ipv4.Address) & uint32(ipv4.Mask)) +} + +// Octets returns a slice of the four octets in an IPv4Addr's Address. The +// order of the bytes is big endian. 
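+// For example, MustIPv4Addr("10.1.2.3").Octets() returns []int{10, 1, 2, 3}.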
+func (ipv4 IPv4Addr) Octets() []int {
+	return []int{
+		int(ipv4.Address >> 24),
+		int((ipv4.Address >> 16) & 0xff),
+		int((ipv4.Address >> 8) & 0xff),
+		int(ipv4.Address & 0xff),
+	}
+}
+
+// String returns a string representation of the IPv4Addr
+func (ipv4 IPv4Addr) String() string {
+	if ipv4.Port != 0 {
+		return fmt.Sprintf("%s:%d", ipv4.NetIP().String(), ipv4.Port)
+	}
+
+	if ipv4.Maskbits() == 32 {
+		return ipv4.NetIP().String()
+	}
+
+	return fmt.Sprintf("%s/%d", ipv4.NetIP().String(), ipv4.Maskbits())
+}
+
+// Type is used as a type switch and returns TypeIPv4
+func (IPv4Addr) Type() SockAddrType {
+	return TypeIPv4
+}
+
+// IPv4AddrAttr returns a string representation of an attribute for the given
+// IPv4Addr.
+func IPv4AddrAttr(ipv4 IPv4Addr, selector AttrName) string {
+	fn, found := ipv4AddrAttrMap[selector]
+	if !found {
+		return ""
+	}
+
+	return fn(ipv4)
+}
+
+// IPv4Attrs returns a list of attributes supported by the IPv4Addr type
+func IPv4Attrs() []AttrName {
+	return ipv4AddrAttrs
+}
+
+// ipv4AddrInit is called once at init()
+func ipv4AddrInit() {
+	// Sorted for human readability
+	ipv4AddrAttrs = []AttrName{
+		"size", // Same position as in IPv6 for output consistency
+		"broadcast",
+		"uint32",
+	}
+
+	ipv4AddrAttrMap = map[AttrName]func(ipv4 IPv4Addr) string{
+		"broadcast": func(ipv4 IPv4Addr) string {
+			return ipv4.Broadcast().String()
+		},
+		"size": func(ipv4 IPv4Addr) string {
+			return fmt.Sprintf("%d", 1<<uint(IPv4len*8-ipv4.Maskbits()))
+		},
+		"uint32": func(ipv4 IPv4Addr) string {
+			return fmt.Sprintf("%d", uint32(ipv4.Address))
+		},
+	}
+}
diff --git a/vendor/github.com/hashicorp/go-sockaddr/ipv6addr.go b/vendor/github.com/hashicorp/go-sockaddr/ipv6addr.go
new file mode 100644
--- /dev/null
+++ b/vendor/github.com/hashicorp/go-sockaddr/ipv6addr.go
+package sockaddr
+
+import (
+	"bytes"
+	"encoding/binary"
+	"fmt"
+	"math/big"
+	"net"
+)
+
+type (
+	// IPv6Address is a named type representing an IPv6 address.
+	IPv6Address *big.Int
+
+	// IPv6Network is a named type representing an IPv6 network.
+	IPv6Network *big.Int
+
+	// IPv6Mask is a named type representing an IPv6 network mask.
+	IPv6Mask *big.Int
+)
+
+// ipv6HostMask is an unexported big.Int representing a /128 IPv6 address.
+// This value must be a constant and always set to all ones.
+var ipv6HostMask IPv6Mask
+
+// ipv6AddrAttrMap is a map of the IPv6Addr type-specific attributes.
+var ipv6AddrAttrMap map[AttrName]func(IPv6Addr) string
+var ipv6AddrAttrs []AttrName
+
+func init() {
+	biMask := new(big.Int)
+	biMask = biMask.SetBytes([]byte{
+		0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff,
+	})
+	ipv6HostMask = IPv6Mask(biMask)
+
+	ipv6AddrInit()
+}
+
+// IPv6Addr implements a convenience wrapper around the union of Go's
+// built-in net.IP and net.IPNet types. In UNIX-speak, IPv6Addr implements
+// `sockaddr` when the address family is set to AF_INET6
+// (i.e. `sockaddr_in6`).
+type IPv6Addr struct {
+	IPAddr
+	Address IPv6Address
+	Mask    IPv6Mask
+	Port    IPPort
+}
+
+// NewIPv6Addr creates an IPv6Addr from a string. String can be in the form
+// of either an IPv6:port (e.g. `[2001:4860:0:2001::68]:80`, in which case
+// the mask is assumed to be a /128), an IPv6 address
+// (e.g. `2001:4860:0:2001::68`, also with a /128 mask), or an IPv6 CIDR
+// (e.g. `2001:4860:0:2001::68/64`, which has its IP port initialized to
+// zero). ipv6Str can not be a hostname.
+//
+// NOTE: Many net.*() routines will initialize and return an IPv4 address.
+// Always test to make sure the address returned cannot be converted to a 4
+// byte array using To4().
+func NewIPv6Addr(ipv6Str string) (IPv6Addr, error) {
+	// Attempt to parse ipv6Str as a /128 host with a port number.
+	tcpAddr, err := net.ResolveTCPAddr("tcp6", ipv6Str)
+	if err == nil {
+		ipv6 := tcpAddr.IP.To16()
+		if ipv6 == nil {
+			return IPv6Addr{}, fmt.Errorf("Unable to resolve %+q as a 16byte IPv6 address", ipv6Str)
+		}
+
+		ipv6BigIntAddr := new(big.Int)
+		ipv6BigIntAddr.SetBytes(ipv6)
+
+		ipv6BigIntMask := new(big.Int)
+		ipv6BigIntMask.Set(ipv6HostMask)
+
+		ipv6Addr := IPv6Addr{
+			Address: IPv6Address(ipv6BigIntAddr),
+			Mask:    IPv6Mask(ipv6BigIntMask),
+			Port:    IPPort(tcpAddr.Port),
+		}
+		return ipv6Addr, nil
+	}
+
+	// Parse as a naked IPv6 address; strip square brackets if present.
+	if len(ipv6Str) > 2 && ipv6Str[0] == '[' && ipv6Str[len(ipv6Str)-1] == ']' {
+		ipv6Str = ipv6Str[1 : len(ipv6Str)-1]
+	}
+	ip := net.ParseIP(ipv6Str)
+	if ip != nil {
+		ipv6 := ip.To16()
+		if ipv6 == nil {
+			return IPv6Addr{}, fmt.Errorf("Unable to string convert %+q to a 16byte IPv6 address", ipv6Str)
+		}
+
+		ipv6BigIntAddr := new(big.Int)
+		ipv6BigIntAddr.SetBytes(ipv6)
+
+		ipv6BigIntMask := new(big.Int)
+		ipv6BigIntMask.Set(ipv6HostMask)
+
+		return IPv6Addr{
+			Address: IPv6Address(ipv6BigIntAddr),
+			Mask:    IPv6Mask(ipv6BigIntMask),
+		}, nil
+	}
+
+	// Parse as an IPv6 CIDR
+	ipAddr, network, err := net.ParseCIDR(ipv6Str)
+	if err == nil {
+		ipv6 := ipAddr.To16()
+		if ipv6 == nil {
+			return IPv6Addr{}, fmt.Errorf("Unable to convert %+q to a 16byte IPv6 address", ipv6Str)
+		}
+
+		ipv6BigIntAddr := new(big.Int)
+		ipv6BigIntAddr.SetBytes(ipv6)
+
+		ipv6BigIntMask := new(big.Int)
+		ipv6BigIntMask.SetBytes(network.Mask)
+
+		ipv6Addr := IPv6Addr{
+			Address: IPv6Address(ipv6BigIntAddr),
+			Mask:    IPv6Mask(ipv6BigIntMask),
+		}
+		return ipv6Addr, nil
+	}
+
+	return IPv6Addr{}, fmt.Errorf("Unable to parse %+q to an IPv6 address: %v", ipv6Str, err)
+}
+
+// AddressBinString returns a string with the IPv6Addr's Address represented
+// as a sequence of '0' and '1' characters. This method is useful for
+// debugging or by operators who want to inspect an address.
+func (ipv6 IPv6Addr) AddressBinString() string {
+	bi := big.Int(*ipv6.Address)
+	return fmt.Sprintf("%0128s", bi.Text(2))
+}
+
+// AddressHexString returns a string with the IPv6Addr address represented as
+// a sequence of hex characters. This method is useful for debugging or by
+// operators who want to inspect an address.
+func (ipv6 IPv6Addr) AddressHexString() string {
+	bi := big.Int(*ipv6.Address)
+	return fmt.Sprintf("%032s", bi.Text(16))
+}
+
+// CmpAddress follows the Cmp() standard protocol and returns:
+//
+// - -1 If the receiver should sort first because its address is lower than arg
+// - 0 if the SockAddr arg is equal to the receiving IPv6Addr or the argument
+//   is of a different type.
+// - 1 If the argument should sort first. +func (ipv6 IPv6Addr) CmpAddress(sa SockAddr) int { + ipv6b, ok := sa.(IPv6Addr) + if !ok { + return sortDeferDecision + } + + ipv6aBigInt := new(big.Int) + ipv6aBigInt.Set(ipv6.Address) + ipv6bBigInt := new(big.Int) + ipv6bBigInt.Set(ipv6b.Address) + + return ipv6aBigInt.Cmp(ipv6bBigInt) +} + +// CmpPort follows the Cmp() standard protocol and returns: +// +// - -1 If the receiver should sort first because its port is lower than arg +// - 0 if the SockAddr arg's port number is equal to the receiving IPv6Addr, +// regardless of type. +// - 1 If the argument should sort first. +func (ipv6 IPv6Addr) CmpPort(sa SockAddr) int { + var saPort IPPort + switch v := sa.(type) { + case IPv4Addr: + saPort = v.Port + case IPv6Addr: + saPort = v.Port + default: + return sortDeferDecision + } + + switch { + case ipv6.Port == saPort: + return sortDeferDecision + case ipv6.Port < saPort: + return sortReceiverBeforeArg + default: + return sortArgBeforeReceiver + } +} + +// CmpRFC follows the Cmp() standard protocol and returns: +// +// - -1 If the receiver should sort first because it belongs to the RFC and its +// arg does not +// - 0 if the receiver and arg both belong to the same RFC or neither do. +// - 1 If the arg belongs to the RFC but receiver does not. +func (ipv6 IPv6Addr) CmpRFC(rfcNum uint, sa SockAddr) int { + recvInRFC := IsRFC(rfcNum, ipv6) + ipv6b, ok := sa.(IPv6Addr) + if !ok { + // If the receiver is part of the desired RFC and the SockAddr + // argument is not, sort receiver before the non-IPv6 SockAddr. + // Conversely, if the receiver is not part of the RFC, punt on + // sorting and leave it for the next sorter. + if recvInRFC { + return sortReceiverBeforeArg + } else { + return sortDeferDecision + } + } + + argInRFC := IsRFC(rfcNum, ipv6b) + switch { + case (recvInRFC && argInRFC), (!recvInRFC && !argInRFC): + // If a and b both belong to the RFC, or neither belong to + // rfcNum, defer sorting to the next sorter. + return sortDeferDecision + case recvInRFC && !argInRFC: + return sortReceiverBeforeArg + default: + return sortArgBeforeReceiver + } +} + +// Contains returns true if the SockAddr is contained within the receiver. +func (ipv6 IPv6Addr) Contains(sa SockAddr) bool { + ipv6b, ok := sa.(IPv6Addr) + if !ok { + return false + } + + return ipv6.ContainsNetwork(ipv6b) +} + +// ContainsAddress returns true if the IPv6Address is contained within the +// receiver. +func (ipv6 IPv6Addr) ContainsAddress(x IPv6Address) bool { + xAddr := IPv6Addr{ + Address: x, + Mask: ipv6HostMask, + } + + { + xIPv6 := xAddr.FirstUsable().(IPv6Addr) + yIPv6 := ipv6.FirstUsable().(IPv6Addr) + if xIPv6.CmpAddress(yIPv6) >= 1 { + return false + } + } + + { + xIPv6 := xAddr.LastUsable().(IPv6Addr) + yIPv6 := ipv6.LastUsable().(IPv6Addr) + if xIPv6.CmpAddress(yIPv6) <= -1 { + return false + } + } + return true +} + +// ContainsNetwork returns true if the network from IPv6Addr is contained within +// the receiver. +func (x IPv6Addr) ContainsNetwork(y IPv6Addr) bool { + { + xIPv6 := x.FirstUsable().(IPv6Addr) + yIPv6 := y.FirstUsable().(IPv6Addr) + if ret := xIPv6.CmpAddress(yIPv6); ret >= 1 { + return false + } + } + + { + xIPv6 := x.LastUsable().(IPv6Addr) + yIPv6 := y.LastUsable().(IPv6Addr) + if ret := xIPv6.CmpAddress(yIPv6); ret <= -1 { + return false + } + } + return true +} + +// DialPacketArgs returns the arguments required to be passed to +// net.DialUDP(). If the Mask of ipv6 is not a /128 or the Port is 0, +// DialPacketArgs() will fail. 
See Host() to create an IPv6Addr with its
+// mask set to /128.
+func (ipv6 IPv6Addr) DialPacketArgs() (network, dialArgs string) {
+	ipv6Mask := big.Int(*ipv6.Mask)
+	if ipv6Mask.Cmp(ipv6HostMask) != 0 || ipv6.Port == 0 {
+		return "udp6", ""
+	}
+	return "udp6", fmt.Sprintf("[%s]:%d", ipv6.NetIP().String(), ipv6.Port)
+}
+
+// DialStreamArgs returns the arguments required to be passed to
+// net.DialTCP(). If the Mask of ipv6 is not a /128 or the Port is 0,
+// DialStreamArgs() will fail. See Host() to create an IPv6Addr with its
+// mask set to /128.
+func (ipv6 IPv6Addr) DialStreamArgs() (network, dialArgs string) {
+	ipv6Mask := big.Int(*ipv6.Mask)
+	if ipv6Mask.Cmp(ipv6HostMask) != 0 || ipv6.Port == 0 {
+		return "tcp6", ""
+	}
+	return "tcp6", fmt.Sprintf("[%s]:%d", ipv6.NetIP().String(), ipv6.Port)
+}
+
+// Equal returns true if a SockAddr is equal to the receiving IPv6Addr.
+func (ipv6a IPv6Addr) Equal(sa SockAddr) bool {
+	ipv6b, ok := sa.(IPv6Addr)
+	if !ok {
+		return false
+	}
+
+	if ipv6a.NetIP().String() != ipv6b.NetIP().String() {
+		return false
+	}
+
+	if ipv6a.NetIPNet().String() != ipv6b.NetIPNet().String() {
+		return false
+	}
+
+	if ipv6a.Port != ipv6b.Port {
+		return false
+	}
+
+	return true
+}
+
+// FirstUsable returns an IPv6Addr set to the first address of the network,
+// which for IPv6 is the network address itself. The first usable address in
+// a network is normally the gateway and should not be used except by devices
+// forwarding packets between two administratively distinct networks (i.e. a
+// router). This function does not discriminate against first usable vs
+// "first address that should be used." For example, FirstUsable() on
+// "2001:0db8::0003/64" would return "2001:0db8::".
+func (ipv6 IPv6Addr) FirstUsable() IPAddr {
+	return IPv6Addr{
+		Address: IPv6Address(ipv6.NetworkAddress()),
+		Mask:    ipv6HostMask,
+	}
+}
+
+// Host returns a copy of ipv6 with its mask set to /128 so that it can be
+// used by DialPacketArgs(), DialStreamArgs(), ListenPacketArgs(), or
+// ListenStreamArgs().
+func (ipv6 IPv6Addr) Host() IPAddr {
+	// Nothing should listen on a broadcast address.
+	return IPv6Addr{
+		Address: ipv6.Address,
+		Mask:    ipv6HostMask,
+		Port:    ipv6.Port,
+	}
+}
+
+// IPPort returns the Port number attached to the IPv6Addr
+func (ipv6 IPv6Addr) IPPort() IPPort {
+	return ipv6.Port
+}
+
+// LastUsable returns the last address in a given network.
+func (ipv6 IPv6Addr) LastUsable() IPAddr {
+	addr := new(big.Int)
+	addr.Set(ipv6.Address)
+
+	mask := new(big.Int)
+	mask.Set(ipv6.Mask)
+
+	negMask := new(big.Int)
+	negMask.Xor(ipv6HostMask, mask)
+
+	lastAddr := new(big.Int)
+	lastAddr.And(addr, mask)
+	lastAddr.Or(lastAddr, negMask)
+
+	return IPv6Addr{
+		Address: IPv6Address(lastAddr),
+		Mask:    ipv6HostMask,
+	}
+}
+
+// ListenPacketArgs returns the arguments required to be passed to
+// net.ListenUDP(). If the Mask of ipv6 is not a /128, ListenPacketArgs()
+// will fail. See Host() to create an IPv6Addr with its mask set to /128.
+func (ipv6 IPv6Addr) ListenPacketArgs() (network, listenArgs string) {
+	ipv6Mask := big.Int(*ipv6.Mask)
+	if ipv6Mask.Cmp(ipv6HostMask) != 0 {
+		return "udp6", ""
+	}
+	return "udp6", fmt.Sprintf("[%s]:%d", ipv6.NetIP().String(), ipv6.Port)
+}
+
+// ListenStreamArgs returns the arguments required to be passed to
+// net.ListenTCP(). If the Mask of ipv6 is not a /128, ListenStreamArgs()
+// will fail. See Host() to create an IPv6Addr with its mask set to /128.
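+// A Port of 0 is permitted here; the kernel will choose an ephemeral port.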
+func (ipv6 IPv6Addr) ListenStreamArgs() (network, listenArgs string) {
+	ipv6Mask := big.Int(*ipv6.Mask)
+	if ipv6Mask.Cmp(ipv6HostMask) != 0 {
+		return "tcp6", ""
+	}
+	return "tcp6", fmt.Sprintf("[%s]:%d", ipv6.NetIP().String(), ipv6.Port)
+}
+
+// Maskbits returns the number of network mask bits in a given IPv6Addr. For
+// example, the Maskbits() of "2001:0db8::0003/64" would return 64.
+func (ipv6 IPv6Addr) Maskbits() int {
+	maskOnes, _ := ipv6.NetIPNet().Mask.Size()
+
+	return maskOnes
+}
+
+// MustIPv6Addr is a helper method that must return an IPv6Addr or panic on
+// invalid input.
+func MustIPv6Addr(addr string) IPv6Addr {
+	ipv6, err := NewIPv6Addr(addr)
+	if err != nil {
+		panic(fmt.Sprintf("Unable to create an IPv6Addr from %+q: %v", addr, err))
+	}
+	return ipv6
+}
+
+// NetIP returns the address as a net.IP.
+func (ipv6 IPv6Addr) NetIP() *net.IP {
+	return bigIntToNetIPv6(ipv6.Address)
+}
+
+// NetIPMask creates a new net.IPMask from the IPv6Addr.
+func (ipv6 IPv6Addr) NetIPMask() *net.IPMask {
+	ipv6Mask := make(net.IPMask, IPv6len)
+	m := big.Int(*ipv6.Mask)
+	copy(ipv6Mask, m.Bytes())
+	return &ipv6Mask
+}
+
+// NetIPNet creates a new net.IPNet from the IPv6Addr.
+func (ipv6 IPv6Addr) NetIPNet() *net.IPNet {
+	ipv6net := &net.IPNet{}
+	ipv6net.IP = make(net.IP, IPv6len)
+	copy(ipv6net.IP, *ipv6.NetIP())
+	ipv6net.Mask = *ipv6.NetIPMask()
+	return ipv6net
+}
+
+// Network returns the network prefix or network address for a given network.
+func (ipv6 IPv6Addr) Network() IPAddr {
+	return IPv6Addr{
+		Address: IPv6Address(ipv6.NetworkAddress()),
+		Mask:    ipv6.Mask,
+	}
+}
+
+// NetworkAddress returns an IPv6Network of the IPv6Addr's network address.
+func (ipv6 IPv6Addr) NetworkAddress() IPv6Network {
+	addr := new(big.Int)
+	addr.SetBytes((*ipv6.Address).Bytes())
+
+	mask := new(big.Int)
+	mask.SetBytes(*ipv6.NetIPMask())
+
+	netAddr := new(big.Int)
+	netAddr.And(addr, mask)
+
+	return IPv6Network(netAddr)
+}
+
+// Octets returns a slice of the 16 octets in an IPv6Addr's Address. The
+// order of the bytes is big endian.
+func (ipv6 IPv6Addr) Octets() []int {
+	x := make([]int, IPv6len)
+	for i, b := range *bigIntToNetIPv6(ipv6.Address) {
+		x[i] = int(b)
+	}
+
+	return x
+}
+
+// String returns a string representation of the IPv6Addr
+func (ipv6 IPv6Addr) String() string {
+	if ipv6.Port != 0 {
+		return fmt.Sprintf("[%s]:%d", ipv6.NetIP().String(), ipv6.Port)
+	}
+
+	if ipv6.Maskbits() == 128 {
+		return ipv6.NetIP().String()
+	}
+
+	return fmt.Sprintf("%s/%d", ipv6.NetIP().String(), ipv6.Maskbits())
+}
+
+// Type is used as a type switch and returns TypeIPv6
+func (IPv6Addr) Type() SockAddrType {
+	return TypeIPv6
+}
+
+// IPv6Attrs returns a list of attributes supported by the IPv6Addr type
+func IPv6Attrs() []AttrName {
+	return ipv6AddrAttrs
+}
+
+// IPv6AddrAttr returns a string representation of an attribute for the given
+// IPv6Addr.
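+// Unknown selectors return an empty string rather than an error.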
+func IPv6AddrAttr(ipv6 IPv6Addr, selector AttrName) string {
+	fn, found := ipv6AddrAttrMap[selector]
+	if !found {
+		return ""
+	}
+
+	return fn(ipv6)
+}
+
+// ipv6AddrInit is called once at init()
+func ipv6AddrInit() {
+	// Sorted for human readability
+	ipv6AddrAttrs = []AttrName{
+		"size", // Same position as in IPv4 for output consistency
+		"uint128",
+	}
+
+	ipv6AddrAttrMap = map[AttrName]func(ipv6 IPv6Addr) string{
+		"size": func(ipv6 IPv6Addr) string {
+			netSize := big.NewInt(1)
+			netSize = netSize.Lsh(netSize, uint(IPv6len*8-ipv6.Maskbits()))
+			return netSize.Text(10)
+		},
+		"uint128": func(ipv6 IPv6Addr) string {
+			b := big.Int(*ipv6.Address)
+			return b.Text(10)
+		},
+	}
+}
+
+// bigIntToNetIPv6 is a helper function that returns a net.IP with correctly
+// padded values.
+func bigIntToNetIPv6(bi *big.Int) *net.IP {
+	x := make(net.IP, IPv6len)
+	ipv6Bytes := bi.Bytes()
+
+	// It's possible for ipv6Bytes to be less than IPv6len bytes in size.
+	// If they are different sizes, we need to pad the response.
+	if len(ipv6Bytes) < IPv6len {
+		buf := new(bytes.Buffer)
+		buf.Grow(IPv6len)
+
+		for i := len(ipv6Bytes); i < IPv6len; i++ {
+			if err := binary.Write(buf, binary.BigEndian, byte(0)); err != nil {
+				panic(fmt.Sprintf("Unable to pad byte %d of input %v: %v", i, bi, err))
+			}
+		}
+
+		for _, b := range ipv6Bytes {
+			if err := binary.Write(buf, binary.BigEndian, b); err != nil {
+				panic(fmt.Sprintf("Unable to preserve endianness of input %v: %v", bi, err))
+			}
+		}
+
+		ipv6Bytes = buf.Bytes()
+	}
+	i := copy(x, ipv6Bytes)
+	if i != IPv6len {
+		panic("IPv6 wrong size")
+	}
+	return &x
+}
diff --git a/vendor/github.com/hashicorp/go-sockaddr/rfc.go b/vendor/github.com/hashicorp/go-sockaddr/rfc.go
new file mode 100644
index 0000000000..fd9be940b1
--- /dev/null
+++ b/vendor/github.com/hashicorp/go-sockaddr/rfc.go
@@ -0,0 +1,947 @@
+package sockaddr
+
+// ForwardingBlacklist is a faux RFC that includes a list of non-forwardable IP
+// blocks.
+const ForwardingBlacklist = 4294967295
+
+// IsRFC tests to see if a SockAddr matches the specified RFC
+func IsRFC(rfcNum uint, sa SockAddr) bool {
+	rfcNetMap := KnownRFCs()
+	rfcNets, ok := rfcNetMap[rfcNum]
+	if !ok {
+		return false
+	}
+
+	var contained bool
+	for _, rfcNet := range rfcNets {
+		if rfcNet.Contains(sa) {
+			contained = true
+			break
+		}
+	}
+	return contained
+}
+
+// KnownRFCs returns an initial set of known RFCs.
+//
+// NOTE (sean@): As this list evolves over time, please submit patches to keep
+// this list current. If something isn't right, inquire, as it may just be a
+// bug on my part. Some of the inclusions were based on my judgement as to what
+// would be a useful value (e.g. RFC3330).
+//
+// Useful resources:
+//
+// * https://www.iana.org/assignments/ipv6-address-space/ipv6-address-space.xhtml
+// * https://www.iana.org/assignments/ipv6-unicast-address-assignments/ipv6-unicast-address-assignments.xhtml
+func KnownRFCs() map[uint]SockAddrs {
+	// NOTE(sean@): Multiple SockAddrs per RFC lend themselves well to a
+	// RADIX tree, but `ENOTIME`. Patches welcome.
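+	// (IsRFC above scans each of these slices linearly via Contains.)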
+ return map[uint]SockAddrs{ + 919: { + // [RFC919] Broadcasting Internet Datagrams + MustIPv4Addr("255.255.255.255/32"), // [RFC1122], §7 Broadcast IP Addressing - Proposed Standards + }, + 1122: { + // [RFC1122] Requirements for Internet Hosts -- Communication Layers + MustIPv4Addr("0.0.0.0/8"), // [RFC1122], §3.2.1.3 + MustIPv4Addr("127.0.0.0/8"), // [RFC1122], §3.2.1.3 + }, + 1112: { + // [RFC1112] Host Extensions for IP Multicasting + MustIPv4Addr("224.0.0.0/4"), // [RFC1112], §4 Host Group Addresses + }, + 1918: { + // [RFC1918] Address Allocation for Private Internets + MustIPv4Addr("10.0.0.0/8"), + MustIPv4Addr("172.16.0.0/12"), + MustIPv4Addr("192.168.0.0/16"), + }, + 2544: { + // [RFC2544] Benchmarking Methodology for Network + // Interconnect Devices + MustIPv4Addr("198.18.0.0/15"), + }, + 2765: { + // [RFC2765] Stateless IP/ICMP Translation Algorithm + // (SIIT) (obsoleted by RFCs 6145, which itself was + // later obsoleted by 7915). + + // [RFC2765], §2.1 Addresses + MustIPv6Addr("0:0:0:0:0:ffff:0:0/96"), + }, + 2928: { + // [RFC2928] Initial IPv6 Sub-TLA ID Assignments + MustIPv6Addr("2001::/16"), // Superblock + //MustIPv6Addr("2001:0000::/23"), // IANA + //MustIPv6Addr("2001:0200::/23"), // APNIC + //MustIPv6Addr("2001:0400::/23"), // ARIN + //MustIPv6Addr("2001:0600::/23"), // RIPE NCC + //MustIPv6Addr("2001:0800::/23"), // (future assignment) + // ... + //MustIPv6Addr("2001:FE00::/23"), // (future assignment) + }, + 3056: { // 6to4 address + // [RFC3056] Connection of IPv6 Domains via IPv4 Clouds + + // [RFC3056], §2 IPv6 Prefix Allocation + MustIPv6Addr("2002::/16"), + }, + 3068: { + // [RFC3068] An Anycast Prefix for 6to4 Relay Routers + // (obsolete by RFC7526) + + // [RFC3068], § 6to4 Relay anycast address + MustIPv4Addr("192.88.99.0/24"), + + // [RFC3068], §2.5 6to4 IPv6 relay anycast address + // + // NOTE: /120 == 128-(32-24) + MustIPv6Addr("2002:c058:6301::/120"), + }, + 3171: { + // [RFC3171] IANA Guidelines for IPv4 Multicast Address Assignments + MustIPv4Addr("224.0.0.0/4"), + }, + 3330: { + // [RFC3330] Special-Use IPv4 Addresses + + // Addresses in this block refer to source hosts on + // "this" network. Address 0.0.0.0/32 may be used as a + // source address for this host on this network; other + // addresses within 0.0.0.0/8 may be used to refer to + // specified hosts on this network [RFC1700, page 4]. + MustIPv4Addr("0.0.0.0/8"), + + // 10.0.0.0/8 - This block is set aside for use in + // private networks. Its intended use is documented in + // [RFC1918]. Addresses within this block should not + // appear on the public Internet. + MustIPv4Addr("10.0.0.0/8"), + + // 14.0.0.0/8 - This block is set aside for assignments + // to the international system of Public Data Networks + // [RFC1700, page 181]. The registry of assignments + // within this block can be accessed from the "Public + // Data Network Numbers" link on the web page at + // http://www.iana.org/numbers.html. Addresses within + // this block are assigned to users and should be + // treated as such. + + // 24.0.0.0/8 - This block was allocated in early 1996 + // for use in provisioning IP service over cable + // television systems. Although the IANA initially was + // involved in making assignments to cable operators, + // this responsibility was transferred to American + // Registry for Internet Numbers (ARIN) in May 2001. + // Addresses within this block are assigned in the + // normal manner and should be treated as such. 
+ + // 39.0.0.0/8 - This block was used in the "Class A + // Subnet Experiment" that commenced in May 1995, as + // documented in [RFC1797]. The experiment has been + // completed and this block has been returned to the + // pool of addresses reserved for future allocation or + // assignment. This block therefore no longer has a + // special use and is subject to allocation to a + // Regional Internet Registry for assignment in the + // normal manner. + + // 127.0.0.0/8 - This block is assigned for use as the Internet host + // loopback address. A datagram sent by a higher level protocol to an + // address anywhere within this block should loop back inside the host. + // This is ordinarily implemented using only 127.0.0.1/32 for loopback, + // but no addresses within this block should ever appear on any network + // anywhere [RFC1700, page 5]. + MustIPv4Addr("127.0.0.0/8"), + + // 128.0.0.0/16 - This block, corresponding to the + // numerically lowest of the former Class B addresses, + // was initially and is still reserved by the IANA. + // Given the present classless nature of the IP address + // space, the basis for the reservation no longer + // applies and addresses in this block are subject to + // future allocation to a Regional Internet Registry for + // assignment in the normal manner. + + // 169.254.0.0/16 - This is the "link local" block. It + // is allocated for communication between hosts on a + // single link. Hosts obtain these addresses by + // auto-configuration, such as when a DHCP server may + // not be found. + MustIPv4Addr("169.254.0.0/16"), + + // 172.16.0.0/12 - This block is set aside for use in + // private networks. Its intended use is documented in + // [RFC1918]. Addresses within this block should not + // appear on the public Internet. + MustIPv4Addr("172.16.0.0/12"), + + // 191.255.0.0/16 - This block, corresponding to the numerically highest + // to the former Class B addresses, was initially and is still reserved + // by the IANA. Given the present classless nature of the IP address + // space, the basis for the reservation no longer applies and addresses + // in this block are subject to future allocation to a Regional Internet + // Registry for assignment in the normal manner. + + // 192.0.0.0/24 - This block, corresponding to the + // numerically lowest of the former Class C addresses, + // was initially and is still reserved by the IANA. + // Given the present classless nature of the IP address + // space, the basis for the reservation no longer + // applies and addresses in this block are subject to + // future allocation to a Regional Internet Registry for + // assignment in the normal manner. + + // 192.0.2.0/24 - This block is assigned as "TEST-NET" for use in + // documentation and example code. It is often used in conjunction with + // domain names example.com or example.net in vendor and protocol + // documentation. Addresses within this block should not appear on the + // public Internet. + MustIPv4Addr("192.0.2.0/24"), + + // 192.88.99.0/24 - This block is allocated for use as 6to4 relay + // anycast addresses, according to [RFC3068]. + MustIPv4Addr("192.88.99.0/24"), + + // 192.168.0.0/16 - This block is set aside for use in private networks. + // Its intended use is documented in [RFC1918]. Addresses within this + // block should not appear on the public Internet. + MustIPv4Addr("192.168.0.0/16"), + + // 198.18.0.0/15 - This block has been allocated for use + // in benchmark tests of network interconnect devices. 
+ // Its use is documented in [RFC2544]. + MustIPv4Addr("198.18.0.0/15"), + + // 223.255.255.0/24 - This block, corresponding to the + // numerically highest of the former Class C addresses, + // was initially and is still reserved by the IANA. + // Given the present classless nature of the IP address + // space, the basis for the reservation no longer + // applies and addresses in this block are subject to + // future allocation to a Regional Internet Registry for + // assignment in the normal manner. + + // 224.0.0.0/4 - This block, formerly known as the Class + // D address space, is allocated for use in IPv4 + // multicast address assignments. The IANA guidelines + // for assignments from this space are described in + // [RFC3171]. + MustIPv4Addr("224.0.0.0/4"), + + // 240.0.0.0/4 - This block, formerly known as the Class E address + // space, is reserved. The "limited broadcast" destination address + // 255.255.255.255 should never be forwarded outside the (sub-)net of + // the source. The remainder of this space is reserved + // for future use. [RFC1700, page 4] + MustIPv4Addr("240.0.0.0/4"), + }, + 3849: { + // [RFC3849] IPv6 Address Prefix Reserved for Documentation + MustIPv6Addr("2001:db8::/32"), // [RFC3849], §4 IANA Considerations + }, + 3927: { + // [RFC3927] Dynamic Configuration of IPv4 Link-Local Addresses + MustIPv4Addr("169.254.0.0/16"), // [RFC3927], §2.1 Link-Local Address Selection + }, + 4038: { + // [RFC4038] Application Aspects of IPv6 Transition + + // [RFC4038], §4.2. IPv6 Applications in a Dual-Stack Node + MustIPv6Addr("0:0:0:0:0:ffff::/96"), + }, + 4193: { + // [RFC4193] Unique Local IPv6 Unicast Addresses + MustIPv6Addr("fc00::/7"), + }, + 4291: { + // [RFC4291] IP Version 6 Addressing Architecture + + // [RFC4291], §2.5.2 The Unspecified Address + MustIPv6Addr("::/128"), + + // [RFC4291], §2.5.3 The Loopback Address + MustIPv6Addr("::1/128"), + + // [RFC4291], §2.5.5.1. IPv4-Compatible IPv6 Address + MustIPv6Addr("::/96"), + + // [RFC4291], §2.5.5.2. IPv4-Mapped IPv6 Address + MustIPv6Addr("::ffff:0:0/96"), + + // [RFC4291], §2.5.6 Link-Local IPv6 Unicast Addresses + MustIPv6Addr("fe80::/10"), + + // [RFC4291], §2.5.7 Site-Local IPv6 Unicast Addresses + // (depreciated) + MustIPv6Addr("fec0::/10"), + + // [RFC4291], §2.7 Multicast Addresses + MustIPv6Addr("ff00::/8"), + + // IPv6 Multicast Information. + // + // In the following "table" below, `ff0x` is replaced + // with the following values depending on the scope of + // the query: + // + // IPv6 Multicast Scopes: + // * ff00/9 // reserved + // * ff01/9 // interface-local + // * ff02/9 // link-local + // * ff03/9 // realm-local + // * ff04/9 // admin-local + // * ff05/9 // site-local + // * ff08/9 // organization-local + // * ff0e/9 // global + // * ff0f/9 // reserved + // + // IPv6 Multicast Addresses: + // * ff0x::2 // All routers + // * ff02::5 // OSPFIGP + // * ff02::6 // OSPFIGP Designated Routers + // * ff02::9 // RIP Routers + // * ff02::a // EIGRP Routers + // * ff02::d // All PIM Routers + // * ff02::1a // All RPL Routers + // * ff0x::fb // mDNSv6 + // * ff0x::101 // All Network Time Protocol (NTP) servers + // * ff02::1:1 // Link Name + // * ff02::1:2 // All-dhcp-agents + // * ff02::1:3 // Link-local Multicast Name Resolution + // * ff05::1:3 // All-dhcp-servers + // * ff02::1:ff00:0/104 // Solicited-node multicast address. 
+ // * ff02::2:ff00:0/104 // Node Information Queries + }, + 4380: { + // [RFC4380] Teredo: Tunneling IPv6 over UDP through + // Network Address Translations (NATs) + + // [RFC4380], §2.6 Global Teredo IPv6 Service Prefix + MustIPv6Addr("2001:0000::/32"), + }, + 4773: { + // [RFC4773] Administration of the IANA Special Purpose IPv6 Address Block + MustIPv6Addr("2001:0000::/23"), // IANA + }, + 4843: { + // [RFC4843] An IPv6 Prefix for Overlay Routable Cryptographic Hash Identifiers (ORCHID) + MustIPv6Addr("2001:10::/28"), // [RFC4843], §7 IANA Considerations + }, + 5180: { + // [RFC5180] IPv6 Benchmarking Methodology for Network Interconnect Devices + MustIPv6Addr("2001:0200::/48"), // [RFC5180], §8 IANA Considerations + }, + 5735: { + // [RFC5735] Special Use IPv4 Addresses + MustIPv4Addr("192.0.2.0/24"), // TEST-NET-1 + MustIPv4Addr("198.51.100.0/24"), // TEST-NET-2 + MustIPv4Addr("203.0.113.0/24"), // TEST-NET-3 + MustIPv4Addr("198.18.0.0/15"), // Benchmarks + }, + 5737: { + // [RFC5737] IPv4 Address Blocks Reserved for Documentation + MustIPv4Addr("192.0.2.0/24"), // TEST-NET-1 + MustIPv4Addr("198.51.100.0/24"), // TEST-NET-2 + MustIPv4Addr("203.0.113.0/24"), // TEST-NET-3 + }, + 6052: { + // [RFC6052] IPv6 Addressing of IPv4/IPv6 Translators + MustIPv6Addr("64:ff9b::/96"), // [RFC6052], §2.1. Well-Known Prefix + }, + 6333: { + // [RFC6333] Dual-Stack Lite Broadband Deployments Following IPv4 Exhaustion + MustIPv4Addr("192.0.0.0/29"), // [RFC6333], §5.7 Well-Known IPv4 Address + }, + 6598: { + // [RFC6598] IANA-Reserved IPv4 Prefix for Shared Address Space + MustIPv4Addr("100.64.0.0/10"), + }, + 6666: { + // [RFC6666] A Discard Prefix for IPv6 + MustIPv6Addr("0100::/64"), + }, + 6890: { + // [RFC6890] Special-Purpose IP Address Registries + + // From "RFC6890 §2.2.1 Information Requirements": + /* + The IPv4 and IPv6 Special-Purpose Address Registries maintain the + following information regarding each entry: + + o Address Block - A block of IPv4 or IPv6 addresses that has been + registered for a special purpose. + + o Name - A descriptive name for the special-purpose address block. + + o RFC - The RFC through which the special-purpose address block was + requested. + + o Allocation Date - The date upon which the special-purpose address + block was allocated. + + o Termination Date - The date upon which the allocation is to be + terminated. This field is applicable for limited-use allocations + only. + + o Source - A boolean value indicating whether an address from the + allocated special-purpose address block is valid when used as the + source address of an IP datagram that transits two devices. + + o Destination - A boolean value indicating whether an address from + the allocated special-purpose address block is valid when used as + the destination address of an IP datagram that transits two + devices. + + o Forwardable - A boolean value indicating whether a router may + forward an IP datagram whose destination address is drawn from the + allocated special-purpose address block between external + interfaces. + + o Global - A boolean value indicating whether an IP datagram whose + destination address is drawn from the allocated special-purpose + address block is forwardable beyond a specified administrative + domain. + + o Reserved-by-Protocol - A boolean value indicating whether the + special-purpose address block is reserved by IP, itself. 
This + value is "TRUE" if the RFC that created the special-purpose + address block requires all compliant IP implementations to behave + in a special way when processing packets either to or from + addresses contained by the address block. + + If the value of "Destination" is FALSE, the values of "Forwardable" + and "Global" must also be false. + */ + + /*+----------------------+----------------------------+ + * | Attribute | Value | + * +----------------------+----------------------------+ + * | Address Block | 0.0.0.0/8 | + * | Name | "This host on this network"| + * | RFC | [RFC1122], Section 3.2.1.3 | + * | Allocation Date | September 1981 | + * | Termination Date | N/A | + * | Source | True | + * | Destination | False | + * | Forwardable | False | + * | Global | False | + * | Reserved-by-Protocol | True | + * +----------------------+----------------------------+*/ + MustIPv4Addr("0.0.0.0/8"), + + /*+----------------------+---------------+ + * | Attribute | Value | + * +----------------------+---------------+ + * | Address Block | 10.0.0.0/8 | + * | Name | Private-Use | + * | RFC | [RFC1918] | + * | Allocation Date | February 1996 | + * | Termination Date | N/A | + * | Source | True | + * | Destination | True | + * | Forwardable | True | + * | Global | False | + * | Reserved-by-Protocol | False | + * +----------------------+---------------+ */ + MustIPv4Addr("10.0.0.0/8"), + + /*+----------------------+----------------------+ + | Attribute | Value | + +----------------------+----------------------+ + | Address Block | 100.64.0.0/10 | + | Name | Shared Address Space | + | RFC | [RFC6598] | + | Allocation Date | April 2012 | + | Termination Date | N/A | + | Source | True | + | Destination | True | + | Forwardable | True | + | Global | False | + | Reserved-by-Protocol | False | + +----------------------+----------------------+*/ + MustIPv4Addr("100.64.0.0/10"), + + /*+----------------------+----------------------------+ + | Attribute | Value | + +----------------------+----------------------------+ + | Address Block | 127.0.0.0/8 | + | Name | Loopback | + | RFC | [RFC1122], Section 3.2.1.3 | + | Allocation Date | September 1981 | + | Termination Date | N/A | + | Source | False [1] | + | Destination | False [1] | + | Forwardable | False [1] | + | Global | False [1] | + | Reserved-by-Protocol | True | + +----------------------+----------------------------+*/ + // [1] Several protocols have been granted exceptions to + // this rule. For examples, see [RFC4379] and + // [RFC5884]. 
+ MustIPv4Addr("127.0.0.0/8"), + + /*+----------------------+----------------+ + | Attribute | Value | + +----------------------+----------------+ + | Address Block | 169.254.0.0/16 | + | Name | Link Local | + | RFC | [RFC3927] | + | Allocation Date | May 2005 | + | Termination Date | N/A | + | Source | True | + | Destination | True | + | Forwardable | False | + | Global | False | + | Reserved-by-Protocol | True | + +----------------------+----------------+*/ + MustIPv4Addr("169.254.0.0/16"), + + /*+----------------------+---------------+ + | Attribute | Value | + +----------------------+---------------+ + | Address Block | 172.16.0.0/12 | + | Name | Private-Use | + | RFC | [RFC1918] | + | Allocation Date | February 1996 | + | Termination Date | N/A | + | Source | True | + | Destination | True | + | Forwardable | True | + | Global | False | + | Reserved-by-Protocol | False | + +----------------------+---------------+*/ + MustIPv4Addr("172.16.0.0/12"), + + /*+----------------------+---------------------------------+ + | Attribute | Value | + +----------------------+---------------------------------+ + | Address Block | 192.0.0.0/24 [2] | + | Name | IETF Protocol Assignments | + | RFC | Section 2.1 of this document | + | Allocation Date | January 2010 | + | Termination Date | N/A | + | Source | False | + | Destination | False | + | Forwardable | False | + | Global | False | + | Reserved-by-Protocol | False | + +----------------------+---------------------------------+*/ + // [2] Not usable unless by virtue of a more specific + // reservation. + MustIPv4Addr("192.0.0.0/24"), + + /*+----------------------+--------------------------------+ + | Attribute | Value | + +----------------------+--------------------------------+ + | Address Block | 192.0.0.0/29 | + | Name | IPv4 Service Continuity Prefix | + | RFC | [RFC6333], [RFC7335] | + | Allocation Date | June 2011 | + | Termination Date | N/A | + | Source | True | + | Destination | True | + | Forwardable | True | + | Global | False | + | Reserved-by-Protocol | False | + +----------------------+--------------------------------+*/ + MustIPv4Addr("192.0.0.0/29"), + + /*+----------------------+----------------------------+ + | Attribute | Value | + +----------------------+----------------------------+ + | Address Block | 192.0.2.0/24 | + | Name | Documentation (TEST-NET-1) | + | RFC | [RFC5737] | + | Allocation Date | January 2010 | + | Termination Date | N/A | + | Source | False | + | Destination | False | + | Forwardable | False | + | Global | False | + | Reserved-by-Protocol | False | + +----------------------+----------------------------+*/ + MustIPv4Addr("192.0.2.0/24"), + + /*+----------------------+--------------------+ + | Attribute | Value | + +----------------------+--------------------+ + | Address Block | 192.88.99.0/24 | + | Name | 6to4 Relay Anycast | + | RFC | [RFC3068] | + | Allocation Date | June 2001 | + | Termination Date | N/A | + | Source | True | + | Destination | True | + | Forwardable | True | + | Global | True | + | Reserved-by-Protocol | False | + +----------------------+--------------------+*/ + MustIPv4Addr("192.88.99.0/24"), + + /*+----------------------+----------------+ + | Attribute | Value | + +----------------------+----------------+ + | Address Block | 192.168.0.0/16 | + | Name | Private-Use | + | RFC | [RFC1918] | + | Allocation Date | February 1996 | + | Termination Date | N/A | + | Source | True | + | Destination | True | + | Forwardable | True | + | Global | False | + | Reserved-by-Protocol | False | + 
+----------------------+----------------+*/ + MustIPv4Addr("192.168.0.0/16"), + + /*+----------------------+---------------+ + | Attribute | Value | + +----------------------+---------------+ + | Address Block | 198.18.0.0/15 | + | Name | Benchmarking | + | RFC | [RFC2544] | + | Allocation Date | March 1999 | + | Termination Date | N/A | + | Source | True | + | Destination | True | + | Forwardable | True | + | Global | False | + | Reserved-by-Protocol | False | + +----------------------+---------------+*/ + MustIPv4Addr("198.18.0.0/15"), + + /*+----------------------+----------------------------+ + | Attribute | Value | + +----------------------+----------------------------+ + | Address Block | 198.51.100.0/24 | + | Name | Documentation (TEST-NET-2) | + | RFC | [RFC5737] | + | Allocation Date | January 2010 | + | Termination Date | N/A | + | Source | False | + | Destination | False | + | Forwardable | False | + | Global | False | + | Reserved-by-Protocol | False | + +----------------------+----------------------------+*/ + MustIPv4Addr("198.51.100.0/24"), + + /*+----------------------+----------------------------+ + | Attribute | Value | + +----------------------+----------------------------+ + | Address Block | 203.0.113.0/24 | + | Name | Documentation (TEST-NET-3) | + | RFC | [RFC5737] | + | Allocation Date | January 2010 | + | Termination Date | N/A | + | Source | False | + | Destination | False | + | Forwardable | False | + | Global | False | + | Reserved-by-Protocol | False | + +----------------------+----------------------------+*/ + MustIPv4Addr("203.0.113.0/24"), + + /*+----------------------+----------------------+ + | Attribute | Value | + +----------------------+----------------------+ + | Address Block | 240.0.0.0/4 | + | Name | Reserved | + | RFC | [RFC1112], Section 4 | + | Allocation Date | August 1989 | + | Termination Date | N/A | + | Source | False | + | Destination | False | + | Forwardable | False | + | Global | False | + | Reserved-by-Protocol | True | + +----------------------+----------------------+*/ + MustIPv4Addr("240.0.0.0/4"), + + /*+----------------------+----------------------+ + | Attribute | Value | + +----------------------+----------------------+ + | Address Block | 255.255.255.255/32 | + | Name | Limited Broadcast | + | RFC | [RFC0919], Section 7 | + | Allocation Date | October 1984 | + | Termination Date | N/A | + | Source | False | + | Destination | True | + | Forwardable | False | + | Global | False | + | Reserved-by-Protocol | False | + +----------------------+----------------------+*/ + MustIPv4Addr("255.255.255.255/32"), + + /*+----------------------+------------------+ + | Attribute | Value | + +----------------------+------------------+ + | Address Block | ::1/128 | + | Name | Loopback Address | + | RFC | [RFC4291] | + | Allocation Date | February 2006 | + | Termination Date | N/A | + | Source | False | + | Destination | False | + | Forwardable | False | + | Global | False | + | Reserved-by-Protocol | True | + +----------------------+------------------+*/ + MustIPv6Addr("::1/128"), + + /*+----------------------+---------------------+ + | Attribute | Value | + +----------------------+---------------------+ + | Address Block | ::/128 | + | Name | Unspecified Address | + | RFC | [RFC4291] | + | Allocation Date | February 2006 | + | Termination Date | N/A | + | Source | True | + | Destination | False | + | Forwardable | False | + | Global | False | + | Reserved-by-Protocol | True | + +----------------------+---------------------+*/ + 
MustIPv6Addr("::/128"), + + /*+----------------------+---------------------+ + | Attribute | Value | + +----------------------+---------------------+ + | Address Block | 64:ff9b::/96 | + | Name | IPv4-IPv6 Translat. | + | RFC | [RFC6052] | + | Allocation Date | October 2010 | + | Termination Date | N/A | + | Source | True | + | Destination | True | + | Forwardable | True | + | Global | True | + | Reserved-by-Protocol | False | + +----------------------+---------------------+*/ + MustIPv6Addr("64:ff9b::/96"), + + /*+----------------------+---------------------+ + | Attribute | Value | + +----------------------+---------------------+ + | Address Block | ::ffff:0:0/96 | + | Name | IPv4-mapped Address | + | RFC | [RFC4291] | + | Allocation Date | February 2006 | + | Termination Date | N/A | + | Source | False | + | Destination | False | + | Forwardable | False | + | Global | False | + | Reserved-by-Protocol | True | + +----------------------+---------------------+*/ + MustIPv6Addr("::ffff:0:0/96"), + + /*+----------------------+----------------------------+ + | Attribute | Value | + +----------------------+----------------------------+ + | Address Block | 100::/64 | + | Name | Discard-Only Address Block | + | RFC | [RFC6666] | + | Allocation Date | June 2012 | + | Termination Date | N/A | + | Source | True | + | Destination | True | + | Forwardable | True | + | Global | False | + | Reserved-by-Protocol | False | + +----------------------+----------------------------+*/ + MustIPv6Addr("100::/64"), + + /*+----------------------+---------------------------+ + | Attribute | Value | + +----------------------+---------------------------+ + | Address Block | 2001::/23 | + | Name | IETF Protocol Assignments | + | RFC | [RFC2928] | + | Allocation Date | September 2000 | + | Termination Date | N/A | + | Source | False[1] | + | Destination | False[1] | + | Forwardable | False[1] | + | Global | False[1] | + | Reserved-by-Protocol | False | + +----------------------+---------------------------+*/ + // [1] Unless allowed by a more specific allocation. + MustIPv6Addr("2001::/16"), + + /*+----------------------+----------------+ + | Attribute | Value | + +----------------------+----------------+ + | Address Block | 2001::/32 | + | Name | TEREDO | + | RFC | [RFC4380] | + | Allocation Date | January 2006 | + | Termination Date | N/A | + | Source | True | + | Destination | True | + | Forwardable | True | + | Global | False | + | Reserved-by-Protocol | False | + +----------------------+----------------+*/ + // Covered by previous entry, included for completeness. + // + // MustIPv6Addr("2001::/16"), + + /*+----------------------+----------------+ + | Attribute | Value | + +----------------------+----------------+ + | Address Block | 2001:2::/48 | + | Name | Benchmarking | + | RFC | [RFC5180] | + | Allocation Date | April 2008 | + | Termination Date | N/A | + | Source | True | + | Destination | True | + | Forwardable | True | + | Global | False | + | Reserved-by-Protocol | False | + +----------------------+----------------+*/ + // Covered by previous entry, included for completeness. 
+			//
+			// MustIPv6Addr("2001:2::/48"),
+
+			/*+----------------------+---------------+
+			  | Attribute            | Value         |
+			  +----------------------+---------------+
+			  | Address Block        | 2001:db8::/32 |
+			  | Name                 | Documentation |
+			  | RFC                  | [RFC3849]     |
+			  | Allocation Date      | July 2004     |
+			  | Termination Date     | N/A           |
+			  | Source               | False         |
+			  | Destination          | False         |
+			  | Forwardable          | False         |
+			  | Global               | False         |
+			  | Reserved-by-Protocol | False         |
+			  +----------------------+---------------+*/
+			// Covered by previous entry, included for completeness.
+			//
+			// MustIPv6Addr("2001:db8::/32"),
+
+			/*+----------------------+--------------+
+			  | Attribute            | Value        |
+			  +----------------------+--------------+
+			  | Address Block        | 2001:10::/28 |
+			  | Name                 | ORCHID       |
+			  | RFC                  | [RFC4843]    |
+			  | Allocation Date      | March 2007   |
+			  | Termination Date     | March 2014   |
+			  | Source               | False        |
+			  | Destination          | False        |
+			  | Forwardable          | False        |
+			  | Global               | False        |
+			  | Reserved-by-Protocol | False        |
+			  +----------------------+--------------+*/
+			// Covered by previous entry, included for completeness.
+			//
+			// MustIPv6Addr("2001:10::/28"),
+
+			/*+----------------------+---------------+
+			  | Attribute            | Value         |
+			  +----------------------+---------------+
+			  | Address Block        | 2002::/16 [2] |
+			  | Name                 | 6to4          |
+			  | RFC                  | [RFC3056]     |
+			  | Allocation Date      | February 2001 |
+			  | Termination Date     | N/A           |
+			  | Source               | True          |
+			  | Destination          | True          |
+			  | Forwardable          | True          |
+			  | Global               | N/A [2]       |
+			  | Reserved-by-Protocol | False         |
+			  +----------------------+---------------+*/
+			// [2] See [RFC3056] for details.
+			MustIPv6Addr("2002::/16"),
+
+			/*+----------------------+--------------+
+			  | Attribute            | Value        |
+			  +----------------------+--------------+
+			  | Address Block        | fc00::/7     |
+			  | Name                 | Unique-Local |
+			  | RFC                  | [RFC4193]    |
+			  | Allocation Date      | October 2005 |
+			  | Termination Date     | N/A          |
+			  | Source               | True         |
+			  | Destination          | True         |
+			  | Forwardable          | True         |
+			  | Global               | False        |
+			  | Reserved-by-Protocol | False        |
+			  +----------------------+--------------+*/
+			MustIPv6Addr("fc00::/7"),
+
+			/*+----------------------+-----------------------+
+			  | Attribute            | Value                 |
+			  +----------------------+-----------------------+
+			  | Address Block        | fe80::/10             |
+			  | Name                 | Linked-Scoped Unicast |
+			  | RFC                  | [RFC4291]             |
+			  | Allocation Date      | February 2006         |
+			  | Termination Date     | N/A                   |
+			  | Source               | True                  |
+			  | Destination          | True                  |
+			  | Forwardable          | False                 |
+			  | Global               | False                 |
+			  | Reserved-by-Protocol | True                  |
+			  +----------------------+-----------------------+*/
+			MustIPv6Addr("fe80::/10"),
+		},
+		7335: {
+			// [RFC7335] IPv4 Service Continuity Prefix
+			MustIPv4Addr("192.0.0.0/29"), // [RFC7335], §6 IANA Considerations
+		},
+		ForwardingBlacklist: { // Pseudo-RFC
+			// Blacklist of non-forwardable IP blocks taken from RFC6890
+			//
+			// TODO: the attributes for forwardable should be
+			// searchable and embedded in the main list of RFCs
+			// above.
+			MustIPv4Addr("0.0.0.0/8"),
+			MustIPv4Addr("127.0.0.0/8"),
+			MustIPv4Addr("169.254.0.0/16"),
+			MustIPv4Addr("192.0.0.0/24"),
+			MustIPv4Addr("192.0.2.0/24"),
+			MustIPv4Addr("198.51.100.0/24"),
+			MustIPv4Addr("203.0.113.0/24"),
+			MustIPv4Addr("240.0.0.0/4"),
+			MustIPv4Addr("255.255.255.255/32"),
+			MustIPv6Addr("::1/128"),
+			MustIPv6Addr("::/128"),
+			MustIPv6Addr("::ffff:0:0/96"),
+
+			// There is no way of expressing a whitelist per RFC2928
+			// at the moment without creating a negative mask, which
+			// I don't want to do yet.
+			// MustIPv6Addr("2001::/23"),
+
+			MustIPv6Addr("2001:db8::/32"),
+			MustIPv6Addr("2001:10::/28"),
+			MustIPv6Addr("fe80::/10"),
+		},
+	}
+}
+
+// VisitAllRFCs iterates over all known RFCs and calls the visitor function fn
+// for each one.
+func VisitAllRFCs(fn func(rfcNum uint, sockaddrs SockAddrs)) {
+	rfcNetMap := KnownRFCs()
+
+	// Blacklist of faux-RFCs. Don't show the world that we're abusing the
+	// RFC system in this library.
+	rfcBlacklist := map[uint]struct{}{
+		ForwardingBlacklist: {},
+	}
+
+	for rfcNum, sas := range rfcNetMap {
+		if _, found := rfcBlacklist[rfcNum]; !found {
+			fn(rfcNum, sas)
+		}
+	}
+}
diff --git a/vendor/github.com/hashicorp/go-sockaddr/route_info.go b/vendor/github.com/hashicorp/go-sockaddr/route_info.go
new file mode 100644
index 0000000000..2a3ee1db9e
--- /dev/null
+++ b/vendor/github.com/hashicorp/go-sockaddr/route_info.go
@@ -0,0 +1,19 @@
+package sockaddr
+
+// RouteInterface specifies an interface for obtaining memoized route table and
+// network information from a given OS.
+type RouteInterface interface {
+	// GetDefaultInterfaceName returns the name of the interface that has a
+	// default route, or an error and an empty string if a problem was
+	// encountered.
+	GetDefaultInterfaceName() (string, error)
+}
+
+// VisitCommands visits each command used by the platform-specific RouteInfo
+// implementation.
+func (ri routeInfo) VisitCommands(fn func(name string, cmd []string)) {
+	for k, v := range ri.cmds {
+		cmds := append([]string(nil), v...)
+		fn(k, cmds)
+	}
+}
diff --git a/vendor/github.com/hashicorp/go-sockaddr/route_info_bsd.go b/vendor/github.com/hashicorp/go-sockaddr/route_info_bsd.go
new file mode 100644
index 0000000000..705757abc7
--- /dev/null
+++ b/vendor/github.com/hashicorp/go-sockaddr/route_info_bsd.go
@@ -0,0 +1,36 @@
+// +build darwin dragonfly freebsd netbsd openbsd
+
+package sockaddr
+
+import "os/exec"
+
+var cmds map[string][]string = map[string][]string{
+	"route": {"/sbin/route", "-n", "get", "default"},
+}
+
+type routeInfo struct {
+	cmds map[string][]string
+}
+
+// NewRouteInfo returns a BSD-specific implementation of the RouteInfo
+// interface.
+func NewRouteInfo() (routeInfo, error) {
+	return routeInfo{
+		cmds: cmds,
+	}, nil
+}
+
+// GetDefaultInterfaceName returns the interface name attached to the default
+// route on the default interface.
+func (ri routeInfo) GetDefaultInterfaceName() (string, error) {
+	out, err := exec.Command(cmds["route"][0], cmds["route"][1:]...).Output()
+	if err != nil {
+		return "", err
+	}
+
+	var ifName string
+	if ifName, err = parseDefaultIfNameFromRoute(string(out)); err != nil {
+		return "", err
+	}
+	return ifName, nil
+}
diff --git a/vendor/github.com/hashicorp/go-sockaddr/route_info_default.go b/vendor/github.com/hashicorp/go-sockaddr/route_info_default.go
new file mode 100644
index 0000000000..d1b009f653
--- /dev/null
+++ b/vendor/github.com/hashicorp/go-sockaddr/route_info_default.go
@@ -0,0 +1,10 @@
+// +build android nacl plan9
+
+package sockaddr
+
+import "errors"
+
+// getDefaultIfName is the default interface function for unsupported platforms.
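+// It always fails; a minimal sketch of what callers can expect on these
+// platforms:
+//
+//	if _, err := getDefaultIfName(); err != nil {
+//		// fall back to explicit interface configuration
+//	}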
+func getDefaultIfName() (string, error) {
+	return "", errors.New("No default interface found (unsupported platform)")
+}
diff --git a/vendor/github.com/hashicorp/go-sockaddr/route_info_linux.go b/vendor/github.com/hashicorp/go-sockaddr/route_info_linux.go
new file mode 100644
index 0000000000..b33e4c0d08
--- /dev/null
+++ b/vendor/github.com/hashicorp/go-sockaddr/route_info_linux.go
@@ -0,0 +1,37 @@
+package sockaddr
+
+import (
+	"errors"
+	"os/exec"
+)
+
+var cmds map[string][]string = map[string][]string{
+	"ip": {"/sbin/ip", "route"},
+}
+
+type routeInfo struct {
+	cmds map[string][]string
+}
+
+// NewRouteInfo returns a Linux-specific implementation of the RouteInfo
+// interface.
+func NewRouteInfo() (routeInfo, error) {
+	return routeInfo{
+		cmds: cmds,
+	}, nil
+}
+
+// GetDefaultInterfaceName returns the interface name attached to the default
+// route on the default interface.
+func (ri routeInfo) GetDefaultInterfaceName() (string, error) {
+	out, err := exec.Command(cmds["ip"][0], cmds["ip"][1:]...).Output()
+	if err != nil {
+		return "", err
+	}
+
+	var ifName string
+	if ifName, err = parseDefaultIfNameFromIPCmd(string(out)); err != nil {
+		return "", errors.New("No default interface found")
+	}
+	return ifName, nil
+}
diff --git a/vendor/github.com/hashicorp/go-sockaddr/route_info_solaris.go b/vendor/github.com/hashicorp/go-sockaddr/route_info_solaris.go
new file mode 100644
index 0000000000..ee8e7984d7
--- /dev/null
+++ b/vendor/github.com/hashicorp/go-sockaddr/route_info_solaris.go
@@ -0,0 +1,37 @@
+package sockaddr
+
+import (
+	"errors"
+	"os/exec"
+)
+
+var cmds map[string][]string = map[string][]string{
+	"route": {"/usr/sbin/route", "-n", "get", "default"},
+}
+
+type routeInfo struct {
+	cmds map[string][]string
+}
+
+// NewRouteInfo returns a Solaris-specific implementation of the RouteInfo
+// interface.
+func NewRouteInfo() (routeInfo, error) {
+	return routeInfo{
+		cmds: cmds,
+	}, nil
+}
+
+// GetDefaultInterfaceName returns the interface name attached to the default
+// route on the default interface.
+func (ri routeInfo) GetDefaultInterfaceName() (string, error) {
+	out, err := exec.Command(cmds["route"][0], cmds["route"][1:]...).Output()
+	if err != nil {
+		return "", err
+	}
+
+	var ifName string
+	if ifName, err = parseDefaultIfNameFromRoute(string(out)); err != nil {
+		return "", errors.New("No default interface found")
+	}
+	return ifName, nil
+}
diff --git a/vendor/github.com/hashicorp/go-sockaddr/route_info_windows.go b/vendor/github.com/hashicorp/go-sockaddr/route_info_windows.go
new file mode 100644
index 0000000000..3da972883e
--- /dev/null
+++ b/vendor/github.com/hashicorp/go-sockaddr/route_info_windows.go
@@ -0,0 +1,41 @@
+package sockaddr
+
+import "os/exec"
+
+var cmds map[string][]string = map[string][]string{
+	"netstat":  {"netstat", "-rn"},
+	"ipconfig": {"ipconfig"},
+}
+
+type routeInfo struct {
+	cmds map[string][]string
+}
+
+// NewRouteInfo returns a Windows-specific implementation of the RouteInfo
+// interface.
+func NewRouteInfo() (routeInfo, error) {
+	return routeInfo{
+		cmds: cmds,
+	}, nil
+}
+
+// GetDefaultInterfaceName returns the interface name attached to the default
+// route on the default interface.
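+// On Windows this is done by shelling out to the `netstat -rn` and `ipconfig`
+// commands listed in cmds above and parsing their combined output; a rough
+// usage sketch:
+//
+//	ri, _ := NewRouteInfo()
+//	ifName, err := ri.GetDefaultInterfaceName()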
+func (ri routeInfo) GetDefaultInterfaceName() (string, error) { + ifNameOut, err := exec.Command(cmds["netstat"][0], cmds["netstat"][1:]...).Output() + if err != nil { + return "", err + } + + ipconfigOut, err := exec.Command(cmds["ipconfig"][0], cmds["ipconfig"][1:]...).Output() + if err != nil { + return "", err + } + + ifName, err := parseDefaultIfNameWindows(string(ifNameOut), string(ipconfigOut)) + if err != nil { + return "", err + } + + return ifName, nil +} diff --git a/vendor/github.com/hashicorp/go-sockaddr/sockaddr.go b/vendor/github.com/hashicorp/go-sockaddr/sockaddr.go new file mode 100644 index 0000000000..51389ebe9a --- /dev/null +++ b/vendor/github.com/hashicorp/go-sockaddr/sockaddr.go @@ -0,0 +1,178 @@ +package sockaddr + +import ( + "fmt" + "strings" +) + +type SockAddrType int +type AttrName string + +const ( + TypeUnknown SockAddrType = 0x0 + TypeUnix = 0x1 + TypeIPv4 = 0x2 + TypeIPv6 = 0x4 + + // TypeIP is the union of TypeIPv4 and TypeIPv6 + TypeIP = 0x6 +) + +type SockAddr interface { + // CmpRFC returns 0 if SockAddr exactly matches one of the matched RFC + // networks, -1 if the receiver is contained within the RFC network, or + // 1 if the address is not contained within the RFC. + CmpRFC(rfcNum uint, sa SockAddr) int + + // Contains returns true if the SockAddr arg is contained within the + // receiver + Contains(SockAddr) bool + + // Equal allows for the comparison of two SockAddrs + Equal(SockAddr) bool + + DialPacketArgs() (string, string) + DialStreamArgs() (string, string) + ListenPacketArgs() (string, string) + ListenStreamArgs() (string, string) + + // String returns the string representation of SockAddr + String() string + + // Type returns the SockAddrType + Type() SockAddrType +} + +// sockAddrAttrMap is a map of the SockAddr type-specific attributes. +var sockAddrAttrMap map[AttrName]func(SockAddr) string +var sockAddrAttrs []AttrName + +func init() { + sockAddrInit() +} + +// New creates a new SockAddr from the string. The order in which New() +// attempts to construct a SockAddr is: IPv4Addr, IPv6Addr, SockAddrUnix. +// +// NOTE: New() relies on the heuristic wherein if the path begins with either a +// '.' or '/' character before creating a new UnixSock. For UNIX sockets that +// are absolute paths or are nested within a sub-directory, this works as +// expected, however if the UNIX socket is contained in the current working +// directory, this will fail unless the path begins with "./" +// (e.g. "./my-local-socket"). Calls directly to NewUnixSock() do not suffer +// this limitation. Invalid IP addresses such as "256.0.0.0/-1" will run afoul +// of this heuristic and be assumed to be a valid UNIX socket path (which they +// are, but it is probably not what you want and you won't realize it until you +// stat(2) the file system to discover it doesn't exist). +func NewSockAddr(s string) (SockAddr, error) { + ipv4Addr, err := NewIPv4Addr(s) + if err == nil { + return ipv4Addr, nil + } + + ipv6Addr, err := NewIPv6Addr(s) + if err == nil { + return ipv6Addr, nil + } + + // Check to make sure the string begins with either a '.' or '/', or + // contains a '/'. + if len(s) > 1 && (strings.IndexAny(s[0:1], "./") != -1 || strings.IndexByte(s, '/') != -1) { + unixSock, err := NewUnixSock(s) + if err == nil { + return unixSock, nil + } + } + + return nil, fmt.Errorf("Unable to convert %q to an IPv4 or IPv6 address, or a UNIX Socket", s) +} + +// ToIPAddr returns an IPAddr type or nil if the type conversion fails. 
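+// For example, a sketch using this package's helpers:
+//
+//	sa := MustIPv4Addr("192.0.2.1/24")
+//	if ip := ToIPAddr(sa); ip != nil {
+//		// sa was an IP address (IPv4 in this case)
+//	}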
+func ToIPAddr(sa SockAddr) *IPAddr { + ipa, ok := sa.(IPAddr) + if !ok { + return nil + } + return &ipa +} + +// ToIPv4Addr returns an IPv4Addr type or nil if the type conversion fails. +func ToIPv4Addr(sa SockAddr) *IPv4Addr { + switch v := sa.(type) { + case IPv4Addr: + return &v + default: + return nil + } +} + +// ToIPv6Addr returns an IPv6Addr type or nil if the type conversion fails. +func ToIPv6Addr(sa SockAddr) *IPv6Addr { + switch v := sa.(type) { + case IPv6Addr: + return &v + default: + return nil + } +} + +// ToUnixSock returns a UnixSock type or nil if the type conversion fails. +func ToUnixSock(sa SockAddr) *UnixSock { + switch v := sa.(type) { + case UnixSock: + return &v + default: + return nil + } +} + +// SockAddrAttr returns a string representation of an attribute for the given +// SockAddr. +func SockAddrAttr(sa SockAddr, selector AttrName) string { + fn, found := sockAddrAttrMap[selector] + if !found { + return "" + } + + return fn(sa) +} + +// String() for SockAddrType returns a string representation of the +// SockAddrType (e.g. "IPv4", "IPv6", "UNIX", "IP", or "unknown"). +func (sat SockAddrType) String() string { + switch sat { + case TypeIPv4: + return "IPv4" + case TypeIPv6: + return "IPv6" + // There is no concrete "IP" type. Leaving here as a reminder. + // case TypeIP: + // return "IP" + case TypeUnix: + return "UNIX" + default: + panic("unsupported type") + } +} + +// sockAddrInit is called once at init() +func sockAddrInit() { + sockAddrAttrs = []AttrName{ + "type", // type should be first + "string", + } + + sockAddrAttrMap = map[AttrName]func(sa SockAddr) string{ + "string": func(sa SockAddr) string { + return sa.String() + }, + "type": func(sa SockAddr) string { + return sa.Type().String() + }, + } +} + +// UnixSockAttrs returns a list of attributes supported by the UnixSock type +func SockAddrAttrs() []AttrName { + return sockAddrAttrs +} diff --git a/vendor/github.com/hashicorp/go-sockaddr/sockaddrs.go b/vendor/github.com/hashicorp/go-sockaddr/sockaddrs.go new file mode 100644 index 0000000000..75fbffb1ea --- /dev/null +++ b/vendor/github.com/hashicorp/go-sockaddr/sockaddrs.go @@ -0,0 +1,193 @@ +package sockaddr + +import ( + "bytes" + "sort" +) + +// SockAddrs is a slice of SockAddrs +type SockAddrs []SockAddr + +func (s SockAddrs) Len() int { return len(s) } +func (s SockAddrs) Swap(i, j int) { s[i], s[j] = s[j], s[i] } + +// CmpAddrFunc is the function signature that must be met to be used in the +// OrderedAddrBy multiAddrSorter +type CmpAddrFunc func(p1, p2 *SockAddr) int + +// multiAddrSorter implements the Sort interface, sorting the SockAddrs within. +type multiAddrSorter struct { + addrs SockAddrs + cmp []CmpAddrFunc +} + +// Sort sorts the argument slice according to the Cmp functions passed to +// OrderedAddrBy. +func (ms *multiAddrSorter) Sort(sockAddrs SockAddrs) { + ms.addrs = sockAddrs + sort.Sort(ms) +} + +// OrderedAddrBy sorts SockAddr by the list of sort function pointers. +func OrderedAddrBy(cmpFuncs ...CmpAddrFunc) *multiAddrSorter { + return &multiAddrSorter{ + cmp: cmpFuncs, + } +} + +// Len is part of sort.Interface. +func (ms *multiAddrSorter) Len() int { + return len(ms.addrs) +} + +// Less is part of sort.Interface. It is implemented by looping along the +// Cmp() functions until it finds a comparison that is either less than, +// equal to, or greater than. +func (ms *multiAddrSorter) Less(i, j int) bool { + p, q := &ms.addrs[i], &ms.addrs[j] + // Try all but the last comparison. 
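+	// Each comparator returns -1, 0, or 1; the first non-zero answer
+	// decides the ordering, and ties fall through to the next comparator.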
+ var k int + for k = 0; k < len(ms.cmp)-1; k++ { + cmp := ms.cmp[k] + x := cmp(p, q) + switch x { + case -1: + // p < q, so we have a decision. + return true + case 1: + // p > q, so we have a decision. + return false + } + // p == q; try the next comparison. + } + // All comparisons to here said "equal", so just return whatever the + // final comparison reports. + switch ms.cmp[k](p, q) { + case -1: + return true + case 1: + return false + default: + // Still a tie! Now what? + return false + } +} + +// Swap is part of sort.Interface. +func (ms *multiAddrSorter) Swap(i, j int) { + ms.addrs[i], ms.addrs[j] = ms.addrs[j], ms.addrs[i] +} + +const ( + // NOTE (sean@): These constants are here for code readability only and + // are sprucing up the code for readability purposes. Some of the + // Cmp*() variants have confusing logic (especially when dealing with + // mixed-type comparisons) and this, I think, has made it easier to grok + // the code faster. + sortReceiverBeforeArg = -1 + sortDeferDecision = 0 + sortArgBeforeReceiver = 1 +) + +// AscAddress is a sorting function to sort SockAddrs by their respective +// address type. Non-equal types are deferred in the sort. +func AscAddress(p1Ptr, p2Ptr *SockAddr) int { + p1 := *p1Ptr + p2 := *p2Ptr + + switch v := p1.(type) { + case IPv4Addr: + return v.CmpAddress(p2) + case IPv6Addr: + return v.CmpAddress(p2) + case UnixSock: + return v.CmpAddress(p2) + default: + return sortDeferDecision + } +} + +// AscPort is a sorting function to sort SockAddrs by their respective address +// type. Non-equal types are deferred in the sort. +func AscPort(p1Ptr, p2Ptr *SockAddr) int { + p1 := *p1Ptr + p2 := *p2Ptr + + switch v := p1.(type) { + case IPv4Addr: + return v.CmpPort(p2) + case IPv6Addr: + return v.CmpPort(p2) + default: + return sortDeferDecision + } +} + +// AscPrivate is a sorting function to sort "more secure" private values before +// "more public" values. Both IPv4 and IPv6 are compared against RFC6890 +// (RFC6890 includes, and is not limited to, RFC1918 and RFC6598 for IPv4, and +// IPv6 includes RFC4193). +func AscPrivate(p1Ptr, p2Ptr *SockAddr) int { + p1 := *p1Ptr + p2 := *p2Ptr + + switch v := p1.(type) { + case IPv4Addr, IPv6Addr: + return v.CmpRFC(6890, p2) + default: + return sortDeferDecision + } +} + +// AscNetworkSize is a sorting function to sort SockAddrs based on their network +// size. Non-equal types are deferred in the sort. +func AscNetworkSize(p1Ptr, p2Ptr *SockAddr) int { + p1 := *p1Ptr + p2 := *p2Ptr + p1Type := p1.Type() + p2Type := p2.Type() + + // Network size operations on non-IP types make no sense + if p1Type != p2Type && p1Type != TypeIP { + return sortDeferDecision + } + + ipA := p1.(IPAddr) + ipB := p2.(IPAddr) + + return bytes.Compare([]byte(*ipA.NetIPMask()), []byte(*ipB.NetIPMask())) +} + +// AscType is a sorting function to sort "more secure" types before +// "less-secure" types. 
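+//
+// Typically composed with other comparators via OrderedAddrBy; a sketch:
+//
+//	OrderedAddrBy(AscType, AscAddress).Sort(addrs)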
+func AscType(p1Ptr, p2Ptr *SockAddr) int { + p1 := *p1Ptr + p2 := *p2Ptr + p1Type := p1.Type() + p2Type := p2.Type() + switch { + case p1Type < p2Type: + return sortReceiverBeforeArg + case p1Type == p2Type: + return sortDeferDecision + case p1Type > p2Type: + return sortArgBeforeReceiver + default: + return sortDeferDecision + } +} + +// FilterByType returns two lists: a list of matched and unmatched SockAddrs +func (sas SockAddrs) FilterByType(type_ SockAddrType) (matched, excluded SockAddrs) { + matched = make(SockAddrs, 0, len(sas)) + excluded = make(SockAddrs, 0, len(sas)) + + for _, sa := range sas { + if sa.Type()&type_ != 0 { + matched = append(matched, sa) + } else { + excluded = append(excluded, sa) + } + } + return matched, excluded +} diff --git a/vendor/github.com/hashicorp/go-sockaddr/unixsock.go b/vendor/github.com/hashicorp/go-sockaddr/unixsock.go new file mode 100644 index 0000000000..f3be3f67e7 --- /dev/null +++ b/vendor/github.com/hashicorp/go-sockaddr/unixsock.go @@ -0,0 +1,135 @@ +package sockaddr + +import ( + "fmt" + "strings" +) + +type UnixSock struct { + SockAddr + path string +} +type UnixSocks []*UnixSock + +// unixAttrMap is a map of the UnixSockAddr type-specific attributes. +var unixAttrMap map[AttrName]func(UnixSock) string +var unixAttrs []AttrName + +func init() { + unixAttrInit() +} + +// NewUnixSock creates an UnixSock from a string path. String can be in the +// form of either URI-based string (e.g. `file:///etc/passwd`), an absolute +// path (e.g. `/etc/passwd`), or a relative path (e.g. `./foo`). +func NewUnixSock(s string) (ret UnixSock, err error) { + ret.path = s + return ret, nil +} + +// CmpAddress follows the Cmp() standard protocol and returns: +// +// - -1 If the receiver should sort first because its name lexically sorts before arg +// - 0 if the SockAddr arg is not a UnixSock, or is a UnixSock with the same path. +// - 1 If the argument should sort first. +func (us UnixSock) CmpAddress(sa SockAddr) int { + usb, ok := sa.(UnixSock) + if !ok { + return sortDeferDecision + } + + return strings.Compare(us.Path(), usb.Path()) +} + +// DialPacketArgs returns the arguments required to be passed to net.DialUnix() +// with the `unixgram` network type. +func (us UnixSock) DialPacketArgs() (network, dialArgs string) { + return "unixgram", us.path +} + +// DialStreamArgs returns the arguments required to be passed to net.DialUnix() +// with the `unix` network type. +func (us UnixSock) DialStreamArgs() (network, dialArgs string) { + return "unix", us.path +} + +// Equal returns true if a SockAddr is equal to the receiving UnixSock. +func (us UnixSock) Equal(sa SockAddr) bool { + usb, ok := sa.(UnixSock) + if !ok { + return false + } + + if us.Path() != usb.Path() { + return false + } + + return true +} + +// ListenPacketArgs returns the arguments required to be passed to +// net.ListenUnixgram() with the `unixgram` network type. +func (us UnixSock) ListenPacketArgs() (network, dialArgs string) { + return "unixgram", us.path +} + +// ListenStreamArgs returns the arguments required to be passed to +// net.ListenUnix() with the `unix` network type. +func (us UnixSock) ListenStreamArgs() (network, dialArgs string) { + return "unix", us.path +} + +// MustUnixSock is a helper method that must return an UnixSock or panic on +// invalid input. 
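+// For example (sketch):
+//
+//	us := MustUnixSock("/tmp/example.sock")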
+func MustUnixSock(addr string) UnixSock { + us, err := NewUnixSock(addr) + if err != nil { + panic(fmt.Sprintf("Unable to create a UnixSock from %+q: %v", addr, err)) + } + return us +} + +// Path returns the given path of the UnixSock +func (us UnixSock) Path() string { + return us.path +} + +// String returns the path of the UnixSock +func (us UnixSock) String() string { + return fmt.Sprintf("%+q", us.path) +} + +// Type is used as a type switch and returns TypeUnix +func (UnixSock) Type() SockAddrType { + return TypeUnix +} + +// UnixSockAttrs returns a list of attributes supported by the UnixSockAddr type +func UnixSockAttrs() []AttrName { + return unixAttrs +} + +// UnixSockAttr returns a string representation of an attribute for the given +// UnixSock. +func UnixSockAttr(us UnixSock, attrName AttrName) string { + fn, found := unixAttrMap[attrName] + if !found { + return "" + } + + return fn(us) +} + +// unixAttrInit is called once at init() +func unixAttrInit() { + // Sorted for human readability + unixAttrs = []AttrName{ + "path", + } + + unixAttrMap = map[AttrName]func(us UnixSock) string{ + "path": func(us UnixSock) string { + return us.Path() + }, + } +} diff --git a/vendor/github.com/hashicorp/golang-lru/2q.go b/vendor/github.com/hashicorp/golang-lru/2q.go new file mode 100644 index 0000000000..337d963296 --- /dev/null +++ b/vendor/github.com/hashicorp/golang-lru/2q.go @@ -0,0 +1,212 @@ +package lru + +import ( + "fmt" + "sync" + + "github.com/hashicorp/golang-lru/simplelru" +) + +const ( + // Default2QRecentRatio is the ratio of the 2Q cache dedicated + // to recently added entries that have only been accessed once. + Default2QRecentRatio = 0.25 + + // Default2QGhostEntries is the default ratio of ghost + // entries kept to track entries recently evicted + Default2QGhostEntries = 0.50 +) + +// TwoQueueCache is a thread-safe fixed size 2Q cache. +// 2Q is an enhancement over the standard LRU cache +// in that it tracks both frequently and recently used +// entries separately. This avoids a burst in access to new +// entries from evicting frequently used entries. It adds some +// additional tracking overhead to the standard LRU cache, and is +// computationally about 2x the cost, and adds some metadata over +// head. The ARCCache is similar, but does not require setting any +// parameters. +type TwoQueueCache struct { + size int + recentSize int + + recent *simplelru.LRU + frequent *simplelru.LRU + recentEvict *simplelru.LRU + lock sync.RWMutex +} + +// New2Q creates a new TwoQueueCache using the default +// values for the parameters. +func New2Q(size int) (*TwoQueueCache, error) { + return New2QParams(size, Default2QRecentRatio, Default2QGhostEntries) +} + +// New2QParams creates a new TwoQueueCache using the provided +// parameter values. 
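+//
+// For example, a 128-entry cache using the package defaults (sketch):
+//
+//	c, err := New2QParams(128, Default2QRecentRatio, Default2QGhostEntries)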
+func New2QParams(size int, recentRatio float64, ghostRatio float64) (*TwoQueueCache, error) { + if size <= 0 { + return nil, fmt.Errorf("invalid size") + } + if recentRatio < 0.0 || recentRatio > 1.0 { + return nil, fmt.Errorf("invalid recent ratio") + } + if ghostRatio < 0.0 || ghostRatio > 1.0 { + return nil, fmt.Errorf("invalid ghost ratio") + } + + // Determine the sub-sizes + recentSize := int(float64(size) * recentRatio) + evictSize := int(float64(size) * ghostRatio) + + // Allocate the LRUs + recent, err := simplelru.NewLRU(size, nil) + if err != nil { + return nil, err + } + frequent, err := simplelru.NewLRU(size, nil) + if err != nil { + return nil, err + } + recentEvict, err := simplelru.NewLRU(evictSize, nil) + if err != nil { + return nil, err + } + + // Initialize the cache + c := &TwoQueueCache{ + size: size, + recentSize: recentSize, + recent: recent, + frequent: frequent, + recentEvict: recentEvict, + } + return c, nil +} + +func (c *TwoQueueCache) Get(key interface{}) (interface{}, bool) { + c.lock.Lock() + defer c.lock.Unlock() + + // Check if this is a frequent value + if val, ok := c.frequent.Get(key); ok { + return val, ok + } + + // If the value is contained in recent, then we + // promote it to frequent + if val, ok := c.recent.Peek(key); ok { + c.recent.Remove(key) + c.frequent.Add(key, val) + return val, ok + } + + // No hit + return nil, false +} + +func (c *TwoQueueCache) Add(key, value interface{}) { + c.lock.Lock() + defer c.lock.Unlock() + + // Check if the value is frequently used already, + // and just update the value + if c.frequent.Contains(key) { + c.frequent.Add(key, value) + return + } + + // Check if the value is recently used, and promote + // the value into the frequent list + if c.recent.Contains(key) { + c.recent.Remove(key) + c.frequent.Add(key, value) + return + } + + // If the value was recently evicted, add it to the + // frequently used list + if c.recentEvict.Contains(key) { + c.ensureSpace(true) + c.recentEvict.Remove(key) + c.frequent.Add(key, value) + return + } + + // Add to the recently seen list + c.ensureSpace(false) + c.recent.Add(key, value) + return +} + +// ensureSpace is used to ensure we have space in the cache +func (c *TwoQueueCache) ensureSpace(recentEvict bool) { + // If we have space, nothing to do + recentLen := c.recent.Len() + freqLen := c.frequent.Len() + if recentLen+freqLen < c.size { + return + } + + // If the recent buffer is larger than + // the target, evict from there + if recentLen > 0 && (recentLen > c.recentSize || (recentLen == c.recentSize && !recentEvict)) { + k, _, _ := c.recent.RemoveOldest() + c.recentEvict.Add(k, nil) + return + } + + // Remove from the frequent list otherwise + c.frequent.RemoveOldest() +} + +func (c *TwoQueueCache) Len() int { + c.lock.RLock() + defer c.lock.RUnlock() + return c.recent.Len() + c.frequent.Len() +} + +func (c *TwoQueueCache) Keys() []interface{} { + c.lock.RLock() + defer c.lock.RUnlock() + k1 := c.frequent.Keys() + k2 := c.recent.Keys() + return append(k1, k2...) 
+} + +func (c *TwoQueueCache) Remove(key interface{}) { + c.lock.Lock() + defer c.lock.Unlock() + if c.frequent.Remove(key) { + return + } + if c.recent.Remove(key) { + return + } + if c.recentEvict.Remove(key) { + return + } +} + +func (c *TwoQueueCache) Purge() { + c.lock.Lock() + defer c.lock.Unlock() + c.recent.Purge() + c.frequent.Purge() + c.recentEvict.Purge() +} + +func (c *TwoQueueCache) Contains(key interface{}) bool { + c.lock.RLock() + defer c.lock.RUnlock() + return c.frequent.Contains(key) || c.recent.Contains(key) +} + +func (c *TwoQueueCache) Peek(key interface{}) (interface{}, bool) { + c.lock.RLock() + defer c.lock.RUnlock() + if val, ok := c.frequent.Peek(key); ok { + return val, ok + } + return c.recent.Peek(key) +} diff --git a/vendor/github.com/hashicorp/golang-lru/README.md b/vendor/github.com/hashicorp/golang-lru/README.md new file mode 100644 index 0000000000..33e58cfaf9 --- /dev/null +++ b/vendor/github.com/hashicorp/golang-lru/README.md @@ -0,0 +1,25 @@ +golang-lru +========== + +This provides the `lru` package which implements a fixed-size +thread safe LRU cache. It is based on the cache in Groupcache. + +Documentation +============= + +Full docs are available on [Godoc](http://godoc.org/github.com/hashicorp/golang-lru) + +Example +======= + +Using the LRU is very simple: + +```go +l, _ := New(128) +for i := 0; i < 256; i++ { + l.Add(i, nil) +} +if l.Len() != 128 { + panic(fmt.Sprintf("bad len: %v", l.Len())) +} +``` diff --git a/vendor/github.com/hashicorp/golang-lru/arc.go b/vendor/github.com/hashicorp/golang-lru/arc.go new file mode 100644 index 0000000000..a2a2528173 --- /dev/null +++ b/vendor/github.com/hashicorp/golang-lru/arc.go @@ -0,0 +1,257 @@ +package lru + +import ( + "sync" + + "github.com/hashicorp/golang-lru/simplelru" +) + +// ARCCache is a thread-safe fixed size Adaptive Replacement Cache (ARC). +// ARC is an enhancement over the standard LRU cache in that tracks both +// frequency and recency of use. This avoids a burst in access to new +// entries from evicting the frequently used older entries. It adds some +// additional tracking overhead to a standard LRU cache, computationally +// it is roughly 2x the cost, and the extra memory overhead is linear +// with the size of the cache. ARC has been patented by IBM, but is +// similar to the TwoQueueCache (2Q) which requires setting parameters. +type ARCCache struct { + size int // Size is the total capacity of the cache + p int // P is the dynamic preference towards T1 or T2 + + t1 *simplelru.LRU // T1 is the LRU for recently accessed items + b1 *simplelru.LRU // B1 is the LRU for evictions from t1 + + t2 *simplelru.LRU // T2 is the LRU for frequently accessed items + b2 *simplelru.LRU // B2 is the LRU for evictions from t2 + + lock sync.RWMutex +} + +// NewARC creates an ARC of the given size +func NewARC(size int) (*ARCCache, error) { + // Create the sub LRUs + b1, err := simplelru.NewLRU(size, nil) + if err != nil { + return nil, err + } + b2, err := simplelru.NewLRU(size, nil) + if err != nil { + return nil, err + } + t1, err := simplelru.NewLRU(size, nil) + if err != nil { + return nil, err + } + t2, err := simplelru.NewLRU(size, nil) + if err != nil { + return nil, err + } + + // Initialize the ARC + c := &ARCCache{ + size: size, + p: 0, + t1: t1, + b1: b1, + t2: t2, + b2: b2, + } + return c, nil +} + +// Get looks up a key's value from the cache. 
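+// A hit in T1 (recent) promotes the entry to T2 (frequent); a sketch:
+//
+//	c, _ := NewARC(128)
+//	c.Add("k", "v")
+//	v, ok := c.Get("k") // returns "v", true and promotes "k" to T2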
+func (c *ARCCache) Get(key interface{}) (interface{}, bool) {
+	c.lock.Lock()
+	defer c.lock.Unlock()
+
+	// If the value is contained in T1 (recent), then
+	// promote it to T2 (frequent)
+	if val, ok := c.t1.Peek(key); ok {
+		c.t1.Remove(key)
+		c.t2.Add(key, val)
+		return val, ok
+	}
+
+	// Check if the value is contained in T2 (frequent)
+	if val, ok := c.t2.Get(key); ok {
+		return val, ok
+	}
+
+	// No hit
+	return nil, false
+}
+
+// Add adds a value to the cache.
+func (c *ARCCache) Add(key, value interface{}) {
+	c.lock.Lock()
+	defer c.lock.Unlock()
+
+	// Check if the value is contained in T1 (recent), and potentially
+	// promote it to frequent T2
+	if c.t1.Contains(key) {
+		c.t1.Remove(key)
+		c.t2.Add(key, value)
+		return
+	}
+
+	// Check if the value is already in T2 (frequent) and update it
+	if c.t2.Contains(key) {
+		c.t2.Add(key, value)
+		return
+	}
+
+	// Check if this value was recently evicted as part of the
+	// recently used list
+	if c.b1.Contains(key) {
+		// T1 set is too small, increase P appropriately
+		delta := 1
+		b1Len := c.b1.Len()
+		b2Len := c.b2.Len()
+		if b2Len > b1Len {
+			delta = b2Len / b1Len
+		}
+		if c.p+delta >= c.size {
+			c.p = c.size
+		} else {
+			c.p += delta
+		}
+
+		// Potentially need to make room in the cache
+		if c.t1.Len()+c.t2.Len() >= c.size {
+			c.replace(false)
+		}
+
+		// Remove from B1
+		c.b1.Remove(key)
+
+		// Add the key to the frequently used list
+		c.t2.Add(key, value)
+		return
+	}
+
+	// Check if this value was recently evicted as part of the
+	// frequently used list
+	if c.b2.Contains(key) {
+		// T2 set is too small, decrease P appropriately
+		delta := 1
+		b1Len := c.b1.Len()
+		b2Len := c.b2.Len()
+		if b1Len > b2Len {
+			delta = b1Len / b2Len
+		}
+		if delta >= c.p {
+			c.p = 0
+		} else {
+			c.p -= delta
+		}
+
+		// Potentially need to make room in the cache
+		if c.t1.Len()+c.t2.Len() >= c.size {
+			c.replace(true)
+		}
+
+		// Remove from B2
+		c.b2.Remove(key)
+
+		// Add the key to the frequently used list
+		c.t2.Add(key, value)
+		return
+	}
+
+	// Potentially need to make room in the cache
+	if c.t1.Len()+c.t2.Len() >= c.size {
+		c.replace(false)
+	}
+
+	// Keep the size of the ghost buffers trim
+	if c.b1.Len() > c.size-c.p {
+		c.b1.RemoveOldest()
+	}
+	if c.b2.Len() > c.p {
+		c.b2.RemoveOldest()
+	}
+
+	// Add to the recently seen list
+	c.t1.Add(key, value)
+	return
+}
+
+// replace is used to adaptively evict from either T1 or T2
+// based on the current learned value of P
+func (c *ARCCache) replace(b2ContainsKey bool) {
+	t1Len := c.t1.Len()
+	if t1Len > 0 && (t1Len > c.p || (t1Len == c.p && b2ContainsKey)) {
+		k, _, ok := c.t1.RemoveOldest()
+		if ok {
+			c.b1.Add(k, nil)
+		}
+	} else {
+		k, _, ok := c.t2.RemoveOldest()
+		if ok {
+			c.b2.Add(k, nil)
+		}
+	}
+}
+
+// Len returns the number of cached entries
+func (c *ARCCache) Len() int {
+	c.lock.RLock()
+	defer c.lock.RUnlock()
+	return c.t1.Len() + c.t2.Len()
+}
+
+// Keys returns all the cached keys
+func (c *ARCCache) Keys() []interface{} {
+	c.lock.RLock()
+	defer c.lock.RUnlock()
+	k1 := c.t1.Keys()
+	k2 := c.t2.Keys()
+	return append(k1, k2...)
+} + +// Remove is used to purge a key from the cache +func (c *ARCCache) Remove(key interface{}) { + c.lock.Lock() + defer c.lock.Unlock() + if c.t1.Remove(key) { + return + } + if c.t2.Remove(key) { + return + } + if c.b1.Remove(key) { + return + } + if c.b2.Remove(key) { + return + } +} + +// Purge is used to clear the cache +func (c *ARCCache) Purge() { + c.lock.Lock() + defer c.lock.Unlock() + c.t1.Purge() + c.t2.Purge() + c.b1.Purge() + c.b2.Purge() +} + +// Contains is used to check if the cache contains a key +// without updating recency or frequency. +func (c *ARCCache) Contains(key interface{}) bool { + c.lock.RLock() + defer c.lock.RUnlock() + return c.t1.Contains(key) || c.t2.Contains(key) +} + +// Peek is used to inspect the cache value of a key +// without updating recency or frequency. +func (c *ARCCache) Peek(key interface{}) (interface{}, bool) { + c.lock.RLock() + defer c.lock.RUnlock() + if val, ok := c.t1.Peek(key); ok { + return val, ok + } + return c.t2.Peek(key) +} diff --git a/vendor/github.com/hashicorp/golang-lru/lru.go b/vendor/github.com/hashicorp/golang-lru/lru.go new file mode 100644 index 0000000000..a6285f989e --- /dev/null +++ b/vendor/github.com/hashicorp/golang-lru/lru.go @@ -0,0 +1,114 @@ +// This package provides a simple LRU cache. It is based on the +// LRU implementation in groupcache: +// https://github.com/golang/groupcache/tree/master/lru +package lru + +import ( + "sync" + + "github.com/hashicorp/golang-lru/simplelru" +) + +// Cache is a thread-safe fixed size LRU cache. +type Cache struct { + lru *simplelru.LRU + lock sync.RWMutex +} + +// New creates an LRU of the given size +func New(size int) (*Cache, error) { + return NewWithEvict(size, nil) +} + +// NewWithEvict constructs a fixed size cache with the given eviction +// callback. +func NewWithEvict(size int, onEvicted func(key interface{}, value interface{})) (*Cache, error) { + lru, err := simplelru.NewLRU(size, simplelru.EvictCallback(onEvicted)) + if err != nil { + return nil, err + } + c := &Cache{ + lru: lru, + } + return c, nil +} + +// Purge is used to completely clear the cache +func (c *Cache) Purge() { + c.lock.Lock() + c.lru.Purge() + c.lock.Unlock() +} + +// Add adds a value to the cache. Returns true if an eviction occurred. +func (c *Cache) Add(key, value interface{}) bool { + c.lock.Lock() + defer c.lock.Unlock() + return c.lru.Add(key, value) +} + +// Get looks up a key's value from the cache. +func (c *Cache) Get(key interface{}) (interface{}, bool) { + c.lock.Lock() + defer c.lock.Unlock() + return c.lru.Get(key) +} + +// Check if a key is in the cache, without updating the recent-ness +// or deleting it for being stale. +func (c *Cache) Contains(key interface{}) bool { + c.lock.RLock() + defer c.lock.RUnlock() + return c.lru.Contains(key) +} + +// Returns the key value (or undefined if not found) without updating +// the "recently used"-ness of the key. +func (c *Cache) Peek(key interface{}) (interface{}, bool) { + c.lock.RLock() + defer c.lock.RUnlock() + return c.lru.Peek(key) +} + +// ContainsOrAdd checks if a key is in the cache without updating the +// recent-ness or deleting it for being stale, and if not, adds the value. +// Returns whether found and whether an eviction occurred. +func (c *Cache) ContainsOrAdd(key, value interface{}) (ok, evict bool) { + c.lock.Lock() + defer c.lock.Unlock() + + if c.lru.Contains(key) { + return true, false + } else { + evict := c.lru.Add(key, value) + return false, evict + } +} + +// Remove removes the provided key from the cache. 
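+// It is a no-op if the key is not present.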
+func (c *Cache) Remove(key interface{}) { + c.lock.Lock() + c.lru.Remove(key) + c.lock.Unlock() +} + +// RemoveOldest removes the oldest item from the cache. +func (c *Cache) RemoveOldest() { + c.lock.Lock() + c.lru.RemoveOldest() + c.lock.Unlock() +} + +// Keys returns a slice of the keys in the cache, from oldest to newest. +func (c *Cache) Keys() []interface{} { + c.lock.RLock() + defer c.lock.RUnlock() + return c.lru.Keys() +} + +// Len returns the number of items in the cache. +func (c *Cache) Len() int { + c.lock.RLock() + defer c.lock.RUnlock() + return c.lru.Len() +} diff --git a/vendor/github.com/hashicorp/golang-lru/simplelru/lru.go b/vendor/github.com/hashicorp/golang-lru/simplelru/lru.go new file mode 100644 index 0000000000..cb416b394f --- /dev/null +++ b/vendor/github.com/hashicorp/golang-lru/simplelru/lru.go @@ -0,0 +1,160 @@ +package simplelru + +import ( + "container/list" + "errors" +) + +// EvictCallback is used to get a callback when a cache entry is evicted +type EvictCallback func(key interface{}, value interface{}) + +// LRU implements a non-thread safe fixed size LRU cache +type LRU struct { + size int + evictList *list.List + items map[interface{}]*list.Element + onEvict EvictCallback +} + +// entry is used to hold a value in the evictList +type entry struct { + key interface{} + value interface{} +} + +// NewLRU constructs an LRU of the given size +func NewLRU(size int, onEvict EvictCallback) (*LRU, error) { + if size <= 0 { + return nil, errors.New("Must provide a positive size") + } + c := &LRU{ + size: size, + evictList: list.New(), + items: make(map[interface{}]*list.Element), + onEvict: onEvict, + } + return c, nil +} + +// Purge is used to completely clear the cache +func (c *LRU) Purge() { + for k, v := range c.items { + if c.onEvict != nil { + c.onEvict(k, v.Value.(*entry).value) + } + delete(c.items, k) + } + c.evictList.Init() +} + +// Add adds a value to the cache. Returns true if an eviction occurred. +func (c *LRU) Add(key, value interface{}) bool { + // Check for existing item + if ent, ok := c.items[key]; ok { + c.evictList.MoveToFront(ent) + ent.Value.(*entry).value = value + return false + } + + // Add new item + ent := &entry{key, value} + entry := c.evictList.PushFront(ent) + c.items[key] = entry + + evict := c.evictList.Len() > c.size + // Verify size not exceeded + if evict { + c.removeOldest() + } + return evict +} + +// Get looks up a key's value from the cache. +func (c *LRU) Get(key interface{}) (value interface{}, ok bool) { + if ent, ok := c.items[key]; ok { + c.evictList.MoveToFront(ent) + return ent.Value.(*entry).value, true + } + return +} + +// Check if a key is in the cache, without updating the recent-ness +// or deleting it for being stale. +func (c *LRU) Contains(key interface{}) (ok bool) { + _, ok = c.items[key] + return ok +} + +// Returns the key value (or undefined if not found) without updating +// the "recently used"-ness of the key. +func (c *LRU) Peek(key interface{}) (value interface{}, ok bool) { + if ent, ok := c.items[key]; ok { + return ent.Value.(*entry).value, true + } + return nil, ok +} + +// Remove removes the provided key from the cache, returning if the +// key was contained. +func (c *LRU) Remove(key interface{}) bool { + if ent, ok := c.items[key]; ok { + c.removeElement(ent) + return true + } + return false +} + +// RemoveOldest removes the oldest item from the cache. 
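+// It returns the evicted key and value and true, or (nil, nil, false) when
+// the cache is empty.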
+func (c *LRU) RemoveOldest() (interface{}, interface{}, bool) { + ent := c.evictList.Back() + if ent != nil { + c.removeElement(ent) + kv := ent.Value.(*entry) + return kv.key, kv.value, true + } + return nil, nil, false +} + +// GetOldest returns the oldest entry +func (c *LRU) GetOldest() (interface{}, interface{}, bool) { + ent := c.evictList.Back() + if ent != nil { + kv := ent.Value.(*entry) + return kv.key, kv.value, true + } + return nil, nil, false +} + +// Keys returns a slice of the keys in the cache, from oldest to newest. +func (c *LRU) Keys() []interface{} { + keys := make([]interface{}, len(c.items)) + i := 0 + for ent := c.evictList.Back(); ent != nil; ent = ent.Prev() { + keys[i] = ent.Value.(*entry).key + i++ + } + return keys +} + +// Len returns the number of items in the cache. +func (c *LRU) Len() int { + return c.evictList.Len() +} + +// removeOldest removes the oldest item from the cache. +func (c *LRU) removeOldest() { + ent := c.evictList.Back() + if ent != nil { + c.removeElement(ent) + } +} + +// removeElement is used to remove a given list element from the cache +func (c *LRU) removeElement(e *list.Element) { + c.evictList.Remove(e) + kv := e.Value.(*entry) + delete(c.items, kv.key) + if c.onEvict != nil { + c.onEvict(kv.key, kv.value) + } +} diff --git a/vendor/github.com/hashicorp/memberlist/LICENSE b/vendor/github.com/hashicorp/memberlist/LICENSE new file mode 100644 index 0000000000..c33dcc7c92 --- /dev/null +++ b/vendor/github.com/hashicorp/memberlist/LICENSE @@ -0,0 +1,354 @@ +Mozilla Public License, version 2.0 + +1. Definitions + +1.1. “Contributor” + + means each individual or legal entity that creates, contributes to the + creation of, or owns Covered Software. + +1.2. “Contributor Version” + + means the combination of the Contributions of others (if any) used by a + Contributor and that particular Contributor’s Contribution. + +1.3. “Contribution” + + means Covered Software of a particular Contributor. + +1.4. “Covered Software” + + means Source Code Form to which the initial Contributor has attached the + notice in Exhibit A, the Executable Form of such Source Code Form, and + Modifications of such Source Code Form, in each case including portions + thereof. + +1.5. “Incompatible With Secondary Licenses” + means + + a. that the initial Contributor has attached the notice described in + Exhibit B to the Covered Software; or + + b. that the Covered Software was made available under the terms of version + 1.1 or earlier of the License, but not also under the terms of a + Secondary License. + +1.6. “Executable Form” + + means any form of the work other than Source Code Form. + +1.7. “Larger Work” + + means a work that combines Covered Software with other material, in a separate + file or files, that is not Covered Software. + +1.8. “License” + + means this document. + +1.9. “Licensable” + + means having the right to grant, to the maximum extent possible, whether at the + time of the initial grant or subsequently, any and all of the rights conveyed by + this License. + +1.10. “Modifications” + + means any of the following: + + a. any file in Source Code Form that results from an addition to, deletion + from, or modification of the contents of Covered Software; or + + b. any new file in Source Code Form that contains any Covered Software. + +1.11. 
“Patent Claims” of a Contributor + + means any patent claim(s), including without limitation, method, process, + and apparatus claims, in any patent Licensable by such Contributor that + would be infringed, but for the grant of the License, by the making, + using, selling, offering for sale, having made, import, or transfer of + either its Contributions or its Contributor Version. + +1.12. “Secondary License” + + means either the GNU General Public License, Version 2.0, the GNU Lesser + General Public License, Version 2.1, the GNU Affero General Public + License, Version 3.0, or any later versions of those licenses. + +1.13. “Source Code Form” + + means the form of the work preferred for making modifications. + +1.14. “You” (or “Your”) + + means an individual or a legal entity exercising rights under this + License. For legal entities, “You” includes any entity that controls, is + controlled by, or is under common control with You. For purposes of this + definition, “control” means (a) the power, direct or indirect, to cause + the direction or management of such entity, whether by contract or + otherwise, or (b) ownership of more than fifty percent (50%) of the + outstanding shares or beneficial ownership of such entity. + + +2. License Grants and Conditions + +2.1. Grants + + Each Contributor hereby grants You a world-wide, royalty-free, + non-exclusive license: + + a. under intellectual property rights (other than patent or trademark) + Licensable by such Contributor to use, reproduce, make available, + modify, display, perform, distribute, and otherwise exploit its + Contributions, either on an unmodified basis, with Modifications, or as + part of a Larger Work; and + + b. under Patent Claims of such Contributor to make, use, sell, offer for + sale, have made, import, and otherwise transfer either its Contributions + or its Contributor Version. + +2.2. Effective Date + + The licenses granted in Section 2.1 with respect to any Contribution become + effective for each Contribution on the date the Contributor first distributes + such Contribution. + +2.3. Limitations on Grant Scope + + The licenses granted in this Section 2 are the only rights granted under this + License. No additional rights or licenses will be implied from the distribution + or licensing of Covered Software under this License. Notwithstanding Section + 2.1(b) above, no patent license is granted by a Contributor: + + a. for any code that a Contributor has removed from Covered Software; or + + b. for infringements caused by: (i) Your and any other third party’s + modifications of Covered Software, or (ii) the combination of its + Contributions with other software (except as part of its Contributor + Version); or + + c. under Patent Claims infringed by Covered Software in the absence of its + Contributions. + + This License does not grant any rights in the trademarks, service marks, or + logos of any Contributor (except as may be necessary to comply with the + notice requirements in Section 3.4). + +2.4. Subsequent Licenses + + No Contributor makes additional grants as a result of Your choice to + distribute the Covered Software under a subsequent version of this License + (see Section 10.2) or under the terms of a Secondary License (if permitted + under the terms of Section 3.3). + +2.5. Representation + + Each Contributor represents that the Contributor believes its Contributions + are its original creation(s) or it has sufficient rights to grant the + rights to its Contributions conveyed by this License. + +2.6. 
Fair Use + + This License is not intended to limit any rights You have under applicable + copyright doctrines of fair use, fair dealing, or other equivalents. + +2.7. Conditions + + Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in + Section 2.1. + + +3. Responsibilities + +3.1. Distribution of Source Form + + All distribution of Covered Software in Source Code Form, including any + Modifications that You create or to which You contribute, must be under the + terms of this License. You must inform recipients that the Source Code Form + of the Covered Software is governed by the terms of this License, and how + they can obtain a copy of this License. You may not attempt to alter or + restrict the recipients’ rights in the Source Code Form. + +3.2. Distribution of Executable Form + + If You distribute Covered Software in Executable Form then: + + a. such Covered Software must also be made available in Source Code Form, + as described in Section 3.1, and You must inform recipients of the + Executable Form how they can obtain a copy of such Source Code Form by + reasonable means in a timely manner, at a charge no more than the cost + of distribution to the recipient; and + + b. You may distribute such Executable Form under the terms of this License, + or sublicense it under different terms, provided that the license for + the Executable Form does not attempt to limit or alter the recipients’ + rights in the Source Code Form under this License. + +3.3. Distribution of a Larger Work + + You may create and distribute a Larger Work under terms of Your choice, + provided that You also comply with the requirements of this License for the + Covered Software. If the Larger Work is a combination of Covered Software + with a work governed by one or more Secondary Licenses, and the Covered + Software is not Incompatible With Secondary Licenses, this License permits + You to additionally distribute such Covered Software under the terms of + such Secondary License(s), so that the recipient of the Larger Work may, at + their option, further distribute the Covered Software under the terms of + either this License or such Secondary License(s). + +3.4. Notices + + You may not remove or alter the substance of any license notices (including + copyright notices, patent notices, disclaimers of warranty, or limitations + of liability) contained within the Source Code Form of the Covered + Software, except that You may alter any license notices to the extent + required to remedy known factual inaccuracies. + +3.5. Application of Additional Terms + + You may choose to offer, and to charge a fee for, warranty, support, + indemnity or liability obligations to one or more recipients of Covered + Software. However, You may do so only on Your own behalf, and not on behalf + of any Contributor. You must make it absolutely clear that any such + warranty, support, indemnity, or liability obligation is offered by You + alone, and You hereby agree to indemnify every Contributor for any + liability incurred by such Contributor as a result of warranty, support, + indemnity or liability terms You offer. You may include additional + disclaimers of warranty and limitations of liability specific to any + jurisdiction. + +4. 
Inability to Comply Due to Statute or Regulation + + If it is impossible for You to comply with any of the terms of this License + with respect to some or all of the Covered Software due to statute, judicial + order, or regulation then You must: (a) comply with the terms of this License + to the maximum extent possible; and (b) describe the limitations and the code + they affect. Such description must be placed in a text file included with all + distributions of the Covered Software under this License. Except to the + extent prohibited by statute or regulation, such description must be + sufficiently detailed for a recipient of ordinary skill to be able to + understand it. + +5. Termination + +5.1. The rights granted under this License will terminate automatically if You + fail to comply with any of its terms. However, if You become compliant, + then the rights granted under this License from a particular Contributor + are reinstated (a) provisionally, unless and until such Contributor + explicitly and finally terminates Your grants, and (b) on an ongoing basis, + if such Contributor fails to notify You of the non-compliance by some + reasonable means prior to 60 days after You have come back into compliance. + Moreover, Your grants from a particular Contributor are reinstated on an + ongoing basis if such Contributor notifies You of the non-compliance by + some reasonable means, this is the first time You have received notice of + non-compliance with this License from such Contributor, and You become + compliant prior to 30 days after Your receipt of the notice. + +5.2. If You initiate litigation against any entity by asserting a patent + infringement claim (excluding declaratory judgment actions, counter-claims, + and cross-claims) alleging that a Contributor Version directly or + indirectly infringes any patent, then the rights granted to You by any and + all Contributors for the Covered Software under Section 2.1 of this License + shall terminate. + +5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user + license agreements (excluding distributors and resellers) which have been + validly granted by You or Your distributors under this License prior to + termination shall survive termination. + +6. Disclaimer of Warranty + + Covered Software is provided under this License on an “as is” basis, without + warranty of any kind, either expressed, implied, or statutory, including, + without limitation, warranties that the Covered Software is free of defects, + merchantable, fit for a particular purpose or non-infringing. The entire + risk as to the quality and performance of the Covered Software is with You. + Should any Covered Software prove defective in any respect, You (not any + Contributor) assume the cost of any necessary servicing, repair, or + correction. This disclaimer of warranty constitutes an essential part of this + License. No use of any Covered Software is authorized under this License + except under this disclaimer. + +7. 
Limitation of Liability + + Under no circumstances and under no legal theory, whether tort (including + negligence), contract, or otherwise, shall any Contributor, or anyone who + distributes Covered Software as permitted above, be liable to You for any + direct, indirect, special, incidental, or consequential damages of any + character including, without limitation, damages for lost profits, loss of + goodwill, work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses, even if such party shall have been + informed of the possibility of such damages. This limitation of liability + shall not apply to liability for death or personal injury resulting from such + party’s negligence to the extent applicable law prohibits such limitation. + Some jurisdictions do not allow the exclusion or limitation of incidental or + consequential damages, so this exclusion and limitation may not apply to You. + +8. Litigation + + Any litigation relating to this License may be brought only in the courts of + a jurisdiction where the defendant maintains its principal place of business + and such litigation shall be governed by laws of that jurisdiction, without + reference to its conflict-of-law provisions. Nothing in this Section shall + prevent a party’s ability to bring cross-claims or counter-claims. + +9. Miscellaneous + + This License represents the complete agreement concerning the subject matter + hereof. If any provision of this License is held to be unenforceable, such + provision shall be reformed only to the extent necessary to make it + enforceable. Any law or regulation which provides that the language of a + contract shall be construed against the drafter shall not be used to construe + this License against a Contributor. + + +10. Versions of the License + +10.1. New Versions + + Mozilla Foundation is the license steward. Except as provided in Section + 10.3, no one other than the license steward has the right to modify or + publish new versions of this License. Each version will be given a + distinguishing version number. + +10.2. Effect of New Versions + + You may distribute the Covered Software under the terms of the version of + the License under which You originally received the Covered Software, or + under the terms of any subsequent version published by the license + steward. + +10.3. Modified Versions + + If you create software not governed by this License, and you want to + create a new license for such software, you may create and use a modified + version of this License if you rename the license and remove any + references to the name of the license steward (except to note that such + modified license differs from this License). + +10.4. Distributing Source Code Form that is Incompatible With Secondary Licenses + If You choose to distribute Source Code Form that is Incompatible With + Secondary Licenses under the terms of this version of the License, the + notice described in Exhibit B of this License must be attached. + +Exhibit A - Source Code Form License Notice + + This Source Code Form is subject to the + terms of the Mozilla Public License, v. + 2.0. If a copy of the MPL was not + distributed with this file, You can + obtain one at + http://mozilla.org/MPL/2.0/. + +If it is not possible or desirable to put the notice in a particular file, then +You may include the notice in a location (such as a LICENSE file in a relevant +directory) where a recipient would be likely to look for such a notice. 
+
+You may add additional accurate notices of copyright ownership.
+
+Exhibit B - “Incompatible With Secondary Licenses” Notice
+
+  This Source Code Form is “Incompatible
+  With Secondary Licenses”, as defined by
+  the Mozilla Public License, v. 2.0.
+
diff --git a/vendor/github.com/hashicorp/memberlist/Makefile b/vendor/github.com/hashicorp/memberlist/Makefile
new file mode 100644
index 0000000000..56ef6c28c6
--- /dev/null
+++ b/vendor/github.com/hashicorp/memberlist/Makefile
@@ -0,0 +1,14 @@
+test: subnet
+	go test ./...
+
+integ: subnet
+	INTEG_TESTS=yes go test ./...
+
+subnet:
+	./test/setup_subnet.sh
+
+cov:
+	gocov test github.com/hashicorp/memberlist | gocov-html > /tmp/coverage.html
+	open /tmp/coverage.html
+
+.PHONY: test cov integ
diff --git a/vendor/github.com/hashicorp/memberlist/README.md b/vendor/github.com/hashicorp/memberlist/README.md
new file mode 100644
index 0000000000..fc605a59b4
--- /dev/null
+++ b/vendor/github.com/hashicorp/memberlist/README.md
@@ -0,0 +1,144 @@
+# memberlist [![GoDoc](https://godoc.org/github.com/hashicorp/memberlist?status.png)](https://godoc.org/github.com/hashicorp/memberlist)
+
+memberlist is a [Go](http://www.golang.org) library that manages cluster
+membership and member failure detection using a gossip based protocol.
+
+The use cases for such a library are far-reaching: all distributed systems
+require membership, and memberlist is a re-usable solution to managing
+cluster membership and node failure detection.
+
+memberlist is eventually consistent but converges quickly on average.
+The speed at which it converges can be heavily tuned via various knobs
+on the protocol. Node failures are detected and network partitions are partially
+tolerated by attempting to communicate to potentially dead nodes through
+multiple routes.
+
+## Building
+
+If you wish to build memberlist, you'll need Go version 1.2+ installed.
+
+Please check your installation with:
+
+```
+go version
+```
+
+## Usage
+
+Memberlist is surprisingly simple to use. An example is shown below:
+
+```go
+/* Create the initial memberlist from a safe configuration.
+   Please reference the godoc for other default config types.
+   http://godoc.org/github.com/hashicorp/memberlist#Config
+*/
+list, err := memberlist.Create(memberlist.DefaultLocalConfig())
+if err != nil {
+	panic("Failed to create memberlist: " + err.Error())
+}
+
+// Join an existing cluster by specifying at least one known member.
+n, err := list.Join([]string{"1.2.3.4"})
+if err != nil {
+	panic("Failed to join cluster: " + err.Error())
+}
+
+// Ask for members of the cluster
+for _, member := range list.Members() {
+	fmt.Printf("Member: %s %s\n", member.Name, member.Addr)
+}
+
+// Continue doing whatever you need, memberlist will maintain membership
+// information in the background. Delegates can be used for receiving
+// events when members join or leave.
+```
+
+The most difficult part of memberlist is configuring it since it has many
+available knobs in order to tune state propagation delay and convergence times.
+Memberlist provides a default configuration that offers a good starting point,
+but errs on the side of caution, choosing values that are optimized for
+higher convergence at the cost of higher bandwidth usage.
+
+For complete documentation, see the associated [Godoc](http://godoc.org/github.com/hashicorp/memberlist).
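+
+As a sketch of tuning (the field names below come from this library's
+`Config` type; the specific values are illustrative, not recommendations),
+you can start from a default profile and override individual knobs:
+
+```go
+conf := memberlist.DefaultLANConfig()
+conf.Name = "node-1"                         // must be unique in the cluster
+conf.GossipInterval = 100 * time.Millisecond // gossip more aggressively
+conf.ProbeInterval = 500 * time.Millisecond  // detect failures faster, at a bandwidth cost
+
+list, err := memberlist.Create(conf)
+if err != nil {
+	panic("Failed to create memberlist: " + err.Error())
+}
+```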
+
+## Protocol
+
+memberlist is based on ["SWIM: Scalable Weakly-consistent Infection-style Process Group Membership Protocol"](http://www.cs.cornell.edu/~asdas/research/dsn02-swim.pdf),
+with a few minor adaptations, mostly to increase propagation speed and
+convergence rate.
+
+A high-level overview of the memberlist protocol (based on SWIM) is
+described below, but for details please read the full
+[SWIM paper](http://www.cs.cornell.edu/~asdas/research/dsn02-swim.pdf)
+followed by the memberlist source. We welcome any questions related
+to the protocol on our issue tracker.
+
+### Protocol Description
+
+memberlist begins by joining an existing cluster or starting a new
+cluster. If starting a new cluster, additional nodes are expected to join
+it. New nodes in an existing cluster must be given the address of at
+least one existing member in order to join the cluster. The new member
+does a full state sync with the existing member over TCP and begins gossiping its
+existence to the cluster.
+
+Gossip is done over UDP with a configurable but fixed fanout and interval.
+This ensures that network usage is constant with regard to the number of nodes, as opposed to
+the exponential growth that can occur with traditional heartbeat mechanisms.
+Complete state exchanges with a random node are done periodically over
+TCP, but much less often than gossip messages. This increases the likelihood
+that the membership list converges properly since the full state is exchanged
+and merged. The interval between full state exchanges is configurable or can
+be disabled entirely.
+
+Failure detection is done by periodic random probing using a configurable interval.
+If the node fails to ack within a reasonable time (typically some multiple
+of RTT), then an indirect probe as well as a direct TCP probe are attempted. An
+indirect probe asks a configurable number of random nodes to probe the same node,
+in case there are network issues causing our own node to fail the probe. The direct
+TCP probe is used to help identify the common situation where networking is
+misconfigured to allow TCP but not UDP. Without the TCP probe, a UDP-isolated node
+would think all other nodes were suspect and could cause churn in the cluster when
+it attempts a TCP-based state exchange with another node. It is not desirable to
+operate with only TCP connectivity because convergence will be much slower, but it
+is enabled so that memberlist can detect this situation and alert operators.
+
+If our own probe, the indirect probes, and the direct TCP probe all fail within a
+configurable time, then the node is marked "suspicious" and this knowledge is
+gossiped to the cluster. A suspicious node is still considered a member of the
+cluster. If the suspect member of the cluster does not dispute the suspicion
+within a configurable period of time, the node is finally considered dead,
+and this state is then gossiped to the cluster.
+
+This is a brief and incomplete description of the protocol. For a better idea,
+please read the
+[SWIM paper](http://www.cs.cornell.edu/~asdas/research/dsn02-swim.pdf)
+in its entirety, along with the memberlist source code.
+
+### Changes from SWIM
+
+As mentioned earlier, the memberlist protocol is based on SWIM but includes
+minor changes, mostly to increase propagation speed and convergence rates.
+
+The changes from SWIM are noted here:
+
+* memberlist does a full state sync over TCP periodically. SWIM only propagates
+  changes over gossip.
+  While both eventually reach convergence, the full state
+  sync increases the likelihood that nodes are fully converged more quickly,
+  at the expense of more bandwidth usage. This feature can be totally disabled
+  if you wish.
+
+* memberlist has a dedicated gossip layer separate from the failure detection
+  protocol. SWIM only piggybacks gossip messages on top of probe/ack messages.
+  memberlist also piggybacks gossip messages on top of probe/ack messages, but
+  will also periodically send out dedicated gossip messages on their own. This
+  feature lets you have a higher gossip rate (for example once per 200ms)
+  and a slower failure detection rate (such as once per second), resulting
+  in overall faster convergence rates and data propagation speeds. This feature
+  can be totally disabled as well, if you wish.
+
+* memberlist keeps the state of dead nodes around for a set amount of time,
+  so that when full syncs are requested, the requester also receives information
+  about dead nodes. Because SWIM doesn't do full syncs, SWIM deletes dead node
+  state immediately upon learning that the node is dead. This change again helps
+  the cluster converge more quickly.
diff --git a/vendor/github.com/hashicorp/memberlist/alive_delegate.go b/vendor/github.com/hashicorp/memberlist/alive_delegate.go
new file mode 100644
index 0000000000..51a0ba9054
--- /dev/null
+++ b/vendor/github.com/hashicorp/memberlist/alive_delegate.go
@@ -0,0 +1,14 @@
+package memberlist
+
+// AliveDelegate is used to involve a client in processing
+// a node "alive" message. When a node joins, either through
+// a UDP gossip or TCP push/pull, we update the state of
+// that node via an alive message. This can be used to filter
+// a node out and prevent it from being considered a peer
+// using application specific logic.
+type AliveDelegate interface {
+	// NotifyAlive is invoked when a message about a live
+	// node is received. A non-nil return value causes the
+	// alive message to be ignored (the node is filtered out).
+	NotifyAlive(peer *Node) error
+}
diff --git a/vendor/github.com/hashicorp/memberlist/awareness.go b/vendor/github.com/hashicorp/memberlist/awareness.go
new file mode 100644
index 0000000000..ea95c75388
--- /dev/null
+++ b/vendor/github.com/hashicorp/memberlist/awareness.go
@@ -0,0 +1,69 @@
+package memberlist
+
+import (
+	"sync"
+	"time"
+
+	"github.com/armon/go-metrics"
+)
+
+// awareness manages a simple metric for tracking the estimated health of the
+// local node. Health is primarily the node's ability to respond in the soft
+// real-time manner required for correct health checking of other nodes in the
+// cluster.
+type awareness struct {
+	sync.RWMutex
+
+	// max is the upper threshold for the timeout scale (the score will be
+	// constrained to be from 0 <= score < max).
+	max int
+
+	// score is the current awareness score. Lower values are healthier and
+	// zero is the minimum value.
+	score int
+}
+
+// newAwareness returns a new awareness object.
+func newAwareness(max int) *awareness {
+	return &awareness{
+		max:   max,
+		score: 0,
+	}
+}
+
+// ApplyDelta takes the given delta and applies it to the score in a thread-safe
+// manner. It also enforces a floor of zero and a max of max, so deltas may not
+// change the overall score if it's railed at one of the extremes.
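+//
+// A usage sketch (the deltas shown are illustrative assumptions, not the
+// values used elsewhere in this package):
+//
+//	a := newAwareness(8)
+//	a.ApplyDelta(1)  // e.g. after a slow or failed probe
+//	a.ApplyDelta(-1) // e.g. after a timely ack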
+func (a *awareness) ApplyDelta(delta int) {
+	a.Lock()
+	initial := a.score
+	a.score += delta
+	if a.score < 0 {
+		a.score = 0
+	} else if a.score > (a.max - 1) {
+		a.score = (a.max - 1)
+	}
+	final := a.score
+	a.Unlock()
+
+	if initial != final {
+		metrics.SetGauge([]string{"memberlist", "health", "score"}, float32(final))
+	}
+}
+
+// GetHealthScore returns the raw health score.
+func (a *awareness) GetHealthScore() int {
+	a.RLock()
+	score := a.score
+	a.RUnlock()
+	return score
+}
+
+// ScaleTimeout takes the given duration and scales it based on the current
+// score. Less healthiness will lead to longer timeouts.
+func (a *awareness) ScaleTimeout(timeout time.Duration) time.Duration {
+	a.RLock()
+	score := a.score
+	a.RUnlock()
+	return timeout * (time.Duration(score) + 1)
+}
diff --git a/vendor/github.com/hashicorp/memberlist/broadcast.go b/vendor/github.com/hashicorp/memberlist/broadcast.go
new file mode 100644
index 0000000000..f7e85a119c
--- /dev/null
+++ b/vendor/github.com/hashicorp/memberlist/broadcast.go
@@ -0,0 +1,100 @@
+package memberlist
+
+/*
+The broadcast mechanism works by maintaining a sorted list of messages to be
+sent out. When a message is to be broadcast, its retransmit count
+is set to zero and it is appended to the queue. The retransmit count serves
+as the "priority", ensuring that newer messages get sent first. Once
+a message hits the retransmit limit, it is removed from the queue.
+
+Additionally, older entries can be invalidated by new messages that
+are contradictory. For example, if we send "{suspect M1 inc: 1}",
+then a following "{alive M1 inc: 2}" will invalidate that message.
+*/
+
+type memberlistBroadcast struct {
+	node   string
+	msg    []byte
+	notify chan struct{}
+}
+
+func (b *memberlistBroadcast) Invalidates(other Broadcast) bool {
+	// Check if that broadcast is a memberlist type
+	mb, ok := other.(*memberlistBroadcast)
+	if !ok {
+		return false
+	}
+
+	// Invalidates any message about the same node
+	return b.node == mb.node
+}
+
+func (b *memberlistBroadcast) Message() []byte {
+	return b.msg
+}
+
+func (b *memberlistBroadcast) Finished() {
+	select {
+	case b.notify <- struct{}{}:
+	default:
+	}
+}
+
+// encodeAndBroadcast encodes a message and enqueues it for broadcast. Fails
+// silently if there is an encoding error.
+func (m *Memberlist) encodeAndBroadcast(node string, msgType messageType, msg interface{}) {
+	m.encodeBroadcastNotify(node, msgType, msg, nil)
+}
+
+// encodeBroadcastNotify encodes a message and enqueues it for broadcast
+// and notifies the given channel when transmission is finished. Fails
+// silently if there is an encoding error.
+func (m *Memberlist) encodeBroadcastNotify(node string, msgType messageType, msg interface{}, notify chan struct{}) {
+	buf, err := encode(msgType, msg)
+	if err != nil {
+		m.logger.Printf("[ERR] memberlist: Failed to encode message for broadcast: %s", err)
+	} else {
+		m.queueBroadcast(node, buf.Bytes(), notify)
+	}
+}
+
+// queueBroadcast is used to start dissemination of a message. It will be
+// sent up to a configured number of times. The message could potentially
+// be invalidated by a future message about the same node.
+func (m *Memberlist) queueBroadcast(node string, msg []byte, notify chan struct{}) {
+	b := &memberlistBroadcast{node, msg, notify}
+	m.broadcasts.QueueBroadcast(b)
+}
+
+// getBroadcasts is used to return a slice of broadcasts to send up to
+// a maximum byte size, while imposing a per-broadcast overhead.
This is used +// to fill a UDP packet with piggybacked data +func (m *Memberlist) getBroadcasts(overhead, limit int) [][]byte { + // Get memberlist messages first + toSend := m.broadcasts.GetBroadcasts(overhead, limit) + + // Check if the user has anything to broadcast + d := m.config.Delegate + if d != nil { + // Determine the bytes used already + bytesUsed := 0 + for _, msg := range toSend { + bytesUsed += len(msg) + overhead + } + + // Check space remaining for user messages + avail := limit - bytesUsed + if avail > overhead+userMsgOverhead { + userMsgs := d.GetBroadcasts(overhead+userMsgOverhead, avail) + + // Frame each user message + for _, msg := range userMsgs { + buf := make([]byte, 1, len(msg)+1) + buf[0] = byte(userMsg) + buf = append(buf, msg...) + toSend = append(toSend, buf) + } + } + } + return toSend +} diff --git a/vendor/github.com/hashicorp/memberlist/config.go b/vendor/github.com/hashicorp/memberlist/config.go new file mode 100644 index 0000000000..1c13bfcd36 --- /dev/null +++ b/vendor/github.com/hashicorp/memberlist/config.go @@ -0,0 +1,277 @@ +package memberlist + +import ( + "io" + "log" + "os" + "time" +) + +type Config struct { + // The name of this node. This must be unique in the cluster. + Name string + + // Configuration related to what address to bind to and ports to + // listen on. The port is used for both UDP and TCP gossip. + // It is assumed other nodes are running on this port, but they + // do not need to. + BindAddr string + BindPort int + + // Configuration related to what address to advertise to other + // cluster members. Used for nat traversal. + AdvertiseAddr string + AdvertisePort int + + // ProtocolVersion is the configured protocol version that we + // will _speak_. This must be between ProtocolVersionMin and + // ProtocolVersionMax. + ProtocolVersion uint8 + + // TCPTimeout is the timeout for establishing a TCP connection with + // a remote node for a full state sync. + TCPTimeout time.Duration + + // IndirectChecks is the number of nodes that will be asked to perform + // an indirect probe of a node in the case a direct probe fails. Memberlist + // waits for an ack from any single indirect node, so increasing this + // number will increase the likelihood that an indirect probe will succeed + // at the expense of bandwidth. + IndirectChecks int + + // RetransmitMult is the multiplier for the number of retransmissions + // that are attempted for messages broadcasted over gossip. The actual + // count of retransmissions is calculated using the formula: + // + // Retransmits = RetransmitMult * log(N+1) + // + // This allows the retransmits to scale properly with cluster size. The + // higher the multiplier, the more likely a failed broadcast is to converge + // at the expense of increased bandwidth. + RetransmitMult int + + // SuspicionMult is the multiplier for determining the time an + // inaccessible node is considered suspect before declaring it dead. + // The actual timeout is calculated using the formula: + // + // SuspicionTimeout = SuspicionMult * log(N+1) * ProbeInterval + // + // This allows the timeout to scale properly with expected propagation + // delay with a larger cluster size. The higher the multiplier, the longer + // an inaccessible node is considered part of the cluster before declaring + // it dead, giving that suspect node more time to refute if it is indeed + // still alive. + SuspicionMult int + + // SuspicionMaxTimeoutMult is the multiplier applied to the + // SuspicionTimeout used as an upper bound on detection time. 
This max
+	// timeout is calculated using the formula:
+	//
+	// SuspicionMaxTimeout = SuspicionMaxTimeoutMult * SuspicionTimeout
+	//
+	// If everything is working properly, confirmations from other nodes will
+	// accelerate suspicion timers in a manner which will cause the timeout
+	// to reach the base SuspicionTimeout before that elapses, so this value
+	// will typically only come into play if a node is experiencing issues
+	// communicating with other nodes. It should be set to something fairly
+	// large so that a node having problems will have a lot of chances to
+	// recover before falsely declaring other nodes as failed, but short
+	// enough for a legitimately isolated node to still make progress marking
+	// nodes failed in a reasonable amount of time.
+	SuspicionMaxTimeoutMult int
+
+	// PushPullInterval is the interval between complete state syncs.
+	// Complete state syncs are done with a single node over TCP and are
+	// quite expensive relative to standard gossiped messages. Setting this
+	// to zero will disable state push/pull syncs completely.
+	//
+	// Setting this interval lower (more frequent) will increase convergence
+	// speeds across larger clusters at the expense of increased bandwidth
+	// usage.
+	PushPullInterval time.Duration
+
+	// ProbeInterval and ProbeTimeout are used to configure probing
+	// behavior for memberlist.
+	//
+	// ProbeInterval is the interval between random node probes. Setting
+	// this lower (more frequent) will cause the memberlist cluster to detect
+	// failed nodes more quickly at the expense of increased bandwidth usage.
+	//
+	// ProbeTimeout is the timeout to wait for an ack from a probed node
+	// before assuming it is unhealthy. This should be set to the 99th
+	// percentile of RTT (round-trip time) on your network.
+	ProbeInterval time.Duration
+	ProbeTimeout  time.Duration
+
+	// DisableTcpPings will turn off the fallback TCP pings that are attempted
+	// if the direct UDP ping fails. These get pipelined along with the
+	// indirect UDP pings.
+	DisableTcpPings bool
+
+	// AwarenessMaxMultiplier will increase the probe interval if the node
+	// becomes aware that it might be degraded and not meeting the soft real
+	// time requirements to reliably probe other nodes.
+	AwarenessMaxMultiplier int
+
+	// GossipInterval and GossipNodes are used to configure the gossip
+	// behavior of memberlist.
+	//
+	// GossipInterval is the interval between sending messages that need
+	// to be gossiped that haven't been able to piggyback on probing messages.
+	// If this is set to zero, non-piggyback gossip is disabled. By lowering
+	// this value (more frequent) gossip messages are propagated across
+	// the cluster more quickly at the expense of increased bandwidth.
+	//
+	// GossipNodes is the number of random nodes to send gossip messages to
+	// per GossipInterval. Increasing this number causes the gossip messages
+	// to propagate across the cluster more quickly at the expense of
+	// increased bandwidth.
+	//
+	// GossipToTheDeadTime is the interval during which we will still try to
+	// gossip to a node after it has been marked dead. This gives it a chance
+	// to refute.
+	GossipInterval      time.Duration
+	GossipNodes         int
+	GossipToTheDeadTime time.Duration
+
+	// EnableCompression is used to control message compression. This can
+	// be used to reduce bandwidth usage at the cost of slightly more CPU
+	// utilization. This is only available starting at protocol version 1.
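+	// Compressed payloads currently use the LZW algorithm (see
+	// compressionType in net.go).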
+ EnableCompression bool + + // SecretKey is used to initialize the primary encryption key in a keyring. + // The primary encryption key is the only key used to encrypt messages and + // the first key used while attempting to decrypt messages. Providing a + // value for this primary key will enable message-level encryption and + // verification, and automatically install the key onto the keyring. + // The value should be either 16, 24, or 32 bytes to select AES-128, + // AES-192, or AES-256. + SecretKey []byte + + // The keyring holds all of the encryption keys used internally. It is + // automatically initialized using the SecretKey and SecretKeys values. + Keyring *Keyring + + // Delegate and Events are delegates for receiving and providing + // data to memberlist via callback mechanisms. For Delegate, see + // the Delegate interface. For Events, see the EventDelegate interface. + // + // The DelegateProtocolMin/Max are used to guarantee protocol-compatibility + // for any custom messages that the delegate might do (broadcasts, + // local/remote state, etc.). If you don't set these, then the protocol + // versions will just be zero, and version compliance won't be done. + Delegate Delegate + DelegateProtocolVersion uint8 + DelegateProtocolMin uint8 + DelegateProtocolMax uint8 + Events EventDelegate + Conflict ConflictDelegate + Merge MergeDelegate + Ping PingDelegate + Alive AliveDelegate + + // DNSConfigPath points to the system's DNS config file, usually located + // at /etc/resolv.conf. It can be overridden via config for easier testing. + DNSConfigPath string + + // LogOutput is the writer where logs should be sent. If this is not + // set, logging will go to stderr by default. You cannot specify both LogOutput + // and Logger at the same time. + LogOutput io.Writer + + // Logger is a custom logger which you provide. If Logger is set, it will use + // this for the internal logger. If Logger is not set, it will fall back to the + // behavior for using LogOutput. You cannot specify both LogOutput and Logger + // at the same time. + Logger *log.Logger + + // Size of Memberlist's internal channel which handles UDP messages. The + // size of this determines the size of the queue which Memberlist will keep + // while UDP messages are handled. + HandoffQueueDepth int + + // Maximum number of bytes that memberlist expects UDP messages to be. A safe + // value for this is typically 1400 bytes (which is the default.) However, + // depending on your network's MTU (Maximum Transmission Unit) you may be able + // to increase this. + UDPBufferSize int +} + +// DefaultLANConfig returns a sane set of configurations for Memberlist. +// It uses the hostname as the node name, and otherwise sets very conservative +// values that are sane for most LAN environments. The default configuration +// errs on the side of caution, choosing values that are optimized +// for higher convergence at the cost of higher bandwidth usage. Regardless, +// these values are a good starting point when getting started with memberlist. 
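+//
+// A sketch of picking a profile and overriding it (the `key` variable is a
+// hypothetical 16-, 24-, or 32-byte secret; see Config.SecretKey):
+//
+//	conf := DefaultLANConfig() // or DefaultWANConfig() / DefaultLocalConfig()
+//	conf.BindAddr = "10.0.0.5"
+//	conf.SecretKey = key // enables message-level encryption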
+func DefaultLANConfig() *Config {
+	hostname, _ := os.Hostname()
+	return &Config{
+		Name:                    hostname,
+		BindAddr:                "0.0.0.0",
+		BindPort:                7946,
+		AdvertiseAddr:           "",
+		AdvertisePort:           7946,
+		ProtocolVersion:         ProtocolVersion2Compatible,
+		TCPTimeout:              10 * time.Second,       // Timeout after 10 seconds
+		IndirectChecks:          3,                      // Use 3 nodes for the indirect ping
+		RetransmitMult:          4,                      // Retransmit a message 4 * log(N+1) nodes
+		SuspicionMult:           5,                      // Suspect a node for 5 * log(N+1) * Interval
+		SuspicionMaxTimeoutMult: 6,                      // For 10k nodes this will give a max timeout of 120 seconds
+		PushPullInterval:        30 * time.Second,       // Low frequency
+		ProbeTimeout:            500 * time.Millisecond, // Reasonable RTT time for LAN
+		ProbeInterval:           1 * time.Second,        // Failure check every second
+		DisableTcpPings:         false,                  // TCP pings are safe, even with mixed versions
+		AwarenessMaxMultiplier:  8,                      // Probe interval backs off to 8 seconds
+
+		GossipNodes:         3,                      // Gossip to 3 nodes
+		GossipInterval:      200 * time.Millisecond, // Gossip more rapidly
+		GossipToTheDeadTime: 30 * time.Second,       // Same as push/pull
+
+		EnableCompression: true, // Enable compression by default
+
+		SecretKey: nil,
+		Keyring:   nil,
+
+		DNSConfigPath: "/etc/resolv.conf",
+
+		HandoffQueueDepth: 1024,
+		UDPBufferSize:     1400,
+	}
+}
+
+// DefaultWANConfig works like DefaultLANConfig, however it returns a configuration
+// that is optimized for most WAN environments. The default configuration is
+// still very conservative and errs on the side of caution.
+func DefaultWANConfig() *Config {
+	conf := DefaultLANConfig()
+	conf.TCPTimeout = 30 * time.Second
+	conf.SuspicionMult = 6
+	conf.PushPullInterval = 60 * time.Second
+	conf.ProbeTimeout = 3 * time.Second
+	conf.ProbeInterval = 5 * time.Second
+	conf.GossipNodes = 4 // Gossip less frequently, but to an additional node
+	conf.GossipInterval = 500 * time.Millisecond
+	conf.GossipToTheDeadTime = 60 * time.Second
+	return conf
+}
+
+// DefaultLocalConfig works like DefaultLANConfig, however it returns a configuration
+// that is optimized for a local loopback environment. The default configuration is
+// still very conservative and errs on the side of caution.
+func DefaultLocalConfig() *Config {
+	conf := DefaultLANConfig()
+	conf.TCPTimeout = time.Second
+	conf.IndirectChecks = 1
+	conf.RetransmitMult = 2
+	conf.SuspicionMult = 3
+	conf.PushPullInterval = 15 * time.Second
+	conf.ProbeTimeout = 200 * time.Millisecond
+	conf.ProbeInterval = time.Second
+	conf.GossipInterval = 100 * time.Millisecond
+	conf.GossipToTheDeadTime = 15 * time.Second
+	return conf
+}
+
+// EncryptionEnabled returns whether or not encryption is enabled.
+func (c *Config) EncryptionEnabled() bool {
+	return c.Keyring != nil && len(c.Keyring.GetKeys()) > 0
+}
diff --git a/vendor/github.com/hashicorp/memberlist/conflict_delegate.go b/vendor/github.com/hashicorp/memberlist/conflict_delegate.go
new file mode 100644
index 0000000000..f52b136eba
--- /dev/null
+++ b/vendor/github.com/hashicorp/memberlist/conflict_delegate.go
@@ -0,0 +1,10 @@
+package memberlist
+
+// ConflictDelegate is used to inform a client that
+// a node has attempted to join which would result in a
+// name conflict. This happens if two clients are configured
+// with the same name but different addresses.
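+//
+// A minimal implementation sketch that only logs the conflict (the
+// logConflict type is hypothetical):
+//
+//	type logConflict struct{ logger *log.Logger }
+//
+//	func (d *logConflict) NotifyConflict(existing, other *Node) {
+//		d.logger.Printf("[WARN] name %q claimed by %s and %s",
+//			existing.Name, existing.Addr, other.Addr)
+//	}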
+type ConflictDelegate interface {
+	// NotifyConflict is invoked when a name conflict is detected
+	NotifyConflict(existing, other *Node)
+}
diff --git a/vendor/github.com/hashicorp/memberlist/delegate.go b/vendor/github.com/hashicorp/memberlist/delegate.go
new file mode 100644
index 0000000000..66aa2da796
--- /dev/null
+++ b/vendor/github.com/hashicorp/memberlist/delegate.go
@@ -0,0 +1,37 @@
+package memberlist
+
+// Delegate is the interface that clients must implement if they want to hook
+// into the gossip layer of Memberlist. All the methods must be thread-safe,
+// as they can and generally will be called concurrently.
+type Delegate interface {
+	// NodeMeta is used to retrieve meta-data about the current node
+	// when broadcasting an alive message. Its length is limited to
+	// the given byte size. This metadata is available in the Node structure.
+	NodeMeta(limit int) []byte
+
+	// NotifyMsg is called when a user-data message is received.
+	// Care should be taken that this method does not block, since doing
+	// so would block the entire UDP packet receive loop. Additionally, the byte
+	// slice may be modified after the call returns, so it should be copied if needed.
+	NotifyMsg([]byte)
+
+	// GetBroadcasts is called when user data messages can be broadcast.
+	// It can return a list of buffers to send. Each buffer should assume an
+	// overhead as provided with a limit on the total byte size allowed.
+	// The total byte size of the resulting data to send must not exceed
+	// the limit. Care should be taken that this method does not block,
+	// since doing so would block the entire UDP packet receive loop.
+	GetBroadcasts(overhead, limit int) [][]byte
+
+	// LocalState is used for a TCP Push/Pull. This is sent to
+	// the remote side in addition to the membership information. Any
+	// data can be sent here. See MergeRemoteState as well. The `join`
+	// boolean indicates this is for a join instead of a push/pull.
+	LocalState(join bool) []byte
+
+	// MergeRemoteState is invoked after a TCP Push/Pull. This is the
+	// state received from the remote side and is the result of the
+	// remote side's LocalState call. The 'join'
+	// boolean indicates this is for a join instead of a push/pull.
+	MergeRemoteState(buf []byte, join bool)
+}
diff --git a/vendor/github.com/hashicorp/memberlist/event_delegate.go b/vendor/github.com/hashicorp/memberlist/event_delegate.go
new file mode 100644
index 0000000000..35e2a56fdd
--- /dev/null
+++ b/vendor/github.com/hashicorp/memberlist/event_delegate.go
@@ -0,0 +1,61 @@
+package memberlist
+
+// EventDelegate is a simpler delegate that is used only to receive
+// notifications about members joining and leaving. The methods in this
+// delegate may be called by multiple goroutines, but never concurrently.
+// This allows you to reason about ordering.
+type EventDelegate interface {
+	// NotifyJoin is invoked when a node is detected to have joined.
+	// The Node argument must not be modified.
+	NotifyJoin(*Node)
+
+	// NotifyLeave is invoked when a node is detected to have left.
+	// The Node argument must not be modified.
+	NotifyLeave(*Node)
+
+	// NotifyUpdate is invoked when a node is detected to have
+	// updated, usually involving the meta data. The Node argument
+	// must not be modified.
+	NotifyUpdate(*Node)
+}
+
+// ChannelEventDelegate is used to enable an application to receive
+// events about joins and leaves over a channel instead of a direct
+// function call.
+// +// Care must be taken that events are processed in a timely manner from +// the channel, since this delegate will block until an event can be sent. +type ChannelEventDelegate struct { + Ch chan<- NodeEvent +} + +// NodeEventType are the types of events that can be sent from the +// ChannelEventDelegate. +type NodeEventType int + +const ( + NodeJoin NodeEventType = iota + NodeLeave + NodeUpdate +) + +// NodeEvent is a single event related to node activity in the memberlist. +// The Node member of this struct must not be directly modified. It is passed +// as a pointer to avoid unnecessary copies. If you wish to modify the node, +// make a copy first. +type NodeEvent struct { + Event NodeEventType + Node *Node +} + +func (c *ChannelEventDelegate) NotifyJoin(n *Node) { + c.Ch <- NodeEvent{NodeJoin, n} +} + +func (c *ChannelEventDelegate) NotifyLeave(n *Node) { + c.Ch <- NodeEvent{NodeLeave, n} +} + +func (c *ChannelEventDelegate) NotifyUpdate(n *Node) { + c.Ch <- NodeEvent{NodeUpdate, n} +} diff --git a/vendor/github.com/hashicorp/memberlist/keyring.go b/vendor/github.com/hashicorp/memberlist/keyring.go new file mode 100644 index 0000000000..a2774a0ce0 --- /dev/null +++ b/vendor/github.com/hashicorp/memberlist/keyring.go @@ -0,0 +1,160 @@ +package memberlist + +import ( + "bytes" + "fmt" + "sync" +) + +type Keyring struct { + // Keys stores the key data used during encryption and decryption. It is + // ordered in such a way where the first key (index 0) is the primary key, + // which is used for encrypting messages, and is the first key tried during + // message decryption. + keys [][]byte + + // The keyring lock is used while performing IO operations on the keyring. + l sync.Mutex +} + +// Init allocates substructures +func (k *Keyring) init() { + k.keys = make([][]byte, 0) +} + +// NewKeyring constructs a new container for a set of encryption keys. The +// keyring contains all key data used internally by memberlist. +// +// While creating a new keyring, you must do one of: +// - Omit keys and primary key, effectively disabling encryption +// - Pass a set of keys plus the primary key +// - Pass only a primary key +// +// If only a primary key is passed, then it will be automatically added to the +// keyring. If creating a keyring with multiple keys, one key must be designated +// primary by passing it as the primaryKey. If the primaryKey does not exist in +// the list of secondary keys, it will be automatically added at position 0. +// +// A key should be either 16, 24, or 32 bytes to select AES-128, +// AES-192, or AES-256. +func NewKeyring(keys [][]byte, primaryKey []byte) (*Keyring, error) { + keyring := &Keyring{} + keyring.init() + + if len(keys) > 0 || len(primaryKey) > 0 { + if len(primaryKey) == 0 { + return nil, fmt.Errorf("Empty primary key not allowed") + } + if err := keyring.AddKey(primaryKey); err != nil { + return nil, err + } + for _, key := range keys { + if err := keyring.AddKey(key); err != nil { + return nil, err + } + } + } + + return keyring, nil +} + +// ValidateKey will check to see if the key is valid and returns an error if not. +// +// key should be either 16, 24, or 32 bytes to select AES-128, +// AES-192, or AES-256. +func ValidateKey(key []byte) error { + if l := len(key); l != 16 && l != 24 && l != 32 { + return fmt.Errorf("key size must be 16, 24 or 32 bytes") + } + return nil +} + +// AddKey will install a new key on the ring. Adding a key to the ring will make +// it available for use in decryption. 
If the key already exists on the ring, +// this function will just return noop. +// +// key should be either 16, 24, or 32 bytes to select AES-128, +// AES-192, or AES-256. +func (k *Keyring) AddKey(key []byte) error { + if err := ValidateKey(key); err != nil { + return err + } + + // No-op if key is already installed + for _, installedKey := range k.keys { + if bytes.Equal(installedKey, key) { + return nil + } + } + + keys := append(k.keys, key) + primaryKey := k.GetPrimaryKey() + if primaryKey == nil { + primaryKey = key + } + k.installKeys(keys, primaryKey) + return nil +} + +// UseKey changes the key used to encrypt messages. This is the only key used to +// encrypt messages, so peers should know this key before this method is called. +func (k *Keyring) UseKey(key []byte) error { + for _, installedKey := range k.keys { + if bytes.Equal(key, installedKey) { + k.installKeys(k.keys, key) + return nil + } + } + return fmt.Errorf("Requested key is not in the keyring") +} + +// RemoveKey drops a key from the keyring. This will return an error if the key +// requested for removal is currently at position 0 (primary key). +func (k *Keyring) RemoveKey(key []byte) error { + if bytes.Equal(key, k.keys[0]) { + return fmt.Errorf("Removing the primary key is not allowed") + } + for i, installedKey := range k.keys { + if bytes.Equal(key, installedKey) { + keys := append(k.keys[:i], k.keys[i+1:]...) + k.installKeys(keys, k.keys[0]) + } + } + return nil +} + +// installKeys will take out a lock on the keyring, and replace the keys with a +// new set of keys. The key indicated by primaryKey will be installed as the new +// primary key. +func (k *Keyring) installKeys(keys [][]byte, primaryKey []byte) { + k.l.Lock() + defer k.l.Unlock() + + newKeys := [][]byte{primaryKey} + for _, key := range keys { + if !bytes.Equal(key, primaryKey) { + newKeys = append(newKeys, key) + } + } + k.keys = newKeys +} + +// GetKeys returns the current set of keys on the ring. +func (k *Keyring) GetKeys() [][]byte { + k.l.Lock() + defer k.l.Unlock() + + return k.keys +} + +// GetPrimaryKey returns the key on the ring at position 0. This is the key used +// for encrypting messages, and is the first key tried for decrypting messages. +func (k *Keyring) GetPrimaryKey() (key []byte) { + k.l.Lock() + defer k.l.Unlock() + + if len(k.keys) > 0 { + key = k.keys[0] + } + return +} diff --git a/vendor/github.com/hashicorp/memberlist/logging.go b/vendor/github.com/hashicorp/memberlist/logging.go new file mode 100644 index 0000000000..f31acfb2fa --- /dev/null +++ b/vendor/github.com/hashicorp/memberlist/logging.go @@ -0,0 +1,22 @@ +package memberlist + +import ( + "fmt" + "net" +) + +func LogAddress(addr net.Addr) string { + if addr == nil { + return "from=" + } + + return fmt.Sprintf("from=%s", addr.String()) +} + +func LogConn(conn net.Conn) string { + if conn == nil { + return LogAddress(nil) + } + + return LogAddress(conn.RemoteAddr()) +} diff --git a/vendor/github.com/hashicorp/memberlist/memberlist.go b/vendor/github.com/hashicorp/memberlist/memberlist.go new file mode 100644 index 0000000000..371e3294b0 --- /dev/null +++ b/vendor/github.com/hashicorp/memberlist/memberlist.go @@ -0,0 +1,660 @@ +/* +memberlist is a library that manages cluster +membership and member failure detection using a gossip based protocol. + +The use cases for such a library are far-reaching: all distributed systems +require membership, and memberlist is a re-usable solution to managing +cluster membership and node failure detection. 
+ +memberlist is eventually consistent but converges quickly on average. +The speed at which it converges can be heavily tuned via various knobs +on the protocol. Node failures are detected and network partitions are partially +tolerated by attempting to communicate to potentially dead nodes through +multiple routes. +*/ +package memberlist + +import ( + "fmt" + "log" + "net" + "os" + "strconv" + "strings" + "sync" + "time" + + "github.com/hashicorp/go-multierror" + sockaddr "github.com/hashicorp/go-sockaddr" + "github.com/miekg/dns" +) + +type Memberlist struct { + sequenceNum uint32 // Local sequence number + incarnation uint32 // Local incarnation number + numNodes uint32 // Number of known nodes (estimate) + + config *Config + shutdown bool + shutdownCh chan struct{} + leave bool + leaveBroadcast chan struct{} + + udpListener *net.UDPConn + tcpListener *net.TCPListener + handoff chan msgHandoff + + nodeLock sync.RWMutex + nodes []*nodeState // Known nodes + nodeMap map[string]*nodeState // Maps Addr.String() -> NodeState + nodeTimers map[string]*suspicion // Maps Addr.String() -> suspicion timer + awareness *awareness + + tickerLock sync.Mutex + tickers []*time.Ticker + stopTick chan struct{} + probeIndex int + + ackLock sync.Mutex + ackHandlers map[uint32]*ackHandler + + broadcasts *TransmitLimitedQueue + + logger *log.Logger +} + +// newMemberlist creates the network listeners. +// Does not schedule execution of background maintenance. +func newMemberlist(conf *Config) (*Memberlist, error) { + if conf.ProtocolVersion < ProtocolVersionMin { + return nil, fmt.Errorf("Protocol version '%d' too low. Must be in range: [%d, %d]", + conf.ProtocolVersion, ProtocolVersionMin, ProtocolVersionMax) + } else if conf.ProtocolVersion > ProtocolVersionMax { + return nil, fmt.Errorf("Protocol version '%d' too high. Must be in range: [%d, %d]", + conf.ProtocolVersion, ProtocolVersionMin, ProtocolVersionMax) + } + + if len(conf.SecretKey) > 0 { + if conf.Keyring == nil { + keyring, err := NewKeyring(nil, conf.SecretKey) + if err != nil { + return nil, err + } + conf.Keyring = keyring + } else { + if err := conf.Keyring.AddKey(conf.SecretKey); err != nil { + return nil, err + } + if err := conf.Keyring.UseKey(conf.SecretKey); err != nil { + return nil, err + } + } + } + + tcpAddr := &net.TCPAddr{IP: net.ParseIP(conf.BindAddr), Port: conf.BindPort} + tcpLn, err := net.ListenTCP("tcp", tcpAddr) + if err != nil { + return nil, fmt.Errorf("Failed to start TCP listener. Err: %s", err) + } + if conf.BindPort == 0 { + conf.BindPort = tcpLn.Addr().(*net.TCPAddr).Port + } + + udpAddr := &net.UDPAddr{IP: net.ParseIP(conf.BindAddr), Port: conf.BindPort} + udpLn, err := net.ListenUDP("udp", udpAddr) + if err != nil { + tcpLn.Close() + return nil, fmt.Errorf("Failed to start UDP listener. Err: %s", err) + } + + // Set the UDP receive window size + setUDPRecvBuf(udpLn) + + if conf.LogOutput != nil && conf.Logger != nil { + return nil, fmt.Errorf("Cannot specify both LogOutput and Logger. 
Please choose a single log configuration setting.") + } + + logDest := conf.LogOutput + if logDest == nil { + logDest = os.Stderr + } + + logger := conf.Logger + if logger == nil { + logger = log.New(logDest, "", log.LstdFlags) + } + + m := &Memberlist{ + config: conf, + shutdownCh: make(chan struct{}), + leaveBroadcast: make(chan struct{}, 1), + udpListener: udpLn, + tcpListener: tcpLn, + handoff: make(chan msgHandoff, conf.HandoffQueueDepth), + nodeMap: make(map[string]*nodeState), + nodeTimers: make(map[string]*suspicion), + awareness: newAwareness(conf.AwarenessMaxMultiplier), + ackHandlers: make(map[uint32]*ackHandler), + broadcasts: &TransmitLimitedQueue{RetransmitMult: conf.RetransmitMult}, + logger: logger, + } + m.broadcasts.NumNodes = func() int { + return m.estNumNodes() + } + go m.tcpListen() + go m.udpListen() + go m.udpHandler() + return m, nil +} + +// Create will create a new Memberlist using the given configuration. +// This will not connect to any other node (see Join) yet, but will start +// all the listeners to allow other nodes to join this memberlist. +// After creating a Memberlist, the configuration given should not be +// modified by the user anymore. +func Create(conf *Config) (*Memberlist, error) { + m, err := newMemberlist(conf) + if err != nil { + return nil, err + } + if err := m.setAlive(); err != nil { + m.Shutdown() + return nil, err + } + m.schedule() + return m, nil +} + +// Join is used to take an existing Memberlist and attempt to join a cluster +// by contacting all the given hosts and performing a state sync. Initially, +// the Memberlist only contains our own state, so doing this will cause +// remote nodes to become aware of the existence of this node, effectively +// joining the cluster. +// +// This returns the number of hosts successfully contacted and an error if +// none could be reached. If an error is returned, the node did not successfully +// join the cluster. +func (m *Memberlist) Join(existing []string) (int, error) { + numSuccess := 0 + var errs error + for _, exist := range existing { + addrs, err := m.resolveAddr(exist) + if err != nil { + err = fmt.Errorf("Failed to resolve %s: %v", exist, err) + errs = multierror.Append(errs, err) + m.logger.Printf("[WARN] memberlist: %v", err) + continue + } + + for _, addr := range addrs { + if err := m.pushPullNode(addr.ip, addr.port, true); err != nil { + err = fmt.Errorf("Failed to join %s: %v", addr.ip, err) + errs = multierror.Append(errs, err) + m.logger.Printf("[DEBUG] memberlist: %v", err) + continue + } + numSuccess++ + } + + } + if numSuccess > 0 { + errs = nil + } + return numSuccess, errs +} + +// ipPort holds information about a node we want to try to join. +type ipPort struct { + ip net.IP + port uint16 +} + +// tcpLookupIP is a helper to initiate a TCP-based DNS lookup for the given host. +// The built-in Go resolver will do a UDP lookup first, and will only use TCP if +// the response has the truncate bit set, which isn't common on DNS servers like +// Consul's. By doing the TCP lookup directly, we get the best chance for the +// largest list of hosts to join. Since joins are relatively rare events, it's ok +// to do this rather expensive operation. +func (m *Memberlist) tcpLookupIP(host string, defaultPort uint16) ([]ipPort, error) { + // Don't attempt any TCP lookups against non-fully qualified domain + // names, since those will likely come from the resolv.conf file. 
+	if !strings.Contains(host, ".") {
+		return nil, nil
+	}
+
+	// Make sure the domain name is terminated with a dot (we know there's
+	// at least one character at this point).
+	dn := host
+	if dn[len(dn)-1] != '.' {
+		dn = dn + "."
+	}
+
+	// See if we can find a server to try.
+	cc, err := dns.ClientConfigFromFile(m.config.DNSConfigPath)
+	if err != nil {
+		return nil, err
+	}
+	if len(cc.Servers) > 0 {
+		// We support host:port in the DNS config, but need to add the
+		// default port if one is not supplied.
+		server := cc.Servers[0]
+		if !hasPort(server) {
+			server = net.JoinHostPort(server, cc.Port)
+		}
+
+		// Do the lookup.
+		c := new(dns.Client)
+		c.Net = "tcp"
+		msg := new(dns.Msg)
+		msg.SetQuestion(dn, dns.TypeANY)
+		in, _, err := c.Exchange(msg, server)
+		if err != nil {
+			return nil, err
+		}
+
+		// Handle any IPs we get back that we can attempt to join.
+		var ips []ipPort
+		for _, r := range in.Answer {
+			switch rr := r.(type) {
+			case (*dns.A):
+				ips = append(ips, ipPort{rr.A, defaultPort})
+			case (*dns.AAAA):
+				ips = append(ips, ipPort{rr.AAAA, defaultPort})
+			case (*dns.CNAME):
+				m.logger.Printf("[DEBUG] memberlist: Ignoring CNAME RR in TCP-first answer for '%s'", host)
+			}
+		}
+		return ips, nil
+	}
+
+	return nil, nil
+}
+
+// resolveAddr is used to resolve the given host string into a list of IPs
+// and a port, returning an error on failure. If no port is given, the
+// configured bind port is used as the default.
+func (m *Memberlist) resolveAddr(hostStr string) ([]ipPort, error) {
+	// Normalize the incoming string to host:port so we can apply Go's
+	// parser to it.
+	port := uint16(0)
+	if !hasPort(hostStr) {
+		hostStr += ":" + strconv.Itoa(m.config.BindPort)
+	}
+	host, sport, err := net.SplitHostPort(hostStr)
+	if err != nil {
+		return nil, err
+	}
+
+	// This will capture the supplied port, or the default one added above.
+	lport, err := strconv.ParseUint(sport, 10, 16)
+	if err != nil {
+		return nil, err
+	}
+	port = uint16(lport)
+
+	// If it looks like an IP address we are done. The SplitHostPort() above
+	// will make sure the host part is in good shape for parsing, even for
+	// IPv6 addresses.
+	if ip := net.ParseIP(host); ip != nil {
+		return []ipPort{ipPort{ip, port}}, nil
+	}
+
+	// First try TCP so we have the best chance for the largest list of
+	// hosts to join. If this fails it's not fatal since this isn't a standard
+	// way to query DNS, and we have a fallback below.
+	ips, err := m.tcpLookupIP(host, port)
+	if err != nil {
+		m.logger.Printf("[DEBUG] memberlist: TCP-first lookup failed for '%s', falling back to UDP: %s", hostStr, err)
+	}
+	if len(ips) > 0 {
+		return ips, nil
+	}
+
+	// If TCP didn't yield anything then use the normal Go resolver which
+	// will try UDP, then might possibly try TCP again if the UDP response
+	// indicates it was truncated.
+	ans, err := net.LookupIP(host)
+	if err != nil {
+		return nil, err
+	}
+	ips = make([]ipPort, 0, len(ans))
+	for _, ip := range ans {
+		ips = append(ips, ipPort{ip, port})
+	}
+	return ips, nil
+}
+
+// setAlive is used to mark this node as being alive. This is the same
+// as if we received an alive notification over our own network channel
+// for ourselves.
+func (m *Memberlist) setAlive() error {
+	var advertiseAddr net.IP
+	var advertisePort int
+	if m.config.AdvertiseAddr != "" {
+		// If AdvertiseAddr is not empty, then advertise
+		// the given address and port.
+ ip := net.ParseIP(m.config.AdvertiseAddr) + if ip == nil { + return fmt.Errorf("Failed to parse advertise address!") + } + + // Ensure IPv4 conversion if necessary + if ip4 := ip.To4(); ip4 != nil { + ip = ip4 + } + + advertiseAddr = ip + advertisePort = m.config.AdvertisePort + } else { + if m.config.BindAddr == "0.0.0.0" { + // Otherwise, if we're not bound to a specific IP, let's use a suitable + // private IP address. + var err error + m.config.AdvertiseAddr, err = sockaddr.GetPrivateIP() + if err != nil { + return fmt.Errorf("Failed to get interface addresses: %v", err) + } + if m.config.AdvertiseAddr == "" { + return fmt.Errorf("No private IP address found, and explicit IP not provided") + } + + advertiseAddr = net.ParseIP(m.config.AdvertiseAddr) + if advertiseAddr == nil { + return fmt.Errorf("Failed to parse advertise address: %q", m.config.AdvertiseAddr) + } + } else { + // Use the IP that we're bound to. + addr := m.tcpListener.Addr().(*net.TCPAddr) + advertiseAddr = addr.IP + } + + // Use the port we are bound to. + advertisePort = m.tcpListener.Addr().(*net.TCPAddr).Port + } + + // Check if this is a public address without encryption + ipAddr, err := sockaddr.NewIPAddr(advertiseAddr.String()) + if err != nil { + return fmt.Errorf("Failed to parse interface addresses: %v", err) + } + + ifAddrs := []sockaddr.IfAddr{ + sockaddr.IfAddr{ + SockAddr: ipAddr, + }, + } + + _, publicIfs, err := sockaddr.IfByRFC("6890", ifAddrs) + if len(publicIfs) > 0 && !m.config.EncryptionEnabled() { + m.logger.Printf("[WARN] memberlist: Binding to public address without encryption!") + } + + // Get the node meta data + var meta []byte + if m.config.Delegate != nil { + meta = m.config.Delegate.NodeMeta(MetaMaxSize) + if len(meta) > MetaMaxSize { + panic("Node meta data provided is longer than the limit") + } + } + + a := alive{ + Incarnation: m.nextIncarnation(), + Node: m.config.Name, + Addr: advertiseAddr, + Port: uint16(advertisePort), + Meta: meta, + Vsn: []uint8{ + ProtocolVersionMin, ProtocolVersionMax, m.config.ProtocolVersion, + m.config.DelegateProtocolMin, m.config.DelegateProtocolMax, + m.config.DelegateProtocolVersion, + }, + } + m.aliveNode(&a, nil, true) + + return nil +} + +// LocalNode is used to return the local Node +func (m *Memberlist) LocalNode() *Node { + m.nodeLock.RLock() + defer m.nodeLock.RUnlock() + state := m.nodeMap[m.config.Name] + return &state.Node +} + +// UpdateNode is used to trigger re-advertising the local node. This is +// primarily used with a Delegate to support dynamic updates to the local +// meta data. This will block until the update message is successfully +// broadcasted to a member of the cluster, if any exist or until a specified +// timeout is reached. 
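+//
+// Sketch: after changing what the delegate's NodeMeta returns, re-advertise
+// (the timeout value here is illustrative):
+//
+//	if err := m.UpdateNode(10 * time.Second); err != nil {
+//		log.Printf("[ERR] memberlist: update not broadcast: %v", err)
+//	}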
+func (m *Memberlist) UpdateNode(timeout time.Duration) error {
+	// Get the node meta data
+	var meta []byte
+	if m.config.Delegate != nil {
+		meta = m.config.Delegate.NodeMeta(MetaMaxSize)
+		if len(meta) > MetaMaxSize {
+			panic("Node meta data provided is longer than the limit")
+		}
+	}
+
+	// Get the existing node
+	m.nodeLock.RLock()
+	state := m.nodeMap[m.config.Name]
+	m.nodeLock.RUnlock()
+
+	// Format a new alive message
+	a := alive{
+		Incarnation: m.nextIncarnation(),
+		Node:        m.config.Name,
+		Addr:        state.Addr,
+		Port:        state.Port,
+		Meta:        meta,
+		Vsn: []uint8{
+			ProtocolVersionMin, ProtocolVersionMax, m.config.ProtocolVersion,
+			m.config.DelegateProtocolMin, m.config.DelegateProtocolMax,
+			m.config.DelegateProtocolVersion,
+		},
+	}
+	notifyCh := make(chan struct{})
+	m.aliveNode(&a, notifyCh, true)
+
+	// Wait for the broadcast or a timeout
+	if m.anyAlive() {
+		var timeoutCh <-chan time.Time
+		if timeout > 0 {
+			timeoutCh = time.After(timeout)
+		}
+		select {
+		case <-notifyCh:
+		case <-timeoutCh:
+			return fmt.Errorf("timeout waiting for update broadcast")
+		}
+	}
+	return nil
+}
+
+// SendTo is used to directly send a message to another node, without
+// the use of the gossip mechanism. This will encode the message as a
+// user-data message, which a delegate will receive through NotifyMsg.
+// The actual data is transmitted over UDP, which means this is a
+// best-effort transmission mechanism, and the maximum size of the
+// message is the size of a single UDP datagram, after compression.
+// This method is DEPRECATED in favor of SendToUDP.
+func (m *Memberlist) SendTo(to net.Addr, msg []byte) error {
+	// Encode as a user message
+	buf := make([]byte, 1, len(msg)+1)
+	buf[0] = byte(userMsg)
+	buf = append(buf, msg...)
+
+	// Send the message
+	return m.rawSendMsgUDP(to, nil, buf)
+}
+
+// SendToUDP is used to directly send a message to another node, without
+// the use of the gossip mechanism. This will encode the message as a
+// user-data message, which a delegate will receive through NotifyMsg.
+// The actual data is transmitted over UDP, which means this is a
+// best-effort transmission mechanism, and the maximum size of the
+// message is the size of a single UDP datagram, after compression.
+func (m *Memberlist) SendToUDP(to *Node, msg []byte) error {
+	// Encode as a user message
+	buf := make([]byte, 1, len(msg)+1)
+	buf[0] = byte(userMsg)
+	buf = append(buf, msg...)
+
+	// Send the message
+	destAddr := &net.UDPAddr{IP: to.Addr, Port: int(to.Port)}
+	return m.rawSendMsgUDP(destAddr, to, buf)
+}
+
+// SendToTCP is used to directly send a message to another node, without
+// the use of the gossip mechanism. This will encode the message as a
+// user-data message, which a delegate will receive through NotifyMsg.
+// The actual data is transmitted over TCP, which means delivery
+// is guaranteed if no error is returned. There is no limit
+// to the size of the message.
+func (m *Memberlist) SendToTCP(to *Node, msg []byte) error {
+	// Send the message
+	destAddr := &net.TCPAddr{IP: to.Addr, Port: int(to.Port)}
+	return m.sendTCPUserMsg(destAddr, msg)
+}
+
+// Members returns a list of all known live nodes. The node structures
+// returned must not be modified. If you wish to modify a Node, make a
+// copy first.
+func (m *Memberlist) Members() []*Node {
+	m.nodeLock.RLock()
+	defer m.nodeLock.RUnlock()
+
+	nodes := make([]*Node, 0, len(m.nodes))
+	for _, n := range m.nodes {
+		if n.State != stateDead {
+			nodes = append(nodes, &n.Node)
+		}
+	}
+
+	return nodes
+}
+
+// NumMembers returns the number of alive nodes currently known. Between
+// the time of calling this and calling Members, the number of alive nodes
+// may have changed, so this shouldn't be used to determine how many
+// members will be returned by Members.
+func (m *Memberlist) NumMembers() (alive int) {
+	m.nodeLock.RLock()
+	defer m.nodeLock.RUnlock()
+
+	for _, n := range m.nodes {
+		if n.State != stateDead {
+			alive++
+		}
+	}
+
+	return
+}
+
+// Leave will broadcast a leave message but will not shut down the background
+// listeners, meaning the node will continue participating in gossip and state
+// updates.
+//
+// This will block until the leave message is successfully broadcasted to
+// a member of the cluster, if any exist or until a specified timeout
+// is reached.
+//
+// This method is safe to call multiple times, but must not be called
+// after the cluster is already shut down.
+func (m *Memberlist) Leave(timeout time.Duration) error {
+	m.nodeLock.Lock()
+	// We can't defer m.nodeLock.Unlock() because m.deadNode will also try to
+	// acquire a lock so we need to Unlock before that.
+
+	if m.shutdown {
+		m.nodeLock.Unlock()
+		panic("leave after shutdown")
+	}
+
+	if !m.leave {
+		m.leave = true
+
+		state, ok := m.nodeMap[m.config.Name]
+		m.nodeLock.Unlock()
+		if !ok {
+			m.logger.Printf("[WARN] memberlist: Leave but we're not in the node map.")
+			return nil
+		}
+
+		d := dead{
+			Incarnation: state.Incarnation,
+			Node:        state.Name,
+		}
+		m.deadNode(&d)
+
+		// Block until the broadcast goes out
+		if m.anyAlive() {
+			var timeoutCh <-chan time.Time
+			if timeout > 0 {
+				timeoutCh = time.After(timeout)
+			}
+			select {
+			case <-m.leaveBroadcast:
+			case <-timeoutCh:
+				return fmt.Errorf("timeout waiting for leave broadcast")
+			}
+		}
+	} else {
+		m.nodeLock.Unlock()
+	}
+
+	return nil
+}
+
+// anyAlive checks for any other alive node.
+func (m *Memberlist) anyAlive() bool {
+	m.nodeLock.RLock()
+	defer m.nodeLock.RUnlock()
+	for _, n := range m.nodes {
+		if n.State != stateDead && n.Name != m.config.Name {
+			return true
+		}
+	}
+	return false
+}
+
+// GetHealthScore gives this instance's idea of how well it is meeting the soft
+// real-time requirements of the protocol. Lower numbers are better, and zero
+// means "totally healthy".
+func (m *Memberlist) GetHealthScore() int {
+	return m.awareness.GetHealthScore()
+}
+
+// ProtocolVersion returns the protocol version currently in use by
+// this memberlist.
+func (m *Memberlist) ProtocolVersion() uint8 {
+	// NOTE: This method exists so that in the future we can control
+	// any locking if necessary, if we change the protocol version at
+	// runtime, etc.
+	return m.config.ProtocolVersion
+}
+
+// Shutdown will stop any background maintenance of network activity
+// for this memberlist, causing it to appear "dead". A leave message
+// will not be broadcast beforehand, so the cluster being left will have
+// to detect this node's shutdown using probing. If you wish to more
+// gracefully exit the cluster, call Leave prior to shutting down.
+//
+// This method is safe to call multiple times.
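+//
+// Graceful exit sketch: broadcast a leave first, then stop the listeners
+// (the timeout value here is illustrative):
+//
+//	if err := m.Leave(time.Second); err != nil {
+//		log.Printf("[WARN] memberlist: leave failed: %v", err)
+//	}
+//	m.Shutdown()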
+func (m *Memberlist) Shutdown() error {
+	m.nodeLock.Lock()
+	defer m.nodeLock.Unlock()
+
+	if m.shutdown {
+		return nil
+	}
+
+	m.shutdown = true
+	close(m.shutdownCh)
+	m.deschedule()
+	m.udpListener.Close()
+	m.tcpListener.Close()
+	return nil
+}
diff --git a/vendor/github.com/hashicorp/memberlist/merge_delegate.go b/vendor/github.com/hashicorp/memberlist/merge_delegate.go
new file mode 100644
index 0000000000..89afb59f20
--- /dev/null
+++ b/vendor/github.com/hashicorp/memberlist/merge_delegate.go
@@ -0,0 +1,14 @@
+package memberlist
+
+// MergeDelegate is used to involve a client in
+// a potential cluster merge operation. Namely, when
+// a node does a TCP push/pull (as part of a join),
+// the delegate is involved and allowed to cancel the join
+// based on custom logic. The merge delegate is NOT invoked
+// as part of the push-pull anti-entropy.
+type MergeDelegate interface {
+	// NotifyMerge is invoked when a merge could take place.
+	// Provides a list of the nodes known by the peer. If
+	// the return value is non-nil, the merge is canceled.
+	NotifyMerge(peers []*Node) error
+}
diff --git a/vendor/github.com/hashicorp/memberlist/net.go b/vendor/github.com/hashicorp/memberlist/net.go
new file mode 100644
index 0000000000..e47da411ec
--- /dev/null
+++ b/vendor/github.com/hashicorp/memberlist/net.go
@@ -0,0 +1,1123 @@
+package memberlist
+
+import (
+	"bufio"
+	"bytes"
+	"encoding/binary"
+	"fmt"
+	"hash/crc32"
+	"io"
+	"net"
+	"time"
+
+	"github.com/armon/go-metrics"
+	"github.com/hashicorp/go-msgpack/codec"
+)
+
+// This is the minimum and maximum protocol version that we can
+// _understand_. We're allowed to speak at any version within this
+// range. This range is inclusive.
+const (
+	ProtocolVersionMin uint8 = 1
+
+	// Version 3 added support for TCP pings but we kept the default
+	// protocol version at 2 to ease transition to this new feature.
+	// A memberlist speaking version 2 of the protocol will attempt
+	// to TCP ping another memberlist who understands version 3 or
+	// greater.
+	//
+	// Version 4 added support for nacks as part of indirect probes.
+	// A memberlist speaking version 2 of the protocol will expect
+	// nacks from another memberlist who understands version 4 or
+	// greater, and likewise nacks will be sent to memberlists who
+	// understand version 4 or greater.
+	ProtocolVersion2Compatible = 2
+
+	ProtocolVersionMax = 5
+)
+
+// messageType is an integer ID of a type of message that can be received
+// on network channels from other members.
+type messageType uint8
+
+// The list of available message types.
+const (
+	pingMsg messageType = iota
+	indirectPingMsg
+	ackRespMsg
+	suspectMsg
+	aliveMsg
+	deadMsg
+	pushPullMsg
+	compoundMsg
+	userMsg // User message, not handled by us
+	compressMsg
+	encryptMsg
+	nackRespMsg
+	hasCrcMsg
+)
+
+// compressionType is used to specify the compression algorithm
+type compressionType uint8
+
+const (
+	lzwAlgo compressionType = iota
+)
+
+const (
+	MetaMaxSize            = 512 // Maximum size for node meta data
+	compoundHeaderOverhead = 2   // Assumed header overhead
+	compoundOverhead       = 2   // Assumed overhead per entry in compoundHeader
+	udpBufSize             = 65536
+	udpRecvBuf             = 2 * 1024 * 1024
+	userMsgOverhead        = 1
+	blockingWarning        = 10 * time.Millisecond // Warn if a UDP packet takes this long to process
+	maxPushStateBytes      = 10 * 1024 * 1024
+)
+
+// ping request sent directly to node
+type ping struct {
+	SeqNo uint32
+
+	// Node is sent so the target can verify they are
+	// the intended recipient.
This is to protect against an agent
+	// restart with a new name.
+	Node string
+}
+
+// indirect ping sent to an indirect node
+type indirectPingReq struct {
+	SeqNo  uint32
+	Target []byte
+	Port   uint16
+	Node   string
+	Nack   bool // true if we'd like a nack back
+}
+
+// ack response is sent for a ping
+type ackResp struct {
+	SeqNo   uint32
+	Payload []byte
+}
+
+// nack response is sent for an indirect ping when the pinger doesn't hear from
+// the ping-ee within the configured timeout. This lets the original node know
+// that the indirect ping attempt happened but didn't succeed.
+type nackResp struct {
+	SeqNo uint32
+}
+
+// suspect is broadcast when we suspect a node is dead
+type suspect struct {
+	Incarnation uint32
+	Node        string
+	From        string // Include who is suspecting
+}
+
+// alive is broadcast when we know a node is alive.
+// Overloaded for nodes joining
+type alive struct {
+	Incarnation uint32
+	Node        string
+	Addr        []byte
+	Port        uint16
+	Meta        []byte
+
+	// The versions of the protocol/delegate that are being spoken, order:
+	// pmin, pmax, pcur, dmin, dmax, dcur
+	Vsn []uint8
+}
+
+// dead is broadcast when we confirm a node is dead
+// Overloaded for nodes leaving
+type dead struct {
+	Incarnation uint32
+	Node        string
+	From        string // Include who is suspecting
+}
+
+// pushPullHeader is used to inform the
+// other side how many states we are transferring
+type pushPullHeader struct {
+	Nodes        int
+	UserStateLen int  // Encodes the byte length of user state
+	Join         bool // Is this a join request or an anti-entropy run
+}
+
+// userMsgHeader is used to encapsulate a userMsg
+type userMsgHeader struct {
+	UserMsgLen int // Encodes the byte length of the user message
+}
+
+// pushNodeState is used for pushPullReq when we are
+// transferring our node states
+type pushNodeState struct {
+	Name        string
+	Addr        []byte
+	Port        uint16
+	Meta        []byte
+	Incarnation uint32
+	State       nodeStateType
+	Vsn         []uint8 // Protocol versions
+}
+
+// compress is used to wrap an underlying payload
+// using a specified compression algorithm
+type compress struct {
+	Algo compressionType
+	Buf  []byte
+}
+
+// msgHandoff is used to transfer a message between goroutines
+type msgHandoff struct {
+	msgType messageType
+	buf     []byte
+	from    net.Addr
+}
+
+// encryptionVersion returns the encryption version to use
+func (m *Memberlist) encryptionVersion() encryptionVersion {
+	switch m.ProtocolVersion() {
+	case 1:
+		return 0
+	default:
+		return 1
+	}
+}
+
+// setUDPRecvBuf is used to resize the UDP receive window. The function
+// attempts to set the read buffer to `udpRecvBuf` but backs off until
+// the read buffer can be set.
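+//
+// Worked example (derived from the halving loop below, assuming the OS
+// rejects sizes above its cap): with udpRecvBuf = 2MB on a kernel capped
+// at 512KB, the attempts are 2097152 -> 1048576 -> 524288, and the first
+// accepted size wins.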
+func setUDPRecvBuf(c *net.UDPConn) { + size := udpRecvBuf + for { + if err := c.SetReadBuffer(size); err == nil { + break + } + size = size / 2 + } +} + +// tcpListen listens for and handles incoming connections +func (m *Memberlist) tcpListen() { + for { + conn, err := m.tcpListener.AcceptTCP() + if err != nil { + if m.shutdown { + break + } + m.logger.Printf("[ERR] memberlist: Error accepting TCP connection: %s", err) + continue + } + go m.handleConn(conn) + } +} + +// handleConn handles a single incoming TCP connection +func (m *Memberlist) handleConn(conn *net.TCPConn) { + m.logger.Printf("[DEBUG] memberlist: TCP connection %s", LogConn(conn)) + + defer conn.Close() + metrics.IncrCounter([]string{"memberlist", "tcp", "accept"}, 1) + + conn.SetDeadline(time.Now().Add(m.config.TCPTimeout)) + msgType, bufConn, dec, err := m.readTCP(conn) + if err != nil { + if err != io.EOF { + m.logger.Printf("[ERR] memberlist: failed to receive: %s %s", err, LogConn(conn)) + } + return + } + + switch msgType { + case userMsg: + if err := m.readUserMsg(bufConn, dec); err != nil { + m.logger.Printf("[ERR] memberlist: Failed to receive user message: %s %s", err, LogConn(conn)) + } + case pushPullMsg: + join, remoteNodes, userState, err := m.readRemoteState(bufConn, dec) + if err != nil { + m.logger.Printf("[ERR] memberlist: Failed to read remote state: %s %s", err, LogConn(conn)) + return + } + + if err := m.sendLocalState(conn, join); err != nil { + m.logger.Printf("[ERR] memberlist: Failed to push local state: %s %s", err, LogConn(conn)) + return + } + + if err := m.mergeRemoteState(join, remoteNodes, userState); err != nil { + m.logger.Printf("[ERR] memberlist: Failed push/pull merge: %s %s", err, LogConn(conn)) + return + } + case pingMsg: + var p ping + if err := dec.Decode(&p); err != nil { + m.logger.Printf("[ERR] memberlist: Failed to decode TCP ping: %s %s", err, LogConn(conn)) + return + } + + if p.Node != "" && p.Node != m.config.Name { + m.logger.Printf("[WARN] memberlist: Got ping for unexpected node %s %s", p.Node, LogConn(conn)) + return + } + + ack := ackResp{p.SeqNo, nil} + out, err := encode(ackRespMsg, &ack) + if err != nil { + m.logger.Printf("[ERR] memberlist: Failed to encode TCP ack: %s", err) + return + } + + err = m.rawSendMsgTCP(conn, out.Bytes()) + if err != nil { + m.logger.Printf("[ERR] memberlist: Failed to send TCP ack: %s %s", err, LogConn(conn)) + return + } + default: + m.logger.Printf("[ERR] memberlist: Received invalid msgType (%d) %s", msgType, LogConn(conn)) + } +} + +// udpListen listens for and handles incoming UDP packets +func (m *Memberlist) udpListen() { + var n int + var addr net.Addr + var err error + var lastPacket time.Time + for { + // Do a check for potentially blocking operations + if !lastPacket.IsZero() && time.Now().Sub(lastPacket) > blockingWarning { + diff := time.Now().Sub(lastPacket) + m.logger.Printf( + "[DEBUG] memberlist: Potential blocking operation. Last command took %v", + diff) + } + + // Create a new buffer + // TODO: Use Sync.Pool eventually + buf := make([]byte, udpBufSize) + + // Read a packet + n, addr, err = m.udpListener.ReadFrom(buf) + if err != nil { + if m.shutdown { + break + } + m.logger.Printf("[ERR] memberlist: Error reading UDP packet: %s", err) + continue + } + + // Capture the reception time of the packet as close to the + // system calls as possible. 
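+		// (lastPacket feeds the blocking-operation check at the top of
+		// this loop: for example, with blockingWarning = 10ms, a handler
+		// that stalls the loop for ~25ms produces the DEBUG line above
+		// with a ~25ms diff on the next packet.)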
+		lastPacket = time.Now()
+
+		// Check the length
+		if n < 1 {
+			m.logger.Printf("[ERR] memberlist: UDP packet too short (%d bytes) %s",
+				n, LogAddress(addr))
+			continue
+		}
+
+		// Ingest this packet
+		metrics.IncrCounter([]string{"memberlist", "udp", "received"}, float32(n))
+		m.ingestPacket(buf[:n], addr, lastPacket)
+	}
+}
+
+func (m *Memberlist) ingestPacket(buf []byte, from net.Addr, timestamp time.Time) {
+	// Check if encryption is enabled
+	if m.config.EncryptionEnabled() {
+		// Decrypt the payload
+		plain, err := decryptPayload(m.config.Keyring.GetKeys(), buf, nil)
+		if err != nil {
+			m.logger.Printf("[ERR] memberlist: Decrypt packet failed: %v %s", err, LogAddress(from))
+			return
+		}
+
+		// Continue processing the plaintext buffer
+		buf = plain
+	}
+
+	// See if there's a checksum included to verify the contents of the message
+	if len(buf) >= 5 && messageType(buf[0]) == hasCrcMsg {
+		crc := crc32.ChecksumIEEE(buf[5:])
+		expected := binary.BigEndian.Uint32(buf[1:5])
+		if crc != expected {
+			m.logger.Printf("[WARN] memberlist: Got invalid checksum for UDP packet: %x, %x", crc, expected)
+			return
+		}
+		m.handleCommand(buf[5:], from, timestamp)
+	} else {
+		m.handleCommand(buf, from, timestamp)
+	}
+}
+
+func (m *Memberlist) handleCommand(buf []byte, from net.Addr, timestamp time.Time) {
+	// Decode the message type
+	msgType := messageType(buf[0])
+	buf = buf[1:]
+
+	// Switch on the msgType
+	switch msgType {
+	case compoundMsg:
+		m.handleCompound(buf, from, timestamp)
+	case compressMsg:
+		m.handleCompressed(buf, from, timestamp)
+
+	case pingMsg:
+		m.handlePing(buf, from)
+	case indirectPingMsg:
+		m.handleIndirectPing(buf, from)
+	case ackRespMsg:
+		m.handleAck(buf, from, timestamp)
+	case nackRespMsg:
+		m.handleNack(buf, from)
+
+	case suspectMsg:
+		fallthrough
+	case aliveMsg:
+		fallthrough
+	case deadMsg:
+		fallthrough
+	case userMsg:
+		select {
+		case m.handoff <- msgHandoff{msgType, buf, from}:
+		default:
+			m.logger.Printf("[WARN] memberlist: UDP handler queue full, dropping message (%d) %s", msgType, LogAddress(from))
+		}
+
+	default:
+		m.logger.Printf("[ERR] memberlist: UDP msg type (%d) not supported %s", msgType, LogAddress(from))
+	}
+}
+
+// udpHandler processes messages received over UDP, but is decoupled
+// from the listener to avoid blocking the listener which may cause
+// ping/ack messages to be delayed.
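+//
+// In miniature, the decoupling looks like this (illustrative sketch with
+// an assumed queue depth; the real channel is m.handoff, sized elsewhere):
+//
+//	handoff := make(chan msgHandoff, 1024)
+//	// listener side (see handleCommand above): enqueue without blocking
+//	select {
+//	case handoff <- msgHandoff{msgType, buf, from}:
+//	default:
+//		// queue full: drop the gossip message and warn
+//	}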
+func (m *Memberlist) udpHandler() { + for { + select { + case msg := <-m.handoff: + msgType := msg.msgType + buf := msg.buf + from := msg.from + + switch msgType { + case suspectMsg: + m.handleSuspect(buf, from) + case aliveMsg: + m.handleAlive(buf, from) + case deadMsg: + m.handleDead(buf, from) + case userMsg: + m.handleUser(buf, from) + default: + m.logger.Printf("[ERR] memberlist: UDP msg type (%d) not supported %s (handler)", msgType, LogAddress(from)) + } + + case <-m.shutdownCh: + return + } + } +} + +func (m *Memberlist) handleCompound(buf []byte, from net.Addr, timestamp time.Time) { + // Decode the parts + trunc, parts, err := decodeCompoundMessage(buf) + if err != nil { + m.logger.Printf("[ERR] memberlist: Failed to decode compound request: %s %s", err, LogAddress(from)) + return + } + + // Log any truncation + if trunc > 0 { + m.logger.Printf("[WARN] memberlist: Compound request had %d truncated messages %s", trunc, LogAddress(from)) + } + + // Handle each message + for _, part := range parts { + m.handleCommand(part, from, timestamp) + } +} + +func (m *Memberlist) handlePing(buf []byte, from net.Addr) { + var p ping + if err := decode(buf, &p); err != nil { + m.logger.Printf("[ERR] memberlist: Failed to decode ping request: %s %s", err, LogAddress(from)) + return + } + // If node is provided, verify that it is for us + if p.Node != "" && p.Node != m.config.Name { + m.logger.Printf("[WARN] memberlist: Got ping for unexpected node '%s' %s", p.Node, LogAddress(from)) + return + } + var ack ackResp + ack.SeqNo = p.SeqNo + if m.config.Ping != nil { + ack.Payload = m.config.Ping.AckPayload() + } + if err := m.encodeAndSendMsg(from, ackRespMsg, &ack); err != nil { + m.logger.Printf("[ERR] memberlist: Failed to send ack: %s %s", err, LogAddress(from)) + } +} + +func (m *Memberlist) handleIndirectPing(buf []byte, from net.Addr) { + var ind indirectPingReq + if err := decode(buf, &ind); err != nil { + m.logger.Printf("[ERR] memberlist: Failed to decode indirect ping request: %s %s", err, LogAddress(from)) + return + } + + // For proto versions < 2, there is no port provided. Mask old + // behavior by using the configured port. + if m.ProtocolVersion() < 2 || ind.Port == 0 { + ind.Port = uint16(m.config.BindPort) + } + + // Send a ping to the correct host. + localSeqNo := m.nextSeqNo() + ping := ping{SeqNo: localSeqNo, Node: ind.Node} + destAddr := &net.UDPAddr{IP: ind.Target, Port: int(ind.Port)} + + // Setup a response handler to relay the ack + cancelCh := make(chan struct{}) + respHandler := func(payload []byte, timestamp time.Time) { + // Try to prevent the nack if we've caught it in time. + close(cancelCh) + + // Forward the ack back to the requestor. + ack := ackResp{ind.SeqNo, nil} + if err := m.encodeAndSendMsg(from, ackRespMsg, &ack); err != nil { + m.logger.Printf("[ERR] memberlist: Failed to forward ack: %s %s", err, LogAddress(from)) + } + } + m.setAckHandler(localSeqNo, respHandler, m.config.ProbeTimeout) + + // Send the ping. + if err := m.encodeAndSendMsg(destAddr, pingMsg, &ping); err != nil { + m.logger.Printf("[ERR] memberlist: Failed to send ping: %s %s", err, LogAddress(from)) + } + + // Setup a timer to fire off a nack if no ack is seen in time. 
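+	// (Illustrative timing: with ProbeTimeout = 500ms, an ack relayed at
+	// 400ms closes cancelCh and suppresses the nack below; otherwise the
+	// timer fires at 500ms and the nack goes back to the requestor.)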
+ if ind.Nack { + go func() { + select { + case <-cancelCh: + return + case <-time.After(m.config.ProbeTimeout): + nack := nackResp{ind.SeqNo} + if err := m.encodeAndSendMsg(from, nackRespMsg, &nack); err != nil { + m.logger.Printf("[ERR] memberlist: Failed to send nack: %s %s", err, LogAddress(from)) + } + } + }() + } +} + +func (m *Memberlist) handleAck(buf []byte, from net.Addr, timestamp time.Time) { + var ack ackResp + if err := decode(buf, &ack); err != nil { + m.logger.Printf("[ERR] memberlist: Failed to decode ack response: %s %s", err, LogAddress(from)) + return + } + m.invokeAckHandler(ack, timestamp) +} + +func (m *Memberlist) handleNack(buf []byte, from net.Addr) { + var nack nackResp + if err := decode(buf, &nack); err != nil { + m.logger.Printf("[ERR] memberlist: Failed to decode nack response: %s %s", err, LogAddress(from)) + return + } + m.invokeNackHandler(nack) +} + +func (m *Memberlist) handleSuspect(buf []byte, from net.Addr) { + var sus suspect + if err := decode(buf, &sus); err != nil { + m.logger.Printf("[ERR] memberlist: Failed to decode suspect message: %s %s", err, LogAddress(from)) + return + } + m.suspectNode(&sus) +} + +func (m *Memberlist) handleAlive(buf []byte, from net.Addr) { + var live alive + if err := decode(buf, &live); err != nil { + m.logger.Printf("[ERR] memberlist: Failed to decode alive message: %s %s", err, LogAddress(from)) + return + } + + // For proto versions < 2, there is no port provided. Mask old + // behavior by using the configured port + if m.ProtocolVersion() < 2 || live.Port == 0 { + live.Port = uint16(m.config.BindPort) + } + + m.aliveNode(&live, nil, false) +} + +func (m *Memberlist) handleDead(buf []byte, from net.Addr) { + var d dead + if err := decode(buf, &d); err != nil { + m.logger.Printf("[ERR] memberlist: Failed to decode dead message: %s %s", err, LogAddress(from)) + return + } + m.deadNode(&d) +} + +// handleUser is used to notify channels of incoming user data +func (m *Memberlist) handleUser(buf []byte, from net.Addr) { + d := m.config.Delegate + if d != nil { + d.NotifyMsg(buf) + } +} + +// handleCompressed is used to unpack a compressed message +func (m *Memberlist) handleCompressed(buf []byte, from net.Addr, timestamp time.Time) { + // Try to decode the payload + payload, err := decompressPayload(buf) + if err != nil { + m.logger.Printf("[ERR] memberlist: Failed to decompress payload: %v %s", err, LogAddress(from)) + return + } + + // Recursively handle the payload + m.handleCommand(payload, from, timestamp) +} + +// encodeAndSendMsg is used to combine the encoding and sending steps +func (m *Memberlist) encodeAndSendMsg(to net.Addr, msgType messageType, msg interface{}) error { + out, err := encode(msgType, msg) + if err != nil { + return err + } + if err := m.sendMsg(to, out.Bytes()); err != nil { + return err + } + return nil +} + +// sendMsg is used to send a UDP message to another host. 
It will opportunistically
+// create a compoundMsg and piggyback other broadcasts
+func (m *Memberlist) sendMsg(to net.Addr, msg []byte) error {
+	// Check if we can piggyback any messages
+	bytesAvail := m.config.UDPBufferSize - len(msg) - compoundHeaderOverhead
+	if m.config.EncryptionEnabled() {
+		bytesAvail -= encryptOverhead(m.encryptionVersion())
+	}
+	extra := m.getBroadcasts(compoundOverhead, bytesAvail)
+
+	// Fast path if nothing to piggyback
+	if len(extra) == 0 {
+		return m.rawSendMsgUDP(to, nil, msg)
+	}
+
+	// Join all the messages
+	msgs := make([][]byte, 0, 1+len(extra))
+	msgs = append(msgs, msg)
+	msgs = append(msgs, extra...)
+
+	// Create a compound message
+	compound := makeCompoundMessage(msgs)
+
+	// Send the message
+	return m.rawSendMsgUDP(to, nil, compound.Bytes())
+}
+
+// rawSendMsgUDP is used to send a UDP message to another host without modification
+func (m *Memberlist) rawSendMsgUDP(addr net.Addr, node *Node, msg []byte) error {
+	// Check if we have compression enabled
+	if m.config.EnableCompression {
+		buf, err := compressPayload(msg)
+		if err != nil {
+			m.logger.Printf("[WARN] memberlist: Failed to compress payload: %v", err)
+		} else {
+			// Only use compression if it reduced the size
+			if buf.Len() < len(msg) {
+				msg = buf.Bytes()
+			}
+		}
+	}
+
+	// Try to look up the destination node
+	if node == nil {
+		toAddr, _, err := net.SplitHostPort(addr.String())
+		if err != nil {
+			m.logger.Printf("[ERR] memberlist: Failed to parse address %q: %v", addr.String(), err)
+			return err
+		}
+		m.nodeLock.RLock()
+		nodeState, ok := m.nodeMap[toAddr]
+		m.nodeLock.RUnlock()
+		if ok {
+			node = &nodeState.Node
+		}
+	}
+
+	// Add CRC framing to the front of the payload if the recipient
+	// understands ProtocolVersion >= 5
+	if node != nil && node.PMax >= 5 {
+		crc := crc32.ChecksumIEEE(msg)
+		header := make([]byte, 5, 5+len(msg))
+		header[0] = byte(hasCrcMsg)
+		binary.BigEndian.PutUint32(header[1:], crc)
+		msg = append(header, msg...)
+ } + + // Check if we have encryption enabled + if m.config.EncryptionEnabled() { + // Encrypt the payload + var buf bytes.Buffer + primaryKey := m.config.Keyring.GetPrimaryKey() + err := encryptPayload(m.encryptionVersion(), primaryKey, msg, nil, &buf) + if err != nil { + m.logger.Printf("[ERR] memberlist: Encryption of message failed: %v", err) + return err + } + msg = buf.Bytes() + } + + metrics.IncrCounter([]string{"memberlist", "udp", "sent"}, float32(len(msg))) + _, err := m.udpListener.WriteTo(msg, addr) + return err +} + +// rawSendMsgTCP is used to send a TCP message to another host without modification +func (m *Memberlist) rawSendMsgTCP(conn net.Conn, sendBuf []byte) error { + // Check if compresion is enabled + if m.config.EnableCompression { + compBuf, err := compressPayload(sendBuf) + if err != nil { + m.logger.Printf("[ERROR] memberlist: Failed to compress payload: %v", err) + } else { + sendBuf = compBuf.Bytes() + } + } + + // Check if encryption is enabled + if m.config.EncryptionEnabled() { + crypt, err := m.encryptLocalState(sendBuf) + if err != nil { + m.logger.Printf("[ERROR] memberlist: Failed to encrypt local state: %v", err) + return err + } + sendBuf = crypt + } + + // Write out the entire send buffer + metrics.IncrCounter([]string{"memberlist", "tcp", "sent"}, float32(len(sendBuf))) + + if n, err := conn.Write(sendBuf); err != nil { + return err + } else if n != len(sendBuf) { + return fmt.Errorf("only %d of %d bytes written", n, len(sendBuf)) + } + + return nil +} + +// sendTCPUserMsg is used to send a TCP userMsg to another host +func (m *Memberlist) sendTCPUserMsg(to net.Addr, sendBuf []byte) error { + dialer := net.Dialer{Timeout: m.config.TCPTimeout} + conn, err := dialer.Dial("tcp", to.String()) + if err != nil { + return err + } + defer conn.Close() + + bufConn := bytes.NewBuffer(nil) + + if err := bufConn.WriteByte(byte(userMsg)); err != nil { + return err + } + + // Send our node state + header := userMsgHeader{UserMsgLen: len(sendBuf)} + hd := codec.MsgpackHandle{} + enc := codec.NewEncoder(bufConn, &hd) + + if err := enc.Encode(&header); err != nil { + return err + } + + if _, err := bufConn.Write(sendBuf); err != nil { + return err + } + + return m.rawSendMsgTCP(conn, bufConn.Bytes()) +} + +// sendAndReceiveState is used to initiate a push/pull over TCP with a remote node +func (m *Memberlist) sendAndReceiveState(addr []byte, port uint16, join bool) ([]pushNodeState, []byte, error) { + // Attempt to connect + dialer := net.Dialer{Timeout: m.config.TCPTimeout} + dest := net.TCPAddr{IP: addr, Port: int(port)} + conn, err := dialer.Dial("tcp", dest.String()) + if err != nil { + return nil, nil, err + } + defer conn.Close() + m.logger.Printf("[DEBUG] memberlist: Initiating push/pull sync with: %s", conn.RemoteAddr()) + metrics.IncrCounter([]string{"memberlist", "tcp", "connect"}, 1) + + // Send our state + if err := m.sendLocalState(conn, join); err != nil { + return nil, nil, err + } + + conn.SetDeadline(time.Now().Add(m.config.TCPTimeout)) + msgType, bufConn, dec, err := m.readTCP(conn) + if err != nil { + return nil, nil, err + } + + // Quit if not push/pull + if msgType != pushPullMsg { + err := fmt.Errorf("received invalid msgType (%d), expected pushPullMsg (%d) %s", msgType, pushPullMsg, LogConn(conn)) + return nil, nil, err + } + + // Read remote state + _, remoteNodes, userState, err := m.readRemoteState(bufConn, dec) + return remoteNodes, userState, err +} + +// sendLocalState is invoked to send our local state over a tcp connection +func (m 
*Memberlist) sendLocalState(conn net.Conn, join bool) error { + // Setup a deadline + conn.SetDeadline(time.Now().Add(m.config.TCPTimeout)) + + // Prepare the local node state + m.nodeLock.RLock() + localNodes := make([]pushNodeState, len(m.nodes)) + for idx, n := range m.nodes { + localNodes[idx].Name = n.Name + localNodes[idx].Addr = n.Addr + localNodes[idx].Port = n.Port + localNodes[idx].Incarnation = n.Incarnation + localNodes[idx].State = n.State + localNodes[idx].Meta = n.Meta + localNodes[idx].Vsn = []uint8{ + n.PMin, n.PMax, n.PCur, + n.DMin, n.DMax, n.DCur, + } + } + m.nodeLock.RUnlock() + + // Get the delegate state + var userData []byte + if m.config.Delegate != nil { + userData = m.config.Delegate.LocalState(join) + } + + // Create a bytes buffer writer + bufConn := bytes.NewBuffer(nil) + + // Send our node state + header := pushPullHeader{Nodes: len(localNodes), UserStateLen: len(userData), Join: join} + hd := codec.MsgpackHandle{} + enc := codec.NewEncoder(bufConn, &hd) + + // Begin state push + if _, err := bufConn.Write([]byte{byte(pushPullMsg)}); err != nil { + return err + } + + if err := enc.Encode(&header); err != nil { + return err + } + for i := 0; i < header.Nodes; i++ { + if err := enc.Encode(&localNodes[i]); err != nil { + return err + } + } + + // Write the user state as well + if userData != nil { + if _, err := bufConn.Write(userData); err != nil { + return err + } + } + + // Get the send buffer + return m.rawSendMsgTCP(conn, bufConn.Bytes()) +} + +// encryptLocalState is used to help encrypt local state before sending +func (m *Memberlist) encryptLocalState(sendBuf []byte) ([]byte, error) { + var buf bytes.Buffer + + // Write the encryptMsg byte + buf.WriteByte(byte(encryptMsg)) + + // Write the size of the message + sizeBuf := make([]byte, 4) + encVsn := m.encryptionVersion() + encLen := encryptedLength(encVsn, len(sendBuf)) + binary.BigEndian.PutUint32(sizeBuf, uint32(encLen)) + buf.Write(sizeBuf) + + // Write the encrypted cipher text to the buffer + key := m.config.Keyring.GetPrimaryKey() + err := encryptPayload(encVsn, key, sendBuf, buf.Bytes()[:5], &buf) + if err != nil { + return nil, err + } + return buf.Bytes(), nil +} + +// decryptRemoteState is used to help decrypt the remote state +func (m *Memberlist) decryptRemoteState(bufConn io.Reader) ([]byte, error) { + // Read in enough to determine message length + cipherText := bytes.NewBuffer(nil) + cipherText.WriteByte(byte(encryptMsg)) + _, err := io.CopyN(cipherText, bufConn, 4) + if err != nil { + return nil, err + } + + // Ensure we aren't asked to download too much. This is to guard against + // an attack vector where a huge amount of state is sent + moreBytes := binary.BigEndian.Uint32(cipherText.Bytes()[1:5]) + if moreBytes > maxPushStateBytes { + return nil, fmt.Errorf("Remote node state is larger than limit (%d)", moreBytes) + } + + // Read in the rest of the payload + _, err = io.CopyN(cipherText, bufConn, int64(moreBytes)) + if err != nil { + return nil, err + } + + // Decrypt the cipherText + dataBytes := cipherText.Bytes()[:5] + cipherBytes := cipherText.Bytes()[5:] + + // Decrypt the payload + keys := m.config.Keyring.GetKeys() + return decryptPayload(keys, cipherBytes, dataBytes) +} + +// readTCP is used to read the start of a TCP stream. 
+// it decrypts and decompresses the stream if necessary +func (m *Memberlist) readTCP(conn net.Conn) (messageType, io.Reader, *codec.Decoder, error) { + // Created a buffered reader + var bufConn io.Reader = bufio.NewReader(conn) + + // Read the message type + buf := [1]byte{0} + if _, err := bufConn.Read(buf[:]); err != nil { + return 0, nil, nil, err + } + msgType := messageType(buf[0]) + + // Check if the message is encrypted + if msgType == encryptMsg { + if !m.config.EncryptionEnabled() { + return 0, nil, nil, + fmt.Errorf("Remote state is encrypted and encryption is not configured") + } + + plain, err := m.decryptRemoteState(bufConn) + if err != nil { + return 0, nil, nil, err + } + + // Reset message type and bufConn + msgType = messageType(plain[0]) + bufConn = bytes.NewReader(plain[1:]) + } else if m.config.EncryptionEnabled() { + return 0, nil, nil, + fmt.Errorf("Encryption is configured but remote state is not encrypted") + } + + // Get the msgPack decoders + hd := codec.MsgpackHandle{} + dec := codec.NewDecoder(bufConn, &hd) + + // Check if we have a compressed message + if msgType == compressMsg { + var c compress + if err := dec.Decode(&c); err != nil { + return 0, nil, nil, err + } + decomp, err := decompressBuffer(&c) + if err != nil { + return 0, nil, nil, err + } + + // Reset the message type + msgType = messageType(decomp[0]) + + // Create a new bufConn + bufConn = bytes.NewReader(decomp[1:]) + + // Create a new decoder + dec = codec.NewDecoder(bufConn, &hd) + } + + return msgType, bufConn, dec, nil +} + +// readRemoteState is used to read the remote state from a connection +func (m *Memberlist) readRemoteState(bufConn io.Reader, dec *codec.Decoder) (bool, []pushNodeState, []byte, error) { + // Read the push/pull header + var header pushPullHeader + if err := dec.Decode(&header); err != nil { + return false, nil, nil, err + } + + // Allocate space for the transfer + remoteNodes := make([]pushNodeState, header.Nodes) + + // Try to decode all the states + for i := 0; i < header.Nodes; i++ { + if err := dec.Decode(&remoteNodes[i]); err != nil { + return false, nil, nil, err + } + } + + // Read the remote user state into a buffer + var userBuf []byte + if header.UserStateLen > 0 { + userBuf = make([]byte, header.UserStateLen) + bytes, err := io.ReadAtLeast(bufConn, userBuf, header.UserStateLen) + if err == nil && bytes != header.UserStateLen { + err = fmt.Errorf( + "Failed to read full user state (%d / %d)", + bytes, header.UserStateLen) + } + if err != nil { + return false, nil, nil, err + } + } + + // For proto versions < 2, there is no port provided. 
Mask old
+	// behavior by using the configured port
+	for idx := range remoteNodes {
+		if m.ProtocolVersion() < 2 || remoteNodes[idx].Port == 0 {
+			remoteNodes[idx].Port = uint16(m.config.BindPort)
+		}
+	}
+
+	return header.Join, remoteNodes, userBuf, nil
+}
+
+// mergeRemoteState is used to merge the remote state with our local state
+func (m *Memberlist) mergeRemoteState(join bool, remoteNodes []pushNodeState, userBuf []byte) error {
+	if err := m.verifyProtocol(remoteNodes); err != nil {
+		return err
+	}
+
+	// Invoke the merge delegate if any
+	if join && m.config.Merge != nil {
+		nodes := make([]*Node, len(remoteNodes))
+		for idx, n := range remoteNodes {
+			nodes[idx] = &Node{
+				Name: n.Name,
+				Addr: n.Addr,
+				Port: n.Port,
+				Meta: n.Meta,
+				PMin: n.Vsn[0],
+				PMax: n.Vsn[1],
+				PCur: n.Vsn[2],
+				DMin: n.Vsn[3],
+				DMax: n.Vsn[4],
+				DCur: n.Vsn[5],
+			}
+		}
+		if err := m.config.Merge.NotifyMerge(nodes); err != nil {
+			return err
+		}
+	}
+
+	// Merge the membership state
+	m.mergeState(remoteNodes)
+
+	// Invoke the delegate for user state
+	if userBuf != nil && m.config.Delegate != nil {
+		m.config.Delegate.MergeRemoteState(userBuf, join)
+	}
+	return nil
+}
+
+// readUserMsg is used to decode a userMsg from a TCP stream
+func (m *Memberlist) readUserMsg(bufConn io.Reader, dec *codec.Decoder) error {
+	// Read the user message header
+	var header userMsgHeader
+	if err := dec.Decode(&header); err != nil {
+		return err
+	}
+
+	// Read the user message into a buffer
+	var userBuf []byte
+	if header.UserMsgLen > 0 {
+		userBuf = make([]byte, header.UserMsgLen)
+		bytes, err := io.ReadAtLeast(bufConn, userBuf, header.UserMsgLen)
+		if err == nil && bytes != header.UserMsgLen {
+			err = fmt.Errorf(
+				"Failed to read full user message (%d / %d)",
+				bytes, header.UserMsgLen)
+		}
+		if err != nil {
+			return err
+		}
+
+		d := m.config.Delegate
+		if d != nil {
+			d.NotifyMsg(userBuf)
+		}
+	}
+
+	return nil
+}
+
+// sendPingAndWaitForAck makes a TCP connection to the given address, sends
+// a ping, and waits for an ack. All of this is done as a series of blocking
+// operations, given the deadline. The bool return parameter is true if we
+// were able to round-trip a ping to the other node.
+func (m *Memberlist) sendPingAndWaitForAck(destAddr net.Addr, ping ping, deadline time.Time) (bool, error) {
+	dialer := net.Dialer{Deadline: deadline}
+	conn, err := dialer.Dial("tcp", destAddr.String())
+	if err != nil {
+		// If the node is actually dead we expect this to fail, so we
+		// shouldn't spam the logs with it. After this point, errors
+		// with the connection are real, unexpected errors and should
+		// get propagated up.
+ return false, nil + } + defer conn.Close() + conn.SetDeadline(deadline) + + out, err := encode(pingMsg, &ping) + if err != nil { + return false, err + } + + if err = m.rawSendMsgTCP(conn, out.Bytes()); err != nil { + return false, err + } + + msgType, _, dec, err := m.readTCP(conn) + if err != nil { + return false, err + } + + if msgType != ackRespMsg { + return false, fmt.Errorf("Unexpected msgType (%d) from TCP ping %s", msgType, LogConn(conn)) + } + + var ack ackResp + if err = dec.Decode(&ack); err != nil { + return false, err + } + + if ack.SeqNo != ping.SeqNo { + return false, fmt.Errorf("Sequence number from ack (%d) doesn't match ping (%d) from TCP ping %s", ack.SeqNo, ping.SeqNo, LogConn(conn)) + } + + return true, nil +} diff --git a/vendor/github.com/hashicorp/memberlist/ping_delegate.go b/vendor/github.com/hashicorp/memberlist/ping_delegate.go new file mode 100644 index 0000000000..1566c8b3d5 --- /dev/null +++ b/vendor/github.com/hashicorp/memberlist/ping_delegate.go @@ -0,0 +1,14 @@ +package memberlist + +import "time" + +// PingDelegate is used to notify an observer how long it took for a ping message to +// complete a round trip. It can also be used for writing arbitrary byte slices +// into ack messages. Note that in order to be meaningful for RTT estimates, this +// delegate does not apply to indirect pings, nor fallback pings sent over TCP. +type PingDelegate interface { + // AckPayload is invoked when an ack is being sent; the returned bytes will be appended to the ack + AckPayload() []byte + // NotifyPing is invoked when an ack for a ping is received + NotifyPingComplete(other *Node, rtt time.Duration, payload []byte) +} diff --git a/vendor/github.com/hashicorp/memberlist/queue.go b/vendor/github.com/hashicorp/memberlist/queue.go new file mode 100644 index 0000000000..994b90ff10 --- /dev/null +++ b/vendor/github.com/hashicorp/memberlist/queue.go @@ -0,0 +1,167 @@ +package memberlist + +import ( + "sort" + "sync" +) + +// TransmitLimitedQueue is used to queue messages to broadcast to +// the cluster (via gossip) but limits the number of transmits per +// message. It also prioritizes messages with lower transmit counts +// (hence newer messages). +type TransmitLimitedQueue struct { + // NumNodes returns the number of nodes in the cluster. This is + // used to determine the retransmit count, which is calculated + // based on the log of this. + NumNodes func() int + + // RetransmitMult is the multiplier used to determine the maximum + // number of retransmissions attempted. + RetransmitMult int + + sync.Mutex + bcQueue limitedBroadcasts +} + +type limitedBroadcast struct { + transmits int // Number of transmissions attempted. + b Broadcast +} +type limitedBroadcasts []*limitedBroadcast + +// Broadcast is something that can be broadcasted via gossip to +// the memberlist cluster. 
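+//
+// A minimal sketch of an implementation (illustrative, not part of the
+// library), keyed by node name so that newer broadcasts about the same
+// node invalidate older ones:
+//
+//	type namedBroadcast struct {
+//		name string
+//		msg  []byte
+//	}
+//
+//	func (b *namedBroadcast) Invalidates(other Broadcast) bool {
+//		o, ok := other.(*namedBroadcast)
+//		return ok && o.name == b.name
+//	}
+//	func (b *namedBroadcast) Message() []byte { return b.msg }
+//	func (b *namedBroadcast) Finished()       {}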
+type Broadcast interface { + // Invalidates checks if enqueuing the current broadcast + // invalidates a previous broadcast + Invalidates(b Broadcast) bool + + // Returns a byte form of the message + Message() []byte + + // Finished is invoked when the message will no longer + // be broadcast, either due to invalidation or to the + // transmit limit being reached + Finished() +} + +// QueueBroadcast is used to enqueue a broadcast +func (q *TransmitLimitedQueue) QueueBroadcast(b Broadcast) { + q.Lock() + defer q.Unlock() + + // Check if this message invalidates another + n := len(q.bcQueue) + for i := 0; i < n; i++ { + if b.Invalidates(q.bcQueue[i].b) { + q.bcQueue[i].b.Finished() + copy(q.bcQueue[i:], q.bcQueue[i+1:]) + q.bcQueue[n-1] = nil + q.bcQueue = q.bcQueue[:n-1] + n-- + } + } + + // Append to the queue + q.bcQueue = append(q.bcQueue, &limitedBroadcast{0, b}) +} + +// GetBroadcasts is used to get a number of broadcasts, up to a byte limit +// and applying a per-message overhead as provided. +func (q *TransmitLimitedQueue) GetBroadcasts(overhead, limit int) [][]byte { + q.Lock() + defer q.Unlock() + + // Fast path the default case + if len(q.bcQueue) == 0 { + return nil + } + + transmitLimit := retransmitLimit(q.RetransmitMult, q.NumNodes()) + bytesUsed := 0 + var toSend [][]byte + + for i := len(q.bcQueue) - 1; i >= 0; i-- { + // Check if this is within our limits + b := q.bcQueue[i] + msg := b.b.Message() + if bytesUsed+overhead+len(msg) > limit { + continue + } + + // Add to slice to send + bytesUsed += overhead + len(msg) + toSend = append(toSend, msg) + + // Check if we should stop transmission + b.transmits++ + if b.transmits >= transmitLimit { + b.b.Finished() + n := len(q.bcQueue) + q.bcQueue[i], q.bcQueue[n-1] = q.bcQueue[n-1], nil + q.bcQueue = q.bcQueue[:n-1] + } + } + + // If we are sending anything, we need to re-sort to deal + // with adjusted transmit counts + if len(toSend) > 0 { + q.bcQueue.Sort() + } + return toSend +} + +// NumQueued returns the number of queued messages +func (q *TransmitLimitedQueue) NumQueued() int { + q.Lock() + defer q.Unlock() + return len(q.bcQueue) +} + +// Reset clears all the queued messages +func (q *TransmitLimitedQueue) Reset() { + q.Lock() + defer q.Unlock() + for _, b := range q.bcQueue { + b.b.Finished() + } + q.bcQueue = nil +} + +// Prune will retain the maxRetain latest messages, and the rest +// will be discarded. 
This can be used to prevent unbounded queue sizes +func (q *TransmitLimitedQueue) Prune(maxRetain int) { + q.Lock() + defer q.Unlock() + + // Do nothing if queue size is less than the limit + n := len(q.bcQueue) + if n < maxRetain { + return + } + + // Invalidate the messages we will be removing + for i := 0; i < n-maxRetain; i++ { + q.bcQueue[i].b.Finished() + } + + // Move the messages, and retain only the last maxRetain + copy(q.bcQueue[0:], q.bcQueue[n-maxRetain:]) + q.bcQueue = q.bcQueue[:maxRetain] +} + +func (b limitedBroadcasts) Len() int { + return len(b) +} + +func (b limitedBroadcasts) Less(i, j int) bool { + return b[i].transmits < b[j].transmits +} + +func (b limitedBroadcasts) Swap(i, j int) { + b[i], b[j] = b[j], b[i] +} + +func (b limitedBroadcasts) Sort() { + sort.Sort(sort.Reverse(b)) +} diff --git a/vendor/github.com/hashicorp/memberlist/security.go b/vendor/github.com/hashicorp/memberlist/security.go new file mode 100644 index 0000000000..d90114eb0c --- /dev/null +++ b/vendor/github.com/hashicorp/memberlist/security.go @@ -0,0 +1,198 @@ +package memberlist + +import ( + "bytes" + "crypto/aes" + "crypto/cipher" + "crypto/rand" + "fmt" + "io" +) + +/* + +Encrypted messages are prefixed with an encryptionVersion byte +that is used for us to be able to properly encode/decode. We +currently support the following versions: + + 0 - AES-GCM 128, using PKCS7 padding + 1 - AES-GCM 128, no padding. Padding not needed, caused bloat. + +*/ +type encryptionVersion uint8 + +const ( + minEncryptionVersion encryptionVersion = 0 + maxEncryptionVersion encryptionVersion = 1 +) + +const ( + versionSize = 1 + nonceSize = 12 + tagSize = 16 + maxPadOverhead = 16 + blockSize = aes.BlockSize +) + +// pkcs7encode is used to pad a byte buffer to a specific block size using +// the PKCS7 algorithm. "Ignores" some bytes to compensate for IV +func pkcs7encode(buf *bytes.Buffer, ignore, blockSize int) { + n := buf.Len() - ignore + more := blockSize - (n % blockSize) + for i := 0; i < more; i++ { + buf.WriteByte(byte(more)) + } +} + +// pkcs7decode is used to decode a buffer that has been padded +func pkcs7decode(buf []byte, blockSize int) []byte { + if len(buf) == 0 { + panic("Cannot decode a PKCS7 buffer of zero length") + } + n := len(buf) + last := buf[n-1] + n -= int(last) + return buf[:n] +} + +// encryptOverhead returns the maximum possible overhead of encryption by version +func encryptOverhead(vsn encryptionVersion) int { + switch vsn { + case 0: + return 45 // Version: 1, IV: 12, Padding: 16, Tag: 16 + case 1: + return 29 // Version: 1, IV: 12, Tag: 16 + default: + panic("unsupported version") + } +} + +// encryptedLength is used to compute the buffer size needed +// for a message of given length +func encryptedLength(vsn encryptionVersion, inp int) int { + // If we are on version 1, there is no padding + if vsn >= 1 { + return versionSize + nonceSize + inp + tagSize + } + + // Determine the padding size + padding := blockSize - (inp % blockSize) + + // Sum the extra parts to get total size + return versionSize + nonceSize + inp + padding + tagSize +} + +// encryptPayload is used to encrypt a message with a given key. +// We make use of AES-128 in GCM mode. 
New byte buffer is the version, +// nonce, ciphertext and tag +func encryptPayload(vsn encryptionVersion, key []byte, msg []byte, data []byte, dst *bytes.Buffer) error { + // Get the AES block cipher + aesBlock, err := aes.NewCipher(key) + if err != nil { + return err + } + + // Get the GCM cipher mode + gcm, err := cipher.NewGCM(aesBlock) + if err != nil { + return err + } + + // Grow the buffer to make room for everything + offset := dst.Len() + dst.Grow(encryptedLength(vsn, len(msg))) + + // Write the encryption version + dst.WriteByte(byte(vsn)) + + // Add a random nonce + io.CopyN(dst, rand.Reader, nonceSize) + afterNonce := dst.Len() + + // Ensure we are correctly padded (only version 0) + if vsn == 0 { + io.Copy(dst, bytes.NewReader(msg)) + pkcs7encode(dst, offset+versionSize+nonceSize, aes.BlockSize) + } + + // Encrypt message using GCM + slice := dst.Bytes()[offset:] + nonce := slice[versionSize : versionSize+nonceSize] + + // Message source depends on the encryption version. + // Version 0 uses padding, version 1 does not + var src []byte + if vsn == 0 { + src = slice[versionSize+nonceSize:] + } else { + src = msg + } + out := gcm.Seal(nil, nonce, src, data) + + // Truncate the plaintext, and write the cipher text + dst.Truncate(afterNonce) + dst.Write(out) + return nil +} + +// decryptMessage performs the actual decryption of ciphertext. This is in its +// own function to allow it to be called on all keys easily. +func decryptMessage(key, msg []byte, data []byte) ([]byte, error) { + // Get the AES block cipher + aesBlock, err := aes.NewCipher(key) + if err != nil { + return nil, err + } + + // Get the GCM cipher mode + gcm, err := cipher.NewGCM(aesBlock) + if err != nil { + return nil, err + } + + // Decrypt the message + nonce := msg[versionSize : versionSize+nonceSize] + ciphertext := msg[versionSize+nonceSize:] + plain, err := gcm.Open(nil, nonce, ciphertext, data) + if err != nil { + return nil, err + } + + // Success! + return plain, nil +} + +// decryptPayload is used to decrypt a message with a given key, +// and verify it's contents. Any padding will be removed, and a +// slice to the plaintext is returned. Decryption is done IN PLACE! 
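+//
+// Worked example (derived from encryptedLength above): a 100-byte message
+// costs 1 (version) + 12 (nonce) + 100 + 16 (tag) = 129 bytes under
+// version 1; under version 0, PKCS7 pads 100 up to 112, for
+// 1 + 12 + 112 + 16 = 141 bytes.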
+func decryptPayload(keys [][]byte, msg []byte, data []byte) ([]byte, error) { + // Ensure we have at least one byte + if len(msg) == 0 { + return nil, fmt.Errorf("Cannot decrypt empty payload") + } + + // Verify the version + vsn := encryptionVersion(msg[0]) + if vsn > maxEncryptionVersion { + return nil, fmt.Errorf("Unsupported encryption version %d", msg[0]) + } + + // Ensure the length is sane + if len(msg) < encryptedLength(vsn, 0) { + return nil, fmt.Errorf("Payload is too small to decrypt: %d", len(msg)) + } + + for _, key := range keys { + plain, err := decryptMessage(key, msg, data) + if err == nil { + // Remove the PKCS7 padding for vsn 0 + if vsn == 0 { + return pkcs7decode(plain, aes.BlockSize), nil + } else { + return plain, nil + } + } + } + + return nil, fmt.Errorf("No installed keys could decrypt the message") +} diff --git a/vendor/github.com/hashicorp/memberlist/state.go b/vendor/github.com/hashicorp/memberlist/state.go new file mode 100644 index 0000000000..6b9122f08c --- /dev/null +++ b/vendor/github.com/hashicorp/memberlist/state.go @@ -0,0 +1,1151 @@ +package memberlist + +import ( + "bytes" + "fmt" + "math" + "math/rand" + "net" + "sync/atomic" + "time" + + "github.com/armon/go-metrics" +) + +type nodeStateType int + +const ( + stateAlive nodeStateType = iota + stateSuspect + stateDead +) + +// Node represents a node in the cluster. +type Node struct { + Name string + Addr net.IP + Port uint16 + Meta []byte // Metadata from the delegate for this node. + PMin uint8 // Minimum protocol version this understands + PMax uint8 // Maximum protocol version this understands + PCur uint8 // Current version node is speaking + DMin uint8 // Min protocol version for the delegate to understand + DMax uint8 // Max protocol version for the delegate to understand + DCur uint8 // Current version delegate is speaking +} + +// NodeState is used to manage our state view of another node +type nodeState struct { + Node + Incarnation uint32 // Last known incarnation number + State nodeStateType // Current state + StateChange time.Time // Time last state change happened +} + +// ackHandler is used to register handlers for incoming acks and nacks. +type ackHandler struct { + ackFn func([]byte, time.Time) + nackFn func() + timer *time.Timer +} + +// NoPingResponseError is used to indicate a 'ping' packet was +// successfully issued but no response was received +type NoPingResponseError struct { + node string +} + +func (f NoPingResponseError) Error() string { + return fmt.Sprintf("No response from node %s", f.node) +} + +// Schedule is used to ensure the Tick is performed periodically. This +// function is safe to call multiple times. If the memberlist is already +// scheduled, then it won't do anything. +func (m *Memberlist) schedule() { + m.tickerLock.Lock() + defer m.tickerLock.Unlock() + + // If we already have tickers, then don't do anything, since we're + // scheduled + if len(m.tickers) > 0 { + return + } + + // Create the stop tick channel, a blocking channel. We close this + // when we should stop the tickers. 
+	stopCh := make(chan struct{})
+
+	// Create a new probeTicker
+	if m.config.ProbeInterval > 0 {
+		t := time.NewTicker(m.config.ProbeInterval)
+		go m.triggerFunc(m.config.ProbeInterval, t.C, stopCh, m.probe)
+		m.tickers = append(m.tickers, t)
+	}
+
+	// Create a push pull ticker if needed
+	if m.config.PushPullInterval > 0 {
+		go m.pushPullTrigger(stopCh)
+	}
+
+	// Create a gossip ticker if needed
+	if m.config.GossipInterval > 0 && m.config.GossipNodes > 0 {
+		t := time.NewTicker(m.config.GossipInterval)
+		go m.triggerFunc(m.config.GossipInterval, t.C, stopCh, m.gossip)
+		m.tickers = append(m.tickers, t)
+	}
+
+	// If we made any tickers, then record the stopTick channel for
+	// later.
+	if len(m.tickers) > 0 {
+		m.stopTick = stopCh
+	}
+}
+
+// triggerFunc is used to trigger a function call each time a tick is
+// received until a stop tick arrives.
+func (m *Memberlist) triggerFunc(stagger time.Duration, C <-chan time.Time, stop <-chan struct{}, f func()) {
+	// Use a random stagger to avoid synchronizing
+	randStagger := time.Duration(uint64(rand.Int63()) % uint64(stagger))
+	select {
+	case <-time.After(randStagger):
+	case <-stop:
+		return
+	}
+	for {
+		select {
+		case <-C:
+			f()
+		case <-stop:
+			return
+		}
+	}
+}
+
+// pushPullTrigger is used to periodically trigger a push/pull until
+// a stop tick arrives. We don't use triggerFunc since the push/pull
+// timer is dynamically scaled based on cluster size to avoid network
+// saturation
+func (m *Memberlist) pushPullTrigger(stop <-chan struct{}) {
+	interval := m.config.PushPullInterval
+
+	// Use a random stagger to avoid synchronizing
+	randStagger := time.Duration(uint64(rand.Int63()) % uint64(interval))
+	select {
+	case <-time.After(randStagger):
+	case <-stop:
+		return
+	}
+
+	// Tick using a dynamic timer
+	for {
+		tickTime := pushPullScale(interval, m.estNumNodes())
+		select {
+		case <-time.After(tickTime):
+			m.pushPull()
+		case <-stop:
+			return
+		}
+	}
+}
+
+// Deschedule is used to stop the background maintenance. This is safe
+// to call multiple times.
+func (m *Memberlist) deschedule() {
+	m.tickerLock.Lock()
+	defer m.tickerLock.Unlock()
+
+	// If we have no tickers, then we aren't scheduled.
+	if len(m.tickers) == 0 {
+		return
+	}
+
+	// Close the stop channel so all the ticker listeners stop.
+	close(m.stopTick)
+
+	// Explicitly stop all the tickers themselves so they don't take
+	// up any more resources, and get rid of the list.
+	for _, t := range m.tickers {
+		t.Stop()
+	}
+	m.tickers = nil
+}
+
+// probe is used to perform a single round of failure detection and gossip
+func (m *Memberlist) probe() {
+	// Track the number of indexes we've considered probing
+	numCheck := 0
+START:
+	m.nodeLock.RLock()
+
+	// Make sure we don't wrap around infinitely
+	if numCheck >= len(m.nodes) {
+		m.nodeLock.RUnlock()
+		return
+	}
+
+	// Handle the wrap around case
+	if m.probeIndex >= len(m.nodes) {
+		m.nodeLock.RUnlock()
+		m.resetNodes()
+		m.probeIndex = 0
+		numCheck++
+		goto START
+	}
+
+	// Determine if we should probe this node
+	skip := false
+	var node nodeState
+
+	node = *m.nodes[m.probeIndex]
+	if node.Name == m.config.Name {
+		skip = true
+	} else if node.State == stateDead {
+		skip = true
+	}
+
+	// Potentially skip
+	m.nodeLock.RUnlock()
+	m.probeIndex++
+	if skip {
+		numCheck++
+		goto START
+	}
+
+	// Probe the specific node
+	m.probeNode(&node)
+}
+
+// probeNode handles a single round of failure checking on a node.
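+//
+// (Illustrative numbers, assuming the awareness scaling is linear in the
+// health score: with ProbeInterval = 1s and a health score of 3,
+// ScaleTimeout yields a 4s probe interval, and the degraded-probe counter
+// below is incremented.)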
+func (m *Memberlist) probeNode(node *nodeState) { + defer metrics.MeasureSince([]string{"memberlist", "probeNode"}, time.Now()) + + // We use our health awareness to scale the overall probe interval, so we + // slow down if we detect problems. The ticker that calls us can handle + // us running over the base interval, and will skip missed ticks. + probeInterval := m.awareness.ScaleTimeout(m.config.ProbeInterval) + if probeInterval > m.config.ProbeInterval { + metrics.IncrCounter([]string{"memberlist", "degraded", "probe"}, 1) + } + + // Prepare a ping message and setup an ack handler. + ping := ping{SeqNo: m.nextSeqNo(), Node: node.Name} + ackCh := make(chan ackMessage, m.config.IndirectChecks+1) + nackCh := make(chan struct{}, m.config.IndirectChecks+1) + m.setProbeChannels(ping.SeqNo, ackCh, nackCh, probeInterval) + + // Send a ping to the node. If this node looks like it's suspect or dead, + // also tack on a suspect message so that it has a chance to refute as + // soon as possible. + deadline := time.Now().Add(probeInterval) + destAddr := &net.UDPAddr{IP: node.Addr, Port: int(node.Port)} + if node.State == stateAlive { + if err := m.encodeAndSendMsg(destAddr, pingMsg, &ping); err != nil { + m.logger.Printf("[ERR] memberlist: Failed to send ping: %s", err) + return + } + } else { + var msgs [][]byte + if buf, err := encode(pingMsg, &ping); err != nil { + m.logger.Printf("[ERR] memberlist: Failed to encode ping message: %s", err) + return + } else { + msgs = append(msgs, buf.Bytes()) + } + s := suspect{Incarnation: node.Incarnation, Node: node.Name, From: m.config.Name} + if buf, err := encode(suspectMsg, &s); err != nil { + m.logger.Printf("[ERR] memberlist: Failed to encode suspect message: %s", err) + return + } else { + msgs = append(msgs, buf.Bytes()) + } + + compound := makeCompoundMessage(msgs) + if err := m.rawSendMsgUDP(destAddr, &node.Node, compound.Bytes()); err != nil { + m.logger.Printf("[ERR] memberlist: Failed to send compound ping and suspect message to %s: %s", destAddr, err) + return + } + } + + // Mark the sent time here, which should be after any pre-processing and + // system calls to do the actual send. This probably under-reports a bit, + // but it's the best we can do. + sent := time.Now() + + // Arrange for our self-awareness to get updated. At this point we've + // sent the ping, so any return statement means the probe succeeded + // which will improve our health until we get to the failure scenarios + // at the end of this function, which will alter this delta variable + // accordingly. + awarenessDelta := -1 + defer func() { + m.awareness.ApplyDelta(awarenessDelta) + }() + + // Wait for response or round-trip-time. + select { + case v := <-ackCh: + if v.Complete == true { + if m.config.Ping != nil { + rtt := v.Timestamp.Sub(sent) + m.config.Ping.NotifyPingComplete(&node.Node, rtt, v.Payload) + } + return + } + + // As an edge case, if we get a timeout, we need to re-enqueue it + // here to break out of the select below. + if v.Complete == false { + ackCh <- v + } + case <-time.After(m.config.ProbeTimeout): + // Note that we don't scale this timeout based on awareness and + // the health score. That's because we don't really expect waiting + // longer to help get UDP through. Since health does extend the + // probe interval it will give the TCP fallback more time, which + // is more active in dealing with lost packets, and it gives more + // time to wait for indirect acks/nacks. 
+ m.logger.Printf("[DEBUG] memberlist: Failed UDP ping: %v (timeout reached)", node.Name) + } + + // Get some random live nodes. + m.nodeLock.RLock() + kNodes := kRandomNodes(m.config.IndirectChecks, m.nodes, func(n *nodeState) bool { + return n.Name == m.config.Name || + n.Name == node.Name || + n.State != stateAlive + }) + m.nodeLock.RUnlock() + + // Attempt an indirect ping. + expectedNacks := 0 + ind := indirectPingReq{SeqNo: ping.SeqNo, Target: node.Addr, Port: node.Port, Node: node.Name} + for _, peer := range kNodes { + // We only expect nack to be sent from peers who understand + // version 4 of the protocol. + if ind.Nack = peer.PMax >= 4; ind.Nack { + expectedNacks++ + } + + destAddr := &net.UDPAddr{IP: peer.Addr, Port: int(peer.Port)} + if err := m.encodeAndSendMsg(destAddr, indirectPingMsg, &ind); err != nil { + m.logger.Printf("[ERR] memberlist: Failed to send indirect ping: %s", err) + } + } + + // Also make an attempt to contact the node directly over TCP. This + // helps prevent confused clients who get isolated from UDP traffic + // but can still speak TCP (which also means they can possibly report + // misinformation to other nodes via anti-entropy), avoiding flapping in + // the cluster. + // + // This is a little unusual because we will attempt a TCP ping to any + // member who understands version 3 of the protocol, regardless of + // which protocol version we are speaking. That's why we've included a + // config option to turn this off if desired. + fallbackCh := make(chan bool, 1) + if (!m.config.DisableTcpPings) && (node.PMax >= 3) { + destAddr := &net.TCPAddr{IP: node.Addr, Port: int(node.Port)} + go func() { + defer close(fallbackCh) + didContact, err := m.sendPingAndWaitForAck(destAddr, ping, deadline) + if err != nil { + m.logger.Printf("[ERR] memberlist: Failed TCP fallback ping: %s", err) + } else { + fallbackCh <- didContact + } + }() + } else { + close(fallbackCh) + } + + // Wait for the acks or timeout. Note that we don't check the fallback + // channel here because we want to issue a warning below if that's the + // *only* way we hear back from the peer, so we have to let this time + // out first to allow the normal UDP-based acks to come in. + select { + case v := <-ackCh: + if v.Complete == true { + return + } + } + + // Finally, poll the fallback channel. The timeouts are set such that + // the channel will have something or be closed without having to wait + // any additional time here. + for didContact := range fallbackCh { + if didContact { + m.logger.Printf("[WARN] memberlist: Was able to reach %s via TCP but not UDP, network may be misconfigured and not allowing bidirectional UDP", node.Name) + return + } + } + + // Update our self-awareness based on the results of this failed probe. + // If we don't have peers who will send nacks then we penalize for any + // failed probe as a simple health metric. If we do have peers to nack + // verify, then we can use that as a more sophisticated measure of self- + // health because we assume them to be working, and they can help us + // decide if the probed node was really dead or if it was something wrong + // with ourselves. + awarenessDelta = 0 + if expectedNacks > 0 { + if nackCount := len(nackCh); nackCount < expectedNacks { + awarenessDelta += 2 * (expectedNacks - nackCount) + } + } else { + awarenessDelta += 1 + } + + // No acks received from target, suspect it as failed. 
+ m.logger.Printf("[INFO] memberlist: Suspect %s has failed, no acks received", node.Name) + s := suspect{Incarnation: node.Incarnation, Node: node.Name, From: m.config.Name} + m.suspectNode(&s) +} + +// Ping initiates a ping to the node with the specified name. +func (m *Memberlist) Ping(node string, addr net.Addr) (time.Duration, error) { + // Prepare a ping message and setup an ack handler. + ping := ping{SeqNo: m.nextSeqNo(), Node: node} + ackCh := make(chan ackMessage, m.config.IndirectChecks+1) + m.setProbeChannels(ping.SeqNo, ackCh, nil, m.config.ProbeInterval) + + // Send a ping to the node. + if err := m.encodeAndSendMsg(addr, pingMsg, &ping); err != nil { + return 0, err + } + + // Mark the sent time here, which should be after any pre-processing and + // system calls to do the actual send. This probably under-reports a bit, + // but it's the best we can do. + sent := time.Now() + + // Wait for response or timeout. + select { + case v := <-ackCh: + if v.Complete == true { + return v.Timestamp.Sub(sent), nil + } + case <-time.After(m.config.ProbeTimeout): + // Timeout, return an error below. + } + + m.logger.Printf("[DEBUG] memberlist: Failed UDP ping: %v (timeout reached)", node) + return 0, NoPingResponseError{ping.Node} +} + +// resetNodes is used when the tick wraps around. It will reap the +// dead nodes and shuffle the node list. +func (m *Memberlist) resetNodes() { + m.nodeLock.Lock() + defer m.nodeLock.Unlock() + + // Move dead nodes, but respect gossip to the dead interval + deadIdx := moveDeadNodes(m.nodes, m.config.GossipToTheDeadTime) + + // Deregister the dead nodes + for i := deadIdx; i < len(m.nodes); i++ { + delete(m.nodeMap, m.nodes[i].Name) + m.nodes[i] = nil + } + + // Trim the nodes to exclude the dead nodes + m.nodes = m.nodes[0:deadIdx] + + // Update numNodes after we've trimmed the dead nodes + atomic.StoreUint32(&m.numNodes, uint32(deadIdx)) + + // Shuffle live nodes + shuffleNodes(m.nodes) +} + +// gossip is invoked every GossipInterval period to broadcast our gossip +// messages to a few random nodes. 
+func (m *Memberlist) gossip() { + defer metrics.MeasureSince([]string{"memberlist", "gossip"}, time.Now()) + + // Get some random live, suspect, or recently dead nodes + m.nodeLock.RLock() + kNodes := kRandomNodes(m.config.GossipNodes, m.nodes, func(n *nodeState) bool { + if n.Name == m.config.Name { + return true + } + + switch n.State { + case stateAlive, stateSuspect: + return false + + case stateDead: + return time.Since(n.StateChange) > m.config.GossipToTheDeadTime + + default: + return true + } + }) + m.nodeLock.RUnlock() + + // Compute the bytes available + bytesAvail := m.config.UDPBufferSize - compoundHeaderOverhead + if m.config.EncryptionEnabled() { + bytesAvail -= encryptOverhead(m.encryptionVersion()) + } + + for _, node := range kNodes { + // Get any pending broadcasts + msgs := m.getBroadcasts(compoundOverhead, bytesAvail) + if len(msgs) == 0 { + return + } + + destAddr := &net.UDPAddr{IP: node.Addr, Port: int(node.Port)} + + if len(msgs) == 1 { + // Send single message as is + if err := m.rawSendMsgUDP(destAddr, &node.Node, msgs[0]); err != nil { + m.logger.Printf("[ERR] memberlist: Failed to send gossip to %s: %s", destAddr, err) + } + } else { + // Otherwise create and send a compound message + compound := makeCompoundMessage(msgs) + if err := m.rawSendMsgUDP(destAddr, &node.Node, compound.Bytes()); err != nil { + m.logger.Printf("[ERR] memberlist: Failed to send gossip to %s: %s", destAddr, err) + } + } + } +} + +// pushPull is invoked periodically to randomly perform a complete state +// exchange. Used to ensure a high level of convergence, but is also +// reasonably expensive as the entire state of this node is exchanged +// with the other node. +func (m *Memberlist) pushPull() { + // Get a random live node + m.nodeLock.RLock() + nodes := kRandomNodes(1, m.nodes, func(n *nodeState) bool { + return n.Name == m.config.Name || + n.State != stateAlive + }) + m.nodeLock.RUnlock() + + // If no nodes, bail + if len(nodes) == 0 { + return + } + node := nodes[0] + + // Attempt a push pull + if err := m.pushPullNode(node.Addr, node.Port, false); err != nil { + m.logger.Printf("[ERR] memberlist: Push/Pull with %s failed: %s", node.Name, err) + } +} + +// pushPullNode does a complete state exchange with a specific node. +func (m *Memberlist) pushPullNode(addr []byte, port uint16, join bool) error { + defer metrics.MeasureSince([]string{"memberlist", "pushPullNode"}, time.Now()) + + // Attempt to send and receive with the node + remote, userState, err := m.sendAndReceiveState(addr, port, join) + if err != nil { + return err + } + + if err := m.mergeRemoteState(join, remote, userState); err != nil { + return err + } + return nil +} + +// verifyProtocol verifies that all the remote nodes can speak with our +// nodes and vice versa on both the core protocol as well as the +// delegate protocol level. +// +// The verification works by finding the maximum minimum and +// minimum maximum understood protocol and delegate versions. In other words, +// it finds the common denominator of protocol and delegate version ranges +// for the entire cluster. +// +// After this, it goes through the entire cluster (local and remote) and +// verifies that everyone's speaking protocol versions satisfy this range. +// If this passes, it means that every node can understand each other. 
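+//
+// Worked example: for nodes advertising protocol ranges [1,3], [2,5] and
+// [2,4], the cluster-wide window is [max(1,2,2), min(3,5,4)] = [2,3]; any
+// node whose current protocol version falls outside 2..3 causes an
+// incompatibility error below.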
+func (m *Memberlist) verifyProtocol(remote []pushNodeState) error {
+	m.nodeLock.RLock()
+	defer m.nodeLock.RUnlock()
+
+	// Maximum minimum understood and minimum maximum understood for both
+	// the protocol and delegate versions. We use this to verify everyone
+	// can be understood.
+	var maxpmin, minpmax uint8
+	var maxdmin, mindmax uint8
+	minpmax = math.MaxUint8
+	mindmax = math.MaxUint8
+
+	for _, rn := range remote {
+		// If the node isn't alive, then skip it
+		if rn.State != stateAlive {
+			continue
+		}
+
+		// Skip nodes that don't have versions set; it just means
+		// their version is zero.
+		if len(rn.Vsn) == 0 {
+			continue
+		}
+
+		if rn.Vsn[0] > maxpmin {
+			maxpmin = rn.Vsn[0]
+		}
+
+		if rn.Vsn[1] < minpmax {
+			minpmax = rn.Vsn[1]
+		}
+
+		if rn.Vsn[3] > maxdmin {
+			maxdmin = rn.Vsn[3]
+		}
+
+		if rn.Vsn[4] < mindmax {
+			mindmax = rn.Vsn[4]
+		}
+	}
+
+	for _, n := range m.nodes {
+		// Ignore non-alive nodes
+		if n.State != stateAlive {
+			continue
+		}
+
+		if n.PMin > maxpmin {
+			maxpmin = n.PMin
+		}
+
+		if n.PMax < minpmax {
+			minpmax = n.PMax
+		}
+
+		if n.DMin > maxdmin {
+			maxdmin = n.DMin
+		}
+
+		if n.DMax < mindmax {
+			mindmax = n.DMax
+		}
+	}
+
+	// Now that we definitively know the minimum and maximum understood
+	// version that satisfies the whole cluster, we verify that every
+	// node in the cluster satisfies this.
+	for _, n := range remote {
+		var nPCur, nDCur uint8
+		if len(n.Vsn) > 0 {
+			nPCur = n.Vsn[2]
+			nDCur = n.Vsn[5]
+		}
+
+		if nPCur < maxpmin || nPCur > minpmax {
+			return fmt.Errorf(
+				"Node '%s' protocol version (%d) is incompatible: [%d, %d]",
+				n.Name, nPCur, maxpmin, minpmax)
+		}
+
+		if nDCur < maxdmin || nDCur > mindmax {
+			return fmt.Errorf(
+				"Node '%s' delegate protocol version (%d) is incompatible: [%d, %d]",
+				n.Name, nDCur, maxdmin, mindmax)
+		}
+	}
+
+	for _, n := range m.nodes {
+		nPCur := n.PCur
+		nDCur := n.DCur
+
+		if nPCur < maxpmin || nPCur > minpmax {
+			return fmt.Errorf(
+				"Node '%s' protocol version (%d) is incompatible: [%d, %d]",
+				n.Name, nPCur, maxpmin, minpmax)
+		}
+
+		if nDCur < maxdmin || nDCur > mindmax {
+			return fmt.Errorf(
+				"Node '%s' delegate protocol version (%d) is incompatible: [%d, %d]",
+				n.Name, nDCur, maxdmin, mindmax)
+		}
+	}
+
+	return nil
+}
+
+// nextSeqNo returns a usable sequence number in a thread-safe way
+func (m *Memberlist) nextSeqNo() uint32 {
+	return atomic.AddUint32(&m.sequenceNum, 1)
+}
+
+// nextIncarnation returns the next incarnation number in a thread-safe way
+func (m *Memberlist) nextIncarnation() uint32 {
+	return atomic.AddUint32(&m.incarnation, 1)
+}
+
+// skipIncarnation adds the positive offset to the incarnation number.
+func (m *Memberlist) skipIncarnation(offset uint32) uint32 {
+	return atomic.AddUint32(&m.incarnation, offset)
+}
+
+// estNumNodes is used to get the current estimate of the number of nodes
+func (m *Memberlist) estNumNodes() int {
+	return int(atomic.LoadUint32(&m.numNodes))
+}
+
+type ackMessage struct {
+	Complete bool
+	Payload []byte
+	Timestamp time.Time
+}
+
+// setProbeChannels is used to attach the ackCh to receive a message when an ack
+// with a given sequence number is received. The `Complete` field of the message
+// will be false on timeout. Any nack messages will cause an empty struct to be
+// passed to the nackCh, which can be nil if not needed. 
+func (m *Memberlist) setProbeChannels(seqNo uint32, ackCh chan ackMessage, nackCh chan struct{}, timeout time.Duration) {
+	// Create handler functions for acks and nacks
+	ackFn := func(payload []byte, timestamp time.Time) {
+		select {
+		case ackCh <- ackMessage{true, payload, timestamp}:
+		default:
+		}
+	}
+	nackFn := func() {
+		select {
+		case nackCh <- struct{}{}:
+		default:
+		}
+	}
+
+	// Add the handlers
+	ah := &ackHandler{ackFn, nackFn, nil}
+	m.ackLock.Lock()
+	m.ackHandlers[seqNo] = ah
+	m.ackLock.Unlock()
+
+	// Set up a reaping routine
+	ah.timer = time.AfterFunc(timeout, func() {
+		m.ackLock.Lock()
+		delete(m.ackHandlers, seqNo)
+		m.ackLock.Unlock()
+		select {
+		case ackCh <- ackMessage{false, nil, time.Now()}:
+		default:
+		}
+	})
+}
+
+// setAckHandler is used to attach a handler to be invoked when an ack with a
+// given sequence number is received. If a timeout is reached, the handler is
+// deleted. This is used for indirect pings, so it does not configure a function
+// for nacks.
+func (m *Memberlist) setAckHandler(seqNo uint32, ackFn func([]byte, time.Time), timeout time.Duration) {
+	// Add the handler
+	ah := &ackHandler{ackFn, nil, nil}
+	m.ackLock.Lock()
+	m.ackHandlers[seqNo] = ah
+	m.ackLock.Unlock()
+
+	// Set up a reaping routine
+	ah.timer = time.AfterFunc(timeout, func() {
+		m.ackLock.Lock()
+		delete(m.ackHandlers, seqNo)
+		m.ackLock.Unlock()
+	})
+}
+
+// Invokes an ack handler if any is associated, and reaps the handler immediately
+func (m *Memberlist) invokeAckHandler(ack ackResp, timestamp time.Time) {
+	m.ackLock.Lock()
+	ah, ok := m.ackHandlers[ack.SeqNo]
+	delete(m.ackHandlers, ack.SeqNo)
+	m.ackLock.Unlock()
+	if !ok {
+		return
+	}
+	ah.timer.Stop()
+	ah.ackFn(ack.Payload, timestamp)
+}
+
+// Invokes a nack handler if any is associated.
+func (m *Memberlist) invokeNackHandler(nack nackResp) {
+	m.ackLock.Lock()
+	ah, ok := m.ackHandlers[nack.SeqNo]
+	m.ackLock.Unlock()
+	if !ok || ah.nackFn == nil {
+		return
+	}
+	ah.nackFn()
+}
+
+// refute gossips an alive message in response to incoming information that we
+// are suspect or dead. It will make sure the incarnation number beats the given
+// accusedInc value, or you can supply 0 to just get the next incarnation number.
+// This alters the node state that's passed in, so this MUST be called while the
+// nodeLock is held.
+func (m *Memberlist) refute(me *nodeState, accusedInc uint32) {
+	// Make sure the incarnation number beats the accusation.
+	inc := m.nextIncarnation()
+	if accusedInc >= inc {
+		inc = m.skipIncarnation(accusedInc - inc + 1)
+	}
+	me.Incarnation = inc
+
+	// Decrease our health because we are being asked to refute a problem.
+	m.awareness.ApplyDelta(1)
+
+	// Format and broadcast an alive message.
+	a := alive{
+		Incarnation: inc,
+		Node: me.Name,
+		Addr: me.Addr,
+		Port: me.Port,
+		Meta: me.Meta,
+		Vsn: []uint8{
+			me.PMin, me.PMax, me.PCur,
+			me.DMin, me.DMax, me.DCur,
+		},
+	}
+	m.encodeAndBroadcast(me.Addr.String(), aliveMsg, a)
+}
+
+// aliveNode is invoked by the network layer when we get a message about a
+// live node.
+func (m *Memberlist) aliveNode(a *alive, notify chan struct{}, bootstrap bool) {
+	m.nodeLock.Lock()
+	defer m.nodeLock.Unlock()
+	state, ok := m.nodeMap[a.Node]
+
+	// It is possible that during a Leave(), there is already an aliveMsg
+	// in-queue to be processed but blocked by the locks above. If we let
+	// that aliveMsg process, it'll cause us to re-join the cluster. This
+	// ensures that we don't. 
+	if m.leave && a.Node == m.config.Name {
+		return
+	}
+
+	// Invoke the Alive delegate if any. This can be used to filter out
+	// alive messages based on custom logic. For example, using a cluster name.
+	// Using a merge delegate is not enough, as it is possible for passive
+	// cluster merging to still occur.
+	if m.config.Alive != nil {
+		node := &Node{
+			Name: a.Node,
+			Addr: a.Addr,
+			Port: a.Port,
+			Meta: a.Meta,
+			PMin: a.Vsn[0],
+			PMax: a.Vsn[1],
+			PCur: a.Vsn[2],
+			DMin: a.Vsn[3],
+			DMax: a.Vsn[4],
+			DCur: a.Vsn[5],
+		}
+		if err := m.config.Alive.NotifyAlive(node); err != nil {
+			m.logger.Printf("[WARN] memberlist: ignoring alive message for '%s': %s",
+				a.Node, err)
+			return
+		}
+	}
+
+	// Check if we've seen this node before; if not, store it in our
+	// node map.
+	if !ok {
+		state = &nodeState{
+			Node: Node{
+				Name: a.Node,
+				Addr: a.Addr,
+				Port: a.Port,
+				Meta: a.Meta,
+			},
+			State: stateDead,
+		}
+
+		// Add to map
+		m.nodeMap[a.Node] = state
+
+		// Get a random offset. This is important to ensure
+		// the failure detection bound is low on average. If all
+		// nodes did an append, failure detection bound would be
+		// very high.
+		n := len(m.nodes)
+		offset := randomOffset(n)
+
+		// Add at the end and swap with the node at the offset
+		m.nodes = append(m.nodes, state)
+		m.nodes[offset], m.nodes[n] = m.nodes[n], m.nodes[offset]
+
+		// Update numNodes after we've added a new node
+		atomic.AddUint32(&m.numNodes, 1)
+	}
+
+	// Check if this address is different from that of the existing node
+	if !bytes.Equal([]byte(state.Addr), a.Addr) || state.Port != a.Port {
+		m.logger.Printf("[ERR] memberlist: Conflicting address for %s. Mine: %v:%d Theirs: %v:%d",
+			state.Name, state.Addr, state.Port, net.IP(a.Addr), a.Port)
+
+		// Inform the conflict delegate if provided
+		if m.config.Conflict != nil {
+			other := Node{
+				Name: a.Node,
+				Addr: a.Addr,
+				Port: a.Port,
+				Meta: a.Meta,
+			}
+			m.config.Conflict.NotifyConflict(&state.Node, &other)
+		}
+		return
+	}
+
+	// Bail if the incarnation number is older, and this is not about us
+	isLocalNode := state.Name == m.config.Name
+	if a.Incarnation <= state.Incarnation && !isLocalNode {
+		return
+	}
+
+	// Bail if strictly less and this is about us
+	if a.Incarnation < state.Incarnation && isLocalNode {
+		return
+	}
+
+	// Clear out any suspicion timer that may be in effect.
+	delete(m.nodeTimers, a.Node)
+
+	// Store the old state and metadata
+	oldState := state.State
+	oldMeta := state.Meta
+
+	// If this is us we need to refute, otherwise re-broadcast
+	if !bootstrap && isLocalNode {
+		// Compute the version vector
+		versions := []uint8{
+			state.PMin, state.PMax, state.PCur,
+			state.DMin, state.DMax, state.DCur,
+		}
+
+		// If the Incarnation is the same, we need special handling, since it
+		// is possible for the following situation to happen:
+		// 1) Start with configuration C, join cluster
+		// 2) Hard fail / Kill / Shutdown
+		// 3) Restart with configuration C', join cluster
+		//
+		// In this case, other nodes and the local node see the same incarnation,
+		// but the values may not be the same. For this reason, we always
+		// need to do an equality check for this Incarnation. In most cases,
+		// we just ignore, but we may need to refute. 
+		//
+		if a.Incarnation == state.Incarnation &&
+			bytes.Equal(a.Meta, state.Meta) &&
+			bytes.Equal(a.Vsn, versions) {
+			return
+		}
+
+		m.refute(state, a.Incarnation)
+		m.logger.Printf("[WARN] memberlist: Refuting an alive message")
+	} else {
+		m.encodeBroadcastNotify(a.Node, aliveMsg, a, notify)
+
+		// Update protocol versions if they arrived
+		if len(a.Vsn) > 0 {
+			state.PMin = a.Vsn[0]
+			state.PMax = a.Vsn[1]
+			state.PCur = a.Vsn[2]
+			state.DMin = a.Vsn[3]
+			state.DMax = a.Vsn[4]
+			state.DCur = a.Vsn[5]
+		}
+
+		// Update the state and incarnation number
+		state.Incarnation = a.Incarnation
+		state.Meta = a.Meta
+		if state.State != stateAlive {
+			state.State = stateAlive
+			state.StateChange = time.Now()
+		}
+	}
+
+	// Update metrics
+	metrics.IncrCounter([]string{"memberlist", "msg", "alive"}, 1)
+
+	// Notify the delegate of any relevant updates
+	if m.config.Events != nil {
+		if oldState == stateDead {
+			// if Dead -> Alive, notify of join
+			m.config.Events.NotifyJoin(&state.Node)
+
+		} else if !bytes.Equal(oldMeta, state.Meta) {
+			// if Meta changed, trigger an update notification
+			m.config.Events.NotifyUpdate(&state.Node)
+		}
+	}
+}
+
+// suspectNode is invoked by the network layer when we get a message
+// about a suspect node
+func (m *Memberlist) suspectNode(s *suspect) {
+	m.nodeLock.Lock()
+	defer m.nodeLock.Unlock()
+	state, ok := m.nodeMap[s.Node]
+
+	// If we've never heard about this node before, ignore it
+	if !ok {
+		return
+	}
+
+	// Ignore old incarnation numbers
+	if s.Incarnation < state.Incarnation {
+		return
+	}
+
+	// See if there's a suspicion timer we can confirm. If the info is new
+	// to us we will go ahead and re-gossip it. This allows for multiple
+	// independent confirmations to flow even when a node probes a node
+	// that's already suspect.
+	if timer, ok := m.nodeTimers[s.Node]; ok {
+		if timer.Confirm(s.From) {
+			m.encodeAndBroadcast(s.Node, suspectMsg, s)
+		}
+		return
+	}
+
+	// Ignore non-alive nodes
+	if state.State != stateAlive {
+		return
+	}
+
+	// If this is us we need to refute, otherwise re-broadcast
+	if state.Name == m.config.Name {
+		m.refute(state, s.Incarnation)
+		m.logger.Printf("[WARN] memberlist: Refuting a suspect message (from: %s)", s.From)
+		return // Do not mark ourself suspect
+	} else {
+		m.encodeAndBroadcast(s.Node, suspectMsg, s)
+	}
+
+	// Update metrics
+	metrics.IncrCounter([]string{"memberlist", "msg", "suspect"}, 1)
+
+	// Update the state
+	state.Incarnation = s.Incarnation
+	state.State = stateSuspect
+	changeTime := time.Now()
+	state.StateChange = changeTime
+
+	// Set up a suspicion timer. Given that we don't have any known phase
+	// relationship with our peers, we set up k such that we hit the nominal
+	// timeout two probe intervals short of what we expect given the suspicion
+	// multiplier.
+	k := m.config.SuspicionMult - 2
+
+	// If there aren't enough nodes to give the expected confirmations, just
+	// set k to 0 to say that we don't expect any. Note we subtract 2 from n
+	// here to take out ourselves and the node being probed.
+	n := m.estNumNodes()
+	if n-2 < k {
+		k = 0
+	}
+
+	// Compute the timeouts based on the size of the cluster. 
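+	// As a worked example with hypothetical values (not necessarily the
+	// library defaults): with SuspicionMult = 4, ProbeInterval = 1s and an
+	// estimated cluster of n = 100 nodes, suspicionTimeout (util.go) gives
+	// min = 4 * log10(100) * 1s = 8s, and max is SuspicionMaxTimeoutMult
+	// times that.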
+ min := suspicionTimeout(m.config.SuspicionMult, n, m.config.ProbeInterval) + max := time.Duration(m.config.SuspicionMaxTimeoutMult) * min + fn := func(numConfirmations int) { + m.nodeLock.Lock() + state, ok := m.nodeMap[s.Node] + timeout := ok && state.State == stateSuspect && state.StateChange == changeTime + m.nodeLock.Unlock() + + if timeout { + if k > 0 && numConfirmations < k { + metrics.IncrCounter([]string{"memberlist", "degraded", "timeout"}, 1) + } + + m.logger.Printf("[INFO] memberlist: Marking %s as failed, suspect timeout reached (%d peer confirmations)", + state.Name, numConfirmations) + d := dead{Incarnation: state.Incarnation, Node: state.Name, From: m.config.Name} + m.deadNode(&d) + } + } + m.nodeTimers[s.Node] = newSuspicion(s.From, k, min, max, fn) +} + +// deadNode is invoked by the network layer when we get a message +// about a dead node +func (m *Memberlist) deadNode(d *dead) { + m.nodeLock.Lock() + defer m.nodeLock.Unlock() + state, ok := m.nodeMap[d.Node] + + // If we've never heard about this node before, ignore it + if !ok { + return + } + + // Ignore old incarnation numbers + if d.Incarnation < state.Incarnation { + return + } + + // Clear out any suspicion timer that may be in effect. + delete(m.nodeTimers, d.Node) + + // Ignore if node is already dead + if state.State == stateDead { + return + } + + // Check if this is us + if state.Name == m.config.Name { + // If we are not leaving we need to refute + if !m.leave { + m.refute(state, d.Incarnation) + m.logger.Printf("[WARN] memberlist: Refuting a dead message (from: %s)", d.From) + return // Do not mark ourself dead + } + + // If we are leaving, we broadcast and wait + m.encodeBroadcastNotify(d.Node, deadMsg, d, m.leaveBroadcast) + } else { + m.encodeAndBroadcast(d.Node, deadMsg, d) + } + + // Update metrics + metrics.IncrCounter([]string{"memberlist", "msg", "dead"}, 1) + + // Update the state + state.Incarnation = d.Incarnation + state.State = stateDead + state.StateChange = time.Now() + + // Notify of death + if m.config.Events != nil { + m.config.Events.NotifyLeave(&state.Node) + } +} + +// mergeState is invoked by the network layer when we get a Push/Pull +// state transfer +func (m *Memberlist) mergeState(remote []pushNodeState) { + for _, r := range remote { + switch r.State { + case stateAlive: + a := alive{ + Incarnation: r.Incarnation, + Node: r.Name, + Addr: r.Addr, + Port: r.Port, + Meta: r.Meta, + Vsn: r.Vsn, + } + m.aliveNode(&a, nil, false) + + case stateDead: + // If the remote node believes a node is dead, we prefer to + // suspect that node instead of declaring it dead instantly + fallthrough + case stateSuspect: + s := suspect{Incarnation: r.Incarnation, Node: r.Name, From: m.config.Name} + m.suspectNode(&s) + } + } +} diff --git a/vendor/github.com/hashicorp/memberlist/suspicion.go b/vendor/github.com/hashicorp/memberlist/suspicion.go new file mode 100644 index 0000000000..5f573e1fc6 --- /dev/null +++ b/vendor/github.com/hashicorp/memberlist/suspicion.go @@ -0,0 +1,130 @@ +package memberlist + +import ( + "math" + "sync/atomic" + "time" +) + +// suspicion manages the suspect timer for a node and provides an interface +// to accelerate the timeout as we get more independent confirmations that +// a node is suspect. +type suspicion struct { + // n is the number of independent confirmations we've seen. This must + // be updated using atomic instructions to prevent contention with the + // timer callback. 
+	n int32
+
+	// k is the number of independent confirmations we'd like to see in
+	// order to drive the timer to its minimum value.
+	k int32
+
+	// min is the minimum timer value.
+	min time.Duration
+
+	// max is the maximum timer value.
+	max time.Duration
+
+	// start captures the timestamp when we began the timer. This is used
+	// so we can calculate durations to feed the timer during updates in
+	// a way that achieves the overall time we'd like.
+	start time.Time
+
+	// timer is the underlying timer that implements the timeout.
+	timer *time.Timer
+
+	// timeoutFn is the function to call when the timer expires. We hold on
+	// to this because there are cases where we call it directly.
+	timeoutFn func()
+
+	// confirmations is a map of "from" nodes that have confirmed a given
+	// node is suspect. This prevents double counting.
+	confirmations map[string]struct{}
+}
+
+// newSuspicion returns a suspicion timer started with the max time, and that
+// will drive to the min time after seeing k or more confirmations. The from
+// node will be excluded from confirmations since we might get our own suspicion
+// message gossiped back to us. The minimum time will be used if no
+// confirmations are called for (k <= 0).
+func newSuspicion(from string, k int, min time.Duration, max time.Duration, fn func(int)) *suspicion {
+	s := &suspicion{
+		k: int32(k),
+		min: min,
+		max: max,
+		confirmations: make(map[string]struct{}),
+	}
+
+	// Exclude the from node from any confirmations.
+	s.confirmations[from] = struct{}{}
+
+	// Pass the number of confirmations into the timeout function for
+	// easy telemetry.
+	s.timeoutFn = func() {
+		fn(int(atomic.LoadInt32(&s.n)))
+	}
+
+	// If there aren't any confirmations to be made then take the min
+	// time from the start.
+	timeout := max
+	if k < 1 {
+		timeout = min
+	}
+	s.timer = time.AfterFunc(timeout, s.timeoutFn)
+
+	// Capture the start time right after starting the timer above, so that
+	// if any preemption separates this from the step above we err on the
+	// side of a slightly longer timeout.
+	s.start = time.Now()
+	return s
+}
+
+// remainingSuspicionTime takes the state variables of the suspicion timer and
+// calculates the remaining time to wait before considering a node dead. The
+// return value can be negative, so be prepared to fire the timer immediately in
+// that case.
+func remainingSuspicionTime(n, k int32, elapsed time.Duration, min, max time.Duration) time.Duration {
+	frac := math.Log(float64(n)+1.0) / math.Log(float64(k)+1.0)
+	raw := max.Seconds() - frac*(max.Seconds()-min.Seconds())
+	timeout := time.Duration(math.Floor(1000.0*raw)) * time.Millisecond
+	if timeout < min {
+		timeout = min
+	}
+
+	// We have to take into account the amount of time that has passed so
+	// far, so we get the right overall timeout.
+	return timeout - elapsed
+}
+
+// Confirm registers that a possibly new peer has also determined the given
+// node is suspect. This returns true if this was new information, and false
+// if it was a duplicate confirmation, or if we've got enough confirmations to
+// hit the minimum.
+func (s *suspicion) Confirm(from string) bool {
+	// If we've got enough confirmations then stop accepting them.
+	if atomic.LoadInt32(&s.n) >= s.k {
+		return false
+	}
+
+	// Only allow one confirmation from each possible peer.
+	if _, ok := s.confirmations[from]; ok {
+		return false
+	}
+	s.confirmations[from] = struct{}{}
+
+	// Compute the new timeout given the current number of confirmations and
+	// adjust the timer. 
If the timeout becomes negative *and* we can cleanly
+	// stop the timer then we will call the timeout function directly from
+	// here.
+	n := atomic.AddInt32(&s.n, 1)
+	elapsed := time.Since(s.start)
+	remaining := remainingSuspicionTime(n, s.k, elapsed, s.min, s.max)
+	if s.timer.Stop() {
+		if remaining > 0 {
+			s.timer.Reset(remaining)
+		} else {
+			go s.timeoutFn()
+		}
+	}
+	return true
+}
diff --git a/vendor/github.com/hashicorp/memberlist/todo.md b/vendor/github.com/hashicorp/memberlist/todo.md
new file mode 100644
index 0000000000..009c1d647a
--- /dev/null
+++ b/vendor/github.com/hashicorp/memberlist/todo.md
@@ -0,0 +1,6 @@
+# TODO
+* Dynamic RTT discovery
+    * Compute 99th percentile for ping/ack
+    * Better lower bound for ping/ack, faster failure detection
+* Dynamic MTU discovery
+    * Prevent lost updates, increases efficiency
diff --git a/vendor/github.com/hashicorp/memberlist/util.go b/vendor/github.com/hashicorp/memberlist/util.go
new file mode 100644
index 0000000000..2ee58ba10b
--- /dev/null
+++ b/vendor/github.com/hashicorp/memberlist/util.go
@@ -0,0 +1,288 @@
+package memberlist
+
+import (
+	"bytes"
+	"compress/lzw"
+	"encoding/binary"
+	"fmt"
+	"io"
+	"math"
+	"math/rand"
+	"strings"
+	"time"
+
+	"github.com/hashicorp/go-msgpack/codec"
+	"github.com/sean-/seed"
+)
+
+// pushPullScaleThreshold is the minimum number of nodes
+// before we start scaling the push/pull timing. The scale
+// effect is log2(Nodes) - log2(pushPullScaleThreshold). This means
+// that the 33rd node will cause us to double the interval,
+// while the 65th will triple it.
+const pushPullScaleThreshold = 32
+
+const (
+	// lzw literal width: must be in the range [2, 8]; we use the
+	// maximum since we compress raw bytes
+	lzwLitWidth = 8
+)
+
+func init() {
+	seed.Init()
+}
+
+// Decode reverses the encode operation on a byte slice input
+func decode(buf []byte, out interface{}) error {
+	r := bytes.NewReader(buf)
+	hd := codec.MsgpackHandle{}
+	dec := codec.NewDecoder(r, &hd)
+	return dec.Decode(out)
+}
+
+// Encode writes an encoded object to a new bytes buffer
+func encode(msgType messageType, in interface{}) (*bytes.Buffer, error) {
+	buf := bytes.NewBuffer(nil)
+	buf.WriteByte(uint8(msgType))
+	hd := codec.MsgpackHandle{}
+	enc := codec.NewEncoder(buf, &hd)
+	err := enc.Encode(in)
+	return buf, err
+}
+
+// Returns a random offset in [0, n)
+func randomOffset(n int) int {
+	if n == 0 {
+		return 0
+	}
+	return int(rand.Uint32() % uint32(n))
+}
+
+// suspicionTimeout computes the timeout that should be used when
+// a node is suspected
+func suspicionTimeout(suspicionMult, n int, interval time.Duration) time.Duration {
+	nodeScale := math.Max(1.0, math.Log10(math.Max(1.0, float64(n))))
+	// multiply by 1000 to keep some precision because time.Duration is an int64 type
+	timeout := time.Duration(suspicionMult) * time.Duration(nodeScale*1000) * interval / 1000
+	return timeout
+}
+
+// retransmitLimit computes the limit of retransmissions
+func retransmitLimit(retransmitMult, n int) int {
+	nodeScale := math.Ceil(math.Log10(float64(n + 1)))
+	limit := retransmitMult * int(nodeScale)
+	return limit
+}
+
+// shuffleNodes randomly shuffles the input nodes using the Fisher-Yates shuffle
+func shuffleNodes(nodes []*nodeState) {
+	n := len(nodes)
+	for i := n - 1; i > 0; i-- {
+		j := rand.Intn(i + 1)
+		nodes[i], nodes[j] = nodes[j], nodes[i]
+	}
+}
+
+// pushPullScale is used to scale the time interval at which push/pull
+// syncs take place. 
It is used to prevent network saturation as the
+// cluster size grows
+func pushPullScale(interval time.Duration, n int) time.Duration {
+	// Don't scale until we cross the threshold
+	if n <= pushPullScaleThreshold {
+		return interval
+	}
+
+	multiplier := math.Ceil(math.Log2(float64(n))-math.Log2(pushPullScaleThreshold)) + 1.0
+	return time.Duration(multiplier) * interval
+}
+
+// moveDeadNodes moves nodes that are dead and beyond the gossip to the dead interval
+// to the end of the slice and returns the index of the first moved node.
+func moveDeadNodes(nodes []*nodeState, gossipToTheDeadTime time.Duration) int {
+	numDead := 0
+	n := len(nodes)
+	for i := 0; i < n-numDead; i++ {
+		if nodes[i].State != stateDead {
+			continue
+		}
+
+		// Respect the gossip to the dead interval
+		if time.Since(nodes[i].StateChange) <= gossipToTheDeadTime {
+			continue
+		}
+
+		// Move this node to the end
+		nodes[i], nodes[n-numDead-1] = nodes[n-numDead-1], nodes[i]
+		numDead++
+		i--
+	}
+	return n - numDead
+}
+
+// kRandomNodes is used to select up to k random nodes, excluding any nodes where
+// the filter function returns true. It is possible that fewer than k nodes are
+// returned.
+func kRandomNodes(k int, nodes []*nodeState, filterFn func(*nodeState) bool) []*nodeState {
+	n := len(nodes)
+	kNodes := make([]*nodeState, 0, k)
+OUTER:
+	// Probe up to 3*n times; with large n this is not necessary
+	// since k << n, but with small n we want the search to be
+	// exhaustive
+	for i := 0; i < 3*n && len(kNodes) < k; i++ {
+		// Get a random node
+		idx := randomOffset(n)
+		node := nodes[idx]
+
+		// Give the filter a shot at it.
+		if filterFn != nil && filterFn(node) {
+			continue OUTER
+		}
+
+		// Check if we have this node already
+		for j := 0; j < len(kNodes); j++ {
+			if node == kNodes[j] {
+				continue OUTER
+			}
+		}
+
+		// Append the node
+		kNodes = append(kNodes, node)
+	}
+	return kNodes
+}
+
+// makeCompoundMessage takes a list of messages and generates
+// a single compound message containing all of them
+func makeCompoundMessage(msgs [][]byte) *bytes.Buffer {
+	// Create a local buffer
+	buf := bytes.NewBuffer(nil)
+
+	// Write out the type
+	buf.WriteByte(uint8(compoundMsg))
+
+	// Write out the number of messages
+	buf.WriteByte(uint8(len(msgs)))
+
+	// Add the message lengths
+	for _, m := range msgs {
+		binary.Write(buf, binary.BigEndian, uint16(len(m)))
+	}
+
+	// Append the messages
+	for _, m := range msgs {
+		buf.Write(m)
+	}
+
+	return buf
+}
+
+// decodeCompoundMessage splits a compound message and returns
+// the slices of individual messages. 
Also returns the number
+// of truncated messages and any potential error
+func decodeCompoundMessage(buf []byte) (trunc int, parts [][]byte, err error) {
+	if len(buf) < 1 {
+		err = fmt.Errorf("missing compound length byte")
+		return
+	}
+	numParts := uint8(buf[0])
+	buf = buf[1:]
+
+	// Check we have enough bytes (use int math so the multiplication
+	// cannot overflow uint8 when numParts > 127)
+	if len(buf) < int(numParts)*2 {
+		err = fmt.Errorf("truncated len slice")
+		return
+	}
+
+	// Decode the lengths
+	lengths := make([]uint16, numParts)
+	for i := 0; i < int(numParts); i++ {
+		lengths[i] = binary.BigEndian.Uint16(buf[i*2 : i*2+2])
+	}
+	buf = buf[int(numParts)*2:]
+
+	// Split each message
+	for idx, msgLen := range lengths {
+		if len(buf) < int(msgLen) {
+			trunc = int(numParts) - idx
+			return
+		}
+
+		// Extract the slice, seek past it on the buffer
+		slice := buf[:msgLen]
+		buf = buf[msgLen:]
+		parts = append(parts, slice)
+	}
+	return
+}
+
+// Given a string of the form "host", "host:port",
+// "ipv6::addr" or "[ipv6::address]:port",
+// return true if the string includes a port.
+func hasPort(s string) bool {
+	last := strings.LastIndex(s, ":")
+	if last == -1 {
+		return false
+	}
+	if s[0] == '[' {
+		return s[last-1] == ']'
+	}
+	return strings.Index(s, ":") == last
+}
+
+// compressPayload takes an opaque input buffer, compresses it
+// and wraps it in a compress{} message that is encoded.
+func compressPayload(inp []byte) (*bytes.Buffer, error) {
+	var buf bytes.Buffer
+	compressor := lzw.NewWriter(&buf, lzw.LSB, lzwLitWidth)
+
+	_, err := compressor.Write(inp)
+	if err != nil {
+		return nil, err
+	}
+
+	// Ensure we flush everything out
+	if err := compressor.Close(); err != nil {
+		return nil, err
+	}
+
+	// Create a compressed message
+	c := compress{
+		Algo: lzwAlgo,
+		Buf: buf.Bytes(),
+	}
+	return encode(compressMsg, &c)
+}
+
+// decompressPayload is used to unpack an encoded compress{}
+// message and return its payload uncompressed
+func decompressPayload(msg []byte) ([]byte, error) {
+	// Decode the message
+	var c compress
+	if err := decode(msg, &c); err != nil {
+		return nil, err
+	}
+	return decompressBuffer(&c)
+}
+
+// decompressBuffer is used to decompress the buffer of
+// a single compress message, handling multiple algorithms
+func decompressBuffer(c *compress) ([]byte, error) {
+	// Verify the algorithm
+	if c.Algo != lzwAlgo {
+		return nil, fmt.Errorf("Cannot decompress unknown algorithm %d", c.Algo)
+	}
+
+	// Create a decompressor
+	uncomp := lzw.NewReader(bytes.NewReader(c.Buf), lzw.LSB, lzwLitWidth)
+	defer uncomp.Close()
+
+	// Read all the data
+	var b bytes.Buffer
+	_, err := io.Copy(&b, uncomp)
+	if err != nil {
+		return nil, err
+	}
+
+	// Return the uncompressed bytes
+	return b.Bytes(), nil
+}
diff --git a/vendor/github.com/hashicorp/raft/LICENSE b/vendor/github.com/hashicorp/raft/LICENSE
new file mode 100644
index 0000000000..c33dcc7c92
--- /dev/null
+++ b/vendor/github.com/hashicorp/raft/LICENSE
@@ -0,0 +1,354 @@
+Mozilla Public License, version 2.0
+
+1. Definitions
+
+1.1. “Contributor”
+
+     means each individual or legal entity that creates, contributes to the
+     creation of, or owns Covered Software.
+
+1.2. “Contributor Version”
+
+     means the combination of the Contributions of others (if any) used by a
+     Contributor and that particular Contributor’s Contribution.
+
+1.3. “Contribution”
+
+     means Covered Software of a particular Contributor.
+
+1.4. 
“Covered Software” + + means Source Code Form to which the initial Contributor has attached the + notice in Exhibit A, the Executable Form of such Source Code Form, and + Modifications of such Source Code Form, in each case including portions + thereof. + +1.5. “Incompatible With Secondary Licenses” + means + + a. that the initial Contributor has attached the notice described in + Exhibit B to the Covered Software; or + + b. that the Covered Software was made available under the terms of version + 1.1 or earlier of the License, but not also under the terms of a + Secondary License. + +1.6. “Executable Form” + + means any form of the work other than Source Code Form. + +1.7. “Larger Work” + + means a work that combines Covered Software with other material, in a separate + file or files, that is not Covered Software. + +1.8. “License” + + means this document. + +1.9. “Licensable” + + means having the right to grant, to the maximum extent possible, whether at the + time of the initial grant or subsequently, any and all of the rights conveyed by + this License. + +1.10. “Modifications” + + means any of the following: + + a. any file in Source Code Form that results from an addition to, deletion + from, or modification of the contents of Covered Software; or + + b. any new file in Source Code Form that contains any Covered Software. + +1.11. “Patent Claims” of a Contributor + + means any patent claim(s), including without limitation, method, process, + and apparatus claims, in any patent Licensable by such Contributor that + would be infringed, but for the grant of the License, by the making, + using, selling, offering for sale, having made, import, or transfer of + either its Contributions or its Contributor Version. + +1.12. “Secondary License” + + means either the GNU General Public License, Version 2.0, the GNU Lesser + General Public License, Version 2.1, the GNU Affero General Public + License, Version 3.0, or any later versions of those licenses. + +1.13. “Source Code Form” + + means the form of the work preferred for making modifications. + +1.14. “You” (or “Your”) + + means an individual or a legal entity exercising rights under this + License. For legal entities, “You” includes any entity that controls, is + controlled by, or is under common control with You. For purposes of this + definition, “control” means (a) the power, direct or indirect, to cause + the direction or management of such entity, whether by contract or + otherwise, or (b) ownership of more than fifty percent (50%) of the + outstanding shares or beneficial ownership of such entity. + + +2. License Grants and Conditions + +2.1. Grants + + Each Contributor hereby grants You a world-wide, royalty-free, + non-exclusive license: + + a. under intellectual property rights (other than patent or trademark) + Licensable by such Contributor to use, reproduce, make available, + modify, display, perform, distribute, and otherwise exploit its + Contributions, either on an unmodified basis, with Modifications, or as + part of a Larger Work; and + + b. under Patent Claims of such Contributor to make, use, sell, offer for + sale, have made, import, and otherwise transfer either its Contributions + or its Contributor Version. + +2.2. Effective Date + + The licenses granted in Section 2.1 with respect to any Contribution become + effective for each Contribution on the date the Contributor first distributes + such Contribution. + +2.3. 
Limitations on Grant Scope + + The licenses granted in this Section 2 are the only rights granted under this + License. No additional rights or licenses will be implied from the distribution + or licensing of Covered Software under this License. Notwithstanding Section + 2.1(b) above, no patent license is granted by a Contributor: + + a. for any code that a Contributor has removed from Covered Software; or + + b. for infringements caused by: (i) Your and any other third party’s + modifications of Covered Software, or (ii) the combination of its + Contributions with other software (except as part of its Contributor + Version); or + + c. under Patent Claims infringed by Covered Software in the absence of its + Contributions. + + This License does not grant any rights in the trademarks, service marks, or + logos of any Contributor (except as may be necessary to comply with the + notice requirements in Section 3.4). + +2.4. Subsequent Licenses + + No Contributor makes additional grants as a result of Your choice to + distribute the Covered Software under a subsequent version of this License + (see Section 10.2) or under the terms of a Secondary License (if permitted + under the terms of Section 3.3). + +2.5. Representation + + Each Contributor represents that the Contributor believes its Contributions + are its original creation(s) or it has sufficient rights to grant the + rights to its Contributions conveyed by this License. + +2.6. Fair Use + + This License is not intended to limit any rights You have under applicable + copyright doctrines of fair use, fair dealing, or other equivalents. + +2.7. Conditions + + Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in + Section 2.1. + + +3. Responsibilities + +3.1. Distribution of Source Form + + All distribution of Covered Software in Source Code Form, including any + Modifications that You create or to which You contribute, must be under the + terms of this License. You must inform recipients that the Source Code Form + of the Covered Software is governed by the terms of this License, and how + they can obtain a copy of this License. You may not attempt to alter or + restrict the recipients’ rights in the Source Code Form. + +3.2. Distribution of Executable Form + + If You distribute Covered Software in Executable Form then: + + a. such Covered Software must also be made available in Source Code Form, + as described in Section 3.1, and You must inform recipients of the + Executable Form how they can obtain a copy of such Source Code Form by + reasonable means in a timely manner, at a charge no more than the cost + of distribution to the recipient; and + + b. You may distribute such Executable Form under the terms of this License, + or sublicense it under different terms, provided that the license for + the Executable Form does not attempt to limit or alter the recipients’ + rights in the Source Code Form under this License. + +3.3. Distribution of a Larger Work + + You may create and distribute a Larger Work under terms of Your choice, + provided that You also comply with the requirements of this License for the + Covered Software. 
If the Larger Work is a combination of Covered Software + with a work governed by one or more Secondary Licenses, and the Covered + Software is not Incompatible With Secondary Licenses, this License permits + You to additionally distribute such Covered Software under the terms of + such Secondary License(s), so that the recipient of the Larger Work may, at + their option, further distribute the Covered Software under the terms of + either this License or such Secondary License(s). + +3.4. Notices + + You may not remove or alter the substance of any license notices (including + copyright notices, patent notices, disclaimers of warranty, or limitations + of liability) contained within the Source Code Form of the Covered + Software, except that You may alter any license notices to the extent + required to remedy known factual inaccuracies. + +3.5. Application of Additional Terms + + You may choose to offer, and to charge a fee for, warranty, support, + indemnity or liability obligations to one or more recipients of Covered + Software. However, You may do so only on Your own behalf, and not on behalf + of any Contributor. You must make it absolutely clear that any such + warranty, support, indemnity, or liability obligation is offered by You + alone, and You hereby agree to indemnify every Contributor for any + liability incurred by such Contributor as a result of warranty, support, + indemnity or liability terms You offer. You may include additional + disclaimers of warranty and limitations of liability specific to any + jurisdiction. + +4. Inability to Comply Due to Statute or Regulation + + If it is impossible for You to comply with any of the terms of this License + with respect to some or all of the Covered Software due to statute, judicial + order, or regulation then You must: (a) comply with the terms of this License + to the maximum extent possible; and (b) describe the limitations and the code + they affect. Such description must be placed in a text file included with all + distributions of the Covered Software under this License. Except to the + extent prohibited by statute or regulation, such description must be + sufficiently detailed for a recipient of ordinary skill to be able to + understand it. + +5. Termination + +5.1. The rights granted under this License will terminate automatically if You + fail to comply with any of its terms. However, if You become compliant, + then the rights granted under this License from a particular Contributor + are reinstated (a) provisionally, unless and until such Contributor + explicitly and finally terminates Your grants, and (b) on an ongoing basis, + if such Contributor fails to notify You of the non-compliance by some + reasonable means prior to 60 days after You have come back into compliance. + Moreover, Your grants from a particular Contributor are reinstated on an + ongoing basis if such Contributor notifies You of the non-compliance by + some reasonable means, this is the first time You have received notice of + non-compliance with this License from such Contributor, and You become + compliant prior to 30 days after Your receipt of the notice. + +5.2. If You initiate litigation against any entity by asserting a patent + infringement claim (excluding declaratory judgment actions, counter-claims, + and cross-claims) alleging that a Contributor Version directly or + indirectly infringes any patent, then the rights granted to You by any and + all Contributors for the Covered Software under Section 2.1 of this License + shall terminate. 
+ +5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user + license agreements (excluding distributors and resellers) which have been + validly granted by You or Your distributors under this License prior to + termination shall survive termination. + +6. Disclaimer of Warranty + + Covered Software is provided under this License on an “as is” basis, without + warranty of any kind, either expressed, implied, or statutory, including, + without limitation, warranties that the Covered Software is free of defects, + merchantable, fit for a particular purpose or non-infringing. The entire + risk as to the quality and performance of the Covered Software is with You. + Should any Covered Software prove defective in any respect, You (not any + Contributor) assume the cost of any necessary servicing, repair, or + correction. This disclaimer of warranty constitutes an essential part of this + License. No use of any Covered Software is authorized under this License + except under this disclaimer. + +7. Limitation of Liability + + Under no circumstances and under no legal theory, whether tort (including + negligence), contract, or otherwise, shall any Contributor, or anyone who + distributes Covered Software as permitted above, be liable to You for any + direct, indirect, special, incidental, or consequential damages of any + character including, without limitation, damages for lost profits, loss of + goodwill, work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses, even if such party shall have been + informed of the possibility of such damages. This limitation of liability + shall not apply to liability for death or personal injury resulting from such + party’s negligence to the extent applicable law prohibits such limitation. + Some jurisdictions do not allow the exclusion or limitation of incidental or + consequential damages, so this exclusion and limitation may not apply to You. + +8. Litigation + + Any litigation relating to this License may be brought only in the courts of + a jurisdiction where the defendant maintains its principal place of business + and such litigation shall be governed by laws of that jurisdiction, without + reference to its conflict-of-law provisions. Nothing in this Section shall + prevent a party’s ability to bring cross-claims or counter-claims. + +9. Miscellaneous + + This License represents the complete agreement concerning the subject matter + hereof. If any provision of this License is held to be unenforceable, such + provision shall be reformed only to the extent necessary to make it + enforceable. Any law or regulation which provides that the language of a + contract shall be construed against the drafter shall not be used to construe + this License against a Contributor. + + +10. Versions of the License + +10.1. New Versions + + Mozilla Foundation is the license steward. Except as provided in Section + 10.3, no one other than the license steward has the right to modify or + publish new versions of this License. Each version will be given a + distinguishing version number. + +10.2. Effect of New Versions + + You may distribute the Covered Software under the terms of the version of + the License under which You originally received the Covered Software, or + under the terms of any subsequent version published by the license + steward. + +10.3. 
Modified Versions
+
+      If you create software not governed by this License, and you want to
+      create a new license for such software, you may create and use a modified
+      version of this License if you rename the license and remove any
+      references to the name of the license steward (except to note that such
+      modified license differs from this License).
+
+10.4. Distributing Source Code Form that is Incompatible With Secondary Licenses
+      If You choose to distribute Source Code Form that is Incompatible With
+      Secondary Licenses under the terms of this version of the License, the
+      notice described in Exhibit B of this License must be attached.
+
+Exhibit A - Source Code Form License Notice
+
+      This Source Code Form is subject to the
+      terms of the Mozilla Public License, v.
+      2.0. If a copy of the MPL was not
+      distributed with this file, You can
+      obtain one at
+      http://mozilla.org/MPL/2.0/.
+
+If it is not possible or desirable to put the notice in a particular file, then
+You may include the notice in a location (such as a LICENSE file in a relevant
+directory) where a recipient would be likely to look for such a notice.
+
+You may add additional accurate notices of copyright ownership.
+
+Exhibit B - “Incompatible With Secondary Licenses” Notice
+
+      This Source Code Form is “Incompatible
+      With Secondary Licenses”, as defined by
+      the Mozilla Public License, v. 2.0.
+
diff --git a/vendor/github.com/hashicorp/raft/Makefile b/vendor/github.com/hashicorp/raft/Makefile
new file mode 100644
index 0000000000..49f8299239
--- /dev/null
+++ b/vendor/github.com/hashicorp/raft/Makefile
@@ -0,0 +1,17 @@
+DEPS = $(shell go list -f '{{range .TestImports}}{{.}} {{end}}' ./...)
+
+test:
+	go test -timeout=60s ./...
+
+integ: test
+	INTEG_TESTS=yes go test -timeout=5s -run=Integ ./...
+
+deps:
+	go get -d -v ./...
+	echo $(DEPS) | xargs -n1 go get -d
+
+cov:
+	INTEG_TESTS=yes gocov test github.com/hashicorp/raft | gocov-html > /tmp/coverage.html
+	open /tmp/coverage.html
+
+.PHONY: test cov integ deps
diff --git a/vendor/github.com/hashicorp/raft/README.md b/vendor/github.com/hashicorp/raft/README.md
new file mode 100644
index 0000000000..8778b13dc5
--- /dev/null
+++ b/vendor/github.com/hashicorp/raft/README.md
@@ -0,0 +1,89 @@
+raft [![Build Status](https://travis-ci.org/hashicorp/raft.png)](https://travis-ci.org/hashicorp/raft)
+====
+
+raft is a [Go](http://www.golang.org) library that manages a replicated
+log and can be used with an FSM to manage replicated state machines. It
+is a library for providing [consensus](http://en.wikipedia.org/wiki/Consensus_(computer_science)).
+
+The use cases for such a library are far-reaching, as replicated state
+machines are a key component of many distributed systems. They enable
+building Consistent, Partition Tolerant (CP) systems, with limited
+fault tolerance as well.
+
+## Building
+
+If you wish to build raft you'll need Go version 1.2+ installed.
+
+Please check your installation with:
+
+```
+go version
+```
+
+## Documentation
+
+For complete documentation, see the associated [Godoc](http://godoc.org/github.com/hashicorp/raft).
+
+To prevent complications with cgo, the primary backend `MDBStore` is in a separate repository,
+called [raft-mdb](http://github.com/hashicorp/raft-mdb). That is the recommended implementation
+for the `LogStore` and `StableStore`.
+
+A pure Go backend using [BoltDB](https://github.com/boltdb/bolt) is also available called
+[raft-boltdb](https://github.com/hashicorp/raft-boltdb). It can also be used as a `LogStore`
+and `StableStore`.
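+
+As a quick orientation, here is a minimal, illustrative sketch (not part of
+the upstream examples) that wires the in-memory stores and transport together
+and bootstraps a single-node cluster. Identifiers such as `node-1` are
+arbitrary, and a real deployment would use one of the durable backends above
+plus a network transport:
+
+```
+package main
+
+import (
+	"log"
+
+	"github.com/hashicorp/raft"
+)
+
+func main() {
+	conf := raft.DefaultConfig()
+	conf.LocalID = raft.ServerID("node-1")
+
+	store := raft.NewInmemStore() // implements both LogStore and StableStore
+	snaps := raft.NewInmemSnapshotStore()
+	addr, trans := raft.NewInmemTransport("")
+
+	// Seed the initial membership. This must only ever be done once,
+	// on a pristine server (see BootstrapCluster in api.go).
+	configuration := raft.Configuration{
+		Servers: []raft.Server{
+			{ID: conf.LocalID, Address: addr},
+		},
+	}
+	if err := raft.BootstrapCluster(conf, store, store, snaps, trans, configuration); err != nil {
+		log.Fatalf("bootstrap: %v", err)
+	}
+	// NewRaft takes the same stores plus an FSM implementation; see the
+	// FSM sketch at the end of this README.
+}
+```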
+
+## Protocol
+
+raft is based on ["Raft: In Search of an Understandable Consensus Algorithm"](https://ramcloud.stanford.edu/wiki/download/attachments/11370504/raft.pdf).
+
+A high level overview of the Raft protocol is described below, but for details please read the full
+[Raft paper](https://ramcloud.stanford.edu/wiki/download/attachments/11370504/raft.pdf)
+followed by the raft source. Any questions about the raft protocol should be sent to the
+[raft-dev mailing list](https://groups.google.com/forum/#!forum/raft-dev).
+
+### Protocol Description
+
+Raft nodes are always in one of three states: follower, candidate or leader. All
+nodes initially start out as followers. In this state, nodes can accept log entries
+from a leader and cast votes. If no entries are received for some time, nodes
+self-promote to the candidate state. In the candidate state, nodes request votes from
+their peers. If a candidate receives a quorum of votes, then it is promoted to a leader.
+The leader must accept new log entries and replicate them to all the other followers.
+In addition, if stale reads are not acceptable, all queries must also be performed on
+the leader.
+
+Once a cluster has a leader, it is able to accept new log entries. A client can
+request that a leader append a new log entry, which is an opaque binary blob to
+Raft. The leader then writes the entry to durable storage and attempts to replicate
+it to a quorum of followers. Once the log entry is considered *committed*, it can be
+*applied* to a finite state machine. The finite state machine is application specific,
+and is implemented using an interface (see the sketch at the end of this README).
+
+An obvious question relates to the unbounded nature of a replicated log. Raft provides
+a mechanism by which the current state is snapshotted, and the log is compacted. Because
+of the FSM abstraction, restoring the state of the FSM must result in the same state
+as a replay of old logs. This allows Raft to capture the FSM state at a point in time,
+and then remove all the logs that were used to reach that state. This is performed automatically
+without user intervention, and prevents unbounded disk usage while also minimizing
+time spent replaying logs.
+
+Lastly, there is the issue of updating the peer set when new servers join
+or existing servers leave. As long as a quorum of nodes is available, this
+is not an issue as Raft provides mechanisms to dynamically update the peer set.
+If a quorum of nodes is unavailable, then this becomes a very challenging issue.
+For example, suppose there are only 2 peers, A and B. The quorum size is also
+2, meaning both nodes must agree to commit a log entry. If either A or B fails,
+it is now impossible to reach quorum. This means the cluster is unable to add
+or remove a node, or to commit any additional log entries. This results in *unavailability*.
+At this point, manual intervention would be required to remove either A or B,
+and to restart the remaining node in bootstrap mode.
+
+A Raft cluster of 3 nodes can tolerate a single node failure, while a cluster
+of 5 can tolerate 2 node failures. The recommended configuration is to run
+either 3 or 5 raft servers. This maximizes availability without
+greatly sacrificing performance.
+
+In terms of performance, Raft is comparable to Paxos. Assuming stable leadership,
+committing a log entry requires a single round trip to half of the cluster.
+Thus performance is bound by disk I/O and network latency. 
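+
+## FSM Example
+
+To make the FSM abstraction concrete, below is a minimal, illustrative sketch
+of a state machine that just counts applied entries. `counterFSM` and
+`counterSnapshot` are hypothetical names, not part of this library; they only
+show the shape of the `FSM` and `FSMSnapshot` interfaces:
+
+```
+package main
+
+import (
+	"encoding/binary"
+	"io"
+	"sync"
+
+	"github.com/hashicorp/raft"
+)
+
+// counterFSM is a toy state machine: it counts committed entries.
+type counterFSM struct {
+	mu    sync.Mutex
+	count uint64
+}
+
+// Apply is invoked once a log entry is committed.
+func (f *counterFSM) Apply(l *raft.Log) interface{} {
+	f.mu.Lock()
+	defer f.mu.Unlock()
+	f.count++
+	return f.count
+}
+
+// Snapshot captures a point-in-time view used for log compaction.
+func (f *counterFSM) Snapshot() (raft.FSMSnapshot, error) {
+	f.mu.Lock()
+	defer f.mu.Unlock()
+	return &counterSnapshot{count: f.count}, nil
+}
+
+// Restore replaces the FSM state from a snapshot stream.
+func (f *counterFSM) Restore(rc io.ReadCloser) error {
+	defer rc.Close()
+	var buf [8]byte
+	if _, err := io.ReadFull(rc, buf[:]); err != nil {
+		return err
+	}
+	f.mu.Lock()
+	f.count = binary.BigEndian.Uint64(buf[:])
+	f.mu.Unlock()
+	return nil
+}
+
+type counterSnapshot struct{ count uint64 }
+
+// Persist writes the snapshot to the sink provided by the snapshot store.
+func (s *counterSnapshot) Persist(sink raft.SnapshotSink) error {
+	var buf [8]byte
+	binary.BigEndian.PutUint64(buf[:], s.count)
+	if _, err := sink.Write(buf[:]); err != nil {
+		sink.Cancel()
+		return err
+	}
+	return sink.Close()
+}
+
+// Release is called when the snapshot is no longer needed.
+func (s *counterSnapshot) Release() {}
+```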
+
+
diff --git a/vendor/github.com/hashicorp/raft/api.go b/vendor/github.com/hashicorp/raft/api.go
new file mode 100644
index 0000000000..2fd78e7840
--- /dev/null
+++ b/vendor/github.com/hashicorp/raft/api.go
@@ -0,0 +1,1007 @@
+package raft
+
+import (
+	"errors"
+	"fmt"
+	"io"
+	"log"
+	"os"
+	"strconv"
+	"sync"
+	"time"
+
+	"github.com/armon/go-metrics"
+)
+
+var (
+	// ErrLeader is returned when an operation can't be completed on a
+	// leader node.
+	ErrLeader = errors.New("node is the leader")
+
+	// ErrNotLeader is returned when an operation can't be completed on a
+	// follower or candidate node.
+	ErrNotLeader = errors.New("node is not the leader")
+
+	// ErrLeadershipLost is returned when a leader fails to commit a log entry
+	// because it's been deposed in the process.
+	ErrLeadershipLost = errors.New("leadership lost while committing log")
+
+	// ErrAbortedByRestore is returned when a leader fails to commit a log
+	// entry because it's been superseded by a user snapshot restore.
+	ErrAbortedByRestore = errors.New("snapshot restored while committing log")
+
+	// ErrRaftShutdown is returned when operations are requested against an
+	// inactive Raft.
+	ErrRaftShutdown = errors.New("raft is already shutdown")
+
+	// ErrEnqueueTimeout is returned when a command fails due to a timeout.
+	ErrEnqueueTimeout = errors.New("timed out enqueuing operation")
+
+	// ErrNothingNewToSnapshot is returned when trying to create a snapshot
+	// but there's nothing new committed to the FSM since we started.
+	ErrNothingNewToSnapshot = errors.New("nothing new to snapshot")
+
+	// ErrUnsupportedProtocol is returned when an operation is attempted
+	// that's not supported by the current protocol version.
+	ErrUnsupportedProtocol = errors.New("operation not supported with current protocol version")
+
+	// ErrCantBootstrap is returned when an attempt is made to bootstrap a
+	// cluster that already has state present.
+	ErrCantBootstrap = errors.New("bootstrap only works on new clusters")
+)
+
+// Raft implements a Raft node.
+type Raft struct {
+	raftState
+
+	// protocolVersion is used to inter-operate with Raft servers running
+	// different versions of the library. See comments in config.go for more
+	// details.
+	protocolVersion ProtocolVersion
+
+	// applyCh is used to async send logs to the main thread to
+	// be committed and applied to the FSM.
+	applyCh chan *logFuture
+
+	// Configuration provided at Raft initialization
+	conf Config
+
+	// FSM is the client state machine to apply commands to
+	fsm FSM
+
+	// fsmMutateCh is used to send state-changing updates to the FSM. This
+	// receives pointers to commitTuple structures when applying logs or
+	// pointers to restoreFuture structures when restoring a snapshot. We
+	// need control over the order of these operations when doing user
+	// restores so that we finish applying any old log applies before we
+	// take a user snapshot on the leader, otherwise we might restore the
+	// snapshot and apply old logs to it that were in the pipe.
+	fsmMutateCh chan interface{}
+
+	// fsmSnapshotCh is used to trigger a new snapshot being taken
+	fsmSnapshotCh chan *reqSnapshotFuture
+
+	// lastContact is the last time we had contact from the
+	// leader node. This can be used to gauge staleness. 
+	lastContact time.Time
+	lastContactLock sync.RWMutex
+
+	// Leader is the current cluster leader
+	leader ServerAddress
+	leaderLock sync.RWMutex
+
+	// leaderCh is used to notify of leadership changes
+	leaderCh chan bool
+
+	// leaderState used only while state is leader
+	leaderState leaderState
+
+	// Stores our local server ID, used to avoid sending RPCs to ourself
+	localID ServerID
+
+	// Stores our local addr
+	localAddr ServerAddress
+
+	// Used for our logging
+	logger *log.Logger
+
+	// LogStore provides durable storage for logs
+	logs LogStore
+
+	// Used to request the leader to make configuration changes.
+	configurationChangeCh chan *configurationChangeFuture
+
+	// Tracks the latest configuration and latest committed configuration from
+	// the log/snapshot.
+	configurations configurations
+
+	// RPC chan comes from the transport layer
+	rpcCh <-chan RPC
+
+	// Shutdown channel to exit, protected to prevent concurrent exits
+	shutdown bool
+	shutdownCh chan struct{}
+	shutdownLock sync.Mutex
+
+	// snapshots is used to store and retrieve snapshots
+	snapshots SnapshotStore
+
+	// userSnapshotCh is used for user-triggered snapshots
+	userSnapshotCh chan *userSnapshotFuture
+
+	// userRestoreCh is used for user-triggered restores of external
+	// snapshots
+	userRestoreCh chan *userRestoreFuture
+
+	// stable is a StableStore implementation for durable state
+	// It provides stable storage for many fields in raftState
+	stable StableStore
+
+	// The transport layer we use
+	trans Transport
+
+	// verifyCh is used to async send verify futures to the main thread
+	// to verify we are still the leader
+	verifyCh chan *verifyFuture
+
+	// configurationsCh is used to get the configuration data safely from
+	// outside of the main thread.
+	configurationsCh chan *configurationsFuture
+
+	// bootstrapCh is used to attempt an initial bootstrap from outside of
+	// the main thread.
+	bootstrapCh chan *bootstrapFuture
+
+	// List of observers and the mutex that protects them. The observers list
+	// is indexed by an artificial ID which is used for deregistration.
+	observersLock sync.RWMutex
+	observers map[uint64]*Observer
+}
+
+// BootstrapCluster initializes a server's storage with the given cluster
+// configuration. This should only be called at the beginning of time for the
+// cluster, and you absolutely must make sure that you call it with the same
+// configuration on all the Voter servers. There is no need to bootstrap
+// Nonvoter and Staging servers.
+//
+// One sane approach is to bootstrap a single server with a configuration
+// listing just itself as a Voter, then invoke AddVoter() on it to add other
+// servers to the cluster.
+func BootstrapCluster(conf *Config, logs LogStore, stable StableStore,
+	snaps SnapshotStore, trans Transport, configuration Configuration) error {
+	// Validate the Raft server config.
+	if err := ValidateConfig(conf); err != nil {
+		return err
+	}
+
+	// Sanity check the Raft peer configuration.
+	if err := checkConfiguration(configuration); err != nil {
+		return err
+	}
+
+	// Make sure the cluster is in a clean state.
+	hasState, err := HasExistingState(logs, stable, snaps)
+	if err != nil {
+		return fmt.Errorf("failed to check for existing state: %v", err)
+	}
+	if hasState {
+		return ErrCantBootstrap
+	}
+
+	// Set current term to 1.
+	if err := stable.SetUint64(keyCurrentTerm, 1); err != nil {
+		return fmt.Errorf("failed to save current term: %v", err)
+	}
+
+	// Append configuration entry to log. 
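+	// (Protocol versions below 3 encode the membership with the legacy
+	// peers format via encodePeers; version 3 and above use the
+	// LogConfiguration entry type, as the switch below shows.)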
+ entry := &Log{ + Index: 1, + Term: 1, + } + if conf.ProtocolVersion < 3 { + entry.Type = LogRemovePeerDeprecated + entry.Data = encodePeers(configuration, trans) + } else { + entry.Type = LogConfiguration + entry.Data = encodeConfiguration(configuration) + } + if err := logs.StoreLog(entry); err != nil { + return fmt.Errorf("failed to append configuration entry to log: %v", err) + } + + return nil +} + +// RecoverCluster is used to manually force a new configuration in order to +// recover from a loss of quorum where the current configuration cannot be +// restored, such as when several servers die at the same time. This works by +// reading all the current state for this server, creating a snapshot with the +// supplied configuration, and then truncating the Raft log. This is the only +// safe way to force a given configuration without actually altering the log to +// insert any new entries, which could cause conflicts with other servers with +// different state. +// +// WARNING! This operation implicitly commits all entries in the Raft log, so +// in general this is an extremely unsafe operation. If you've lost your other +// servers and are performing a manual recovery, then you've also lost the +// commit information, so this is likely the best you can do, but you should be +// aware that calling this can cause Raft log entries that were in the process +// of being replicated, but not yet committed, to become committed. +// +// Note the FSM passed here is used for the snapshot operations and will be +// left in a state that should not be used by the application. Be sure to +// discard this FSM and any associated state and provide a fresh one when +// calling NewRaft later. +// +// A typical way to recover the cluster is to shut down all servers and then +// run RecoverCluster on every server using an identical configuration. When +// the cluster is then restarted, an election should occur and then Raft will +// resume normal operation. If it's desired to make a particular server the +// leader, this can be used to inject a new configuration with that server as +// the sole voter, and then join up other new clean-state peer servers using +// the usual APIs in order to bring the cluster back into a known state. +func RecoverCluster(conf *Config, fsm FSM, logs LogStore, stable StableStore, + snaps SnapshotStore, trans Transport, configuration Configuration) error { + // Validate the Raft server config. + if err := ValidateConfig(conf); err != nil { + return err + } + + // Sanity check the Raft peer configuration. + if err := checkConfiguration(configuration); err != nil { + return err + } + + // Refuse to recover if there's no existing state. This would be safe to + // do, but it is likely an indication of an operator error where they + // expect data to be there and it's not. By refusing, we force them + // to show intent to start a cluster fresh by explicitly doing a + // bootstrap, rather than quietly firing up a fresh cluster here. + hasState, err := HasExistingState(logs, stable, snaps) + if err != nil { + return fmt.Errorf("failed to check for existing state: %v", err) + } + if !hasState { + return fmt.Errorf("refused to recover cluster with no initial state, this is probably an operator error") + } + + // Attempt to restore any snapshots we find, newest to oldest.
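+	// Snapshots that fail to open or restore are simply skipped; we only
+	// give up (below) if every available snapshot failed.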
+ var snapshotIndex uint64 + var snapshotTerm uint64 + snapshots, err := snaps.List() + if err != nil { + return fmt.Errorf("failed to list snapshots: %v", err) + } + for _, snapshot := range snapshots { + _, source, err := snaps.Open(snapshot.ID) + if err != nil { + // Skip this one and try the next. We will detect if we + // couldn't open any snapshots. + continue + } + defer source.Close() + + if err := fsm.Restore(source); err != nil { + // Same here, skip and try the next one. + continue + } + + snapshotIndex = snapshot.Index + snapshotTerm = snapshot.Term + break + } + if len(snapshots) > 0 && (snapshotIndex == 0 || snapshotTerm == 0) { + return fmt.Errorf("failed to restore any of the available snapshots") + } + + // The snapshot information is the best known end point for the data + // until we play back the Raft log entries. + lastIndex := snapshotIndex + lastTerm := snapshotTerm + + // Apply any Raft log entries past the snapshot. + lastLogIndex, err := logs.LastIndex() + if err != nil { + return fmt.Errorf("failed to find last log: %v", err) + } + for index := snapshotIndex + 1; index <= lastLogIndex; index++ { + var entry Log + if err := logs.GetLog(index, &entry); err != nil { + return fmt.Errorf("failed to get log at index %d: %v", index, err) + } + if entry.Type == LogCommand { + _ = fsm.Apply(&entry) + } + lastIndex = entry.Index + lastTerm = entry.Term + } + + // Create a new snapshot, placing the configuration in as if it was + // committed at index 1. + snapshot, err := fsm.Snapshot() + if err != nil { + return fmt.Errorf("failed to snapshot FSM: %v", err) + } + version := getSnapshotVersion(conf.ProtocolVersion) + sink, err := snaps.Create(version, lastIndex, lastTerm, configuration, 1, trans) + if err != nil { + return fmt.Errorf("failed to create snapshot: %v", err) + } + if err := snapshot.Persist(sink); err != nil { + return fmt.Errorf("failed to persist snapshot: %v", err) + } + if err := sink.Close(); err != nil { + return fmt.Errorf("failed to finalize snapshot: %v", err) + } + + // Compact the log so that we don't get bad interference from any + // configuration change log entries that might be there. + firstLogIndex, err := logs.FirstIndex() + if err != nil { + return fmt.Errorf("failed to get first log index: %v", err) + } + if err := logs.DeleteRange(firstLogIndex, lastLogIndex); err != nil { + return fmt.Errorf("log compaction failed: %v", err) + } + + return nil +} + +// HasExistingState returns true if the server has any existing state (logs, +// knowledge of a current term, or any snapshots). +func HasExistingState(logs LogStore, stable StableStore, snaps SnapshotStore) (bool, error) { + // Make sure we don't have a current term. + currentTerm, err := stable.GetUint64(keyCurrentTerm) + if err == nil { + if currentTerm > 0 { + return true, nil + } + } else { + if err.Error() != "not found" { + return false, fmt.Errorf("failed to read current term: %v", err) + } + } + + // Make sure we have an empty log. + lastIndex, err := logs.LastIndex() + if err != nil { + return false, fmt.Errorf("failed to get last log index: %v", err) + } + if lastIndex > 0 { + return true, nil + } + + // Make sure we have no snapshots + snapshots, err := snaps.List() + if err != nil { + return false, fmt.Errorf("failed to list snapshots: %v", err) + } + if len(snapshots) > 0 { + return true, nil + } + + return false, nil +} + +// NewRaft is used to construct a new Raft node. It takes a configuration, as well +// as implementations of various interfaces that are required. 
If we have any +// old state, such as snapshots, logs, peers, etc, all those will be restored +// when creating the Raft node. +func NewRaft(conf *Config, fsm FSM, logs LogStore, stable StableStore, snaps SnapshotStore, trans Transport) (*Raft, error) { + // Validate the configuration. + if err := ValidateConfig(conf); err != nil { + return nil, err + } + + // Ensure we have a LogOutput. + var logger *log.Logger + if conf.Logger != nil { + logger = conf.Logger + } else { + if conf.LogOutput == nil { + conf.LogOutput = os.Stderr + } + logger = log.New(conf.LogOutput, "", log.LstdFlags) + } + + // Try to restore the current term. + currentTerm, err := stable.GetUint64(keyCurrentTerm) + if err != nil && err.Error() != "not found" { + return nil, fmt.Errorf("failed to load current term: %v", err) + } + + // Read the index of the last log entry. + lastIndex, err := logs.LastIndex() + if err != nil { + return nil, fmt.Errorf("failed to find last log: %v", err) + } + + // Get the last log entry. + var lastLog Log + if lastIndex > 0 { + if err = logs.GetLog(lastIndex, &lastLog); err != nil { + return nil, fmt.Errorf("failed to get last log at index %d: %v", lastIndex, err) + } + } + + // Make sure we have a valid server address and ID. + protocolVersion := conf.ProtocolVersion + localAddr := ServerAddress(trans.LocalAddr()) + localID := conf.LocalID + + // TODO (slackpad) - When we deprecate protocol version 2, remove this + // along with the AddPeer() and RemovePeer() APIs. + if protocolVersion < 3 && string(localID) != string(localAddr) { + return nil, fmt.Errorf("when running with ProtocolVersion < 3, LocalID must be set to the network address") + } + + // Create Raft struct. + r := &Raft{ + protocolVersion: protocolVersion, + applyCh: make(chan *logFuture), + conf: *conf, + fsm: fsm, + fsmMutateCh: make(chan interface{}, 128), + fsmSnapshotCh: make(chan *reqSnapshotFuture), + leaderCh: make(chan bool), + localID: localID, + localAddr: localAddr, + logger: logger, + logs: logs, + configurationChangeCh: make(chan *configurationChangeFuture), + configurations: configurations{}, + rpcCh: trans.Consumer(), + snapshots: snaps, + userSnapshotCh: make(chan *userSnapshotFuture), + userRestoreCh: make(chan *userRestoreFuture), + shutdownCh: make(chan struct{}), + stable: stable, + trans: trans, + verifyCh: make(chan *verifyFuture, 64), + configurationsCh: make(chan *configurationsFuture, 8), + bootstrapCh: make(chan *bootstrapFuture), + observers: make(map[uint64]*Observer), + } + + // Initialize as a follower. + r.setState(Follower) + + // Start as leader if specified. This should only be used + // for testing purposes. + if conf.StartAsLeader { + r.setState(Leader) + r.setLeader(r.localAddr) + } + + // Restore the current term and the last log. + r.setCurrentTerm(currentTerm) + r.setLastLog(lastLog.Index, lastLog.Term) + + // Attempt to restore a snapshot if there are any. + if err := r.restoreSnapshot(); err != nil { + return nil, err + } + + // Scan through the log for any configuration change entries. 
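+	// Entries past the restored snapshot may contain membership changes,
+	// so they are replayed here to recover the current configuration
+	// before the background loops start.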
+ snapshotIndex, _ := r.getLastSnapshot() + for index := snapshotIndex + 1; index <= lastLog.Index; index++ { + var entry Log + if err := r.logs.GetLog(index, &entry); err != nil { + r.logger.Printf("[ERR] raft: Failed to get log at %d: %v", index, err) + panic(err) + } + r.processConfigurationLogEntry(&entry) + } + r.logger.Printf("[INFO] raft: Initial configuration (index=%d): %+v", + r.configurations.latestIndex, r.configurations.latest.Servers) + + // Setup a heartbeat fast-path to avoid head-of-line + // blocking where possible. It MUST be safe for this + // to be called concurrently with a blocking RPC. + trans.SetHeartbeatHandler(r.processHeartbeat) + + // Start the background work. + r.goFunc(r.run) + r.goFunc(r.runFSM) + r.goFunc(r.runSnapshots) + return r, nil +} + +// restoreSnapshot attempts to restore the latest snapshots, and fails if none +// of them can be restored. This is called at initialization time, and is +// completely unsafe to call at any other time. +func (r *Raft) restoreSnapshot() error { + snapshots, err := r.snapshots.List() + if err != nil { + r.logger.Printf("[ERR] raft: Failed to list snapshots: %v", err) + return err + } + + // Try to load in order of newest to oldest + for _, snapshot := range snapshots { + _, source, err := r.snapshots.Open(snapshot.ID) + if err != nil { + r.logger.Printf("[ERR] raft: Failed to open snapshot %v: %v", snapshot.ID, err) + continue + } + defer source.Close() + + if err := r.fsm.Restore(source); err != nil { + r.logger.Printf("[ERR] raft: Failed to restore snapshot %v: %v", snapshot.ID, err) + continue + } + + // Log success + r.logger.Printf("[INFO] raft: Restored from snapshot %v", snapshot.ID) + + // Update the lastApplied so we don't replay old logs + r.setLastApplied(snapshot.Index) + + // Update the last stable snapshot info + r.setLastSnapshot(snapshot.Index, snapshot.Term) + + // Update the configuration + if snapshot.Version > 0 { + r.configurations.committed = snapshot.Configuration + r.configurations.committedIndex = snapshot.ConfigurationIndex + r.configurations.latest = snapshot.Configuration + r.configurations.latestIndex = snapshot.ConfigurationIndex + } else { + configuration := decodePeers(snapshot.Peers, r.trans) + r.configurations.committed = configuration + r.configurations.committedIndex = snapshot.Index + r.configurations.latest = configuration + r.configurations.latestIndex = snapshot.Index + } + + // Success! + return nil + } + + // If we had snapshots and failed to load them, it's an error + if len(snapshots) > 0 { + return fmt.Errorf("failed to load any existing snapshots") + } + return nil +} + +// BootstrapCluster is equivalent to non-member BootstrapCluster but can be +// called on an un-bootstrapped Raft instance after it has been created. This +// should only be called at the beginning of time for the cluster, and you +// absolutely must make sure that you call it with the same configuration on all +// the Voter servers. There is no need to bootstrap Nonvoter and Staging +// servers. +func (r *Raft) BootstrapCluster(configuration Configuration) Future { + bootstrapReq := &bootstrapFuture{} + bootstrapReq.init() + bootstrapReq.configuration = configuration + select { + case <-r.shutdownCh: + return errorFuture{ErrRaftShutdown} + case r.bootstrapCh <- bootstrapReq: + return bootstrapReq + } +} + +// Leader is used to return the current leader of the cluster. +// It may return an empty string if there is no current leader +// or the leader is unknown.
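+//
+// A minimal read/write sketch (illustrative only; the command bytes and
+// timeout are arbitrary): verify leadership before serving a consistent
+// read, and apply commands with a bounded wait:
+//
+//	if err := r.VerifyLeader().Error(); err != nil {
+//		// not the leader; redirect the client to r.Leader()
+//	}
+//	f := r.Apply([]byte("set x=1"), 5*time.Second)
+//	if err := f.Error(); err != nil {
+//		// e.g. ErrNotLeader, ErrLeadershipLost, ErrEnqueueTimeout
+//	}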
+func (r *Raft) Leader() ServerAddress { + r.leaderLock.RLock() + leader := r.leader + r.leaderLock.RUnlock() + return leader +} + +// Apply is used to apply a command to the FSM in a highly consistent +// manner. This returns a future that can be used to wait on the application. +// An optional timeout can be provided to limit the amount of time we wait +// for the command to be started. This must be run on the leader or it +// will fail. +func (r *Raft) Apply(cmd []byte, timeout time.Duration) ApplyFuture { + metrics.IncrCounter([]string{"raft", "apply"}, 1) + var timer <-chan time.Time + if timeout > 0 { + timer = time.After(timeout) + } + + // Create a log future, no index or term yet + logFuture := &logFuture{ + log: Log{ + Type: LogCommand, + Data: cmd, + }, + } + logFuture.init() + + select { + case <-timer: + return errorFuture{ErrEnqueueTimeout} + case <-r.shutdownCh: + return errorFuture{ErrRaftShutdown} + case r.applyCh <- logFuture: + return logFuture + } +} + +// Barrier is used to issue a command that blocks until all preceding +// operations have been applied to the FSM. It can be used to ensure the +// FSM reflects all queued writes. An optional timeout can be provided to +// limit the amount of time we wait for the command to be started. This +// must be run on the leader or it will fail. +func (r *Raft) Barrier(timeout time.Duration) Future { + metrics.IncrCounter([]string{"raft", "barrier"}, 1) + var timer <-chan time.Time + if timeout > 0 { + timer = time.After(timeout) + } + + // Create a log future, no index or term yet + logFuture := &logFuture{ + log: Log{ + Type: LogBarrier, + }, + } + logFuture.init() + + select { + case <-timer: + return errorFuture{ErrEnqueueTimeout} + case <-r.shutdownCh: + return errorFuture{ErrRaftShutdown} + case r.applyCh <- logFuture: + return logFuture + } +} + +// VerifyLeader is used to ensure the current node is still +// the leader. This can be done to prevent stale reads when a +// new leader has potentially been elected. +func (r *Raft) VerifyLeader() Future { + metrics.IncrCounter([]string{"raft", "verify_leader"}, 1) + verifyFuture := &verifyFuture{} + verifyFuture.init() + select { + case <-r.shutdownCh: + return errorFuture{ErrRaftShutdown} + case r.verifyCh <- verifyFuture: + return verifyFuture + } +} + +// GetConfiguration returns the latest configuration and its associated index +// currently in use. This may not yet be committed. This must not be called on +// the main thread (which can access the information directly). +func (r *Raft) GetConfiguration() ConfigurationFuture { + configReq := &configurationsFuture{} + configReq.init() + select { + case <-r.shutdownCh: + configReq.respond(ErrRaftShutdown) + return configReq + case r.configurationsCh <- configReq: + return configReq + } +} + +// AddPeer (deprecated) is used to add a new peer into the cluster. This must be +// run on the leader or it will fail. Use AddVoter/AddNonvoter instead. +func (r *Raft) AddPeer(peer ServerAddress) Future { + if r.protocolVersion > 2 { + return errorFuture{ErrUnsupportedProtocol} + } + + return r.requestConfigChange(configurationChangeRequest{ + command: AddStaging, + serverID: ServerID(peer), + serverAddress: peer, + prevIndex: 0, + }, 0) +} + +// RemovePeer (deprecated) is used to remove a peer from the cluster. If the +// current leader is being removed, it will cause a new election +// to occur. This must be run on the leader or it will fail. +// Use RemoveServer instead.
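+//
+// A sketch of the equivalent ID-based calls on newer protocol versions (the
+// ID and address are arbitrary examples; 0 disables the prevIndex
+// check-and-set and the enqueue timeout):
+//
+//	r.AddVoter(ServerID("node2"), ServerAddress("10.0.0.2:8300"), 0, 0)
+//	r.RemoveServer(ServerID("node2"), 0, 0)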
+func (r *Raft) RemovePeer(peer ServerAddress) Future { + if r.protocolVersion > 2 { + return errorFuture{ErrUnsupportedProtocol} + } + + return r.requestConfigChange(configurationChangeRequest{ + command: RemoveServer, + serverID: ServerID(peer), + prevIndex: 0, + }, 0) +} + +// AddVoter will add the given server to the cluster as a staging server. If the +// server is already in the cluster as a voter, this does nothing. This must be +// run on the leader or it will fail. The leader will promote the staging server +// to a voter once that server is ready. If nonzero, prevIndex is the index of +// the only configuration upon which this change may be applied; if another +// configuration entry has been added in the meantime, this request will fail. +// If nonzero, timeout is how long this server should wait before the +// configuration change log entry is appended. +func (r *Raft) AddVoter(id ServerID, address ServerAddress, prevIndex uint64, timeout time.Duration) IndexFuture { + if r.protocolVersion < 2 { + return errorFuture{ErrUnsupportedProtocol} + } + + return r.requestConfigChange(configurationChangeRequest{ + command: AddStaging, + serverID: id, + serverAddress: address, + prevIndex: prevIndex, + }, timeout) +} + +// AddNonvoter will add the given server to the cluster but won't assign it a +// vote. The server will receive log entries, but it won't participate in +// elections or log entry commitment. If the server is already in the cluster as +// a staging server or voter, this does nothing. This must be run on the leader +// or it will fail. For prevIndex and timeout, see AddVoter. +func (r *Raft) AddNonvoter(id ServerID, address ServerAddress, prevIndex uint64, timeout time.Duration) IndexFuture { + if r.protocolVersion < 3 { + return errorFuture{ErrUnsupportedProtocol} + } + + return r.requestConfigChange(configurationChangeRequest{ + command: AddNonvoter, + serverID: id, + serverAddress: address, + prevIndex: prevIndex, + }, timeout) +} + +// RemoveServer will remove the given server from the cluster. If the current +// leader is being removed, it will cause a new election to occur. This must be +// run on the leader or it will fail. For prevIndex and timeout, see AddVoter. +func (r *Raft) RemoveServer(id ServerID, prevIndex uint64, timeout time.Duration) IndexFuture { + if r.protocolVersion < 2 { + return errorFuture{ErrUnsupportedProtocol} + } + + return r.requestConfigChange(configurationChangeRequest{ + command: RemoveServer, + serverID: id, + prevIndex: prevIndex, + }, timeout) +} + +// DemoteVoter will take away a server's vote, if it has one. If present, the +// server will continue to receive log entries, but it won't participate in +// elections or log entry commitment. If the server is not in the cluster, this +// does nothing. This must be run on the leader or it will fail. For prevIndex +// and timeout, see AddVoter. +func (r *Raft) DemoteVoter(id ServerID, prevIndex uint64, timeout time.Duration) IndexFuture { + if r.protocolVersion < 3 { + return errorFuture{ErrUnsupportedProtocol} + } + + return r.requestConfigChange(configurationChangeRequest{ + command: DemoteVoter, + serverID: id, + prevIndex: prevIndex, + }, timeout) +} + +// Shutdown is used to stop the Raft background routines. +// This is not a graceful operation. Provides a future that +// can be used to block until all background routines have exited. 
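+//
+// For example, to stop a node and block until its goroutines have exited:
+//
+//	if err := r.Shutdown().Error(); err != nil {
+//		// handle the error; the node is shut down either way
+//	}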
+func (r *Raft) Shutdown() Future { + r.shutdownLock.Lock() + defer r.shutdownLock.Unlock() + + if !r.shutdown { + close(r.shutdownCh) + r.shutdown = true + r.setState(Shutdown) + return &shutdownFuture{r} + } + + // avoid closing transport twice + return &shutdownFuture{nil} +} + +// Snapshot is used to manually force Raft to take a snapshot. Returns a future +// that can be used to block until complete, and that contains a function that +// can be used to open the snapshot. +func (r *Raft) Snapshot() SnapshotFuture { + future := &userSnapshotFuture{} + future.init() + select { + case r.userSnapshotCh <- future: + return future + case <-r.shutdownCh: + future.respond(ErrRaftShutdown) + return future + } +} + +// Restore is used to manually force Raft to consume an external snapshot, such +// as if restoring from a backup. We will use the current Raft configuration, +// not the one from the snapshot, so that we can restore into a new cluster. We +// will also use the higher of the index of the snapshot, or the current index, +// and then add 1 to that, so we force a new state with a hole in the Raft log, +// so that the snapshot will be sent to followers and used for any new joiners. +// This can only be run on the leader, and blocks until the restore is complete +// or an error occurs. +// +// WARNING! This operation has the leader take on the state of the snapshot and +// then sets itself up so that it replicates that to its followers through the +// install snapshot process. This involves a potentially dangerous period where +// the leader commits ahead of its followers, so should only be used for disaster +// recovery into a fresh cluster, and should not be used in normal operations. +func (r *Raft) Restore(meta *SnapshotMeta, reader io.Reader, timeout time.Duration) error { + metrics.IncrCounter([]string{"raft", "restore"}, 1) + var timer <-chan time.Time + if timeout > 0 { + timer = time.After(timeout) + } + + // Perform the restore. + restore := &userRestoreFuture{ + meta: meta, + reader: reader, + } + restore.init() + select { + case <-timer: + return ErrEnqueueTimeout + case <-r.shutdownCh: + return ErrRaftShutdown + case r.userRestoreCh <- restore: + // If the restore is ingested then wait for it to complete. + if err := restore.Error(); err != nil { + return err + } + } + + // Apply a no-op log entry. Waiting for this allows us to wait until the + // followers have gotten the restore and replicated at least this new + // entry, which shows that we've also faulted and installed the + // snapshot with the contents of the restore. + noop := &logFuture{ + log: Log{ + Type: LogNoop, + }, + } + noop.init() + select { + case <-timer: + return ErrEnqueueTimeout + case <-r.shutdownCh: + return ErrRaftShutdown + case r.applyCh <- noop: + return noop.Error() + } +} + +// State is used to return the current raft state. +func (r *Raft) State() RaftState { + return r.getState() +} + +// LeaderCh is used to get a channel which delivers signals on +// acquiring or losing leadership. It sends true if we become +// the leader, and false if we lose it. The channel is not buffered, +// and does not block on writes. +func (r *Raft) LeaderCh() <-chan bool { + return r.leaderCh +} + +// String returns a string representation of this Raft node. +func (r *Raft) String() string { + return fmt.Sprintf("Node at %s [%v]", r.localAddr, r.getState()) +} + +// LastContact returns the time of last contact by a leader. +// This only makes sense if we are currently a follower.
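+//
+// For example, a follower might treat itself as stale with a heuristic like
+// the following (the threshold is an arbitrary application choice):
+//
+//	if time.Since(r.LastContact()) > 3*DefaultConfig().HeartbeatTimeout {
+//		// probably partitioned from the leader; avoid serving stale reads
+//	}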
+func (r *Raft) LastContact() time.Time { + r.lastContactLock.RLock() + last := r.lastContact + r.lastContactLock.RUnlock() + return last +} + +// Stats is used to return a map of various internal stats. This +// should only be used for informative purposes or debugging. +// +// Keys are: "state", "term", "last_log_index", "last_log_term", +// "commit_index", "applied_index", "fsm_pending", +// "last_snapshot_index", "last_snapshot_term", +// "latest_configuration", "last_contact", and "num_peers". +// +// The value of "state" is a numerical value representing a +// RaftState const. +// +// The value of "latest_configuration" is a string which contains +// the id of each server, its suffrage status, and its address. +// +// The value of "last_contact" is either "never" if there +// has been no contact with a leader, "0" if the node is in the +// leader state, or the time since last contact with a leader +// formatted as a string. +// +// The value of "num_peers" is the number of other voting servers in the +// cluster, not including this node. If this node isn't part of the +// configuration then this will be "0". +// +// All other values are uint64s, formatted as strings. +func (r *Raft) Stats() map[string]string { + toString := func(v uint64) string { + return strconv.FormatUint(v, 10) + } + lastLogIndex, lastLogTerm := r.getLastLog() + lastSnapIndex, lastSnapTerm := r.getLastSnapshot() + s := map[string]string{ + "state": r.getState().String(), + "term": toString(r.getCurrentTerm()), + "last_log_index": toString(lastLogIndex), + "last_log_term": toString(lastLogTerm), + "commit_index": toString(r.getCommitIndex()), + "applied_index": toString(r.getLastApplied()), + "fsm_pending": toString(uint64(len(r.fsmMutateCh))), + "last_snapshot_index": toString(lastSnapIndex), + "last_snapshot_term": toString(lastSnapTerm), + "protocol_version": toString(uint64(r.protocolVersion)), + "protocol_version_min": toString(uint64(ProtocolVersionMin)), + "protocol_version_max": toString(uint64(ProtocolVersionMax)), + "snapshot_version_min": toString(uint64(SnapshotVersionMin)), + "snapshot_version_max": toString(uint64(SnapshotVersionMax)), + } + + future := r.GetConfiguration() + if err := future.Error(); err != nil { + r.logger.Printf("[WARN] raft: could not get configuration for Stats: %v", err) + } else { + configuration := future.Configuration() + s["latest_configuration_index"] = toString(future.Index()) + s["latest_configuration"] = fmt.Sprintf("%+v", configuration.Servers) + + // This is a legacy metric that we've seen people use in the wild. + hasUs := false + numPeers := 0 + for _, server := range configuration.Servers { + if server.Suffrage == Voter { + if server.ID == r.localID { + hasUs = true + } else { + numPeers++ + } + } + } + if !hasUs { + numPeers = 0 + } + s["num_peers"] = toString(uint64(numPeers)) + } + + last := r.LastContact() + if last.IsZero() { + s["last_contact"] = "never" + } else if r.getState() == Leader { + s["last_contact"] = "0" + } else { + s["last_contact"] = fmt.Sprintf("%v", time.Now().Sub(last)) + } + return s +} + +// LastIndex returns the last index in stable storage, +// either from the last log or from the last snapshot. +func (r *Raft) LastIndex() uint64 { + return r.getLastIndex() +} + +// AppliedIndex returns the last index applied to the FSM. This is generally +// lagging behind the last index, especially for indexes that are persisted but +// have not yet been considered committed by the leader. 
NOTE - this reflects +// the last index that was sent to the application's FSM over the apply channel +// but DOES NOT mean that the application's FSM has yet consumed it and applied +// it to its internal state. Thus, the application's state may lag behind this +// index. +func (r *Raft) AppliedIndex() uint64 { + return r.getLastApplied() +} diff --git a/vendor/github.com/hashicorp/raft/commands.go b/vendor/github.com/hashicorp/raft/commands.go new file mode 100644 index 0000000000..5d89e7bcdb --- /dev/null +++ b/vendor/github.com/hashicorp/raft/commands.go @@ -0,0 +1,151 @@ +package raft + +// RPCHeader is a common sub-structure used to pass along protocol version and +// other information about the cluster. For older Raft implementations before +// versioning was added this will default to a zero-valued structure when read +// by newer Raft versions. +type RPCHeader struct { + // ProtocolVersion is the version of the protocol the sender is + // speaking. + ProtocolVersion ProtocolVersion +} + +// WithRPCHeader is an interface that exposes the RPC header. +type WithRPCHeader interface { + GetRPCHeader() RPCHeader +} + +// AppendEntriesRequest is the command used to append entries to the +// replicated log. +type AppendEntriesRequest struct { + RPCHeader + + // Provide the current term and leader + Term uint64 + Leader []byte + + // Provide the previous entries for integrity checking + PrevLogEntry uint64 + PrevLogTerm uint64 + + // New entries to commit + Entries []*Log + + // Commit index on the leader + LeaderCommitIndex uint64 +} + +// See WithRPCHeader. +func (r *AppendEntriesRequest) GetRPCHeader() RPCHeader { + return r.RPCHeader +} + +// AppendEntriesResponse is the response returned from an +// AppendEntriesRequest. +type AppendEntriesResponse struct { + RPCHeader + + // Newer term if leader is out of date + Term uint64 + + // Last Log is a hint to help accelerate rebuilding slow nodes + LastLog uint64 + + // We may not succeed if we have a conflicting entry + Success bool + + // There are scenarios where this request didn't succeed + // but there's no need to wait/back-off the next attempt. + NoRetryBackoff bool +} + +// See WithRPCHeader. +func (r *AppendEntriesResponse) GetRPCHeader() RPCHeader { + return r.RPCHeader +} + +// RequestVoteRequest is the command used by a candidate to ask a Raft peer +// for a vote in an election. +type RequestVoteRequest struct { + RPCHeader + + // Provide the term and our id + Term uint64 + Candidate []byte + + // Used to ensure safety + LastLogIndex uint64 + LastLogTerm uint64 +} + +// See WithRPCHeader. +func (r *RequestVoteRequest) GetRPCHeader() RPCHeader { + return r.RPCHeader +} + +// RequestVoteResponse is the response returned from a RequestVoteRequest. +type RequestVoteResponse struct { + RPCHeader + + // Newer term if leader is out of date. + Term uint64 + + // Peers is deprecated, but required by servers that only understand + // protocol version 0. This is not populated in protocol version 2 + // and later. + Peers []byte + + // Is the vote granted. + Granted bool +} + +// See WithRPCHeader. +func (r *RequestVoteResponse) GetRPCHeader() RPCHeader { + return r.RPCHeader +} + +// InstallSnapshotRequest is the command sent to a Raft peer to bootstrap its +// log (and state machine) from a snapshot on another peer. 
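+//
+// Like the other commands in this file it embeds RPCHeader, so a receiver can
+// inspect the sender's protocol version; a sketch (senderVersion is a
+// hypothetical helper, not part of this package):
+//
+//	func senderVersion(cmd WithRPCHeader) ProtocolVersion {
+//		return cmd.GetRPCHeader().ProtocolVersion
+//	}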
+type InstallSnapshotRequest struct { + RPCHeader + SnapshotVersion SnapshotVersion + + Term uint64 + Leader []byte + + // These are the last index/term included in the snapshot + LastLogIndex uint64 + LastLogTerm uint64 + + // Peer Set in the snapshot. This is deprecated in favor of Configuration + // but remains here in case we receive an InstallSnapshot from a leader + // that's running old code. + Peers []byte + + // Cluster membership. + Configuration []byte + // Log index where 'Configuration' entry was originally written. + ConfigurationIndex uint64 + + // Size of the snapshot + Size int64 +} + +// See WithRPCHeader. +func (r *InstallSnapshotRequest) GetRPCHeader() RPCHeader { + return r.RPCHeader +} + +// InstallSnapshotResponse is the response returned from an +// InstallSnapshotRequest. +type InstallSnapshotResponse struct { + RPCHeader + + Term uint64 + Success bool +} + +// See WithRPCHeader. +func (r *InstallSnapshotResponse) GetRPCHeader() RPCHeader { + return r.RPCHeader +} diff --git a/vendor/github.com/hashicorp/raft/commitment.go b/vendor/github.com/hashicorp/raft/commitment.go new file mode 100644 index 0000000000..b5ba2634ef --- /dev/null +++ b/vendor/github.com/hashicorp/raft/commitment.go @@ -0,0 +1,101 @@ +package raft + +import ( + "sort" + "sync" +) + +// Commitment is used to advance the leader's commit index. The leader and +// replication goroutines report in newly written entries with Match(), and +// this notifies on commitCh when the commit index has advanced. +type commitment struct { + // protects matchIndexes and commitIndex + sync.Mutex + // notified when commitIndex increases + commitCh chan struct{} + // voter ID to log index: the server stores up through this log entry + matchIndexes map[ServerID]uint64 + // a quorum stores up through this log entry. monotonically increases. + commitIndex uint64 + // the first index of this leader's term: this needs to be replicated to a + // majority of the cluster before this leader may mark anything committed + // (per Raft's commitment rule) + startIndex uint64 +} + +// newCommitment returns a commitment struct that notifies the provided +// channel when log entries have been committed. A new commitment struct is +// created each time this server becomes leader for a particular term. +// 'configuration' is the servers in the cluster. +// 'startIndex' is the first index created in this term (see +// its description above). +func newCommitment(commitCh chan struct{}, configuration Configuration, startIndex uint64) *commitment { + matchIndexes := make(map[ServerID]uint64) + for _, server := range configuration.Servers { + if server.Suffrage == Voter { + matchIndexes[server.ID] = 0 + } + } + return &commitment{ + commitCh: commitCh, + matchIndexes: matchIndexes, + commitIndex: 0, + startIndex: startIndex, + } +} + +// Called when a new cluster membership configuration is created: it will be +// used to determine commitment from now on. 'configuration' is the servers in +// the cluster.
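+//
+// A worked example of the quorum arithmetic in recalculate below
+// (illustrative numbers): with five voters whose matchIndexes are 100, 90,
+// 80, 70 and 0, the sorted slice is [0 70 80 90 100] and matched[(5-1)/2]
+// picks 80 -- the highest index that a majority (three of five) of the
+// voters have stored.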
+func (c *commitment) setConfiguration(configuration Configuration) { + c.Lock() + defer c.Unlock() + oldMatchIndexes := c.matchIndexes + c.matchIndexes = make(map[ServerID]uint64) + for _, server := range configuration.Servers { + if server.Suffrage == Voter { + c.matchIndexes[server.ID] = oldMatchIndexes[server.ID] // defaults to 0 + } + } + c.recalculate() +} + +// Called by leader after commitCh is notified +func (c *commitment) getCommitIndex() uint64 { + c.Lock() + defer c.Unlock() + return c.commitIndex +} + +// Match is called once a server completes writing entries to disk: either the +// leader has written the new entry or a follower has replied to an +// AppendEntries RPC. The given server's disk agrees with this server's log up +// through the given index. +func (c *commitment) match(server ServerID, matchIndex uint64) { + c.Lock() + defer c.Unlock() + if prev, hasVote := c.matchIndexes[server]; hasVote && matchIndex > prev { + c.matchIndexes[server] = matchIndex + c.recalculate() + } +} + +// Internal helper to calculate new commitIndex from matchIndexes. +// Must be called with lock held. +func (c *commitment) recalculate() { + if len(c.matchIndexes) == 0 { + return + } + + matched := make([]uint64, 0, len(c.matchIndexes)) + for _, idx := range c.matchIndexes { + matched = append(matched, idx) + } + sort.Sort(uint64Slice(matched)) + quorumMatchIndex := matched[(len(matched)-1)/2] + + if quorumMatchIndex > c.commitIndex && quorumMatchIndex >= c.startIndex { + c.commitIndex = quorumMatchIndex + asyncNotifyCh(c.commitCh) + } +} diff --git a/vendor/github.com/hashicorp/raft/config.go b/vendor/github.com/hashicorp/raft/config.go new file mode 100644 index 0000000000..c1ce03ac22 --- /dev/null +++ b/vendor/github.com/hashicorp/raft/config.go @@ -0,0 +1,258 @@ +package raft + +import ( + "fmt" + "io" + "log" + "time" +) + +// These are the versions of the protocol (which includes RPC messages as +// well as Raft-specific log entries) that this server can _understand_. Use +// the ProtocolVersion member of the Config object to control the version of +// the protocol to use when _speaking_ to other servers. Note that depending on +// the protocol version being spoken, some otherwise understood RPC messages +// may be refused. See dispositionRPC for details of this logic. +// +// There are notes about the upgrade path in the description of the versions +// below. If you are starting a fresh cluster then there's no reason not to +// jump right to the latest protocol version. If you need to interoperate with +// older, version 0 Raft servers you'll need to drive the cluster through the +// different versions in order. +// +// The version details are complicated, but here's a summary of what's required +// to get from a version 0 cluster to version 3: +// +// 1. In version N of your app that starts using the new Raft library with +// versioning, set ProtocolVersion to 1. +// 2. Make version N+1 of your app require version N as a prerequisite (all +// servers must be upgraded). For version N+1 of your app set ProtocolVersion +// to 2. +// 3. Similarly, make version N+2 of your app require version N+1 as a +// prerequisite. For version N+2 of your app, set ProtocolVersion to 3. +// +// During this upgrade, older cluster members will still have Server IDs equal +// to their network addresses. To upgrade an older member and give it an ID, it +// needs to leave the cluster and re-enter: +// +// 1. 
Remove the server from the cluster with RemoveServer, using its network +// address as its ServerID. +// 2. Update the server's config to a better ID (restarting the server). +// 3. Add the server back to the cluster with AddVoter, using its new ID. +// +// You can do this during the rolling upgrade from N+1 to N+2 of your app, or +// as a rolling change at any time after the upgrade. +// +// Version History +// +// 0: Original Raft library before versioning was added. Servers running this +// version of the Raft library use AddPeerDeprecated/RemovePeerDeprecated +// for all configuration changes, and have no support for LogConfiguration. +// 1: First versioned protocol, used to interoperate with old servers, and begin +// the migration path to newer versions of the protocol. Under this version +// all configuration changes are propagated using the now-deprecated +// RemovePeerDeprecated Raft log entry. This means that server IDs are always +// set to be the same as the server addresses (since the old log entry type +// cannot transmit an ID), and only AddPeer/RemovePeer APIs are supported. +// Servers running this version of the protocol can understand the new +// LogConfiguration Raft log entry but will never generate one so they can +// remain compatible with version 0 Raft servers in the cluster. +// 2: Transitional protocol used when migrating an existing cluster to the new +// server ID system. Server IDs are still set to be the same as server +// addresses, but all configuration changes are propagated using the new +// LogConfiguration Raft log entry type, which can carry full ID information. +// This version supports the old AddPeer/RemovePeer APIs as well as the new +// ID-based AddVoter/RemoveServer APIs which should be used when adding +// version 3 servers to the cluster later. This version sheds all +// interoperability with version 0 servers, but can interoperate with newer +// Raft servers running with protocol version 1 since they can understand the +// new LogConfiguration Raft log entry, and this version can still understand +// their RemovePeerDeprecated Raft log entries. We need this protocol version +// as an intermediate step between 1 and 3 so that servers will propagate the +// ID information that will come from newly-added (or -rolled) servers using +// protocol version 3, but since they are still using their address-based IDs +// from the previous step they will still be able to track commitments and +// their own voting status properly. If we skipped this step, servers would +// be started with their new IDs, but they wouldn't see themselves in the old +// address-based configuration, so none of the servers would think they had a +// vote. +// 3: Protocol adding full support for server IDs and new ID-based server APIs +// (AddVoter, AddNonvoter, etc.), old AddPeer/RemovePeer APIs are no longer +// supported. Version 2 servers should be swapped out by removing them from +// the cluster one-by-one and re-adding them with updated configuration for +// this protocol version, along with their server ID. The remove/add cycle +// is required to populate their server ID. Note that removing must be done +// by ID, which will be the old server's address. +type ProtocolVersion int + +const ( + ProtocolVersionMin ProtocolVersion = 0 + ProtocolVersionMax = 3 +) + +// These are versions of snapshots that this server can _understand_. 
Currently, +// it is always assumed that this server generates the latest version, though +// this may be changed in the future to include a configurable version. +// +// Version History +// +// 0: Original Raft library before versioning was added. The peers portion of +// these snapshots is encoded in the legacy format which requires decodePeers +// to parse. This version of snapshots should only be produced by the +// unversioned Raft library. +// 1: New format which adds support for a full configuration structure and its +// associated log index, with support for server IDs and non-voting server +// modes. To ease upgrades, this also includes the legacy peers structure but +// that will never be used by servers that understand version 1 snapshots. +// Since the original Raft library didn't enforce any versioning, we must +// include the legacy peers structure for this version, but we can deprecate +// it in the next snapshot version. +type SnapshotVersion int + +const ( + SnapshotVersionMin SnapshotVersion = 0 + SnapshotVersionMax = 1 +) + +// Config provides any necessary configuration for the Raft server. +type Config struct { + // ProtocolVersion allows a Raft server to inter-operate with older + // Raft servers running an older version of the code. This is used to + // version the wire protocol as well as Raft-specific log entries that + // the server uses when _speaking_ to other servers. There is currently + // no auto-negotiation of versions so all servers must be manually + // configured with compatible versions. See ProtocolVersionMin and + // ProtocolVersionMax for the versions of the protocol that this server + // can _understand_. + ProtocolVersion ProtocolVersion + + // HeartbeatTimeout specifies the time in follower state without + // a leader before we attempt an election. + HeartbeatTimeout time.Duration + + // ElectionTimeout specifies the time in candidate state without + // a leader before we attempt an election. + ElectionTimeout time.Duration + + // CommitTimeout controls the time without an Apply() operation + // before we heartbeat to ensure a timely commit. Due to random + // staggering, may be delayed as much as 2x this value. + CommitTimeout time.Duration + + // MaxAppendEntries controls the maximum number of append entries + // to send at once. We want to strike a balance between efficiency + // and avoiding waste if the follower is going to reject because of + // an inconsistent log. + MaxAppendEntries int + + // If we are a member of a cluster, and RemovePeer is invoked for the + // local node, then we forget all peers and transition into the follower state. + // If ShutdownOnRemove is set, we additionally shut down Raft. Otherwise, + // we can become a leader of a cluster containing only this node. + ShutdownOnRemove bool + + // TrailingLogs controls how many logs we leave after a snapshot. This is + // used so that we can quickly replay logs on a follower instead of being + // forced to send an entire snapshot. + TrailingLogs uint64 + + // SnapshotInterval controls how often we check if we should perform a snapshot. + // We randomly stagger between this value and 2x this value to keep the entire + // cluster from performing a snapshot at once. + SnapshotInterval time.Duration + + // SnapshotThreshold controls how many outstanding logs there must be before + // we perform a snapshot. This is to prevent excessive snapshots when we can + // just replay a small set of logs.
+ SnapshotThreshold uint64 + + // LeaderLeaseTimeout is used to control how long the "lease" lasts + // for being the leader without being able to contact a quorum + // of nodes. If we reach this interval without contact, we will + // step down as leader. + LeaderLeaseTimeout time.Duration + + // StartAsLeader forces Raft to start in the leader state. This should + // never be used except for testing purposes, as it can cause a split-brain. + StartAsLeader bool + + // The unique ID for this server across all time. When running with + // ProtocolVersion < 3, you must set this to be the same as the network + // address of your transport. + LocalID ServerID + + // NotifyCh is used to provide a channel that will be notified of leadership + // changes. Raft will block writing to this channel, so it should either be + // buffered or aggressively consumed. + NotifyCh chan<- bool + + // LogOutput is used as a sink for logs, unless Logger is specified. + // Defaults to os.Stderr. + LogOutput io.Writer + + // Logger is a user-provided logger. If nil, a logger writing to LogOutput + // is used. + Logger *log.Logger +} + +// DefaultConfig returns a Config with usable defaults. +func DefaultConfig() *Config { + return &Config{ + ProtocolVersion: ProtocolVersionMax, + HeartbeatTimeout: 1000 * time.Millisecond, + ElectionTimeout: 1000 * time.Millisecond, + CommitTimeout: 50 * time.Millisecond, + MaxAppendEntries: 64, + ShutdownOnRemove: true, + TrailingLogs: 10240, + SnapshotInterval: 120 * time.Second, + SnapshotThreshold: 8192, + LeaderLeaseTimeout: 500 * time.Millisecond, + } +} + +// ValidateConfig is used to validate a sane configuration +func ValidateConfig(config *Config) error { + // We don't actually support running as 0 in the library any more, but + // we do understand it. 
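+	// The effective minimum enforced below is therefore version 1, even
+	// though ProtocolVersionMin is 0.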
+ protocolMin := ProtocolVersionMin + if protocolMin == 0 { + protocolMin = 1 + } + if config.ProtocolVersion < protocolMin || + config.ProtocolVersion > ProtocolVersionMax { + return fmt.Errorf("Protocol version %d must be >= %d and <= %d", + config.ProtocolVersion, protocolMin, ProtocolVersionMax) + } + if len(config.LocalID) == 0 { + return fmt.Errorf("LocalID cannot be empty") + } + if config.HeartbeatTimeout < 5*time.Millisecond { + return fmt.Errorf("Heartbeat timeout is too low") + } + if config.ElectionTimeout < 5*time.Millisecond { + return fmt.Errorf("Election timeout is too low") + } + if config.CommitTimeout < time.Millisecond { + return fmt.Errorf("Commit timeout is too low") + } + if config.MaxAppendEntries <= 0 { + return fmt.Errorf("MaxAppendEntries must be positive") + } + if config.MaxAppendEntries > 1024 { + return fmt.Errorf("MaxAppendEntries is too large") + } + if config.SnapshotInterval < 5*time.Millisecond { + return fmt.Errorf("Snapshot interval is too low") + } + if config.LeaderLeaseTimeout < 5*time.Millisecond { + return fmt.Errorf("Leader lease timeout is too low") + } + if config.LeaderLeaseTimeout > config.HeartbeatTimeout { + return fmt.Errorf("Leader lease timeout cannot be larger than heartbeat timeout") + } + if config.ElectionTimeout < config.HeartbeatTimeout { + return fmt.Errorf("Election timeout must be equal to or greater than Heartbeat Timeout") + } + return nil +} diff --git a/vendor/github.com/hashicorp/raft/configuration.go b/vendor/github.com/hashicorp/raft/configuration.go new file mode 100644 index 0000000000..74508c5e53 --- /dev/null +++ b/vendor/github.com/hashicorp/raft/configuration.go @@ -0,0 +1,343 @@ +package raft + +import "fmt" + +// ServerSuffrage determines whether a Server in a Configuration gets a vote. +type ServerSuffrage int + +// Note: Don't renumber these, since the numbers are written into the log. +const ( + // Voter is a server whose vote is counted in elections and whose match index + // is used in advancing the leader's commit index. + Voter ServerSuffrage = iota + // Nonvoter is a server that receives log entries but is not considered for + // elections or commitment purposes. + Nonvoter + // Staging is a server that acts like a nonvoter with one exception: once a + // staging server receives enough log entries to be sufficiently caught up to + // the leader's log, the leader will invoke a membership change to change + // the Staging server to a Voter. + Staging +) + +func (s ServerSuffrage) String() string { + switch s { + case Voter: + return "Voter" + case Nonvoter: + return "Nonvoter" + case Staging: + return "Staging" + } + return "ServerSuffrage" +} + +// ServerID is a unique string identifying a server for all time. +type ServerID string + +// ServerAddress is a network address for a server that a transport can contact. +type ServerAddress string + +// Server tracks the information about a single server in a configuration. +type Server struct { + // Suffrage determines whether the server gets a vote. + Suffrage ServerSuffrage + // ID is a unique string identifying this server for all time. + ID ServerID + // Address is its network address that a transport can contact. + Address ServerAddress +} + +// Configuration tracks which servers are in the cluster, and whether they have +// votes. This should include the local server, if it's a member of the cluster. +// The servers are listed in no particular order, but each should only appear once. +// These entries are appended to the log during membership changes.
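+//
+// For example, a three-server cluster could be described as (the IDs and
+// addresses are arbitrary examples):
+//
+//	Configuration{Servers: []Server{
+//		{Suffrage: Voter, ID: "node1", Address: "10.0.0.1:8300"},
+//		{Suffrage: Voter, ID: "node2", Address: "10.0.0.2:8300"},
+//		{Suffrage: Nonvoter, ID: "node3", Address: "10.0.0.3:8300"},
+//	}}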
+type Configuration struct { + Servers []Server +} + +// Clone makes a deep copy of a Configuration. +func (c *Configuration) Clone() (copy Configuration) { + copy.Servers = append(copy.Servers, c.Servers...) + return +} + +// ConfigurationChangeCommand enumerates the different ways to change the +// cluster configuration. +type ConfigurationChangeCommand uint8 + +const ( + // AddStaging makes a server Staging unless it's a Voter. + AddStaging ConfigurationChangeCommand = iota + // AddNonvoter makes a server Nonvoter unless it's Staging or a Voter. + AddNonvoter + // DemoteVoter makes a server Nonvoter unless it's absent. + DemoteVoter + // RemoveServer removes a server entirely from the cluster membership. + RemoveServer + // Promote is created automatically by a leader; it turns a Staging server + // into a Voter. + Promote +) + +func (c ConfigurationChangeCommand) String() string { + switch c { + case AddStaging: + return "AddStaging" + case AddNonvoter: + return "AddNonvoter" + case DemoteVoter: + return "DemoteVoter" + case RemoveServer: + return "RemoveServer" + case Promote: + return "Promote" + } + return "ConfigurationChangeCommand" +} + +// configurationChangeRequest describes a change that a leader would like to +// make to its current configuration. It's used only within a single server +// (never serialized into the log), as part of `configurationChangeFuture`. +type configurationChangeRequest struct { + command ConfigurationChangeCommand + serverID ServerID + serverAddress ServerAddress // only present for AddStaging, AddNonvoter + // prevIndex, if nonzero, is the index of the only configuration upon which + // this change may be applied; if another configuration entry has been + // added in the meantime, this request will fail. + prevIndex uint64 +} + +// configurations is state tracked on every server about its Configurations. +// Note that, per Diego's dissertation, there can be at most one uncommitted +// configuration at a time (the next configuration may not be created until the +// prior one has been committed). +// +// One downside to storing just two configurations is that if you try to take a +// snapshot when your state machine hasn't yet applied the committedIndex, we +// have no record of the configuration that would logically fit into that +// snapshot. We disallow snapshots in that case now. An alternative approach, +// which LogCabin uses, is to track every configuration change in the +// log. +type configurations struct { + // committed is the latest configuration in the log/snapshot that has been + // committed (the one with the largest index). + committed Configuration + // committedIndex is the log index where 'committed' was written. + committedIndex uint64 + // latest is the latest configuration in the log/snapshot (may be committed + // or uncommitted) + latest Configuration + // latestIndex is the log index where 'latest' was written. + latestIndex uint64 +} + +// Clone makes a deep copy of a configurations object. +func (c *configurations) Clone() (copy configurations) { + copy.committed = c.committed.Clone() + copy.committedIndex = c.committedIndex + copy.latest = c.latest.Clone() + copy.latestIndex = c.latestIndex + return +} + +// hasVote returns true if the server identified by 'id' is a Voter in the +// provided Configuration.
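+//
+// The prevIndex field above gives membership changes check-and-set semantics;
+// a sketch of the calling pattern (the ID and address are arbitrary examples):
+//
+//	cf := r.GetConfiguration()
+//	if err := cf.Error(); err == nil {
+//		// fails if the configuration has changed since cf.Index()
+//		r.AddVoter(ServerID("node4"), ServerAddress("10.0.0.4:8300"), cf.Index(), 0)
+//	}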
+func hasVote(configuration Configuration, id ServerID) bool { + for _, server := range configuration.Servers { + if server.ID == id { + return server.Suffrage == Voter + } + } + return false +} + +// checkConfiguration tests a cluster membership configuration for common +// errors. +func checkConfiguration(configuration Configuration) error { + idSet := make(map[ServerID]bool) + addressSet := make(map[ServerAddress]bool) + var voters int + for _, server := range configuration.Servers { + if server.ID == "" { + return fmt.Errorf("Empty ID in configuration: %v", configuration) + } + if server.Address == "" { + return fmt.Errorf("Empty address in configuration: %v", server) + } + if idSet[server.ID] { + return fmt.Errorf("Found duplicate ID in configuration: %v", server.ID) + } + idSet[server.ID] = true + if addressSet[server.Address] { + return fmt.Errorf("Found duplicate address in configuration: %v", server.Address) + } + addressSet[server.Address] = true + if server.Suffrage == Voter { + voters++ + } + } + if voters == 0 { + return fmt.Errorf("Need at least one voter in configuration: %v", configuration) + } + return nil +} + +// nextConfiguration generates a new Configuration from the current one and a +// configuration change request. It's split from appendConfigurationEntry so +// that it can be unit tested easily. +func nextConfiguration(current Configuration, currentIndex uint64, change configurationChangeRequest) (Configuration, error) { + if change.prevIndex > 0 && change.prevIndex != currentIndex { + return Configuration{}, fmt.Errorf("Configuration changed since %v (latest is %v)", change.prevIndex, currentIndex) + } + + configuration := current.Clone() + switch change.command { + case AddStaging: + // TODO: barf on new address? + newServer := Server{ + // TODO: This should add the server as Staging, to be automatically + // promoted to Voter later. However, the promotion to Voter is not yet + // implemented, and doing so is not trivial with the way the leader loop + // coordinates with the replication goroutines today. So, for now, the + // server will have a vote right away, and the Promote case below is + // unused. + Suffrage: Voter, + ID: change.serverID, + Address: change.serverAddress, + } + found := false + for i, server := range configuration.Servers { + if server.ID == change.serverID { + if server.Suffrage == Voter { + configuration.Servers[i].Address = change.serverAddress + } else { + configuration.Servers[i] = newServer + } + found = true + break + } + } + if !found { + configuration.Servers = append(configuration.Servers, newServer) + } + case AddNonvoter: + newServer := Server{ + Suffrage: Nonvoter, + ID: change.serverID, + Address: change.serverAddress, + } + found := false + for i, server := range configuration.Servers { + if server.ID == change.serverID { + if server.Suffrage != Nonvoter { + configuration.Servers[i].Address = change.serverAddress + } else { + configuration.Servers[i] = newServer + } + found = true + break + } + } + if !found { + configuration.Servers = append(configuration.Servers, newServer) + } + case DemoteVoter: + for i, server := range configuration.Servers { + if server.ID == change.serverID { + configuration.Servers[i].Suffrage = Nonvoter + break + } + } + case RemoveServer: + for i, server := range configuration.Servers { + if server.ID == change.serverID { + configuration.Servers = append(configuration.Servers[:i], configuration.Servers[i+1:]...)
+ break + } + } + case Promote: + for i, server := range configuration.Servers { + if server.ID == change.serverID && server.Suffrage == Staging { + configuration.Servers[i].Suffrage = Voter + break + } + } + } + + // Make sure we didn't do something bad like remove the last voter + if err := checkConfiguration(configuration); err != nil { + return Configuration{}, err + } + + return configuration, nil +} + +// encodePeers is used to serialize a Configuration into the old peers format. +// This is here for backwards compatibility when operating with a mix of old +// servers and should be removed once we deprecate support for protocol version 1. +func encodePeers(configuration Configuration, trans Transport) []byte { + // Gather up all the voters, other suffrage types are not supported by + // this data format. + var encPeers [][]byte + for _, server := range configuration.Servers { + if server.Suffrage == Voter { + encPeers = append(encPeers, trans.EncodePeer(server.Address)) + } + } + + // Encode the entire array. + buf, err := encodeMsgPack(encPeers) + if err != nil { + panic(fmt.Errorf("failed to encode peers: %v", err)) + } + + return buf.Bytes() +} + +// decodePeers is used to deserialize an old list of peers into a Configuration. +// This is here for backwards compatibility with old log entries and snapshots; +// it should be removed eventually. +func decodePeers(buf []byte, trans Transport) Configuration { + // Decode the buffer first. + var encPeers [][]byte + if err := decodeMsgPack(buf, &encPeers); err != nil { + panic(fmt.Errorf("failed to decode peers: %v", err)) + } + + // Deserialize each peer. + var servers []Server + for _, enc := range encPeers { + p := trans.DecodePeer(enc) + servers = append(servers, Server{ + Suffrage: Voter, + ID: ServerID(p), + Address: ServerAddress(p), + }) + } + + return Configuration{ + Servers: servers, + } +} + +// encodeConfiguration serializes a Configuration using MsgPack, or panics on +// errors. +func encodeConfiguration(configuration Configuration) []byte { + buf, err := encodeMsgPack(configuration) + if err != nil { + panic(fmt.Errorf("failed to encode configuration: %v", err)) + } + return buf.Bytes() +} + +// decodeConfiguration deserializes a Configuration using MsgPack, or panics on +// errors. +func decodeConfiguration(buf []byte) Configuration { + var configuration Configuration + if err := decodeMsgPack(buf, &configuration); err != nil { + panic(fmt.Errorf("failed to decode configuration: %v", err)) + } + return configuration +} diff --git a/vendor/github.com/hashicorp/raft/discard_snapshot.go b/vendor/github.com/hashicorp/raft/discard_snapshot.go new file mode 100644 index 0000000000..5e93a9fe01 --- /dev/null +++ b/vendor/github.com/hashicorp/raft/discard_snapshot.go @@ -0,0 +1,49 @@ +package raft + +import ( + "fmt" + "io" +) + +// DiscardSnapshotStore is used to successfully snapshot while +// always discarding the snapshot. This is useful for when the +// log should be truncated but no snapshot should be retained. +// This should never be used for production use, and is only +// suitable for testing. +type DiscardSnapshotStore struct{} + +type DiscardSnapshotSink struct{} + +// NewDiscardSnapshotStore is used to create a new DiscardSnapshotStore. 
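+//
+// A minimal single-node test setup might wire it together as in this sketch
+// (testFSM stands in for any FSM implementation; NewInmemStore and
+// NewInmemTransport are the in-memory helpers elsewhere in this package):
+//
+//	conf := DefaultConfig()
+//	conf.LocalID = ServerID("node1")
+//	store := NewInmemStore() // serves as both LogStore and StableStore
+//	snaps := NewDiscardSnapshotStore()
+//	addr, trans := NewInmemTransport("")
+//	servers := Configuration{Servers: []Server{
+//		{Suffrage: Voter, ID: conf.LocalID, Address: addr},
+//	}}
+//	if err := BootstrapCluster(conf, store, store, snaps, trans, servers); err != nil {
+//		// handle the error
+//	}
+//	r, err := NewRaft(conf, &testFSM{}, store, store, snaps, trans)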
+func NewDiscardSnapshotStore() *DiscardSnapshotStore { + return &DiscardSnapshotStore{} +} + +func (d *DiscardSnapshotStore) Create(version SnapshotVersion, index, term uint64, + configuration Configuration, configurationIndex uint64, trans Transport) (SnapshotSink, error) { + return &DiscardSnapshotSink{}, nil +} + +func (d *DiscardSnapshotStore) List() ([]*SnapshotMeta, error) { + return nil, nil +} + +func (d *DiscardSnapshotStore) Open(id string) (*SnapshotMeta, io.ReadCloser, error) { + return nil, nil, fmt.Errorf("open is not supported") +} + +func (d *DiscardSnapshotSink) Write(b []byte) (int, error) { + return len(b), nil +} + +func (d *DiscardSnapshotSink) Close() error { + return nil +} + +func (d *DiscardSnapshotSink) ID() string { + return "discard" +} + +func (d *DiscardSnapshotSink) Cancel() error { + return nil +} diff --git a/vendor/github.com/hashicorp/raft/file_snapshot.go b/vendor/github.com/hashicorp/raft/file_snapshot.go new file mode 100644 index 0000000000..17d080134a --- /dev/null +++ b/vendor/github.com/hashicorp/raft/file_snapshot.go @@ -0,0 +1,494 @@ +package raft + +import ( + "bufio" + "bytes" + "encoding/json" + "fmt" + "hash" + "hash/crc64" + "io" + "io/ioutil" + "log" + "os" + "path/filepath" + "sort" + "strings" + "time" +) + +const ( + testPath = "permTest" + snapPath = "snapshots" + metaFilePath = "meta.json" + stateFilePath = "state.bin" + tmpSuffix = ".tmp" +) + +// FileSnapshotStore implements the SnapshotStore interface and allows +// snapshots to be made on the local disk. +type FileSnapshotStore struct { + path string + retain int + logger *log.Logger +} + +type snapMetaSlice []*fileSnapshotMeta + +// FileSnapshotSink implements SnapshotSink with a file. +type FileSnapshotSink struct { + store *FileSnapshotStore + logger *log.Logger + dir string + meta fileSnapshotMeta + + stateFile *os.File + stateHash hash.Hash64 + buffered *bufio.Writer + + closed bool +} + +// fileSnapshotMeta is stored on disk. We also put a CRC +// on disk so that we can verify the snapshot. +type fileSnapshotMeta struct { + SnapshotMeta + CRC []byte +} + +// bufferedFile is returned when we open a snapshot. This way +// reads are buffered and the file still gets closed. +type bufferedFile struct { + bh *bufio.Reader + fh *os.File +} + +func (b *bufferedFile) Read(p []byte) (n int, err error) { + return b.bh.Read(p) +} + +func (b *bufferedFile) Close() error { + return b.fh.Close() +} + +// NewFileSnapshotStoreWithLogger creates a new FileSnapshotStore based +// on a base directory. The `retain` parameter controls how many +// snapshots are retained. Must be at least 1. +func NewFileSnapshotStoreWithLogger(base string, retain int, logger *log.Logger) (*FileSnapshotStore, error) { + if retain < 1 { + return nil, fmt.Errorf("must retain at least one snapshot") + } + if logger == nil { + logger = log.New(os.Stderr, "", log.LstdFlags) + } + + // Ensure our path exists + path := filepath.Join(base, snapPath) + if err := os.MkdirAll(path, 0755); err != nil && !os.IsExist(err) { + return nil, fmt.Errorf("snapshot path not accessible: %v", err) + } + + // Setup the store + store := &FileSnapshotStore{ + path: path, + retain: retain, + logger: logger, + } + + // Do a permissions test + if err := store.testPermissions(); err != nil { + return nil, fmt.Errorf("permissions test failed: %v", err) + } + return store, nil +} + +// NewFileSnapshotStore creates a new FileSnapshotStore based +// on a base directory. The `retain` parameter controls how many +// snapshots are retained. 
Must be at least 1. +func NewFileSnapshotStore(base string, retain int, logOutput io.Writer) (*FileSnapshotStore, error) { + if logOutput == nil { + logOutput = os.Stderr + } + return NewFileSnapshotStoreWithLogger(base, retain, log.New(logOutput, "", log.LstdFlags)) +} + +// testPermissions tries to touch a file in our path to see if it works. +func (f *FileSnapshotStore) testPermissions() error { + path := filepath.Join(f.path, testPath) + fh, err := os.Create(path) + if err != nil { + return err + } + + if err = fh.Close(); err != nil { + return err + } + + if err = os.Remove(path); err != nil { + return err + } + return nil +} + +// snapshotName generates a name for the snapshot. +func snapshotName(term, index uint64) string { + now := time.Now() + msec := now.UnixNano() / int64(time.Millisecond) + return fmt.Sprintf("%d-%d-%d", term, index, msec) +} + +// Create is used to start a new snapshot +func (f *FileSnapshotStore) Create(version SnapshotVersion, index, term uint64, + configuration Configuration, configurationIndex uint64, trans Transport) (SnapshotSink, error) { + // We only support version 1 snapshots at this time. + if version != 1 { + return nil, fmt.Errorf("unsupported snapshot version %d", version) + } + + // Create a new path + name := snapshotName(term, index) + path := filepath.Join(f.path, name+tmpSuffix) + f.logger.Printf("[INFO] snapshot: Creating new snapshot at %s", path) + + // Make the directory + if err := os.MkdirAll(path, 0755); err != nil { + f.logger.Printf("[ERR] snapshot: Failed to make snapshot directory: %v", err) + return nil, err + } + + // Create the sink + sink := &FileSnapshotSink{ + store: f, + logger: f.logger, + dir: path, + meta: fileSnapshotMeta{ + SnapshotMeta: SnapshotMeta{ + Version: version, + ID: name, + Index: index, + Term: term, + Peers: encodePeers(configuration, trans), + Configuration: configuration, + ConfigurationIndex: configurationIndex, + }, + CRC: nil, + }, + } + + // Write out the meta data + if err := sink.writeMeta(); err != nil { + f.logger.Printf("[ERR] snapshot: Failed to write metadata: %v", err) + return nil, err + } + + // Open the state file + statePath := filepath.Join(path, stateFilePath) + fh, err := os.Create(statePath) + if err != nil { + f.logger.Printf("[ERR] snapshot: Failed to create state file: %v", err) + return nil, err + } + sink.stateFile = fh + + // Create a CRC64 hash + sink.stateHash = crc64.New(crc64.MakeTable(crc64.ECMA)) + + // Wrap both the hash and file in a MultiWriter with buffering + multi := io.MultiWriter(sink.stateFile, sink.stateHash) + sink.buffered = bufio.NewWriter(multi) + + // Done + return sink, nil +} + +// List returns available snapshots in the store. +func (f *FileSnapshotStore) List() ([]*SnapshotMeta, error) { + // Get the eligible snapshots + snapshots, err := f.getSnapshots() + if err != nil { + f.logger.Printf("[ERR] snapshot: Failed to get snapshots: %v", err) + return nil, err + } + + var snapMeta []*SnapshotMeta + for _, meta := range snapshots { + snapMeta = append(snapMeta, &meta.SnapshotMeta) + if len(snapMeta) == f.retain { + break + } + } + return snapMeta, nil +} + +// getSnapshots returns all the known snapshots. 
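+// Temporary directories, unreadable metadata, and unsupported snapshot
+// versions are skipped, and the results are sorted newest first (by term,
+// then index, then ID).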
+func (f *FileSnapshotStore) getSnapshots() ([]*fileSnapshotMeta, error) { + // Get the eligible snapshots + snapshots, err := ioutil.ReadDir(f.path) + if err != nil { + f.logger.Printf("[ERR] snapshot: Failed to scan snapshot dir: %v", err) + return nil, err + } + + // Populate the metadata + var snapMeta []*fileSnapshotMeta + for _, snap := range snapshots { + // Ignore any files + if !snap.IsDir() { + continue + } + + // Ignore any temporary snapshots + dirName := snap.Name() + if strings.HasSuffix(dirName, tmpSuffix) { + f.logger.Printf("[WARN] snapshot: Found temporary snapshot: %v", dirName) + continue + } + + // Try to read the meta data + meta, err := f.readMeta(dirName) + if err != nil { + f.logger.Printf("[WARN] snapshot: Failed to read metadata for %v: %v", dirName, err) + continue + } + + // Make sure we can understand this version. + if meta.Version < SnapshotVersionMin || meta.Version > SnapshotVersionMax { + f.logger.Printf("[WARN] snapshot: Snapshot version for %v not supported: %d", dirName, meta.Version) + continue + } + + // Append, but only return up to the retain count + snapMeta = append(snapMeta, meta) + } + + // Sort the snapshot, reverse so we get new -> old + sort.Sort(sort.Reverse(snapMetaSlice(snapMeta))) + + return snapMeta, nil +} + +// readMeta is used to read the meta data for a given named backup +func (f *FileSnapshotStore) readMeta(name string) (*fileSnapshotMeta, error) { + // Open the meta file + metaPath := filepath.Join(f.path, name, metaFilePath) + fh, err := os.Open(metaPath) + if err != nil { + return nil, err + } + defer fh.Close() + + // Buffer the file IO + buffered := bufio.NewReader(fh) + + // Read in the JSON + meta := &fileSnapshotMeta{} + dec := json.NewDecoder(buffered) + if err := dec.Decode(meta); err != nil { + return nil, err + } + return meta, nil +} + +// Open takes a snapshot ID and returns a ReadCloser for that snapshot. +func (f *FileSnapshotStore) Open(id string) (*SnapshotMeta, io.ReadCloser, error) { + // Get the metadata + meta, err := f.readMeta(id) + if err != nil { + f.logger.Printf("[ERR] snapshot: Failed to get meta data to open snapshot: %v", err) + return nil, nil, err + } + + // Open the state file + statePath := filepath.Join(f.path, id, stateFilePath) + fh, err := os.Open(statePath) + if err != nil { + f.logger.Printf("[ERR] snapshot: Failed to open state file: %v", err) + return nil, nil, err + } + + // Create a CRC64 hash + stateHash := crc64.New(crc64.MakeTable(crc64.ECMA)) + + // Compute the hash + _, err = io.Copy(stateHash, fh) + if err != nil { + f.logger.Printf("[ERR] snapshot: Failed to read state file: %v", err) + fh.Close() + return nil, nil, err + } + + // Verify the hash + computed := stateHash.Sum(nil) + if bytes.Compare(meta.CRC, computed) != 0 { + f.logger.Printf("[ERR] snapshot: CRC checksum failed (stored: %v computed: %v)", + meta.CRC, computed) + fh.Close() + return nil, nil, fmt.Errorf("CRC mismatch") + } + + // Seek to the start + if _, err := fh.Seek(0, 0); err != nil { + f.logger.Printf("[ERR] snapshot: State file seek failed: %v", err) + fh.Close() + return nil, nil, err + } + + // Return a buffered file + buffered := &bufferedFile{ + bh: bufio.NewReader(fh), + fh: fh, + } + + return &meta.SnapshotMeta, buffered, nil +} + +// ReapSnapshots reaps any snapshots beyond the retain count. 
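+// getSnapshots returns snapshots sorted newest first, so everything at
+// position retain and beyond is older than the snapshots being kept.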
+func (f *FileSnapshotStore) ReapSnapshots() error { + snapshots, err := f.getSnapshots() + if err != nil { + f.logger.Printf("[ERR] snapshot: Failed to get snapshots: %v", err) + return err + } + + for i := f.retain; i < len(snapshots); i++ { + path := filepath.Join(f.path, snapshots[i].ID) + f.logger.Printf("[INFO] snapshot: reaping snapshot %v", path) + if err := os.RemoveAll(path); err != nil { + f.logger.Printf("[ERR] snapshot: Failed to reap snapshot %v: %v", path, err) + return err + } + } + return nil +} + +// ID returns the ID of the snapshot, can be used with Open() +// after the snapshot is finalized. +func (s *FileSnapshotSink) ID() string { + return s.meta.ID +} + +// Write is used to append to the state file. We write to the +// buffered IO object to reduce the amount of context switches. +func (s *FileSnapshotSink) Write(b []byte) (int, error) { + return s.buffered.Write(b) +} + +// Close is used to indicate a successful end. +func (s *FileSnapshotSink) Close() error { + // Make sure close is idempotent + if s.closed { + return nil + } + s.closed = true + + // Close the open handles + if err := s.finalize(); err != nil { + s.logger.Printf("[ERR] snapshot: Failed to finalize snapshot: %v", err) + return err + } + + // Write out the meta data + if err := s.writeMeta(); err != nil { + s.logger.Printf("[ERR] snapshot: Failed to write metadata: %v", err) + return err + } + + // Move the directory into place + newPath := strings.TrimSuffix(s.dir, tmpSuffix) + if err := os.Rename(s.dir, newPath); err != nil { + s.logger.Printf("[ERR] snapshot: Failed to move snapshot into place: %v", err) + return err + } + + // Reap any old snapshots + if err := s.store.ReapSnapshots(); err != nil { + return err + } + + return nil +} + +// Cancel is used to indicate an unsuccessful end. +func (s *FileSnapshotSink) Cancel() error { + // Make sure close is idempotent + if s.closed { + return nil + } + s.closed = true + + // Close the open handles + if err := s.finalize(); err != nil { + s.logger.Printf("[ERR] snapshot: Failed to finalize snapshot: %v", err) + return err + } + + // Attempt to remove all artifacts + return os.RemoveAll(s.dir) +} + +// finalize is used to close all of our resources. +func (s *FileSnapshotSink) finalize() error { + // Flush any remaining data + if err := s.buffered.Flush(); err != nil { + return err + } + + // Get the file size + stat, statErr := s.stateFile.Stat() + + // Close the file + if err := s.stateFile.Close(); err != nil { + return err + } + + // Set the file size, check after we close + if statErr != nil { + return statErr + } + s.meta.Size = stat.Size() + + // Set the CRC + s.meta.CRC = s.stateHash.Sum(nil) + return nil +} + +// writeMeta is used to write out the metadata we have. +func (s *FileSnapshotSink) writeMeta() error { + // Open the meta file + metaPath := filepath.Join(s.dir, metaFilePath) + fh, err := os.Create(metaPath) + if err != nil { + return err + } + defer fh.Close() + + // Buffer the file IO + buffered := bufio.NewWriter(fh) + defer buffered.Flush() + + // Write out as JSON + enc := json.NewEncoder(buffered) + if err := enc.Encode(&s.meta); err != nil { + return err + } + return nil +} + +// Implement the sort interface for []*fileSnapshotMeta. 
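+// The order is ascending by term, then index, then ID; getSnapshots applies
+// sort.Reverse on top of this to list snapshots newest first.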
+func (s snapMetaSlice) Len() int { + return len(s) +} + +func (s snapMetaSlice) Less(i, j int) bool { + if s[i].Term != s[j].Term { + return s[i].Term < s[j].Term + } + if s[i].Index != s[j].Index { + return s[i].Index < s[j].Index + } + return s[i].ID < s[j].ID +} + +func (s snapMetaSlice) Swap(i, j int) { + s[i], s[j] = s[j], s[i] +} diff --git a/vendor/github.com/hashicorp/raft/fsm.go b/vendor/github.com/hashicorp/raft/fsm.go new file mode 100644 index 0000000000..c89986c0fa --- /dev/null +++ b/vendor/github.com/hashicorp/raft/fsm.go @@ -0,0 +1,136 @@ +package raft + +import ( + "fmt" + "io" + "time" + + "github.com/armon/go-metrics" +) + +// FSM provides an interface that can be implemented by +// clients to make use of the replicated log. +type FSM interface { + // Apply log is invoked once a log entry is committed. + // It returns a value which will be made available in the + // ApplyFuture returned by Raft.Apply method if that + // method was called on the same Raft node as the FSM. + Apply(*Log) interface{} + + // Snapshot is used to support log compaction. This call should + // return an FSMSnapshot which can be used to save a point-in-time + // snapshot of the FSM. Apply and Snapshot are not called in multiple + // threads, but Apply will be called concurrently with Persist. This means + // the FSM should be implemented in a fashion that allows for concurrent + // updates while a snapshot is happening. + Snapshot() (FSMSnapshot, error) + + // Restore is used to restore an FSM from a snapshot. It is not called + // concurrently with any other command. The FSM must discard all previous + // state. + Restore(io.ReadCloser) error +} + +// FSMSnapshot is returned by an FSM in response to a Snapshot +// It must be safe to invoke FSMSnapshot methods with concurrent +// calls to Apply. +type FSMSnapshot interface { + // Persist should dump all necessary state to the WriteCloser 'sink', + // and call sink.Close() when finished or call sink.Cancel() on error. + Persist(sink SnapshotSink) error + + // Release is invoked when we are finished with the snapshot. + Release() +} + +// runFSM is a long running goroutine responsible for applying logs +// to the FSM. This is done async of other logs since we don't want +// the FSM to block our internal operations. +func (r *Raft) runFSM() { + var lastIndex, lastTerm uint64 + + commit := func(req *commitTuple) { + // Apply the log if a command + var resp interface{} + if req.log.Type == LogCommand { + start := time.Now() + resp = r.fsm.Apply(req.log) + metrics.MeasureSince([]string{"raft", "fsm", "apply"}, start) + } + + // Update the indexes + lastIndex = req.log.Index + lastTerm = req.log.Term + + // Invoke the future if given + if req.future != nil { + req.future.response = resp + req.future.respond(nil) + } + } + + restore := func(req *restoreFuture) { + // Open the snapshot + meta, source, err := r.snapshots.Open(req.ID) + if err != nil { + req.respond(fmt.Errorf("failed to open snapshot %v: %v", req.ID, err)) + return + } + + // Attempt to restore + start := time.Now() + if err := r.fsm.Restore(source); err != nil { + req.respond(fmt.Errorf("failed to restore snapshot %v: %v", req.ID, err)) + source.Close() + return + } + source.Close() + metrics.MeasureSince([]string{"raft", "fsm", "restore"}, start) + + // Update the last index and term + lastIndex = meta.Index + lastTerm = meta.Term + req.respond(nil) + } + + snapshot := func(req *reqSnapshotFuture) { + // Is there something to snapshot? 
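+		// lastIndex is zero only if the FSM has neither applied a log
+		// entry nor restored a snapshot yet, so there is no state worth
+		// capturing.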
+ if lastIndex == 0 { + req.respond(ErrNothingNewToSnapshot) + return + } + + // Start a snapshot + start := time.Now() + snap, err := r.fsm.Snapshot() + metrics.MeasureSince([]string{"raft", "fsm", "snapshot"}, start) + + // Respond to the request + req.index = lastIndex + req.term = lastTerm + req.snapshot = snap + req.respond(err) + } + + for { + select { + case ptr := <-r.fsmMutateCh: + switch req := ptr.(type) { + case *commitTuple: + commit(req) + + case *restoreFuture: + restore(req) + + default: + panic(fmt.Errorf("bad type passed to fsmMutateCh: %#v", ptr)) + } + + case req := <-r.fsmSnapshotCh: + snapshot(req) + + case <-r.shutdownCh: + return + } + } +} diff --git a/vendor/github.com/hashicorp/raft/future.go b/vendor/github.com/hashicorp/raft/future.go new file mode 100644 index 0000000000..fac59a5cc4 --- /dev/null +++ b/vendor/github.com/hashicorp/raft/future.go @@ -0,0 +1,289 @@ +package raft + +import ( + "fmt" + "io" + "sync" + "time" +) + +// Future is used to represent an action that may occur in the future. +type Future interface { + // Error blocks until the future arrives and then + // returns the error status of the future. + // This may be called any number of times - all + // calls will return the same value. + // Note that it is not OK to call this method + // twice concurrently on the same Future instance. + Error() error +} + +// IndexFuture is used for future actions that can result in a raft log entry +// being created. +type IndexFuture interface { + Future + + // Index holds the index of the newly applied log entry. + // This must not be called until after the Error method has returned. + Index() uint64 +} + +// ApplyFuture is used for Apply and can return the FSM response. +type ApplyFuture interface { + IndexFuture + + // Response returns the FSM response as returned + // by the FSM.Apply method. This must not be called + // until after the Error method has returned. + Response() interface{} +} + +// ConfigurationFuture is used for GetConfiguration and can return the +// latest configuration in use by Raft. +type ConfigurationFuture interface { + IndexFuture + + // Configuration contains the latest configuration. This must + // not be called until after the Error method has returned. + Configuration() Configuration +} + +// SnapshotFuture is used for waiting on a user-triggered snapshot to complete. +type SnapshotFuture interface { + Future + + // Open is a function you can call to access the underlying snapshot and + // its metadata. This must not be called until after the Error method + // has returned. + Open() (*SnapshotMeta, io.ReadCloser, error) +} + +// errorFuture is used to return a static error. +type errorFuture struct { + err error +} + +func (e errorFuture) Error() error { + return e.err +} + +func (e errorFuture) Response() interface{} { + return nil +} + +func (e errorFuture) Index() uint64 { + return 0 +} + +// deferError can be embedded to allow a future +// to provide an error in the future. +type deferError struct { + err error + errCh chan error + responded bool +} + +func (d *deferError) init() { + d.errCh = make(chan error, 1) +} + +func (d *deferError) Error() error { + if d.err != nil { + // Note that when we've received a nil error, this + // won't trigger, but the channel is closed after + // send so we'll still return nil below. 
+ return d.err + } + if d.errCh == nil { + panic("waiting for response on nil channel") + } + d.err = <-d.errCh + return d.err +} + +func (d *deferError) respond(err error) { + if d.errCh == nil { + return + } + if d.responded { + return + } + d.errCh <- err + close(d.errCh) + d.responded = true +} + +// There are several types of requests that cause a configuration entry to +// be appended to the log. These are encoded here for leaderLoop() to process. +// This is internal to a single server. +type configurationChangeFuture struct { + logFuture + req configurationChangeRequest +} + +// bootstrapFuture is used to attempt a live bootstrap of the cluster. See the +// Raft object's BootstrapCluster member function for more details. +type bootstrapFuture struct { + deferError + + // configuration is the proposed bootstrap configuration to apply. + configuration Configuration +} + +// logFuture is used to apply a log entry and waits until +// the log is considered committed. +type logFuture struct { + deferError + log Log + response interface{} + dispatch time.Time +} + +func (l *logFuture) Response() interface{} { + return l.response +} + +func (l *logFuture) Index() uint64 { + return l.log.Index +} + +type shutdownFuture struct { + raft *Raft +} + +func (s *shutdownFuture) Error() error { + if s.raft == nil { + return nil + } + s.raft.waitShutdown() + if closeable, ok := s.raft.trans.(WithClose); ok { + closeable.Close() + } + return nil +} + +// userSnapshotFuture is used for waiting on a user-triggered snapshot to +// complete. +type userSnapshotFuture struct { + deferError + + // opener is a function used to open the snapshot. This is filled in + // once the future returns with no error. + opener func() (*SnapshotMeta, io.ReadCloser, error) +} + +// Open is a function you can call to access the underlying snapshot and its +// metadata. +func (u *userSnapshotFuture) Open() (*SnapshotMeta, io.ReadCloser, error) { + if u.opener == nil { + return nil, nil, fmt.Errorf("no snapshot available") + } else { + // Invalidate the opener so it can't get called multiple times, + // which isn't generally safe. + defer func() { + u.opener = nil + }() + return u.opener() + } +} + +// userRestoreFuture is used for waiting on a user-triggered restore of an +// external snapshot to complete. +type userRestoreFuture struct { + deferError + + // meta is the metadata that belongs with the snapshot. + meta *SnapshotMeta + + // reader is the interface to read the snapshot contents from. + reader io.Reader +} + +// reqSnapshotFuture is used for requesting a snapshot start. +// It is only used internally. +type reqSnapshotFuture struct { + deferError + + // snapshot details provided by the FSM runner before responding + index uint64 + term uint64 + snapshot FSMSnapshot +} + +// restoreFuture is used for requesting an FSM to perform a +// snapshot restore. Used internally only. +type restoreFuture struct { + deferError + ID string +} + +// verifyFuture is used to verify the current node is still +// the leader. This is to prevent a stale read. +type verifyFuture struct { + deferError + notifyCh chan *verifyFuture + quorumSize int + votes int + voteLock sync.Mutex +} + +// configurationsFuture is used to retrieve the current configurations. This is +// used to allow safe access to this information outside of the main thread. +type configurationsFuture struct { + deferError + configurations configurations +} + +// Configuration returns the latest configuration in use by Raft. 
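+//
+// Editor's sketch of consuming a ConfigurationFuture (it assumes a *Raft
+// value r with the GetConfiguration method referenced above, which is not
+// shown in this diff):
+//
+//	future := r.GetConfiguration()
+//	if err := future.Error(); err == nil {
+//		for _, srv := range future.Configuration().Servers {
+//			fmt.Println(srv.ID, srv.Suffrage)
+//		}
+//	}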
+func (c *configurationsFuture) Configuration() Configuration { + return c.configurations.latest +} + +// Index returns the index of the latest configuration in use by Raft. +func (c *configurationsFuture) Index() uint64 { + return c.configurations.latestIndex +} + +// vote is used to respond to a verifyFuture. +// This may block when responding on the notifyCh. +func (v *verifyFuture) vote(leader bool) { + v.voteLock.Lock() + defer v.voteLock.Unlock() + + // Guard against having notified already + if v.notifyCh == nil { + return + } + + if leader { + v.votes++ + if v.votes >= v.quorumSize { + v.notifyCh <- v + v.notifyCh = nil + } + } else { + v.notifyCh <- v + v.notifyCh = nil + } +} + +// appendFuture is used for waiting on a pipelined append +// entries RPC. +type appendFuture struct { + deferError + start time.Time + args *AppendEntriesRequest + resp *AppendEntriesResponse +} + +func (a *appendFuture) Start() time.Time { + return a.start +} + +func (a *appendFuture) Request() *AppendEntriesRequest { + return a.args +} + +func (a *appendFuture) Response() *AppendEntriesResponse { + return a.resp +} diff --git a/vendor/github.com/hashicorp/raft/inmem_snapshot.go b/vendor/github.com/hashicorp/raft/inmem_snapshot.go new file mode 100644 index 0000000000..3aa92b3e9a --- /dev/null +++ b/vendor/github.com/hashicorp/raft/inmem_snapshot.go @@ -0,0 +1,106 @@ +package raft + +import ( + "bytes" + "fmt" + "io" + "io/ioutil" + "sync" +) + +// InmemSnapshotStore implements the SnapshotStore interface and +// retains only the most recent snapshot +type InmemSnapshotStore struct { + latest *InmemSnapshotSink + hasSnapshot bool + sync.RWMutex +} + +// InmemSnapshotSink implements SnapshotSink in memory +type InmemSnapshotSink struct { + meta SnapshotMeta + contents *bytes.Buffer +} + +// NewInmemSnapshotStore creates a blank new InmemSnapshotStore +func NewInmemSnapshotStore() *InmemSnapshotStore { + return &InmemSnapshotStore{ + latest: &InmemSnapshotSink{ + contents: &bytes.Buffer{}, + }, + } +} + +// Create replaces the stored snapshot with a new one using the given args +func (m *InmemSnapshotStore) Create(version SnapshotVersion, index, term uint64, + configuration Configuration, configurationIndex uint64, trans Transport) (SnapshotSink, error) { + // We only support version 1 snapshots at this time. 
+ if version != 1 { + return nil, fmt.Errorf("unsupported snapshot version %d", version) + } + + name := snapshotName(term, index) + + m.Lock() + defer m.Unlock() + + sink := &InmemSnapshotSink{ + meta: SnapshotMeta{ + Version: version, + ID: name, + Index: index, + Term: term, + Peers: encodePeers(configuration, trans), + Configuration: configuration, + ConfigurationIndex: configurationIndex, + }, + contents: &bytes.Buffer{}, + } + m.hasSnapshot = true + m.latest = sink + + return sink, nil +} + +// List returns the latest snapshot taken +func (m *InmemSnapshotStore) List() ([]*SnapshotMeta, error) { + m.RLock() + defer m.RUnlock() + + if !m.hasSnapshot { + return []*SnapshotMeta{}, nil + } + return []*SnapshotMeta{&m.latest.meta}, nil +} + +// Open wraps an io.ReadCloser around the snapshot contents +func (m *InmemSnapshotStore) Open(id string) (*SnapshotMeta, io.ReadCloser, error) { + m.RLock() + defer m.RUnlock() + + if m.latest.meta.ID != id { + return nil, nil, fmt.Errorf("[ERR] snapshot: failed to open snapshot id: %s", id) + } + + return &m.latest.meta, ioutil.NopCloser(m.latest.contents), nil +} + +// Write appends the given bytes to the snapshot contents +func (s *InmemSnapshotSink) Write(p []byte) (n int, err error) { + written, err := io.Copy(s.contents, bytes.NewReader(p)) + s.meta.Size += written + return int(written), err +} + +// Close updates the Size and is otherwise a no-op +func (s *InmemSnapshotSink) Close() error { + return nil +} + +func (s *InmemSnapshotSink) ID() string { + return s.meta.ID +} + +func (s *InmemSnapshotSink) Cancel() error { + return nil +} diff --git a/vendor/github.com/hashicorp/raft/inmem_store.go b/vendor/github.com/hashicorp/raft/inmem_store.go new file mode 100644 index 0000000000..e5d579e1b3 --- /dev/null +++ b/vendor/github.com/hashicorp/raft/inmem_store.go @@ -0,0 +1,125 @@ +package raft + +import ( + "sync" +) + +// InmemStore implements the LogStore and StableStore interface. +// It should NOT EVER be used for production. It is used only for +// unit tests. Use the MDBStore implementation instead. +type InmemStore struct { + l sync.RWMutex + lowIndex uint64 + highIndex uint64 + logs map[uint64]*Log + kv map[string][]byte + kvInt map[string]uint64 +} + +// NewInmemStore returns a new in-memory backend. Do not ever +// use for production. Only for testing. +func NewInmemStore() *InmemStore { + i := &InmemStore{ + logs: make(map[uint64]*Log), + kv: make(map[string][]byte), + kvInt: make(map[string]uint64), + } + return i +} + +// FirstIndex implements the LogStore interface. +func (i *InmemStore) FirstIndex() (uint64, error) { + i.l.RLock() + defer i.l.RUnlock() + return i.lowIndex, nil +} + +// LastIndex implements the LogStore interface. +func (i *InmemStore) LastIndex() (uint64, error) { + i.l.RLock() + defer i.l.RUnlock() + return i.highIndex, nil +} + +// GetLog implements the LogStore interface. +func (i *InmemStore) GetLog(index uint64, log *Log) error { + i.l.RLock() + defer i.l.RUnlock() + l, ok := i.logs[index] + if !ok { + return ErrLogNotFound + } + *log = *l + return nil +} + +// StoreLog implements the LogStore interface. +func (i *InmemStore) StoreLog(log *Log) error { + return i.StoreLogs([]*Log{log}) +} + +// StoreLogs implements the LogStore interface. 
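+//
+// Editor's sketch of the round trip (testing only, per the type docs):
+//
+//	s := NewInmemStore()
+//	_ = s.StoreLogs([]*Log{{Index: 1, Term: 1, Type: LogCommand}})
+//	var out Log
+//	_ = s.GetLog(1, &out) // out.Term == 1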
+func (i *InmemStore) StoreLogs(logs []*Log) error {
+	i.l.Lock()
+	defer i.l.Unlock()
+	for _, l := range logs {
+		i.logs[l.Index] = l
+		if i.lowIndex == 0 {
+			i.lowIndex = l.Index
+		}
+		if l.Index > i.highIndex {
+			i.highIndex = l.Index
+		}
+	}
+	return nil
+}
+
+// DeleteRange implements the LogStore interface.
+func (i *InmemStore) DeleteRange(min, max uint64) error {
+	i.l.Lock()
+	defer i.l.Unlock()
+	for j := min; j <= max; j++ {
+		delete(i.logs, j)
+	}
+	if min <= i.lowIndex {
+		i.lowIndex = max + 1
+	}
+	if max >= i.highIndex {
+		i.highIndex = min - 1
+	}
+	if i.lowIndex > i.highIndex {
+		i.lowIndex = 0
+		i.highIndex = 0
+	}
+	return nil
+}
+
+// Set implements the StableStore interface.
+func (i *InmemStore) Set(key []byte, val []byte) error {
+	i.l.Lock()
+	defer i.l.Unlock()
+	i.kv[string(key)] = val
+	return nil
+}
+
+// Get implements the StableStore interface.
+func (i *InmemStore) Get(key []byte) ([]byte, error) {
+	i.l.RLock()
+	defer i.l.RUnlock()
+	return i.kv[string(key)], nil
+}
+
+// SetUint64 implements the StableStore interface.
+func (i *InmemStore) SetUint64(key []byte, val uint64) error {
+	i.l.Lock()
+	defer i.l.Unlock()
+	i.kvInt[string(key)] = val
+	return nil
+}
+
+// GetUint64 implements the StableStore interface.
+func (i *InmemStore) GetUint64(key []byte) (uint64, error) {
+	i.l.RLock()
+	defer i.l.RUnlock()
+	return i.kvInt[string(key)], nil
+}
diff --git a/vendor/github.com/hashicorp/raft/inmem_transport.go b/vendor/github.com/hashicorp/raft/inmem_transport.go
new file mode 100644
index 0000000000..3693cd5ad1
--- /dev/null
+++ b/vendor/github.com/hashicorp/raft/inmem_transport.go
@@ -0,0 +1,322 @@
+package raft
+
+import (
+	"fmt"
+	"io"
+	"sync"
+	"time"
+)
+
+// NewInmemAddr returns a new in-memory addr with
+// a randomly generated UUID as the ID.
+func NewInmemAddr() ServerAddress {
+	return ServerAddress(generateUUID())
+}
+
+// inmemPipeline is used to pipeline requests for the in-mem transport.
+type inmemPipeline struct {
+	trans    *InmemTransport
+	peer     *InmemTransport
+	peerAddr ServerAddress
+
+	doneCh       chan AppendFuture
+	inprogressCh chan *inmemPipelineInflight
+
+	shutdown     bool
+	shutdownCh   chan struct{}
+	shutdownLock sync.Mutex
+}
+
+type inmemPipelineInflight struct {
+	future *appendFuture
+	respCh <-chan RPCResponse
+}
+
+// InmemTransport implements the Transport interface, to allow Raft to be
+// tested in-memory without going over a network.
+type InmemTransport struct {
+	sync.RWMutex
+	consumerCh chan RPC
+	localAddr  ServerAddress
+	peers      map[ServerAddress]*InmemTransport
+	pipelines  []*inmemPipeline
+	timeout    time.Duration
+}
+
+// NewInmemTransport is used to initialize a new transport
+// and generates a random local address if none is specified.
+func NewInmemTransport(addr ServerAddress) (ServerAddress, *InmemTransport) {
+	if string(addr) == "" {
+		addr = NewInmemAddr()
+	}
+	trans := &InmemTransport{
+		consumerCh: make(chan RPC, 16),
+		localAddr:  addr,
+		peers:      make(map[ServerAddress]*InmemTransport),
+		timeout:    50 * time.Millisecond,
+	}
+	return addr, trans
+}
+
+// SetHeartbeatHandler is used to set an optional fast path for
+// heartbeats; it is not supported by this transport.
+func (i *InmemTransport) SetHeartbeatHandler(cb func(RPC)) {
+}
+
+// Consumer implements the Transport interface.
+func (i *InmemTransport) Consumer() <-chan RPC {
+	return i.consumerCh
+}
+
+// LocalAddr implements the Transport interface.
+func (i *InmemTransport) LocalAddr() ServerAddress { + return i.localAddr +} + +// AppendEntriesPipeline returns an interface that can be used to pipeline +// AppendEntries requests. +func (i *InmemTransport) AppendEntriesPipeline(target ServerAddress) (AppendPipeline, error) { + i.RLock() + peer, ok := i.peers[target] + i.RUnlock() + if !ok { + return nil, fmt.Errorf("failed to connect to peer: %v", target) + } + pipeline := newInmemPipeline(i, peer, target) + i.Lock() + i.pipelines = append(i.pipelines, pipeline) + i.Unlock() + return pipeline, nil +} + +// AppendEntries implements the Transport interface. +func (i *InmemTransport) AppendEntries(target ServerAddress, args *AppendEntriesRequest, resp *AppendEntriesResponse) error { + rpcResp, err := i.makeRPC(target, args, nil, i.timeout) + if err != nil { + return err + } + + // Copy the result back + out := rpcResp.Response.(*AppendEntriesResponse) + *resp = *out + return nil +} + +// RequestVote implements the Transport interface. +func (i *InmemTransport) RequestVote(target ServerAddress, args *RequestVoteRequest, resp *RequestVoteResponse) error { + rpcResp, err := i.makeRPC(target, args, nil, i.timeout) + if err != nil { + return err + } + + // Copy the result back + out := rpcResp.Response.(*RequestVoteResponse) + *resp = *out + return nil +} + +// InstallSnapshot implements the Transport interface. +func (i *InmemTransport) InstallSnapshot(target ServerAddress, args *InstallSnapshotRequest, resp *InstallSnapshotResponse, data io.Reader) error { + rpcResp, err := i.makeRPC(target, args, data, 10*i.timeout) + if err != nil { + return err + } + + // Copy the result back + out := rpcResp.Response.(*InstallSnapshotResponse) + *resp = *out + return nil +} + +func (i *InmemTransport) makeRPC(target ServerAddress, args interface{}, r io.Reader, timeout time.Duration) (rpcResp RPCResponse, err error) { + i.RLock() + peer, ok := i.peers[target] + i.RUnlock() + + if !ok { + err = fmt.Errorf("failed to connect to peer: %v", target) + return + } + + // Send the RPC over + respCh := make(chan RPCResponse) + peer.consumerCh <- RPC{ + Command: args, + Reader: r, + RespChan: respCh, + } + + // Wait for a response + select { + case rpcResp = <-respCh: + if rpcResp.Error != nil { + err = rpcResp.Error + } + case <-time.After(timeout): + err = fmt.Errorf("command timed out") + } + return +} + +// EncodePeer implements the Transport interface. +func (i *InmemTransport) EncodePeer(p ServerAddress) []byte { + return []byte(p) +} + +// DecodePeer implements the Transport interface. +func (i *InmemTransport) DecodePeer(buf []byte) ServerAddress { + return ServerAddress(buf) +} + +// Connect is used to connect this transport to another transport for +// a given peer name. This allows for local routing. +func (i *InmemTransport) Connect(peer ServerAddress, t Transport) { + trans := t.(*InmemTransport) + i.Lock() + defer i.Unlock() + i.peers[peer] = trans +} + +// Disconnect is used to remove the ability to route to a given peer. +func (i *InmemTransport) Disconnect(peer ServerAddress) { + i.Lock() + defer i.Unlock() + delete(i.peers, peer) + + // Disconnect any pipelines + n := len(i.pipelines) + for idx := 0; idx < n; idx++ { + if i.pipelines[idx].peerAddr == peer { + i.pipelines[idx].Close() + i.pipelines[idx], i.pipelines[n-1] = i.pipelines[n-1], nil + idx-- + n-- + } + } + i.pipelines = i.pipelines[:n] +} + +// DisconnectAll is used to remove all routes to peers. 
+func (i *InmemTransport) DisconnectAll() { + i.Lock() + defer i.Unlock() + i.peers = make(map[ServerAddress]*InmemTransport) + + // Handle pipelines + for _, pipeline := range i.pipelines { + pipeline.Close() + } + i.pipelines = nil +} + +// Close is used to permanently disable the transport +func (i *InmemTransport) Close() error { + i.DisconnectAll() + return nil +} + +func newInmemPipeline(trans *InmemTransport, peer *InmemTransport, addr ServerAddress) *inmemPipeline { + i := &inmemPipeline{ + trans: trans, + peer: peer, + peerAddr: addr, + doneCh: make(chan AppendFuture, 16), + inprogressCh: make(chan *inmemPipelineInflight, 16), + shutdownCh: make(chan struct{}), + } + go i.decodeResponses() + return i +} + +func (i *inmemPipeline) decodeResponses() { + timeout := i.trans.timeout + for { + select { + case inp := <-i.inprogressCh: + var timeoutCh <-chan time.Time + if timeout > 0 { + timeoutCh = time.After(timeout) + } + + select { + case rpcResp := <-inp.respCh: + // Copy the result back + *inp.future.resp = *rpcResp.Response.(*AppendEntriesResponse) + inp.future.respond(rpcResp.Error) + + select { + case i.doneCh <- inp.future: + case <-i.shutdownCh: + return + } + + case <-timeoutCh: + inp.future.respond(fmt.Errorf("command timed out")) + select { + case i.doneCh <- inp.future: + case <-i.shutdownCh: + return + } + + case <-i.shutdownCh: + return + } + case <-i.shutdownCh: + return + } + } +} + +func (i *inmemPipeline) AppendEntries(args *AppendEntriesRequest, resp *AppendEntriesResponse) (AppendFuture, error) { + // Create a new future + future := &appendFuture{ + start: time.Now(), + args: args, + resp: resp, + } + future.init() + + // Handle a timeout + var timeout <-chan time.Time + if i.trans.timeout > 0 { + timeout = time.After(i.trans.timeout) + } + + // Send the RPC over + respCh := make(chan RPCResponse, 1) + rpc := RPC{ + Command: args, + RespChan: respCh, + } + select { + case i.peer.consumerCh <- rpc: + case <-timeout: + return nil, fmt.Errorf("command enqueue timeout") + case <-i.shutdownCh: + return nil, ErrPipelineShutdown + } + + // Send to be decoded + select { + case i.inprogressCh <- &inmemPipelineInflight{future, respCh}: + return future, nil + case <-i.shutdownCh: + return nil, ErrPipelineShutdown + } +} + +func (i *inmemPipeline) Consumer() <-chan AppendFuture { + return i.doneCh +} + +func (i *inmemPipeline) Close() error { + i.shutdownLock.Lock() + defer i.shutdownLock.Unlock() + if i.shutdown { + return nil + } + + i.shutdown = true + close(i.shutdownCh) + return nil +} diff --git a/vendor/github.com/hashicorp/raft/log.go b/vendor/github.com/hashicorp/raft/log.go new file mode 100644 index 0000000000..4ade38ecc1 --- /dev/null +++ b/vendor/github.com/hashicorp/raft/log.go @@ -0,0 +1,72 @@ +package raft + +// LogType describes various types of log entries. +type LogType uint8 + +const ( + // LogCommand is applied to a user FSM. + LogCommand LogType = iota + + // LogNoop is used to assert leadership. + LogNoop + + // LogAddPeer is used to add a new peer. This should only be used with + // older protocol versions designed to be compatible with unversioned + // Raft servers. See comments in config.go for details. + LogAddPeerDeprecated + + // LogRemovePeer is used to remove an existing peer. This should only be + // used with older protocol versions designed to be compatible with + // unversioned Raft servers. See comments in config.go for details. 
+ LogRemovePeerDeprecated + + // LogBarrier is used to ensure all preceding operations have been + // applied to the FSM. It is similar to LogNoop, but instead of returning + // once committed, it only returns once the FSM manager acks it. Otherwise + // it is possible there are operations committed but not yet applied to + // the FSM. + LogBarrier + + // LogConfiguration establishes a membership change configuration. It is + // created when a server is added, removed, promoted, etc. Only used + // when protocol version 1 or greater is in use. + LogConfiguration +) + +// Log entries are replicated to all members of the Raft cluster +// and form the heart of the replicated state machine. +type Log struct { + // Index holds the index of the log entry. + Index uint64 + + // Term holds the election term of the log entry. + Term uint64 + + // Type holds the type of the log entry. + Type LogType + + // Data holds the log entry's type-specific data. + Data []byte +} + +// LogStore is used to provide an interface for storing +// and retrieving logs in a durable fashion. +type LogStore interface { + // FirstIndex returns the first index written. 0 for no entries. + FirstIndex() (uint64, error) + + // LastIndex returns the last index written. 0 for no entries. + LastIndex() (uint64, error) + + // GetLog gets a log entry at a given index. + GetLog(index uint64, log *Log) error + + // StoreLog stores a log entry. + StoreLog(log *Log) error + + // StoreLogs stores multiple log entries. + StoreLogs(logs []*Log) error + + // DeleteRange deletes a range of log entries. The range is inclusive. + DeleteRange(min, max uint64) error +} diff --git a/vendor/github.com/hashicorp/raft/log_cache.go b/vendor/github.com/hashicorp/raft/log_cache.go new file mode 100644 index 0000000000..952e98c228 --- /dev/null +++ b/vendor/github.com/hashicorp/raft/log_cache.go @@ -0,0 +1,79 @@ +package raft + +import ( + "fmt" + "sync" +) + +// LogCache wraps any LogStore implementation to provide an +// in-memory ring buffer. This is used to cache access to +// the recently written entries. For implementations that do not +// cache themselves, this can provide a substantial boost by +// avoiding disk I/O on recent entries. +type LogCache struct { + store LogStore + + cache []*Log + l sync.RWMutex +} + +// NewLogCache is used to create a new LogCache with the +// given capacity and backend store. 
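+//
+// The cache is a ring keyed by index modulo capacity, so it holds at most
+// the capacity most recently stored entries. Editor's sketch:
+//
+//	cached, err := NewLogCache(512, NewInmemStore())
+//	if err == nil {
+//		_ = cached.StoreLog(&Log{Index: 1, Term: 1})
+//	}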
+func NewLogCache(capacity int, store LogStore) (*LogCache, error) {
+	if capacity <= 0 {
+		return nil, fmt.Errorf("capacity must be positive")
+	}
+	c := &LogCache{
+		store: store,
+		cache: make([]*Log, capacity),
+	}
+	return c, nil
+}
+
+func (c *LogCache) GetLog(idx uint64, log *Log) error {
+	// Check the buffer for an entry
+	c.l.RLock()
+	cached := c.cache[idx%uint64(len(c.cache))]
+	c.l.RUnlock()
+
+	// Check if entry is valid
+	if cached != nil && cached.Index == idx {
+		*log = *cached
+		return nil
+	}
+
+	// Forward request on cache miss
+	return c.store.GetLog(idx, log)
+}
+
+func (c *LogCache) StoreLog(log *Log) error {
+	return c.StoreLogs([]*Log{log})
+}
+
+func (c *LogCache) StoreLogs(logs []*Log) error {
+	// Insert the logs into the ring buffer
+	c.l.Lock()
+	for _, l := range logs {
+		c.cache[l.Index%uint64(len(c.cache))] = l
+	}
+	c.l.Unlock()
+
+	return c.store.StoreLogs(logs)
+}
+
+func (c *LogCache) FirstIndex() (uint64, error) {
+	return c.store.FirstIndex()
+}
+
+func (c *LogCache) LastIndex() (uint64, error) {
+	return c.store.LastIndex()
+}
+
+func (c *LogCache) DeleteRange(min, max uint64) error {
+	// Invalidate the cache on deletes
+	c.l.Lock()
+	c.cache = make([]*Log, len(c.cache))
+	c.l.Unlock()
+
+	return c.store.DeleteRange(min, max)
+}
diff --git a/vendor/github.com/hashicorp/raft/membership.md b/vendor/github.com/hashicorp/raft/membership.md
new file mode 100644
index 0000000000..df1f83e27f
--- /dev/null
+++ b/vendor/github.com/hashicorp/raft/membership.md
@@ -0,0 +1,83 @@
+Simon (@superfell) and I (@ongardie) talked through reworking this library's cluster membership changes last Friday. We don't see a way to split this into independent patches, so we're taking the next best approach: submitting the plan here for review, then working on an enormous PR. Your feedback would be appreciated. (@superfell is out this week, however, so don't expect him to respond quickly.)
+
+These are the main goals:
+ - Bringing things in line with the description in my PhD dissertation;
+ - Catching up new servers prior to granting them a vote, as well as allowing permanent non-voting members; and
+ - Eliminating the `peers.json` file, to avoid issues of consistency between that and the log/snapshot.
+
+## Data-centric view
+
+We propose to re-define a *configuration* as a set of servers, where each server includes an address (as it does today) and a mode that is either:
+ - *Voter*: a server whose vote is counted in elections and whose match index is used in advancing the leader's commit index.
+ - *Nonvoter*: a server that receives log entries but is not considered for elections or commitment purposes.
+ - *Staging*: a server that acts like a nonvoter with one exception: once a staging server receives enough log entries to catch up sufficiently to the leader's log, the leader will invoke a membership change to change the staging server to a voter.
+
+All changes to the configuration will be done by writing a new configuration to the log. The new configuration will be in effect as soon as it is appended to the log (not when it is committed like a normal state machine command). Note that, per my dissertation, there can be at most one uncommitted configuration at a time (the next configuration may not be created until the prior one has been committed). It's not strictly necessary to follow these same rules for the nonvoter/staging servers, but we think it's best to treat all changes uniformly.
+
+Each server will track two configurations:
+ 1.
its *committed configuration*: the latest configuration in the log/snapshot that has been committed, along with its index. + 2. its *latest configuration*: the latest configuration in the log/snapshot (may be committed or uncommitted), along with its index. + +When there's no membership change happening, these two will be the same. The latest configuration is almost always the one used, except: + - When followers truncate the suffix of their logs, they may need to fall back to the committed configuration. + - When snapshotting, the committed configuration is written, to correspond with the committed log prefix that is being snapshotted. + + +## Application API + +We propose the following operations for clients to manipulate the cluster configuration: + - AddVoter: server becomes staging unless voter, + - AddNonvoter: server becomes nonvoter unless staging or voter, + - DemoteVoter: server becomes nonvoter unless absent, + - RemovePeer: server removed from configuration, + - GetConfiguration: waits for latest config to commit, returns committed config. + +This diagram, of which I'm quite proud, shows the possible transitions: +``` ++-----------------------------------------------------------------------------+ +| | +| Start -> +--------+ | +| ,------<------------| | | +| / | absent | | +| / RemovePeer--> | | <---RemovePeer | +| / | +--------+ \ | +| / | | \ | +| AddNonvoter | AddVoter \ | +| | ,->---' `--<-. | \ | +| v / \ v \ | +| +----------+ +----------+ +----------+ | +| | | ---AddVoter--> | | -log caught up --> | | | +| | nonvoter | | staging | | voter | | +| | | <-DemoteVoter- | | ,- | | | +| +----------+ \ +----------+ / +----------+ | +| \ / | +| `--------------<---------------' | +| | ++-----------------------------------------------------------------------------+ +``` + +While these operations aren't quite symmetric, we think they're a good set to capture +the possible intent of the user. For example, if I want to make sure a server doesn't have a vote, but the server isn't part of the configuration at all, it probably shouldn't be added as a nonvoting server. + +Each of these application-level operations will be interpreted by the leader and, if it has an effect, will cause the leader to write a new configuration entry to its log. Which particular application-level operation caused the log entry to be written need not be part of the log entry. + +## Code implications + +This is a non-exhaustive list, but we came up with a few things: +- Remove the PeerStore: the `peers.json` file introduces the possibility of getting out of sync with the log and snapshot, and it's hard to maintain this atomically as the log changes. It's not clear whether it's meant to track the committed or latest configuration, either. +- Servers will have to search their snapshot and log to find the committed configuration and the latest configuration on startup. +- Bootstrap will no longer use `peers.json` but should initialize the log or snapshot with an application-provided configuration entry. +- Snapshots should store the index of their configuration along with the configuration itself. In my experience with LogCabin, the original log index of the configuration is very useful to include in debug log messages. +- As noted in hashicorp/raft#84, configuration change requests should come in via a separate channel, and one may not proceed until the last has been committed. 
+- As to deciding when a log is sufficiently caught up, implementing a sophisticated algorithm *is* something that can be done in a separate PR. An easy and decent placeholder is: once the staging server has reached 95% of the leader's commit index, promote it. + +## Feedback + +Again, we're looking for feedback here before we start working on this. Here are some questions to think about: + - Does this seem like where we want things to go? + - Is there anything here that should be left out? + - Is there anything else we're forgetting about? + - Is there a good way to break this up? + - What do we need to worry about in terms of backwards compatibility? + - What implication will this have on current tests? + - What's the best way to test this code, in particular the small changes that will be sprinkled all over the library? diff --git a/vendor/github.com/hashicorp/raft/net_transport.go b/vendor/github.com/hashicorp/raft/net_transport.go new file mode 100644 index 0000000000..7c55ac5371 --- /dev/null +++ b/vendor/github.com/hashicorp/raft/net_transport.go @@ -0,0 +1,622 @@ +package raft + +import ( + "bufio" + "errors" + "fmt" + "io" + "log" + "net" + "os" + "sync" + "time" + + "github.com/hashicorp/go-msgpack/codec" +) + +const ( + rpcAppendEntries uint8 = iota + rpcRequestVote + rpcInstallSnapshot + + // DefaultTimeoutScale is the default TimeoutScale in a NetworkTransport. + DefaultTimeoutScale = 256 * 1024 // 256KB + + // rpcMaxPipeline controls the maximum number of outstanding + // AppendEntries RPC calls. + rpcMaxPipeline = 128 +) + +var ( + // ErrTransportShutdown is returned when operations on a transport are + // invoked after it's been terminated. + ErrTransportShutdown = errors.New("transport shutdown") + + // ErrPipelineShutdown is returned when the pipeline is closed. + ErrPipelineShutdown = errors.New("append pipeline closed") +) + +/* + +NetworkTransport provides a network based transport that can be +used to communicate with Raft on remote machines. It requires +an underlying stream layer to provide a stream abstraction, which can +be simple TCP, TLS, etc. + +This transport is very simple and lightweight. Each RPC request is +framed by sending a byte that indicates the message type, followed +by the MsgPack encoded request. + +The response is an error string followed by the response object, +both are encoded using MsgPack. + +InstallSnapshot is special, in that after the RPC request we stream +the entire state. That socket is not re-used as the connection state +is not known if there is an error. + +*/ +type NetworkTransport struct { + connPool map[ServerAddress][]*netConn + connPoolLock sync.Mutex + + consumeCh chan RPC + + heartbeatFn func(RPC) + heartbeatFnLock sync.Mutex + + logger *log.Logger + + maxPool int + + shutdown bool + shutdownCh chan struct{} + shutdownLock sync.Mutex + + stream StreamLayer + + timeout time.Duration + TimeoutScale int +} + +// StreamLayer is used with the NetworkTransport to provide +// the low level stream abstraction. 
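+//
+// Editor's note: any net.Listener that can also dial its peers satisfies
+// this interface; a plain TCP implementation would wrap net.Listen for the
+// listener side and net.DialTimeout for Dial.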
+type StreamLayer interface { + net.Listener + + // Dial is used to create a new outgoing connection + Dial(address ServerAddress, timeout time.Duration) (net.Conn, error) +} + +type netConn struct { + target ServerAddress + conn net.Conn + r *bufio.Reader + w *bufio.Writer + dec *codec.Decoder + enc *codec.Encoder +} + +func (n *netConn) Release() error { + return n.conn.Close() +} + +type netPipeline struct { + conn *netConn + trans *NetworkTransport + + doneCh chan AppendFuture + inprogressCh chan *appendFuture + + shutdown bool + shutdownCh chan struct{} + shutdownLock sync.Mutex +} + +// NewNetworkTransport creates a new network transport with the given dialer +// and listener. The maxPool controls how many connections we will pool. The +// timeout is used to apply I/O deadlines. For InstallSnapshot, we multiply +// the timeout by (SnapshotSize / TimeoutScale). +func NewNetworkTransport( + stream StreamLayer, + maxPool int, + timeout time.Duration, + logOutput io.Writer, +) *NetworkTransport { + if logOutput == nil { + logOutput = os.Stderr + } + return NewNetworkTransportWithLogger(stream, maxPool, timeout, log.New(logOutput, "", log.LstdFlags)) +} + +// NewNetworkTransportWithLogger creates a new network transport with the given dialer +// and listener. The maxPool controls how many connections we will pool. The +// timeout is used to apply I/O deadlines. For InstallSnapshot, we multiply +// the timeout by (SnapshotSize / TimeoutScale). +func NewNetworkTransportWithLogger( + stream StreamLayer, + maxPool int, + timeout time.Duration, + logger *log.Logger, +) *NetworkTransport { + if logger == nil { + logger = log.New(os.Stderr, "", log.LstdFlags) + } + trans := &NetworkTransport{ + connPool: make(map[ServerAddress][]*netConn), + consumeCh: make(chan RPC), + logger: logger, + maxPool: maxPool, + shutdownCh: make(chan struct{}), + stream: stream, + timeout: timeout, + TimeoutScale: DefaultTimeoutScale, + } + go trans.listen() + return trans +} + +// SetHeartbeatHandler is used to setup a heartbeat handler +// as a fast-pass. This is to avoid head-of-line blocking from +// disk IO. +func (n *NetworkTransport) SetHeartbeatHandler(cb func(rpc RPC)) { + n.heartbeatFnLock.Lock() + defer n.heartbeatFnLock.Unlock() + n.heartbeatFn = cb +} + +// Close is used to stop the network transport. +func (n *NetworkTransport) Close() error { + n.shutdownLock.Lock() + defer n.shutdownLock.Unlock() + + if !n.shutdown { + close(n.shutdownCh) + n.stream.Close() + n.shutdown = true + } + return nil +} + +// Consumer implements the Transport interface. +func (n *NetworkTransport) Consumer() <-chan RPC { + return n.consumeCh +} + +// LocalAddr implements the Transport interface. +func (n *NetworkTransport) LocalAddr() ServerAddress { + return ServerAddress(n.stream.Addr().String()) +} + +// IsShutdown is used to check if the transport is shutdown. +func (n *NetworkTransport) IsShutdown() bool { + select { + case <-n.shutdownCh: + return true + default: + return false + } +} + +// getExistingConn is used to grab a pooled connection. +func (n *NetworkTransport) getPooledConn(target ServerAddress) *netConn { + n.connPoolLock.Lock() + defer n.connPoolLock.Unlock() + + conns, ok := n.connPool[target] + if !ok || len(conns) == 0 { + return nil + } + + var conn *netConn + num := len(conns) + conn, conns[num-1] = conns[num-1], nil + n.connPool[target] = conns[:num-1] + return conn +} + +// getConn is used to get a connection from the pool. 
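+// It prefers a pooled connection and otherwise dials a new one, wrapping it
+// with buffered I/O and msgpack encoder/decoder state.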
+func (n *NetworkTransport) getConn(target ServerAddress) (*netConn, error) { + // Check for a pooled conn + if conn := n.getPooledConn(target); conn != nil { + return conn, nil + } + + // Dial a new connection + conn, err := n.stream.Dial(target, n.timeout) + if err != nil { + return nil, err + } + + // Wrap the conn + netConn := &netConn{ + target: target, + conn: conn, + r: bufio.NewReader(conn), + w: bufio.NewWriter(conn), + } + + // Setup encoder/decoders + netConn.dec = codec.NewDecoder(netConn.r, &codec.MsgpackHandle{}) + netConn.enc = codec.NewEncoder(netConn.w, &codec.MsgpackHandle{}) + + // Done + return netConn, nil +} + +// returnConn returns a connection back to the pool. +func (n *NetworkTransport) returnConn(conn *netConn) { + n.connPoolLock.Lock() + defer n.connPoolLock.Unlock() + + key := conn.target + conns, _ := n.connPool[key] + + if !n.IsShutdown() && len(conns) < n.maxPool { + n.connPool[key] = append(conns, conn) + } else { + conn.Release() + } +} + +// AppendEntriesPipeline returns an interface that can be used to pipeline +// AppendEntries requests. +func (n *NetworkTransport) AppendEntriesPipeline(target ServerAddress) (AppendPipeline, error) { + // Get a connection + conn, err := n.getConn(target) + if err != nil { + return nil, err + } + + // Create the pipeline + return newNetPipeline(n, conn), nil +} + +// AppendEntries implements the Transport interface. +func (n *NetworkTransport) AppendEntries(target ServerAddress, args *AppendEntriesRequest, resp *AppendEntriesResponse) error { + return n.genericRPC(target, rpcAppendEntries, args, resp) +} + +// RequestVote implements the Transport interface. +func (n *NetworkTransport) RequestVote(target ServerAddress, args *RequestVoteRequest, resp *RequestVoteResponse) error { + return n.genericRPC(target, rpcRequestVote, args, resp) +} + +// genericRPC handles a simple request/response RPC. +func (n *NetworkTransport) genericRPC(target ServerAddress, rpcType uint8, args interface{}, resp interface{}) error { + // Get a conn + conn, err := n.getConn(target) + if err != nil { + return err + } + + // Set a deadline + if n.timeout > 0 { + conn.conn.SetDeadline(time.Now().Add(n.timeout)) + } + + // Send the RPC + if err = sendRPC(conn, rpcType, args); err != nil { + return err + } + + // Decode the response + canReturn, err := decodeResponse(conn, resp) + if canReturn { + n.returnConn(conn) + } + return err +} + +// InstallSnapshot implements the Transport interface. +func (n *NetworkTransport) InstallSnapshot(target ServerAddress, args *InstallSnapshotRequest, resp *InstallSnapshotResponse, data io.Reader) error { + // Get a conn, always close for InstallSnapshot + conn, err := n.getConn(target) + if err != nil { + return err + } + defer conn.Release() + + // Set a deadline, scaled by request size + if n.timeout > 0 { + timeout := n.timeout * time.Duration(args.Size/int64(n.TimeoutScale)) + if timeout < n.timeout { + timeout = n.timeout + } + conn.conn.SetDeadline(time.Now().Add(timeout)) + } + + // Send the RPC + if err = sendRPC(conn, rpcInstallSnapshot, args); err != nil { + return err + } + + // Stream the state + if _, err = io.Copy(conn.w, data); err != nil { + return err + } + + // Flush + if err = conn.w.Flush(); err != nil { + return err + } + + // Decode the response, do not return conn + _, err = decodeResponse(conn, resp) + return err +} + +// EncodePeer implements the Transport interface. 
+func (n *NetworkTransport) EncodePeer(p ServerAddress) []byte {
+	return []byte(p)
+}
+
+// DecodePeer implements the Transport interface.
+func (n *NetworkTransport) DecodePeer(buf []byte) ServerAddress {
+	return ServerAddress(buf)
+}
+
+// listen is used to handle incoming connections.
+func (n *NetworkTransport) listen() {
+	for {
+		// Accept incoming connections
+		conn, err := n.stream.Accept()
+		if err != nil {
+			if n.IsShutdown() {
+				return
+			}
+			n.logger.Printf("[ERR] raft-net: Failed to accept connection: %v", err)
+			continue
+		}
+		n.logger.Printf("[DEBUG] raft-net: %v accepted connection from: %v", n.LocalAddr(), conn.RemoteAddr())
+
+		// Handle the connection in a dedicated routine
+		go n.handleConn(conn)
+	}
+}
+
+// handleConn is used to handle an inbound connection for its lifespan.
+func (n *NetworkTransport) handleConn(conn net.Conn) {
+	defer conn.Close()
+	r := bufio.NewReader(conn)
+	w := bufio.NewWriter(conn)
+	dec := codec.NewDecoder(r, &codec.MsgpackHandle{})
+	enc := codec.NewEncoder(w, &codec.MsgpackHandle{})
+
+	for {
+		if err := n.handleCommand(r, dec, enc); err != nil {
+			if err != io.EOF {
+				n.logger.Printf("[ERR] raft-net: Failed to decode incoming command: %v", err)
+			}
+			return
+		}
+		if err := w.Flush(); err != nil {
+			n.logger.Printf("[ERR] raft-net: Failed to flush response: %v", err)
+			return
+		}
+	}
+}
+
+// handleCommand is used to decode and dispatch a single command.
+func (n *NetworkTransport) handleCommand(r *bufio.Reader, dec *codec.Decoder, enc *codec.Encoder) error {
+	// Get the rpc type
+	rpcType, err := r.ReadByte()
+	if err != nil {
+		return err
+	}
+
+	// Create the RPC object
+	respCh := make(chan RPCResponse, 1)
+	rpc := RPC{
+		RespChan: respCh,
+	}
+
+	// Decode the command
+	isHeartbeat := false
+	switch rpcType {
+	case rpcAppendEntries:
+		var req AppendEntriesRequest
+		if err := dec.Decode(&req); err != nil {
+			return err
+		}
+		rpc.Command = &req
+
+		// Check if this is a heartbeat
+		if req.Term != 0 && req.Leader != nil &&
+			req.PrevLogEntry == 0 && req.PrevLogTerm == 0 &&
+			len(req.Entries) == 0 && req.LeaderCommitIndex == 0 {
+			isHeartbeat = true
+		}
+
+	case rpcRequestVote:
+		var req RequestVoteRequest
+		if err := dec.Decode(&req); err != nil {
+			return err
+		}
+		rpc.Command = &req
+
+	case rpcInstallSnapshot:
+		var req InstallSnapshotRequest
+		if err := dec.Decode(&req); err != nil {
+			return err
+		}
+		rpc.Command = &req
+		rpc.Reader = io.LimitReader(r, req.Size)
+
+	default:
+		return fmt.Errorf("unknown rpc type %d", rpcType)
+	}
+
+	// Check for heartbeat fast-path
+	if isHeartbeat {
+		n.heartbeatFnLock.Lock()
+		fn := n.heartbeatFn
+		n.heartbeatFnLock.Unlock()
+		if fn != nil {
+			fn(rpc)
+			goto RESP
+		}
+	}
+
+	// Dispatch the RPC
+	select {
+	case n.consumeCh <- rpc:
+	case <-n.shutdownCh:
+		return ErrTransportShutdown
+	}
+
+	// Wait for response
+RESP:
+	select {
+	case resp := <-respCh:
+		// Send the error first
+		respErr := ""
+		if resp.Error != nil {
+			respErr = resp.Error.Error()
+		}
+		if err := enc.Encode(respErr); err != nil {
+			return err
+		}
+
+		// Send the response
+		if err := enc.Encode(resp.Response); err != nil {
+			return err
+		}
+	case <-n.shutdownCh:
+		return ErrTransportShutdown
+	}
+	return nil
+}
+
+// decodeResponse is used to decode an RPC response and reports whether
+// the connection can be reused.
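+// The expected wire format mirrors the encoder side in handleCommand above:
+// a msgpack-encoded error string (empty on success) followed by the
+// msgpack-encoded response struct.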
+func decodeResponse(conn *netConn, resp interface{}) (bool, error) {
+	// Decode the error if any
+	var rpcError string
+	if err := conn.dec.Decode(&rpcError); err != nil {
+		conn.Release()
+		return false, err
+	}
+
+	// Decode the response
+	if err := conn.dec.Decode(resp); err != nil {
+		conn.Release()
+		return false, err
+	}
+
+	// Format an error if any
+	if rpcError != "" {
+		return true, fmt.Errorf(rpcError)
+	}
+	return true, nil
+}
+
+// sendRPC is used to encode and send the RPC.
+func sendRPC(conn *netConn, rpcType uint8, args interface{}) error {
+	// Write the request type
+	if err := conn.w.WriteByte(rpcType); err != nil {
+		conn.Release()
+		return err
+	}
+
+	// Send the request
+	if err := conn.enc.Encode(args); err != nil {
+		conn.Release()
+		return err
+	}
+
+	// Flush
+	if err := conn.w.Flush(); err != nil {
+		conn.Release()
+		return err
+	}
+	return nil
+}
+
+// newNetPipeline is used to construct a netPipeline from a given
+// transport and connection.
+func newNetPipeline(trans *NetworkTransport, conn *netConn) *netPipeline {
+	n := &netPipeline{
+		conn:         conn,
+		trans:        trans,
+		doneCh:       make(chan AppendFuture, rpcMaxPipeline),
+		inprogressCh: make(chan *appendFuture, rpcMaxPipeline),
+		shutdownCh:   make(chan struct{}),
+	}
+	go n.decodeResponses()
+	return n
+}
+
+// decodeResponses is a long running routine that decodes the responses
+// sent on the connection.
+func (n *netPipeline) decodeResponses() {
+	timeout := n.trans.timeout
+	for {
+		select {
+		case future := <-n.inprogressCh:
+			if timeout > 0 {
+				n.conn.conn.SetReadDeadline(time.Now().Add(timeout))
+			}
+
+			_, err := decodeResponse(n.conn, future.resp)
+			future.respond(err)
+			select {
+			case n.doneCh <- future:
+			case <-n.shutdownCh:
+				return
+			}
+		case <-n.shutdownCh:
+			return
+		}
+	}
+}
+
+// AppendEntries is used to pipeline a new append entries request.
+func (n *netPipeline) AppendEntries(args *AppendEntriesRequest, resp *AppendEntriesResponse) (AppendFuture, error) {
+	// Create a new future
+	future := &appendFuture{
+		start: time.Now(),
+		args:  args,
+		resp:  resp,
+	}
+	future.init()
+
+	// Add a send timeout
+	if timeout := n.trans.timeout; timeout > 0 {
+		n.conn.conn.SetWriteDeadline(time.Now().Add(timeout))
+	}
+
+	// Send the RPC
+	if err := sendRPC(n.conn, rpcAppendEntries, future.args); err != nil {
+		return nil, err
+	}
+
+	// Hand-off for decoding, this can also cause back-pressure
+	// to prevent too many inflight requests
+	select {
+	case n.inprogressCh <- future:
+		return future, nil
+	case <-n.shutdownCh:
+		return nil, ErrPipelineShutdown
+	}
+}
+
+// Consumer returns a channel that can be used to consume complete futures.
+func (n *netPipeline) Consumer() <-chan AppendFuture {
+	return n.doneCh
+}
+
+// Close is used to shut down the pipeline connection.
+func (n *netPipeline) Close() error {
+	n.shutdownLock.Lock()
+	defer n.shutdownLock.Unlock()
+	if n.shutdown {
+		return nil
+	}
+
+	// Release the connection
+	n.conn.Release()
+
+	n.shutdown = true
+	close(n.shutdownCh)
+	return nil
+}
diff --git a/vendor/github.com/hashicorp/raft/observer.go b/vendor/github.com/hashicorp/raft/observer.go
new file mode 100644
index 0000000000..22500fa875
--- /dev/null
+++ b/vendor/github.com/hashicorp/raft/observer.go
@@ -0,0 +1,115 @@
+package raft
+
+import (
+	"sync/atomic"
+)
+
+// Observation is sent along the given channel to observers when an event occurs.
+type Observation struct {
+	// Raft holds the Raft instance generating the observation.
+	Raft *Raft
+	// Data holds observation-specific data. Possible types are
+	// *RequestVoteRequest and RaftState.
+	Data interface{}
+}
+
+// nextObserverID is used to provide a unique ID for each observer to aid in
+// deregistration.
+var nextObserverID uint64
+
+// FilterFn is a function that can be registered in order to filter observations.
+// The function reports whether the observation should be included - if
+// it returns false, the observation will be filtered out.
+type FilterFn func(o *Observation) bool
+
+// Observer describes what to do with a given observation.
+type Observer struct {
+	// channel receives observations.
+	channel chan Observation
+
+	// blocking, if true, will cause Raft to block when sending an observation
+	// to this observer. This should generally be set to false.
+	blocking bool
+
+	// filter will be called to determine if an observation should be sent to
+	// the channel.
+	filter FilterFn
+
+	// id is the ID of this observer in the Raft map.
+	id uint64
+
+	// numObserved and numDropped are performance counters for this observer.
+	numObserved uint64
+	numDropped  uint64
+}
+
+// NewObserver creates a new observer that can be registered
+// to make observations on a Raft instance. Observations
+// will be sent on the given channel if they satisfy the
+// given filter.
+//
+// If blocking is true, the observer will block when it can't
+// send on the channel, otherwise it may discard events.
+func NewObserver(channel chan Observation, blocking bool, filter FilterFn) *Observer {
+	return &Observer{
+		channel:  channel,
+		blocking: blocking,
+		filter:   filter,
+		id:       atomic.AddUint64(&nextObserverID, 1),
+	}
+}
+
+// GetNumObserved returns the number of observations.
+func (or *Observer) GetNumObserved() uint64 {
+	return atomic.LoadUint64(&or.numObserved)
+}
+
+// GetNumDropped returns the number of dropped observations due to blocking.
+func (or *Observer) GetNumDropped() uint64 {
+	return atomic.LoadUint64(&or.numDropped)
+}
+
+// RegisterObserver registers a new observer.
+func (r *Raft) RegisterObserver(or *Observer) {
+	r.observersLock.Lock()
+	defer r.observersLock.Unlock()
+	r.observers[or.id] = or
+}
+
+// DeregisterObserver deregisters an observer.
+func (r *Raft) DeregisterObserver(or *Observer) {
+	r.observersLock.Lock()
+	defer r.observersLock.Unlock()
+	delete(r.observers, or.id)
+}
+
+// observe sends an observation to every observer.
+func (r *Raft) observe(o interface{}) {
+	// In general observers should not block. But in any case this isn't
+	// disastrous as we only hold a read lock, which merely prevents
+	// registration / deregistration of observers.
+	r.observersLock.RLock()
+	defer r.observersLock.RUnlock()
+	for _, or := range r.observers {
+		// It's wasteful to do this in the loop, but for the common case
+		// where there are no observers we won't create any objects.
+ ob := Observation{Raft: r, Data: o} + if or.filter != nil && !or.filter(&ob) { + continue + } + if or.channel == nil { + continue + } + if or.blocking { + or.channel <- ob + atomic.AddUint64(&or.numObserved, 1) + } else { + select { + case or.channel <- ob: + atomic.AddUint64(&or.numObserved, 1) + default: + atomic.AddUint64(&or.numDropped, 1) + } + } + } +} diff --git a/vendor/github.com/hashicorp/raft/peersjson.go b/vendor/github.com/hashicorp/raft/peersjson.go new file mode 100644 index 0000000000..c55fdbb43d --- /dev/null +++ b/vendor/github.com/hashicorp/raft/peersjson.go @@ -0,0 +1,46 @@ +package raft + +import ( + "bytes" + "encoding/json" + "io/ioutil" +) + +// ReadPeersJSON consumes a legacy peers.json file in the format of the old JSON +// peer store and creates a new-style configuration structure. This can be used +// to migrate this data or perform manual recovery when running protocol versions +// that can interoperate with older, unversioned Raft servers. This should not be +// used once server IDs are in use, because the old peers.json file didn't have +// support for these, nor non-voter suffrage types. +func ReadPeersJSON(path string) (Configuration, error) { + // Read in the file. + buf, err := ioutil.ReadFile(path) + if err != nil { + return Configuration{}, err + } + + // Parse it as JSON. + var peers []string + dec := json.NewDecoder(bytes.NewReader(buf)) + if err := dec.Decode(&peers); err != nil { + return Configuration{}, err + } + + // Map it into the new-style configuration structure. We can only specify + // voter roles here, and the ID has to be the same as the address. + var configuration Configuration + for _, peer := range peers { + server := Server{ + Suffrage: Voter, + ID: ServerID(peer), + Address: ServerAddress(peer), + } + configuration.Servers = append(configuration.Servers, server) + } + + // We should only ingest valid configurations. + if err := checkConfiguration(configuration); err != nil { + return Configuration{}, err + } + return configuration, nil +} diff --git a/vendor/github.com/hashicorp/raft/raft.go b/vendor/github.com/hashicorp/raft/raft.go new file mode 100644 index 0000000000..aa8fe82082 --- /dev/null +++ b/vendor/github.com/hashicorp/raft/raft.go @@ -0,0 +1,1456 @@ +package raft + +import ( + "bytes" + "container/list" + "fmt" + "io" + "time" + + "github.com/armon/go-metrics" +) + +const ( + minCheckInterval = 10 * time.Millisecond +) + +var ( + keyCurrentTerm = []byte("CurrentTerm") + keyLastVoteTerm = []byte("LastVoteTerm") + keyLastVoteCand = []byte("LastVoteCand") +) + +// getRPCHeader returns an initialized RPCHeader struct for the given +// Raft instance. This structure is sent along with RPC requests and +// responses. +func (r *Raft) getRPCHeader() RPCHeader { + return RPCHeader{ + ProtocolVersion: r.conf.ProtocolVersion, + } +} + +// checkRPCHeader houses logic about whether this instance of Raft can process +// the given RPC message. +func (r *Raft) checkRPCHeader(rpc RPC) error { + // Get the header off the RPC message. + wh, ok := rpc.Command.(WithRPCHeader) + if !ok { + return fmt.Errorf("RPC does not have a header") + } + header := wh.GetRPCHeader() + + // First check is to just make sure the code can understand the + // protocol at all. + if header.ProtocolVersion < ProtocolVersionMin || + header.ProtocolVersion > ProtocolVersionMax { + return ErrUnsupportedProtocol + } + + // Second check is whether we should support this message, given the + // current protocol we are configured to run. 
This will drop support + // for protocol version 0 starting at protocol version 2, which is + // currently what we want, and in general support one version back. We + // may need to revisit this policy depending on how future protocol + // changes evolve. + if header.ProtocolVersion < r.conf.ProtocolVersion-1 { + return ErrUnsupportedProtocol + } + + return nil +} + +// getSnapshotVersion returns the snapshot version that should be used when +// creating snapshots, given the protocol version in use. +func getSnapshotVersion(protocolVersion ProtocolVersion) SnapshotVersion { + // Right now we only have two versions and they are backwards compatible + // so we don't need to look at the protocol version. + return 1 +} + +// commitTuple is used to send an index that was committed, +// with an optional associated future that should be invoked. +type commitTuple struct { + log *Log + future *logFuture +} + +// leaderState is state that is used while we are a leader. +type leaderState struct { + commitCh chan struct{} + commitment *commitment + inflight *list.List // list of logFuture in log index order + replState map[ServerID]*followerReplication + notify map[*verifyFuture]struct{} + stepDown chan struct{} +} + +// setLeader is used to modify the current leader of the cluster +func (r *Raft) setLeader(leader ServerAddress) { + r.leaderLock.Lock() + r.leader = leader + r.leaderLock.Unlock() +} + +// requestConfigChange is a helper for the above functions that make +// configuration change requests. 'req' describes the change. For timeout, +// see AddVoter. +func (r *Raft) requestConfigChange(req configurationChangeRequest, timeout time.Duration) IndexFuture { + var timer <-chan time.Time + if timeout > 0 { + timer = time.After(timeout) + } + future := &configurationChangeFuture{ + req: req, + } + future.init() + select { + case <-timer: + return errorFuture{ErrEnqueueTimeout} + case r.configurationChangeCh <- future: + return future + case <-r.shutdownCh: + return errorFuture{ErrRaftShutdown} + } +} + +// run is a long running goroutine that runs the Raft FSM. +func (r *Raft) run() { + for { + // Check if we are doing a shutdown + select { + case <-r.shutdownCh: + // Clear the leader to prevent forwarding + r.setLeader("") + return + default: + } + + // Enter into a sub-FSM + switch r.getState() { + case Follower: + r.runFollower() + case Candidate: + r.runCandidate() + case Leader: + r.runLeader() + } + } +} + +// runFollower runs the FSM for a follower. 
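+// It services RPCs and rejects leader-only operations (apply, verify,
+// configuration changes, user restores) with ErrNotLeader until the
+// randomized heartbeat timer fires without recent leader contact, at which
+// point it transitions to candidate, unless it knows of no peers or is not a
+// voter in a stable configuration.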
+func (r *Raft) runFollower() { + didWarn := false + r.logger.Printf("[INFO] raft: %v entering Follower state (Leader: %q)", r, r.Leader()) + metrics.IncrCounter([]string{"raft", "state", "follower"}, 1) + heartbeatTimer := randomTimeout(r.conf.HeartbeatTimeout) + for { + select { + case rpc := <-r.rpcCh: + r.processRPC(rpc) + + case c := <-r.configurationChangeCh: + // Reject any operations since we are not the leader + c.respond(ErrNotLeader) + + case a := <-r.applyCh: + // Reject any operations since we are not the leader + a.respond(ErrNotLeader) + + case v := <-r.verifyCh: + // Reject any operations since we are not the leader + v.respond(ErrNotLeader) + + case r := <-r.userRestoreCh: + // Reject any restores since we are not the leader + r.respond(ErrNotLeader) + + case c := <-r.configurationsCh: + c.configurations = r.configurations.Clone() + c.respond(nil) + + case b := <-r.bootstrapCh: + b.respond(r.liveBootstrap(b.configuration)) + + case <-heartbeatTimer: + // Restart the heartbeat timer + heartbeatTimer = randomTimeout(r.conf.HeartbeatTimeout) + + // Check if we have had a successful contact + lastContact := r.LastContact() + if time.Now().Sub(lastContact) < r.conf.HeartbeatTimeout { + continue + } + + // Heartbeat failed! Transition to the candidate state + lastLeader := r.Leader() + r.setLeader("") + + if r.configurations.latestIndex == 0 { + if !didWarn { + r.logger.Printf("[WARN] raft: no known peers, aborting election") + didWarn = true + } + } else if r.configurations.latestIndex == r.configurations.committedIndex && + !hasVote(r.configurations.latest, r.localID) { + if !didWarn { + r.logger.Printf("[WARN] raft: not part of stable configuration, aborting election") + didWarn = true + } + } else { + r.logger.Printf(`[WARN] raft: Heartbeat timeout from %q reached, starting election`, lastLeader) + metrics.IncrCounter([]string{"raft", "transition", "heartbeat_timeout"}, 1) + r.setState(Candidate) + return + } + + case <-r.shutdownCh: + return + } + } +} + +// liveBootstrap attempts to seed an initial configuration for the cluster. See +// the Raft object's member BootstrapCluster for more details. This must only be +// called on the main thread, and only makes sense in the follower state. +func (r *Raft) liveBootstrap(configuration Configuration) error { + // Use the pre-init API to make the static updates. + err := BootstrapCluster(&r.conf, r.logs, r.stable, r.snapshots, + r.trans, configuration) + if err != nil { + return err + } + + // Make the configuration live. + var entry Log + if err := r.logs.GetLog(1, &entry); err != nil { + panic(err) + } + r.setCurrentTerm(1) + r.setLastLog(entry.Index, entry.Term) + r.processConfigurationLogEntry(&entry) + return nil +} + +// runCandidate runs the FSM for a candidate. 
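+// The candidate increments its term, votes for itself, and counts granted
+// votes until it reaches a simple majority; with quorumSize() = voters/2 + 1
+// (defined later in this file) that works out to, for example:
+//
+//	1 voter  -> 1 vote needed
+//	3 voters -> 2 votes needed
+//	5 voters -> 3 votes needed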
+func (r *Raft) runCandidate() { + r.logger.Printf("[INFO] raft: %v entering Candidate state in term %v", + r, r.getCurrentTerm()+1) + metrics.IncrCounter([]string{"raft", "state", "candidate"}, 1) + + // Start vote for us, and set a timeout + voteCh := r.electSelf() + electionTimer := randomTimeout(r.conf.ElectionTimeout) + + // Tally the votes, need a simple majority + grantedVotes := 0 + votesNeeded := r.quorumSize() + r.logger.Printf("[DEBUG] raft: Votes needed: %d", votesNeeded) + + for r.getState() == Candidate { + select { + case rpc := <-r.rpcCh: + r.processRPC(rpc) + + case vote := <-voteCh: + // Check if the term is greater than ours, bail + if vote.Term > r.getCurrentTerm() { + r.logger.Printf("[DEBUG] raft: Newer term discovered, fallback to follower") + r.setState(Follower) + r.setCurrentTerm(vote.Term) + return + } + + // Check if the vote is granted + if vote.Granted { + grantedVotes++ + r.logger.Printf("[DEBUG] raft: Vote granted from %s in term %v. Tally: %d", + vote.voterID, vote.Term, grantedVotes) + } + + // Check if we've become the leader + if grantedVotes >= votesNeeded { + r.logger.Printf("[INFO] raft: Election won. Tally: %d", grantedVotes) + r.setState(Leader) + r.setLeader(r.localAddr) + return + } + + case c := <-r.configurationChangeCh: + // Reject any operations since we are not the leader + c.respond(ErrNotLeader) + + case a := <-r.applyCh: + // Reject any operations since we are not the leader + a.respond(ErrNotLeader) + + case v := <-r.verifyCh: + // Reject any operations since we are not the leader + v.respond(ErrNotLeader) + + case r := <-r.userRestoreCh: + // Reject any restores since we are not the leader + r.respond(ErrNotLeader) + + case c := <-r.configurationsCh: + c.configurations = r.configurations.Clone() + c.respond(nil) + + case b := <-r.bootstrapCh: + b.respond(ErrCantBootstrap) + + case <-electionTimer: + // Election failed! Restart the election. We simply return, + // which will kick us back into runCandidate + r.logger.Printf("[WARN] raft: Election timeout reached, restarting election") + return + + case <-r.shutdownCh: + return + } + } +} + +// runLeader runs the FSM for a leader. Do the setup here and drop into +// the leaderLoop for the hot loop. +func (r *Raft) runLeader() { + r.logger.Printf("[INFO] raft: %v entering Leader state", r) + metrics.IncrCounter([]string{"raft", "state", "leader"}, 1) + + // Notify that we are the leader + asyncNotifyBool(r.leaderCh, true) + + // Push to the notify channel if given + if notify := r.conf.NotifyCh; notify != nil { + select { + case notify <- true: + case <-r.shutdownCh: + } + } + + // Setup leader state + r.leaderState.commitCh = make(chan struct{}, 1) + r.leaderState.commitment = newCommitment(r.leaderState.commitCh, + r.configurations.latest, + r.getLastIndex()+1 /* first index that may be committed in this term */) + r.leaderState.inflight = list.New() + r.leaderState.replState = make(map[ServerID]*followerReplication) + r.leaderState.notify = make(map[*verifyFuture]struct{}) + r.leaderState.stepDown = make(chan struct{}, 1) + + // Cleanup state on step down + defer func() { + // Since we were the leader previously, we update our + // last contact time when we step down, so that we are not + // reporting a last contact time from before we were the + // leader. Otherwise, to a client it would seem our data + // is extremely stale. 
+ r.setLastContact() + + // Stop replication + for _, p := range r.leaderState.replState { + close(p.stopCh) + } + + // Respond to all inflight operations + for e := r.leaderState.inflight.Front(); e != nil; e = e.Next() { + e.Value.(*logFuture).respond(ErrLeadershipLost) + } + + // Respond to any pending verify requests + for future := range r.leaderState.notify { + future.respond(ErrLeadershipLost) + } + + // Clear all the state + r.leaderState.commitCh = nil + r.leaderState.commitment = nil + r.leaderState.inflight = nil + r.leaderState.replState = nil + r.leaderState.notify = nil + r.leaderState.stepDown = nil + + // If we are stepping down for some reason, no known leader. + // We may have stepped down due to an RPC call, which would + // provide the leader, so we cannot always blank this out. + r.leaderLock.Lock() + if r.leader == r.localAddr { + r.leader = "" + } + r.leaderLock.Unlock() + + // Notify that we are not the leader + asyncNotifyBool(r.leaderCh, false) + + // Push to the notify channel if given + if notify := r.conf.NotifyCh; notify != nil { + select { + case notify <- false: + case <-r.shutdownCh: + // On shutdown, make a best effort but do not block + select { + case notify <- false: + default: + } + } + } + }() + + // Start a replication routine for each peer + r.startStopReplication() + + // Dispatch a no-op log entry first. This gets this leader up to the latest + // possible commit index, even in the absence of client commands. This used + // to append a configuration entry instead of a noop. However, that permits + // an unbounded number of uncommitted configurations in the log. We now + // maintain that there exists at most one uncommitted configuration entry in + // any log, so we have to do proper no-ops here. + noop := &logFuture{ + log: Log{ + Type: LogNoop, + }, + } + r.dispatchLogs([]*logFuture{noop}) + + // Sit in the leader loop until we step down + r.leaderLoop() +} + +// startStopReplication will set up state and start asynchronous replication to +// new peers, and stop replication to removed peers. Before removing a peer, +// it'll instruct the replication routines to try to replicate to the current +// index. This must only be called from the main thread. 
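+// The implementation below diffs the latest configuration against the
+// running replication goroutines: servers newly in the configuration get a
+// followerReplication and a goroutine, and goroutines for removed servers
+// are handed the current last index on stopCh so they can drain before
+// exiting.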
+func (r *Raft) startStopReplication() { + inConfig := make(map[ServerID]bool, len(r.configurations.latest.Servers)) + lastIdx := r.getLastIndex() + + // Start replication goroutines that need starting + for _, server := range r.configurations.latest.Servers { + if server.ID == r.localID { + continue + } + inConfig[server.ID] = true + if _, ok := r.leaderState.replState[server.ID]; !ok { + r.logger.Printf("[INFO] raft: Added peer %v, starting replication", server.ID) + s := &followerReplication{ + peer: server, + commitment: r.leaderState.commitment, + stopCh: make(chan uint64, 1), + triggerCh: make(chan struct{}, 1), + currentTerm: r.getCurrentTerm(), + nextIndex: lastIdx + 1, + lastContact: time.Now(), + notifyCh: make(chan struct{}, 1), + stepDown: r.leaderState.stepDown, + } + r.leaderState.replState[server.ID] = s + r.goFunc(func() { r.replicate(s) }) + asyncNotifyCh(s.triggerCh) + } + } + + // Stop replication goroutines that need stopping + for serverID, repl := range r.leaderState.replState { + if inConfig[serverID] { + continue + } + // Replicate up to lastIdx and stop + r.logger.Printf("[INFO] raft: Removed peer %v, stopping replication after %v", serverID, lastIdx) + repl.stopCh <- lastIdx + close(repl.stopCh) + delete(r.leaderState.replState, serverID) + } +} + +// configurationChangeChIfStable returns r.configurationChangeCh if it's safe +// to process requests from it, or nil otherwise. This must only be called +// from the main thread. +// +// Note that if the conditions here were to change outside of leaderLoop to take +// this from nil to non-nil, we would need leaderLoop to be kicked. +func (r *Raft) configurationChangeChIfStable() chan *configurationChangeFuture { + // Have to wait until: + // 1. The latest configuration is committed, and + // 2. This leader has committed some entry (the noop) in this term + // https://groups.google.com/forum/#!msg/raft-dev/t4xj6dJTP6E/d2D9LrWRza8J + if r.configurations.latestIndex == r.configurations.committedIndex && + r.getCommitIndex() >= r.leaderState.commitment.startIndex { + return r.configurationChangeCh + } + return nil +} + +// leaderLoop is the hot loop for a leader. It is invoked +// after all the various leader setup is done. +func (r *Raft) leaderLoop() { + // stepDown is used to track if there is an inflight log that + // would cause us to lose leadership (specifically a RemovePeer of + // ourselves). If this is the case, we must not allow any logs to + // be processed in parallel, otherwise we are basing commit on + // only a single peer (ourself) and replicating to an undefined set + // of peers. 
+ stepDown := false + + lease := time.After(r.conf.LeaderLeaseTimeout) + for r.getState() == Leader { + select { + case rpc := <-r.rpcCh: + r.processRPC(rpc) + + case <-r.leaderState.stepDown: + r.setState(Follower) + + case <-r.leaderState.commitCh: + // Process the newly committed entries + oldCommitIndex := r.getCommitIndex() + commitIndex := r.leaderState.commitment.getCommitIndex() + r.setCommitIndex(commitIndex) + + if r.configurations.latestIndex > oldCommitIndex && + r.configurations.latestIndex <= commitIndex { + r.configurations.committed = r.configurations.latest + r.configurations.committedIndex = r.configurations.latestIndex + if !hasVote(r.configurations.committed, r.localID) { + stepDown = true + } + } + + for { + e := r.leaderState.inflight.Front() + if e == nil { + break + } + commitLog := e.Value.(*logFuture) + idx := commitLog.log.Index + if idx > commitIndex { + break + } + // Measure the commit time + metrics.MeasureSince([]string{"raft", "commitTime"}, commitLog.dispatch) + r.processLogs(idx, commitLog) + r.leaderState.inflight.Remove(e) + } + + if stepDown { + if r.conf.ShutdownOnRemove { + r.logger.Printf("[INFO] raft: Removed ourself, shutting down") + r.Shutdown() + } else { + r.logger.Printf("[INFO] raft: Removed ourself, transitioning to follower") + r.setState(Follower) + } + } + + case v := <-r.verifyCh: + if v.quorumSize == 0 { + // Just dispatched, start the verification + r.verifyLeader(v) + + } else if v.votes < v.quorumSize { + // Early return, means there must be a new leader + r.logger.Printf("[WARN] raft: New leader elected, stepping down") + r.setState(Follower) + delete(r.leaderState.notify, v) + v.respond(ErrNotLeader) + + } else { + // Quorum of members agree, we are still leader + delete(r.leaderState.notify, v) + v.respond(nil) + } + + case future := <-r.userRestoreCh: + err := r.restoreUserSnapshot(future.meta, future.reader) + future.respond(err) + + case c := <-r.configurationsCh: + c.configurations = r.configurations.Clone() + c.respond(nil) + + case future := <-r.configurationChangeChIfStable(): + r.appendConfigurationEntry(future) + + case b := <-r.bootstrapCh: + b.respond(ErrCantBootstrap) + + case newLog := <-r.applyCh: + // Group commit, gather all the ready commits + ready := []*logFuture{newLog} + for i := 0; i < r.conf.MaxAppendEntries; i++ { + select { + case newLog := <-r.applyCh: + ready = append(ready, newLog) + default: + break + } + } + + // Dispatch the logs + if stepDown { + // we're in the process of stepping down as leader, don't process anything new + for i := range ready { + ready[i].respond(ErrNotLeader) + } + } else { + r.dispatchLogs(ready) + } + + case <-lease: + // Check if we've exceeded the lease, potentially stepping down + maxDiff := r.checkLeaderLease() + + // Next check interval should adjust for the last node we've + // contacted, without going negative + checkInterval := r.conf.LeaderLeaseTimeout - maxDiff + if checkInterval < minCheckInterval { + checkInterval = minCheckInterval + } + + // Renew the lease timer + lease = time.After(checkInterval) + + case <-r.shutdownCh: + return + } + } +} + +// verifyLeader must be called from the main thread for safety. +// Causes the followers to attempt an immediate heartbeat. 
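+// This is the internal half of the public VerifyLeader API (defined
+// elsewhere in this package); a hedged usage sketch from the caller's side:
+//
+//	if err := r.VerifyLeader().Error(); err != nil {
+//		// no longer leader (e.g. ErrNotLeader / ErrLeadershipLost)
+//	}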
+func (r *Raft) verifyLeader(v *verifyFuture) { + // Current leader always votes for self + v.votes = 1 + + // Set the quorum size, hot-path for single node + v.quorumSize = r.quorumSize() + if v.quorumSize == 1 { + v.respond(nil) + return + } + + // Track this request + v.notifyCh = r.verifyCh + r.leaderState.notify[v] = struct{}{} + + // Trigger immediate heartbeats + for _, repl := range r.leaderState.replState { + repl.notifyLock.Lock() + repl.notify = append(repl.notify, v) + repl.notifyLock.Unlock() + asyncNotifyCh(repl.notifyCh) + } +} + +// checkLeaderLease is used to check if we can contact a quorum of nodes +// within the last leader lease interval. If not, we need to step down, +// as we may have lost connectivity. Returns the maximum duration without +// contact. This must only be called from the main thread. +func (r *Raft) checkLeaderLease() time.Duration { + // Track contacted nodes, we can always contact ourself + contacted := 1 + + // Check each follower + var maxDiff time.Duration + now := time.Now() + for peer, f := range r.leaderState.replState { + diff := now.Sub(f.LastContact()) + if diff <= r.conf.LeaderLeaseTimeout { + contacted++ + if diff > maxDiff { + maxDiff = diff + } + } else { + // Log at least once at high value, then debug. Otherwise it gets very verbose. + if diff <= 3*r.conf.LeaderLeaseTimeout { + r.logger.Printf("[WARN] raft: Failed to contact %v in %v", peer, diff) + } else { + r.logger.Printf("[DEBUG] raft: Failed to contact %v in %v", peer, diff) + } + } + metrics.AddSample([]string{"raft", "leader", "lastContact"}, float32(diff/time.Millisecond)) + } + + // Verify we can contact a quorum + quorum := r.quorumSize() + if contacted < quorum { + r.logger.Printf("[WARN] raft: Failed to contact quorum of nodes, stepping down") + r.setState(Follower) + metrics.IncrCounter([]string{"raft", "transition", "leader_lease_timeout"}, 1) + } + return maxDiff +} + +// quorumSize is used to return the quorum size. This must only be called on +// the main thread. +// TODO: revisit usage +func (r *Raft) quorumSize() int { + voters := 0 + for _, server := range r.configurations.latest.Servers { + if server.Suffrage == Voter { + voters++ + } + } + return voters/2 + 1 +} + +// restoreUserSnapshot is used to manually consume an external snapshot, such +// as if restoring from a backup. We will use the current Raft configuration, +// not the one from the snapshot, so that we can restore into a new cluster. We +// will also use the higher of the index of the snapshot, or the current index, +// and then add 1 to that, so we force a new state with a hole in the Raft log, +// so that the snapshot will be sent to followers and used for any new joiners. +// This can only be run on the leader, and returns a future that can be used to +// block until complete. +func (r *Raft) restoreUserSnapshot(meta *SnapshotMeta, reader io.Reader) error { + defer metrics.MeasureSince([]string{"raft", "restoreUserSnapshot"}, time.Now()) + + // Sanity check the version. + version := meta.Version + if version < SnapshotVersionMin || version > SnapshotVersionMax { + return fmt.Errorf("unsupported snapshot version %d", version) + } + + // We don't support snapshots while there's a config change + // outstanding since the snapshot doesn't have a means to + // represent this state. 
+ committedIndex := r.configurations.committedIndex + latestIndex := r.configurations.latestIndex + if committedIndex != latestIndex { + return fmt.Errorf("cannot restore snapshot now, wait until the configuration entry at %v has been applied (have applied %v)", + latestIndex, committedIndex) + } + + // Cancel any inflight requests. + for { + e := r.leaderState.inflight.Front() + if e == nil { + break + } + e.Value.(*logFuture).respond(ErrAbortedByRestore) + r.leaderState.inflight.Remove(e) + } + + // We will overwrite the snapshot metadata with the current term, + // an index that's greater than the current index, or the last + // index in the snapshot. It's important that we leave a hole in + // the index so we know there's nothing in the Raft log there and + // replication will fault and send the snapshot. + term := r.getCurrentTerm() + lastIndex := r.getLastIndex() + if meta.Index > lastIndex { + lastIndex = meta.Index + } + lastIndex++ + + // Dump the snapshot. Note that we use the latest configuration, + // not the one that came with the snapshot. + sink, err := r.snapshots.Create(version, lastIndex, term, + r.configurations.latest, r.configurations.latestIndex, r.trans) + if err != nil { + return fmt.Errorf("failed to create snapshot: %v", err) + } + n, err := io.Copy(sink, reader) + if err != nil { + sink.Cancel() + return fmt.Errorf("failed to write snapshot: %v", err) + } + if n != meta.Size { + sink.Cancel() + return fmt.Errorf("failed to write snapshot, size didn't match (%d != %d)", n, meta.Size) + } + if err := sink.Close(); err != nil { + return fmt.Errorf("failed to close snapshot: %v", err) + } + r.logger.Printf("[INFO] raft: Copied %d bytes to local snapshot", n) + + // Restore the snapshot into the FSM. If this fails we are in a + // bad state so we panic to take ourselves out. + fsm := &restoreFuture{ID: sink.ID()} + fsm.init() + select { + case r.fsmMutateCh <- fsm: + case <-r.shutdownCh: + return ErrRaftShutdown + } + if err := fsm.Error(); err != nil { + panic(fmt.Errorf("failed to restore snapshot: %v", err)) + } + + // We set the last log so it looks like we've stored the empty + // index we burned. The last applied is set because we made the + // FSM take the snapshot state, and we store the last snapshot + // in the stable store since we created a snapshot as part of + // this process. + r.setLastLog(lastIndex, term) + r.setLastApplied(lastIndex) + r.setLastSnapshot(lastIndex, term) + + r.logger.Printf("[INFO] raft: Restored user snapshot (index %d)", lastIndex) + return nil +} + +// appendConfigurationEntry changes the configuration and adds a new +// configuration entry to the log. This must only be called from the +// main thread. +func (r *Raft) appendConfigurationEntry(future *configurationChangeFuture) { + configuration, err := nextConfiguration(r.configurations.latest, r.configurations.latestIndex, future.req) + if err != nil { + future.respond(err) + return + } + + r.logger.Printf("[INFO] raft: Updating configuration with %s (%v, %v) to %+v", + future.req.command, future.req.serverID, future.req.serverAddress, configuration.Servers) + + // In pre-ID compatibility mode we translate all configuration changes + // in to an old remove peer message, which can handle all supported + // cases for peer changes in the pre-ID world (adding and removing + // voters). Both add peer and remove peer log entries are handled + // similarly on old Raft servers, but remove peer does extra checks to + // see if a leader needs to step down. 
Since they both assert the full
+	// configuration, we can safely call remove peer for everything.
+	if r.protocolVersion < 2 {
+		future.log = Log{
+			Type: LogRemovePeerDeprecated,
+			Data: encodePeers(configuration, r.trans),
+		}
+	} else {
+		future.log = Log{
+			Type: LogConfiguration,
+			Data: encodeConfiguration(configuration),
+		}
+	}
+
+	r.dispatchLogs([]*logFuture{&future.logFuture})
+	index := future.Index()
+	r.configurations.latest = configuration
+	r.configurations.latestIndex = index
+	r.leaderState.commitment.setConfiguration(configuration)
+	r.startStopReplication()
+}
+
+// dispatchLogs is called on the leader to push a log to disk, mark it
+// as inflight and begin replication of it.
+func (r *Raft) dispatchLogs(applyLogs []*logFuture) {
+	now := time.Now()
+	defer metrics.MeasureSince([]string{"raft", "leader", "dispatchLog"}, now)
+
+	term := r.getCurrentTerm()
+	lastIndex := r.getLastIndex()
+	logs := make([]*Log, len(applyLogs))
+
+	for idx, applyLog := range applyLogs {
+		applyLog.dispatch = now
+		lastIndex++
+		applyLog.log.Index = lastIndex
+		applyLog.log.Term = term
+		logs[idx] = &applyLog.log
+		r.leaderState.inflight.PushBack(applyLog)
+	}
+
+	// Write the log entry locally
+	if err := r.logs.StoreLogs(logs); err != nil {
+		r.logger.Printf("[ERR] raft: Failed to commit logs: %v", err)
+		for _, applyLog := range applyLogs {
+			applyLog.respond(err)
+		}
+		r.setState(Follower)
+		return
+	}
+	r.leaderState.commitment.match(r.localID, lastIndex)
+
+	// Update the last log since it's on disk now
+	r.setLastLog(lastIndex, term)
+
+	// Notify the replicators of the new log
+	for _, f := range r.leaderState.replState {
+		asyncNotifyCh(f.triggerCh)
+	}
+}
+
+// processLogs is used to apply all the committed entries that haven't been
+// applied up to the given index limit.
+// This can be called from both leaders and followers.
+// Followers call this from AppendEntries, for n entries at a time, and always
+// pass future=nil.
+// Leaders call this once per inflight when entries are committed. They pass
+// the future from inflights.
+func (r *Raft) processLogs(index uint64, future *logFuture) {
+	// Reject logs we've applied already
+	lastApplied := r.getLastApplied()
+	if index <= lastApplied {
+		r.logger.Printf("[WARN] raft: Skipping application of old log: %d", index)
+		return
+	}
+
+	// Apply all the preceding logs
+	for idx := r.getLastApplied() + 1; idx <= index; idx++ {
+		// Get the log, either from the future or from our log store
+		if future != nil && future.log.Index == idx {
+			r.processLog(&future.log, future)
+
+		} else {
+			l := new(Log)
+			if err := r.logs.GetLog(idx, l); err != nil {
+				r.logger.Printf("[ERR] raft: Failed to get log at %d: %v", idx, err)
+				panic(err)
+			}
+			r.processLog(l, nil)
+		}
+
+		// Update the lastApplied index and term
+		r.setLastApplied(idx)
+	}
+}
+
+// processLog is invoked to process the application of a single committed log entry.
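+// Command and barrier entries are forwarded to the FSM goroutine via
+// fsmMutateCh together with their future, so the future is only responded
+// to once the FSM has actually applied the entry; configuration and no-op
+// entries are acknowledged immediately.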
+func (r *Raft) processLog(l *Log, future *logFuture) { + switch l.Type { + case LogBarrier: + // Barrier is handled by the FSM + fallthrough + + case LogCommand: + // Forward to the fsm handler + select { + case r.fsmMutateCh <- &commitTuple{l, future}: + case <-r.shutdownCh: + if future != nil { + future.respond(ErrRaftShutdown) + } + } + + // Return so that the future is only responded to + // by the FSM handler when the application is done + return + + case LogConfiguration: + case LogAddPeerDeprecated: + case LogRemovePeerDeprecated: + case LogNoop: + // Ignore the no-op + + default: + panic(fmt.Errorf("unrecognized log type: %#v", l)) + } + + // Invoke the future if given + if future != nil { + future.respond(nil) + } +} + +// processRPC is called to handle an incoming RPC request. This must only be +// called from the main thread. +func (r *Raft) processRPC(rpc RPC) { + if err := r.checkRPCHeader(rpc); err != nil { + rpc.Respond(nil, err) + return + } + + switch cmd := rpc.Command.(type) { + case *AppendEntriesRequest: + r.appendEntries(rpc, cmd) + case *RequestVoteRequest: + r.requestVote(rpc, cmd) + case *InstallSnapshotRequest: + r.installSnapshot(rpc, cmd) + default: + r.logger.Printf("[ERR] raft: Got unexpected command: %#v", rpc.Command) + rpc.Respond(nil, fmt.Errorf("unexpected command")) + } +} + +// processHeartbeat is a special handler used just for heartbeat requests +// so that they can be fast-pathed if a transport supports it. This must only +// be called from the main thread. +func (r *Raft) processHeartbeat(rpc RPC) { + defer metrics.MeasureSince([]string{"raft", "rpc", "processHeartbeat"}, time.Now()) + + // Check if we are shutdown, just ignore the RPC + select { + case <-r.shutdownCh: + return + default: + } + + // Ensure we are only handling a heartbeat + switch cmd := rpc.Command.(type) { + case *AppendEntriesRequest: + r.appendEntries(rpc, cmd) + default: + r.logger.Printf("[ERR] raft: Expected heartbeat, got command: %#v", rpc.Command) + rpc.Respond(nil, fmt.Errorf("unexpected command")) + } +} + +// appendEntries is invoked when we get an append entries RPC call. This must +// only be called from the main thread. 
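+// The handler enforces the log-matching property: the follower's entry at
+// PrevLogEntry must carry PrevLogTerm, otherwise the request is rejected
+// (with NoRetryBackoff set) and the leader walks nextIndex backwards.
+// Conflicting log suffixes are truncated before new entries are stored.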
+func (r *Raft) appendEntries(rpc RPC, a *AppendEntriesRequest) { + defer metrics.MeasureSince([]string{"raft", "rpc", "appendEntries"}, time.Now()) + // Setup a response + resp := &AppendEntriesResponse{ + RPCHeader: r.getRPCHeader(), + Term: r.getCurrentTerm(), + LastLog: r.getLastIndex(), + Success: false, + NoRetryBackoff: false, + } + var rpcErr error + defer func() { + rpc.Respond(resp, rpcErr) + }() + + // Ignore an older term + if a.Term < r.getCurrentTerm() { + return + } + + // Increase the term if we see a newer one, also transition to follower + // if we ever get an appendEntries call + if a.Term > r.getCurrentTerm() || r.getState() != Follower { + // Ensure transition to follower + r.setState(Follower) + r.setCurrentTerm(a.Term) + resp.Term = a.Term + } + + // Save the current leader + r.setLeader(ServerAddress(r.trans.DecodePeer(a.Leader))) + + // Verify the last log entry + if a.PrevLogEntry > 0 { + lastIdx, lastTerm := r.getLastEntry() + + var prevLogTerm uint64 + if a.PrevLogEntry == lastIdx { + prevLogTerm = lastTerm + + } else { + var prevLog Log + if err := r.logs.GetLog(a.PrevLogEntry, &prevLog); err != nil { + r.logger.Printf("[WARN] raft: Failed to get previous log: %d %v (last: %d)", + a.PrevLogEntry, err, lastIdx) + resp.NoRetryBackoff = true + return + } + prevLogTerm = prevLog.Term + } + + if a.PrevLogTerm != prevLogTerm { + r.logger.Printf("[WARN] raft: Previous log term mis-match: ours: %d remote: %d", + prevLogTerm, a.PrevLogTerm) + resp.NoRetryBackoff = true + return + } + } + + // Process any new entries + if len(a.Entries) > 0 { + start := time.Now() + + // Delete any conflicting entries, skip any duplicates + lastLogIdx, _ := r.getLastLog() + var newEntries []*Log + for i, entry := range a.Entries { + if entry.Index > lastLogIdx { + newEntries = a.Entries[i:] + break + } + var storeEntry Log + if err := r.logs.GetLog(entry.Index, &storeEntry); err != nil { + r.logger.Printf("[WARN] raft: Failed to get log entry %d: %v", + entry.Index, err) + return + } + if entry.Term != storeEntry.Term { + r.logger.Printf("[WARN] raft: Clearing log suffix from %d to %d", entry.Index, lastLogIdx) + if err := r.logs.DeleteRange(entry.Index, lastLogIdx); err != nil { + r.logger.Printf("[ERR] raft: Failed to clear log suffix: %v", err) + return + } + if entry.Index <= r.configurations.latestIndex { + r.configurations.latest = r.configurations.committed + r.configurations.latestIndex = r.configurations.committedIndex + } + newEntries = a.Entries[i:] + break + } + } + + if n := len(newEntries); n > 0 { + // Append the new entries + if err := r.logs.StoreLogs(newEntries); err != nil { + r.logger.Printf("[ERR] raft: Failed to append to logs: %v", err) + // TODO: leaving r.getLastLog() in the wrong + // state if there was a truncation above + return + } + + // Handle any new configuration changes + for _, newEntry := range newEntries { + r.processConfigurationLogEntry(newEntry) + } + + // Update the lastLog + last := newEntries[n-1] + r.setLastLog(last.Index, last.Term) + } + + metrics.MeasureSince([]string{"raft", "rpc", "appendEntries", "storeLogs"}, start) + } + + // Update the commit index + if a.LeaderCommitIndex > 0 && a.LeaderCommitIndex > r.getCommitIndex() { + start := time.Now() + idx := min(a.LeaderCommitIndex, r.getLastIndex()) + r.setCommitIndex(idx) + if r.configurations.latestIndex <= idx { + r.configurations.committed = r.configurations.latest + r.configurations.committedIndex = r.configurations.latestIndex + } + r.processLogs(idx, nil) + 
metrics.MeasureSince([]string{"raft", "rpc", "appendEntries", "processLogs"}, start)
+	}
+
+	// Everything went well, set success
+	resp.Success = true
+	r.setLastContact()
+	return
+}
+
+// processConfigurationLogEntry takes a log entry and updates the latest
+// configuration if the entry results in a new configuration. This must only be
+// called from the main thread, or from NewRaft() before any threads have begun.
+func (r *Raft) processConfigurationLogEntry(entry *Log) {
+	if entry.Type == LogConfiguration {
+		r.configurations.committed = r.configurations.latest
+		r.configurations.committedIndex = r.configurations.latestIndex
+		r.configurations.latest = decodeConfiguration(entry.Data)
+		r.configurations.latestIndex = entry.Index
+	} else if entry.Type == LogAddPeerDeprecated || entry.Type == LogRemovePeerDeprecated {
+		r.configurations.committed = r.configurations.latest
+		r.configurations.committedIndex = r.configurations.latestIndex
+		r.configurations.latest = decodePeers(entry.Data, r.trans)
+		r.configurations.latestIndex = entry.Index
+	}
+}
+
+// requestVote is invoked when we get a request vote RPC call.
+func (r *Raft) requestVote(rpc RPC, req *RequestVoteRequest) {
+	defer metrics.MeasureSince([]string{"raft", "rpc", "requestVote"}, time.Now())
+	r.observe(*req)
+
+	// Setup a response
+	resp := &RequestVoteResponse{
+		RPCHeader: r.getRPCHeader(),
+		Term:      r.getCurrentTerm(),
+		Granted:   false,
+	}
+	var rpcErr error
+	defer func() {
+		rpc.Respond(resp, rpcErr)
+	}()
+
+	// Version 0 servers will panic unless the peers list is present. It's only
+	// used on them to produce a warning message.
+	if r.protocolVersion < 2 {
+		resp.Peers = encodePeers(r.configurations.latest, r.trans)
+	}
+
+	// Check if we have an existing leader [who's not the candidate]
+	candidate := r.trans.DecodePeer(req.Candidate)
+	if leader := r.Leader(); leader != "" && leader != candidate {
+		r.logger.Printf("[WARN] raft: Rejecting vote request from %v since we have a leader: %v",
+			candidate, leader)
+		return
+	}
+
+	// Ignore an older term
+	if req.Term < r.getCurrentTerm() {
+		return
+	}
+
+	// Increase the term if we see a newer one
+	if req.Term > r.getCurrentTerm() {
+		// Ensure transition to follower
+		r.setState(Follower)
+		r.setCurrentTerm(req.Term)
+		resp.Term = req.Term
+	}
+
+	// Check if we have voted yet
+	lastVoteTerm, err := r.stable.GetUint64(keyLastVoteTerm)
+	if err != nil && err.Error() != "not found" {
+		r.logger.Printf("[ERR] raft: Failed to get last vote term: %v", err)
+		return
+	}
+	lastVoteCandBytes, err := r.stable.Get(keyLastVoteCand)
+	if err != nil && err.Error() != "not found" {
+		r.logger.Printf("[ERR] raft: Failed to get last vote candidate: %v", err)
+		return
+	}
+
+	// Check if we've voted in this election before
+	if lastVoteTerm == req.Term && lastVoteCandBytes != nil {
+		r.logger.Printf("[INFO] raft: Duplicate RequestVote for same term: %d", req.Term)
+		if bytes.Compare(lastVoteCandBytes, req.Candidate) == 0 {
+			r.logger.Printf("[WARN] raft: Duplicate RequestVote from candidate: %s", req.Candidate)
+			resp.Granted = true
+		}
+		return
+	}
+
+	// Reject if their term is older
+	lastIdx, lastTerm := r.getLastEntry()
+	if lastTerm > req.LastLogTerm {
+		r.logger.Printf("[WARN] raft: Rejecting vote request from %v since our last term is greater (%d, %d)",
+			candidate, lastTerm, req.LastLogTerm)
+		return
+	}
+
+	if lastTerm == req.LastLogTerm && lastIdx > req.LastLogIndex {
+		r.logger.Printf("[WARN] raft: Rejecting vote request from %v since our last index is greater (%d, %d)",
+			candidate, lastIdx, req.LastLogIndex)
+		return
+	}
+
+	// Persist a vote for safety
+	if err := r.persistVote(req.Term, req.Candidate); err != nil {
+		r.logger.Printf("[ERR] raft: Failed to persist vote: %v", err)
+		return
+	}
+
+	resp.Granted = true
+	r.setLastContact()
+	return
+}
+
+// installSnapshot is invoked when we get an InstallSnapshot RPC call.
+// We must be in the follower state for this, since it means we are
+// too far behind a leader for log replay. This must only be called
+// from the main thread.
+func (r *Raft) installSnapshot(rpc RPC, req *InstallSnapshotRequest) {
+	defer metrics.MeasureSince([]string{"raft", "rpc", "installSnapshot"}, time.Now())
+	// Setup a response
+	resp := &InstallSnapshotResponse{
+		Term:    r.getCurrentTerm(),
+		Success: false,
+	}
+	var rpcErr error
+	defer func() {
+		rpc.Respond(resp, rpcErr)
+	}()
+
+	// Sanity check the version
+	if req.SnapshotVersion < SnapshotVersionMin ||
+		req.SnapshotVersion > SnapshotVersionMax {
+		rpcErr = fmt.Errorf("unsupported snapshot version %d", req.SnapshotVersion)
+		return
+	}
+
+	// Ignore an older term
+	if req.Term < r.getCurrentTerm() {
+		return
+	}
+
+	// Increase the term if we see a newer one
+	if req.Term > r.getCurrentTerm() {
+		// Ensure transition to follower
+		r.setState(Follower)
+		r.setCurrentTerm(req.Term)
+		resp.Term = req.Term
+	}
+
+	// Save the current leader
+	r.setLeader(ServerAddress(r.trans.DecodePeer(req.Leader)))
+
+	// Create a new snapshot
+	var reqConfiguration Configuration
+	var reqConfigurationIndex uint64
+	if req.SnapshotVersion > 0 {
+		reqConfiguration = decodeConfiguration(req.Configuration)
+		reqConfigurationIndex = req.ConfigurationIndex
+	} else {
+		reqConfiguration = decodePeers(req.Peers, r.trans)
+		reqConfigurationIndex = req.LastLogIndex
+	}
+	version := getSnapshotVersion(r.protocolVersion)
+	sink, err := r.snapshots.Create(version, req.LastLogIndex, req.LastLogTerm,
+		reqConfiguration, reqConfigurationIndex, r.trans)
+	if err != nil {
+		r.logger.Printf("[ERR] raft: Failed to create snapshot to install: %v", err)
+		rpcErr = fmt.Errorf("failed to create snapshot: %v", err)
+		return
+	}
+
+	// Spill the remote snapshot to disk
+	n, err := io.Copy(sink, rpc.Reader)
+	if err != nil {
+		sink.Cancel()
+		r.logger.Printf("[ERR] raft: Failed to copy snapshot: %v", err)
+		rpcErr = err
+		return
+	}
+
+	// Check that we received it all
+	if n != req.Size {
+		sink.Cancel()
+		r.logger.Printf("[ERR] raft: Failed to receive whole snapshot: %d / %d", n, req.Size)
+		rpcErr = fmt.Errorf("short read")
+		return
+	}
+
+	// Finalize the snapshot
+	if err := sink.Close(); err != nil {
+		r.logger.Printf("[ERR] raft: Failed to finalize snapshot: %v", err)
+		rpcErr = err
+		return
+	}
+	r.logger.Printf("[INFO] raft: Copied %d bytes to local snapshot", n)
+
+	// Restore snapshot
+	future := &restoreFuture{ID: sink.ID()}
+	future.init()
+	select {
+	case r.fsmMutateCh <- future:
+	case <-r.shutdownCh:
+		future.respond(ErrRaftShutdown)
+		return
+	}
+
+	// Wait for the restore to happen
+	if err := future.Error(); err != nil {
+		r.logger.Printf("[ERR] raft: Failed to restore snapshot: %v", err)
+		rpcErr = err
+		return
+	}
+
+	// Update the lastApplied so we don't replay old logs
+	r.setLastApplied(req.LastLogIndex)
+
+	// Update the last stable snapshot info
+	r.setLastSnapshot(req.LastLogIndex, req.LastLogTerm)
+
+	// Restore the peer set
+	r.configurations.latest = reqConfiguration
+	r.configurations.latestIndex = reqConfigurationIndex
+	r.configurations.committed = reqConfiguration
r.configurations.committedIndex = reqConfigurationIndex
+
+	// Compact logs, continue even if this fails
+	if err := r.compactLogs(req.LastLogIndex); err != nil {
+		r.logger.Printf("[ERR] raft: Failed to compact logs: %v", err)
+	}
+
+	r.logger.Printf("[INFO] raft: Installed remote snapshot")
+	resp.Success = true
+	r.setLastContact()
+	return
+}
+
+// setLastContact is used to set the last contact time to now
+func (r *Raft) setLastContact() {
+	r.lastContactLock.Lock()
+	r.lastContact = time.Now()
+	r.lastContactLock.Unlock()
+}
+
+type voteResult struct {
+	RequestVoteResponse
+	voterID ServerID
+}
+
+// electSelf is used to send a RequestVote RPC to all peers, and vote for
+// ourself. This has the side effect of incrementing the current term. The
+// response channel returned is used to wait for all the responses (including a
+// vote for ourself). This must only be called from the main thread.
+func (r *Raft) electSelf() <-chan *voteResult {
+	// Create a response channel
+	respCh := make(chan *voteResult, len(r.configurations.latest.Servers))
+
+	// Increment the term
+	r.setCurrentTerm(r.getCurrentTerm() + 1)
+
+	// Construct the request
+	lastIdx, lastTerm := r.getLastEntry()
+	req := &RequestVoteRequest{
+		RPCHeader:    r.getRPCHeader(),
+		Term:         r.getCurrentTerm(),
+		Candidate:    r.trans.EncodePeer(r.localAddr),
+		LastLogIndex: lastIdx,
+		LastLogTerm:  lastTerm,
+	}
+
+	// Construct a function to ask for a vote
+	askPeer := func(peer Server) {
+		r.goFunc(func() {
+			defer metrics.MeasureSince([]string{"raft", "candidate", "electSelf"}, time.Now())
+			resp := &voteResult{voterID: peer.ID}
+			err := r.trans.RequestVote(peer.Address, req, &resp.RequestVoteResponse)
+			if err != nil {
+				r.logger.Printf("[ERR] raft: Failed to make RequestVote RPC to %v: %v", peer, err)
+				resp.Term = req.Term
+				resp.Granted = false
+			}
+			respCh <- resp
+		})
+	}
+
+	// For each peer, request a vote
+	for _, server := range r.configurations.latest.Servers {
+		if server.Suffrage == Voter {
+			if server.ID == r.localID {
+				// Persist a vote for ourselves
+				if err := r.persistVote(req.Term, req.Candidate); err != nil {
+					r.logger.Printf("[ERR] raft: Failed to persist vote: %v", err)
+					return nil
+				}
+				// Include our own vote
+				respCh <- &voteResult{
+					RequestVoteResponse: RequestVoteResponse{
+						RPCHeader: r.getRPCHeader(),
+						Term:      req.Term,
+						Granted:   true,
+					},
+					voterID: r.localID,
+				}
+			} else {
+				askPeer(server)
+			}
+		}
+	}
+
+	return respCh
+}
+
+// persistVote is used to persist our vote for safety.
+func (r *Raft) persistVote(term uint64, candidate []byte) error {
+	if err := r.stable.SetUint64(keyLastVoteTerm, term); err != nil {
+		return err
+	}
+	if err := r.stable.Set(keyLastVoteCand, candidate); err != nil {
+		return err
+	}
+	return nil
+}
+
+// setCurrentTerm is used to set the current term in a durable manner.
+func (r *Raft) setCurrentTerm(t uint64) {
+	// Persist to disk first
+	if err := r.stable.SetUint64(keyCurrentTerm, t); err != nil {
+		panic(fmt.Errorf("failed to save current term: %v", err))
+	}
+	r.raftState.setCurrentTerm(t)
+}
+
+// setState is used to update the current state. Any state
+// transition causes the known leader to be cleared. This means
+// that leader should be set only after updating the state.
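+// State transitions are also published to registered observers (see
+// observer.go); a minimal watching sketch, with an arbitrary channel size:
+//
+//	ch := make(chan Observation, 16)
+//	r.RegisterObserver(NewObserver(ch, false, func(o *Observation) bool {
+//		_, ok := o.Data.(RaftState)
+//		return ok
+//	}))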
+func (r *Raft) setState(state RaftState) {
+	r.setLeader("")
+	oldState := r.raftState.getState()
+	r.raftState.setState(state)
+	if oldState != state {
+		r.observe(state)
+	}
+}
diff --git a/vendor/github.com/hashicorp/raft/replication.go b/vendor/github.com/hashicorp/raft/replication.go
new file mode 100644
index 0000000000..6839273439
--- /dev/null
+++ b/vendor/github.com/hashicorp/raft/replication.go
@@ -0,0 +1,561 @@
+package raft
+
+import (
+	"errors"
+	"fmt"
+	"sync"
+	"time"
+
+	"github.com/armon/go-metrics"
+)
+
+const (
+	maxFailureScale = 12
+	failureWait     = 10 * time.Millisecond
+)
+
+var (
+	// ErrLogNotFound indicates a given log entry is not available.
+	ErrLogNotFound = errors.New("log not found")
+
+	// ErrPipelineReplicationNotSupported can be returned by the transport to
+	// signal that pipeline replication is not supported in general, and that
+	// no error message should be produced.
+	ErrPipelineReplicationNotSupported = errors.New("pipeline replication not supported")
+)
+
+// followerReplication is in charge of sending snapshots and log entries from
+// this leader during this particular term to a remote follower.
+type followerReplication struct {
+	// peer contains the network address and ID of the remote follower.
+	peer Server
+
+	// commitment tracks the entries acknowledged by followers so that the
+	// leader's commit index can advance. It is updated on successful
+	// AppendEntries responses.
+	commitment *commitment
+
+	// stopCh is notified/closed when this leader steps down or the follower is
+	// removed from the cluster. In the follower removed case, it carries a log
+	// index; replication should be attempted with a best effort up through that
+	// index, before exiting.
+	stopCh chan uint64
+	// triggerCh is notified every time new entries are appended to the log.
+	triggerCh chan struct{}
+
+	// currentTerm is the term of this leader, to be included in AppendEntries
+	// requests.
+	currentTerm uint64
+	// nextIndex is the index of the next log entry to send to the follower,
+	// which may fall past the end of the log.
+	nextIndex uint64
+
+	// lastContact is updated to the current time whenever any response is
+	// received from the follower (successful or not). This is used to check
+	// whether the leader should step down (Raft.checkLeaderLease()).
+	lastContact time.Time
+	// lastContactLock protects 'lastContact'.
+	lastContactLock sync.RWMutex
+
+	// failures counts the number of failed RPCs since the last success, which is
+	// used to apply backoff.
+	failures uint64
+
+	// notifyCh is notified to send out a heartbeat, which is used to check that
+	// this server is still leader.
+	notifyCh chan struct{}
+	// notify is a list of futures to be resolved upon receipt of an
+	// acknowledgement, then cleared from this list.
+	notify []*verifyFuture
+	// notifyLock protects 'notify'.
+	notifyLock sync.Mutex
+
+	// stepDown is used to indicate to the leader that we
+	// should step down based on information from a follower.
+	stepDown chan struct{}
+
+	// allowPipeline is used to determine when to pipeline the AppendEntries RPCs.
+	// It is private to this replication goroutine.
+	allowPipeline bool
+}
+
+// notifyAll is used to notify all the waiting verify futures
+// if the follower believes we are still the leader.
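+// The pending futures are swapped out under notifyLock and the votes are
+// cast outside it, keeping the critical section short.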
+func (s *followerReplication) notifyAll(leader bool) {
+	// Clear the waiting notifies minimizing lock time
+	s.notifyLock.Lock()
+	n := s.notify
+	s.notify = nil
+	s.notifyLock.Unlock()
+
+	// Submit our votes
+	for _, v := range n {
+		v.vote(leader)
+	}
+}
+
+// LastContact returns the time of last contact.
+func (s *followerReplication) LastContact() time.Time {
+	s.lastContactLock.RLock()
+	last := s.lastContact
+	s.lastContactLock.RUnlock()
+	return last
+}
+
+// setLastContact sets the last contact to the current time.
+func (s *followerReplication) setLastContact() {
+	s.lastContactLock.Lock()
+	s.lastContact = time.Now()
+	s.lastContactLock.Unlock()
+}
+
+// replicate is a long running routine that replicates log entries to a single
+// follower.
+func (r *Raft) replicate(s *followerReplication) {
+	// Start an async heartbeating routine
+	stopHeartbeat := make(chan struct{})
+	defer close(stopHeartbeat)
+	r.goFunc(func() { r.heartbeat(s, stopHeartbeat) })
+
+RPC:
+	shouldStop := false
+	for !shouldStop {
+		select {
+		case maxIndex := <-s.stopCh:
+			// Make a best effort to replicate up to this index
+			if maxIndex > 0 {
+				r.replicateTo(s, maxIndex)
+			}
+			return
+		case <-s.triggerCh:
+			lastLogIdx, _ := r.getLastLog()
+			shouldStop = r.replicateTo(s, lastLogIdx)
+		case <-randomTimeout(r.conf.CommitTimeout): // TODO: what is this?
+			lastLogIdx, _ := r.getLastLog()
+			shouldStop = r.replicateTo(s, lastLogIdx)
+		}
+
+		// If things look healthy, switch to pipeline mode
+		if !shouldStop && s.allowPipeline {
+			goto PIPELINE
+		}
+	}
+	return
+
+PIPELINE:
+	// Disable until re-enabled
+	s.allowPipeline = false
+
+	// Replicates using a pipeline for high performance. This method
+	// is not able to gracefully recover from errors, and so we fall back
+	// to standard mode on failure.
+	if err := r.pipelineReplicate(s); err != nil {
+		if err != ErrPipelineReplicationNotSupported {
+			r.logger.Printf("[ERR] raft: Failed to start pipeline replication to %s: %s", s.peer, err)
+		}
+	}
+	goto RPC
+}
+
+// replicateTo is a helper to replicate(), used to replicate the logs up to a
+// given last index.
+// If the follower log is behind, we take care to bring them up to date.
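+//
+// For example (illustrative numbers): if nextIndex is 120 and the follower
+// rejects the request reporting LastLog=90, nextIndex is recomputed below as
+// max(min(120-1, 90+1), 1) = 91, so the next attempt resends from entry 91.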
+func (r *Raft) replicateTo(s *followerReplication, lastIndex uint64) (shouldStop bool) { + // Create the base request + var req AppendEntriesRequest + var resp AppendEntriesResponse + var start time.Time +START: + // Prevent an excessive retry rate on errors + if s.failures > 0 { + select { + case <-time.After(backoff(failureWait, s.failures, maxFailureScale)): + case <-r.shutdownCh: + } + } + + // Setup the request + if err := r.setupAppendEntries(s, &req, s.nextIndex, lastIndex); err == ErrLogNotFound { + goto SEND_SNAP + } else if err != nil { + return + } + + // Make the RPC call + start = time.Now() + if err := r.trans.AppendEntries(s.peer.Address, &req, &resp); err != nil { + r.logger.Printf("[ERR] raft: Failed to AppendEntries to %v: %v", s.peer, err) + s.failures++ + return + } + appendStats(string(s.peer.ID), start, float32(len(req.Entries))) + + // Check for a newer term, stop running + if resp.Term > req.Term { + r.handleStaleTerm(s) + return true + } + + // Update the last contact + s.setLastContact() + + // Update s based on success + if resp.Success { + // Update our replication state + updateLastAppended(s, &req) + + // Clear any failures, allow pipelining + s.failures = 0 + s.allowPipeline = true + } else { + s.nextIndex = max(min(s.nextIndex-1, resp.LastLog+1), 1) + if resp.NoRetryBackoff { + s.failures = 0 + } else { + s.failures++ + } + r.logger.Printf("[WARN] raft: AppendEntries to %v rejected, sending older logs (next: %d)", s.peer, s.nextIndex) + } + +CHECK_MORE: + // Poll the stop channel here in case we are looping and have been asked + // to stop, or have stepped down as leader. Even for the best effort case + // where we are asked to replicate to a given index and then shutdown, + // it's better to not loop in here to send lots of entries to a straggler + // that's leaving the cluster anyways. + select { + case <-s.stopCh: + return true + default: + } + + // Check if there are more logs to replicate + if s.nextIndex <= lastIndex { + goto START + } + return + + // SEND_SNAP is used when we fail to get a log, usually because the follower + // is too far behind, and we must ship a snapshot down instead +SEND_SNAP: + if stop, err := r.sendLatestSnapshot(s); stop { + return true + } else if err != nil { + r.logger.Printf("[ERR] raft: Failed to send snapshot to %v: %v", s.peer, err) + return + } + + // Check if there is more to replicate + goto CHECK_MORE +} + +// sendLatestSnapshot is used to send the latest snapshot we have +// down to our follower. 
+func (r *Raft) sendLatestSnapshot(s *followerReplication) (bool, error) { + // Get the snapshots + snapshots, err := r.snapshots.List() + if err != nil { + r.logger.Printf("[ERR] raft: Failed to list snapshots: %v", err) + return false, err + } + + // Check we have at least a single snapshot + if len(snapshots) == 0 { + return false, fmt.Errorf("no snapshots found") + } + + // Open the most recent snapshot + snapID := snapshots[0].ID + meta, snapshot, err := r.snapshots.Open(snapID) + if err != nil { + r.logger.Printf("[ERR] raft: Failed to open snapshot %v: %v", snapID, err) + return false, err + } + defer snapshot.Close() + + // Setup the request + req := InstallSnapshotRequest{ + RPCHeader: r.getRPCHeader(), + SnapshotVersion: meta.Version, + Term: s.currentTerm, + Leader: r.trans.EncodePeer(r.localAddr), + LastLogIndex: meta.Index, + LastLogTerm: meta.Term, + Peers: meta.Peers, + Size: meta.Size, + Configuration: encodeConfiguration(meta.Configuration), + ConfigurationIndex: meta.ConfigurationIndex, + } + + // Make the call + start := time.Now() + var resp InstallSnapshotResponse + if err := r.trans.InstallSnapshot(s.peer.Address, &req, &resp, snapshot); err != nil { + r.logger.Printf("[ERR] raft: Failed to install snapshot %v: %v", snapID, err) + s.failures++ + return false, err + } + metrics.MeasureSince([]string{"raft", "replication", "installSnapshot", string(s.peer.ID)}, start) + + // Check for a newer term, stop running + if resp.Term > req.Term { + r.handleStaleTerm(s) + return true, nil + } + + // Update the last contact + s.setLastContact() + + // Check for success + if resp.Success { + // Update the indexes + s.nextIndex = meta.Index + 1 + s.commitment.match(s.peer.ID, meta.Index) + + // Clear any failures + s.failures = 0 + + // Notify we are still leader + s.notifyAll(true) + } else { + s.failures++ + r.logger.Printf("[WARN] raft: InstallSnapshot to %v rejected", s.peer) + } + return false, nil +} + +// heartbeat is used to periodically invoke AppendEntries on a peer +// to ensure they don't time out. This is done async of replicate(), +// since that routine could potentially be blocked on disk IO. +func (r *Raft) heartbeat(s *followerReplication, stopCh chan struct{}) { + var failures uint64 + req := AppendEntriesRequest{ + RPCHeader: r.getRPCHeader(), + Term: s.currentTerm, + Leader: r.trans.EncodePeer(r.localAddr), + } + var resp AppendEntriesResponse + for { + // Wait for the next heartbeat interval or forced notify + select { + case <-s.notifyCh: + case <-randomTimeout(r.conf.HeartbeatTimeout / 10): + case <-stopCh: + return + } + + start := time.Now() + if err := r.trans.AppendEntries(s.peer.Address, &req, &resp); err != nil { + r.logger.Printf("[ERR] raft: Failed to heartbeat to %v: %v", s.peer.Address, err) + failures++ + select { + case <-time.After(backoff(failureWait, failures, maxFailureScale)): + case <-stopCh: + } + } else { + s.setLastContact() + failures = 0 + metrics.MeasureSince([]string{"raft", "replication", "heartbeat", string(s.peer.ID)}, start) + s.notifyAll(resp.Success) + } + } +} + +// pipelineReplicate is used when we have synchronized our state with the follower, +// and want to switch to a higher performance pipeline mode of replication. +// We only pipeline AppendEntries commands, and if we ever hit an error, we fall +// back to the standard replication which can handle more complex situations. 
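+//
+// A transport that cannot pipeline may return
+// ErrPipelineReplicationNotSupported from AppendEntriesPipeline; in that case
+// replicate() stays in the plain RPC mode without logging an error.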
+func (r *Raft) pipelineReplicate(s *followerReplication) error { + // Create a new pipeline + pipeline, err := r.trans.AppendEntriesPipeline(s.peer.Address) + if err != nil { + return err + } + defer pipeline.Close() + + // Log start and stop of pipeline + r.logger.Printf("[INFO] raft: pipelining replication to peer %v", s.peer) + defer r.logger.Printf("[INFO] raft: aborting pipeline replication to peer %v", s.peer) + + // Create a shutdown and finish channel + stopCh := make(chan struct{}) + finishCh := make(chan struct{}) + + // Start a dedicated decoder + r.goFunc(func() { r.pipelineDecode(s, pipeline, stopCh, finishCh) }) + + // Start pipeline sends at the last good nextIndex + nextIndex := s.nextIndex + + shouldStop := false +SEND: + for !shouldStop { + select { + case <-finishCh: + break SEND + case maxIndex := <-s.stopCh: + // Make a best effort to replicate up to this index + if maxIndex > 0 { + r.pipelineSend(s, pipeline, &nextIndex, maxIndex) + } + break SEND + case <-s.triggerCh: + lastLogIdx, _ := r.getLastLog() + shouldStop = r.pipelineSend(s, pipeline, &nextIndex, lastLogIdx) + case <-randomTimeout(r.conf.CommitTimeout): + lastLogIdx, _ := r.getLastLog() + shouldStop = r.pipelineSend(s, pipeline, &nextIndex, lastLogIdx) + } + } + + // Stop our decoder, and wait for it to finish + close(stopCh) + select { + case <-finishCh: + case <-r.shutdownCh: + } + return nil +} + +// pipelineSend is used to send data over a pipeline. It is a helper to +// pipelineReplicate. +func (r *Raft) pipelineSend(s *followerReplication, p AppendPipeline, nextIdx *uint64, lastIndex uint64) (shouldStop bool) { + // Create a new append request + req := new(AppendEntriesRequest) + if err := r.setupAppendEntries(s, req, *nextIdx, lastIndex); err != nil { + return true + } + + // Pipeline the append entries + if _, err := p.AppendEntries(req, new(AppendEntriesResponse)); err != nil { + r.logger.Printf("[ERR] raft: Failed to pipeline AppendEntries to %v: %v", s.peer, err) + return true + } + + // Increase the next send log to avoid re-sending old logs + if n := len(req.Entries); n > 0 { + last := req.Entries[n-1] + *nextIdx = last.Index + 1 + } + return false +} + +// pipelineDecode is used to decode the responses of pipelined requests. +func (r *Raft) pipelineDecode(s *followerReplication, p AppendPipeline, stopCh, finishCh chan struct{}) { + defer close(finishCh) + respCh := p.Consumer() + for { + select { + case ready := <-respCh: + req, resp := ready.Request(), ready.Response() + appendStats(string(s.peer.ID), ready.Start(), float32(len(req.Entries))) + + // Check for a newer term, stop running + if resp.Term > req.Term { + r.handleStaleTerm(s) + return + } + + // Update the last contact + s.setLastContact() + + // Abort pipeline if not successful + if !resp.Success { + return + } + + // Update our replication state + updateLastAppended(s, req) + case <-stopCh: + return + } + } +} + +// setupAppendEntries is used to setup an append entries request. 
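+//
+// For example (illustrative numbers): with MaxAppendEntries=64, nextIndex=100
+// and lastIndex=1000, one request carries the 64 entries 100..163, and
+// replicateTo keeps looping until nextIndex moves past lastIndex.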
+func (r *Raft) setupAppendEntries(s *followerReplication, req *AppendEntriesRequest, nextIndex, lastIndex uint64) error { + req.RPCHeader = r.getRPCHeader() + req.Term = s.currentTerm + req.Leader = r.trans.EncodePeer(r.localAddr) + req.LeaderCommitIndex = r.getCommitIndex() + if err := r.setPreviousLog(req, nextIndex); err != nil { + return err + } + if err := r.setNewLogs(req, nextIndex, lastIndex); err != nil { + return err + } + return nil +} + +// setPreviousLog is used to setup the PrevLogEntry and PrevLogTerm for an +// AppendEntriesRequest given the next index to replicate. +func (r *Raft) setPreviousLog(req *AppendEntriesRequest, nextIndex uint64) error { + // Guard for the first index, since there is no 0 log entry + // Guard against the previous index being a snapshot as well + lastSnapIdx, lastSnapTerm := r.getLastSnapshot() + if nextIndex == 1 { + req.PrevLogEntry = 0 + req.PrevLogTerm = 0 + + } else if (nextIndex - 1) == lastSnapIdx { + req.PrevLogEntry = lastSnapIdx + req.PrevLogTerm = lastSnapTerm + + } else { + var l Log + if err := r.logs.GetLog(nextIndex-1, &l); err != nil { + r.logger.Printf("[ERR] raft: Failed to get log at index %d: %v", + nextIndex-1, err) + return err + } + + // Set the previous index and term (0 if nextIndex is 1) + req.PrevLogEntry = l.Index + req.PrevLogTerm = l.Term + } + return nil +} + +// setNewLogs is used to setup the logs which should be appended for a request. +func (r *Raft) setNewLogs(req *AppendEntriesRequest, nextIndex, lastIndex uint64) error { + // Append up to MaxAppendEntries or up to the lastIndex + req.Entries = make([]*Log, 0, r.conf.MaxAppendEntries) + maxIndex := min(nextIndex+uint64(r.conf.MaxAppendEntries)-1, lastIndex) + for i := nextIndex; i <= maxIndex; i++ { + oldLog := new(Log) + if err := r.logs.GetLog(i, oldLog); err != nil { + r.logger.Printf("[ERR] raft: Failed to get log at index %d: %v", i, err) + return err + } + req.Entries = append(req.Entries, oldLog) + } + return nil +} + +// appendStats is used to emit stats about an AppendEntries invocation. +func appendStats(peer string, start time.Time, logs float32) { + metrics.MeasureSince([]string{"raft", "replication", "appendEntries", "rpc", peer}, start) + metrics.IncrCounter([]string{"raft", "replication", "appendEntries", "logs", peer}, logs) +} + +// handleStaleTerm is used when a follower indicates that we have a stale term. +func (r *Raft) handleStaleTerm(s *followerReplication) { + r.logger.Printf("[ERR] raft: peer %v has newer term, stopping replication", s.peer) + s.notifyAll(false) // No longer leader + asyncNotifyCh(s.stepDown) +} + +// updateLastAppended is used to update follower replication state after a +// successful AppendEntries RPC. +// TODO: This isn't used during InstallSnapshot, but the code there is similar. +func updateLastAppended(s *followerReplication, req *AppendEntriesRequest) { + // Mark any inflight logs as committed + if logs := req.Entries; len(logs) > 0 { + last := logs[len(logs)-1] + s.nextIndex = last.Index + 1 + s.commitment.match(s.peer.ID, last.Index) + } + + // Notify still leader + s.notifyAll(true) +} diff --git a/vendor/github.com/hashicorp/raft/snapshot.go b/vendor/github.com/hashicorp/raft/snapshot.go new file mode 100644 index 0000000000..5287ebc418 --- /dev/null +++ b/vendor/github.com/hashicorp/raft/snapshot.go @@ -0,0 +1,239 @@ +package raft + +import ( + "fmt" + "io" + "time" + + "github.com/armon/go-metrics" +) + +// SnapshotMeta is for metadata of a snapshot. 
+type SnapshotMeta struct {
+	// Version is the version number of the snapshot metadata. This does not cover
+	// the application's data in the snapshot, that should be versioned
+	// separately.
+	Version SnapshotVersion
+
+	// ID is opaque to the store, and is used for opening.
+	ID string
+
+	// Index and Term store when the snapshot was taken.
+	Index uint64
+	Term  uint64
+
+	// Peers is deprecated and used to support version 0 snapshots, but will
+	// be populated in version 1 snapshots as well to help with upgrades.
+	Peers []byte
+
+	// Configuration and ConfigurationIndex are present in version 1
+	// snapshots and later.
+	Configuration      Configuration
+	ConfigurationIndex uint64
+
+	// Size is the size of the snapshot in bytes.
+	Size int64
+}
+
+// SnapshotStore interface is used to allow for flexible implementations
+// of snapshot storage and retrieval. For example, a client could implement
+// a shared state store such as S3, allowing new nodes to restore snapshots
+// without streaming from the leader.
+type SnapshotStore interface {
+	// Create is used to begin a snapshot at a given index and term, and with
+	// the given committed configuration. The version parameter controls
+	// which snapshot version to create.
+	Create(version SnapshotVersion, index, term uint64, configuration Configuration,
+		configurationIndex uint64, trans Transport) (SnapshotSink, error)
+
+	// List is used to list the available snapshots in the store.
+	// It should return them in descending order, with the highest index first.
+	List() ([]*SnapshotMeta, error)
+
+	// Open takes a snapshot ID and provides a ReadCloser. Once close is
+	// called it is assumed the snapshot is no longer needed.
+	Open(id string) (*SnapshotMeta, io.ReadCloser, error)
+}
+
+// SnapshotSink is returned by StartSnapshot. The FSM will Write state
+// to the sink and call Close on completion. On error, Cancel will be invoked.
+type SnapshotSink interface {
+	io.WriteCloser
+	ID() string
+	Cancel() error
+}
+
+// runSnapshots is a long running goroutine used to manage taking
+// new snapshots of the FSM. It runs in parallel to the FSM and
+// main goroutines, so that snapshots do not block normal operation.
+func (r *Raft) runSnapshots() {
+	for {
+		select {
+		case <-randomTimeout(r.conf.SnapshotInterval):
+			// Check if we should snapshot
+			if !r.shouldSnapshot() {
+				continue
+			}
+
+			// Trigger a snapshot
+			if _, err := r.takeSnapshot(); err != nil {
+				r.logger.Printf("[ERR] raft: Failed to take snapshot: %v", err)
+			}
+
+		case future := <-r.userSnapshotCh:
+			// User-triggered, run immediately
+			id, err := r.takeSnapshot()
+			if err != nil {
+				r.logger.Printf("[ERR] raft: Failed to take snapshot: %v", err)
+			} else {
+				future.opener = func() (*SnapshotMeta, io.ReadCloser, error) {
+					return r.snapshots.Open(id)
+				}
+			}
+			future.respond(err)
+
+		case <-r.shutdownCh:
+			return
+		}
+	}
+}
+
+// shouldSnapshot checks if we meet the conditions to take
+// a new snapshot.
+func (r *Raft) shouldSnapshot() bool {
+	// Check the last snapshot index
+	lastSnap, _ := r.getLastSnapshot()
+
+	// Check the last log index
+	lastIdx, err := r.logs.LastIndex()
+	if err != nil {
+		r.logger.Printf("[ERR] raft: Failed to get last log index: %v", err)
+		return false
+	}
+
+	// Compare the delta to the threshold
+	delta := lastIdx - lastSnap
+	return delta >= r.conf.SnapshotThreshold
+}
+
+// takeSnapshot is used to take a new snapshot. This must only be called from
+// the snapshot thread, never the main thread.
This returns the ID of the new +// snapshot, along with an error. +func (r *Raft) takeSnapshot() (string, error) { + defer metrics.MeasureSince([]string{"raft", "snapshot", "takeSnapshot"}, time.Now()) + + // Create a request for the FSM to perform a snapshot. + snapReq := &reqSnapshotFuture{} + snapReq.init() + + // Wait for dispatch or shutdown. + select { + case r.fsmSnapshotCh <- snapReq: + case <-r.shutdownCh: + return "", ErrRaftShutdown + } + + // Wait until we get a response + if err := snapReq.Error(); err != nil { + if err != ErrNothingNewToSnapshot { + err = fmt.Errorf("failed to start snapshot: %v", err) + } + return "", err + } + defer snapReq.snapshot.Release() + + // Make a request for the configurations and extract the committed info. + // We have to use the future here to safely get this information since + // it is owned by the main thread. + configReq := &configurationsFuture{} + configReq.init() + select { + case r.configurationsCh <- configReq: + case <-r.shutdownCh: + return "", ErrRaftShutdown + } + if err := configReq.Error(); err != nil { + return "", err + } + committed := configReq.configurations.committed + committedIndex := configReq.configurations.committedIndex + + // We don't support snapshots while there's a config change outstanding + // since the snapshot doesn't have a means to represent this state. This + // is a little weird because we need the FSM to apply an index that's + // past the configuration change, even though the FSM itself doesn't see + // the configuration changes. It should be ok in practice with normal + // application traffic flowing through the FSM. If there's none of that + // then it's not crucial that we snapshot, since there's not much going + // on Raft-wise. + if snapReq.index < committedIndex { + return "", fmt.Errorf("cannot take snapshot now, wait until the configuration entry at %v has been applied (have applied %v)", + committedIndex, snapReq.index) + } + + // Create a new snapshot. + r.logger.Printf("[INFO] raft: Starting snapshot up to %d", snapReq.index) + start := time.Now() + version := getSnapshotVersion(r.protocolVersion) + sink, err := r.snapshots.Create(version, snapReq.index, snapReq.term, committed, committedIndex, r.trans) + if err != nil { + return "", fmt.Errorf("failed to create snapshot: %v", err) + } + metrics.MeasureSince([]string{"raft", "snapshot", "create"}, start) + + // Try to persist the snapshot. + start = time.Now() + if err := snapReq.snapshot.Persist(sink); err != nil { + sink.Cancel() + return "", fmt.Errorf("failed to persist snapshot: %v", err) + } + metrics.MeasureSince([]string{"raft", "snapshot", "persist"}, start) + + // Close and check for error. + if err := sink.Close(); err != nil { + return "", fmt.Errorf("failed to close snapshot: %v", err) + } + + // Update the last stable snapshot info. + r.setLastSnapshot(snapReq.index, snapReq.term) + + // Compact the logs. + if err := r.compactLogs(snapReq.index); err != nil { + return "", err + } + + r.logger.Printf("[INFO] raft: Snapshot to %d complete", snapReq.index) + return sink.ID(), nil +} + +// compactLogs takes the last inclusive index of a snapshot +// and trims the logs that are no longer needed. 
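+//
+// For example (illustrative numbers): with TrailingLogs=10240, a snapshot at
+// index 50000 and a last log index of 70000, logs are deleted up through
+// min(50000, 70000-10240) = 50000, so no entry after the snapshot is removed.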
+func (r *Raft) compactLogs(snapIdx uint64) error {
+	defer metrics.MeasureSince([]string{"raft", "compactLogs"}, time.Now())
+	// Determine log ranges to compact
+	minLog, err := r.logs.FirstIndex()
+	if err != nil {
+		return fmt.Errorf("failed to get first log index: %v", err)
+	}
+
+	// Check if we have enough logs to truncate
+	lastLogIdx, _ := r.getLastLog()
+	if lastLogIdx <= r.conf.TrailingLogs {
+		return nil
+	}
+
+	// Truncate up to the end of the snapshot, or `TrailingLogs`
+	// back from the head, whichever is further back. This ensures
+	// at least `TrailingLogs` entries, but does not allow logs
+	// after the snapshot to be removed.
+	maxLog := min(snapIdx, lastLogIdx-r.conf.TrailingLogs)
+
+	// Log this
+	r.logger.Printf("[INFO] raft: Compacting logs from %d to %d", minLog, maxLog)
+
+	// Compact the logs
+	if err := r.logs.DeleteRange(minLog, maxLog); err != nil {
+		return fmt.Errorf("log compaction failed: %v", err)
+	}
+	return nil
+}
diff --git a/vendor/github.com/hashicorp/raft/stable.go b/vendor/github.com/hashicorp/raft/stable.go
new file mode 100644
index 0000000000..ff59a8c570
--- /dev/null
+++ b/vendor/github.com/hashicorp/raft/stable.go
@@ -0,0 +1,15 @@
+package raft
+
+// StableStore is used to provide stable storage
+// of key configurations to ensure safety.
+type StableStore interface {
+	Set(key []byte, val []byte) error
+
+	// Get returns the value for key, or an empty byte slice if key was not found.
+	Get(key []byte) ([]byte, error)
+
+	SetUint64(key []byte, val uint64) error
+
+	// GetUint64 returns the uint64 value for key, or 0 if key was not found.
+	GetUint64(key []byte) (uint64, error)
+}
diff --git a/vendor/github.com/hashicorp/raft/state.go b/vendor/github.com/hashicorp/raft/state.go
new file mode 100644
index 0000000000..f6d658b8bb
--- /dev/null
+++ b/vendor/github.com/hashicorp/raft/state.go
@@ -0,0 +1,167 @@
+package raft
+
+import (
+	"sync"
+	"sync/atomic"
+)
+
+// RaftState captures the state of a Raft node: Follower, Candidate, Leader,
+// or Shutdown.
+type RaftState uint32
+
+const (
+	// Follower is the initial state of a Raft node.
+	Follower RaftState = iota
+
+	// Candidate is one of the valid states of a Raft node.
+	Candidate
+
+	// Leader is one of the valid states of a Raft node.
+	Leader
+
+	// Shutdown is the terminal state of a Raft node.
+	Shutdown
+)
+
+func (s RaftState) String() string {
+	switch s {
+	case Follower:
+		return "Follower"
+	case Candidate:
+		return "Candidate"
+	case Leader:
+		return "Leader"
+	case Shutdown:
+		return "Shutdown"
+	default:
+		return "Unknown"
+	}
+}
+
+// raftState is used to maintain various state variables
+// and provides an interface to set/get the variables in a
+// thread-safe manner.
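+//
+// A usage sketch: long-running routines are started via goFunc so that
+// waitShutdown can block until all of them have exited, e.g.
+//
+//	r.goFunc(func() { r.runSnapshots() })
+//	r.waitShutdown()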
+type raftState struct { + // The current term, cache of StableStore + currentTerm uint64 + + // Highest committed log entry + commitIndex uint64 + + // Last applied log to the FSM + lastApplied uint64 + + // protects 4 next fields + lastLock sync.Mutex + + // Cache the latest snapshot index/term + lastSnapshotIndex uint64 + lastSnapshotTerm uint64 + + // Cache the latest log from LogStore + lastLogIndex uint64 + lastLogTerm uint64 + + // Tracks running goroutines + routinesGroup sync.WaitGroup + + // The current state + state RaftState +} + +func (r *raftState) getState() RaftState { + stateAddr := (*uint32)(&r.state) + return RaftState(atomic.LoadUint32(stateAddr)) +} + +func (r *raftState) setState(s RaftState) { + stateAddr := (*uint32)(&r.state) + atomic.StoreUint32(stateAddr, uint32(s)) +} + +func (r *raftState) getCurrentTerm() uint64 { + return atomic.LoadUint64(&r.currentTerm) +} + +func (r *raftState) setCurrentTerm(term uint64) { + atomic.StoreUint64(&r.currentTerm, term) +} + +func (r *raftState) getLastLog() (index, term uint64) { + r.lastLock.Lock() + index = r.lastLogIndex + term = r.lastLogTerm + r.lastLock.Unlock() + return +} + +func (r *raftState) setLastLog(index, term uint64) { + r.lastLock.Lock() + r.lastLogIndex = index + r.lastLogTerm = term + r.lastLock.Unlock() +} + +func (r *raftState) getLastSnapshot() (index, term uint64) { + r.lastLock.Lock() + index = r.lastSnapshotIndex + term = r.lastSnapshotTerm + r.lastLock.Unlock() + return +} + +func (r *raftState) setLastSnapshot(index, term uint64) { + r.lastLock.Lock() + r.lastSnapshotIndex = index + r.lastSnapshotTerm = term + r.lastLock.Unlock() +} + +func (r *raftState) getCommitIndex() uint64 { + return atomic.LoadUint64(&r.commitIndex) +} + +func (r *raftState) setCommitIndex(index uint64) { + atomic.StoreUint64(&r.commitIndex, index) +} + +func (r *raftState) getLastApplied() uint64 { + return atomic.LoadUint64(&r.lastApplied) +} + +func (r *raftState) setLastApplied(index uint64) { + atomic.StoreUint64(&r.lastApplied, index) +} + +// Start a goroutine and properly handle the race between a routine +// starting and incrementing, and exiting and decrementing. +func (r *raftState) goFunc(f func()) { + r.routinesGroup.Add(1) + go func() { + defer r.routinesGroup.Done() + f() + }() +} + +func (r *raftState) waitShutdown() { + r.routinesGroup.Wait() +} + +// getLastIndex returns the last index in stable storage. +// Either from the last log or from the last snapshot. +func (r *raftState) getLastIndex() uint64 { + r.lastLock.Lock() + defer r.lastLock.Unlock() + return max(r.lastLogIndex, r.lastSnapshotIndex) +} + +// getLastEntry returns the last index and term in stable storage. +// Either from the last log or from the last snapshot. +func (r *raftState) getLastEntry() (uint64, uint64) { + r.lastLock.Lock() + defer r.lastLock.Unlock() + if r.lastLogIndex >= r.lastSnapshotIndex { + return r.lastLogIndex, r.lastLogTerm + } + return r.lastSnapshotIndex, r.lastSnapshotTerm +} diff --git a/vendor/github.com/hashicorp/raft/tcp_transport.go b/vendor/github.com/hashicorp/raft/tcp_transport.go new file mode 100644 index 0000000000..9281508a05 --- /dev/null +++ b/vendor/github.com/hashicorp/raft/tcp_transport.go @@ -0,0 +1,105 @@ +package raft + +import ( + "errors" + "io" + "log" + "net" + "time" +) + +var ( + errNotAdvertisable = errors.New("local bind address is not advertisable") + errNotTCP = errors.New("local address is not a TCP address") +) + +// TCPStreamLayer implements StreamLayer interface for plain TCP. 
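+//
+// A construction sketch (the address is illustrative):
+//
+//	trans, err := NewTCPTransport("127.0.0.1:0", nil, 3, 10*time.Second, os.Stderr)
+//
+// binds a listener, wraps it in a TCPStreamLayer, and returns a
+// NetworkTransport built on top of it.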
+type TCPStreamLayer struct { + advertise net.Addr + listener *net.TCPListener +} + +// NewTCPTransport returns a NetworkTransport that is built on top of +// a TCP streaming transport layer. +func NewTCPTransport( + bindAddr string, + advertise net.Addr, + maxPool int, + timeout time.Duration, + logOutput io.Writer, +) (*NetworkTransport, error) { + return newTCPTransport(bindAddr, advertise, maxPool, timeout, func(stream StreamLayer) *NetworkTransport { + return NewNetworkTransport(stream, maxPool, timeout, logOutput) + }) +} + +// NewTCPTransportWithLogger returns a NetworkTransport that is built on top of +// a TCP streaming transport layer, with log output going to the supplied Logger +func NewTCPTransportWithLogger( + bindAddr string, + advertise net.Addr, + maxPool int, + timeout time.Duration, + logger *log.Logger, +) (*NetworkTransport, error) { + return newTCPTransport(bindAddr, advertise, maxPool, timeout, func(stream StreamLayer) *NetworkTransport { + return NewNetworkTransportWithLogger(stream, maxPool, timeout, logger) + }) +} + +func newTCPTransport(bindAddr string, + advertise net.Addr, + maxPool int, + timeout time.Duration, + transportCreator func(stream StreamLayer) *NetworkTransport) (*NetworkTransport, error) { + // Try to bind + list, err := net.Listen("tcp", bindAddr) + if err != nil { + return nil, err + } + + // Create stream + stream := &TCPStreamLayer{ + advertise: advertise, + listener: list.(*net.TCPListener), + } + + // Verify that we have a usable advertise address + addr, ok := stream.Addr().(*net.TCPAddr) + if !ok { + list.Close() + return nil, errNotTCP + } + if addr.IP.IsUnspecified() { + list.Close() + return nil, errNotAdvertisable + } + + // Create the network transport + trans := transportCreator(stream) + return trans, nil +} + +// Dial implements the StreamLayer interface. +func (t *TCPStreamLayer) Dial(address ServerAddress, timeout time.Duration) (net.Conn, error) { + return net.DialTimeout("tcp", string(address), timeout) +} + +// Accept implements the net.Listener interface. +func (t *TCPStreamLayer) Accept() (c net.Conn, err error) { + return t.listener.Accept() +} + +// Close implements the net.Listener interface. +func (t *TCPStreamLayer) Close() (err error) { + return t.listener.Close() +} + +// Addr implements the net.Listener interface. +func (t *TCPStreamLayer) Addr() net.Addr { + // Use an advertise addr if provided + if t.advertise != nil { + return t.advertise + } + return t.listener.Addr() +} diff --git a/vendor/github.com/hashicorp/raft/transport.go b/vendor/github.com/hashicorp/raft/transport.go new file mode 100644 index 0000000000..633f97a8c5 --- /dev/null +++ b/vendor/github.com/hashicorp/raft/transport.go @@ -0,0 +1,124 @@ +package raft + +import ( + "io" + "time" +) + +// RPCResponse captures both a response and a potential error. +type RPCResponse struct { + Response interface{} + Error error +} + +// RPC has a command, and provides a response mechanism. +type RPC struct { + Command interface{} + Reader io.Reader // Set only for InstallSnapshot + RespChan chan<- RPCResponse +} + +// Respond is used to respond with a response, error or both +func (r *RPC) Respond(resp interface{}, err error) { + r.RespChan <- RPCResponse{resp, err} +} + +// Transport provides an interface for network transports +// to allow Raft to communicate with other nodes. +type Transport interface { + // Consumer returns a channel that can be used to + // consume and respond to RPC requests. 
+ Consumer() <-chan RPC + + // LocalAddr is used to return our local address to distinguish from our peers. + LocalAddr() ServerAddress + + // AppendEntriesPipeline returns an interface that can be used to pipeline + // AppendEntries requests. + AppendEntriesPipeline(target ServerAddress) (AppendPipeline, error) + + // AppendEntries sends the appropriate RPC to the target node. + AppendEntries(target ServerAddress, args *AppendEntriesRequest, resp *AppendEntriesResponse) error + + // RequestVote sends the appropriate RPC to the target node. + RequestVote(target ServerAddress, args *RequestVoteRequest, resp *RequestVoteResponse) error + + // InstallSnapshot is used to push a snapshot down to a follower. The data is read from + // the ReadCloser and streamed to the client. + InstallSnapshot(target ServerAddress, args *InstallSnapshotRequest, resp *InstallSnapshotResponse, data io.Reader) error + + // EncodePeer is used to serialize a peer's address. + EncodePeer(ServerAddress) []byte + + // DecodePeer is used to deserialize a peer's address. + DecodePeer([]byte) ServerAddress + + // SetHeartbeatHandler is used to setup a heartbeat handler + // as a fast-pass. This is to avoid head-of-line blocking from + // disk IO. If a Transport does not support this, it can simply + // ignore the call, and push the heartbeat onto the Consumer channel. + SetHeartbeatHandler(cb func(rpc RPC)) +} + +// WithClose is an interface that a transport may provide which +// allows a transport to be shut down cleanly when a Raft instance +// shuts down. +// +// It is defined separately from Transport as unfortunately it wasn't in the +// original interface specification. +type WithClose interface { + // Close permanently closes a transport, stopping + // any associated goroutines and freeing other resources. + Close() error +} + +// LoopbackTransport is an interface that provides a loopback transport suitable for testing +// e.g. InmemTransport. It's there so we don't have to rewrite tests. +type LoopbackTransport interface { + Transport // Embedded transport reference + WithPeers // Embedded peer management + WithClose // with a close routine +} + +// WithPeers is an interface that a transport may provide which allows for connection and +// disconnection. Unless the transport is a loopback transport, the transport specified to +// "Connect" is likely to be nil. +type WithPeers interface { + Connect(peer ServerAddress, t Transport) // Connect a peer + Disconnect(peer ServerAddress) // Disconnect a given peer + DisconnectAll() // Disconnect all peers, possibly to reconnect them later +} + +// AppendPipeline is used for pipelining AppendEntries requests. It is used +// to increase the replication throughput by masking latency and better +// utilizing bandwidth. +type AppendPipeline interface { + // AppendEntries is used to add another request to the pipeline. + // The send may block which is an effective form of back-pressure. + AppendEntries(args *AppendEntriesRequest, resp *AppendEntriesResponse) (AppendFuture, error) + + // Consumer returns a channel that can be used to consume + // response futures when they are ready. + Consumer() <-chan AppendFuture + + // Close closes the pipeline and cancels all inflight RPCs + Close() error +} + +// AppendFuture is used to return information about a pipelined AppendEntries request. +type AppendFuture interface { + Future + + // Start returns the time that the append request was started. + // It is always OK to call this method. 
+	Start() time.Time
+
+	// Request holds the parameters of the AppendEntries call.
+	// It is always OK to call this method.
+	Request() *AppendEntriesRequest
+
+	// Response holds the results of the AppendEntries call.
+	// This method must only be called after the Error
+	// method returns, and will only be valid on success.
+	Response() *AppendEntriesResponse
+}
diff --git a/vendor/github.com/hashicorp/raft/util.go b/vendor/github.com/hashicorp/raft/util.go
new file mode 100644
index 0000000000..90428d7437
--- /dev/null
+++ b/vendor/github.com/hashicorp/raft/util.go
@@ -0,0 +1,133 @@
+package raft
+
+import (
+	"bytes"
+	crand "crypto/rand"
+	"fmt"
+	"math"
+	"math/big"
+	"math/rand"
+	"time"
+
+	"github.com/hashicorp/go-msgpack/codec"
+)
+
+func init() {
+	// Ensure we use a high-entropy seed for the pseudo-random generator
+	rand.Seed(newSeed())
+}
+
+// returns an int64 from a crypto random source
+// can be used to seed a source for a math/rand.
+func newSeed() int64 {
+	r, err := crand.Int(crand.Reader, big.NewInt(math.MaxInt64))
+	if err != nil {
+		panic(fmt.Errorf("failed to read random bytes: %v", err))
+	}
+	return r.Int64()
+}
+
+// randomTimeout returns a value that is between the minVal and 2x minVal.
+func randomTimeout(minVal time.Duration) <-chan time.Time {
+	if minVal == 0 {
+		return nil
+	}
+	extra := (time.Duration(rand.Int63()) % minVal)
+	return time.After(minVal + extra)
+}
+
+// min returns the minimum.
+func min(a, b uint64) uint64 {
+	if a <= b {
+		return a
+	}
+	return b
+}
+
+// max returns the maximum.
+func max(a, b uint64) uint64 {
+	if a >= b {
+		return a
+	}
+	return b
+}
+
+// generateUUID is used to generate a random UUID.
+func generateUUID() string {
+	buf := make([]byte, 16)
+	if _, err := crand.Read(buf); err != nil {
+		panic(fmt.Errorf("failed to read random bytes: %v", err))
+	}
+
+	return fmt.Sprintf("%08x-%04x-%04x-%04x-%12x",
+		buf[0:4],
+		buf[4:6],
+		buf[6:8],
+		buf[8:10],
+		buf[10:16])
+}
+
+// asyncNotifyCh is used to do an async channel send
+// to a single channel without blocking.
+func asyncNotifyCh(ch chan struct{}) {
+	select {
+	case ch <- struct{}{}:
+	default:
+	}
+}
+
+// drainNotifyCh empties out a single-item notification channel without
+// blocking, and returns whether it received anything.
+func drainNotifyCh(ch chan struct{}) bool {
+	select {
+	case <-ch:
+		return true
+	default:
+		return false
+	}
+}
+
+// asyncNotifyBool is used to do an async notification
+// on a bool channel.
+func asyncNotifyBool(ch chan bool, v bool) {
+	select {
+	case ch <- v:
+	default:
+	}
+}
+
+// Decode reverses the encode operation on a byte slice input.
+func decodeMsgPack(buf []byte, out interface{}) error {
+	r := bytes.NewBuffer(buf)
+	hd := codec.MsgpackHandle{}
+	dec := codec.NewDecoder(r, &hd)
+	return dec.Decode(out)
+}
+
+// Encode writes an encoded object to a new bytes buffer.
+func encodeMsgPack(in interface{}) (*bytes.Buffer, error) {
+	buf := bytes.NewBuffer(nil)
+	hd := codec.MsgpackHandle{}
+	enc := codec.NewEncoder(buf, &hd)
+	err := enc.Encode(in)
+	return buf, err
+}
+
+// backoff is used to compute an exponential backoff
+// duration. Base time is scaled by the current round,
+// up to some maximum scale factor.
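+//
+// For example, with base=10ms and limit=12 (the failureWait/maxFailureScale
+// constants used by replication), rounds 1-2 wait 10ms, round 3 waits 20ms,
+// round 5 waits 80ms, and from round 12 on the wait is capped at
+// 10ms * 2^10 (about 10s).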
+func backoff(base time.Duration, round, limit uint64) time.Duration { + power := min(round, limit) + for power > 2 { + base *= 2 + power-- + } + return base +} + +// Needed for sorting []uint64, used to determine commitment +type uint64Slice []uint64 + +func (p uint64Slice) Len() int { return len(p) } +func (p uint64Slice) Less(i, j int) bool { return p[i] < p[j] } +func (p uint64Slice) Swap(i, j int) { p[i], p[j] = p[j], p[i] } diff --git a/vendor/github.com/hashicorp/serf/serf/broadcast.go b/vendor/github.com/hashicorp/serf/serf/broadcast.go new file mode 100644 index 0000000000..d20728f3f4 --- /dev/null +++ b/vendor/github.com/hashicorp/serf/serf/broadcast.go @@ -0,0 +1,27 @@ +package serf + +import ( + "github.com/hashicorp/memberlist" +) + +// broadcast is an implementation of memberlist.Broadcast and is used +// to manage broadcasts across the memberlist channel that are related +// only to Serf. +type broadcast struct { + msg []byte + notify chan<- struct{} +} + +func (b *broadcast) Invalidates(other memberlist.Broadcast) bool { + return false +} + +func (b *broadcast) Message() []byte { + return b.msg +} + +func (b *broadcast) Finished() { + if b.notify != nil { + close(b.notify) + } +} diff --git a/vendor/github.com/hashicorp/serf/serf/coalesce.go b/vendor/github.com/hashicorp/serf/serf/coalesce.go new file mode 100644 index 0000000000..567943be14 --- /dev/null +++ b/vendor/github.com/hashicorp/serf/serf/coalesce.go @@ -0,0 +1,80 @@ +package serf + +import ( + "time" +) + +// coalescer is a simple interface that must be implemented to be +// used inside of a coalesceLoop +type coalescer interface { + // Can the coalescer handle this event, if not it is + // directly passed through to the destination channel + Handle(Event) bool + + // Invoked to coalesce the given event + Coalesce(Event) + + // Invoked to flush the coalesced events + Flush(outChan chan<- Event) +} + +// coalescedEventCh returns an event channel where the events are coalesced +// using the given coalescer. +func coalescedEventCh(outCh chan<- Event, shutdownCh <-chan struct{}, + cPeriod time.Duration, qPeriod time.Duration, c coalescer) chan<- Event { + inCh := make(chan Event, 1024) + go coalesceLoop(inCh, outCh, shutdownCh, cPeriod, qPeriod, c) + return inCh +} + +// coalesceLoop is a simple long-running routine that manages the high-level +// flow of coalescing based on quiescence and a maximum quantum period. 
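+//
+// For example (illustrative periods): with a 10s coalesce period and a 2s
+// quiescent period, a burst of events is flushed 2s after the last event
+// arrives, and never later than 10s after the first event of the burst.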
+func coalesceLoop(inCh <-chan Event, outCh chan<- Event, shutdownCh <-chan struct{}, + coalescePeriod time.Duration, quiescentPeriod time.Duration, c coalescer) { + var quiescent <-chan time.Time + var quantum <-chan time.Time + shutdown := false + +INGEST: + // Reset the timers + quantum = nil + quiescent = nil + + for { + select { + case e := <-inCh: + // Ignore any non handled events + if !c.Handle(e) { + outCh <- e + continue + } + + // Start a new quantum if we need to + // and restart the quiescent timer + if quantum == nil { + quantum = time.After(coalescePeriod) + } + quiescent = time.After(quiescentPeriod) + + // Coalesce the event + c.Coalesce(e) + + case <-quantum: + goto FLUSH + case <-quiescent: + goto FLUSH + case <-shutdownCh: + shutdown = true + goto FLUSH + } + } + +FLUSH: + // Flush the coalesced events + c.Flush(outCh) + + // Restart ingestion if we are not done + if !shutdown { + goto INGEST + } +} diff --git a/vendor/github.com/hashicorp/serf/serf/coalesce_member.go b/vendor/github.com/hashicorp/serf/serf/coalesce_member.go new file mode 100644 index 0000000000..82fdb8dacf --- /dev/null +++ b/vendor/github.com/hashicorp/serf/serf/coalesce_member.go @@ -0,0 +1,68 @@ +package serf + +type coalesceEvent struct { + Type EventType + Member *Member +} + +type memberEventCoalescer struct { + lastEvents map[string]EventType + latestEvents map[string]coalesceEvent +} + +func (c *memberEventCoalescer) Handle(e Event) bool { + switch e.EventType() { + case EventMemberJoin: + return true + case EventMemberLeave: + return true + case EventMemberFailed: + return true + case EventMemberUpdate: + return true + case EventMemberReap: + return true + default: + return false + } +} + +func (c *memberEventCoalescer) Coalesce(raw Event) { + e := raw.(MemberEvent) + for _, m := range e.Members { + c.latestEvents[m.Name] = coalesceEvent{ + Type: e.Type, + Member: &m, + } + } +} + +func (c *memberEventCoalescer) Flush(outCh chan<- Event) { + // Coalesce the various events we got into a single set of events. 
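+	// For example (illustrative), three buffered join events for nodes "a",
+	// "b" and "c" flush as a single MemberEvent{Type: EventMemberJoin} whose
+	// Members holds all three nodes, rather than as three separate events.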
+	events := make(map[EventType]*MemberEvent)
+	for name, cevent := range c.latestEvents {
+		previous, ok := c.lastEvents[name]
+
+		// If we sent the same event before, then ignore
+		// unless it is a MemberUpdate
+		if ok && previous == cevent.Type && cevent.Type != EventMemberUpdate {
+			continue
+		}
+
+		// Update our last event
+		c.lastEvents[name] = cevent.Type
+
+		// Add it to our event
+		newEvent, ok := events[cevent.Type]
+		if !ok {
+			newEvent = &MemberEvent{Type: cevent.Type}
+			events[cevent.Type] = newEvent
+		}
+		newEvent.Members = append(newEvent.Members, *cevent.Member)
+	}
+
+	// Send out those events
+	for _, event := range events {
+		outCh <- *event
+	}
+}
diff --git a/vendor/github.com/hashicorp/serf/serf/coalesce_user.go b/vendor/github.com/hashicorp/serf/serf/coalesce_user.go
new file mode 100644
index 0000000000..1551b6c52c
--- /dev/null
+++ b/vendor/github.com/hashicorp/serf/serf/coalesce_user.go
@@ -0,0 +1,52 @@
+package serf
+
+type latestUserEvents struct {
+	LTime  LamportTime
+	Events []Event
+}
+
+type userEventCoalescer struct {
+	// Maps an event name into the latest versions
+	events map[string]*latestUserEvents
+}
+
+func (c *userEventCoalescer) Handle(e Event) bool {
+	// Only handle EventUser messages
+	if e.EventType() != EventUser {
+		return false
+	}
+
+	// Check if coalescing is enabled
+	user := e.(UserEvent)
+	return user.Coalesce
+}
+
+func (c *userEventCoalescer) Coalesce(e Event) {
+	user := e.(UserEvent)
+	latest, ok := c.events[user.Name]
+
+	// Create a new entry if there are none, or
+	// if this message has the newest LTime
+	if !ok || latest.LTime < user.LTime {
+		latest = &latestUserEvents{
+			LTime:  user.LTime,
+			Events: []Event{e},
+		}
+		c.events[user.Name] = latest
+		return
+	}
+
+	// If it has the same age, save it
+	if latest.LTime == user.LTime {
+		latest.Events = append(latest.Events, e)
+	}
+}
+
+func (c *userEventCoalescer) Flush(outChan chan<- Event) {
+	for _, latest := range c.events {
+		for _, e := range latest.Events {
+			outChan <- e
+		}
+	}
+	c.events = make(map[string]*latestUserEvents)
+}
diff --git a/vendor/github.com/hashicorp/serf/serf/config.go b/vendor/github.com/hashicorp/serf/serf/config.go
new file mode 100644
index 0000000000..74f21ffbdf
--- /dev/null
+++ b/vendor/github.com/hashicorp/serf/serf/config.go
@@ -0,0 +1,267 @@
+package serf
+
+import (
+	"io"
+	"log"
+	"os"
+	"time"
+
+	"github.com/hashicorp/memberlist"
+)
+
+// ProtocolVersionMap is the mapping of Serf delegate protocol versions
+// to memberlist protocol versions. We mask the memberlist protocols using
+// our own protocol version.
+var ProtocolVersionMap map[uint8]uint8
+
+func init() {
+	ProtocolVersionMap = map[uint8]uint8{
+		5: 2,
+		4: 2,
+		3: 2,
+		2: 2,
+	}
+}
+
+// Config is the configuration for creating a Serf instance.
+type Config struct {
+	// The name of this node. This must be unique in the cluster. If this
+	// is not set, Serf will set it to the hostname of the running machine.
+	NodeName string
+
+	// The tags for this role, if any. This is used to provide arbitrary
+	// key/value metadata per-node. For example, a "role" tag may be used to
+	// differentiate "load-balancer" from a "web" role as parts of the same cluster.
+	// Tags are deprecating 'Role', and instead it acts as a special key in this
+	// map.
+	Tags map[string]string
+
+	// EventCh is a channel that receives all the Serf events. The events
+	// are sent on this channel in proper ordering.
+	// Care must be taken that this channel doesn't block, either by
+	// processing the events quickly enough or buffering the channel,
+	// otherwise it can block state updates within Serf itself. If no EventCh
+	// is specified, no events will be fired, but point-in-time snapshots of
+	// members can still be retrieved by calling Members on Serf.
+	EventCh chan<- Event
+
+	// ProtocolVersion is the protocol version to speak. This must be between
+	// ProtocolVersionMin and ProtocolVersionMax.
+	ProtocolVersion uint8
+
+	// BroadcastTimeout is the amount of time to wait for a broadcast
+	// message to be sent to the cluster. Broadcast messages are used for
+	// things like leave messages and force remove messages. If this is not
+	// set, a timeout of 5 seconds will be set.
+	BroadcastTimeout time.Duration
+
+	// The settings below relate to Serf's event coalescence feature. Serf
+	// is able to coalesce multiple events into single events in order to
+	// reduce the amount of noise that is sent along the EventCh. For example
+	// if five nodes quickly join, the EventCh will be sent one EventMemberJoin
+	// containing the five nodes rather than five individual EventMemberJoin
+	// events. Coalescence can mitigate potential flapping behavior.
+	//
+	// Coalescence is disabled by default and can be enabled by setting
+	// CoalescePeriod.
+	//
+	// CoalescePeriod specifies the time duration to coalesce events.
+	// For example, if this is set to 5 seconds, then all events received
+	// within 5 seconds that can be coalesced will be.
+	//
+	// QuiescentPeriod specifies the duration of time where if no events
+	// are received, coalescence immediately happens. For example, if
+	// CoalescePeriod is set to 10 seconds but QuiescentPeriod is set to 2
+	// seconds, then the events will be coalesced and dispatched if no
+	// new events are received within 2 seconds of the last event. Otherwise,
+	// every event will always be delayed by at least 10 seconds.
+	CoalescePeriod  time.Duration
+	QuiescentPeriod time.Duration
+
+	// The settings below relate to Serf's user event coalescing feature.
+	// The settings operate like above but only affect user messages and
+	// not the Member* messages that Serf generates.
+	UserCoalescePeriod  time.Duration
+	UserQuiescentPeriod time.Duration
+
+	// The settings below relate to Serf keeping track of recently
+	// failed/left nodes and attempting reconnects.
+	//
+	// ReapInterval is the interval when the reaper runs. If this is not
+	// set (it is zero), it will be set to a reasonable default.
+	//
+	// ReconnectInterval is the interval when we attempt to reconnect
+	// to failed nodes. If this is not set (it is zero), it will be set
+	// to a reasonable default.
+	//
+	// ReconnectTimeout is the amount of time to attempt to reconnect to
+	// a failed node before giving up and considering it completely gone.
+	//
+	// TombstoneTimeout is the amount of time to keep around nodes
+	// that gracefully left as tombstones for syncing state with other
+	// Serf nodes.
+	ReapInterval      time.Duration
+	ReconnectInterval time.Duration
+	ReconnectTimeout  time.Duration
+	TombstoneTimeout  time.Duration
+
+	// FlapTimeout is the amount of time less than which we consider a node
+	// being failed and rejoining looks like a flap for telemetry purposes.
+	// This should be set less than a typical reboot time, but large enough
+	// to see actual events, given our expected detection times for a failed
+	// node.
+	FlapTimeout time.Duration
+
+	// QueueDepthWarning is used to generate warning message if the
+	// number of queued messages to broadcast exceeds this number. This
+	// is to provide the user feedback if events are being triggered
+	// faster than they can be disseminated
+	QueueDepthWarning int
+
+	// MaxQueueDepth is used to start dropping messages if the number
+	// of queued messages to broadcast exceeds this number. This is to
+	// prevent an unbounded growth of memory utilization
+	MaxQueueDepth int
+
+	// RecentIntentTimeout is used to determine how long we store recent
+	// join and leave intents. This is used to guard against the case where
+	// Serf broadcasts an intent that arrives before the Memberlist event.
+	// It is important that this not be too short to avoid continuous
+	// rebroadcasting of dead events.
+	RecentIntentTimeout time.Duration
+
+	// EventBuffer is used to control how many events are buffered.
+	// This is used to prevent re-delivery of events to a client. The buffer
+	// must be large enough to handle all "recent" events, since Serf will
+	// not deliver messages that are older than the oldest entry in the buffer.
+	// Thus if a client is generating too many events, it's possible that the
+	// buffer gets overrun and messages are not delivered.
+	EventBuffer int
+
+	// QueryBuffer is used to control how many queries are buffered.
+	// This is used to prevent re-delivery of queries to a client. The buffer
+	// must be large enough to handle all "recent" events, since Serf will not
+	// deliver queries older than the oldest entry in the buffer.
+	// Thus if a client is generating too many queries, it's possible that the
+	// buffer gets overrun and messages are not delivered.
+	QueryBuffer int
+
+	// QueryTimeoutMult configures the default timeout multiplier for a query to
+	// run if no specific value is provided. Queries are real-time by nature, where the
+	// reply is time sensitive. As a result, results are collected in an async
+	// fashion, however the query must have a bounded duration. We want the timeout
+	// to be long enough that all nodes have time to receive the message, run a handler,
+	// and generate a reply. Once the timeout is exceeded, any further replies are ignored.
+	// The default value is
+	//
+	// Timeout = GossipInterval * QueryTimeoutMult * log(N+1)
+	//
+	QueryTimeoutMult int
+
+	// QueryResponseSizeLimit and QuerySizeLimit limit the inbound and
+	// outbound payload sizes for queries, respectively. These must fit
+	// in a UDP packet with some additional overhead, so tuning these
+	// past the default values of 1024 will depend on your network
+	// configuration.
+	QueryResponseSizeLimit int
+	QuerySizeLimit         int
+
+	// MemberlistConfig is the memberlist configuration that Serf will
+	// use to do the underlying membership management and gossip. Some
+	// fields in the MemberlistConfig will be overwritten by Serf no
+	// matter what:
+	//
+	//   * Name - This will always be set to the same as the NodeName
+	//     in this configuration.
+	//
+	//   * Events - Serf uses a custom event delegate.
+	//
+	//   * Delegate - Serf uses a custom delegate.
+	//
+	MemberlistConfig *memberlist.Config
+
+	// LogOutput is the location to write logs to. If this is not set,
+	// logs will go to stderr.
+	LogOutput io.Writer
+
+	// Logger is a custom logger which you provide. If Logger is set, it will use
+	// this for the internal logger. If Logger is not set, it will fall back to the
+	// behavior for using LogOutput.
You cannot specify both LogOutput and Logger + // at the same time. + Logger *log.Logger + + // SnapshotPath if provided is used to snapshot live nodes as well + // as lamport clock values. When Serf is started with a snapshot, + // it will attempt to join all the previously known nodes until one + // succeeds and will also avoid replaying old user events. + SnapshotPath string + + // RejoinAfterLeave controls our interaction with the snapshot file. + // When set to false (default), a leave causes a Serf to not rejoin + // the cluster until an explicit join is received. If this is set to + // true, we ignore the leave, and rejoin the cluster on start. + RejoinAfterLeave bool + + // EnableNameConflictResolution controls if Serf will actively attempt + // to resolve a name conflict. Since each Serf member must have a unique + // name, a cluster can run into issues if multiple nodes claim the same + // name. Without automatic resolution, Serf merely logs some warnings, but + // otherwise does not take any action. Automatic resolution detects the + // conflict and issues a special query which asks the cluster for the + // Name -> IP:Port mapping. If there is a simple majority of votes, that + // node stays while the other node will leave the cluster and exit. + EnableNameConflictResolution bool + + // DisableCoordinates controls if Serf will maintain an estimate of this + // node's network coordinate internally. A network coordinate is useful + // for estimating the network distance (i.e. round trip time) between + // two nodes. Enabling this option adds some overhead to ping messages. + DisableCoordinates bool + + // KeyringFile provides the location of a writable file where Serf can + // persist changes to the encryption keyring. + KeyringFile string + + // Merge can be optionally provided to intercept a cluster merge + // and conditionally abort the merge. + Merge MergeDelegate +} + +// Init allocates the subdata structures +func (c *Config) Init() { + if c.Tags == nil { + c.Tags = make(map[string]string) + } +} + +// DefaultConfig returns a Config struct that contains reasonable defaults +// for most of the configurations. 
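+//
+// A usage sketch (the node name is illustrative; Create is this package's
+// constructor):
+//
+//	conf := DefaultConfig()
+//	conf.NodeName = "node-a"
+//	s, err := Create(conf)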
+func DefaultConfig() *Config { + hostname, err := os.Hostname() + if err != nil { + panic(err) + } + + return &Config{ + NodeName: hostname, + BroadcastTimeout: 5 * time.Second, + EventBuffer: 512, + QueryBuffer: 512, + LogOutput: os.Stderr, + ProtocolVersion: 4, + ReapInterval: 15 * time.Second, + RecentIntentTimeout: 5 * time.Minute, + ReconnectInterval: 30 * time.Second, + ReconnectTimeout: 24 * time.Hour, + QueueDepthWarning: 128, + MaxQueueDepth: 4096, + TombstoneTimeout: 24 * time.Hour, + FlapTimeout: 60 * time.Second, + MemberlistConfig: memberlist.DefaultLANConfig(), + QueryTimeoutMult: 16, + QueryResponseSizeLimit: 1024, + QuerySizeLimit: 1024, + EnableNameConflictResolution: true, + DisableCoordinates: false, + } +} diff --git a/vendor/github.com/hashicorp/serf/serf/conflict_delegate.go b/vendor/github.com/hashicorp/serf/serf/conflict_delegate.go new file mode 100644 index 0000000000..65a50156c0 --- /dev/null +++ b/vendor/github.com/hashicorp/serf/serf/conflict_delegate.go @@ -0,0 +1,13 @@ +package serf + +import ( + "github.com/hashicorp/memberlist" +) + +type conflictDelegate struct { + serf *Serf +} + +func (c *conflictDelegate) NotifyConflict(existing, other *memberlist.Node) { + c.serf.handleNodeConflict(existing, other) +} diff --git a/vendor/github.com/hashicorp/serf/serf/delegate.go b/vendor/github.com/hashicorp/serf/serf/delegate.go new file mode 100644 index 0000000000..8f51cb7d08 --- /dev/null +++ b/vendor/github.com/hashicorp/serf/serf/delegate.go @@ -0,0 +1,275 @@ +package serf + +import ( + "bytes" + "fmt" + + "github.com/armon/go-metrics" + "github.com/hashicorp/go-msgpack/codec" +) + +// delegate is the memberlist.Delegate implementation that Serf uses. +type delegate struct { + serf *Serf +} + +func (d *delegate) NodeMeta(limit int) []byte { + roleBytes := d.serf.encodeTags(d.serf.config.Tags) + if len(roleBytes) > limit { + panic(fmt.Errorf("Node tags '%v' exceeds length limit of %d bytes", d.serf.config.Tags, limit)) + } + + return roleBytes +} + +func (d *delegate) NotifyMsg(buf []byte) { + // If we didn't actually receive any data, then ignore it. 
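+	// The wire format handled here is a single messageType byte followed by
+	// the msgpack-encoded body, which is why every case below decodes buf[1:].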
+ if len(buf) == 0 { + return + } + metrics.AddSample([]string{"serf", "msgs", "received"}, float32(len(buf))) + + rebroadcast := false + rebroadcastQueue := d.serf.broadcasts + t := messageType(buf[0]) + switch t { + case messageLeaveType: + var leave messageLeave + if err := decodeMessage(buf[1:], &leave); err != nil { + d.serf.logger.Printf("[ERR] serf: Error decoding leave message: %s", err) + break + } + + d.serf.logger.Printf("[DEBUG] serf: messageLeaveType: %s", leave.Node) + rebroadcast = d.serf.handleNodeLeaveIntent(&leave) + + case messageJoinType: + var join messageJoin + if err := decodeMessage(buf[1:], &join); err != nil { + d.serf.logger.Printf("[ERR] serf: Error decoding join message: %s", err) + break + } + + d.serf.logger.Printf("[DEBUG] serf: messageJoinType: %s", join.Node) + rebroadcast = d.serf.handleNodeJoinIntent(&join) + + case messageUserEventType: + var event messageUserEvent + if err := decodeMessage(buf[1:], &event); err != nil { + d.serf.logger.Printf("[ERR] serf: Error decoding user event message: %s", err) + break + } + + d.serf.logger.Printf("[DEBUG] serf: messageUserEventType: %s", event.Name) + rebroadcast = d.serf.handleUserEvent(&event) + rebroadcastQueue = d.serf.eventBroadcasts + + case messageQueryType: + var query messageQuery + if err := decodeMessage(buf[1:], &query); err != nil { + d.serf.logger.Printf("[ERR] serf: Error decoding query message: %s", err) + break + } + + d.serf.logger.Printf("[DEBUG] serf: messageQueryType: %s", query.Name) + rebroadcast = d.serf.handleQuery(&query) + rebroadcastQueue = d.serf.queryBroadcasts + + case messageQueryResponseType: + var resp messageQueryResponse + if err := decodeMessage(buf[1:], &resp); err != nil { + d.serf.logger.Printf("[ERR] serf: Error decoding query response message: %s", err) + break + } + + d.serf.logger.Printf("[DEBUG] serf: messageQueryResponseType: %v", resp.From) + d.serf.handleQueryResponse(&resp) + + case messageRelayType: + var header relayHeader + var handle codec.MsgpackHandle + reader := bytes.NewReader(buf[1:]) + decoder := codec.NewDecoder(reader, &handle) + if err := decoder.Decode(&header); err != nil { + d.serf.logger.Printf("[ERR] serf: Error decoding relay header: %s", err) + break + } + + // The remaining contents are the message itself, so forward that + raw := make([]byte, reader.Len()) + reader.Read(raw) + d.serf.logger.Printf("[DEBUG] serf: Relaying response to addr: %s", header.DestAddr.String()) + if err := d.serf.memberlist.SendTo(&header.DestAddr, raw); err != nil { + d.serf.logger.Printf("[ERR] serf: Error forwarding message to %s: %s", header.DestAddr.String(), err) + break + } + + default: + d.serf.logger.Printf("[WARN] serf: Received message of unknown type: %d", t) + } + + if rebroadcast { + // Copy the buffer since it we cannot rely on the slice not changing + newBuf := make([]byte, len(buf)) + copy(newBuf, buf) + + rebroadcastQueue.QueueBroadcast(&broadcast{ + msg: newBuf, + notify: nil, + }) + } +} + +func (d *delegate) GetBroadcasts(overhead, limit int) [][]byte { + msgs := d.serf.broadcasts.GetBroadcasts(overhead, limit) + + // Determine the bytes used already + bytesUsed := 0 + for _, msg := range msgs { + lm := len(msg) + bytesUsed += lm + overhead + metrics.AddSample([]string{"serf", "msgs", "sent"}, float32(lm)) + } + + // Get any additional query broadcasts + queryMsgs := d.serf.queryBroadcasts.GetBroadcasts(overhead, limit-bytesUsed) + if queryMsgs != nil { + for _, m := range queryMsgs { + lm := len(m) + bytesUsed += lm + overhead + 
metrics.AddSample([]string{"serf", "msgs", "sent"}, float32(lm)) + } + msgs = append(msgs, queryMsgs...) + } + + // Get any additional event broadcasts + eventMsgs := d.serf.eventBroadcasts.GetBroadcasts(overhead, limit-bytesUsed) + if eventMsgs != nil { + for _, m := range eventMsgs { + lm := len(m) + bytesUsed += lm + overhead + metrics.AddSample([]string{"serf", "msgs", "sent"}, float32(lm)) + } + msgs = append(msgs, eventMsgs...) + } + + return msgs +} + +func (d *delegate) LocalState(join bool) []byte { + d.serf.memberLock.RLock() + defer d.serf.memberLock.RUnlock() + d.serf.eventLock.RLock() + defer d.serf.eventLock.RUnlock() + + // Create the message to send + pp := messagePushPull{ + LTime: d.serf.clock.Time(), + StatusLTimes: make(map[string]LamportTime, len(d.serf.members)), + LeftMembers: make([]string, 0, len(d.serf.leftMembers)), + EventLTime: d.serf.eventClock.Time(), + Events: d.serf.eventBuffer, + QueryLTime: d.serf.queryClock.Time(), + } + + // Add all the join LTimes + for name, member := range d.serf.members { + pp.StatusLTimes[name] = member.statusLTime + } + + // Add all the left nodes + for _, member := range d.serf.leftMembers { + pp.LeftMembers = append(pp.LeftMembers, member.Name) + } + + // Encode the push pull state + buf, err := encodeMessage(messagePushPullType, &pp) + if err != nil { + d.serf.logger.Printf("[ERR] serf: Failed to encode local state: %v", err) + return nil + } + return buf +} + +func (d *delegate) MergeRemoteState(buf []byte, isJoin bool) { + // Ensure we have a message + if len(buf) == 0 { + d.serf.logger.Printf("[ERR] serf: Remote state is zero bytes") + return + } + + // Check the message type + if messageType(buf[0]) != messagePushPullType { + d.serf.logger.Printf("[ERR] serf: Remote state has bad type prefix: %v", buf[0]) + return + } + + // Attempt a decode + pp := messagePushPull{} + if err := decodeMessage(buf[1:], &pp); err != nil { + d.serf.logger.Printf("[ERR] serf: Failed to decode remote state: %v", err) + return + } + + // Witness the Lamport clocks first. + // We subtract 1 since no message with that clock has been sent yet + if pp.LTime > 0 { + d.serf.clock.Witness(pp.LTime - 1) + } + if pp.EventLTime > 0 { + d.serf.eventClock.Witness(pp.EventLTime - 1) + } + if pp.QueryLTime > 0 { + d.serf.queryClock.Witness(pp.QueryLTime - 1) + } + + // Process the left nodes first to avoid the LTimes from being increment + // in the wrong order + leftMap := make(map[string]struct{}, len(pp.LeftMembers)) + leave := messageLeave{} + for _, name := range pp.LeftMembers { + leftMap[name] = struct{}{} + leave.LTime = pp.StatusLTimes[name] + leave.Node = name + d.serf.handleNodeLeaveIntent(&leave) + } + + // Update any other LTimes + join := messageJoin{} + for name, statusLTime := range pp.StatusLTimes { + // Skip the left nodes + if _, ok := leftMap[name]; ok { + continue + } + + // Create an artificial join message + join.LTime = statusLTime + join.Node = name + d.serf.handleNodeJoinIntent(&join) + } + + // If we are doing a join, and eventJoinIgnore is set + // then we set the eventMinTime to the EventLTime. 
This
+	// prevents any of the incoming events from being processed
+	if isJoin && d.serf.eventJoinIgnore {
+		d.serf.eventLock.Lock()
+		if pp.EventLTime > d.serf.eventMinTime {
+			d.serf.eventMinTime = pp.EventLTime
+		}
+		d.serf.eventLock.Unlock()
+	}
+
+	// Process all the events
+	userEvent := messageUserEvent{}
+	for _, events := range pp.Events {
+		if events == nil {
+			continue
+		}
+		userEvent.LTime = events.LTime
+		for _, e := range events.Events {
+			userEvent.Name = e.Name
+			userEvent.Payload = e.Payload
+			d.serf.handleUserEvent(&userEvent)
+		}
+	}
+}
diff --git a/vendor/github.com/hashicorp/serf/serf/event.go b/vendor/github.com/hashicorp/serf/serf/event.go
new file mode 100644
index 0000000000..29211393f8
--- /dev/null
+++ b/vendor/github.com/hashicorp/serf/serf/event.go
@@ -0,0 +1,174 @@
+package serf
+
+import (
+	"fmt"
+	"net"
+	"sync"
+	"time"
+)
+
+// EventType are all the types of events that may occur and be sent
+// along the Serf channel.
+type EventType int
+
+const (
+	EventMemberJoin EventType = iota
+	EventMemberLeave
+	EventMemberFailed
+	EventMemberUpdate
+	EventMemberReap
+	EventUser
+	EventQuery
+)
+
+func (t EventType) String() string {
+	switch t {
+	case EventMemberJoin:
+		return "member-join"
+	case EventMemberLeave:
+		return "member-leave"
+	case EventMemberFailed:
+		return "member-failed"
+	case EventMemberUpdate:
+		return "member-update"
+	case EventMemberReap:
+		return "member-reap"
+	case EventUser:
+		return "user"
+	case EventQuery:
+		return "query"
+	default:
+		panic(fmt.Sprintf("unknown event type: %d", t))
+	}
+}
+
+// Event is a generic interface for exposing Serf events.
+// Clients will usually need to use a type switch to get
+// to a more useful type.
+type Event interface {
+	EventType() EventType
+	String() string
+}
+
+// MemberEvent is the struct used for member-related events.
+// Because Serf coalesces events, an event may contain multiple members.
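+// A consumer will typically type-switch on the Event, for example (sketch
+// only; the event channel name is an assumption):
+//
+//	for e := range eventCh {
+//		switch ev := e.(type) {
+//		case MemberEvent:
+//			fmt.Printf("%s: %d member(s)\n", ev, len(ev.Members))
+//		case UserEvent:
+//			fmt.Printf("user event %q\n", ev.Name)
+//		}
+//	}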
+type MemberEvent struct { + Type EventType + Members []Member +} + +func (m MemberEvent) EventType() EventType { + return m.Type +} + +func (m MemberEvent) String() string { + switch m.Type { + case EventMemberJoin: + return "member-join" + case EventMemberLeave: + return "member-leave" + case EventMemberFailed: + return "member-failed" + case EventMemberUpdate: + return "member-update" + case EventMemberReap: + return "member-reap" + default: + panic(fmt.Sprintf("unknown event type: %d", m.Type)) + } +} + +// UserEvent is the struct used for events that are triggered +// by the user and are not related to members +type UserEvent struct { + LTime LamportTime + Name string + Payload []byte + Coalesce bool +} + +func (u UserEvent) EventType() EventType { + return EventUser +} + +func (u UserEvent) String() string { + return fmt.Sprintf("user-event: %s", u.Name) +} + +// Query is the struct used by EventQuery type events +type Query struct { + LTime LamportTime + Name string + Payload []byte + + serf *Serf + id uint32 // ID is not exported, since it may change + addr []byte // Address to respond to + port uint16 // Port to respond to + deadline time.Time // Must respond by this deadline + relayFactor uint8 // Number of duplicate responses to relay back to sender + respLock sync.Mutex +} + +func (q *Query) EventType() EventType { + return EventQuery +} + +func (q *Query) String() string { + return fmt.Sprintf("query: %s", q.Name) +} + +// Deadline returns the time by which a response must be sent +func (q *Query) Deadline() time.Time { + return q.deadline +} + +// Respond is used to send a response to the user query +func (q *Query) Respond(buf []byte) error { + q.respLock.Lock() + defer q.respLock.Unlock() + + // Check if we've already responded + if q.deadline.IsZero() { + return fmt.Errorf("response already sent") + } + + // Ensure we aren't past our response deadline + if time.Now().After(q.deadline) { + return fmt.Errorf("response is past the deadline") + } + + // Create response + resp := messageQueryResponse{ + LTime: q.LTime, + ID: q.id, + From: q.serf.config.NodeName, + Payload: buf, + } + + // Send a direct response + raw, err := encodeMessage(messageQueryResponseType, &resp) + if err != nil { + return fmt.Errorf("failed to format response: %v", err) + } + + // Check the size limit + if len(raw) > q.serf.config.QueryResponseSizeLimit { + return fmt.Errorf("response exceeds limit of %d bytes", q.serf.config.QueryResponseSizeLimit) + } + + // Send the response directly to the originator + addr := net.UDPAddr{IP: q.addr, Port: int(q.port)} + if err := q.serf.memberlist.SendTo(&addr, raw); err != nil { + return err + } + + // Relay the response through up to relayFactor other nodes + if err := q.serf.relayResponse(q.relayFactor, addr, &resp); err != nil { + return err + } + + // Clear the deadline, responses sent + q.deadline = time.Time{} + return nil +} diff --git a/vendor/github.com/hashicorp/serf/serf/event_delegate.go b/vendor/github.com/hashicorp/serf/serf/event_delegate.go new file mode 100644 index 0000000000..e201322819 --- /dev/null +++ b/vendor/github.com/hashicorp/serf/serf/event_delegate.go @@ -0,0 +1,21 @@ +package serf + +import ( + "github.com/hashicorp/memberlist" +) + +type eventDelegate struct { + serf *Serf +} + +func (e *eventDelegate) NotifyJoin(n *memberlist.Node) { + e.serf.handleNodeJoin(n) +} + +func (e *eventDelegate) NotifyLeave(n *memberlist.Node) { + e.serf.handleNodeLeave(n) +} + +func (e *eventDelegate) NotifyUpdate(n *memberlist.Node) { + 
e.serf.handleNodeUpdate(n) +} diff --git a/vendor/github.com/hashicorp/serf/serf/internal_query.go b/vendor/github.com/hashicorp/serf/serf/internal_query.go new file mode 100644 index 0000000000..128b2cf214 --- /dev/null +++ b/vendor/github.com/hashicorp/serf/serf/internal_query.go @@ -0,0 +1,312 @@ +package serf + +import ( + "encoding/base64" + "log" + "strings" +) + +const ( + // This is the prefix we use for queries that are internal to Serf. + // They are handled internally, and not forwarded to a client. + InternalQueryPrefix = "_serf_" + + // pingQuery is run to check for reachability + pingQuery = "ping" + + // conflictQuery is run to resolve a name conflict + conflictQuery = "conflict" + + // installKeyQuery is used to install a new key + installKeyQuery = "install-key" + + // useKeyQuery is used to change the primary encryption key + useKeyQuery = "use-key" + + // removeKeyQuery is used to remove a key from the keyring + removeKeyQuery = "remove-key" + + // listKeysQuery is used to list all known keys in the cluster + listKeysQuery = "list-keys" +) + +// internalQueryName is used to generate a query name for an internal query +func internalQueryName(name string) string { + return InternalQueryPrefix + name +} + +// serfQueries is used to listen for queries that start with +// _serf and respond to them as appropriate. +type serfQueries struct { + inCh chan Event + logger *log.Logger + outCh chan<- Event + serf *Serf + shutdownCh <-chan struct{} +} + +// nodeKeyResponse is used to store the result from an individual node while +// replying to key modification queries +type nodeKeyResponse struct { + // Result indicates true/false if there were errors or not + Result bool + + // Message contains error messages or other information + Message string + + // Keys is used in listing queries to relay a list of installed keys + Keys []string +} + +// newSerfQueries is used to create a new serfQueries. We return an event +// channel that is ingested and forwarded to an outCh. Any Queries that +// have the InternalQueryPrefix are handled instead of forwarded. +func newSerfQueries(serf *Serf, logger *log.Logger, outCh chan<- Event, shutdownCh <-chan struct{}) (chan<- Event, error) { + inCh := make(chan Event, 1024) + q := &serfQueries{ + inCh: inCh, + logger: logger, + outCh: outCh, + serf: serf, + shutdownCh: shutdownCh, + } + go q.stream() + return inCh, nil +} + +// stream is a long running routine to ingest the event stream +func (s *serfQueries) stream() { + for { + select { + case e := <-s.inCh: + // Check if this is a query we should process + if q, ok := e.(*Query); ok && strings.HasPrefix(q.Name, InternalQueryPrefix) { + go s.handleQuery(q) + + } else if s.outCh != nil { + s.outCh <- e + } + + case <-s.shutdownCh: + return + } + } +} + +// handleQuery is invoked when we get an internal query +func (s *serfQueries) handleQuery(q *Query) { + // Get the queryName after the initial prefix + queryName := q.Name[len(InternalQueryPrefix):] + switch queryName { + case pingQuery: + // Nothing to do, we will ack the query + case conflictQuery: + s.handleConflict(q) + case installKeyQuery: + s.handleInstallKey(q) + case useKeyQuery: + s.handleUseKey(q) + case removeKeyQuery: + s.handleRemoveKey(q) + case listKeysQuery: + s.handleListKeys(q) + default: + s.logger.Printf("[WARN] serf: Unhandled internal query '%s'", queryName) + } +} + +// handleConflict is invoked when we get a query that is attempting to +// disambiguate a name conflict. 
The payload is a node name, and the response
+// should be the address we believe that node is at, if any.
+func (s *serfQueries) handleConflict(q *Query) {
+	// The target node name is the payload
+	node := string(q.Payload)
+
+	// Do not respond to the query if it is about us
+	if node == s.serf.config.NodeName {
+		return
+	}
+	s.logger.Printf("[DEBUG] serf: Got conflict resolution query for '%s'", node)
+
+	// Look for the member info
+	var out *Member
+	s.serf.memberLock.Lock()
+	if member, ok := s.serf.members[node]; ok {
+		out = &member.Member
+	}
+	s.serf.memberLock.Unlock()
+
+	// Encode the response
+	buf, err := encodeMessage(messageConflictResponseType, out)
+	if err != nil {
+		s.logger.Printf("[ERR] serf: Failed to encode conflict query response: %v", err)
+		return
+	}
+
+	// Send our answer
+	if err := q.Respond(buf); err != nil {
+		s.logger.Printf("[ERR] serf: Failed to respond to conflict query: %v", err)
+	}
+}
+
+// sendKeyResponse handles responding to key-related queries.
+func (s *serfQueries) sendKeyResponse(q *Query, resp *nodeKeyResponse) {
+	buf, err := encodeMessage(messageKeyResponseType, resp)
+	if err != nil {
+		s.logger.Printf("[ERR] serf: Failed to encode key response: %v", err)
+		return
+	}
+
+	if err := q.Respond(buf); err != nil {
+		s.logger.Printf("[ERR] serf: Failed to respond to key query: %v", err)
+		return
+	}
+}
+
+// handleInstallKey is invoked whenever a new encryption key is received from
+// another member in the cluster, and handles the process of installing it onto
+// the memberlist keyring. This type of query may fail if the provided key does
+// not fit the constraints that memberlist enforces. If the query fails, the
+// response will contain the error message so that it may be relayed.
+func (s *serfQueries) handleInstallKey(q *Query) {
+	response := nodeKeyResponse{Result: false}
+	keyring := s.serf.config.MemberlistConfig.Keyring
+	req := keyRequest{}
+
+	err := decodeMessage(q.Payload[1:], &req)
+	if err != nil {
+		s.logger.Printf("[ERR] serf: Failed to decode key request: %v", err)
+		goto SEND
+	}
+
+	if !s.serf.EncryptionEnabled() {
+		response.Message = "No keyring to modify (encryption not enabled)"
+		s.logger.Printf("[ERR] serf: No keyring to modify (encryption not enabled)")
+		goto SEND
+	}
+
+	s.logger.Printf("[INFO] serf: Received install-key query")
+	if err := keyring.AddKey(req.Key); err != nil {
+		response.Message = err.Error()
+		s.logger.Printf("[ERR] serf: Failed to install key: %s", err)
+		goto SEND
+	}
+
+	if err := s.serf.writeKeyringFile(); err != nil {
+		response.Message = err.Error()
+		s.logger.Printf("[ERR] serf: Failed to write keyring file: %s", err)
+		goto SEND
+	}
+
+	response.Result = true
+
+SEND:
+	s.sendKeyResponse(q, &response)
+}
+
+// handleUseKey is invoked whenever a query is received to mark a different key
+// in the internal keyring as the primary key. This type of query may fail due
+// to operator error (requested key not in ring), and thus sends error messages
+// back in the response.
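+// Like handleInstallKey and handleRemoveKey, it follows the same shape:
+// decode the request, check EncryptionEnabled, mutate the keyring, persist
+// the keyring file, and respond with a nodeKeyResponse either way.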
+func (s *serfQueries) handleUseKey(q *Query) { + response := nodeKeyResponse{Result: false} + keyring := s.serf.config.MemberlistConfig.Keyring + req := keyRequest{} + + err := decodeMessage(q.Payload[1:], &req) + if err != nil { + s.logger.Printf("[ERR] serf: Failed to decode key request: %v", err) + goto SEND + } + + if !s.serf.EncryptionEnabled() { + response.Message = "No keyring to modify (encryption not enabled)" + s.logger.Printf("[ERR] serf: No keyring to modify (encryption not enabled)") + goto SEND + } + + s.logger.Printf("[INFO] serf: Received use-key query") + if err := keyring.UseKey(req.Key); err != nil { + response.Message = err.Error() + s.logger.Printf("[ERR] serf: Failed to change primary key: %s", err) + goto SEND + } + + if err := s.serf.writeKeyringFile(); err != nil { + response.Message = err.Error() + s.logger.Printf("[ERR] serf: Failed to write keyring file: %s", err) + goto SEND + } + + response.Result = true + +SEND: + s.sendKeyResponse(q, &response) +} + +// handleRemoveKey is invoked when a query is received to remove a particular +// key from the keyring. This type of query can fail if the key requested for +// deletion is currently the primary key in the keyring, so therefore it will +// reply to the query with any relevant errors from the operation. +func (s *serfQueries) handleRemoveKey(q *Query) { + response := nodeKeyResponse{Result: false} + keyring := s.serf.config.MemberlistConfig.Keyring + req := keyRequest{} + + err := decodeMessage(q.Payload[1:], &req) + if err != nil { + s.logger.Printf("[ERR] serf: Failed to decode key request: %v", err) + goto SEND + } + + if !s.serf.EncryptionEnabled() { + response.Message = "No keyring to modify (encryption not enabled)" + s.logger.Printf("[ERR] serf: No keyring to modify (encryption not enabled)") + goto SEND + } + + s.logger.Printf("[INFO] serf: Received remove-key query") + if err := keyring.RemoveKey(req.Key); err != nil { + response.Message = err.Error() + s.logger.Printf("[ERR] serf: Failed to remove key: %s", err) + goto SEND + } + + if err := s.serf.writeKeyringFile(); err != nil { + response.Message = err.Error() + s.logger.Printf("[ERR] serf: Failed to write keyring file: %s", err) + goto SEND + } + + response.Result = true + +SEND: + s.sendKeyResponse(q, &response) +} + +// handleListKeys is invoked when a query is received to return a list of all +// installed keys the Serf instance knows of. For performance, the keys are +// encoded to base64 on each of the members to remove this burden from the +// node asking for the results. +func (s *serfQueries) handleListKeys(q *Query) { + response := nodeKeyResponse{Result: false} + keyring := s.serf.config.MemberlistConfig.Keyring + + if !s.serf.EncryptionEnabled() { + response.Message = "Keyring is empty (encryption not enabled)" + s.logger.Printf("[ERR] serf: Keyring is empty (encryption not enabled)") + goto SEND + } + + s.logger.Printf("[INFO] serf: Received list-keys query") + for _, keyBytes := range keyring.GetKeys() { + // Encode the keys before sending the response. This should help take + // some the burden of doing this off of the asking member. 
+ key := base64.StdEncoding.EncodeToString(keyBytes) + response.Keys = append(response.Keys, key) + } + response.Result = true + +SEND: + s.sendKeyResponse(q, &response) +} diff --git a/vendor/github.com/hashicorp/serf/serf/keymanager.go b/vendor/github.com/hashicorp/serf/serf/keymanager.go new file mode 100644 index 0000000000..fd53182fc5 --- /dev/null +++ b/vendor/github.com/hashicorp/serf/serf/keymanager.go @@ -0,0 +1,192 @@ +package serf + +import ( + "encoding/base64" + "fmt" + "sync" +) + +// KeyManager encapsulates all functionality within Serf for handling +// encryption keyring changes across a cluster. +type KeyManager struct { + serf *Serf + + // Lock to protect read and write operations + l sync.RWMutex +} + +// keyRequest is used to contain input parameters which get broadcasted to all +// nodes as part of a key query operation. +type keyRequest struct { + Key []byte +} + +// KeyResponse is used to relay a query for a list of all keys in use. +type KeyResponse struct { + Messages map[string]string // Map of node name to response message + NumNodes int // Total nodes memberlist knows of + NumResp int // Total responses received + NumErr int // Total errors from request + + // Keys is a mapping of the base64-encoded value of the key bytes to the + // number of nodes that have the key installed. + Keys map[string]int +} + +// KeyRequestOptions is used to contain optional parameters for a keyring operation +type KeyRequestOptions struct { + // RelayFactor is the number of duplicate query responses to send by relaying through + // other nodes, for redundancy + RelayFactor uint8 +} + +// streamKeyResp takes care of reading responses from a channel and composing +// them into a KeyResponse. It will update a KeyResponse *in place* and +// therefore has nothing to return. +func (k *KeyManager) streamKeyResp(resp *KeyResponse, ch <-chan NodeResponse) { + for r := range ch { + var nodeResponse nodeKeyResponse + + resp.NumResp++ + + // Decode the response + if len(r.Payload) < 1 || messageType(r.Payload[0]) != messageKeyResponseType { + resp.Messages[r.From] = fmt.Sprintf( + "Invalid key query response type: %v", r.Payload) + resp.NumErr++ + goto NEXT + } + if err := decodeMessage(r.Payload[1:], &nodeResponse); err != nil { + resp.Messages[r.From] = fmt.Sprintf( + "Failed to decode key query response: %v", r.Payload) + resp.NumErr++ + goto NEXT + } + + if !nodeResponse.Result { + resp.Messages[r.From] = nodeResponse.Message + resp.NumErr++ + } + + // Currently only used for key list queries, this adds keys to a counter + // and increments them for each node response which contains them. + for _, key := range nodeResponse.Keys { + if _, ok := resp.Keys[key]; !ok { + resp.Keys[key] = 1 + } else { + resp.Keys[key]++ + } + } + + NEXT: + // Return early if all nodes have responded. This allows us to avoid + // waiting for the full timeout when there is nothing left to do. + if resp.NumResp == resp.NumNodes { + return + } + } +} + +// handleKeyRequest performs query broadcasting to all members for any type of +// key operation and manages gathering responses and packing them up into a +// KeyResponse for uniform response handling. 
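+//
+// The key is base64-decoded, wrapped in a keyRequest, and broadcast as an
+// internal query; node responses stream back on the query's response
+// channel and are folded into the KeyResponse by streamKeyResp.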
+func (k *KeyManager) handleKeyRequest(key, query string, opts *KeyRequestOptions) (*KeyResponse, error) { + resp := &KeyResponse{ + Messages: make(map[string]string), + Keys: make(map[string]int), + } + qName := internalQueryName(query) + + // Decode the new key into raw bytes + rawKey, err := base64.StdEncoding.DecodeString(key) + if err != nil { + return resp, err + } + + // Encode the query request + req, err := encodeMessage(messageKeyRequestType, keyRequest{Key: rawKey}) + if err != nil { + return resp, err + } + + qParam := k.serf.DefaultQueryParams() + if opts != nil { + qParam.RelayFactor = opts.RelayFactor + } + queryResp, err := k.serf.Query(qName, req, qParam) + if err != nil { + return resp, err + } + + // Handle the response stream and populate the KeyResponse + resp.NumNodes = k.serf.memberlist.NumMembers() + k.streamKeyResp(resp, queryResp.respCh) + + // Check the response for any reported failure conditions + if resp.NumErr != 0 { + return resp, fmt.Errorf("%d/%d nodes reported failure", resp.NumErr, resp.NumNodes) + } + if resp.NumResp != resp.NumNodes { + return resp, fmt.Errorf("%d/%d nodes reported success", resp.NumResp, resp.NumNodes) + } + + return resp, nil +} + +// InstallKey handles broadcasting a query to all members and gathering +// responses from each of them, returning a list of messages from each node +// and any applicable error conditions. +func (k *KeyManager) InstallKey(key string) (*KeyResponse, error) { + return k.InstallKeyWithOptions(key, nil) +} + +func (k *KeyManager) InstallKeyWithOptions(key string, opts *KeyRequestOptions) (*KeyResponse, error) { + k.l.Lock() + defer k.l.Unlock() + + return k.handleKeyRequest(key, installKeyQuery, opts) +} + +// UseKey handles broadcasting a primary key change to all members in the +// cluster, and gathering any response messages. If successful, there should +// be an empty KeyResponse returned. +func (k *KeyManager) UseKey(key string) (*KeyResponse, error) { + return k.UseKeyWithOptions(key, nil) +} + +func (k *KeyManager) UseKeyWithOptions(key string, opts *KeyRequestOptions) (*KeyResponse, error) { + k.l.Lock() + defer k.l.Unlock() + + return k.handleKeyRequest(key, useKeyQuery, opts) +} + +// RemoveKey handles broadcasting a key to the cluster for removal. Each member +// will receive this event, and if they have the key in their keyring, remove +// it. If any errors are encountered, RemoveKey will collect and relay them. +func (k *KeyManager) RemoveKey(key string) (*KeyResponse, error) { + return k.RemoveKeyWithOptions(key, nil) +} + +func (k *KeyManager) RemoveKeyWithOptions(key string, opts *KeyRequestOptions) (*KeyResponse, error) { + k.l.Lock() + defer k.l.Unlock() + + return k.handleKeyRequest(key, removeKeyQuery, opts) +} + +// ListKeys is used to collect installed keys from members in a Serf cluster +// and return an aggregated list of all installed keys. This is useful to +// operators to ensure that there are no lingering keys installed on any agents. +// Since having multiple keys installed can cause performance penalties in some +// cases, it's important to verify this information and remove unneeded keys. 
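+//
+// A typical rotation, ending with ListKeys to verify the cluster state,
+// might look like this (sketch only; the base64 key strings are
+// placeholders and error handling is elided):
+//
+//	km := s.KeyManager()
+//	km.InstallKey(newKeyB64) // install on every node
+//	km.UseKey(newKeyB64)     // make it the primary key
+//	km.RemoveKey(oldKeyB64)  // retire the old key
+//	resp, _ := km.ListKeys() // resp.Keys counts installs per key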
+func (k *KeyManager) ListKeys() (*KeyResponse, error) { + return k.ListKeysWithOptions(nil) +} + +func (k *KeyManager) ListKeysWithOptions(opts *KeyRequestOptions) (*KeyResponse, error) { + k.l.RLock() + defer k.l.RUnlock() + + return k.handleKeyRequest("", listKeysQuery, opts) +} \ No newline at end of file diff --git a/vendor/github.com/hashicorp/serf/serf/lamport.go b/vendor/github.com/hashicorp/serf/serf/lamport.go new file mode 100644 index 0000000000..08f4aa7a62 --- /dev/null +++ b/vendor/github.com/hashicorp/serf/serf/lamport.go @@ -0,0 +1,45 @@ +package serf + +import ( + "sync/atomic" +) + +// LamportClock is a thread safe implementation of a lamport clock. It +// uses efficient atomic operations for all of its functions, falling back +// to a heavy lock only if there are enough CAS failures. +type LamportClock struct { + counter uint64 +} + +// LamportTime is the value of a LamportClock. +type LamportTime uint64 + +// Time is used to return the current value of the lamport clock +func (l *LamportClock) Time() LamportTime { + return LamportTime(atomic.LoadUint64(&l.counter)) +} + +// Increment is used to increment and return the value of the lamport clock +func (l *LamportClock) Increment() LamportTime { + return LamportTime(atomic.AddUint64(&l.counter, 1)) +} + +// Witness is called to update our local clock if necessary after +// witnessing a clock value received from another process +func (l *LamportClock) Witness(v LamportTime) { +WITNESS: + // If the other value is old, we do not need to do anything + cur := atomic.LoadUint64(&l.counter) + other := uint64(v) + if other < cur { + return + } + + // Ensure that our local clock is at least one ahead. + if !atomic.CompareAndSwapUint64(&l.counter, cur, other+1) { + // The CAS failed, so we just retry. Eventually our CAS should + // succeed or a future witness will pass us by and our witness + // will end. 
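+		// For example: if our counter is 5 and we witness 9, the CAS
+		// attempts to move 5 -> 10; if another goroutine already moved
+		// the counter to 11, the retry sees 9 < 11 and returns.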
+		goto WITNESS
+	}
+}
diff --git a/vendor/github.com/hashicorp/serf/serf/merge_delegate.go b/vendor/github.com/hashicorp/serf/serf/merge_delegate.go
new file mode 100644
index 0000000000..7fdc732887
--- /dev/null
+++ b/vendor/github.com/hashicorp/serf/serf/merge_delegate.go
@@ -0,0 +1,44 @@
+package serf
+
+import (
+	"net"
+
+	"github.com/hashicorp/memberlist"
+)
+
+type MergeDelegate interface {
+	NotifyMerge([]*Member) error
+}
+
+type mergeDelegate struct {
+	serf *Serf
+}
+
+func (m *mergeDelegate) NotifyMerge(nodes []*memberlist.Node) error {
+	members := make([]*Member, len(nodes))
+	for idx, n := range nodes {
+		members[idx] = m.nodeToMember(n)
+	}
+	return m.serf.config.Merge.NotifyMerge(members)
+}
+
+func (m *mergeDelegate) NotifyAlive(peer *memberlist.Node) error {
+	member := m.nodeToMember(peer)
+	return m.serf.config.Merge.NotifyMerge([]*Member{member})
+}
+
+func (m *mergeDelegate) nodeToMember(n *memberlist.Node) *Member {
+	return &Member{
+		Name:        n.Name,
+		Addr:        net.IP(n.Addr),
+		Port:        n.Port,
+		Tags:        m.serf.decodeTags(n.Meta),
+		Status:      StatusNone,
+		ProtocolMin: n.PMin,
+		ProtocolMax: n.PMax,
+		ProtocolCur: n.PCur,
+		DelegateMin: n.DMin,
+		DelegateMax: n.DMax,
+		DelegateCur: n.DCur,
+	}
+}
diff --git a/vendor/github.com/hashicorp/serf/serf/messages.go b/vendor/github.com/hashicorp/serf/serf/messages.go
new file mode 100644
index 0000000000..20df5b8e83
--- /dev/null
+++ b/vendor/github.com/hashicorp/serf/serf/messages.go
@@ -0,0 +1,173 @@
+package serf
+
+import (
+	"bytes"
+	"net"
+	"time"
+
+	"github.com/hashicorp/go-msgpack/codec"
+)
+
+// messageType are the types of gossip messages Serf will send along
+// memberlist.
+type messageType uint8
+
+const (
+	messageLeaveType messageType = iota
+	messageJoinType
+	messagePushPullType
+	messageUserEventType
+	messageQueryType
+	messageQueryResponseType
+	messageConflictResponseType
+	messageKeyRequestType
+	messageKeyResponseType
+	messageRelayType
+)
+
+const (
+	// Ack flag is used to force receiver to send an ack back
+	queryFlagAck uint32 = 1 << iota
+
+	// NoBroadcast is used to prevent re-broadcast of a query.
+	// This can be used to selectively send queries to individual members
+	queryFlagNoBroadcast
)
+
+// filterType is used with a queryFilter to specify the type of
+// filter we are sending
+type filterType uint8
+
+const (
+	filterNodeType filterType = iota
+	filterTagType
+)
+
+// messageJoin is the message broadcasted after we join to
+// associate the node with a lamport clock
+type messageJoin struct {
+	LTime LamportTime
+	Node  string
+}
+
+// messageLeave is the message broadcasted to signal the intention to
+// leave.
+type messageLeave struct {
+	LTime LamportTime
+	Node  string
+}
+
+// messagePushPull is used when doing a state exchange. This
+// is a relatively large message, but is sent infrequently
+type messagePushPull struct {
+	LTime        LamportTime            // Current node lamport time
+	StatusLTimes map[string]LamportTime // Maps the node to its status time
+	LeftMembers  []string               // List of left nodes
+	EventLTime   LamportTime            // Lamport time for event clock
+	Events       []*userEvents          // Recent events
+	QueryLTime   LamportTime            // Lamport time for query clock
+}
+
+// messageUserEvent is used for user-generated events
+type messageUserEvent struct {
+	LTime   LamportTime
+	Name    string
+	Payload []byte
+	CC      bool // "Can Coalesce".
Zero value is compatible with Serf 0.1 +} + +// messageQuery is used for query events +type messageQuery struct { + LTime LamportTime // Event lamport time + ID uint32 // Query ID, randomly generated + Addr []byte // Source address, used for a direct reply + Port uint16 // Source port, used for a direct reply + Filters [][]byte // Potential query filters + Flags uint32 // Used to provide various flags + RelayFactor uint8 // Used to set the number of duplicate relayed responses + Timeout time.Duration // Maximum time between delivery and response + Name string // Query name + Payload []byte // Query payload +} + +// Ack checks if the ack flag is set +func (m *messageQuery) Ack() bool { + return (m.Flags & queryFlagAck) != 0 +} + +// NoBroadcast checks if the no broadcast flag is set +func (m *messageQuery) NoBroadcast() bool { + return (m.Flags & queryFlagNoBroadcast) != 0 +} + +// filterNode is used with the filterNodeType, and is a list +// of node names +type filterNode []string + +// filterTag is used with the filterTagType and is a regular +// expression to apply to a tag +type filterTag struct { + Tag string + Expr string +} + +// messageQueryResponse is used to respond to a query +type messageQueryResponse struct { + LTime LamportTime // Event lamport time + ID uint32 // Query ID + From string // Node name + Flags uint32 // Used to provide various flags + Payload []byte // Optional response payload +} + +// Ack checks if the ack flag is set +func (m *messageQueryResponse) Ack() bool { + return (m.Flags & queryFlagAck) != 0 +} + +func decodeMessage(buf []byte, out interface{}) error { + var handle codec.MsgpackHandle + return codec.NewDecoder(bytes.NewReader(buf), &handle).Decode(out) +} + +func encodeMessage(t messageType, msg interface{}) ([]byte, error) { + buf := bytes.NewBuffer(nil) + buf.WriteByte(uint8(t)) + + handle := codec.MsgpackHandle{} + encoder := codec.NewEncoder(buf, &handle) + err := encoder.Encode(msg) + return buf.Bytes(), err +} + +// relayHeader is used to store the end destination of a relayed message +type relayHeader struct { + DestAddr net.UDPAddr +} + +// encodeRelayMessage wraps a message in the messageRelayType, adding the length and +// address of the end recipient to the front of the message +func encodeRelayMessage(t messageType, addr net.UDPAddr, msg interface{}) ([]byte, error) { + buf := bytes.NewBuffer(nil) + handle := codec.MsgpackHandle{} + encoder := codec.NewEncoder(buf, &handle) + + buf.WriteByte(uint8(messageRelayType)) + if err := encoder.Encode(relayHeader{DestAddr: addr}); err != nil { + return nil, err + } + + buf.WriteByte(uint8(t)) + err := encoder.Encode(msg) + return buf.Bytes(), err +} + +func encodeFilter(f filterType, filt interface{}) ([]byte, error) { + buf := bytes.NewBuffer(nil) + buf.WriteByte(uint8(f)) + + handle := codec.MsgpackHandle{} + encoder := codec.NewEncoder(buf, &handle) + err := encoder.Encode(filt) + return buf.Bytes(), err +} diff --git a/vendor/github.com/hashicorp/serf/serf/ping_delegate.go b/vendor/github.com/hashicorp/serf/serf/ping_delegate.go new file mode 100644 index 0000000000..a482685a20 --- /dev/null +++ b/vendor/github.com/hashicorp/serf/serf/ping_delegate.go @@ -0,0 +1,89 @@ +package serf + +import ( + "bytes" + "log" + "time" + + "github.com/armon/go-metrics" + "github.com/hashicorp/go-msgpack/codec" + "github.com/hashicorp/memberlist" + "github.com/hashicorp/serf/coordinate" +) + +// pingDelegate is notified when memberlist successfully completes a direct ping +// of a peer node. 
We use this to update our estimated network coordinate, as +// well as cache the coordinate of the peer. +type pingDelegate struct { + serf *Serf +} + +const ( + // PingVersion is an internal version for the ping message, above the normal + // versioning we get from the protocol version. This enables small updates + // to the ping message without a full protocol bump. + PingVersion = 1 +) + +// AckPayload is called to produce a payload to send back in response to a ping +// request. +func (p *pingDelegate) AckPayload() []byte { + var buf bytes.Buffer + + // The first byte is the version number, forming a simple header. + version := []byte{PingVersion} + buf.Write(version) + + // The rest of the message is the serialized coordinate. + enc := codec.NewEncoder(&buf, &codec.MsgpackHandle{}) + if err := enc.Encode(p.serf.coordClient.GetCoordinate()); err != nil { + log.Printf("[ERR] serf: Failed to encode coordinate: %v\n", err) + } + return buf.Bytes() +} + +// NotifyPingComplete is called when this node successfully completes a direct ping +// of a peer node. +func (p *pingDelegate) NotifyPingComplete(other *memberlist.Node, rtt time.Duration, payload []byte) { + if payload == nil || len(payload) == 0 { + return + } + + // Verify ping version in the header. + version := payload[0] + if version != PingVersion { + log.Printf("[ERR] serf: Unsupported ping version: %v", version) + return + } + + // Process the remainder of the message as a coordinate. + r := bytes.NewReader(payload[1:]) + dec := codec.NewDecoder(r, &codec.MsgpackHandle{}) + var coord coordinate.Coordinate + if err := dec.Decode(&coord); err != nil { + log.Printf("[ERR] serf: Failed to decode coordinate from ping: %v", err) + } + + // Apply the update. Since this is a coordinate coming from some place + // else we harden this and look for dimensionality problems proactively. + before := p.serf.coordClient.GetCoordinate() + if before.IsCompatibleWith(&coord) { + after := p.serf.coordClient.Update(other.Name, &coord, rtt) + + // Publish some metrics to give us an idea of how much we are + // adjusting each time we update. + d := float32(before.DistanceTo(after).Seconds() * 1.0e3) + metrics.AddSample([]string{"serf", "coordinate", "adjustment-ms"}, d) + + // Cache the coordinate for the other node, and add our own + // to the cache as well since it just got updated. This lets + // users call GetCachedCoordinate with our node name, which is + // more friendly. + p.serf.coordCacheLock.Lock() + p.serf.coordCache[other.Name] = &coord + p.serf.coordCache[p.serf.config.NodeName] = p.serf.coordClient.GetCoordinate() + p.serf.coordCacheLock.Unlock() + } else { + log.Printf("[ERR] serf: Rejected bad coordinate: %v\n", coord) + } +} diff --git a/vendor/github.com/hashicorp/serf/serf/query.go b/vendor/github.com/hashicorp/serf/serf/query.go new file mode 100644 index 0000000000..5412821e30 --- /dev/null +++ b/vendor/github.com/hashicorp/serf/serf/query.go @@ -0,0 +1,294 @@ +package serf + +import ( + "fmt" + "math" + "math/rand" + "net" + "regexp" + "sync" + "time" +) + +// QueryParam is provided to Query() to configure the parameters of the +// query. If not provided, sane defaults will be used. 
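+//
+// For example (sketch only; the node names are illustrative):
+//
+//	params := s.DefaultQueryParams()
+//	params.FilterNodes = []string{"node-a", "node-b"}
+//	params.RequestAck = true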
+type QueryParam struct {
+	// If provided, we restrict the nodes that should respond to those
+	// with names in this list
+	FilterNodes []string
+
+	// FilterTags maps a tag name to a regular expression that is applied
+	// to restrict the nodes that should respond
+	FilterTags map[string]string
+
+	// If true, we are requesting a delivery acknowledgement from
+	// every node that meets the filter requirement. This means nodes
+	// that receive the message but do not pass the filters will not
+	// send an ack.
+	RequestAck bool
+
+	// RelayFactor controls the number of duplicate responses to relay
+	// back to the sender through other nodes for redundancy.
+	RelayFactor uint8
+
+	// The timeout limits how long the query is left open. If not provided,
+	// then a default timeout is used based on the configuration of Serf
+	Timeout time.Duration
+}
+
+// DefaultQueryTimeout returns the default timeout value for a query.
+// Computed as GossipInterval * QueryTimeoutMult * log(N+1)
+func (s *Serf) DefaultQueryTimeout() time.Duration {
+	n := s.memberlist.NumMembers()
+	timeout := s.config.MemberlistConfig.GossipInterval
+	timeout *= time.Duration(s.config.QueryTimeoutMult)
+	timeout *= time.Duration(math.Ceil(math.Log10(float64(n + 1))))
+	return timeout
+}
+
+// DefaultQueryParams is used to return the default query parameters
+func (s *Serf) DefaultQueryParams() *QueryParam {
+	return &QueryParam{
+		FilterNodes: nil,
+		FilterTags:  nil,
+		RequestAck:  false,
+		Timeout:     s.DefaultQueryTimeout(),
+	}
+}
+
+// encodeFilters is used to convert the filters into the wire format
+func (q *QueryParam) encodeFilters() ([][]byte, error) {
+	var filters [][]byte
+
+	// Add the node filter
+	if len(q.FilterNodes) > 0 {
+		if buf, err := encodeFilter(filterNodeType, q.FilterNodes); err != nil {
+			return nil, err
+		} else {
+			filters = append(filters, buf)
+		}
+	}
+
+	// Add the tag filters
+	for tag, expr := range q.FilterTags {
+		filt := filterTag{tag, expr}
+		if buf, err := encodeFilter(filterTagType, &filt); err != nil {
+			return nil, err
+		} else {
+			filters = append(filters, buf)
+		}
+	}
+
+	return filters, nil
+}
+
+// QueryResponse is returned for each new Query. It is used to collect
+// acks as well as responses and to provide those back to a client.
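+// Both channels are closed when the query finishes, so a caller can simply
+// range over them, e.g. (sketch only):
+//
+//	for r := range resp.ResponseCh() {
+//		fmt.Printf("%s: %s\n", r.From, r.Payload)
+//	}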
+type QueryResponse struct {
+	// ackCh is used to send the name of a node for which we've received an ack
+	ackCh chan string
+
+	// deadline is the query end time (start + query timeout)
+	deadline time.Time
+
+	// Query ID
+	id uint32
+
+	// Stores the LTime of the query
+	lTime LamportTime
+
+	// respCh is used to send a response from a node
+	respCh chan NodeResponse
+
+	// acks/responses are used to track the nodes that have sent an ack/response
+	acks      map[string]struct{}
+	responses map[string]struct{}
+
+	closed    bool
+	closeLock sync.Mutex
+}
+
+// newQueryResponse is used to construct a new query response
+func newQueryResponse(n int, q *messageQuery) *QueryResponse {
+	resp := &QueryResponse{
+		deadline:  time.Now().Add(q.Timeout),
+		id:        q.ID,
+		lTime:     q.LTime,
+		respCh:    make(chan NodeResponse, n),
+		responses: make(map[string]struct{}),
+	}
+	if q.Ack() {
+		resp.ackCh = make(chan string, n)
+		resp.acks = make(map[string]struct{})
+	}
+	return resp
+}
+
+// Close is used to close the query, which will close the underlying
+// channels and prevent further deliveries
+func (r *QueryResponse) Close() {
+	r.closeLock.Lock()
+	defer r.closeLock.Unlock()
+	if r.closed {
+		return
+	}
+	r.closed = true
+	if r.ackCh != nil {
+		close(r.ackCh)
+	}
+	if r.respCh != nil {
+		close(r.respCh)
+	}
+}
+
+// Deadline returns the ending deadline of the query
+func (r *QueryResponse) Deadline() time.Time {
+	return r.deadline
+}
+
+// Finished returns if the query is finished running
+func (r *QueryResponse) Finished() bool {
+	return r.closed || time.Now().After(r.deadline)
+}
+
+// AckCh returns a channel that can be used to listen for acks.
+// The channel will be closed when the query is finished. This is nil
+// if the query did not specify RequestAck.
+func (r *QueryResponse) AckCh() <-chan string {
+	return r.ackCh
+}
+
+// ResponseCh returns a channel that can be used to listen for responses.
+// Channel will be closed when the query is finished.
+func (r *QueryResponse) ResponseCh() <-chan NodeResponse {
+	return r.respCh
+}
+
+// NodeResponse is used to represent a single response from a node
+type NodeResponse struct {
+	From    string
+	Payload []byte
+}
+
+// shouldProcessQuery checks if a query should be processed given
+// a set of filters.
+func (s *Serf) shouldProcessQuery(filters [][]byte) bool {
+	for _, filter := range filters {
+		switch filterType(filter[0]) {
+		case filterNodeType:
+			// Decode the filter
+			var nodes filterNode
+			if err := decodeMessage(filter[1:], &nodes); err != nil {
+				s.logger.Printf("[WARN] serf: failed to decode filterNodeType: %v", err)
+				return false
+			}
+
+			// Check if we are being targeted
+			found := false
+			for _, n := range nodes {
+				if n == s.config.NodeName {
+					found = true
+					break
+				}
+			}
+			if !found {
+				return false
+			}
+
+		case filterTagType:
+			// Decode the filter
+			var filt filterTag
+			if err := decodeMessage(filter[1:], &filt); err != nil {
+				s.logger.Printf("[WARN] serf: failed to decode filterTagType: %v", err)
+				return false
+			}
+
+			// Check if we match this regex
+			tags := s.config.Tags
+			matched, err := regexp.MatchString(filt.Expr, tags[filt.Tag])
+			if err != nil {
+				s.logger.Printf("[WARN] serf: failed to compile filter regex (%s): %v", filt.Expr, err)
+				return false
+			}
+			if !matched {
+				return false
+			}
+
+		default:
+			s.logger.Printf("[WARN] serf: query has unrecognized filter type: %d", filter[0])
+			return false
+		}
+	}
+	return true
+}
+
+// relayResponse will relay a copy of the given response to up to relayFactor
+// other members.
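+// The copy is wrapped in a messageRelayType envelope whose relayHeader
+// carries the original requester's UDP address, so the relaying peers can
+// forward it without decoding the inner response.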
+func (s *Serf) relayResponse(relayFactor uint8, addr net.UDPAddr, resp *messageQueryResponse) error { + if relayFactor == 0 { + return nil + } + + // Needs to be worth it; we need to have at least relayFactor *other* + // nodes. If you have a tiny cluster then the relayFactor shouldn't + // be needed. + members := s.Members() + if len(members) < int(relayFactor)+1 { + return nil + } + + // Prep the relay message, which is a wrapped version of the original. + raw, err := encodeRelayMessage(messageQueryResponseType, addr, &resp) + if err != nil { + return fmt.Errorf("failed to format relayed response: %v", err) + } + if len(raw) > s.config.QueryResponseSizeLimit { + return fmt.Errorf("relayed response exceeds limit of %d bytes", s.config.QueryResponseSizeLimit) + } + + // Relay to a random set of peers. + localName := s.LocalMember().Name + relayMembers := kRandomMembers(int(relayFactor), members, func(m Member) bool { + return m.Status != StatusAlive || m.ProtocolMax < 5 || m.Name == localName + }) + for _, m := range relayMembers { + relayAddr := net.UDPAddr{IP: m.Addr, Port: int(m.Port)} + if err := s.memberlist.SendTo(&relayAddr, raw); err != nil { + return fmt.Errorf("failed to send relay response: %v", err) + } + } + return nil +} + +// kRandomMembers selects up to k members from a given list, optionally +// filtering by the given filterFunc +func kRandomMembers(k int, members []Member, filterFunc func(Member) bool) []Member { + n := len(members) + kMembers := make([]Member, 0, k) +OUTER: + // Probe up to 3*n times, with large n this is not necessary + // since k << n, but with small n we want search to be + // exhaustive + for i := 0; i < 3*n && len(kMembers) < k; i++ { + // Get random member + idx := rand.Intn(n) + member := members[idx] + + // Give the filter a shot at it. + if filterFunc != nil && filterFunc(member) { + continue OUTER + } + + // Check if we have this member already + for j := 0; j < len(kMembers); j++ { + if member.Name == kMembers[j].Name { + continue OUTER + } + } + + // Append the member + kMembers = append(kMembers, member) + } + + return kMembers +} diff --git a/vendor/github.com/hashicorp/serf/serf/serf.go b/vendor/github.com/hashicorp/serf/serf/serf.go new file mode 100644 index 0000000000..af2d17ede2 --- /dev/null +++ b/vendor/github.com/hashicorp/serf/serf/serf.go @@ -0,0 +1,1739 @@ +package serf + +import ( + "bytes" + "encoding/base64" + "encoding/json" + "errors" + "fmt" + "io/ioutil" + "log" + "math/rand" + "net" + "os" + "strconv" + "sync" + "time" + + "github.com/armon/go-metrics" + "github.com/hashicorp/go-msgpack/codec" + "github.com/hashicorp/memberlist" + "github.com/hashicorp/serf/coordinate" +) + +// These are the protocol versions that Serf can _understand_. These are +// Serf-level protocol versions that are passed down as the delegate +// version to memberlist below. +const ( + ProtocolVersionMin uint8 = 2 + ProtocolVersionMax = 5 +) + +const ( + // Used to detect if the meta data is tags + // or if it is a raw role + tagMagicByte uint8 = 255 +) + +var ( + // FeatureNotSupported is returned if a feature cannot be used + // due to an older protocol version being used. + FeatureNotSupported = fmt.Errorf("Feature not supported") +) + +func init() { + // Seed the random number generator + rand.Seed(time.Now().UnixNano()) +} + +// Serf is a single node that is part of a single cluster that gets +// events about joins/leaves/failures/etc. It is created with the Create +// method. 
+// +// All functions on the Serf structure are safe to call concurrently. +type Serf struct { + // The clocks for different purposes. These MUST be the first things + // in this struct due to Golang issue #599. + clock LamportClock + eventClock LamportClock + queryClock LamportClock + + broadcasts *memberlist.TransmitLimitedQueue + config *Config + failedMembers []*memberState + leftMembers []*memberState + memberlist *memberlist.Memberlist + memberLock sync.RWMutex + members map[string]*memberState + + // recentIntents the lamport time and type of intent for a given node in + // case we get an intent before the relevant memberlist event. This is + // indexed by node, and always store the latest lamport time / intent + // we've seen. The memberLock protects this structure. + recentIntents map[string]nodeIntent + + eventBroadcasts *memberlist.TransmitLimitedQueue + eventBuffer []*userEvents + eventJoinIgnore bool + eventMinTime LamportTime + eventLock sync.RWMutex + + queryBroadcasts *memberlist.TransmitLimitedQueue + queryBuffer []*queries + queryMinTime LamportTime + queryResponse map[LamportTime]*QueryResponse + queryLock sync.RWMutex + + logger *log.Logger + joinLock sync.Mutex + stateLock sync.Mutex + state SerfState + shutdownCh chan struct{} + + snapshotter *Snapshotter + keyManager *KeyManager + + coordClient *coordinate.Client + coordCache map[string]*coordinate.Coordinate + coordCacheLock sync.RWMutex +} + +// SerfState is the state of the Serf instance. +type SerfState int + +const ( + SerfAlive SerfState = iota + SerfLeaving + SerfLeft + SerfShutdown +) + +func (s SerfState) String() string { + switch s { + case SerfAlive: + return "alive" + case SerfLeaving: + return "leaving" + case SerfLeft: + return "left" + case SerfShutdown: + return "shutdown" + default: + return "unknown" + } +} + +// Member is a single member of the Serf cluster. +type Member struct { + Name string + Addr net.IP + Port uint16 + Tags map[string]string + Status MemberStatus + + // The minimum, maximum, and current values of the protocol versions + // and delegate (Serf) protocol versions that each member can understand + // or is speaking. + ProtocolMin uint8 + ProtocolMax uint8 + ProtocolCur uint8 + DelegateMin uint8 + DelegateMax uint8 + DelegateCur uint8 +} + +// MemberStatus is the state that a member is in. +type MemberStatus int + +const ( + StatusNone MemberStatus = iota + StatusAlive + StatusLeaving + StatusLeft + StatusFailed +) + +func (s MemberStatus) String() string { + switch s { + case StatusNone: + return "none" + case StatusAlive: + return "alive" + case StatusLeaving: + return "leaving" + case StatusLeft: + return "left" + case StatusFailed: + return "failed" + default: + panic(fmt.Sprintf("unknown MemberStatus: %d", s)) + } +} + +// memberState is used to track members that are no longer active due to +// leaving, failing, partitioning, etc. It tracks the member along with +// when that member was marked as leaving. +type memberState struct { + Member + statusLTime LamportTime // lamport clock time of last received message + leaveTime time.Time // wall clock time of leave +} + +// nodeIntent is used to buffer intents for out-of-order deliveries. +type nodeIntent struct { + // Type is the intent being tracked. Only messageJoinType and + // messageLeaveType are tracked. + Type messageType + + // WallTime is the wall clock time we saw this intent in order to + // expire it from the buffer. + WallTime time.Time + + // LTime is the Lamport time, used for cluster-wide ordering of events. 
+ LTime LamportTime +} + +// userEvent is used to buffer events to prevent re-delivery +type userEvent struct { + Name string + Payload []byte +} + +func (ue *userEvent) Equals(other *userEvent) bool { + if ue.Name != other.Name { + return false + } + if bytes.Compare(ue.Payload, other.Payload) != 0 { + return false + } + return true +} + +// userEvents stores all the user events at a specific time +type userEvents struct { + LTime LamportTime + Events []userEvent +} + +// queries stores all the query ids at a specific time +type queries struct { + LTime LamportTime + QueryIDs []uint32 +} + +const ( + UserEventSizeLimit = 512 // Maximum byte size for event name and payload + snapshotSizeLimit = 128 * 1024 // Maximum 128 KB snapshot +) + +// Create creates a new Serf instance, starting all the background tasks +// to maintain cluster membership information. +// +// After calling this function, the configuration should no longer be used +// or modified by the caller. +func Create(conf *Config) (*Serf, error) { + conf.Init() + if conf.ProtocolVersion < ProtocolVersionMin { + return nil, fmt.Errorf("Protocol version '%d' too low. Must be in range: [%d, %d]", + conf.ProtocolVersion, ProtocolVersionMin, ProtocolVersionMax) + } else if conf.ProtocolVersion > ProtocolVersionMax { + return nil, fmt.Errorf("Protocol version '%d' too high. Must be in range: [%d, %d]", + conf.ProtocolVersion, ProtocolVersionMin, ProtocolVersionMax) + } + + if conf.LogOutput != nil && conf.Logger != nil { + return nil, fmt.Errorf("Cannot specify both LogOutput and Logger. Please choose a single log configuration setting.") + } + + logDest := conf.LogOutput + if logDest == nil { + logDest = os.Stderr + } + + logger := conf.Logger + if logger == nil { + logger = log.New(logDest, "", log.LstdFlags) + } + + serf := &Serf{ + config: conf, + logger: logger, + members: make(map[string]*memberState), + queryResponse: make(map[LamportTime]*QueryResponse), + shutdownCh: make(chan struct{}), + state: SerfAlive, + } + + // Check that the meta data length is okay + if len(serf.encodeTags(conf.Tags)) > memberlist.MetaMaxSize { + return nil, fmt.Errorf("Encoded length of tags exceeds limit of %d bytes", memberlist.MetaMaxSize) + } + + // Check if serf member event coalescing is enabled + if conf.CoalescePeriod > 0 && conf.QuiescentPeriod > 0 && conf.EventCh != nil { + c := &memberEventCoalescer{ + lastEvents: make(map[string]EventType), + latestEvents: make(map[string]coalesceEvent), + } + + conf.EventCh = coalescedEventCh(conf.EventCh, serf.shutdownCh, + conf.CoalescePeriod, conf.QuiescentPeriod, c) + } + + // Check if user event coalescing is enabled + if conf.UserCoalescePeriod > 0 && conf.UserQuiescentPeriod > 0 && conf.EventCh != nil { + c := &userEventCoalescer{ + events: make(map[string]*latestUserEvents), + } + + conf.EventCh = coalescedEventCh(conf.EventCh, serf.shutdownCh, + conf.UserCoalescePeriod, conf.UserQuiescentPeriod, c) + } + + // Listen for internal Serf queries. This is setup before the snapshotter, since + // we want to capture the query-time, but the internal listener does not passthrough + // the queries + outCh, err := newSerfQueries(serf, serf.logger, conf.EventCh, serf.shutdownCh) + if err != nil { + return nil, fmt.Errorf("Failed to setup serf query handler: %v", err) + } + conf.EventCh = outCh + + // Set up network coordinate client. 
+ if !conf.DisableCoordinates { + serf.coordClient, err = coordinate.NewClient(coordinate.DefaultConfig()) + if err != nil { + return nil, fmt.Errorf("Failed to create coordinate client: %v", err) + } + } + + // Try access the snapshot + var oldClock, oldEventClock, oldQueryClock LamportTime + var prev []*PreviousNode + if conf.SnapshotPath != "" { + eventCh, snap, err := NewSnapshotter( + conf.SnapshotPath, + snapshotSizeLimit, + conf.RejoinAfterLeave, + serf.logger, + &serf.clock, + serf.coordClient, + conf.EventCh, + serf.shutdownCh) + if err != nil { + return nil, fmt.Errorf("Failed to setup snapshot: %v", err) + } + serf.snapshotter = snap + conf.EventCh = eventCh + prev = snap.AliveNodes() + oldClock = snap.LastClock() + oldEventClock = snap.LastEventClock() + oldQueryClock = snap.LastQueryClock() + serf.eventMinTime = oldEventClock + 1 + serf.queryMinTime = oldQueryClock + 1 + } + + // Set up the coordinate cache. We do this after we read the snapshot to + // make sure we get a good initial value from there, if we got one. + if !conf.DisableCoordinates { + serf.coordCache = make(map[string]*coordinate.Coordinate) + serf.coordCache[conf.NodeName] = serf.coordClient.GetCoordinate() + } + + // Setup the various broadcast queues, which we use to send our own + // custom broadcasts along the gossip channel. + serf.broadcasts = &memberlist.TransmitLimitedQueue{ + NumNodes: serf.NumNodes, + RetransmitMult: conf.MemberlistConfig.RetransmitMult, + } + serf.eventBroadcasts = &memberlist.TransmitLimitedQueue{ + NumNodes: serf.NumNodes, + RetransmitMult: conf.MemberlistConfig.RetransmitMult, + } + serf.queryBroadcasts = &memberlist.TransmitLimitedQueue{ + NumNodes: serf.NumNodes, + RetransmitMult: conf.MemberlistConfig.RetransmitMult, + } + + // Create the buffer for recent intents + serf.recentIntents = make(map[string]nodeIntent) + + // Create a buffer for events and queries + serf.eventBuffer = make([]*userEvents, conf.EventBuffer) + serf.queryBuffer = make([]*queries, conf.QueryBuffer) + + // Ensure our lamport clock is at least 1, so that the default + // join LTime of 0 does not cause issues + serf.clock.Increment() + serf.eventClock.Increment() + serf.queryClock.Increment() + + // Restore the clock from snap if we have one + serf.clock.Witness(oldClock) + serf.eventClock.Witness(oldEventClock) + serf.queryClock.Witness(oldQueryClock) + + // Modify the memberlist configuration with keys that we set + conf.MemberlistConfig.Events = &eventDelegate{serf: serf} + conf.MemberlistConfig.Conflict = &conflictDelegate{serf: serf} + conf.MemberlistConfig.Delegate = &delegate{serf: serf} + conf.MemberlistConfig.DelegateProtocolVersion = conf.ProtocolVersion + conf.MemberlistConfig.DelegateProtocolMin = ProtocolVersionMin + conf.MemberlistConfig.DelegateProtocolMax = ProtocolVersionMax + conf.MemberlistConfig.Name = conf.NodeName + conf.MemberlistConfig.ProtocolVersion = ProtocolVersionMap[conf.ProtocolVersion] + if !conf.DisableCoordinates { + conf.MemberlistConfig.Ping = &pingDelegate{serf: serf} + } + + // Setup a merge delegate if necessary + if conf.Merge != nil { + md := &mergeDelegate{serf: serf} + conf.MemberlistConfig.Merge = md + conf.MemberlistConfig.Alive = md + } + + // Create the underlying memberlist that will manage membership + // and failure detection for the Serf instance. 
+	memberlist, err := memberlist.Create(conf.MemberlistConfig)
+	if err != nil {
+		return nil, fmt.Errorf("Failed to create memberlist: %v", err)
+	}
+
+	serf.memberlist = memberlist
+
+	// Create a key manager for handling all encryption key changes
+	serf.keyManager = &KeyManager{serf: serf}
+
+	// Start the background tasks. See the documentation above each method
+	// for more information on their role.
+	go serf.handleReap()
+	go serf.handleReconnect()
+	go serf.checkQueueDepth("Intent", serf.broadcasts)
+	go serf.checkQueueDepth("Event", serf.eventBroadcasts)
+	go serf.checkQueueDepth("Query", serf.queryBroadcasts)
+
+	// Attempt to re-join the cluster if we have known nodes
+	if len(prev) != 0 {
+		go serf.handleRejoin(prev)
+	}
+
+	return serf, nil
+}
+
+// ProtocolVersion returns the current protocol version in use by Serf.
+// This is the Serf protocol version, not the memberlist protocol version.
+func (s *Serf) ProtocolVersion() uint8 {
+	return s.config.ProtocolVersion
+}
+
+// EncryptionEnabled is a predicate that determines whether or not encryption
+// is enabled, which can be possible in one of 2 cases:
+// - Single encryption key passed at agent start (no persistence)
+// - Keyring file provided at agent start
+func (s *Serf) EncryptionEnabled() bool {
+	return s.config.MemberlistConfig.Keyring != nil
+}
+
+// KeyManager returns the key manager for the current Serf instance.
+func (s *Serf) KeyManager() *KeyManager {
+	return s.keyManager
+}
+
+// UserEvent is used to broadcast a custom user event with a given
+// name and payload. The events must be fairly small, and if the
+// size limit is exceeded, an error will be returned. If coalesce is enabled,
+// nodes are allowed to coalesce this event. Coalescing is only available
+// starting in v0.2.
+func (s *Serf) UserEvent(name string, payload []byte, coalesce bool) error {
+	// Check the size limit
+	if len(name)+len(payload) > UserEventSizeLimit {
+		return fmt.Errorf("user event exceeds limit of %d bytes", UserEventSizeLimit)
+	}
+
+	// Create a message
+	msg := messageUserEvent{
+		LTime:   s.eventClock.Time(),
+		Name:    name,
+		Payload: payload,
+		CC:      coalesce,
+	}
+	s.eventClock.Increment()
+
+	// Process update locally
+	s.handleUserEvent(&msg)
+
+	// Start broadcasting the event
+	raw, err := encodeMessage(messageUserEventType, &msg)
+	if err != nil {
+		return err
+	}
+	s.eventBroadcasts.QueueBroadcast(&broadcast{
+		msg: raw,
+	})
+	return nil
+}
+
+// Query is used to broadcast a new query. The query must be fairly small,
+// and an error will be returned if the size limit is exceeded. This is only
+// available with protocol version 4 and newer. Query parameters are optional,
+// and if not provided, a sane set of defaults will be used.
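+//
+// A usage sketch (illustrative only; the query name and the handling below
+// are hypothetical, not part of this package):
+//
+//	resp, err := s.Query("uptime", nil, nil)
+//	if err != nil {
+//		return err
+//	}
+//	for r := range resp.ResponseCh() {
+//		fmt.Printf("%s answered: %s\n", r.From, r.Payload)
+//	}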
+func (s *Serf) Query(name string, payload []byte, params *QueryParam) (*QueryResponse, error) {
+	// Check that the latest protocol is in use
+	if s.ProtocolVersion() < 4 {
+		return nil, FeatureNotSupported
+	}
+
+	// Provide default parameters if none given
+	if params == nil {
+		params = s.DefaultQueryParams()
+	} else if params.Timeout == 0 {
+		params.Timeout = s.DefaultQueryTimeout()
+	}
+
+	// Get the local node
+	local := s.memberlist.LocalNode()
+
+	// Encode the filters
+	filters, err := params.encodeFilters()
+	if err != nil {
+		return nil, fmt.Errorf("Failed to format filters: %v", err)
+	}
+
+	// Setup the flags
+	var flags uint32
+	if params.RequestAck {
+		flags |= queryFlagAck
+	}
+
+	// Create a message
+	q := messageQuery{
+		LTime:       s.queryClock.Time(),
+		ID:          uint32(rand.Int31()),
+		Addr:        local.Addr,
+		Port:        local.Port,
+		Filters:     filters,
+		Flags:       flags,
+		RelayFactor: params.RelayFactor,
+		Timeout:     params.Timeout,
+		Name:        name,
+		Payload:     payload,
+	}
+
+	// Encode the query
+	raw, err := encodeMessage(messageQueryType, &q)
+	if err != nil {
+		return nil, err
+	}
+
+	// Check the size
+	if len(raw) > s.config.QuerySizeLimit {
+		return nil, fmt.Errorf("query exceeds limit of %d bytes", s.config.QuerySizeLimit)
+	}
+
+	// Register QueryResponse to track acks and responses
+	resp := newQueryResponse(s.memberlist.NumMembers(), &q)
+	s.registerQueryResponse(params.Timeout, resp)
+
+	// Process query locally
+	s.handleQuery(&q)
+
+	// Start broadcasting the event
+	s.queryBroadcasts.QueueBroadcast(&broadcast{
+		msg: raw,
+	})
+	return resp, nil
+}
+
+// registerQueryResponse is used to setup the listeners for the query,
+// and to schedule closing the query after the timeout.
+func (s *Serf) registerQueryResponse(timeout time.Duration, resp *QueryResponse) {
+	s.queryLock.Lock()
+	defer s.queryLock.Unlock()
+
+	// Map the LTime to the QueryResponse. This is necessarily 1-to-1,
+	// since we increment the time for each new query.
+	s.queryResponse[resp.lTime] = resp
+
+	// Setup a timer to close the response and deregister after the timeout
+	time.AfterFunc(timeout, func() {
+		s.queryLock.Lock()
+		delete(s.queryResponse, resp.lTime)
+		resp.Close()
+		s.queryLock.Unlock()
+	})
+}
+
+// SetTags is used to dynamically update the tags associated with
+// the local node. This will propagate the change to the rest of
+// the cluster. Blocks until the message is broadcast out.
+func (s *Serf) SetTags(tags map[string]string) error {
+	// Check that the metadata length is okay
+	if len(s.encodeTags(tags)) > memberlist.MetaMaxSize {
+		return fmt.Errorf("Encoded length of tags exceeds limit of %d bytes",
+			memberlist.MetaMaxSize)
+	}
+
+	// Update the config
+	s.config.Tags = tags
+
+	// Trigger a memberlist update
+	return s.memberlist.UpdateNode(s.config.BroadcastTimeout)
+}
+
+// Join joins an existing Serf cluster. Returns the number of nodes
+// successfully contacted. The returned error will be non-nil only in the
+// case that no nodes could be contacted. If ignoreOld is true, then any
+// user messages sent prior to the join will be ignored.
+func (s *Serf) Join(existing []string, ignoreOld bool) (int, error) {
+	// Do a quick state check
+	if s.State() != SerfAlive {
+		return 0, fmt.Errorf("Serf can't Join after Leave or Shutdown")
+	}
+
+	// Hold the joinLock; this is to make eventJoinIgnore safe
+	s.joinLock.Lock()
+	defer s.joinLock.Unlock()
+
+	// Ignore any events from a potential join. 
This is safe since we hold
+	// the joinLock and nobody else can be doing a Join
+	if ignoreOld {
+		s.eventJoinIgnore = true
+		defer func() {
+			s.eventJoinIgnore = false
+		}()
+	}
+
+	// Have memberlist attempt to join
+	num, err := s.memberlist.Join(existing)
+
+	// If we joined any nodes, broadcast the join message
+	if num > 0 {
+		// Start broadcasting the update
+		if err := s.broadcastJoin(s.clock.Time()); err != nil {
+			return num, err
+		}
+	}
+
+	return num, err
+}
+
+// broadcastJoin broadcasts a new join intent with a
+// given clock value. It is used on either join, or if
+// we need to refute an older leave intent. Cannot be called
+// with the memberLock held.
+func (s *Serf) broadcastJoin(ltime LamportTime) error {
+	// Construct message to update our lamport clock
+	msg := messageJoin{
+		LTime: ltime,
+		Node:  s.config.NodeName,
+	}
+	s.clock.Witness(ltime)
+
+	// Process update locally
+	s.handleNodeJoinIntent(&msg)
+
+	// Start broadcasting the update
+	if err := s.broadcast(messageJoinType, &msg, nil); err != nil {
+		s.logger.Printf("[WARN] serf: Failed to broadcast join intent: %v", err)
+		return err
+	}
+	return nil
+}
+
+// Leave gracefully exits the cluster. It is safe to call this multiple
+// times.
+func (s *Serf) Leave() error {
+	// Check the current state
+	s.stateLock.Lock()
+	if s.state == SerfLeft {
+		s.stateLock.Unlock()
+		return nil
+	} else if s.state == SerfLeaving {
+		s.stateLock.Unlock()
+		return fmt.Errorf("Leave already in progress")
+	} else if s.state == SerfShutdown {
+		s.stateLock.Unlock()
+		return fmt.Errorf("Leave called after Shutdown")
+	}
+	s.state = SerfLeaving
+	s.stateLock.Unlock()
+
+	// If we have a snapshot, mark we are leaving
+	if s.snapshotter != nil {
+		s.snapshotter.Leave()
+	}
+
+	// Construct the message for the graceful leave
+	msg := messageLeave{
+		LTime: s.clock.Time(),
+		Node:  s.config.NodeName,
+	}
+	s.clock.Increment()
+
+	// Process the leave locally
+	s.handleNodeLeaveIntent(&msg)
+
+	// Only broadcast the leave message if there is at least one
+	// other node alive.
+	if s.hasAliveMembers() {
+		notifyCh := make(chan struct{})
+		if err := s.broadcast(messageLeaveType, &msg, notifyCh); err != nil {
+			return err
+		}
+
+		select {
+		case <-notifyCh:
+		case <-time.After(s.config.BroadcastTimeout):
+			return errors.New("timeout while waiting for graceful leave")
+		}
+	}
+
+	// Attempt the memberlist leave
+	err := s.memberlist.Leave(s.config.BroadcastTimeout)
+	if err != nil {
+		return err
+	}
+
+	// Transition to Left only if we are not already shut down
+	s.stateLock.Lock()
+	if s.state != SerfShutdown {
+		s.state = SerfLeft
+	}
+	s.stateLock.Unlock()
+	return nil
+}
+
+// hasAliveMembers is called to check for any alive members other than
+// ourself.
+func (s *Serf) hasAliveMembers() bool {
+	s.memberLock.RLock()
+	defer s.memberLock.RUnlock()
+
+	hasAlive := false
+	for _, m := range s.members {
+		// Skip ourself, we want to know if OTHER members are alive
+		if m.Name == s.config.NodeName {
+			continue
+		}
+
+		if m.Status == StatusAlive {
+			hasAlive = true
+			break
+		}
+	}
+	return hasAlive
+}
+
+// LocalMember returns the Member information for the local node
+func (s *Serf) LocalMember() Member {
+	s.memberLock.RLock()
+	defer s.memberLock.RUnlock()
+	return s.members[s.config.NodeName].Member
+}
+
+// Members returns a point-in-time snapshot of the members of this cluster.
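+//
+// For example (an illustrative sketch, not part of the upstream API):
+//
+//	for _, m := range s.Members() {
+//		fmt.Printf("%s %s:%d %v\n", m.Name, m.Addr, m.Port, m.Status)
+//	}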
+func (s *Serf) Members() []Member {
+	s.memberLock.RLock()
+	defer s.memberLock.RUnlock()
+
+	members := make([]Member, 0, len(s.members))
+	for _, m := range s.members {
+		members = append(members, m.Member)
+	}
+
+	return members
+}
+
+// RemoveFailedNode forcibly removes a failed node from the cluster
+// immediately, instead of waiting for the reaper to eventually reclaim it.
+// This also has the effect that Serf will no longer attempt to reconnect
+// to this node.
+func (s *Serf) RemoveFailedNode(node string) error {
+	// Construct the message to broadcast
+	msg := messageLeave{
+		LTime: s.clock.Time(),
+		Node:  node,
+	}
+	s.clock.Increment()
+
+	// Process our own event
+	s.handleNodeLeaveIntent(&msg)
+
+	// If we have no members, then we don't need to broadcast
+	if !s.hasAliveMembers() {
+		return nil
+	}
+
+	// Broadcast the remove
+	notifyCh := make(chan struct{})
+	if err := s.broadcast(messageLeaveType, &msg, notifyCh); err != nil {
+		return err
+	}
+
+	// Wait for the broadcast
+	select {
+	case <-notifyCh:
+	case <-time.After(s.config.BroadcastTimeout):
+		return fmt.Errorf("timed out broadcasting node removal")
+	}
+
+	return nil
+}
+
+// Shutdown forcefully shuts down the Serf instance, stopping all network
+// activity and background maintenance associated with the instance.
+//
+// This is not a graceful shutdown, and should be preceded by a call
+// to Leave. Otherwise, other nodes in the cluster will detect this node's
+// exit as a node failure.
+//
+// It is safe to call this method multiple times.
+func (s *Serf) Shutdown() error {
+	s.stateLock.Lock()
+	defer s.stateLock.Unlock()
+
+	if s.state == SerfShutdown {
+		return nil
+	}
+
+	if s.state != SerfLeft {
+		s.logger.Printf("[WARN] serf: Shutdown without a Leave")
+	}
+
+	// Wait to close the shutdown channel until after we've shut down the
+	// memberlist and its associated network resources, since the shutdown
+	// channel signals that we are cleaned up outside of Serf.
+	s.state = SerfShutdown
+	err := s.memberlist.Shutdown()
+	if err != nil {
+		return err
+	}
+	close(s.shutdownCh)
+
+	// Wait for the snapshotter to finish if we have one
+	if s.snapshotter != nil {
+		s.snapshotter.Wait()
+	}
+
+	return nil
+}
+
+// ShutdownCh returns a channel that can be used to wait for
+// Serf to shutdown.
+func (s *Serf) ShutdownCh() <-chan struct{} {
+	return s.shutdownCh
+}
+
+// Memberlist is used to get access to the underlying Memberlist instance
+func (s *Serf) Memberlist() *memberlist.Memberlist {
+	return s.memberlist
+}
+
+// State is the current state of this Serf instance.
+func (s *Serf) State() SerfState {
+	s.stateLock.Lock()
+	defer s.stateLock.Unlock()
+	return s.state
+}
+
+// broadcast takes a Serf message type, encodes it for the wire, and queues
+// the broadcast. If a notify channel is given, this channel will be closed
+// when the broadcast is sent.
+func (s *Serf) broadcast(t messageType, msg interface{}, notify chan<- struct{}) error {
+	raw, err := encodeMessage(t, msg)
+	if err != nil {
+		return err
+	}
+
+	s.broadcasts.QueueBroadcast(&broadcast{
+		msg:    raw,
+		notify: notify,
+	})
+	return nil
+}
+
+// handleNodeJoin is called when a node join event is received
+// from memberlist.
+func (s *Serf) handleNodeJoin(n *memberlist.Node) { + s.memberLock.Lock() + defer s.memberLock.Unlock() + + var oldStatus MemberStatus + member, ok := s.members[n.Name] + if !ok { + oldStatus = StatusNone + member = &memberState{ + Member: Member{ + Name: n.Name, + Addr: net.IP(n.Addr), + Port: n.Port, + Tags: s.decodeTags(n.Meta), + Status: StatusAlive, + }, + } + + // Check if we have a join or leave intent. The intent buffer + // will only hold one event for this node, so the more recent + // one will take effect. + if join, ok := recentIntent(s.recentIntents, n.Name, messageJoinType); ok { + member.statusLTime = join + } + if leave, ok := recentIntent(s.recentIntents, n.Name, messageLeaveType); ok { + member.Status = StatusLeaving + member.statusLTime = leave + } + + s.members[n.Name] = member + } else { + oldStatus = member.Status + deadTime := time.Now().Sub(member.leaveTime) + if oldStatus == StatusFailed && deadTime < s.config.FlapTimeout { + metrics.IncrCounter([]string{"serf", "member", "flap"}, 1) + } + + member.Status = StatusAlive + member.leaveTime = time.Time{} + member.Addr = net.IP(n.Addr) + member.Port = n.Port + member.Tags = s.decodeTags(n.Meta) + } + + // Update the protocol versions every time we get an event + member.ProtocolMin = n.PMin + member.ProtocolMax = n.PMax + member.ProtocolCur = n.PCur + member.DelegateMin = n.DMin + member.DelegateMax = n.DMax + member.DelegateCur = n.DCur + + // If node was previously in a failed state, then clean up some + // internal accounting. + // TODO(mitchellh): needs tests to verify not reaped + if oldStatus == StatusFailed || oldStatus == StatusLeft { + s.failedMembers = removeOldMember(s.failedMembers, member.Name) + s.leftMembers = removeOldMember(s.leftMembers, member.Name) + } + + // Update some metrics + metrics.IncrCounter([]string{"serf", "member", "join"}, 1) + + // Send an event along + s.logger.Printf("[INFO] serf: EventMemberJoin: %s %s", + member.Member.Name, member.Member.Addr) + if s.config.EventCh != nil { + s.config.EventCh <- MemberEvent{ + Type: EventMemberJoin, + Members: []Member{member.Member}, + } + } +} + +// handleNodeLeave is called when a node leave event is received +// from memberlist. +func (s *Serf) handleNodeLeave(n *memberlist.Node) { + s.memberLock.Lock() + defer s.memberLock.Unlock() + + member, ok := s.members[n.Name] + if !ok { + // We've never even heard of this node that is supposedly + // leaving. Just ignore it completely. + return + } + + switch member.Status { + case StatusLeaving: + member.Status = StatusLeft + member.leaveTime = time.Now() + s.leftMembers = append(s.leftMembers, member) + case StatusAlive: + member.Status = StatusFailed + member.leaveTime = time.Now() + s.failedMembers = append(s.failedMembers, member) + default: + // Unknown state that it was in? 
Just don't do anything
+		s.logger.Printf("[WARN] serf: Bad state when leave: %d", member.Status)
+		return
+	}
+
+	// Send an event along
+	event := EventMemberLeave
+	eventStr := "EventMemberLeave"
+	if member.Status != StatusLeft {
+		event = EventMemberFailed
+		eventStr = "EventMemberFailed"
+	}
+
+	// Update some metrics
+	metrics.IncrCounter([]string{"serf", "member", member.Status.String()}, 1)
+
+	s.logger.Printf("[INFO] serf: %s: %s %s",
+		eventStr, member.Member.Name, member.Member.Addr)
+	if s.config.EventCh != nil {
+		s.config.EventCh <- MemberEvent{
+			Type:    event,
+			Members: []Member{member.Member},
+		}
+	}
+}
+
+// handleNodeUpdate is called when a node metadata update
+// has taken place
+func (s *Serf) handleNodeUpdate(n *memberlist.Node) {
+	s.memberLock.Lock()
+	defer s.memberLock.Unlock()
+
+	member, ok := s.members[n.Name]
+	if !ok {
+		// We've never even heard of this node that is updating.
+		// Just ignore it completely.
+		return
+	}
+
+	// Update the member attributes
+	member.Addr = net.IP(n.Addr)
+	member.Port = n.Port
+	member.Tags = s.decodeTags(n.Meta)
+
+	// Snag the latest versions. NOTE - the current memberlist code will NOT
+	// fire an update event if the metadata (for Serf, tags) stays the same
+	// and only the protocol versions change. If we make any Serf-level
+	// protocol changes where we want to get this event under those
+	// circumstances, we will need to update memberlist to do a check of
+	// versions as well as the metadata.
+	member.ProtocolMin = n.PMin
+	member.ProtocolMax = n.PMax
+	member.ProtocolCur = n.PCur
+	member.DelegateMin = n.DMin
+	member.DelegateMax = n.DMax
+	member.DelegateCur = n.DCur
+
+	// Update some metrics
+	metrics.IncrCounter([]string{"serf", "member", "update"}, 1)
+
+	// Send an event along
+	s.logger.Printf("[INFO] serf: EventMemberUpdate: %s", member.Member.Name)
+	if s.config.EventCh != nil {
+		s.config.EventCh <- MemberEvent{
+			Type:    EventMemberUpdate,
+			Members: []Member{member.Member},
+		}
+	}
+}
+
+// handleNodeLeaveIntent is called when an intent to leave is received.
+func (s *Serf) handleNodeLeaveIntent(leaveMsg *messageLeave) bool {
+	// Witness a potentially newer time
+	s.clock.Witness(leaveMsg.LTime)
+
+	s.memberLock.Lock()
+	defer s.memberLock.Unlock()
+
+	member, ok := s.members[leaveMsg.Node]
+	if !ok {
+		// Rebroadcast only if this was an update we hadn't seen before.
+		return upsertIntent(s.recentIntents, leaveMsg.Node, messageLeaveType, leaveMsg.LTime, time.Now)
+	}
+
+	// If the message is old, then it is irrelevant and we can skip it
+	if leaveMsg.LTime <= member.statusLTime {
+		return false
+	}
+
+	// Refute the leave intent if we are in the alive state.
+	// Must be done in another goroutine since we have the memberLock
+	if leaveMsg.Node == s.config.NodeName && s.state == SerfAlive {
+		s.logger.Printf("[DEBUG] serf: Refuting an older leave intent")
+		go s.broadcastJoin(s.clock.Time())
+		return false
+	}
+
+	// State transition depends on current state
+	switch member.Status {
+	case StatusAlive:
+		member.Status = StatusLeaving
+		member.statusLTime = leaveMsg.LTime
+		return true
+	case StatusFailed:
+		member.Status = StatusLeft
+		member.statusLTime = leaveMsg.LTime
+
+		// Remove from the failed list and add to the left list. We add
+		// to the left list so that when we do a sync, other nodes will
+		// remove it from their failed list.
+ s.failedMembers = removeOldMember(s.failedMembers, member.Name) + s.leftMembers = append(s.leftMembers, member) + + // We must push a message indicating the node has now + // left to allow higher-level applications to handle the + // graceful leave. + s.logger.Printf("[INFO] serf: EventMemberLeave (forced): %s %s", + member.Member.Name, member.Member.Addr) + if s.config.EventCh != nil { + s.config.EventCh <- MemberEvent{ + Type: EventMemberLeave, + Members: []Member{member.Member}, + } + } + return true + default: + return false + } +} + +// handleNodeJoinIntent is called when a node broadcasts a +// join message to set the lamport time of its join +func (s *Serf) handleNodeJoinIntent(joinMsg *messageJoin) bool { + // Witness a potentially newer time + s.clock.Witness(joinMsg.LTime) + + s.memberLock.Lock() + defer s.memberLock.Unlock() + + member, ok := s.members[joinMsg.Node] + if !ok { + // Rebroadcast only if this was an update we hadn't seen before. + return upsertIntent(s.recentIntents, joinMsg.Node, messageJoinType, joinMsg.LTime, time.Now) + } + + // Check if this time is newer than what we have + if joinMsg.LTime <= member.statusLTime { + return false + } + + // Update the LTime + member.statusLTime = joinMsg.LTime + + // If we are in the leaving state, we should go back to alive, + // since the leaving message must have been for an older time + if member.Status == StatusLeaving { + member.Status = StatusAlive + } + return true +} + +// handleUserEvent is called when a user event broadcast is +// received. Returns if the message should be rebroadcast. +func (s *Serf) handleUserEvent(eventMsg *messageUserEvent) bool { + // Witness a potentially newer time + s.eventClock.Witness(eventMsg.LTime) + + s.eventLock.Lock() + defer s.eventLock.Unlock() + + // Ignore if it is before our minimum event time + if eventMsg.LTime < s.eventMinTime { + return false + } + + // Check if this message is too old + curTime := s.eventClock.Time() + if curTime > LamportTime(len(s.eventBuffer)) && + eventMsg.LTime < curTime-LamportTime(len(s.eventBuffer)) { + s.logger.Printf( + "[WARN] serf: received old event %s from time %d (current: %d)", + eventMsg.Name, + eventMsg.LTime, + s.eventClock.Time()) + return false + } + + // Check if we've already seen this + idx := eventMsg.LTime % LamportTime(len(s.eventBuffer)) + seen := s.eventBuffer[idx] + userEvent := userEvent{Name: eventMsg.Name, Payload: eventMsg.Payload} + if seen != nil && seen.LTime == eventMsg.LTime { + for _, previous := range seen.Events { + if previous.Equals(&userEvent) { + return false + } + } + } else { + seen = &userEvents{LTime: eventMsg.LTime} + s.eventBuffer[idx] = seen + } + + // Add to recent events + seen.Events = append(seen.Events, userEvent) + + // Update some metrics + metrics.IncrCounter([]string{"serf", "events"}, 1) + metrics.IncrCounter([]string{"serf", "events", eventMsg.Name}, 1) + + if s.config.EventCh != nil { + s.config.EventCh <- UserEvent{ + LTime: eventMsg.LTime, + Name: eventMsg.Name, + Payload: eventMsg.Payload, + Coalesce: eventMsg.CC, + } + } + return true +} + +// handleQuery is called when a query broadcast is +// received. Returns if the message should be rebroadcast. 
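+//
+// Duplicate suppression mirrors handleUserEvent: queries are indexed into a
+// fixed-size ring buffer by LTime modulo the buffer length, and a query is
+// dropped as a duplicate only when an entry with the same LTime already
+// records its query ID.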
+func (s *Serf) handleQuery(query *messageQuery) bool {
+	// Witness a potentially newer time
+	s.queryClock.Witness(query.LTime)
+
+	s.queryLock.Lock()
+	defer s.queryLock.Unlock()
+
+	// Ignore if it is before our minimum query time
+	if query.LTime < s.queryMinTime {
+		return false
+	}
+
+	// Check if this message is too old
+	curTime := s.queryClock.Time()
+	if curTime > LamportTime(len(s.queryBuffer)) &&
+		query.LTime < curTime-LamportTime(len(s.queryBuffer)) {
+		s.logger.Printf(
+			"[WARN] serf: received old query %s from time %d (current: %d)",
+			query.Name,
+			query.LTime,
+			s.queryClock.Time())
+		return false
+	}
+
+	// Check if we've already seen this
+	idx := query.LTime % LamportTime(len(s.queryBuffer))
+	seen := s.queryBuffer[idx]
+	if seen != nil && seen.LTime == query.LTime {
+		for _, previous := range seen.QueryIDs {
+			if previous == query.ID {
+				// Seen this ID already
+				return false
+			}
+		}
+	} else {
+		seen = &queries{LTime: query.LTime}
+		s.queryBuffer[idx] = seen
+	}
+
+	// Add to recent queries
+	seen.QueryIDs = append(seen.QueryIDs, query.ID)
+
+	// Update some metrics
+	metrics.IncrCounter([]string{"serf", "queries"}, 1)
+	metrics.IncrCounter([]string{"serf", "queries", query.Name}, 1)
+
+	// Check if we should rebroadcast; this may be disabled by a flag
+	rebroadcast := true
+	if query.NoBroadcast() {
+		rebroadcast = false
+	}
+
+	// Filter the query
+	if !s.shouldProcessQuery(query.Filters) {
+		// Even if we don't process it further, we should rebroadcast,
+		// since it is the first time we've seen this.
+		return rebroadcast
+	}
+
+	// Send ack if requested, without waiting for client to Respond()
+	if query.Ack() {
+		ack := messageQueryResponse{
+			LTime: query.LTime,
+			ID:    query.ID,
+			From:  s.config.NodeName,
+			Flags: queryFlagAck,
+		}
+		raw, err := encodeMessage(messageQueryResponseType, &ack)
+		if err != nil {
+			s.logger.Printf("[ERR] serf: failed to format ack: %v", err)
+		} else {
+			addr := net.UDPAddr{IP: query.Addr, Port: int(query.Port)}
+			if err := s.memberlist.SendTo(&addr, raw); err != nil {
+				s.logger.Printf("[ERR] serf: failed to send ack: %v", err)
+			}
+			if err := s.relayResponse(query.RelayFactor, addr, &ack); err != nil {
+				s.logger.Printf("[ERR] serf: failed to relay ack: %v", err)
+			}
+		}
+	}
+
+	if s.config.EventCh != nil {
+		s.config.EventCh <- &Query{
+			LTime:       query.LTime,
+			Name:        query.Name,
+			Payload:     query.Payload,
+			serf:        s,
+			id:          query.ID,
+			addr:        query.Addr,
+			port:        query.Port,
+			deadline:    time.Now().Add(query.Timeout),
+			relayFactor: query.RelayFactor,
+		}
+	}
+	return rebroadcast
+}
+
+// handleQueryResponse is called when a query response is
+// received.
+func (s *Serf) handleQueryResponse(resp *messageQueryResponse) { + // Look for a corresponding QueryResponse + s.queryLock.RLock() + query, ok := s.queryResponse[resp.LTime] + s.queryLock.RUnlock() + if !ok { + s.logger.Printf("[WARN] serf: reply for non-running query (LTime: %d, ID: %d) From: %s", + resp.LTime, resp.ID, resp.From) + return + } + + // Verify the ID matches + if query.id != resp.ID { + s.logger.Printf("[WARN] serf: query reply ID mismatch (Local: %d, Response: %d)", + query.id, resp.ID) + return + } + + // Check if the query is closed + if query.Finished() { + return + } + + // Process each type of response + if resp.Ack() { + // Exit early if this is a duplicate ack + if _, ok := query.acks[resp.From]; ok { + metrics.IncrCounter([]string{"serf", "query_duplicate_acks"}, 1) + return + } + + metrics.IncrCounter([]string{"serf", "query_acks"}, 1) + select { + case query.ackCh <- resp.From: + query.acks[resp.From] = struct{}{} + default: + s.logger.Printf("[WARN] serf: Failed to deliver query ack, dropping") + } + } else { + // Exit early if this is a duplicate response + if _, ok := query.responses[resp.From]; ok { + metrics.IncrCounter([]string{"serf", "query_duplicate_responses"}, 1) + return + } + + metrics.IncrCounter([]string{"serf", "query_responses"}, 1) + select { + case query.respCh <- NodeResponse{From: resp.From, Payload: resp.Payload}: + query.responses[resp.From] = struct{}{} + default: + s.logger.Printf("[WARN] serf: Failed to deliver query response, dropping") + } + } +} + +// handleNodeConflict is invoked when a join detects a conflict over a name. +// This means two different nodes (IP/Port) are claiming the same name. Memberlist +// will reject the "new" node mapping, but we can still be notified +func (s *Serf) handleNodeConflict(existing, other *memberlist.Node) { + // Log a basic warning if the node is not us... + if existing.Name != s.config.NodeName { + s.logger.Printf("[WARN] serf: Name conflict for '%s' both %s:%d and %s:%d are claiming", + existing.Name, existing.Addr, existing.Port, other.Addr, other.Port) + return + } + + // The current node is conflicting! This is an error + s.logger.Printf("[ERR] serf: Node name conflicts with another node at %s:%d. Names must be unique! (Resolution enabled: %v)", + other.Addr, other.Port, s.config.EnableNameConflictResolution) + + // If automatic resolution is enabled, kick off the resolution + if s.config.EnableNameConflictResolution { + go s.resolveNodeConflict() + } +} + +// resolveNodeConflict is used to determine which node should remain during +// a name conflict. This is done by running an internal query. 
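+//
+// Peers respond with the member they have recorded under the conflicted
+// name; the local node keeps running only if the responses matching its own
+// address and port reach a majority, computed as (responses / 2) + 1, and
+// otherwise shuts itself down.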
+func (s *Serf) resolveNodeConflict() {
+	// Get the local node
+	local := s.memberlist.LocalNode()
+
+	// Start a name resolution query
+	qName := internalQueryName(conflictQuery)
+	payload := []byte(s.config.NodeName)
+	resp, err := s.Query(qName, payload, nil)
+	if err != nil {
+		s.logger.Printf("[ERR] serf: Failed to start name resolution query: %v", err)
+		return
+	}
+
+	// Counter to determine winner
+	var responses, matching int
+
+	// Gather responses
+	respCh := resp.ResponseCh()
+	for r := range respCh {
+		// Decode the response
+		if len(r.Payload) < 1 || messageType(r.Payload[0]) != messageConflictResponseType {
+			s.logger.Printf("[ERR] serf: Invalid conflict query response type: %v", r.Payload)
+			continue
+		}
+		var member Member
+		if err := decodeMessage(r.Payload[1:], &member); err != nil {
+			s.logger.Printf("[ERR] serf: Failed to decode conflict query response: %v", err)
+			continue
+		}
+
+		// Update the counters
+		responses++
+		if member.Addr.Equal(local.Addr) && member.Port == local.Port {
+			matching++
+		}
+	}
+
+	// Query over, determine if we should live
+	majority := (responses / 2) + 1
+	if matching >= majority {
+		s.logger.Printf("[INFO] serf: majority in name conflict resolution [%d / %d]",
+			matching, responses)
+		return
+	}
+
+	// Since we lost the vote, we need to exit
+	s.logger.Printf("[WARN] serf: minority in name conflict resolution, quitting [%d / %d]",
+		matching, responses)
+	if err := s.Shutdown(); err != nil {
+		s.logger.Printf("[ERR] serf: Failed to shutdown: %v", err)
+	}
+}
+
+// handleReap periodically reaps the list of failed and left members, as well
+// as old buffered intents.
+func (s *Serf) handleReap() {
+	for {
+		select {
+		case <-time.After(s.config.ReapInterval):
+			s.memberLock.Lock()
+			now := time.Now()
+			s.failedMembers = s.reap(s.failedMembers, now, s.config.ReconnectTimeout)
+			s.leftMembers = s.reap(s.leftMembers, now, s.config.TombstoneTimeout)
+			reapIntents(s.recentIntents, now, s.config.RecentIntentTimeout)
+			s.memberLock.Unlock()
+		case <-s.shutdownCh:
+			return
+		}
+	}
+}
+
+// handleReconnect attempts to reconnect to recently failed nodes
+// on configured intervals.
+func (s *Serf) handleReconnect() {
+	for {
+		select {
+		case <-time.After(s.config.ReconnectInterval):
+			s.reconnect()
+		case <-s.shutdownCh:
+			return
+		}
+	}
+}
+
+// reap is called with a list of old members and a timeout, and removes
+// members that have exceeded the timeout. The members are removed from
+// both the old list and the members itself. Locking is left to the caller.
+func (s *Serf) reap(old []*memberState, now time.Time, timeout time.Duration) []*memberState {
+	n := len(old)
+	for i := 0; i < n; i++ {
+		m := old[i]
+
+		// Skip if the timeout is not yet reached
+		if now.Sub(m.leaveTime) <= timeout {
+			continue
+		}
+
+		// Delete from the list
+		old[i], old[n-1] = old[n-1], nil
+		old = old[:n-1]
+		n--
+		i--
+
+		// Delete from members
+		delete(s.members, m.Name)
+
+		// Tell the coordinate client the node has gone away and delete
+		// its cached coordinates.
+		if !s.config.DisableCoordinates {
+			s.coordClient.ForgetNode(m.Name)
+
+			s.coordCacheLock.Lock()
+			delete(s.coordCache, m.Name)
+			s.coordCacheLock.Unlock()
+		}
+
+		// Send an event along
+		s.logger.Printf("[INFO] serf: EventMemberReap: %s", m.Name)
+		if s.config.EventCh != nil {
+			s.config.EventCh <- MemberEvent{
+				Type:    EventMemberReap,
+				Members: []Member{m.Member},
+			}
+		}
+	}
+
+	return old
+}
+
+// reconnect attempts to reconnect to recently failed nodes.
+func (s *Serf) reconnect() {
+	s.memberLock.RLock()
+
+	// Nothing to do if there are no failed members
+	n := len(s.failedMembers)
+	if n == 0 {
+		s.memberLock.RUnlock()
+		return
+	}
+
+	// Probability we should attempt to reconnect is given
+	// by num failed / (num members - num failed - num left)
+	// This means that we probabilistically expect the cluster
+	// to attempt to connect to each failed member once per
+	// reconnect interval
+	numFailed := float32(len(s.failedMembers))
+	numAlive := float32(len(s.members) - len(s.failedMembers) - len(s.leftMembers))
+	if numAlive == 0 {
+		numAlive = 1 // guard against zero divide
+	}
+	prob := numFailed / numAlive
+	if rand.Float32() > prob {
+		s.memberLock.RUnlock()
+		s.logger.Printf("[DEBUG] serf: forgoing reconnect for random throttling")
+		return
+	}
+
+	// Select a random member to try and join
+	idx := rand.Int31n(int32(n))
+	mem := s.failedMembers[idx]
+	s.memberLock.RUnlock()
+
+	// Format the addr
+	addr := net.UDPAddr{IP: mem.Addr, Port: int(mem.Port)}
+	s.logger.Printf("[INFO] serf: attempting reconnect to %v %s", mem.Name, addr.String())
+
+	// Attempt to join at the memberlist level
+	s.memberlist.Join([]string{addr.String()})
+}
+
+// checkQueueDepth periodically checks the size of a queue to see if
+// it is too large
+func (s *Serf) checkQueueDepth(name string, queue *memberlist.TransmitLimitedQueue) {
+	for {
+		select {
+		case <-time.After(time.Second):
+			numq := queue.NumQueued()
+			metrics.AddSample([]string{"serf", "queue", name}, float32(numq))
+			if numq >= s.config.QueueDepthWarning {
+				s.logger.Printf("[WARN] serf: %s queue depth: %d", name, numq)
+			}
+			if numq > s.config.MaxQueueDepth {
+				s.logger.Printf("[WARN] serf: %s queue depth (%d) exceeds limit (%d), dropping messages!",
+					name, numq, s.config.MaxQueueDepth)
+				queue.Prune(s.config.MaxQueueDepth)
+			}
+		case <-s.shutdownCh:
+			return
+		}
+	}
+}
+
+// removeOldMember is used to remove an old member from a list of old
+// members.
+func removeOldMember(old []*memberState, name string) []*memberState {
+	for i, m := range old {
+		if m.Name == name {
+			n := len(old)
+			old[i], old[n-1] = old[n-1], nil
+			return old[:n-1]
+		}
+	}
+
+	return old
+}
+
+// reapIntents clears out any intents that are older than the timeout. Make sure
+// the memberLock is held when passing in the Serf instance's recentIntents
+// member.
+func reapIntents(intents map[string]nodeIntent, now time.Time, timeout time.Duration) {
+	for node, intent := range intents {
+		if now.Sub(intent.WallTime) > timeout {
+			delete(intents, node)
+		}
+	}
+}
+
+// upsertIntent will update an existing intent with the supplied Lamport time,
+// or create a new entry. This will return true if a new entry was added. The
+// stamper is used to capture the wall clock time for expiring these buffered
+// intents. Make sure the memberLock is held when passing in the Serf instance's
+// recentIntents member.
+func upsertIntent(intents map[string]nodeIntent, node string, itype messageType,
+	ltime LamportTime, stamper func() time.Time) bool {
+	if intent, ok := intents[node]; !ok || ltime > intent.LTime {
+		intents[node] = nodeIntent{
+			Type:     itype,
+			WallTime: stamper(),
+			LTime:    ltime,
+		}
+		return true
+	}
+
+	return false
+}
+
+// recentIntent checks the recent intent buffer for a matching entry for a given
+// node, and returns the Lamport time, if an intent is present, indicated by the
+// returned boolean. Make sure the memberLock is held for read when passing in
+// the Serf instance's recentIntents member.
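+//
+// For example, handleNodeJoin uses this to apply a join or leave intent that
+// arrived before the corresponding memberlist event:
+//
+//	if join, ok := recentIntent(s.recentIntents, n.Name, messageJoinType); ok {
+//		member.statusLTime = join
+//	}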
+func recentIntent(intents map[string]nodeIntent, node string, itype messageType) (LamportTime, bool) {
+	if intent, ok := intents[node]; ok && intent.Type == itype {
+		return intent.LTime, true
+	}
+
+	return LamportTime(0), false
+}
+
+// handleRejoin attempts to reconnect to previously known alive nodes
+func (s *Serf) handleRejoin(previous []*PreviousNode) {
+	for _, prev := range previous {
+		// Do not attempt to join ourself
+		if prev.Name == s.config.NodeName {
+			continue
+		}
+
+		s.logger.Printf("[INFO] serf: Attempting re-join to previously known node: %s", prev)
+		_, err := s.memberlist.Join([]string{prev.Addr})
+		if err == nil {
+			s.logger.Printf("[INFO] serf: Re-joined to previously known node: %s", prev)
+			return
+		}
+	}
+	s.logger.Printf("[WARN] serf: Failed to re-join any previously known node")
+}
+
+// encodeTags is used to encode a tag map
+func (s *Serf) encodeTags(tags map[string]string) []byte {
+	// Support role-only backwards compatibility
+	if s.ProtocolVersion() < 3 {
+		role := tags["role"]
+		return []byte(role)
+	}
+
+	// Use a magic byte prefix and msgpack encode the tags
+	var buf bytes.Buffer
+	buf.WriteByte(tagMagicByte)
+	enc := codec.NewEncoder(&buf, &codec.MsgpackHandle{})
+	if err := enc.Encode(tags); err != nil {
+		panic(fmt.Sprintf("Failed to encode tags: %v", err))
+	}
+	return buf.Bytes()
+}
+
+// decodeTags is used to decode a tag map
+func (s *Serf) decodeTags(buf []byte) map[string]string {
+	tags := make(map[string]string)
+
+	// Backwards compatibility mode
+	if len(buf) == 0 || buf[0] != tagMagicByte {
+		tags["role"] = string(buf)
+		return tags
+	}
+
+	// Decode the tags
+	r := bytes.NewReader(buf[1:])
+	dec := codec.NewDecoder(r, &codec.MsgpackHandle{})
+	if err := dec.Decode(&tags); err != nil {
+		s.logger.Printf("[ERR] serf: Failed to decode tags: %v", err)
+	}
+	return tags
+}
+
+// Stats is used to provide operator debugging information
+func (s *Serf) Stats() map[string]string {
+	toString := func(v uint64) string {
+		return strconv.FormatUint(v, 10)
+	}
+	stats := map[string]string{
+		"members":      toString(uint64(len(s.members))),
+		"failed":       toString(uint64(len(s.failedMembers))),
+		"left":         toString(uint64(len(s.leftMembers))),
+		"health_score": toString(uint64(s.memberlist.GetHealthScore())),
+		"member_time":  toString(uint64(s.clock.Time())),
+		"event_time":   toString(uint64(s.eventClock.Time())),
+		"query_time":   toString(uint64(s.queryClock.Time())),
+		"intent_queue": toString(uint64(s.broadcasts.NumQueued())),
+		"event_queue":  toString(uint64(s.eventBroadcasts.NumQueued())),
+		"query_queue":  toString(uint64(s.queryBroadcasts.NumQueued())),
+		"encrypted":    fmt.Sprintf("%v", s.EncryptionEnabled()),
+	}
+	return stats
+}
+
+// writeKeyringFile will serialize the current keyring and save it to a file.
+func (s *Serf) writeKeyringFile() error {
+	if len(s.config.KeyringFile) == 0 {
+		return nil
+	}
+
+	keyring := s.config.MemberlistConfig.Keyring
+	keysRaw := keyring.GetKeys()
+	keysEncoded := make([]string, len(keysRaw))
+
+	for i, key := range keysRaw {
+		keysEncoded[i] = base64.StdEncoding.EncodeToString(key)
+	}
+
+	encodedKeys, err := json.MarshalIndent(keysEncoded, "", "  ")
+	if err != nil {
+		return fmt.Errorf("Failed to encode keys: %s", err)
+	}
+
+	// Use 0600 for permissions because key data is sensitive
+	if err = ioutil.WriteFile(s.config.KeyringFile, encodedKeys, 0600); err != nil {
+		return fmt.Errorf("Failed to write keyring file: %s", err)
+	}
+
+	// Success!
+ return nil +} + +// GetCoordinate returns the network coordinate of the local node. +func (s *Serf) GetCoordinate() (*coordinate.Coordinate, error) { + if !s.config.DisableCoordinates { + return s.coordClient.GetCoordinate(), nil + } + + return nil, fmt.Errorf("Coordinates are disabled") +} + +// GetCachedCoordinate returns the network coordinate for the node with the given +// name. This will only be valid if DisableCoordinates is set to false. +func (s *Serf) GetCachedCoordinate(name string) (coord *coordinate.Coordinate, ok bool) { + if !s.config.DisableCoordinates { + s.coordCacheLock.RLock() + defer s.coordCacheLock.RUnlock() + if coord, ok = s.coordCache[name]; ok { + return coord, true + } + + return nil, false + } + + return nil, false +} + +// NumNodes returns the number of nodes in the serf cluster, regardless of +// their health or status. +func (s *Serf) NumNodes() (numNodes int) { + s.memberLock.RLock() + numNodes = len(s.members) + s.memberLock.RUnlock() + + return numNodes +} diff --git a/vendor/github.com/hashicorp/serf/serf/snapshot.go b/vendor/github.com/hashicorp/serf/serf/snapshot.go new file mode 100644 index 0000000000..6e1fbd596c --- /dev/null +++ b/vendor/github.com/hashicorp/serf/serf/snapshot.go @@ -0,0 +1,560 @@ +package serf + +import ( + "bufio" + "encoding/json" + "fmt" + "log" + "math/rand" + "net" + "os" + "strconv" + "strings" + "time" + + "github.com/armon/go-metrics" + "github.com/hashicorp/serf/coordinate" +) + +/* +Serf supports using a "snapshot" file that contains various +transactional data that is used to help Serf recover quickly +and gracefully from a failure. We append member events, as well +as the latest clock values to the file during normal operation, +and periodically checkpoint and roll over the file. During a restore, +we can replay the various member events to recall a list of known +nodes to re-join, as well as restore our clock values to avoid replaying +old events. +*/ + +const flushInterval = 500 * time.Millisecond +const clockUpdateInterval = 500 * time.Millisecond +const coordinateUpdateInterval = 60 * time.Second +const tmpExt = ".compact" + +// Snapshotter is responsible for ingesting events and persisting +// them to disk, and providing a recovery mechanism at start time. +type Snapshotter struct { + aliveNodes map[string]string + clock *LamportClock + coordClient *coordinate.Client + fh *os.File + buffered *bufio.Writer + inCh <-chan Event + lastFlush time.Time + lastClock LamportTime + lastEventClock LamportTime + lastQueryClock LamportTime + leaveCh chan struct{} + leaving bool + logger *log.Logger + maxSize int64 + path string + offset int64 + outCh chan<- Event + rejoinAfterLeave bool + shutdownCh <-chan struct{} + waitCh chan struct{} +} + +// PreviousNode is used to represent the previously known alive nodes +type PreviousNode struct { + Name string + Addr string +} + +func (p PreviousNode) String() string { + return fmt.Sprintf("%s: %s", p.Name, p.Addr) +} + +// NewSnapshotter creates a new Snapshotter that records events up to a +// max byte size before rotating the file. It can also be used to +// recover old state. Snapshotter works by reading an event channel it returns, +// passing through to an output channel, and persisting relevant events to disk. +// Setting rejoinAfterLeave makes leave not clear the state, and can be used +// if you intend to rejoin the same cluster after a leave. 
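+//
+// A wiring sketch mirroring how Serf's Create uses it (illustrative only):
+//
+//	eventCh, snap, err := NewSnapshotter(conf.SnapshotPath, snapshotSizeLimit,
+//		conf.RejoinAfterLeave, serf.logger, &serf.clock, serf.coordClient,
+//		conf.EventCh, serf.shutdownCh)
+//	if err != nil {
+//		return nil, fmt.Errorf("Failed to setup snapshot: %v", err)
+//	}
+//	conf.EventCh = eventCh     // events now flow through the snapshotter
+//	prev := snap.AliveNodes()  // candidates for a rejoin after restart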
+func NewSnapshotter(path string,
+	maxSize int,
+	rejoinAfterLeave bool,
+	logger *log.Logger,
+	clock *LamportClock,
+	coordClient *coordinate.Client,
+	outCh chan<- Event,
+	shutdownCh <-chan struct{}) (chan<- Event, *Snapshotter, error) {
+	inCh := make(chan Event, 1024)
+
+	// Try to open the file
+	fh, err := os.OpenFile(path, os.O_RDWR|os.O_APPEND|os.O_CREATE, 0644)
+	if err != nil {
+		return nil, nil, fmt.Errorf("failed to open snapshot: %v", err)
+	}
+
+	// Determine the offset
+	info, err := fh.Stat()
+	if err != nil {
+		fh.Close()
+		return nil, nil, fmt.Errorf("failed to stat snapshot: %v", err)
+	}
+	offset := info.Size()
+
+	// Create the snapshotter
+	snap := &Snapshotter{
+		aliveNodes:       make(map[string]string),
+		clock:            clock,
+		coordClient:      coordClient,
+		fh:               fh,
+		buffered:         bufio.NewWriter(fh),
+		inCh:             inCh,
+		lastClock:        0,
+		lastEventClock:   0,
+		lastQueryClock:   0,
+		leaveCh:          make(chan struct{}),
+		logger:           logger,
+		maxSize:          int64(maxSize),
+		path:             path,
+		offset:           offset,
+		outCh:            outCh,
+		rejoinAfterLeave: rejoinAfterLeave,
+		shutdownCh:       shutdownCh,
+		waitCh:           make(chan struct{}),
+	}
+
+	// Recover the last known state
+	if err := snap.replay(); err != nil {
+		fh.Close()
+		return nil, nil, err
+	}
+
+	// Start handling new commands
+	go snap.stream()
+	return inCh, snap, nil
+}
+
+// LastClock returns the last known clock time
+func (s *Snapshotter) LastClock() LamportTime {
+	return s.lastClock
+}
+
+// LastEventClock returns the last known event clock time
+func (s *Snapshotter) LastEventClock() LamportTime {
+	return s.lastEventClock
+}
+
+// LastQueryClock returns the last known query clock time
+func (s *Snapshotter) LastQueryClock() LamportTime {
+	return s.lastQueryClock
+}
+
+// AliveNodes returns the last known alive nodes
+func (s *Snapshotter) AliveNodes() []*PreviousNode {
+	// Copy the previously known
+	previous := make([]*PreviousNode, 0, len(s.aliveNodes))
+	for name, addr := range s.aliveNodes {
+		previous = append(previous, &PreviousNode{name, addr})
+	}
+
+	// Randomize the order, to prevent hot shards
+	for i := range previous {
+		j := rand.Intn(i + 1)
+		previous[i], previous[j] = previous[j], previous[i]
+	}
+	return previous
+}
+
+// Wait is used to wait until the snapshotter finishes shutting down
+func (s *Snapshotter) Wait() {
+	<-s.waitCh
+}
+
+// Leave is used to remove known nodes to prevent a restart from
+// causing a join. Otherwise nodes will re-join after leaving!
+func (s *Snapshotter) Leave() {
+	select {
+	case s.leaveCh <- struct{}{}:
+	case <-s.shutdownCh:
+	}
+}
+
+// stream is a long running routine that is used to handle events
+func (s *Snapshotter) stream() {
+	clockTicker := time.NewTicker(clockUpdateInterval)
+	defer clockTicker.Stop()
+
+	coordinateTicker := time.NewTicker(coordinateUpdateInterval)
+	defer coordinateTicker.Stop()
+
+	for {
+		select {
+		case <-s.leaveCh:
+			s.leaving = true
+
+			// If we plan to re-join, keep our state
+			if !s.rejoinAfterLeave {
+				s.aliveNodes = make(map[string]string)
+			}
+			s.tryAppend("leave\n")
+			if err := s.buffered.Flush(); err != nil {
+				s.logger.Printf("[ERR] serf: failed to flush leave to snapshot: %v", err)
+			}
+			if err := s.fh.Sync(); err != nil {
+				s.logger.Printf("[ERR] serf: failed to sync leave to snapshot: %v", err)
+			}
+
+		case e := <-s.inCh:
+			// Forward the event immediately
+			if s.outCh != nil {
+				s.outCh <- e
+			}
+
+			// Stop recording events after a leave is issued
+			if s.leaving {
+				continue
+			}
+			switch typed := e.(type) {
+			case MemberEvent:
+				s.processMemberEvent(typed)
+			case UserEvent:
+				s.processUserEvent(typed)
+			case *Query:
+				s.processQuery(typed)
+			default:
+				s.logger.Printf("[ERR] serf: Unknown event to snapshot: %#v", e)
+			}
+
+		case <-clockTicker.C:
+			s.updateClock()
+
+		case <-coordinateTicker.C:
+			s.updateCoordinate()
+
+		case <-s.shutdownCh:
+			if err := s.buffered.Flush(); err != nil {
+				s.logger.Printf("[ERR] serf: failed to flush snapshot: %v", err)
+			}
+			if err := s.fh.Sync(); err != nil {
+				s.logger.Printf("[ERR] serf: failed to sync snapshot: %v", err)
+			}
+			s.fh.Close()
+			close(s.waitCh)
+			return
+		}
+	}
+}
+
+// processMemberEvent is used to handle a single member event
+func (s *Snapshotter) processMemberEvent(e MemberEvent) {
+	switch e.Type {
+	case EventMemberJoin:
+		for _, mem := range e.Members {
+			addr := net.TCPAddr{IP: mem.Addr, Port: int(mem.Port)}
+			s.aliveNodes[mem.Name] = addr.String()
+			s.tryAppend(fmt.Sprintf("alive: %s %s\n", mem.Name, addr.String()))
+		}
+
+	case EventMemberLeave:
+		fallthrough
+	case EventMemberFailed:
+		for _, mem := range e.Members {
+			delete(s.aliveNodes, mem.Name)
+			s.tryAppend(fmt.Sprintf("not-alive: %s\n", mem.Name))
+		}
+	}
+	s.updateClock()
+}
+
+// updateClock is called periodically to check if we should update our
+// clock value. This is done after member events but should also be done
+// periodically due to race conditions with join and leave intents
+func (s *Snapshotter) updateClock() {
+	lastSeen := s.clock.Time() - 1
+	if lastSeen > s.lastClock {
+		s.lastClock = lastSeen
+		s.tryAppend(fmt.Sprintf("clock: %d\n", s.lastClock))
+	}
+}
+
+// updateCoordinate is called periodically to write out the current local
+// coordinate. It's safe to call this if coordinates aren't enabled (nil
+// client) and it will be a no-op.
+func (s *Snapshotter) updateCoordinate() {
+	if s.coordClient != nil {
+		encoded, err := json.Marshal(s.coordClient.GetCoordinate())
+		if err != nil {
+			s.logger.Printf("[ERR] serf: Failed to encode coordinate: %v", err)
+		} else {
+			s.tryAppend(fmt.Sprintf("coordinate: %s\n", encoded))
+		}
+	}
+}
+
+// processUserEvent is used to handle a single user event
+func (s *Snapshotter) processUserEvent(e UserEvent) {
+	// Ignore old clocks
+	if e.LTime <= s.lastEventClock {
+		return
+	}
+	s.lastEventClock = e.LTime
+	s.tryAppend(fmt.Sprintf("event-clock: %d\n", e.LTime))
+}
+
+// processQuery is used to handle a single query event
+func (s *Snapshotter) processQuery(q *Query) {
+	// Ignore old clocks
+	if q.LTime <= s.lastQueryClock {
+		return
+	}
+	s.lastQueryClock = q.LTime
+	s.tryAppend(fmt.Sprintf("query-clock: %d\n", q.LTime))
+}
+
+// tryAppend will invoke appendLine but will not return an error
+func (s *Snapshotter) tryAppend(l string) {
+	if err := s.appendLine(l); err != nil {
+		s.logger.Printf("[ERR] serf: Failed to update snapshot: %v", err)
+	}
+}
+
+// appendLine is used to append a line to the existing log
+func (s *Snapshotter) appendLine(l string) error {
+	defer metrics.MeasureSince([]string{"serf", "snapshot", "appendLine"}, time.Now())
+
+	n, err := s.buffered.WriteString(l)
+	if err != nil {
+		return err
+	}
+
+	// Check if we should flush
+	now := time.Now()
+	if now.Sub(s.lastFlush) > flushInterval {
+		s.lastFlush = now
+		if err := s.buffered.Flush(); err != nil {
+			return err
+		}
+	}
+
+	// Check if a compaction is necessary
+	s.offset += int64(n)
+	if s.offset > s.maxSize {
+		return s.compact()
+	}
+	return nil
+}
+
+// compact is used to compact the snapshot once it is too large
+func (s *Snapshotter) compact() error {
+	defer metrics.MeasureSince([]string{"serf", "snapshot", "compact"}, time.Now())
+
+	// Try to open the new file
+	newPath := s.path + tmpExt
+	fh, err := os.OpenFile(newPath, os.O_RDWR|os.O_TRUNC|os.O_CREATE, 0755)
+	if err != nil {
+		return fmt.Errorf("failed to open new snapshot: %v", err)
+	}
+
+	// Create a buffered writer
+	buf := bufio.NewWriter(fh)
+
+	// Write out the live nodes
+	var offset int64
+	for name, addr := range s.aliveNodes {
+		line := fmt.Sprintf("alive: %s %s\n", name, addr)
+		n, err := buf.WriteString(line)
+		if err != nil {
+			fh.Close()
+			return err
+		}
+		offset += int64(n)
+	}
+
+	// Write out the clocks
+	line := fmt.Sprintf("clock: %d\n", s.lastClock)
+	n, err := buf.WriteString(line)
+	if err != nil {
+		fh.Close()
+		return err
+	}
+	offset += int64(n)
+
+	line = fmt.Sprintf("event-clock: %d\n", s.lastEventClock)
+	n, err = buf.WriteString(line)
+	if err != nil {
+		fh.Close()
+		return err
+	}
+	offset += int64(n)
+
+	line = fmt.Sprintf("query-clock: %d\n", s.lastQueryClock)
+	n, err = buf.WriteString(line)
+	if err != nil {
+		fh.Close()
+		return err
+	}
+	offset += int64(n)
+
+	// Write out the coordinate.
+	if s.coordClient != nil {
+		encoded, err := json.Marshal(s.coordClient.GetCoordinate())
+		if err != nil {
+			fh.Close()
+			return err
+		}
+
+		line = fmt.Sprintf("coordinate: %s\n", encoded)
+		n, err = buf.WriteString(line)
+		if err != nil {
+			fh.Close()
+			return err
+		}
+		offset += int64(n)
+	}
+
+	// Flush the new snapshot
+	err = buf.Flush()
+	fh.Close()
+	if err != nil {
+		return fmt.Errorf("failed to flush new snapshot: %v", err)
+	}
+
+	// We now need to swap the old snapshot file with the new snapshot. 
+	// Turns out, Windows won't let us rename the files if we have
+	// open handles to them or if the destination already exists. This
+	// means we are forced to close the existing handles, delete the
+	// old file, move the new one in place, and then re-open the file
+	// handles.
+
+	// Flush the existing snapshot, ignoring errors since we will
+	// delete it momentarily.
+	s.buffered.Flush()
+	s.buffered = nil
+
+	// Close the file handle to the old snapshot
+	s.fh.Close()
+	s.fh = nil
+
+	// Delete the old file
+	if err := os.Remove(s.path); err != nil {
+		return fmt.Errorf("failed to remove old snapshot: %v", err)
+	}
+
+	// Move the new file into place
+	if err := os.Rename(newPath, s.path); err != nil {
+		return fmt.Errorf("failed to install new snapshot: %v", err)
+	}
+
+	// Open the new snapshot
+	fh, err = os.OpenFile(s.path, os.O_RDWR|os.O_APPEND|os.O_CREATE, 0755)
+	if err != nil {
+		return fmt.Errorf("failed to open snapshot: %v", err)
+	}
+	buf = bufio.NewWriter(fh)
+
+	// Rotate our handles
+	s.fh = fh
+	s.buffered = buf
+	s.offset = offset
+	s.lastFlush = time.Now()
+	return nil
+}
+
+// replay is used to reset our internal state by replaying
+// the snapshot file. It is used at initialization time to read old
+// state
+func (s *Snapshotter) replay() error {
+	// Seek to the beginning
+	if _, err := s.fh.Seek(0, os.SEEK_SET); err != nil {
+		return err
+	}
+
+	// Read each line
+	reader := bufio.NewReader(s.fh)
+	for {
+		line, err := reader.ReadString('\n')
+		if err != nil {
+			break
+		}
+
+		// Skip the newline
+		line = line[:len(line)-1]
+
+		// Switch on the prefix
+		if strings.HasPrefix(line, "alive: ") {
+			info := strings.TrimPrefix(line, "alive: ")
+			addrIdx := strings.LastIndex(info, " ")
+			if addrIdx == -1 {
+				s.logger.Printf("[WARN] serf: Failed to parse address: %v", line)
+				continue
+			}
+			addr := info[addrIdx+1:]
+			name := info[:addrIdx]
+			s.aliveNodes[name] = addr
+
+		} else if strings.HasPrefix(line, "not-alive: ") {
+			name := strings.TrimPrefix(line, "not-alive: ")
+			delete(s.aliveNodes, name)
+
+		} else if strings.HasPrefix(line, "clock: ") {
+			timeStr := strings.TrimPrefix(line, "clock: ")
+			timeInt, err := strconv.ParseUint(timeStr, 10, 64)
+			if err != nil {
+				s.logger.Printf("[WARN] serf: Failed to convert clock time: %v", err)
+				continue
+			}
+			s.lastClock = LamportTime(timeInt)
+
+		} else if strings.HasPrefix(line, "event-clock: ") {
+			timeStr := strings.TrimPrefix(line, "event-clock: ")
+			timeInt, err := strconv.ParseUint(timeStr, 10, 64)
+			if err != nil {
+				s.logger.Printf("[WARN] serf: Failed to convert event clock time: %v", err)
+				continue
+			}
+			s.lastEventClock = LamportTime(timeInt)
+
+		} else if strings.HasPrefix(line, "query-clock: ") {
+			timeStr := strings.TrimPrefix(line, "query-clock: ")
+			timeInt, err := strconv.ParseUint(timeStr, 10, 64)
+			if err != nil {
+				s.logger.Printf("[WARN] serf: Failed to convert query clock time: %v", err)
+				continue
+			}
+			s.lastQueryClock = LamportTime(timeInt)
+
+		} else if strings.HasPrefix(line, "coordinate: ") {
+			if s.coordClient == nil {
+				s.logger.Printf("[WARN] serf: Ignoring snapshot coordinates since they are disabled")
+				continue
+			}
+
+			coordStr := strings.TrimPrefix(line, "coordinate: ")
+			var coord coordinate.Coordinate
+			err := json.Unmarshal([]byte(coordStr), &coord)
+			if err != nil {
+				s.logger.Printf("[WARN] serf: Failed to decode coordinate: %v", err)
+				continue
+			}
+			s.coordClient.SetCoordinate(&coord)
+		} else if line == "leave" {
+			// Ignore a leave if we plan on re-joining
+			if 
s.rejoinAfterLeave { + s.logger.Printf("[INFO] serf: Ignoring previous leave in snapshot") + continue + } + s.aliveNodes = make(map[string]string) + s.lastClock = 0 + s.lastEventClock = 0 + s.lastQueryClock = 0 + + } else if strings.HasPrefix(line, "#") { + // Skip comment lines + + } else { + s.logger.Printf("[WARN] serf: Unrecognized snapshot line: %v", line) + } + } + + // Seek to the end + if _, err := s.fh.Seek(0, os.SEEK_END); err != nil { + return err + } + return nil +} diff --git a/vendor/github.com/sean-/seed/LICENSE b/vendor/github.com/sean-/seed/LICENSE new file mode 100644 index 0000000000..33d326a371 --- /dev/null +++ b/vendor/github.com/sean-/seed/LICENSE @@ -0,0 +1,54 @@ +MIT License + +Copyright (c) 2017 Sean Chittenden +Copyright (c) 2016 Alex Dadgar + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. + +===== + +Bits of Go-lang's `once.Do()` were cribbed and reused here, too. + +Copyright (c) 2009 The Go Authors. All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + + * Redistributions of source code must retain the above copyright +notice, this list of conditions and the following disclaimer. + * Redistributions in binary form must reproduce the above +copyright notice, this list of conditions and the following disclaimer +in the documentation and/or other materials provided with the +distribution. + * Neither the name of Google Inc. nor the names of its +contributors may be used to endorse or promote products derived from +this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
diff --git a/vendor/github.com/sean-/seed/README.md b/vendor/github.com/sean-/seed/README.md
new file mode 100644
index 0000000000..0137564f0c
--- /dev/null
+++ b/vendor/github.com/sean-/seed/README.md
@@ -0,0 +1,44 @@
+# `seed` - Quickly Seed Go's Random Number Generator
+
+Boiler-plate to securely [seed](https://en.wikipedia.org/wiki/Random_seed) Go's
+random number generator (if possible). This library isn't anything fancy, it's
+just a canonical way of seeding Go's random number generator. Cribbed from
+[`Nomad`](https://github.com/hashicorp/nomad/commit/f89a993ec6b91636a3384dd568898245fbc273a1)
+before it was moved into
+[`Consul`](https://github.com/hashicorp/consul/commit/d695bcaae6e31ee307c11fdf55bb0bf46ea9fcf4)
+and made into a helper function, and now further modularized to be a super
+lightweight and reusable library.
+
+Time is better than
+[Go's default seed of `1`](https://golang.org/pkg/math/rand/#Seed), but friends
+don't let friends use time as a seed to a random number generator. Use
+`seed.MustInit()` instead.
+
+`seed.Init()` is an idempotent and reentrant call that will return an error if
+it can't seed the value the first time it is called.
+
+`seed.MustInit()` is an idempotent and reentrant call that will `panic()` if it
+can't seed the value the first time it is called.
+
+## Usage
+
+```
+package mypackage
+
+import (
+	"fmt"
+
+	"github.com/sean-/seed"
+)
+
+// MustInit will panic() if it is unable to set a high-entropy random seed:
+func init() {
+	seed.MustInit()
+}
+
+// Or, if you want to handle the error yourself instead of panicking:
+func init() {
+	if secure, err := seed.Init(); !secure {
+		fmt.Printf("Unable to securely seed Go's RNG: %v\n", err)
+	}
+}
+```
diff --git a/vendor/github.com/sean-/seed/init.go b/vendor/github.com/sean-/seed/init.go
new file mode 100644
index 0000000000..248d6b636c
--- /dev/null
+++ b/vendor/github.com/sean-/seed/init.go
@@ -0,0 +1,84 @@
+package seed
+
+import (
+	crand "crypto/rand"
+	"fmt"
+	"math"
+	"math/big"
+	"math/rand"
+	"sync"
+	"sync/atomic"
+	"time"
+)
+
+var (
+	m      sync.Mutex
+	secure int32
+	seeded int32
+)
+
+func cryptoSeed() error {
+	defer atomic.StoreInt32(&seeded, 1)
+
+	var err error
+	var n *big.Int
+	n, err = crand.Int(crand.Reader, big.NewInt(math.MaxInt64))
+	if err != nil {
+		rand.Seed(time.Now().UTC().UnixNano())
+		return err
+	}
+	rand.Seed(n.Int64())
+	atomic.StoreInt32(&secure, 1)
+	return nil
+}
+
+// Init provides best-effort seeding (which is better than running with Go's
+// default seed of 1). If `/dev/urandom` is available, Init() will seed Go's
+// runtime with entropy from `/dev/urandom` and return true because the runtime
+// was securely seeded. If Init() has already initialized the random number
+// generator, or if it previously failed to initialize it securely, Init()
+// will return false. See MustInit().
+func Init() (seededSecurely bool, err error) {
+	if atomic.LoadInt32(&seeded) == 1 {
+		return false, nil
+	}
+
+	// Slow-path
+	m.Lock()
+	defer m.Unlock()
+
+	if err := cryptoSeed(); err != nil {
+		return false, err
+	}
+
+	return true, nil
+}
+
+// MustInit provides guaranteed secure seeding. If `/dev/urandom` is not
+// available, MustInit will panic() with an error indicating why reading from
+// `/dev/urandom` failed. MustInit() will upgrade the seed if for some reason a
+// call to Init() failed in the past.
+func MustInit() { + if atomic.LoadInt32(&secure) == 1 { + return + } + + // Slow-path + m.Lock() + defer m.Unlock() + + if err := cryptoSeed(); err != nil { + panic(fmt.Sprintf("Unable to seed the random number generator: %v", err)) + } +} + +// Secure returns true if a cryptographically secure seed was used to +// initialize rand. +func Secure() bool { + return atomic.LoadInt32(&secure) == 1 +} + +// Seeded returns true if Init has seeded the random number generator. +func Seeded() bool { + return atomic.LoadInt32(&seeded) == 1 +} diff --git a/vendor/github.com/xeipuuv/gojsonpointer/LICENSE-APACHE-2.0.txt b/vendor/github.com/xeipuuv/gojsonpointer/LICENSE-APACHE-2.0.txt new file mode 100644 index 0000000000..55ede8a42c --- /dev/null +++ b/vendor/github.com/xeipuuv/gojsonpointer/LICENSE-APACHE-2.0.txt @@ -0,0 +1,202 @@ + + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. 
For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. 
The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. 
+
+                                 END OF TERMS AND CONDITIONS
+
+   APPENDIX: How to apply the Apache License to your work.
+
+      To apply the Apache License to your work, attach the following
+      boilerplate notice, with the fields enclosed by brackets "[]"
+      replaced with your own identifying information. (Don't include
+      the brackets!)  The text should be enclosed in the appropriate
+      comment syntax for the file format. We also recommend that a
+      file or class name and description of purpose be included on the
+      same "printed page" as the copyright notice for easier
+      identification within third-party archives.
+
+   Copyright 2015 xeipuuv
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
diff --git a/vendor/github.com/xeipuuv/gojsonpointer/README.md b/vendor/github.com/xeipuuv/gojsonpointer/README.md
new file mode 100644
index 0000000000..dbe4d50824
--- /dev/null
+++ b/vendor/github.com/xeipuuv/gojsonpointer/README.md
@@ -0,0 +1,8 @@
+# gojsonpointer
+An implementation of JSON Pointer - Go language
+
+## References
+http://tools.ietf.org/html/draft-ietf-appsawg-json-pointer-07
+
+### Note
+Section 4 ("Evaluation") of the reference above, starting with 'If the currently referenced value is a JSON array, the reference token MUST contain either...', is not implemented.
diff --git a/vendor/github.com/xeipuuv/gojsonpointer/pointer.go b/vendor/github.com/xeipuuv/gojsonpointer/pointer.go
new file mode 100644
index 0000000000..06f1918e84
--- /dev/null
+++ b/vendor/github.com/xeipuuv/gojsonpointer/pointer.go
@@ -0,0 +1,190 @@
+// Copyright 2015 xeipuuv ( https://github.com/xeipuuv )
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+// author           xeipuuv
+// author-github    https://github.com/xeipuuv
+// author-mail      xeipuuv@gmail.com
+//
+// repository-name  gojsonpointer
+// repository-desc  An implementation of JSON Pointer - Go language
+//
+// description      Main and unique file.
+//
+// created          25-02-2013
+
+package gojsonpointer
+
+import (
+	"errors"
+	"fmt"
+	"reflect"
+	"strconv"
+	"strings"
+)
+
+const (
+	const_empty_pointer     = ``
+	const_pointer_separator = `/`
+
+	const_invalid_start = `JSON pointer must be empty or start with a "` + const_pointer_separator + `"`
+)
+
+type implStruct struct {
+	mode string // "SET" or "GET"
+
+	inDocument interface{}
+
+	setInValue interface{}
+
+	getOutNode interface{}
+	getOutKind reflect.Kind
+	outError   error
+}
+
+type JsonPointer struct {
+	referenceTokens []string
+}
+
+// NewJsonPointer parses the given string JSON pointer and returns an object
+func NewJsonPointer(jsonPointerString string) (p JsonPointer, err error) {
+
+	// Pointer to the root of the document
+	if len(jsonPointerString) == 0 {
+		// Keep referenceTokens nil
+		return
+	}
+	if jsonPointerString[0] != '/' {
+		return p, errors.New(const_invalid_start)
+	}
+
+	p.referenceTokens = strings.Split(jsonPointerString[1:], const_pointer_separator)
+	return
+}
+
+// Get uses the pointer to retrieve a value from a JSON document
+func (p *JsonPointer) Get(document interface{}) (interface{}, reflect.Kind, error) {
+
+	is := &implStruct{mode: "GET", inDocument: document}
+	p.implementation(is)
+	return is.getOutNode, is.getOutKind, is.outError
+
+}
+
+// Set uses the pointer to update a value in a JSON document
+func (p *JsonPointer) Set(document interface{}, value interface{}) (interface{}, error) {
+
+	is := &implStruct{mode: "SET", inDocument: document, setInValue: value}
+	p.implementation(is)
+	return document, is.outError
+
+}
+
+// Both Get and Set functions use the same implementation to avoid code duplication
+func (p *JsonPointer) implementation(i *implStruct) {
+
+	kind := reflect.Invalid
+
+	// Full document when empty
+	if len(p.referenceTokens) == 0 {
+		i.getOutNode = i.inDocument
+		i.getOutKind = kind
+		i.outError = nil
+		return
+	}
+
+	node := i.inDocument
+
+	for ti, token := range p.referenceTokens {
+
+		isLastToken := ti == len(p.referenceTokens)-1
+
+		switch v := node.(type) {
+
+		case map[string]interface{}:
+			decodedToken := decodeReferenceToken(token)
+			if _, ok := v[decodedToken]; ok {
+				node = v[decodedToken]
+				if isLastToken && i.mode == "SET" {
+					v[decodedToken] = i.setInValue
+				}
+			} else {
+				i.outError = fmt.Errorf("Object has no key '%s'", decodedToken)
+				i.getOutKind = reflect.Map
+				i.getOutNode = nil
+				return
+			}
+
+		case []interface{}:
+			tokenIndex, err := strconv.Atoi(token)
+			if err != nil {
+				i.outError = fmt.Errorf("Invalid array index '%s'", token)
+				i.getOutKind = reflect.Slice
+				i.getOutNode = nil
+				return
+			}
+			if tokenIndex < 0 || tokenIndex >= len(v) {
+				i.outError = fmt.Errorf("Out of bound array[0,%d] index '%d'", len(v), tokenIndex)
+				i.getOutKind = reflect.Slice
+				i.getOutNode = nil
+				return
+			}
+
+			node = v[tokenIndex]
+			if isLastToken && i.mode == "SET" {
+				v[tokenIndex] = i.setInValue
+			}
+
+		default:
+			i.outError = fmt.Errorf("Invalid token reference '%s'", token)
+			i.getOutKind = reflect.ValueOf(node).Kind()
+			i.getOutNode = nil
+			return
+		}
+
+	}
+
+	i.getOutNode = node
+	i.getOutKind = reflect.ValueOf(node).Kind()
+	i.outError = nil
+}
+
+// String returns the string representation of the pointer
+func (p *JsonPointer) String() string {
+
+	if len(p.referenceTokens) == 0 {
+		return const_empty_pointer
+	}
+
+	pointerString := const_pointer_separator + strings.Join(p.referenceTokens, const_pointer_separator)
+
+	return pointerString
+}
+
+// Specific JSON pointer encoding here
+// ~0 => ~
+// ~1 => /
+// ...
and vice versa + +func decodeReferenceToken(token string) string { + step1 := strings.Replace(token, `~1`, `/`, -1) + step2 := strings.Replace(step1, `~0`, `~`, -1) + return step2 +} + +func encodeReferenceToken(token string) string { + step1 := strings.Replace(token, `~`, `~0`, -1) + step2 := strings.Replace(step1, `/`, `~1`, -1) + return step2 +} diff --git a/vendor/github.com/xeipuuv/gojsonreference/LICENSE-APACHE-2.0.txt b/vendor/github.com/xeipuuv/gojsonreference/LICENSE-APACHE-2.0.txt new file mode 100644 index 0000000000..55ede8a42c --- /dev/null +++ b/vendor/github.com/xeipuuv/gojsonreference/LICENSE-APACHE-2.0.txt @@ -0,0 +1,202 @@ + + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. 
For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. 
The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. 
+ + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright 2015 xeipuuv + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/vendor/github.com/xeipuuv/gojsonreference/README.md b/vendor/github.com/xeipuuv/gojsonreference/README.md new file mode 100644 index 0000000000..9ab6e1eb13 --- /dev/null +++ b/vendor/github.com/xeipuuv/gojsonreference/README.md @@ -0,0 +1,10 @@ +# gojsonreference +An implementation of JSON Reference - Go language + +## Dependencies +https://github.com/xeipuuv/gojsonpointer + +## References +http://tools.ietf.org/html/draft-ietf-appsawg-json-pointer-07 + +http://tools.ietf.org/html/draft-pbryan-zyp-json-ref-03 diff --git a/vendor/github.com/xeipuuv/gojsonreference/reference.go b/vendor/github.com/xeipuuv/gojsonreference/reference.go new file mode 100644 index 0000000000..d4d2eca0aa --- /dev/null +++ b/vendor/github.com/xeipuuv/gojsonreference/reference.go @@ -0,0 +1,141 @@ +// Copyright 2015 xeipuuv ( https://github.com/xeipuuv ) +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +// author xeipuuv +// author-github https://github.com/xeipuuv +// author-mail xeipuuv@gmail.com +// +// repository-name gojsonreference +// repository-desc An implementation of JSON Reference - Go language +// +// description Main and unique file. 
+// +// created 26-02-2013 + +package gojsonreference + +import ( + "errors" + "github.com/xeipuuv/gojsonpointer" + "net/url" + "path/filepath" + "runtime" + "strings" +) + +const ( + const_fragment_char = `#` +) + +func NewJsonReference(jsonReferenceString string) (JsonReference, error) { + + var r JsonReference + err := r.parse(jsonReferenceString) + return r, err + +} + +type JsonReference struct { + referenceUrl *url.URL + referencePointer gojsonpointer.JsonPointer + + HasFullUrl bool + HasUrlPathOnly bool + HasFragmentOnly bool + HasFileScheme bool + HasFullFilePath bool +} + +func (r *JsonReference) GetUrl() *url.URL { + return r.referenceUrl +} + +func (r *JsonReference) GetPointer() *gojsonpointer.JsonPointer { + return &r.referencePointer +} + +func (r *JsonReference) String() string { + + if r.referenceUrl != nil { + return r.referenceUrl.String() + } + + if r.HasFragmentOnly { + return const_fragment_char + r.referencePointer.String() + } + + return r.referencePointer.String() +} + +func (r *JsonReference) IsCanonical() bool { + return (r.HasFileScheme && r.HasFullFilePath) || (!r.HasFileScheme && r.HasFullUrl) +} + +// "Constructor", parses the given string JSON reference +func (r *JsonReference) parse(jsonReferenceString string) (err error) { + + r.referenceUrl, err = url.Parse(jsonReferenceString) + if err != nil { + return + } + refUrl := r.referenceUrl + + if refUrl.Scheme != "" && refUrl.Host != "" { + r.HasFullUrl = true + } else { + if refUrl.Path != "" { + r.HasUrlPathOnly = true + } else if refUrl.RawQuery == "" && refUrl.Fragment != "" { + r.HasFragmentOnly = true + } + } + + r.HasFileScheme = refUrl.Scheme == "file" + if runtime.GOOS == "windows" { + // on Windows, a file URL may have an extra leading slash, and if it + // doesn't then its first component will be treated as the host by the + // Go runtime + if refUrl.Host == "" && strings.HasPrefix(refUrl.Path, "/") { + r.HasFullFilePath = filepath.IsAbs(refUrl.Path[1:]) + } else { + r.HasFullFilePath = filepath.IsAbs(refUrl.Host + refUrl.Path) + } + } else { + r.HasFullFilePath = filepath.IsAbs(refUrl.Path) + } + + // invalid json-pointer error means url has no json-pointer fragment. simply ignore error + r.referencePointer, _ = gojsonpointer.NewJsonPointer(refUrl.Fragment) + + return +} + +// Creates a new reference from a parent and a child +// If the child cannot inherit from the parent, an error is returned +func (r *JsonReference) Inherits(child JsonReference) (*JsonReference, error) { + childUrl := child.GetUrl() + parentUrl := r.GetUrl() + if childUrl == nil { + return nil, errors.New("childUrl is nil!") + } + if parentUrl == nil { + return nil, errors.New("parentUrl is nil!") + } + + ref, err := NewJsonReference(parentUrl.ResolveReference(childUrl).String()) + if err != nil { + return nil, err + } + return &ref, err +} diff --git a/vendor/github.com/xeipuuv/gojsonschema/LICENSE-APACHE-2.0.txt b/vendor/github.com/xeipuuv/gojsonschema/LICENSE-APACHE-2.0.txt new file mode 100644 index 0000000000..55ede8a42c --- /dev/null +++ b/vendor/github.com/xeipuuv/gojsonschema/LICENSE-APACHE-2.0.txt @@ -0,0 +1,202 @@ + + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. 
+ + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. 
Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. 
This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright 2015 xeipuuv + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. 
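Before the gojsonschema README below, a short standalone sketch of the two building blocks vendored above: gojsonpointer's reference-token escaping (`~0` for `~`, `~1` for `/`) and the parent/child resolution that gojsonreference's Inherits() delegates to url.ResolveReference. This is stdlib-only; decodeToken mirrors decodeReferenceToken under that assumption, and the URLs are made up for the example.

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// decodeToken mirrors gojsonpointer's decodeReferenceToken: replace "~1"
// before "~0", so that "~01" decodes to "~1" rather than "/".
func decodeToken(token string) string {
	return strings.Replace(strings.Replace(token, "~1", "/", -1), "~0", "~", -1)
}

func main() {
	fmt.Println(decodeToken("a~1b~0c")) // a/b~c

	// A child reference resolves against its parent with ordinary URL
	// semantics, which is what Inherits() does above.
	parent, _ := url.Parse("http://example.com/schemas/root.json")
	child, _ := url.Parse("definitions.json#/address")
	fmt.Println(parent.ResolveReference(child))
	// http://example.com/schemas/definitions.json#/address
}
```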
diff --git a/vendor/github.com/xeipuuv/gojsonschema/README.md b/vendor/github.com/xeipuuv/gojsonschema/README.md
new file mode 100644
index 0000000000..127bdd1680
--- /dev/null
+++ b/vendor/github.com/xeipuuv/gojsonschema/README.md
@@ -0,0 +1,236 @@
+[![Build Status](https://travis-ci.org/xeipuuv/gojsonschema.svg)](https://travis-ci.org/xeipuuv/gojsonschema)
+
+# gojsonschema
+
+## Description
+
+An implementation of JSON Schema, based on IETF's draft v4 - Go language
+
+References:
+
+* http://json-schema.org
+* http://json-schema.org/latest/json-schema-core.html
+* http://json-schema.org/latest/json-schema-validation.html
+
+## Installation
+
+```
+go get github.com/xeipuuv/gojsonschema
+```
+
+Dependencies:
+* [github.com/xeipuuv/gojsonpointer](https://github.com/xeipuuv/gojsonpointer)
+* [github.com/xeipuuv/gojsonreference](https://github.com/xeipuuv/gojsonreference)
+* [github.com/stretchr/testify/assert](https://github.com/stretchr/testify#assert-package)
+
+## Usage
+
+### Example
+
+```go
+package main
+
+import (
+	"fmt"
+	"github.com/xeipuuv/gojsonschema"
+)
+
+func main() {
+
+	schemaLoader := gojsonschema.NewReferenceLoader("file:///home/me/schema.json")
+	documentLoader := gojsonschema.NewReferenceLoader("file:///home/me/document.json")
+
+	result, err := gojsonschema.Validate(schemaLoader, documentLoader)
+	if err != nil {
+		panic(err.Error())
+	}
+
+	if result.Valid() {
+		fmt.Printf("The document is valid\n")
+	} else {
+		fmt.Printf("The document is not valid. See errors:\n")
+		for _, desc := range result.Errors() {
+			fmt.Printf("- %s\n", desc)
+		}
+	}
+
+}
+```
+
+#### Loaders
+
+There are various ways to load your JSON data. In order to load your schemas and documents, first declare an appropriate loader:
+
+* Web / HTTP, using a reference:
+
+```go
+loader := gojsonschema.NewReferenceLoader("http://www.some_host.com/schema.json")
+```
+
+* Local file, using a reference:
+
+```go
+loader := gojsonschema.NewReferenceLoader("file:///home/me/schema.json")
+```
+
+References use the URI scheme; the prefix (file://) and a full path to the file are required.
+
+* JSON strings:
+
+```go
+loader := gojsonschema.NewStringLoader(`{"type": "string"}`)
+```
+
+* Custom Go types:
+
+```go
+m := map[string]interface{}{"type": "string"}
+loader := gojsonschema.NewGoLoader(m)
+```
+
+And
+
+```go
+type Root struct {
+	Users []User `json:"users"`
+}
+
+type User struct {
+	Name string `json:"name"`
+}
+
+...
+
+data := Root{}
+data.Users = append(data.Users, User{"John"})
+data.Users = append(data.Users, User{"Sophia"})
+data.Users = append(data.Users, User{"Bill"})
+
+loader := gojsonschema.NewGoLoader(data)
+```
+
+#### Validation
+
+Once the loaders are set, validation is easy:
+
+```go
+result, err := gojsonschema.Validate(schemaLoader, documentLoader)
+```
+
+Alternatively, you might want to load a schema only once and reuse it for multiple validations:
+
+```go
+schema, err := gojsonschema.NewSchema(schemaLoader)
+...
+result1, err := schema.Validate(documentLoader1)
+...
+result2, err := schema.Validate(documentLoader2)
+...
+// etc ...
+```
+
+To check the result:
+
+```go
+	if result.Valid() {
+		fmt.Printf("The document is valid\n")
+	} else {
+		fmt.Printf("The document is not valid. See errors:\n")
+		for _, err := range result.Errors() {
+			// Err implements the ResultError interface
+			fmt.Printf("- %s\n", err)
+		}
+	}
+```
+
+## Working with Errors
+
+The library handles string error codes which you can customize by creating your own gojsonschema.locale and setting it:
+```go
+gojsonschema.Locale = YourCustomLocale{}
+```
+
+However, each error contains additional contextual information.
+
+**err.Type()**: *string* Returns the "type" of error that occurred. Note you can also type check; see below.
+
+Note: An error of RequiredType has an err.Type() return value of "required"
+
+    "required": RequiredError
+    "invalid_type": InvalidTypeError
+    "number_any_of": NumberAnyOfError
+    "number_one_of": NumberOneOfError
+    "number_all_of": NumberAllOfError
+    "number_not": NumberNotError
+    "missing_dependency": MissingDependencyError
+    "internal": InternalError
+    "enum": EnumError
+    "array_no_additional_items": ArrayNoAdditionalItemsError
+    "array_min_items": ArrayMinItemsError
+    "array_max_items": ArrayMaxItemsError
+    "unique": ItemsMustBeUniqueError
+    "array_min_properties": ArrayMinPropertiesError
+    "array_max_properties": ArrayMaxPropertiesError
+    "additional_property_not_allowed": AdditionalPropertyNotAllowedError
+    "invalid_property_pattern": InvalidPropertyPatternError
+    "string_gte": StringLengthGTEError
+    "string_lte": StringLengthLTEError
+    "pattern": DoesNotMatchPatternError
+    "multiple_of": MultipleOfError
+    "number_gte": NumberGTEError
+    "number_gt": NumberGTError
+    "number_lte": NumberLTEError
+    "number_lt": NumberLTError
+
+**err.Value()**: *interface{}* Returns the value given
+
+**err.Context()**: *gojsonschema.jsonContext* Returns the context. This has a String() method that will print something like this: (root).firstName
+
+**err.Field()**: *string* Returns the fieldname in the format firstName, or for embedded properties, person.firstName. This returns the same as the String() method on *err.Context()* but removes the (root). prefix.
+
+**err.Description()**: *string* The error description. This is based on the locale you are using. See the beginning of this section for overwriting the locale with a custom implementation.
+
+**err.Details()**: *gojsonschema.ErrorDetails* Returns a map[string]interface{} of additional error details specific to the error. For example, GTE errors will have a "min" value, LTE will have a "max" value. See errors.go for a full description of all the error details. Every error always contains a "field" key that holds the value of *err.Field()*
+
+Note: in most cases, err.Details() will be used to generate replacement strings in your locales rather than being used directly. These strings follow the text/template format, i.e.
+```
+{{.field}} must be greater than or equal to {{.min}}
+```
+
+## Formats
+JSON Schema allows for an optional "format" property to validate strings against well-known formats. gojsonschema ships with all of the formats defined in the spec, which you can use like this:
+````json
+{"type": "string", "format": "email"}
+````
+Available formats: date-time, hostname, email, ipv4, ipv6, uri, uuid, regex.
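As a quick illustration of the built-in formats, here is a minimal sketch that ties the pieces above together (string loaders, Validate, and the error accessors); the schema and document literals are made up for the example:

```go
package main

import (
	"fmt"

	"github.com/xeipuuv/gojsonschema"
)

func main() {
	// A schema using the built-in "email" format checker.
	schema := gojsonschema.NewStringLoader(`{"type": "string", "format": "email"}`)
	document := gojsonschema.NewStringLoader(`"not-an-email"`)

	result, err := gojsonschema.Validate(schema, document)
	if err != nil {
		panic(err)
	}
	for _, e := range result.Errors() {
		// Prints the failing field and the locale-driven description.
		fmt.Printf("%s: %s\n", e.Field(), e.Description())
	}
}
```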
+
+For repetitive or more complex formats, you can create custom format checkers and add them to gojsonschema like this:
+
+```go
+// Define the format checker
+type RoleFormatChecker struct{}
+
+// Ensure it meets the gojsonschema.FormatChecker interface
+func (f RoleFormatChecker) IsFormat(input string) bool {
+	return strings.HasPrefix(input, "ROLE_")
+}
+
+// Add it to the library
+gojsonschema.FormatCheckers.Add("role", RoleFormatChecker{})
+```
+
+Now to use in your json schema:
+````json
+{"type": "string", "format": "role"}
+````
+
+## Uses
+
+gojsonschema uses the following test suite:
+
+https://github.com/json-schema/JSON-Schema-Test-Suite
diff --git a/vendor/github.com/xeipuuv/gojsonschema/errors.go b/vendor/github.com/xeipuuv/gojsonschema/errors.go
new file mode 100644
index 0000000000..a541a73783
--- /dev/null
+++ b/vendor/github.com/xeipuuv/gojsonschema/errors.go
@@ -0,0 +1,274 @@
+package gojsonschema
+
+import (
+	"bytes"
+	"sync"
+	"text/template"
+)
+
+var errorTemplates errorTemplate = errorTemplate{template.New("errors-new"), sync.RWMutex{}}
+
+// template.Template is not thread-safe for writing, so some locking is done;
+// sync.RWMutex is used for efficiently locking when new templates are created
+type errorTemplate struct {
+	*template.Template
+	sync.RWMutex
+}
+
+type (
+	// RequiredError. ErrorDetails: property string
+	RequiredError struct {
+		ResultErrorFields
+	}
+
+	// InvalidTypeError. ErrorDetails: expected, given
+	InvalidTypeError struct {
+		ResultErrorFields
+	}
+
+	// NumberAnyOfError. ErrorDetails: -
+	NumberAnyOfError struct {
+		ResultErrorFields
+	}
+
+	// NumberOneOfError. ErrorDetails: -
+	NumberOneOfError struct {
+		ResultErrorFields
+	}
+
+	// NumberAllOfError. ErrorDetails: -
+	NumberAllOfError struct {
+		ResultErrorFields
+	}
+
+	// NumberNotError. ErrorDetails: -
+	NumberNotError struct {
+		ResultErrorFields
+	}
+
+	// MissingDependencyError. ErrorDetails: dependency
+	MissingDependencyError struct {
+		ResultErrorFields
+	}
+
+	// InternalError. ErrorDetails: error
+	InternalError struct {
+		ResultErrorFields
+	}
+
+	// EnumError. ErrorDetails: allowed
+	EnumError struct {
+		ResultErrorFields
+	}
+
+	// ArrayNoAdditionalItemsError. ErrorDetails: -
+	ArrayNoAdditionalItemsError struct {
+		ResultErrorFields
+	}
+
+	// ArrayMinItemsError. ErrorDetails: min
+	ArrayMinItemsError struct {
+		ResultErrorFields
+	}
+
+	// ArrayMaxItemsError. ErrorDetails: max
+	ArrayMaxItemsError struct {
+		ResultErrorFields
+	}
+
+	// ItemsMustBeUniqueError. ErrorDetails: type
+	ItemsMustBeUniqueError struct {
+		ResultErrorFields
+	}
+
+	// ArrayMinPropertiesError. ErrorDetails: min
+	ArrayMinPropertiesError struct {
+		ResultErrorFields
+	}
+
+	// ArrayMaxPropertiesError. ErrorDetails: max
+	ArrayMaxPropertiesError struct {
+		ResultErrorFields
+	}
+
+	// AdditionalPropertyNotAllowedError. ErrorDetails: property
+	AdditionalPropertyNotAllowedError struct {
+		ResultErrorFields
+	}
+
+	// InvalidPropertyPatternError. ErrorDetails: property, pattern
+	InvalidPropertyPatternError struct {
+		ResultErrorFields
+	}
+
+	// StringLengthGTEError. ErrorDetails: min
+	StringLengthGTEError struct {
+		ResultErrorFields
+	}
+
+	// StringLengthLTEError. ErrorDetails: max
+	StringLengthLTEError struct {
+		ResultErrorFields
+	}
+
+	// DoesNotMatchPatternError. ErrorDetails: pattern
+	DoesNotMatchPatternError struct {
+		ResultErrorFields
+	}
+
+	// DoesNotMatchFormatError. ErrorDetails: format
+	DoesNotMatchFormatError struct {
+		ResultErrorFields
+	}
+
+	// MultipleOfError.
ErrorDetails: multiple + MultipleOfError struct { + ResultErrorFields + } + + // NumberGTEError. ErrorDetails: min + NumberGTEError struct { + ResultErrorFields + } + + // NumberGTError. ErrorDetails: min + NumberGTError struct { + ResultErrorFields + } + + // NumberLTEError. ErrorDetails: max + NumberLTEError struct { + ResultErrorFields + } + + // NumberLTError. ErrorDetails: max + NumberLTError struct { + ResultErrorFields + } +) + +// newError takes a ResultError type and sets the type, context, description, details, value, and field +func newError(err ResultError, context *jsonContext, value interface{}, locale locale, details ErrorDetails) { + var t string + var d string + switch err.(type) { + case *RequiredError: + t = "required" + d = locale.Required() + case *InvalidTypeError: + t = "invalid_type" + d = locale.InvalidType() + case *NumberAnyOfError: + t = "number_any_of" + d = locale.NumberAnyOf() + case *NumberOneOfError: + t = "number_one_of" + d = locale.NumberOneOf() + case *NumberAllOfError: + t = "number_all_of" + d = locale.NumberAllOf() + case *NumberNotError: + t = "number_not" + d = locale.NumberNot() + case *MissingDependencyError: + t = "missing_dependency" + d = locale.MissingDependency() + case *InternalError: + t = "internal" + d = locale.Internal() + case *EnumError: + t = "enum" + d = locale.Enum() + case *ArrayNoAdditionalItemsError: + t = "array_no_additional_items" + d = locale.ArrayNoAdditionalItems() + case *ArrayMinItemsError: + t = "array_min_items" + d = locale.ArrayMinItems() + case *ArrayMaxItemsError: + t = "array_max_items" + d = locale.ArrayMaxItems() + case *ItemsMustBeUniqueError: + t = "unique" + d = locale.Unique() + case *ArrayMinPropertiesError: + t = "array_min_properties" + d = locale.ArrayMinProperties() + case *ArrayMaxPropertiesError: + t = "array_max_properties" + d = locale.ArrayMaxProperties() + case *AdditionalPropertyNotAllowedError: + t = "additional_property_not_allowed" + d = locale.AdditionalPropertyNotAllowed() + case *InvalidPropertyPatternError: + t = "invalid_property_pattern" + d = locale.InvalidPropertyPattern() + case *StringLengthGTEError: + t = "string_gte" + d = locale.StringGTE() + case *StringLengthLTEError: + t = "string_lte" + d = locale.StringLTE() + case *DoesNotMatchPatternError: + t = "pattern" + d = locale.DoesNotMatchPattern() + case *DoesNotMatchFormatError: + t = "format" + d = locale.DoesNotMatchFormat() + case *MultipleOfError: + t = "multiple_of" + d = locale.MultipleOf() + case *NumberGTEError: + t = "number_gte" + d = locale.NumberGTE() + case *NumberGTError: + t = "number_gt" + d = locale.NumberGT() + case *NumberLTEError: + t = "number_lte" + d = locale.NumberLTE() + case *NumberLTError: + t = "number_lt" + d = locale.NumberLT() + } + + err.SetType(t) + err.SetContext(context) + err.SetValue(value) + err.SetDetails(details) + details["field"] = err.Field() + err.SetDescription(formatErrorDescription(d, details)) +} + +// formatErrorDescription takes a string in the default text/template +// format and converts it to a string with replacements. The fields come +// from the ErrorDetails struct and vary for each type of error. 
+func formatErrorDescription(s string, details ErrorDetails) string {
+
+	var tpl *template.Template
+	var descrAsBuffer bytes.Buffer
+	var err error
+
+	errorTemplates.RLock()
+	tpl = errorTemplates.Lookup(s)
+	errorTemplates.RUnlock()
+
+	if tpl == nil {
+		errorTemplates.Lock()
+		tpl = errorTemplates.New(s)
+
+		tpl, err = tpl.Parse(s)
+		errorTemplates.Unlock()
+
+		if err != nil {
+			return err.Error()
+		}
+	}
+
+	err = tpl.Execute(&descrAsBuffer, details)
+	if err != nil {
+		return err.Error()
+	}
+
+	return descrAsBuffer.String()
+}
diff --git a/vendor/github.com/xeipuuv/gojsonschema/format_checkers.go b/vendor/github.com/xeipuuv/gojsonschema/format_checkers.go
new file mode 100644
index 0000000000..c7214b0455
--- /dev/null
+++ b/vendor/github.com/xeipuuv/gojsonschema/format_checkers.go
@@ -0,0 +1,194 @@
+package gojsonschema
+
+import (
+	"net"
+	"net/url"
+	"reflect"
+	"regexp"
+	"strings"
+	"time"
+)
+
+type (
+	// FormatChecker is the interface all formatters added to FormatCheckerChain must implement
+	FormatChecker interface {
+		IsFormat(input string) bool
+	}
+
+	// FormatCheckerChain holds the formatters
+	FormatCheckerChain struct {
+		formatters map[string]FormatChecker
+	}
+
+	// EmailFormatChecker verifies email address formats
+	EmailFormatChecker struct{}
+
+	// IPV4FormatChecker verifies IP addresses in the ipv4 format
+	IPV4FormatChecker struct{}
+
+	// IPV6FormatChecker verifies IP addresses in the ipv6 format
+	IPV6FormatChecker struct{}
+
+	// DateTimeFormatChecker verifies date/time formats per RFC3339 5.6
+	//
+	// Valid formats:
+	//		Partial Time: HH:MM:SS
+	//		Full Date: YYYY-MM-DD
+	//		Full Time: HH:MM:SSZ-07:00
+	//		Date Time: YYYY-MM-DDTHH:MM:SSZ-07:00
+	//
+	// Where
+	//		YYYY = 4DIGIT year
+	//		MM = 2DIGIT month ; 01-12
+	//		DD = 2DIGIT day-month ; 01-28, 01-29, 01-30, 01-31 based on month/year
+	//		HH = 2DIGIT hour ; 00-23
+	//		MM = 2DIGIT minute ; 00-59
+	//		SS = 2DIGIT second ; 00-58, 00-60 based on leap second rules
+	//		T = Literal
+	//		Z = Literal
+	//
+	// Note: Nanoseconds are also supported in all formats
+	//
+	// http://tools.ietf.org/html/rfc3339#section-5.6
+	DateTimeFormatChecker struct{}
+
+	// URIFormatChecker validates a URI with a valid Scheme per RFC3986
+	URIFormatChecker struct{}
+
+	// HostnameFormatChecker validates a hostname is in the correct format
+	HostnameFormatChecker struct{}
+
+	// UUIDFormatChecker validates a UUID is in the correct format
+	UUIDFormatChecker struct{}
+
+	// RegexFormatChecker validates a regex is in the correct format
+	RegexFormatChecker struct{}
)
+
+var (
+	// FormatCheckers holds the valid format checkers, and is a public variable
+	// so library users can add custom formatters
+	FormatCheckers = FormatCheckerChain{
+		formatters: map[string]FormatChecker{
+			"date-time": DateTimeFormatChecker{},
+			"hostname":  HostnameFormatChecker{},
+			"email":     EmailFormatChecker{},
+			"ipv4":      IPV4FormatChecker{},
+			"ipv6":      IPV6FormatChecker{},
+			"uri":       URIFormatChecker{},
+			"uuid":      UUIDFormatChecker{},
+			"regex":     RegexFormatChecker{},
+		},
+	}
+
+	// Regex credit: https://github.com/asaskevich/govalidator
+	rxEmail =
regexp.MustCompile("^(((([a-zA-Z]|\\d|[!#\\$%&'\\*\\+\\-\\/=\\?\\^_`{\\|}~]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])+(\\.([a-zA-Z]|\\d|[!#\\$%&'\\*\\+\\-\\/=\\?\\^_`{\\|}~]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])+)*)|((\\x22)((((\\x20|\\x09)*(\\x0d\\x0a))?(\\x20|\\x09)+)?(([\\x01-\\x08\\x0b\\x0c\\x0e-\\x1f\\x7f]|\\x21|[\\x23-\\x5b]|[\\x5d-\\x7e]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])|(\\([\\x01-\\x09\\x0b\\x0c\\x0d-\\x7f]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}]))))*(((\\x20|\\x09)*(\\x0d\\x0a))?(\\x20|\\x09)+)?(\\x22)))@((([a-zA-Z]|\\d|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])|(([a-zA-Z]|\\d|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])([a-zA-Z]|\\d|-|\\.|_|~|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])*([a-zA-Z]|\\d|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])))\\.)+(([a-zA-Z]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])|(([a-zA-Z]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])([a-zA-Z]|\\d|-|\\.|_|~|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])*([a-zA-Z]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])))\\.?$") + + // Regex credit: https://www.socketloop.com/tutorials/golang-validate-hostname + rxHostname = regexp.MustCompile(`^([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\-]{0,61}[a-zA-Z0-9])(\.([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\-]{0,61}[a-zA-Z0-9]))*$`) + + rxUUID = regexp.MustCompile("^[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{12}$") +) + +// Add adds a FormatChecker to the FormatCheckerChain +// The name used will be the value used for the format key in your json schema +func (c *FormatCheckerChain) Add(name string, f FormatChecker) *FormatCheckerChain { + c.formatters[name] = f + + return c +} + +// Remove deletes a FormatChecker from the FormatCheckerChain (if it exists) +func (c *FormatCheckerChain) Remove(name string) *FormatCheckerChain { + delete(c.formatters, name) + + return c +} + +// Has checks to see if the FormatCheckerChain holds a FormatChecker with the given name +func (c *FormatCheckerChain) Has(name string) bool { + _, ok := c.formatters[name] + + return ok +} + +// IsFormat will check an input against a FormatChecker with the given name +// to see if it is the correct format +func (c *FormatCheckerChain) IsFormat(name string, input interface{}) bool { + f, ok := c.formatters[name] + + if !ok { + return false + } + + if !isKind(input, reflect.String) { + return false + } + + inputString := input.(string) + + return f.IsFormat(inputString) +} + +func (f EmailFormatChecker) IsFormat(input string) bool { + return rxEmail.MatchString(input) +} + +// Credit: https://github.com/asaskevich/govalidator +func (f IPV4FormatChecker) IsFormat(input string) bool { + ip := net.ParseIP(input) + return ip != nil && strings.Contains(input, ".") +} + +// Credit: https://github.com/asaskevich/govalidator +func (f IPV6FormatChecker) IsFormat(input string) bool { + ip := net.ParseIP(input) + return ip != nil && strings.Contains(input, ":") +} + +func (f DateTimeFormatChecker) IsFormat(input string) bool { + formats := []string{ + "15:04:05", + "15:04:05Z07:00", + "2006-01-02", + time.RFC3339, + time.RFC3339Nano, + } + + for _, format := range formats { + if _, err := time.Parse(format, input); err == nil { + return true + } + } + + return false +} + +func (f URIFormatChecker) IsFormat(input string) bool { + u, err := url.Parse(input) + if err != nil || u.Scheme == "" { + return 
false + } + + return true +} + +func (f HostnameFormatChecker) IsFormat(input string) bool { + return rxHostname.MatchString(input) && len(input) < 256 +} + +func (f UUIDFormatChecker) IsFormat(input string) bool { + return rxUUID.MatchString(input) +} + +// IsFormat implements FormatChecker interface. +func (f RegexFormatChecker) IsFormat(input string) bool { + if input == "" { + return true + } + _, err := regexp.Compile(input) + if err != nil { + return false + } + return true +} diff --git a/vendor/github.com/xeipuuv/gojsonschema/glide.yaml b/vendor/github.com/xeipuuv/gojsonschema/glide.yaml new file mode 100644 index 0000000000..7aef8c0951 --- /dev/null +++ b/vendor/github.com/xeipuuv/gojsonschema/glide.yaml @@ -0,0 +1,12 @@ +package: github.com/xeipuuv/gojsonschema +license: Apache 2.0 +import: +- package: github.com/xeipuuv/gojsonschema + +- package: github.com/xeipuuv/gojsonpointer + +- package: github.com/xeipuuv/gojsonreference + +- package: github.com/stretchr/testify/assert + version: ^1.1.3 + diff --git a/vendor/github.com/xeipuuv/gojsonschema/internalLog.go b/vendor/github.com/xeipuuv/gojsonschema/internalLog.go new file mode 100644 index 0000000000..4ef7a8d03e --- /dev/null +++ b/vendor/github.com/xeipuuv/gojsonschema/internalLog.go @@ -0,0 +1,37 @@ +// Copyright 2015 xeipuuv ( https://github.com/xeipuuv ) +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +// author xeipuuv +// author-github https://github.com/xeipuuv +// author-mail xeipuuv@gmail.com +// +// repository-name gojsonschema +// repository-desc An implementation of JSON Schema, based on IETF's draft v4 - Go language. +// +// description Very simple log wrapper. +// Used for debugging/testing purposes. +// +// created 01-01-2015 + +package gojsonschema + +import ( + "log" +) + +const internalLogEnabled = false + +func internalLog(format string, v ...interface{}) { + log.Printf(format, v...) +} diff --git a/vendor/github.com/xeipuuv/gojsonschema/jsonContext.go b/vendor/github.com/xeipuuv/gojsonschema/jsonContext.go new file mode 100644 index 0000000000..fcc8d9d6f1 --- /dev/null +++ b/vendor/github.com/xeipuuv/gojsonschema/jsonContext.go @@ -0,0 +1,72 @@ +// Copyright 2013 MongoDB, Inc. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +// author tolsen +// author-github https://github.com/tolsen +// +// repository-name gojsonschema +// repository-desc An implementation of JSON Schema, based on IETF's draft v4 - Go language. 
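To make the chain management above concrete, here is a minimal sketch of registering, querying, and removing a custom checker. The `semver-ish` name and `SemverishFormatChecker` type are invented for illustration; the `FormatCheckers` calls are the ones defined in format_checkers.go above.

```go
package main

import (
	"fmt"
	"strings"

	"github.com/xeipuuv/gojsonschema"
)

// SemverishFormatChecker is a hypothetical checker used only for illustration:
// it accepts strings that look roughly like MAJOR.MINOR.PATCH.
type SemverishFormatChecker struct{}

// IsFormat satisfies the gojsonschema.FormatChecker interface.
func (f SemverishFormatChecker) IsFormat(input string) bool {
	return len(strings.Split(input, ".")) == 3
}

func main() {
	// Register under the name that "format" will use in schemas.
	gojsonschema.FormatCheckers.Add("semver-ish", SemverishFormatChecker{})

	fmt.Println(gojsonschema.FormatCheckers.Has("semver-ish"))               // true
	fmt.Println(gojsonschema.FormatCheckers.IsFormat("semver-ish", "1.2.3")) // true
	fmt.Println(gojsonschema.FormatCheckers.IsFormat("semver-ish", 123))     // false: non-strings never match

	// Checkers can be removed again if needed.
	gojsonschema.FormatCheckers.Remove("semver-ish")
}
```

Note the non-string case: `FormatCheckerChain.IsFormat` rejects any input whose kind is not `reflect.String` before the individual checker ever runs.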
+// +// description Implements a persistent (immutable w/ shared structure) singly-linked list of strings for the purpose of storing a json context +// +// created 04-09-2013 + +package gojsonschema + +import "bytes" + +// jsonContext implements a persistent linked-list of strings +type jsonContext struct { + head string + tail *jsonContext +} + +func newJsonContext(head string, tail *jsonContext) *jsonContext { + return &jsonContext{head, tail} +} + +// String displays the context in reverse. +// This plays well with the data structure's persistent nature with +// Cons and a json document's tree structure. +func (c *jsonContext) String(del ...string) string { + byteArr := make([]byte, 0, c.stringLen()) + buf := bytes.NewBuffer(byteArr) + c.writeStringToBuffer(buf, del) + + return buf.String() +} + +func (c *jsonContext) stringLen() int { + length := 0 + if c.tail != nil { + length = c.tail.stringLen() + 1 // add 1 for "." + } + + length += len(c.head) + return length +} + +func (c *jsonContext) writeStringToBuffer(buf *bytes.Buffer, del []string) { + if c.tail != nil { + c.tail.writeStringToBuffer(buf, del) + + if len(del) > 0 { + buf.WriteString(del[0]) + } else { + buf.WriteString(".") + } + } + + buf.WriteString(c.head) +} diff --git a/vendor/github.com/xeipuuv/gojsonschema/jsonLoader.go b/vendor/github.com/xeipuuv/gojsonschema/jsonLoader.go new file mode 100644 index 0000000000..9433f3ed0d --- /dev/null +++ b/vendor/github.com/xeipuuv/gojsonschema/jsonLoader.go @@ -0,0 +1,340 @@ +// Copyright 2015 xeipuuv ( https://github.com/xeipuuv ) +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +// author xeipuuv +// author-github https://github.com/xeipuuv +// author-mail xeipuuv@gmail.com +// +// repository-name gojsonschema +// repository-desc An implementation of JSON Schema, based on IETF's draft v4 - Go language. +// +// description Different strategies to load JSON files. +// Includes References (file and HTTP), JSON strings and Go types. +// +// created 01-02-2015 + +package gojsonschema + +import ( + "bytes" + "encoding/json" + "errors" + "io" + "io/ioutil" + "net/http" + "os" + "path/filepath" + "runtime" + "strings" + + "github.com/xeipuuv/gojsonreference" +) + +var osFS = osFileSystem(os.Open) + +// JSON loader interface + +type JSONLoader interface { + JsonSource() interface{} + LoadJSON() (interface{}, error) + JsonReference() (gojsonreference.JsonReference, error) + LoaderFactory() JSONLoaderFactory +} + +type JSONLoaderFactory interface { + New(source string) JSONLoader +} + +type DefaultJSONLoaderFactory struct { +} + +type FileSystemJSONLoaderFactory struct { + fs http.FileSystem +} + +func (d DefaultJSONLoaderFactory) New(source string) JSONLoader { + return &jsonReferenceLoader{ + fs: osFS, + source: source, + } +} + +func (f FileSystemJSONLoaderFactory) New(source string) JSONLoader { + return &jsonReferenceLoader{ + fs: f.fs, + source: source, + } +} + +// osFileSystem is a functional wrapper for os.Open that implements http.FileSystem. 
+type osFileSystem func(string) (*os.File, error) + +func (o osFileSystem) Open(name string) (http.File, error) { + return o(name) +} + +// JSON Reference loader +// references are used to load JSONs from files and HTTP + +type jsonReferenceLoader struct { + fs http.FileSystem + source string +} + +func (l *jsonReferenceLoader) JsonSource() interface{} { + return l.source +} + +func (l *jsonReferenceLoader) JsonReference() (gojsonreference.JsonReference, error) { + return gojsonreference.NewJsonReference(l.JsonSource().(string)) +} + +func (l *jsonReferenceLoader) LoaderFactory() JSONLoaderFactory { + return &FileSystemJSONLoaderFactory{ + fs: l.fs, + } +} + +// NewReferenceLoader returns a JSON reference loader using the given source and the local OS file system. +func NewReferenceLoader(source string) *jsonReferenceLoader { + return &jsonReferenceLoader{ + fs: osFS, + source: source, + } +} + +// NewReferenceLoaderFileSystem returns a JSON reference loader using the given source and file system. +func NewReferenceLoaderFileSystem(source string, fs http.FileSystem) *jsonReferenceLoader { + return &jsonReferenceLoader{ + fs: fs, + source: source, + } +} + +func (l *jsonReferenceLoader) LoadJSON() (interface{}, error) { + + var err error + + reference, err := gojsonreference.NewJsonReference(l.JsonSource().(string)) + if err != nil { + return nil, err + } + + refToUrl := reference + refToUrl.GetUrl().Fragment = "" + + var document interface{} + + if reference.HasFileScheme { + + filename := strings.Replace(refToUrl.GetUrl().Path, "file://", "", -1) + if runtime.GOOS == "windows" { + // on Windows, a file URL may have an extra leading slash, use slashes + // instead of backslashes, and have spaces escaped + if strings.HasPrefix(filename, "/") { + filename = filename[1:] + } + filename = filepath.FromSlash(filename) + } + + document, err = l.loadFromFile(filename) + if err != nil { + return nil, err + } + + } else { + + document, err = l.loadFromHTTP(refToUrl.String()) + if err != nil { + return nil, err + } + + } + + return document, nil + +} + +func (l *jsonReferenceLoader) loadFromHTTP(address string) (interface{}, error) { + + resp, err := http.Get(address) + if err != nil { + return nil, err + } + + // must return HTTP Status 200 OK + if resp.StatusCode != http.StatusOK { + return nil, errors.New(formatErrorDescription(Locale.HttpBadStatus(), ErrorDetails{"status": resp.Status})) + } + + bodyBuff, err := ioutil.ReadAll(resp.Body) + if err != nil { + return nil, err + } + + return decodeJsonUsingNumber(bytes.NewReader(bodyBuff)) + +} + +func (l *jsonReferenceLoader) loadFromFile(path string) (interface{}, error) { + f, err := l.fs.Open(path) + if err != nil { + return nil, err + } + defer f.Close() + + bodyBuff, err := ioutil.ReadAll(f) + if err != nil { + return nil, err + } + + return decodeJsonUsingNumber(bytes.NewReader(bodyBuff)) + +} + +// JSON string loader + +type jsonStringLoader struct { + source string +} + +func (l *jsonStringLoader) JsonSource() interface{} { + return l.source +} + +func (l *jsonStringLoader) JsonReference() (gojsonreference.JsonReference, error) { + return gojsonreference.NewJsonReference("#") +} + +func (l *jsonStringLoader) LoaderFactory() JSONLoaderFactory { + return &DefaultJSONLoaderFactory{} +} + +func NewStringLoader(source string) *jsonStringLoader { + return &jsonStringLoader{source: source} +} + +func (l *jsonStringLoader) LoadJSON() (interface{}, error) { + + return decodeJsonUsingNumber(strings.NewReader(l.JsonSource().(string))) + +} + +// JSON 
bytes loader + +type jsonBytesLoader struct { + source []byte +} + +func (l *jsonBytesLoader) JsonSource() interface{} { + return l.source +} + +func (l *jsonBytesLoader) JsonReference() (gojsonreference.JsonReference, error) { + return gojsonreference.NewJsonReference("#") +} + +func (l *jsonBytesLoader) LoaderFactory() JSONLoaderFactory { + return &DefaultJSONLoaderFactory{} +} + +func NewBytesLoader(source []byte) *jsonBytesLoader { + return &jsonBytesLoader{source: source} +} + +func (l *jsonBytesLoader) LoadJSON() (interface{}, error) { + return decodeJsonUsingNumber(bytes.NewReader(l.JsonSource().([]byte))) +} + +// JSON Go (types) loader +// used to load JSONs from the code as maps, interface{}, structs ... + +type jsonGoLoader struct { + source interface{} +} + +func (l *jsonGoLoader) JsonSource() interface{} { + return l.source +} + +func (l *jsonGoLoader) JsonReference() (gojsonreference.JsonReference, error) { + return gojsonreference.NewJsonReference("#") +} + +func (l *jsonGoLoader) LoaderFactory() JSONLoaderFactory { + return &DefaultJSONLoaderFactory{} +} + +func NewGoLoader(source interface{}) *jsonGoLoader { + return &jsonGoLoader{source: source} +} + +func (l *jsonGoLoader) LoadJSON() (interface{}, error) { + + // convert it to a compliant JSON first to avoid types "mismatches" + + jsonBytes, err := json.Marshal(l.JsonSource()) + if err != nil { + return nil, err + } + + return decodeJsonUsingNumber(bytes.NewReader(jsonBytes)) + +} + +type jsonIOLoader struct { + buf *bytes.Buffer +} + +func NewReaderLoader(source io.Reader) (*jsonIOLoader, io.Reader) { + buf := &bytes.Buffer{} + return &jsonIOLoader{buf: buf}, io.TeeReader(source, buf) +} + +func NewWriterLoader(source io.Writer) (*jsonIOLoader, io.Writer) { + buf := &bytes.Buffer{} + return &jsonIOLoader{buf: buf}, io.MultiWriter(source, buf) +} + +func (l *jsonIOLoader) JsonSource() interface{} { + return l.buf.String() +} + +func (l *jsonIOLoader) LoadJSON() (interface{}, error) { + return decodeJsonUsingNumber(l.buf) +} + +func (l *jsonIOLoader) JsonReference() (gojsonreference.JsonReference, error) { + return gojsonreference.NewJsonReference("#") +} + +func (l *jsonIOLoader) LoaderFactory() JSONLoaderFactory { + return &DefaultJSONLoaderFactory{} +} + +func decodeJsonUsingNumber(r io.Reader) (interface{}, error) { + + var document interface{} + + decoder := json.NewDecoder(r) + decoder.UseNumber() + + err := decoder.Decode(&document) + if err != nil { + return nil, err + } + + return document, nil + +} diff --git a/vendor/github.com/xeipuuv/gojsonschema/locales.go b/vendor/github.com/xeipuuv/gojsonschema/locales.go new file mode 100644 index 0000000000..c530952b86 --- /dev/null +++ b/vendor/github.com/xeipuuv/gojsonschema/locales.go @@ -0,0 +1,280 @@ +// Copyright 2015 xeipuuv ( https://github.com/xeipuuv ) +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
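A short usage sketch for the loader family above, assuming the package-level `Validate` helper from the rest of the library (it is not part of this hunk):

```go
package main

import (
	"fmt"

	"github.com/xeipuuv/gojsonschema"
)

func main() {
	// Schemas and documents can come from JSON strings, raw bytes, Go values,
	// readers/writers, or file/HTTP references; every loader funnels through
	// decodeJsonUsingNumber, so numbers stay json.Number rather than float64.
	schemaLoader := gojsonschema.NewStringLoader(
		`{"type": "object", "properties": {"age": {"type": "integer", "minimum": 0}}}`)
	documentLoader := gojsonschema.NewGoLoader(map[string]interface{}{"age": 42})

	result, err := gojsonschema.Validate(schemaLoader, documentLoader)
	if err != nil {
		panic(err) // the schema or the document failed to load/parse
	}
	fmt.Println(result.Valid()) // true
}
```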
+ +// author xeipuuv +// author-github https://github.com/xeipuuv +// author-mail xeipuuv@gmail.com +// +// repository-name gojsonschema +// repository-desc An implementation of JSON Schema, based on IETF's draft v4 - Go language. +// +// description Contains const string and messages. +// +// created 01-01-2015 + +package gojsonschema + +type ( + // locale is an interface for defining custom error strings + locale interface { + Required() string + InvalidType() string + NumberAnyOf() string + NumberOneOf() string + NumberAllOf() string + NumberNot() string + MissingDependency() string + Internal() string + Enum() string + ArrayNotEnoughItems() string + ArrayNoAdditionalItems() string + ArrayMinItems() string + ArrayMaxItems() string + Unique() string + ArrayMinProperties() string + ArrayMaxProperties() string + AdditionalPropertyNotAllowed() string + InvalidPropertyPattern() string + StringGTE() string + StringLTE() string + DoesNotMatchPattern() string + DoesNotMatchFormat() string + MultipleOf() string + NumberGTE() string + NumberGT() string + NumberLTE() string + NumberLT() string + + // Schema validations + RegexPattern() string + GreaterThanZero() string + MustBeOfA() string + MustBeOfAn() string + CannotBeUsedWithout() string + CannotBeGT() string + MustBeOfType() string + MustBeValidRegex() string + MustBeValidFormat() string + MustBeGTEZero() string + KeyCannotBeGreaterThan() string + KeyItemsMustBeOfType() string + KeyItemsMustBeUnique() string + ReferenceMustBeCanonical() string + NotAValidType() string + Duplicated() string + HttpBadStatus() string + + // ErrorFormat + ErrorFormat() string + } + + // DefaultLocale is the default locale for this package + DefaultLocale struct{} +) + +func (l DefaultLocale) Required() string { + return `{{.property}} is required` +} + +func (l DefaultLocale) InvalidType() string { + return `Invalid type. 
Expected: {{.expected}}, given: {{.given}}` +} + +func (l DefaultLocale) NumberAnyOf() string { + return `Must validate at least one schema (anyOf)` +} + +func (l DefaultLocale) NumberOneOf() string { + return `Must validate one and only one schema (oneOf)` +} + +func (l DefaultLocale) NumberAllOf() string { + return `Must validate all the schemas (allOf)` +} + +func (l DefaultLocale) NumberNot() string { + return `Must not validate the schema (not)` +} + +func (l DefaultLocale) MissingDependency() string { + return `Has a dependency on {{.dependency}}` +} + +func (l DefaultLocale) Internal() string { + return `Internal Error {{.error}}` +} + +func (l DefaultLocale) Enum() string { + return `{{.field}} must be one of the following: {{.allowed}}` +} + +func (l DefaultLocale) ArrayNoAdditionalItems() string { + return `No additional items allowed on array` +} + +func (l DefaultLocale) ArrayNotEnoughItems() string { + return `Not enough items on array to match positional list of schema` +} + +func (l DefaultLocale) ArrayMinItems() string { + return `Array must have at least {{.min}} items` +} + +func (l DefaultLocale) ArrayMaxItems() string { + return `Array must have at most {{.max}} items` +} + +func (l DefaultLocale) Unique() string { + return `{{.type}} items must be unique` +} + +func (l DefaultLocale) ArrayMinProperties() string { + return `Must have at least {{.min}} properties` +} + +func (l DefaultLocale) ArrayMaxProperties() string { + return `Must have at most {{.max}} properties` +} + +func (l DefaultLocale) AdditionalPropertyNotAllowed() string { + return `Additional property {{.property}} is not allowed` +} + +func (l DefaultLocale) InvalidPropertyPattern() string { + return `Property "{{.property}}" does not match pattern {{.pattern}}` +} + +func (l DefaultLocale) StringGTE() string { + return `String length must be greater than or equal to {{.min}}` +} + +func (l DefaultLocale) StringLTE() string { + return `String length must be less than or equal to {{.max}}` +} + +func (l DefaultLocale) DoesNotMatchPattern() string { + return `Does not match pattern '{{.pattern}}'` +} + +func (l DefaultLocale) DoesNotMatchFormat() string { + return `Does not match format '{{.format}}'` +} + +func (l DefaultLocale) MultipleOf() string { + return `Must be a multiple of {{.multiple}}` +} + +func (l DefaultLocale) NumberGTE() string { + return `Must be greater than or equal to {{.min}}` +} + +func (l DefaultLocale) NumberGT() string { + return `Must be greater than {{.min}}` +} + +func (l DefaultLocale) NumberLTE() string { + return `Must be less than or equal to {{.max}}` +} + +func (l DefaultLocale) NumberLT() string { + return `Must be less than {{.max}}` +} + +// Schema validators +func (l DefaultLocale) RegexPattern() string { + return `Invalid regex pattern '{{.pattern}}'` +} + +func (l DefaultLocale) GreaterThanZero() string { + return `{{.number}} must be strictly greater than 0` +} + +func (l DefaultLocale) MustBeOfA() string { + return `{{.x}} must be of a {{.y}}` +} + +func (l DefaultLocale) MustBeOfAn() string { + return `{{.x}} must be of an {{.y}}` +} + +func (l DefaultLocale) CannotBeUsedWithout() string { + return `{{.x}} cannot be used without {{.y}}` +} + +func (l DefaultLocale) CannotBeGT() string { + return `{{.x}} cannot be greater than {{.y}}` +} + +func (l DefaultLocale) MustBeOfType() string { + return `{{.key}} must be of type {{.type}}` +} + +func (l DefaultLocale) MustBeValidRegex() string { + return `{{.key}} must be a valid regex` +} + +func (l DefaultLocale) 
MustBeValidFormat() string { + return `{{.key}} must be a valid format {{.given}}` +} + +func (l DefaultLocale) MustBeGTEZero() string { + return `{{.key}} must be greater than or equal to 0` +} + +func (l DefaultLocale) KeyCannotBeGreaterThan() string { + return `{{.key}} cannot be greater than {{.y}}` +} + +func (l DefaultLocale) KeyItemsMustBeOfType() string { + return `{{.key}} items must be {{.type}}` +} + +func (l DefaultLocale) KeyItemsMustBeUnique() string { + return `{{.key}} items must be unique` +} + +func (l DefaultLocale) ReferenceMustBeCanonical() string { + return `Reference {{.reference}} must be canonical` +} + +func (l DefaultLocale) NotAValidType() string { + return `{{.type}} is not a valid type -- ` +} + +func (l DefaultLocale) Duplicated() string { + return `{{.type}} type is duplicated` +} + +func (l DefaultLocale) HttpBadStatus() string { + return `Could not read schema from HTTP, response status is {{.status}}` +} + +// Replacement options: field, description, context, value +func (l DefaultLocale) ErrorFormat() string { + return `{{.field}}: {{.description}}` +} + +const ( + STRING_NUMBER = "number" + STRING_ARRAY_OF_STRINGS = "array of strings" + STRING_ARRAY_OF_SCHEMAS = "array of schemas" + STRING_SCHEMA = "schema" + STRING_SCHEMA_OR_ARRAY_OF_STRINGS = "schema or array of strings" + STRING_PROPERTIES = "properties" + STRING_DEPENDENCY = "dependency" + STRING_PROPERTY = "property" + STRING_UNDEFINED = "undefined" + STRING_CONTEXT_ROOT = "(root)" + STRING_ROOT_SCHEMA_PROPERTY = "(root)" +) diff --git a/vendor/github.com/xeipuuv/gojsonschema/result.go b/vendor/github.com/xeipuuv/gojsonschema/result.go new file mode 100644 index 0000000000..6ad56ae865 --- /dev/null +++ b/vendor/github.com/xeipuuv/gojsonschema/result.go @@ -0,0 +1,172 @@ +// Copyright 2015 xeipuuv ( https://github.com/xeipuuv ) +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +// author xeipuuv +// author-github https://github.com/xeipuuv +// author-mail xeipuuv@gmail.com +// +// repository-name gojsonschema +// repository-desc An implementation of JSON Schema, based on IETF's draft v4 - Go language. +// +// description Result and ResultError implementations. +// +// created 01-01-2015 + +package gojsonschema + +import ( + "fmt" + "strings" +) + +type ( + // ErrorDetails is a map of details specific to each error. + // While the values will vary, every error will contain a "field" value + ErrorDetails map[string]interface{} + + // ResultError is the interface that library errors must implement + ResultError interface { + Field() string + SetType(string) + Type() string + SetContext(*jsonContext) + Context() *jsonContext + SetDescription(string) + Description() string + SetValue(interface{}) + Value() interface{} + SetDetails(ErrorDetails) + Details() ErrorDetails + String() string + } + + // ResultErrorFields holds the fields for each ResultError implementation. 
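Since every message above is a plain text/template string, replacing the wording only requires overriding the methods you care about. A sketch, assuming the exported `Locale` hook declared later in this diff (schema.go); `terseLocale` is hypothetical:

```go
package main

import "github.com/xeipuuv/gojsonschema"

// terseLocale embeds DefaultLocale to inherit every message, then overrides a
// single one. The {{.property}}-style fields must match what newError puts
// into ErrorDetails for that error type.
type terseLocale struct {
	gojsonschema.DefaultLocale
}

func (l terseLocale) Required() string {
	return `missing required property {{.property}}`
}

func main() {
	// Locale is the package-level hook; swap it in before validating.
	gojsonschema.Locale = terseLocale{}
}
```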
+ // ResultErrorFields implements the ResultError interface, so custom errors + // can be defined by just embedding this type + ResultErrorFields struct { + errorType string // A string with the type of error (i.e. invalid_type) + context *jsonContext // Tree like notation of the part that failed the validation. ex (root).a.b ... + description string // A human readable error message + value interface{} // Value given by the JSON file that is the source of the error + details ErrorDetails + } + + Result struct { + errors []ResultError + // Scores how well the validation matched. Useful in generating + // better error messages for anyOf and oneOf. + score int + } +) + +// Field outputs the field name without the root context +// i.e. firstName or person.firstName instead of (root).firstName or (root).person.firstName +func (v *ResultErrorFields) Field() string { + if p, ok := v.Details()["property"]; ok { + if str, isString := p.(string); isString { + return str + } + } + + return strings.TrimPrefix(v.context.String(), STRING_ROOT_SCHEMA_PROPERTY+".") +} + +func (v *ResultErrorFields) SetType(errorType string) { + v.errorType = errorType +} + +func (v *ResultErrorFields) Type() string { + return v.errorType +} + +func (v *ResultErrorFields) SetContext(context *jsonContext) { + v.context = context +} + +func (v *ResultErrorFields) Context() *jsonContext { + return v.context +} + +func (v *ResultErrorFields) SetDescription(description string) { + v.description = description +} + +func (v *ResultErrorFields) Description() string { + return v.description +} + +func (v *ResultErrorFields) SetValue(value interface{}) { + v.value = value +} + +func (v *ResultErrorFields) Value() interface{} { + return v.value +} + +func (v *ResultErrorFields) SetDetails(details ErrorDetails) { + v.details = details +} + +func (v *ResultErrorFields) Details() ErrorDetails { + return v.details +} + +func (v ResultErrorFields) String() string { + // as a fallback, the value is displayed go style + valueString := fmt.Sprintf("%v", v.value) + + // marshal the go value value to json + if v.value == nil { + valueString = TYPE_NULL + } else { + if vs, err := marshalToJsonString(v.value); err == nil { + if vs == nil { + valueString = TYPE_NULL + } else { + valueString = *vs + } + } + } + + return formatErrorDescription(Locale.ErrorFormat(), ErrorDetails{ + "context": v.context.String(), + "description": v.description, + "value": valueString, + "field": v.Field(), + }) +} + +func (v *Result) Valid() bool { + return len(v.errors) == 0 +} + +func (v *Result) Errors() []ResultError { + return v.errors +} + +func (v *Result) addError(err ResultError, context *jsonContext, value interface{}, details ErrorDetails) { + newError(err, context, value, Locale, details) + v.errors = append(v.errors, err) + v.score -= 2 // results in a net -1 when added to the +1 we get at the end of the validation function +} + +// Used to copy errors from a sub-schema to the main one +func (v *Result) mergeErrors(otherResult *Result) { + v.errors = append(v.errors, otherResult.Errors()...) 
+ v.score += otherResult.score +} + +func (v *Result) incrementScore() { + v.score++ +} diff --git a/vendor/github.com/xeipuuv/gojsonschema/schema.go b/vendor/github.com/xeipuuv/gojsonschema/schema.go new file mode 100644 index 0000000000..cf3cbc7d58 --- /dev/null +++ b/vendor/github.com/xeipuuv/gojsonschema/schema.go @@ -0,0 +1,930 @@ +// Copyright 2015 xeipuuv ( https://github.com/xeipuuv ) +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +// author xeipuuv +// author-github https://github.com/xeipuuv +// author-mail xeipuuv@gmail.com +// +// repository-name gojsonschema +// repository-desc An implementation of JSON Schema, based on IETF's draft v4 - Go language. +// +// description Defines Schema, the main entry to every subSchema. +// Contains the parsing logic and error checking. +// +// created 26-02-2013 + +package gojsonschema + +import ( + // "encoding/json" + "errors" + "reflect" + "regexp" + + "github.com/xeipuuv/gojsonreference" +) + +var ( + // Locale is the default locale to use + // Library users can overwrite with their own implementation + Locale locale = DefaultLocale{} +) + +func NewSchema(l JSONLoader) (*Schema, error) { + ref, err := l.JsonReference() + if err != nil { + return nil, err + } + + d := Schema{} + d.pool = newSchemaPool(l.LoaderFactory()) + d.documentReference = ref + d.referencePool = newSchemaReferencePool() + + var doc interface{} + if ref.String() != "" { + // Get document from schema pool + spd, err := d.pool.GetDocument(d.documentReference) + if err != nil { + return nil, err + } + doc = spd.Document + } else { + // Load JSON directly + doc, err = l.LoadJSON() + if err != nil { + return nil, err + } + d.pool.SetStandaloneDocument(doc) + } + + err = d.parse(doc) + if err != nil { + return nil, err + } + + return &d, nil +} + +type Schema struct { + documentReference gojsonreference.JsonReference + rootSchema *subSchema + pool *schemaPool + referencePool *schemaReferencePool +} + +func (d *Schema) parse(document interface{}) error { + d.rootSchema = &subSchema{property: STRING_ROOT_SCHEMA_PROPERTY} + return d.parseSchema(document, d.rootSchema) +} + +func (d *Schema) SetRootSchemaName(name string) { + d.rootSchema.property = name +} + +// Parses a subSchema +// +// Pretty long function ( sorry :) )... 
but pretty straight forward, repetitive and boring +// Not much magic involved here, most of the job is to validate the key names and their values, +// then the values are copied into subSchema struct +// +func (d *Schema) parseSchema(documentNode interface{}, currentSchema *subSchema) error { + + if !isKind(documentNode, reflect.Map) { + return errors.New(formatErrorDescription( + Locale.InvalidType(), + ErrorDetails{ + "expected": TYPE_OBJECT, + "given": STRING_SCHEMA, + }, + )) + } + + m := documentNode.(map[string]interface{}) + + if currentSchema == d.rootSchema { + currentSchema.ref = &d.documentReference + } + + // $subSchema + if existsMapKey(m, KEY_SCHEMA) { + if !isKind(m[KEY_SCHEMA], reflect.String) { + return errors.New(formatErrorDescription( + Locale.InvalidType(), + ErrorDetails{ + "expected": TYPE_STRING, + "given": KEY_SCHEMA, + }, + )) + } + schemaRef := m[KEY_SCHEMA].(string) + schemaReference, err := gojsonreference.NewJsonReference(schemaRef) + currentSchema.subSchema = &schemaReference + if err != nil { + return err + } + } + + // $ref + if existsMapKey(m, KEY_REF) && !isKind(m[KEY_REF], reflect.String) { + return errors.New(formatErrorDescription( + Locale.InvalidType(), + ErrorDetails{ + "expected": TYPE_STRING, + "given": KEY_REF, + }, + )) + } + if k, ok := m[KEY_REF].(string); ok { + + jsonReference, err := gojsonreference.NewJsonReference(k) + if err != nil { + return err + } + + if jsonReference.HasFullUrl { + currentSchema.ref = &jsonReference + } else { + inheritedReference, err := currentSchema.ref.Inherits(jsonReference) + if err != nil { + return err + } + + currentSchema.ref = inheritedReference + } + + if sch, ok := d.referencePool.Get(currentSchema.ref.String() + k); ok { + currentSchema.refSchema = sch + + } else { + err := d.parseReference(documentNode, currentSchema, k) + if err != nil { + return err + } + + return nil + } + } + + // definitions + if existsMapKey(m, KEY_DEFINITIONS) { + if isKind(m[KEY_DEFINITIONS], reflect.Map) { + currentSchema.definitions = make(map[string]*subSchema) + for dk, dv := range m[KEY_DEFINITIONS].(map[string]interface{}) { + if isKind(dv, reflect.Map) { + newSchema := &subSchema{property: KEY_DEFINITIONS, parent: currentSchema, ref: currentSchema.ref} + currentSchema.definitions[dk] = newSchema + err := d.parseSchema(dv, newSchema) + if err != nil { + return errors.New(err.Error()) + } + } else { + return errors.New(formatErrorDescription( + Locale.InvalidType(), + ErrorDetails{ + "expected": STRING_ARRAY_OF_SCHEMAS, + "given": KEY_DEFINITIONS, + }, + )) + } + } + } else { + return errors.New(formatErrorDescription( + Locale.InvalidType(), + ErrorDetails{ + "expected": STRING_ARRAY_OF_SCHEMAS, + "given": KEY_DEFINITIONS, + }, + )) + } + + } + + // id + if existsMapKey(m, KEY_ID) && !isKind(m[KEY_ID], reflect.String) { + return errors.New(formatErrorDescription( + Locale.InvalidType(), + ErrorDetails{ + "expected": TYPE_STRING, + "given": KEY_ID, + }, + )) + } + if k, ok := m[KEY_ID].(string); ok { + currentSchema.id = &k + } + + // title + if existsMapKey(m, KEY_TITLE) && !isKind(m[KEY_TITLE], reflect.String) { + return errors.New(formatErrorDescription( + Locale.InvalidType(), + ErrorDetails{ + "expected": TYPE_STRING, + "given": KEY_TITLE, + }, + )) + } + if k, ok := m[KEY_TITLE].(string); ok { + currentSchema.title = &k + } + + // description + if existsMapKey(m, KEY_DESCRIPTION) && !isKind(m[KEY_DESCRIPTION], reflect.String) { + return errors.New(formatErrorDescription( + Locale.InvalidType(), + ErrorDetails{ + 
"expected": TYPE_STRING, + "given": KEY_DESCRIPTION, + }, + )) + } + if k, ok := m[KEY_DESCRIPTION].(string); ok { + currentSchema.description = &k + } + + // type + if existsMapKey(m, KEY_TYPE) { + if isKind(m[KEY_TYPE], reflect.String) { + if k, ok := m[KEY_TYPE].(string); ok { + err := currentSchema.types.Add(k) + if err != nil { + return err + } + } + } else { + if isKind(m[KEY_TYPE], reflect.Slice) { + arrayOfTypes := m[KEY_TYPE].([]interface{}) + for _, typeInArray := range arrayOfTypes { + if reflect.ValueOf(typeInArray).Kind() != reflect.String { + return errors.New(formatErrorDescription( + Locale.InvalidType(), + ErrorDetails{ + "expected": TYPE_STRING + "/" + STRING_ARRAY_OF_STRINGS, + "given": KEY_TYPE, + }, + )) + } else { + currentSchema.types.Add(typeInArray.(string)) + } + } + + } else { + return errors.New(formatErrorDescription( + Locale.InvalidType(), + ErrorDetails{ + "expected": TYPE_STRING + "/" + STRING_ARRAY_OF_STRINGS, + "given": KEY_TYPE, + }, + )) + } + } + } + + // properties + if existsMapKey(m, KEY_PROPERTIES) { + err := d.parseProperties(m[KEY_PROPERTIES], currentSchema) + if err != nil { + return err + } + } + + // additionalProperties + if existsMapKey(m, KEY_ADDITIONAL_PROPERTIES) { + if isKind(m[KEY_ADDITIONAL_PROPERTIES], reflect.Bool) { + currentSchema.additionalProperties = m[KEY_ADDITIONAL_PROPERTIES].(bool) + } else if isKind(m[KEY_ADDITIONAL_PROPERTIES], reflect.Map) { + newSchema := &subSchema{property: KEY_ADDITIONAL_PROPERTIES, parent: currentSchema, ref: currentSchema.ref} + currentSchema.additionalProperties = newSchema + err := d.parseSchema(m[KEY_ADDITIONAL_PROPERTIES], newSchema) + if err != nil { + return errors.New(err.Error()) + } + } else { + return errors.New(formatErrorDescription( + Locale.InvalidType(), + ErrorDetails{ + "expected": TYPE_BOOLEAN + "/" + STRING_SCHEMA, + "given": KEY_ADDITIONAL_PROPERTIES, + }, + )) + } + } + + // patternProperties + if existsMapKey(m, KEY_PATTERN_PROPERTIES) { + if isKind(m[KEY_PATTERN_PROPERTIES], reflect.Map) { + patternPropertiesMap := m[KEY_PATTERN_PROPERTIES].(map[string]interface{}) + if len(patternPropertiesMap) > 0 { + currentSchema.patternProperties = make(map[string]*subSchema) + for k, v := range patternPropertiesMap { + _, err := regexp.MatchString(k, "") + if err != nil { + return errors.New(formatErrorDescription( + Locale.RegexPattern(), + ErrorDetails{"pattern": k}, + )) + } + newSchema := &subSchema{property: k, parent: currentSchema, ref: currentSchema.ref} + err = d.parseSchema(v, newSchema) + if err != nil { + return errors.New(err.Error()) + } + currentSchema.patternProperties[k] = newSchema + } + } + } else { + return errors.New(formatErrorDescription( + Locale.InvalidType(), + ErrorDetails{ + "expected": STRING_SCHEMA, + "given": KEY_PATTERN_PROPERTIES, + }, + )) + } + } + + // dependencies + if existsMapKey(m, KEY_DEPENDENCIES) { + err := d.parseDependencies(m[KEY_DEPENDENCIES], currentSchema) + if err != nil { + return err + } + } + + // items + if existsMapKey(m, KEY_ITEMS) { + if isKind(m[KEY_ITEMS], reflect.Slice) { + for _, itemElement := range m[KEY_ITEMS].([]interface{}) { + if isKind(itemElement, reflect.Map) { + newSchema := &subSchema{parent: currentSchema, property: KEY_ITEMS} + newSchema.ref = currentSchema.ref + currentSchema.AddItemsChild(newSchema) + err := d.parseSchema(itemElement, newSchema) + if err != nil { + return err + } + } else { + return errors.New(formatErrorDescription( + Locale.InvalidType(), + ErrorDetails{ + "expected": STRING_SCHEMA + "/" + 
STRING_ARRAY_OF_SCHEMAS, + "given": KEY_ITEMS, + }, + )) + } + currentSchema.itemsChildrenIsSingleSchema = false + } + } else if isKind(m[KEY_ITEMS], reflect.Map) { + newSchema := &subSchema{parent: currentSchema, property: KEY_ITEMS} + newSchema.ref = currentSchema.ref + currentSchema.AddItemsChild(newSchema) + err := d.parseSchema(m[KEY_ITEMS], newSchema) + if err != nil { + return err + } + currentSchema.itemsChildrenIsSingleSchema = true + } else { + return errors.New(formatErrorDescription( + Locale.InvalidType(), + ErrorDetails{ + "expected": STRING_SCHEMA + "/" + STRING_ARRAY_OF_SCHEMAS, + "given": KEY_ITEMS, + }, + )) + } + } + + // additionalItems + if existsMapKey(m, KEY_ADDITIONAL_ITEMS) { + if isKind(m[KEY_ADDITIONAL_ITEMS], reflect.Bool) { + currentSchema.additionalItems = m[KEY_ADDITIONAL_ITEMS].(bool) + } else if isKind(m[KEY_ADDITIONAL_ITEMS], reflect.Map) { + newSchema := &subSchema{property: KEY_ADDITIONAL_ITEMS, parent: currentSchema, ref: currentSchema.ref} + currentSchema.additionalItems = newSchema + err := d.parseSchema(m[KEY_ADDITIONAL_ITEMS], newSchema) + if err != nil { + return errors.New(err.Error()) + } + } else { + return errors.New(formatErrorDescription( + Locale.InvalidType(), + ErrorDetails{ + "expected": TYPE_BOOLEAN + "/" + STRING_SCHEMA, + "given": KEY_ADDITIONAL_ITEMS, + }, + )) + } + } + + // validation : number / integer + + if existsMapKey(m, KEY_MULTIPLE_OF) { + multipleOfValue := mustBeNumber(m[KEY_MULTIPLE_OF]) + if multipleOfValue == nil { + return errors.New(formatErrorDescription( + Locale.InvalidType(), + ErrorDetails{ + "expected": STRING_NUMBER, + "given": KEY_MULTIPLE_OF, + }, + )) + } + if *multipleOfValue <= 0 { + return errors.New(formatErrorDescription( + Locale.GreaterThanZero(), + ErrorDetails{"number": KEY_MULTIPLE_OF}, + )) + } + currentSchema.multipleOf = multipleOfValue + } + + if existsMapKey(m, KEY_MINIMUM) { + minimumValue := mustBeNumber(m[KEY_MINIMUM]) + if minimumValue == nil { + return errors.New(formatErrorDescription( + Locale.MustBeOfA(), + ErrorDetails{"x": KEY_MINIMUM, "y": STRING_NUMBER}, + )) + } + currentSchema.minimum = minimumValue + } + + if existsMapKey(m, KEY_EXCLUSIVE_MINIMUM) { + if isKind(m[KEY_EXCLUSIVE_MINIMUM], reflect.Bool) { + if currentSchema.minimum == nil { + return errors.New(formatErrorDescription( + Locale.CannotBeUsedWithout(), + ErrorDetails{"x": KEY_EXCLUSIVE_MINIMUM, "y": KEY_MINIMUM}, + )) + } + exclusiveMinimumValue := m[KEY_EXCLUSIVE_MINIMUM].(bool) + currentSchema.exclusiveMinimum = exclusiveMinimumValue + } else { + return errors.New(formatErrorDescription( + Locale.MustBeOfA(), + ErrorDetails{"x": KEY_EXCLUSIVE_MINIMUM, "y": TYPE_BOOLEAN}, + )) + } + } + + if existsMapKey(m, KEY_MAXIMUM) { + maximumValue := mustBeNumber(m[KEY_MAXIMUM]) + if maximumValue == nil { + return errors.New(formatErrorDescription( + Locale.MustBeOfA(), + ErrorDetails{"x": KEY_MAXIMUM, "y": STRING_NUMBER}, + )) + } + currentSchema.maximum = maximumValue + } + + if existsMapKey(m, KEY_EXCLUSIVE_MAXIMUM) { + if isKind(m[KEY_EXCLUSIVE_MAXIMUM], reflect.Bool) { + if currentSchema.maximum == nil { + return errors.New(formatErrorDescription( + Locale.CannotBeUsedWithout(), + ErrorDetails{"x": KEY_EXCLUSIVE_MAXIMUM, "y": KEY_MAXIMUM}, + )) + } + exclusiveMaximumValue := m[KEY_EXCLUSIVE_MAXIMUM].(bool) + currentSchema.exclusiveMaximum = exclusiveMaximumValue + } else { + return errors.New(formatErrorDescription( + Locale.MustBeOfA(), + ErrorDetails{"x": KEY_EXCLUSIVE_MAXIMUM, "y": STRING_NUMBER}, + )) + } + } + + if 
currentSchema.minimum != nil && currentSchema.maximum != nil { + if *currentSchema.minimum > *currentSchema.maximum { + return errors.New(formatErrorDescription( + Locale.CannotBeGT(), + ErrorDetails{"x": KEY_MINIMUM, "y": KEY_MAXIMUM}, + )) + } + } + + // validation : string + + if existsMapKey(m, KEY_MIN_LENGTH) { + minLengthIntegerValue := mustBeInteger(m[KEY_MIN_LENGTH]) + if minLengthIntegerValue == nil { + return errors.New(formatErrorDescription( + Locale.MustBeOfAn(), + ErrorDetails{"x": KEY_MIN_LENGTH, "y": TYPE_INTEGER}, + )) + } + if *minLengthIntegerValue < 0 { + return errors.New(formatErrorDescription( + Locale.MustBeGTEZero(), + ErrorDetails{"key": KEY_MIN_LENGTH}, + )) + } + currentSchema.minLength = minLengthIntegerValue + } + + if existsMapKey(m, KEY_MAX_LENGTH) { + maxLengthIntegerValue := mustBeInteger(m[KEY_MAX_LENGTH]) + if maxLengthIntegerValue == nil { + return errors.New(formatErrorDescription( + Locale.MustBeOfAn(), + ErrorDetails{"x": KEY_MAX_LENGTH, "y": TYPE_INTEGER}, + )) + } + if *maxLengthIntegerValue < 0 { + return errors.New(formatErrorDescription( + Locale.MustBeGTEZero(), + ErrorDetails{"key": KEY_MAX_LENGTH}, + )) + } + currentSchema.maxLength = maxLengthIntegerValue + } + + if currentSchema.minLength != nil && currentSchema.maxLength != nil { + if *currentSchema.minLength > *currentSchema.maxLength { + return errors.New(formatErrorDescription( + Locale.CannotBeGT(), + ErrorDetails{"x": KEY_MIN_LENGTH, "y": KEY_MAX_LENGTH}, + )) + } + } + + if existsMapKey(m, KEY_PATTERN) { + if isKind(m[KEY_PATTERN], reflect.String) { + regexpObject, err := regexp.Compile(m[KEY_PATTERN].(string)) + if err != nil { + return errors.New(formatErrorDescription( + Locale.MustBeValidRegex(), + ErrorDetails{"key": KEY_PATTERN}, + )) + } + currentSchema.pattern = regexpObject + } else { + return errors.New(formatErrorDescription( + Locale.MustBeOfA(), + ErrorDetails{"x": KEY_PATTERN, "y": TYPE_STRING}, + )) + } + } + + if existsMapKey(m, KEY_FORMAT) { + formatString, ok := m[KEY_FORMAT].(string) + if ok && FormatCheckers.Has(formatString) { + currentSchema.format = formatString + } else { + return errors.New(formatErrorDescription( + Locale.MustBeValidFormat(), + ErrorDetails{"key": KEY_FORMAT, "given": m[KEY_FORMAT]}, + )) + } + } + + // validation : object + + if existsMapKey(m, KEY_MIN_PROPERTIES) { + minPropertiesIntegerValue := mustBeInteger(m[KEY_MIN_PROPERTIES]) + if minPropertiesIntegerValue == nil { + return errors.New(formatErrorDescription( + Locale.MustBeOfAn(), + ErrorDetails{"x": KEY_MIN_PROPERTIES, "y": TYPE_INTEGER}, + )) + } + if *minPropertiesIntegerValue < 0 { + return errors.New(formatErrorDescription( + Locale.MustBeGTEZero(), + ErrorDetails{"key": KEY_MIN_PROPERTIES}, + )) + } + currentSchema.minProperties = minPropertiesIntegerValue + } + + if existsMapKey(m, KEY_MAX_PROPERTIES) { + maxPropertiesIntegerValue := mustBeInteger(m[KEY_MAX_PROPERTIES]) + if maxPropertiesIntegerValue == nil { + return errors.New(formatErrorDescription( + Locale.MustBeOfAn(), + ErrorDetails{"x": KEY_MAX_PROPERTIES, "y": TYPE_INTEGER}, + )) + } + if *maxPropertiesIntegerValue < 0 { + return errors.New(formatErrorDescription( + Locale.MustBeGTEZero(), + ErrorDetails{"key": KEY_MAX_PROPERTIES}, + )) + } + currentSchema.maxProperties = maxPropertiesIntegerValue + } + + if currentSchema.minProperties != nil && currentSchema.maxProperties != nil { + if *currentSchema.minProperties > *currentSchema.maxProperties { + return errors.New(formatErrorDescription( + 
Locale.KeyCannotBeGreaterThan(), + ErrorDetails{"key": KEY_MIN_PROPERTIES, "y": KEY_MAX_PROPERTIES}, + )) + } + } + + if existsMapKey(m, KEY_REQUIRED) { + if isKind(m[KEY_REQUIRED], reflect.Slice) { + requiredValues := m[KEY_REQUIRED].([]interface{}) + for _, requiredValue := range requiredValues { + if isKind(requiredValue, reflect.String) { + err := currentSchema.AddRequired(requiredValue.(string)) + if err != nil { + return err + } + } else { + return errors.New(formatErrorDescription( + Locale.KeyItemsMustBeOfType(), + ErrorDetails{"key": KEY_REQUIRED, "type": TYPE_STRING}, + )) + } + } + } else { + return errors.New(formatErrorDescription( + Locale.MustBeOfAn(), + ErrorDetails{"x": KEY_REQUIRED, "y": TYPE_ARRAY}, + )) + } + } + + // validation : array + + if existsMapKey(m, KEY_MIN_ITEMS) { + minItemsIntegerValue := mustBeInteger(m[KEY_MIN_ITEMS]) + if minItemsIntegerValue == nil { + return errors.New(formatErrorDescription( + Locale.MustBeOfAn(), + ErrorDetails{"x": KEY_MIN_ITEMS, "y": TYPE_INTEGER}, + )) + } + if *minItemsIntegerValue < 0 { + return errors.New(formatErrorDescription( + Locale.MustBeGTEZero(), + ErrorDetails{"key": KEY_MIN_ITEMS}, + )) + } + currentSchema.minItems = minItemsIntegerValue + } + + if existsMapKey(m, KEY_MAX_ITEMS) { + maxItemsIntegerValue := mustBeInteger(m[KEY_MAX_ITEMS]) + if maxItemsIntegerValue == nil { + return errors.New(formatErrorDescription( + Locale.MustBeOfAn(), + ErrorDetails{"x": KEY_MAX_ITEMS, "y": TYPE_INTEGER}, + )) + } + if *maxItemsIntegerValue < 0 { + return errors.New(formatErrorDescription( + Locale.MustBeGTEZero(), + ErrorDetails{"key": KEY_MAX_ITEMS}, + )) + } + currentSchema.maxItems = maxItemsIntegerValue + } + + if existsMapKey(m, KEY_UNIQUE_ITEMS) { + if isKind(m[KEY_UNIQUE_ITEMS], reflect.Bool) { + currentSchema.uniqueItems = m[KEY_UNIQUE_ITEMS].(bool) + } else { + return errors.New(formatErrorDescription( + Locale.MustBeOfA(), + ErrorDetails{"x": KEY_UNIQUE_ITEMS, "y": TYPE_BOOLEAN}, + )) + } + } + + // validation : all + + if existsMapKey(m, KEY_ENUM) { + if isKind(m[KEY_ENUM], reflect.Slice) { + for _, v := range m[KEY_ENUM].([]interface{}) { + err := currentSchema.AddEnum(v) + if err != nil { + return err + } + } + } else { + return errors.New(formatErrorDescription( + Locale.MustBeOfAn(), + ErrorDetails{"x": KEY_ENUM, "y": TYPE_ARRAY}, + )) + } + } + + // validation : subSchema + + if existsMapKey(m, KEY_ONE_OF) { + if isKind(m[KEY_ONE_OF], reflect.Slice) { + for _, v := range m[KEY_ONE_OF].([]interface{}) { + newSchema := &subSchema{property: KEY_ONE_OF, parent: currentSchema, ref: currentSchema.ref} + currentSchema.AddOneOf(newSchema) + err := d.parseSchema(v, newSchema) + if err != nil { + return err + } + } + } else { + return errors.New(formatErrorDescription( + Locale.MustBeOfAn(), + ErrorDetails{"x": KEY_ONE_OF, "y": TYPE_ARRAY}, + )) + } + } + + if existsMapKey(m, KEY_ANY_OF) { + if isKind(m[KEY_ANY_OF], reflect.Slice) { + for _, v := range m[KEY_ANY_OF].([]interface{}) { + newSchema := &subSchema{property: KEY_ANY_OF, parent: currentSchema, ref: currentSchema.ref} + currentSchema.AddAnyOf(newSchema) + err := d.parseSchema(v, newSchema) + if err != nil { + return err + } + } + } else { + return errors.New(formatErrorDescription( + Locale.MustBeOfAn(), + ErrorDetails{"x": KEY_ANY_OF, "y": TYPE_ARRAY}, + )) + } + } + + if existsMapKey(m, KEY_ALL_OF) { + if isKind(m[KEY_ALL_OF], reflect.Slice) { + for _, v := range m[KEY_ALL_OF].([]interface{}) { + newSchema := &subSchema{property: KEY_ALL_OF, parent: currentSchema, 
ref: currentSchema.ref} + currentSchema.AddAllOf(newSchema) + err := d.parseSchema(v, newSchema) + if err != nil { + return err + } + } + } else { + return errors.New(formatErrorDescription( + Locale.MustBeOfAn(), + ErrorDetails{"x": KEY_ANY_OF, "y": TYPE_ARRAY}, + )) + } + } + + if existsMapKey(m, KEY_NOT) { + if isKind(m[KEY_NOT], reflect.Map) { + newSchema := &subSchema{property: KEY_NOT, parent: currentSchema, ref: currentSchema.ref} + currentSchema.SetNot(newSchema) + err := d.parseSchema(m[KEY_NOT], newSchema) + if err != nil { + return err + } + } else { + return errors.New(formatErrorDescription( + Locale.MustBeOfAn(), + ErrorDetails{"x": KEY_NOT, "y": TYPE_OBJECT}, + )) + } + } + + return nil +} + +func (d *Schema) parseReference(documentNode interface{}, currentSchema *subSchema, reference string) error { + var refdDocumentNode interface{} + jsonPointer := currentSchema.ref.GetPointer() + standaloneDocument := d.pool.GetStandaloneDocument() + + if standaloneDocument != nil { + + var err error + refdDocumentNode, _, err = jsonPointer.Get(standaloneDocument) + if err != nil { + return err + } + + } else { + dsp, err := d.pool.GetDocument(*currentSchema.ref) + if err != nil { + return err + } + + refdDocumentNode, _, err = jsonPointer.Get(dsp.Document) + if err != nil { + return err + } + + } + + if !isKind(refdDocumentNode, reflect.Map) { + return errors.New(formatErrorDescription( + Locale.MustBeOfType(), + ErrorDetails{"key": STRING_SCHEMA, "type": TYPE_OBJECT}, + )) + } + + // returns the loaded referenced subSchema for the caller to update its current subSchema + newSchemaDocument := refdDocumentNode.(map[string]interface{}) + newSchema := &subSchema{property: KEY_REF, parent: currentSchema, ref: currentSchema.ref} + d.referencePool.Add(currentSchema.ref.String()+reference, newSchema) + + err := d.parseSchema(newSchemaDocument, newSchema) + if err != nil { + return err + } + + currentSchema.refSchema = newSchema + + return nil + +} + +func (d *Schema) parseProperties(documentNode interface{}, currentSchema *subSchema) error { + + if !isKind(documentNode, reflect.Map) { + return errors.New(formatErrorDescription( + Locale.MustBeOfType(), + ErrorDetails{"key": STRING_PROPERTIES, "type": TYPE_OBJECT}, + )) + } + + m := documentNode.(map[string]interface{}) + for k := range m { + schemaProperty := k + newSchema := &subSchema{property: schemaProperty, parent: currentSchema, ref: currentSchema.ref} + currentSchema.AddPropertiesChild(newSchema) + err := d.parseSchema(m[k], newSchema) + if err != nil { + return err + } + } + + return nil +} + +func (d *Schema) parseDependencies(documentNode interface{}, currentSchema *subSchema) error { + + if !isKind(documentNode, reflect.Map) { + return errors.New(formatErrorDescription( + Locale.MustBeOfType(), + ErrorDetails{"key": KEY_DEPENDENCIES, "type": TYPE_OBJECT}, + )) + } + + m := documentNode.(map[string]interface{}) + currentSchema.dependencies = make(map[string]interface{}) + + for k := range m { + switch reflect.ValueOf(m[k]).Kind() { + + case reflect.Slice: + values := m[k].([]interface{}) + var valuesToRegister []string + + for _, value := range values { + if !isKind(value, reflect.String) { + return errors.New(formatErrorDescription( + Locale.MustBeOfType(), + ErrorDetails{ + "key": STRING_DEPENDENCY, + "type": STRING_SCHEMA_OR_ARRAY_OF_STRINGS, + }, + )) + } else { + valuesToRegister = append(valuesToRegister, value.(string)) + } + currentSchema.dependencies[k] = valuesToRegister + } + + case reflect.Map: + depSchema := 
&subSchema{property: k, parent: currentSchema, ref: currentSchema.ref} + err := d.parseSchema(m[k], depSchema) + if err != nil { + return err + } + currentSchema.dependencies[k] = depSchema + + default: + return errors.New(formatErrorDescription( + Locale.MustBeOfType(), + ErrorDetails{ + "key": STRING_DEPENDENCY, + "type": STRING_SCHEMA_OR_ARRAY_OF_STRINGS, + }, + )) + } + + } + + return nil +} diff --git a/vendor/github.com/xeipuuv/gojsonschema/schemaPool.go b/vendor/github.com/xeipuuv/gojsonschema/schemaPool.go new file mode 100644 index 0000000000..f2ad641af3 --- /dev/null +++ b/vendor/github.com/xeipuuv/gojsonschema/schemaPool.go @@ -0,0 +1,109 @@ +// Copyright 2015 xeipuuv ( https://github.com/xeipuuv ) +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +// author xeipuuv +// author-github https://github.com/xeipuuv +// author-mail xeipuuv@gmail.com +// +// repository-name gojsonschema +// repository-desc An implementation of JSON Schema, based on IETF's draft v4 - Go language. +// +// description Defines resources pooling. +// Eases referencing and avoids downloading the same resource twice. +// +// created 26-02-2013 + +package gojsonschema + +import ( + "errors" + + "github.com/xeipuuv/gojsonreference" +) + +type schemaPoolDocument struct { + Document interface{} +} + +type schemaPool struct { + schemaPoolDocuments map[string]*schemaPoolDocument + standaloneDocument interface{} + jsonLoaderFactory JSONLoaderFactory +} + +func newSchemaPool(f JSONLoaderFactory) *schemaPool { + + p := &schemaPool{} + p.schemaPoolDocuments = make(map[string]*schemaPoolDocument) + p.standaloneDocument = nil + p.jsonLoaderFactory = f + + return p +} + +func (p *schemaPool) SetStandaloneDocument(document interface{}) { + p.standaloneDocument = document +} + +func (p *schemaPool) GetStandaloneDocument() (document interface{}) { + return p.standaloneDocument +} + +func (p *schemaPool) GetDocument(reference gojsonreference.JsonReference) (*schemaPoolDocument, error) { + + if internalLogEnabled { + internalLog("Get Document ( %s )", reference.String()) + } + + var err error + + // It is not possible to load anything that is not canonical... 
+ if !reference.IsCanonical() { + return nil, errors.New(formatErrorDescription( + Locale.ReferenceMustBeCanonical(), + ErrorDetails{"reference": reference}, + )) + } + + refToUrl := reference + refToUrl.GetUrl().Fragment = "" + + var spd *schemaPoolDocument + + // Try to find the requested document in the pool + for k := range p.schemaPoolDocuments { + if k == refToUrl.String() { + spd = p.schemaPoolDocuments[k] + } + } + + if spd != nil { + if internalLogEnabled { + internalLog(" From pool") + } + return spd, nil + } + + jsonReferenceLoader := p.jsonLoaderFactory.New(reference.String()) + document, err := jsonReferenceLoader.LoadJSON() + if err != nil { + return nil, err + } + + spd = &schemaPoolDocument{Document: document} + // add the document to the pool for potential later use + p.schemaPoolDocuments[refToUrl.String()] = spd + + return spd, nil +} diff --git a/vendor/github.com/xeipuuv/gojsonschema/schemaReferencePool.go b/vendor/github.com/xeipuuv/gojsonschema/schemaReferencePool.go new file mode 100644 index 0000000000..294e36a732 --- /dev/null +++ b/vendor/github.com/xeipuuv/gojsonschema/schemaReferencePool.go @@ -0,0 +1,67 @@ +// Copyright 2015 xeipuuv ( https://github.com/xeipuuv ) +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +// author xeipuuv +// author-github https://github.com/xeipuuv +// author-mail xeipuuv@gmail.com +// +// repository-name gojsonschema +// repository-desc An implementation of JSON Schema, based on IETF's draft v4 - Go language. +// +// description Pool of referenced schemas. +// +// created 25-06-2013 + +package gojsonschema + +import ( + "fmt" +) + +type schemaReferencePool struct { + documents map[string]*subSchema +} + +func newSchemaReferencePool() *schemaReferencePool { + + p := &schemaReferencePool{} + p.documents = make(map[string]*subSchema) + + return p +} + +func (p *schemaReferencePool) Get(ref string) (r *subSchema, o bool) { + + if internalLogEnabled { + internalLog(fmt.Sprintf("Schema Reference ( %s )", ref)) + } + + if sch, ok := p.documents[ref]; ok { + if internalLogEnabled { + internalLog(fmt.Sprintf(" From pool")) + } + return sch, true + } + + return nil, false +} + +func (p *schemaReferencePool) Add(ref string, sch *subSchema) { + + if internalLogEnabled { + internalLog(fmt.Sprintf("Add Schema Reference %s to pool", ref)) + } + + p.documents[ref] = sch +} diff --git a/vendor/github.com/xeipuuv/gojsonschema/schemaType.go b/vendor/github.com/xeipuuv/gojsonschema/schemaType.go new file mode 100644 index 0000000000..e13a0fb0cb --- /dev/null +++ b/vendor/github.com/xeipuuv/gojsonschema/schemaType.go @@ -0,0 +1,83 @@ +// Copyright 2015 xeipuuv ( https://github.com/xeipuuv ) +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +// author xeipuuv +// author-github https://github.com/xeipuuv +// author-mail xeipuuv@gmail.com +// +// repository-name gojsonschema +// repository-desc An implementation of JSON Schema, based on IETF's draft v4 - Go language. +// +// description Helper structure to handle schema types, and the combination of them. +// +// created 28-02-2013 + +package gojsonschema + +import ( + "errors" + "fmt" + "strings" +) + +type jsonSchemaType struct { + types []string +} + +// IsTyped returns whether the schema is typed, i.e. contains at least one type. +// When not typed, the schema does not need any type validation +func (t *jsonSchemaType) IsTyped() bool { + return len(t.types) > 0 +} + +func (t *jsonSchemaType) Add(etype string) error { + + if !isStringInSlice(JSON_TYPES, etype) { + return errors.New(formatErrorDescription(Locale.NotAValidType(), ErrorDetails{"type": etype})) + } + + if t.Contains(etype) { + return errors.New(formatErrorDescription(Locale.Duplicated(), ErrorDetails{"type": etype})) + } + + t.types = append(t.types, etype) + + return nil +} + +func (t *jsonSchemaType) Contains(etype string) bool { + + for _, v := range t.types { + if v == etype { + return true + } + } + + return false +} + +func (t *jsonSchemaType) String() string { + + if len(t.types) == 0 { + return STRING_UNDEFINED // should never happen + } + + // Displayed as a list [type1,type2,...] + if len(t.types) > 1 { + return fmt.Sprintf("[%s]", strings.Join(t.types, ",")) + } + + // Only one type: name only + return t.types[0] +} diff --git a/vendor/github.com/xeipuuv/gojsonschema/subSchema.go b/vendor/github.com/xeipuuv/gojsonschema/subSchema.go new file mode 100644 index 0000000000..9ddbb5fc14 --- /dev/null +++ b/vendor/github.com/xeipuuv/gojsonschema/subSchema.go @@ -0,0 +1,227 @@ +// Copyright 2015 xeipuuv ( https://github.com/xeipuuv ) +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +// author xeipuuv +// author-github https://github.com/xeipuuv +// author-mail xeipuuv@gmail.com +// +// repository-name gojsonschema +// repository-desc An implementation of JSON Schema, based on IETF's draft v4 - Go language. +// +// description Defines the structure of a sub-schema. +// A sub-schema can contain other sub-schemas. 
+// +// created 27-02-2013 + +package gojsonschema + +import ( + "errors" + "regexp" + "strings" + + "github.com/xeipuuv/gojsonreference" +) + +const ( + KEY_SCHEMA = "$schema" + KEY_ID = "$id" + KEY_REF = "$ref" + KEY_TITLE = "title" + KEY_DESCRIPTION = "description" + KEY_TYPE = "type" + KEY_ITEMS = "items" + KEY_ADDITIONAL_ITEMS = "additionalItems" + KEY_PROPERTIES = "properties" + KEY_PATTERN_PROPERTIES = "patternProperties" + KEY_ADDITIONAL_PROPERTIES = "additionalProperties" + KEY_DEFINITIONS = "definitions" + KEY_MULTIPLE_OF = "multipleOf" + KEY_MINIMUM = "minimum" + KEY_MAXIMUM = "maximum" + KEY_EXCLUSIVE_MINIMUM = "exclusiveMinimum" + KEY_EXCLUSIVE_MAXIMUM = "exclusiveMaximum" + KEY_MIN_LENGTH = "minLength" + KEY_MAX_LENGTH = "maxLength" + KEY_PATTERN = "pattern" + KEY_FORMAT = "format" + KEY_MIN_PROPERTIES = "minProperties" + KEY_MAX_PROPERTIES = "maxProperties" + KEY_DEPENDENCIES = "dependencies" + KEY_REQUIRED = "required" + KEY_MIN_ITEMS = "minItems" + KEY_MAX_ITEMS = "maxItems" + KEY_UNIQUE_ITEMS = "uniqueItems" + KEY_ENUM = "enum" + KEY_ONE_OF = "oneOf" + KEY_ANY_OF = "anyOf" + KEY_ALL_OF = "allOf" + KEY_NOT = "not" +) + +type subSchema struct { + + // basic subSchema meta properties + id *string + title *string + description *string + + property string + + // Types associated with the subSchema + types jsonSchemaType + + // Reference url + ref *gojsonreference.JsonReference + // Schema referenced + refSchema *subSchema + // Json reference + subSchema *gojsonreference.JsonReference + + // hierarchy + parent *subSchema + definitions map[string]*subSchema + definitionsChildren []*subSchema + itemsChildren []*subSchema + itemsChildrenIsSingleSchema bool + propertiesChildren []*subSchema + + // validation : number / integer + multipleOf *float64 + maximum *float64 + exclusiveMaximum bool + minimum *float64 + exclusiveMinimum bool + + // validation : string + minLength *int + maxLength *int + pattern *regexp.Regexp + format string + + // validation : object + minProperties *int + maxProperties *int + required []string + + dependencies map[string]interface{} + additionalProperties interface{} + patternProperties map[string]*subSchema + + // validation : array + minItems *int + maxItems *int + uniqueItems bool + + additionalItems interface{} + + // validation : all + enum []string + + // validation : subSchema + oneOf []*subSchema + anyOf []*subSchema + allOf []*subSchema + not *subSchema +} + +func (s *subSchema) AddEnum(i interface{}) error { + + is, err := marshalToJsonString(i) + if err != nil { + return err + } + + if isStringInSlice(s.enum, *is) { + return errors.New(formatErrorDescription( + Locale.KeyItemsMustBeUnique(), + ErrorDetails{"key": KEY_ENUM}, + )) + } + + s.enum = append(s.enum, *is) + + return nil +} + +func (s *subSchema) ContainsEnum(i interface{}) (bool, error) { + + is, err := marshalToJsonString(i) + if err != nil { + return false, err + } + + return isStringInSlice(s.enum, *is), nil +} + +func (s *subSchema) AddOneOf(subSchema *subSchema) { + s.oneOf = append(s.oneOf, subSchema) +} + +func (s *subSchema) AddAllOf(subSchema *subSchema) { + s.allOf = append(s.allOf, subSchema) +} + +func (s *subSchema) AddAnyOf(subSchema *subSchema) { + s.anyOf = append(s.anyOf, subSchema) +} + +func (s *subSchema) SetNot(subSchema *subSchema) { + s.not = subSchema +} + +func (s *subSchema) AddRequired(value string) error { + + if isStringInSlice(s.required, value) { + return errors.New(formatErrorDescription( + Locale.KeyItemsMustBeUnique(), + ErrorDetails{"key": 
KEY_REQUIRED}, + )) + } + + s.required = append(s.required, value) + + return nil +} + +func (s *subSchema) AddDefinitionChild(child *subSchema) { + s.definitionsChildren = append(s.definitionsChildren, child) +} + +func (s *subSchema) AddItemsChild(child *subSchema) { + s.itemsChildren = append(s.itemsChildren, child) +} + +func (s *subSchema) AddPropertiesChild(child *subSchema) { + s.propertiesChildren = append(s.propertiesChildren, child) +} + +func (s *subSchema) PatternPropertiesString() string { + + if s.patternProperties == nil || len(s.patternProperties) == 0 { + return STRING_UNDEFINED // should never happen + } + + patternPropertiesKeySlice := []string{} + for pk := range s.patternProperties { + patternPropertiesKeySlice = append(patternPropertiesKeySlice, `"`+pk+`"`) + } + + if len(patternPropertiesKeySlice) == 1 { + return patternPropertiesKeySlice[0] + } + + return "[" + strings.Join(patternPropertiesKeySlice, ",") + "]" + +} diff --git a/vendor/github.com/xeipuuv/gojsonschema/types.go b/vendor/github.com/xeipuuv/gojsonschema/types.go new file mode 100644 index 0000000000..952d22ef65 --- /dev/null +++ b/vendor/github.com/xeipuuv/gojsonschema/types.go @@ -0,0 +1,58 @@ +// Copyright 2015 xeipuuv ( https://github.com/xeipuuv ) +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +// author xeipuuv +// author-github https://github.com/xeipuuv +// author-mail xeipuuv@gmail.com +// +// repository-name gojsonschema +// repository-desc An implementation of JSON Schema, based on IETF's draft v4 - Go language. +// +// description Contains const types for schema and JSON. +// +// created 28-02-2013 + +package gojsonschema + +const ( + TYPE_ARRAY = `array` + TYPE_BOOLEAN = `boolean` + TYPE_INTEGER = `integer` + TYPE_NUMBER = `number` + TYPE_NULL = `null` + TYPE_OBJECT = `object` + TYPE_STRING = `string` +) + +var JSON_TYPES []string +var SCHEMA_TYPES []string + +func init() { + JSON_TYPES = []string{ + TYPE_ARRAY, + TYPE_BOOLEAN, + TYPE_INTEGER, + TYPE_NUMBER, + TYPE_NULL, + TYPE_OBJECT, + TYPE_STRING} + + SCHEMA_TYPES = []string{ + TYPE_ARRAY, + TYPE_BOOLEAN, + TYPE_INTEGER, + TYPE_NUMBER, + TYPE_OBJECT, + TYPE_STRING} +} diff --git a/vendor/github.com/xeipuuv/gojsonschema/utils.go b/vendor/github.com/xeipuuv/gojsonschema/utils.go new file mode 100644 index 0000000000..26cf75ebf7 --- /dev/null +++ b/vendor/github.com/xeipuuv/gojsonschema/utils.go @@ -0,0 +1,208 @@ +// Copyright 2015 xeipuuv ( https://github.com/xeipuuv ) +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+ +// author xeipuuv +// author-github https://github.com/xeipuuv +// author-mail xeipuuv@gmail.com +// +// repository-name gojsonschema +// repository-desc An implementation of JSON Schema, based on IETF's draft v4 - Go language. +// +// description Various utility functions. +// +// created 26-02-2013 + +package gojsonschema + +import ( + "encoding/json" + "fmt" + "math" + "reflect" + "strconv" +) + +func isKind(what interface{}, kind reflect.Kind) bool { + target := what + if isJsonNumber(what) { + // JSON Numbers are strings! + target = *mustBeNumber(what) + } + return reflect.ValueOf(target).Kind() == kind +} + +func existsMapKey(m map[string]interface{}, k string) bool { + _, ok := m[k] + return ok +} + +func isStringInSlice(s []string, what string) bool { + for i := range s { + if s[i] == what { + return true + } + } + return false +} + +func marshalToJsonString(value interface{}) (*string, error) { + + mBytes, err := json.Marshal(value) + if err != nil { + return nil, err + } + + sBytes := string(mBytes) + return &sBytes, nil +} + +func isJsonNumber(what interface{}) bool { + + switch what.(type) { + + case json.Number: + return true + } + + return false +} + +func checkJsonNumber(what interface{}) (isValidFloat64 bool, isValidInt64 bool, isValidInt32 bool) { + + jsonNumber := what.(json.Number) + + f64, errFloat64 := jsonNumber.Float64() + s64 := strconv.FormatFloat(f64, 'f', -1, 64) + _, errInt64 := strconv.ParseInt(s64, 10, 64) + + isValidFloat64 = errFloat64 == nil + isValidInt64 = errInt64 == nil + + _, errInt32 := strconv.ParseInt(s64, 10, 32) + isValidInt32 = isValidInt64 && errInt32 == nil + + return + +} + +// same as ECMA Number.MAX_SAFE_INTEGER and Number.MIN_SAFE_INTEGER +const ( + max_json_float = float64(1<<53 - 1) // 9007199254740991.0 2^53 - 1 + min_json_float = -float64(1<<53 - 1) //-9007199254740991.0 -(2^53 - 1) +) + +func isFloat64AnInteger(f float64) bool { + + if math.IsNaN(f) || math.IsInf(f, 0) || f < min_json_float || f > max_json_float { + return false + } + + return f == float64(int64(f)) || f == float64(uint64(f)) +} + +func mustBeInteger(what interface{}) *int { + + if isJsonNumber(what) { + + number := what.(json.Number) + + _, _, isValidInt32 := checkJsonNumber(number) + + if isValidInt32 { + + int64Value, err := number.Int64() + if err != nil { + return nil + } + + int32Value := int(int64Value) + return &int32Value + + } else { + return nil + } + + } + + return nil +} + +func mustBeNumber(what interface{}) *float64 { + + if isJsonNumber(what) { + + number := what.(json.Number) + float64Value, err := number.Float64() + + if err == nil { + return &float64Value + } else { + return nil + } + + } + + return nil + +} + +// formats a number so that it is displayed as the smallest string possible +func resultErrorFormatJsonNumber(n json.Number) string { + + if int64Value, err := n.Int64(); err == nil { + return fmt.Sprintf("%d", int64Value) + } + + float64Value, _ := n.Float64() + + return fmt.Sprintf("%g", float64Value) +} + +// formats a number so that it is displayed as the smallest string possible +func resultErrorFormatNumber(n float64) string { + + if isFloat64AnInteger(n) { + return fmt.Sprintf("%d", int64(n)) + } + + return fmt.Sprintf("%g", n) +} + +func convertDocumentNode(val interface{}) interface{} { + + if lval, ok := val.([]interface{}); ok { + + res := []interface{}{} + for _, v := range lval { + res = append(res, convertDocumentNode(v)) + } + + return res + + } + + if mval, ok := val.(map[interface{}]interface{}); ok { + + res := 
map[string]interface{}{} + + for k, v := range mval { + res[k.(string)] = convertDocumentNode(v) + } + + return res + + } + + return val +} diff --git a/vendor/github.com/xeipuuv/gojsonschema/validation.go b/vendor/github.com/xeipuuv/gojsonschema/validation.go new file mode 100644 index 0000000000..5b2230db16 --- /dev/null +++ b/vendor/github.com/xeipuuv/gojsonschema/validation.go @@ -0,0 +1,832 @@ +// Copyright 2015 xeipuuv ( https://github.com/xeipuuv ) +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +// author xeipuuv +// author-github https://github.com/xeipuuv +// author-mail xeipuuv@gmail.com +// +// repository-name gojsonschema +// repository-desc An implementation of JSON Schema, based on IETF's draft v4 - Go language. +// +// description Extends Schema and subSchema, implements the validation phase. +// +// created 28-02-2013 + +package gojsonschema + +import ( + "encoding/json" + "reflect" + "regexp" + "strconv" + "strings" + "unicode/utf8" +) + +func Validate(ls JSONLoader, ld JSONLoader) (*Result, error) { + + var err error + + // load schema + + schema, err := NewSchema(ls) + if err != nil { + return nil, err + } + + // begin validation + + return schema.Validate(ld) + +} + +func (v *Schema) Validate(l JSONLoader) (*Result, error) { + + // load document + + root, err := l.LoadJSON() + if err != nil { + return nil, err + } + + // begin validation + + result := &Result{} + context := newJsonContext(STRING_CONTEXT_ROOT, nil) + v.rootSchema.validateRecursive(v.rootSchema, root, result, context) + + return result, nil + +} + +func (v *subSchema) subValidateWithContext(document interface{}, context *jsonContext) *Result { + result := &Result{} + v.validateRecursive(v, document, result, context) + return result +} + +// Walker function to validate the json recursively against the subSchema +func (v *subSchema) validateRecursive(currentSubSchema *subSchema, currentNode interface{}, result *Result, context *jsonContext) { + + if internalLogEnabled { + internalLog("validateRecursive %s", context.String()) + internalLog(" %v", currentNode) + } + + // Handle referenced schemas, returns directly when a $ref is found + if currentSubSchema.refSchema != nil { + v.validateRecursive(currentSubSchema.refSchema, currentNode, result, context) + return + } + + // Check for null value + if currentNode == nil { + if currentSubSchema.types.IsTyped() && !currentSubSchema.types.Contains(TYPE_NULL) { + result.addError( + new(InvalidTypeError), + context, + currentNode, + ErrorDetails{ + "expected": currentSubSchema.types.String(), + "given": TYPE_NULL, + }, + ) + return + } + + currentSubSchema.validateSchema(currentSubSchema, currentNode, result, context) + v.validateCommon(currentSubSchema, currentNode, result, context) + + } else { // Not a null value + + if isJsonNumber(currentNode) { + + value := currentNode.(json.Number) + + _, isValidInt64, _ := checkJsonNumber(value) + + validType := currentSubSchema.types.Contains(TYPE_NUMBER) || (isValidInt64 && 
currentSubSchema.types.Contains(TYPE_INTEGER)) + + if currentSubSchema.types.IsTyped() && !validType { + + givenType := TYPE_INTEGER + if !isValidInt64 { + givenType = TYPE_NUMBER + } + + result.addError( + new(InvalidTypeError), + context, + currentNode, + ErrorDetails{ + "expected": currentSubSchema.types.String(), + "given": givenType, + }, + ) + return + } + + currentSubSchema.validateSchema(currentSubSchema, value, result, context) + v.validateNumber(currentSubSchema, value, result, context) + v.validateCommon(currentSubSchema, value, result, context) + v.validateString(currentSubSchema, value, result, context) + + } else { + + rValue := reflect.ValueOf(currentNode) + rKind := rValue.Kind() + + switch rKind { + + // Slice => JSON array + + case reflect.Slice: + + if currentSubSchema.types.IsTyped() && !currentSubSchema.types.Contains(TYPE_ARRAY) { + result.addError( + new(InvalidTypeError), + context, + currentNode, + ErrorDetails{ + "expected": currentSubSchema.types.String(), + "given": TYPE_ARRAY, + }, + ) + return + } + + castCurrentNode := currentNode.([]interface{}) + + currentSubSchema.validateSchema(currentSubSchema, castCurrentNode, result, context) + + v.validateArray(currentSubSchema, castCurrentNode, result, context) + v.validateCommon(currentSubSchema, castCurrentNode, result, context) + + // Map => JSON object + + case reflect.Map: + if currentSubSchema.types.IsTyped() && !currentSubSchema.types.Contains(TYPE_OBJECT) { + result.addError( + new(InvalidTypeError), + context, + currentNode, + ErrorDetails{ + "expected": currentSubSchema.types.String(), + "given": TYPE_OBJECT, + }, + ) + return + } + + castCurrentNode, ok := currentNode.(map[string]interface{}) + if !ok { + castCurrentNode = convertDocumentNode(currentNode).(map[string]interface{}) + } + + currentSubSchema.validateSchema(currentSubSchema, castCurrentNode, result, context) + + v.validateObject(currentSubSchema, castCurrentNode, result, context) + v.validateCommon(currentSubSchema, castCurrentNode, result, context) + + for _, pSchema := range currentSubSchema.propertiesChildren { + nextNode, ok := castCurrentNode[pSchema.property] + if ok { + subContext := newJsonContext(pSchema.property, context) + v.validateRecursive(pSchema, nextNode, result, subContext) + } + } + + // Simple JSON values : string, number, boolean + + case reflect.Bool: + + if currentSubSchema.types.IsTyped() && !currentSubSchema.types.Contains(TYPE_BOOLEAN) { + result.addError( + new(InvalidTypeError), + context, + currentNode, + ErrorDetails{ + "expected": currentSubSchema.types.String(), + "given": TYPE_BOOLEAN, + }, + ) + return + } + + value := currentNode.(bool) + + currentSubSchema.validateSchema(currentSubSchema, value, result, context) + v.validateNumber(currentSubSchema, value, result, context) + v.validateCommon(currentSubSchema, value, result, context) + v.validateString(currentSubSchema, value, result, context) + + case reflect.String: + + if currentSubSchema.types.IsTyped() && !currentSubSchema.types.Contains(TYPE_STRING) { + result.addError( + new(InvalidTypeError), + context, + currentNode, + ErrorDetails{ + "expected": currentSubSchema.types.String(), + "given": TYPE_STRING, + }, + ) + return + } + + value := currentNode.(string) + + currentSubSchema.validateSchema(currentSubSchema, value, result, context) + v.validateNumber(currentSubSchema, value, result, context) + v.validateCommon(currentSubSchema, value, result, context) + v.validateString(currentSubSchema, value, result, context) + + } + + } + + } + + 
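+ // each node validated without an early return scores a point below; anyOf/oneOf use these scores to pick the closest-matching subSchema when reporting errors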
result.incrementScore() +} + +// Different kinds of validation here: subSchema / common / array / object / string... +func (v *subSchema) validateSchema(currentSubSchema *subSchema, currentNode interface{}, result *Result, context *jsonContext) { + + if internalLogEnabled { + internalLog("validateSchema %s", context.String()) + internalLog(" %v", currentNode) + } + + if len(currentSubSchema.anyOf) > 0 { + + validatedAnyOf := false + var bestValidationResult *Result + + for _, anyOfSchema := range currentSubSchema.anyOf { + if !validatedAnyOf { + validationResult := anyOfSchema.subValidateWithContext(currentNode, context) + validatedAnyOf = validationResult.Valid() + + if !validatedAnyOf && (bestValidationResult == nil || validationResult.score > bestValidationResult.score) { + bestValidationResult = validationResult + } + } + } + if !validatedAnyOf { + + result.addError(new(NumberAnyOfError), context, currentNode, ErrorDetails{}) + + if bestValidationResult != nil { + // add error messages of closest matching subSchema as + // that's probably the one the user was trying to match + result.mergeErrors(bestValidationResult) + } + } + } + + if len(currentSubSchema.oneOf) > 0 { + + nbValidated := 0 + var bestValidationResult *Result + + for _, oneOfSchema := range currentSubSchema.oneOf { + validationResult := oneOfSchema.subValidateWithContext(currentNode, context) + if validationResult.Valid() { + nbValidated++ + } else if nbValidated == 0 && (bestValidationResult == nil || validationResult.score > bestValidationResult.score) { + bestValidationResult = validationResult + } + } + + if nbValidated != 1 { + + result.addError(new(NumberOneOfError), context, currentNode, ErrorDetails{}) + + if nbValidated == 0 { + // add error messages of closest matching subSchema as + // that's probably the one the user was trying to match + result.mergeErrors(bestValidationResult) + } + } + + } + + if len(currentSubSchema.allOf) > 0 { + nbValidated := 0 + + for _, allOfSchema := range currentSubSchema.allOf { + validationResult := allOfSchema.subValidateWithContext(currentNode, context) + if validationResult.Valid() { + nbValidated++ + } + result.mergeErrors(validationResult) + } + + if nbValidated != len(currentSubSchema.allOf) { + result.addError(new(NumberAllOfError), context, currentNode, ErrorDetails{}) + } + } + + if currentSubSchema.not != nil { + validationResult := currentSubSchema.not.subValidateWithContext(currentNode, context) + if validationResult.Valid() { + result.addError(new(NumberNotError), context, currentNode, ErrorDetails{}) + } + } + + if currentSubSchema.dependencies != nil && len(currentSubSchema.dependencies) > 0 { + if isKind(currentNode, reflect.Map) { + for elementKey := range currentNode.(map[string]interface{}) { + if dependency, ok := currentSubSchema.dependencies[elementKey]; ok { + switch dependency := dependency.(type) { + + case []string: + for _, dependOnKey := range dependency { + if _, dependencyResolved := currentNode.(map[string]interface{})[dependOnKey]; !dependencyResolved { + result.addError( + new(MissingDependencyError), + context, + currentNode, + ErrorDetails{"dependency": dependOnKey}, + ) + } + } + + case *subSchema: + dependency.validateRecursive(dependency, currentNode, result, context) + + } + } + } + } + } + + result.incrementScore() +} + +func (v *subSchema) validateCommon(currentSubSchema *subSchema, value interface{}, result *Result, context *jsonContext) { + + if internalLogEnabled { + internalLog("validateCommon %s", context.String()) + internalLog(" 
%v", value) + } + + // enum: + if len(currentSubSchema.enum) > 0 { + has, err := currentSubSchema.ContainsEnum(value) + if err != nil { + result.addError(new(InternalError), context, value, ErrorDetails{"error": err}) + } + if !has { + result.addError( + new(EnumError), + context, + value, + ErrorDetails{ + "allowed": strings.Join(currentSubSchema.enum, ", "), + }, + ) + } + } + + result.incrementScore() +} + +func (v *subSchema) validateArray(currentSubSchema *subSchema, value []interface{}, result *Result, context *jsonContext) { + + if internalLogEnabled { + internalLog("validateArray %s", context.String()) + internalLog(" %v", value) + } + + nbValues := len(value) + + // TODO explain + if currentSubSchema.itemsChildrenIsSingleSchema { + for i := range value { + subContext := newJsonContext(strconv.Itoa(i), context) + validationResult := currentSubSchema.itemsChildren[0].subValidateWithContext(value[i], subContext) + result.mergeErrors(validationResult) + } + } else { + if currentSubSchema.itemsChildren != nil && len(currentSubSchema.itemsChildren) > 0 { + + nbItems := len(currentSubSchema.itemsChildren) + + // while we have both schemas and values, check them against each other + for i := 0; i != nbItems && i != nbValues; i++ { + subContext := newJsonContext(strconv.Itoa(i), context) + validationResult := currentSubSchema.itemsChildren[i].subValidateWithContext(value[i], subContext) + result.mergeErrors(validationResult) + } + + if nbItems < nbValues { + // we have less schemas than elements in the instance array, + // but that might be ok if "additionalItems" is specified. + + switch currentSubSchema.additionalItems.(type) { + case bool: + if !currentSubSchema.additionalItems.(bool) { + result.addError(new(ArrayNoAdditionalItemsError), context, value, ErrorDetails{}) + } + case *subSchema: + additionalItemSchema := currentSubSchema.additionalItems.(*subSchema) + for i := nbItems; i != nbValues; i++ { + subContext := newJsonContext(strconv.Itoa(i), context) + validationResult := additionalItemSchema.subValidateWithContext(value[i], subContext) + result.mergeErrors(validationResult) + } + } + } + } + } + + // minItems & maxItems + if currentSubSchema.minItems != nil { + if nbValues < int(*currentSubSchema.minItems) { + result.addError( + new(ArrayMinItemsError), + context, + value, + ErrorDetails{"min": *currentSubSchema.minItems}, + ) + } + } + if currentSubSchema.maxItems != nil { + if nbValues > int(*currentSubSchema.maxItems) { + result.addError( + new(ArrayMaxItemsError), + context, + value, + ErrorDetails{"max": *currentSubSchema.maxItems}, + ) + } + } + + // uniqueItems: + if currentSubSchema.uniqueItems { + var stringifiedItems []string + for _, v := range value { + vString, err := marshalToJsonString(v) + if err != nil { + result.addError(new(InternalError), context, value, ErrorDetails{"err": err}) + } + if isStringInSlice(stringifiedItems, *vString) { + result.addError( + new(ItemsMustBeUniqueError), + context, + value, + ErrorDetails{"type": TYPE_ARRAY}, + ) + } + stringifiedItems = append(stringifiedItems, *vString) + } + } + + result.incrementScore() +} + +func (v *subSchema) validateObject(currentSubSchema *subSchema, value map[string]interface{}, result *Result, context *jsonContext) { + + if internalLogEnabled { + internalLog("validateObject %s", context.String()) + internalLog(" %v", value) + } + + // minProperties & maxProperties: + if currentSubSchema.minProperties != nil { + if len(value) < int(*currentSubSchema.minProperties) { + result.addError( + 
new(ArrayMinPropertiesError), + context, + value, + ErrorDetails{"min": *currentSubSchema.minProperties}, + ) + } + } + if currentSubSchema.maxProperties != nil { + if len(value) > int(*currentSubSchema.maxProperties) { + result.addError( + new(ArrayMaxPropertiesError), + context, + value, + ErrorDetails{"max": *currentSubSchema.maxProperties}, + ) + } + } + + // required: + for _, requiredProperty := range currentSubSchema.required { + _, ok := value[requiredProperty] + if ok { + result.incrementScore() + } else { + result.addError( + new(RequiredError), + context, + value, + ErrorDetails{"property": requiredProperty}, + ) + } + } + + // additionalProperty & patternProperty: + if currentSubSchema.additionalProperties != nil { + + switch currentSubSchema.additionalProperties.(type) { + case bool: + + if !currentSubSchema.additionalProperties.(bool) { + + for pk := range value { + + found := false + for _, spValue := range currentSubSchema.propertiesChildren { + if pk == spValue.property { + found = true + } + } + + pp_has, pp_match := v.validatePatternProperty(currentSubSchema, pk, value[pk], result, context) + + if found { + + if pp_has && !pp_match { + result.addError( + new(AdditionalPropertyNotAllowedError), + context, + value, + ErrorDetails{"property": pk}, + ) + } + + } else { + + if !pp_has || !pp_match { + result.addError( + new(AdditionalPropertyNotAllowedError), + context, + value, + ErrorDetails{"property": pk}, + ) + } + + } + } + } + + case *subSchema: + + additionalPropertiesSchema := currentSubSchema.additionalProperties.(*subSchema) + for pk := range value { + + found := false + for _, spValue := range currentSubSchema.propertiesChildren { + if pk == spValue.property { + found = true + } + } + + pp_has, pp_match := v.validatePatternProperty(currentSubSchema, pk, value[pk], result, context) + + if found { + + if pp_has && !pp_match { + validationResult := additionalPropertiesSchema.subValidateWithContext(value[pk], context) + result.mergeErrors(validationResult) + } + + } else { + + if !pp_has || !pp_match { + validationResult := additionalPropertiesSchema.subValidateWithContext(value[pk], context) + result.mergeErrors(validationResult) + } + + } + + } + } + } else { + + for pk := range value { + + pp_has, pp_match := v.validatePatternProperty(currentSubSchema, pk, value[pk], result, context) + + if pp_has && !pp_match { + + result.addError( + new(InvalidPropertyPatternError), + context, + value, + ErrorDetails{ + "property": pk, + "pattern": currentSubSchema.PatternPropertiesString(), + }, + ) + } + + } + } + + result.incrementScore() +} + +func (v *subSchema) validatePatternProperty(currentSubSchema *subSchema, key string, value interface{}, result *Result, context *jsonContext) (has bool, matched bool) { + + if internalLogEnabled { + internalLog("validatePatternProperty %s", context.String()) + internalLog(" %s %v", key, value) + } + + has = false + + validatedkey := false + + for pk, pv := range currentSubSchema.patternProperties { + if matches, _ := regexp.MatchString(pk, key); matches { + has = true + subContext := newJsonContext(key, context) + validationResult := pv.subValidateWithContext(value, subContext) + result.mergeErrors(validationResult) + if validationResult.Valid() { + validatedkey = true + } + } + } + + if !validatedkey { + return has, false + } + + result.incrementScore() + + return has, true +} + +func (v *subSchema) validateString(currentSubSchema *subSchema, value interface{}, result *Result, context *jsonContext) { + + // Ignore JSON numbers + if 
isJsonNumber(value) { + return + } + + // Ignore non strings + if !isKind(value, reflect.String) { + return + } + + if internalLogEnabled { + internalLog("validateString %s", context.String()) + internalLog(" %v", value) + } + + stringValue := value.(string) + + // minLength & maxLength: + if currentSubSchema.minLength != nil { + if utf8.RuneCount([]byte(stringValue)) < int(*currentSubSchema.minLength) { + result.addError( + new(StringLengthGTEError), + context, + value, + ErrorDetails{"min": *currentSubSchema.minLength}, + ) + } + } + if currentSubSchema.maxLength != nil { + if utf8.RuneCount([]byte(stringValue)) > int(*currentSubSchema.maxLength) { + result.addError( + new(StringLengthLTEError), + context, + value, + ErrorDetails{"max": *currentSubSchema.maxLength}, + ) + } + } + + // pattern: + if currentSubSchema.pattern != nil { + if !currentSubSchema.pattern.MatchString(stringValue) { + result.addError( + new(DoesNotMatchPatternError), + context, + value, + ErrorDetails{"pattern": currentSubSchema.pattern}, + ) + + } + } + + // format + if currentSubSchema.format != "" { + if !FormatCheckers.IsFormat(currentSubSchema.format, stringValue) { + result.addError( + new(DoesNotMatchFormatError), + context, + value, + ErrorDetails{"format": currentSubSchema.format}, + ) + } + } + + result.incrementScore() +} + +func (v *subSchema) validateNumber(currentSubSchema *subSchema, value interface{}, result *Result, context *jsonContext) { + + // Ignore non numbers + if !isJsonNumber(value) { + return + } + + if internalLogEnabled { + internalLog("validateNumber %s", context.String()) + internalLog(" %v", value) + } + + number := value.(json.Number) + float64Value, _ := number.Float64() + + // multipleOf: + if currentSubSchema.multipleOf != nil { + + if !isFloat64AnInteger(float64Value / *currentSubSchema.multipleOf) { + result.addError( + new(MultipleOfError), + context, + resultErrorFormatJsonNumber(number), + ErrorDetails{"multiple": *currentSubSchema.multipleOf}, + ) + } + } + + //maximum & exclusiveMaximum: + if currentSubSchema.maximum != nil { + if currentSubSchema.exclusiveMaximum { + if float64Value >= *currentSubSchema.maximum { + result.addError( + new(NumberLTError), + context, + resultErrorFormatJsonNumber(number), + ErrorDetails{ + "max": resultErrorFormatNumber(*currentSubSchema.maximum), + }, + ) + } + } else { + if float64Value > *currentSubSchema.maximum { + result.addError( + new(NumberLTEError), + context, + resultErrorFormatJsonNumber(number), + ErrorDetails{ + "max": resultErrorFormatNumber(*currentSubSchema.maximum), + }, + ) + } + } + } + + //minimum & exclusiveMinimum: + if currentSubSchema.minimum != nil { + if currentSubSchema.exclusiveMinimum { + if float64Value <= *currentSubSchema.minimum { + result.addError( + new(NumberGTError), + context, + resultErrorFormatJsonNumber(number), + ErrorDetails{ + "min": resultErrorFormatNumber(*currentSubSchema.minimum), + }, + ) + } + } else { + if float64Value < *currentSubSchema.minimum { + result.addError( + new(NumberGTEError), + context, + resultErrorFormatJsonNumber(number), + ErrorDetails{ + "min": resultErrorFormatNumber(*currentSubSchema.minimum), + }, + ) + } + } + } + + result.incrementScore() +} diff --git a/vendor/google.golang.org/api/container/v1/container-api.json b/vendor/google.golang.org/api/container/v1/container-api.json index f9f5ba1df9..0a07d88ab8 100644 --- a/vendor/google.golang.org/api/container/v1/container-api.json +++ b/vendor/google.golang.org/api/container/v1/container-api.json @@ -1,11 +1,11 @@ { 
"kind": "discovery#restDescription", - "etag": "\"jQLIOHBVnDZie4rQHGH1WJF-INE/cpP4K9eaLrLwMGtsdl5oXjxb8rw\"", + "etag": "\"tbys6C40o18GZwyMen5GMkdK-3s/aTs6tIgXySgjqhtr4EU6PD-kvdQ\"", "discoveryVersion": "v1", "id": "container:v1", "name": "container", "version": "v1", - "revision": "20160421", + "revision": "20161024", "title": "Google Container Engine API", "description": "Builds and manages clusters that run container-based applications, powered by open source Kubernetes technology.", "ownerDomain": "google.com", @@ -183,7 +183,7 @@ }, "nodePools": { "type": "array", - "description": "The node pools associated with this cluster. When creating a new cluster, only a single node pool should be specified. This field should not be set if \"node_config\" or \"initial_node_count\" are specified.", + "description": "The node pools associated with this cluster. This field should not be set if \"node_config\" or \"initial_node_count\" are specified.", "items": { "$ref": "NodePool" } @@ -195,6 +195,10 @@ "type": "string" } }, + "enableKubernetesAlpha": { + "type": "boolean", + "description": "Kubernetes alpha features are enabled on this cluster. This includes alpha API groups (e.g. v1alpha1) and features that may not be production ready in the kubernetes version of the master and nodes. The cluster has no SLA for uptime and master/node upgrades are disabled. Alpha enabled clusters are automatically deleted thirty days after creation." + }, "selfLink": { "type": "string", "description": "[Output only] Server-defined URL for the resource." @@ -259,6 +263,10 @@ "type": "integer", "description": "[Output only] The number of nodes currently in the cluster.", "format": "int32" + }, + "expireTime": { + "type": "string", + "description": "[Output only] The time the cluster will be automatically deleted in [RFC3339](https://www.ietf.org/rfc/rfc3339.txt) text format." } } }, @@ -283,12 +291,43 @@ "type": "string" } }, + "serviceAccount": { + "type": "string", + "description": "The Google Cloud Platform Service Account to be used by the node VMs. If no Service Account is specified, the \"default\" service account is used." + }, "metadata": { "type": "object", "description": "The metadata key/value pairs assigned to instances in the cluster. Keys must conform to the regexp [a-zA-Z0-9-_]+ and be less than 128 bytes in length. These are reflected as part of a URL in the metadata server. Additionally, to avoid ambiguity, keys must not conflict with any other metadata keys for the project or be one of the four reserved keys: \"instance-template\", \"kube-env\", \"startup-script\", and \"user-data\" Values are free-form strings, and only have meaning as interpreted by the image running in the instance. The only restriction placed on them is that each value's size must be less than or equal to 32 KB. The total size of all keys and values must be less than 512 KB.", "additionalProperties": { "type": "string" } + }, + "imageType": { + "type": "string", + "description": "The image type to use for this node. Note that for a given image type, the latest version of it will be used." + }, + "labels": { + "type": "object", + "description": "The map of Kubernetes labels (key/value pairs) to be applied to each node. These will added in addition to any default label(s) that Kubernetes may apply to the node. In case of conflict in label keys, the applied set may differ depending on the Kubernetes version -- it's best to assume the behavior is undefined and conflicts should be avoided. 
For more information, including usage and the valid values, see: http://kubernetes.io/v1.1/docs/user-guide/labels.html", + "additionalProperties": { + "type": "string" + } + }, + "localSsdCount": { + "type": "integer", + "description": "The number of local SSD disks to be attached to the node. The limit for this value is dependent upon the maximum number of disks available on a machine per zone. See: https://cloud.google.com/compute/docs/disks/local-ssd#local_ssd_limits for more information.", + "format": "int32" + }, + "tags": { + "type": "array", + "description": "The list of instance tags applied to all nodes. Tags are used to identify valid sources or targets for network firewalls and are specified by the client during cluster or node pool creation. Each tag within the list must comply with RFC1035.", + "items": { + "type": "string" + } + }, + "preemptible": { + "type": "boolean", + "description": "Whether the nodes are created as preemptible VM instances. See: https://cloud.google.com/compute/docs/instances/preemptible for more information about preemptible VM instances." } } }, @@ -376,11 +415,11 @@ }, "selfLink": { "type": "string", - "description": "Server-defined URL for the resource." + "description": "[Output only] Server-defined URL for the resource." }, "version": { "type": "string", - "description": "The version of the Kubernetes of this node." + "description": "[Output only] The version of the Kubernetes of this node." }, "instanceGroupUrls": { "type": "array", @@ -391,7 +430,7 @@ }, "status": { "type": "string", - "description": "The status of the nodes in this pool instance.", + "description": "[Output only] The status of the nodes in this pool instance.", "enum": [ "STATUS_UNSPECIFIED", "PROVISIONING", @@ -405,6 +444,65 @@ "statusMessage": { "type": "string", "description": "[Output only] Additional information about the current status of this node pool instance, if available." + }, + "autoscaling": { + "$ref": "NodePoolAutoscaling", + "description": "Autoscaler configuration for this NodePool. Autoscaler is enabled only if a valid configuration is present." + }, + "management": { + "$ref": "NodeManagement", + "description": "NodeManagement configuration for this NodePool." + } + } + }, + "NodePoolAutoscaling": { + "id": "NodePoolAutoscaling", + "type": "object", + "description": "NodePoolAutoscaling contains information required by cluster autoscaler to adjust the size of the node pool to the current cluster usage.", + "properties": { + "enabled": { + "type": "boolean", + "description": "Is autoscaling enabled for this node pool." + }, + "minNodeCount": { + "type": "integer", + "description": "Minimum number of nodes in the NodePool. Must be \u003e= 1 and \u003c= max_node_count.", + "format": "int32" + }, + "maxNodeCount": { + "type": "integer", + "description": "Maximum number of nodes in the NodePool. Must be \u003e= min_node_count. There has to be enough quota to scale up the cluster.", + "format": "int32" + } + } + }, + "NodeManagement": { + "id": "NodeManagement", + "type": "object", + "description": "NodeManagement defines the set of node management services turned on for the node pool.", + "properties": { + "autoUpgrade": { + "type": "boolean", + "description": "Whether the nodes will be automatically upgraded." + }, + "upgradeOptions": { + "$ref": "AutoUpgradeOptions", + "description": "Specifies the Auto Upgrade knobs for the node pool." 
+ } + } + }, + "AutoUpgradeOptions": { + "id": "AutoUpgradeOptions", + "type": "object", + "description": "AutoUpgradeOptions defines the set of options for the user to control how the Auto Upgrades will proceed.", + "properties": { + "autoUpgradeStartTime": { + "type": "string", + "description": "[Output only] This field is set when upgrades are about to commence with the approximate start time for the upgrades, in [RFC3339](https://www.ietf.org/rfc/rfc3339.txt) text format." + }, + "description": { + "type": "string", + "description": "[Output only] This field is set when upgrades are about to commence with the description of the upgrade." } } }, @@ -444,7 +542,8 @@ "REPAIR_CLUSTER", "UPDATE_CLUSTER", "CREATE_NODE_POOL", - "DELETE_NODE_POOL" + "DELETE_NODE_POOL", + "SET_NODE_POOL_MANAGEMENT" ] }, "status": { @@ -454,7 +553,8 @@ "STATUS_UNSPECIFIED", "PENDING", "RUNNING", - "DONE" + "DONE", + "ABORTING" ] }, "detail": { @@ -505,7 +605,22 @@ }, "desiredNodePoolId": { "type": "string", - "description": "The node pool to be upgraded. This field is mandatory if the \"desired_node_version\" or \"desired_image_family\" is specified and there is more than one node pool on the cluster." + "description": "The node pool to be upgraded. This field is mandatory if \"desired_node_version\", \"desired_image_family\" or \"desired_node_pool_autoscaling\" is specified and there is more than one node pool on the cluster." + }, + "desiredImageType": { + "type": "string", + "description": "The desired image type for the node pool. NOTE: Set the \"desired_node_pool\" field as well." + }, + "desiredNodePoolAutoscaling": { + "$ref": "NodePoolAutoscaling", + "description": "Autoscaler configuration for the node pool specified in desired_node_pool_id. If there is only one pool in the cluster and desired_node_pool_id is not provided then the change applies to that single node pool." + }, + "desiredLocations": { + "type": "array", + "description": "The desired list of Google Compute Engine [locations](/compute/docs/zones#available) in which the cluster's nodes should be located. Changing the locations a cluster is in will result in nodes being either created or removed from the cluster, depending on whether locations are being added or removed. This list must always include the cluster's primary zone.", + "items": { + "type": "string" + } }, "desiredMasterVersion": { "type": "string", @@ -534,6 +649,16 @@ } } }, + "CancelOperationRequest": { + "id": "CancelOperationRequest", + "type": "object", + "description": "CancelOperationRequest cancels a single operation." + }, + "Empty": { + "id": "Empty", + "type": "object", + "description": "A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance: service Foo { rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); } The JSON representation for `Empty` is empty JSON object `{}`." + }, "ServerConfig": { "id": "ServerConfig", "type": "object", @@ -550,13 +675,20 @@ "type": "string" } }, - "defaultImageFamily": { + "defaultImageType": { "type": "string", - "description": "Default image family." + "description": "Default image type." 
}, - "validImageFamilies": { + "validImageTypes": { "type": "array", - "description": "List of valid image families.", + "description": "List of valid image types.", + "items": { + "type": "string" + } + }, + "validMasterVersions": { + "type": "array", + "description": "List of valid master versions.", "items": { "type": "string" } @@ -587,6 +719,22 @@ "description": "The node pool to create." } } + }, + "RollbackNodePoolUpgradeRequest": { + "id": "RollbackNodePoolUpgradeRequest", + "type": "object", + "description": "RollbackNodePoolUpgradeRequest rollbacks the previously Aborted or Failed NodePool upgrade. This will be an no-op if the last upgrade successfully completed." + }, + "SetNodePoolManagementRequest": { + "id": "SetNodePoolManagementRequest", + "type": "object", + "description": "SetNodePoolManagementRequest sets the node management properties of a node pool.", + "properties": { + "management": { + "$ref": "NodeManagement", + "description": "NodeManagement configuration for the node pool." + } + } } }, "resources": { @@ -973,6 +1121,100 @@ "scopes": [ "https://www.googleapis.com/auth/cloud-platform" ] + }, + "rollback": { + "id": "container.projects.zones.clusters.nodePools.rollback", + "path": "v1/projects/{projectId}/zones/{zone}/clusters/{clusterId}/nodePools/{nodePoolId}:rollback", + "httpMethod": "POST", + "description": "Roll back the previously Aborted or Failed NodePool upgrade. This will be an no-op if the last upgrade successfully completed.", + "parameters": { + "projectId": { + "type": "string", + "description": "The Google Developers Console [project ID or project number](https://support.google.com/cloud/answer/6158840).", + "required": true, + "location": "path" + }, + "zone": { + "type": "string", + "description": "The name of the Google Compute Engine [zone](/compute/docs/zones#available) in which the cluster resides.", + "required": true, + "location": "path" + }, + "clusterId": { + "type": "string", + "description": "The name of the cluster to rollback.", + "required": true, + "location": "path" + }, + "nodePoolId": { + "type": "string", + "description": "The name of the node pool to rollback.", + "required": true, + "location": "path" + } + }, + "parameterOrder": [ + "projectId", + "zone", + "clusterId", + "nodePoolId" + ], + "request": { + "$ref": "RollbackNodePoolUpgradeRequest" + }, + "response": { + "$ref": "Operation" + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform" + ] + }, + "setManagement": { + "id": "container.projects.zones.clusters.nodePools.setManagement", + "path": "v1/projects/{projectId}/zones/{zone}/clusters/{clusterId}/nodePools/{nodePoolId}/setManagement", + "httpMethod": "POST", + "description": "Sets the NodeManagement options for a node pool.", + "parameters": { + "projectId": { + "type": "string", + "description": "The Google Developers Console [project ID or project number](https://support.google.com/cloud/answer/6158840).", + "required": true, + "location": "path" + }, + "zone": { + "type": "string", + "description": "The name of the Google Compute Engine [zone](/compute/docs/zones#available) in which the cluster resides.", + "required": true, + "location": "path" + }, + "clusterId": { + "type": "string", + "description": "The name of the cluster to update.", + "required": true, + "location": "path" + }, + "nodePoolId": { + "type": "string", + "description": "The name of the node pool to update.", + "required": true, + "location": "path" + } + }, + "parameterOrder": [ + "projectId", + "zone", + "clusterId", + 
"nodePoolId" + ], + "request": { + "$ref": "SetNodePoolManagementRequest" + }, + "response": { + "$ref": "Operation" + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform" + ] } } } @@ -1046,6 +1288,46 @@ "scopes": [ "https://www.googleapis.com/auth/cloud-platform" ] + }, + "cancel": { + "id": "container.projects.zones.operations.cancel", + "path": "v1/projects/{projectId}/zones/{zone}/operations/{operationId}:cancel", + "httpMethod": "POST", + "description": "Cancels the specified operation.", + "parameters": { + "projectId": { + "type": "string", + "description": "The Google Developers Console [project ID or project number](https://support.google.com/cloud/answer/6158840).", + "required": true, + "location": "path" + }, + "zone": { + "type": "string", + "description": "The name of the Google Compute Engine [zone](/compute/docs/zones#available) in which the operation resides.", + "required": true, + "location": "path" + }, + "operationId": { + "type": "string", + "description": "The server-assigned `name` of the operation.", + "required": true, + "location": "path" + } + }, + "parameterOrder": [ + "projectId", + "zone", + "operationId" + ], + "request": { + "$ref": "CancelOperationRequest" + }, + "response": { + "$ref": "Empty" + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform" + ] } } } diff --git a/vendor/google.golang.org/api/container/v1/container-gen.go b/vendor/google.golang.org/api/container/v1/container-gen.go index 644e1de7cf..c6e08354a9 100644 --- a/vendor/google.golang.org/api/container/v1/container-gen.go +++ b/vendor/google.golang.org/api/container/v1/container-gen.go @@ -61,9 +61,10 @@ func New(client *http.Client) (*Service, error) { } type Service struct { - client *http.Client - BasePath string // API endpoint base URL - UserAgent string // optional additional User-Agent fragment + client *http.Client + BasePath string // API endpoint base URL + UserAgent string // optional additional User-Agent fragment + GoogleClientHeaderElement string // client header fragment, for Google use only Projects *ProjectsService } @@ -75,6 +76,10 @@ func (s *Service) userAgent() string { return googleapi.UserAgent + " " + s.UserAgent } +func (s *Service) clientHeader() string { + return gensupport.GoogleClientHeader("20170210", s.GoogleClientHeaderElement) +} + func NewProjectsService(s *Service) *ProjectsService { rs := &ProjectsService{s: s} rs.Zones = NewProjectsZonesService(s) @@ -171,6 +176,49 @@ func (s *AddonsConfig) MarshalJSON() ([]byte, error) { return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields) } +// AutoUpgradeOptions: AutoUpgradeOptions defines the set of options for +// the user to control how the Auto Upgrades will proceed. +type AutoUpgradeOptions struct { + // AutoUpgradeStartTime: [Output only] This field is set when upgrades + // are about to commence with the approximate start time for the + // upgrades, in [RFC3339](https://www.ietf.org/rfc/rfc3339.txt) text + // format. + AutoUpgradeStartTime string `json:"autoUpgradeStartTime,omitempty"` + + // Description: [Output only] This field is set when upgrades are about + // to commence with the description of the upgrade. + Description string `json:"description,omitempty"` + + // ForceSendFields is a list of field names (e.g. + // "AutoUpgradeStartTime") to unconditionally include in API requests. + // By default, fields with empty values are omitted from API requests. 
+ // However, any non-pointer, non-interface field appearing in + // ForceSendFields will be sent to the server regardless of whether the + // field is empty or not. This may be used to include empty fields in + // Patch requests. + ForceSendFields []string `json:"-"` + + // NullFields is a list of field names (e.g. "AutoUpgradeStartTime") to + // include in API requests with the JSON null value. By default, fields + // with empty values are omitted from API requests. However, any field + // with an empty value appearing in NullFields will be sent to the + // server as null. It is an error if a field in this list has a + // non-empty value. This may be used to include null fields in Patch + // requests. + NullFields []string `json:"-"` +} + +func (s *AutoUpgradeOptions) MarshalJSON() ([]byte, error) { + type noMethod AutoUpgradeOptions + raw := noMethod(*s) + return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields) +} + +// CancelOperationRequest: CancelOperationRequest cancels a single +// operation. +type CancelOperationRequest struct { +} + // Cluster: A Google Container Engine cluster. type Cluster struct { // AddonsConfig: Configurations for the various addons available to run @@ -205,12 +253,25 @@ type Cluster struct { // Description: An optional description of this cluster. Description string `json:"description,omitempty"` + // EnableKubernetesAlpha: Kubernetes alpha features are enabled on this + // cluster. This includes alpha API groups (e.g. v1alpha1) and features + // that may not be production ready in the kubernetes version of the + // master and nodes. The cluster has no SLA for uptime and master/node + // upgrades are disabled. Alpha enabled clusters are automatically + // deleted thirty days after creation. + EnableKubernetesAlpha bool `json:"enableKubernetesAlpha,omitempty"` + // Endpoint: [Output only] The IP address of this cluster's master // endpoint. The endpoint can be accessed from the internet at // `https://username:password@endpoint/`. See the `masterAuth` property // of this resource for username and password information. Endpoint string `json:"endpoint,omitempty"` + // ExpireTime: [Output only] The time the cluster will be automatically + // deleted in [RFC3339](https://www.ietf.org/rfc/rfc3339.txt) text + // format. + ExpireTime string `json:"expireTime,omitempty"` + // InitialClusterVersion: [Output only] The software version of the // master endpoint and kubelets used in the cluster when it was first // created. The version can be upgraded over time. @@ -280,9 +341,8 @@ type Cluster struct { // `container_ipv4_cidr` range. NodeIpv4CidrSize int64 `json:"nodeIpv4CidrSize,omitempty"` - // NodePools: The node pools associated with this cluster. When creating - // a new cluster, only a single node pool should be specified. This - // field should not be set if "node_config" or "initial_node_count" are + // NodePools: The node pools associated with this cluster. This field + // should not be set if "node_config" or "initial_node_count" are // specified. NodePools []*NodePool `json:"nodePools,omitempty"` @@ -355,6 +415,18 @@ type ClusterUpdate struct { // to run in the cluster. DesiredAddonsConfig *AddonsConfig `json:"desiredAddonsConfig,omitempty"` + // DesiredImageType: The desired image type for the node pool. NOTE: Set + // the "desired_node_pool" field as well. 
+ DesiredImageType string `json:"desiredImageType,omitempty"`
+
+ // DesiredLocations: The desired list of Google Compute Engine
+ // [locations](/compute/docs/zones#available) in which the cluster's
+ // nodes should be located. Changing the locations a cluster is in will
+ // result in nodes being either created or removed from the cluster,
+ // depending on whether locations are being added or removed. This list
+ // must always include the cluster's primary zone.
+ DesiredLocations []string `json:"desiredLocations,omitempty"`
+
// DesiredMasterVersion: The Kubernetes version to change the master to.
// The only valid value is the latest supported version. Use "-" to have
// the server automatically select the latest version.
@@ -366,9 +438,16 @@ type ClusterUpdate struct {
// "none" - no metrics will be exported from the cluster
DesiredMonitoringService string `json:"desiredMonitoringService,omitempty"`
+ // DesiredNodePoolAutoscaling: Autoscaler configuration for the node
+ // pool specified in desired_node_pool_id. If there is only one pool in
+ // the cluster and desired_node_pool_id is not provided then the change
+ // applies to that single node pool.
+ DesiredNodePoolAutoscaling *NodePoolAutoscaling `json:"desiredNodePoolAutoscaling,omitempty"`
+
// DesiredNodePoolId: The node pool to be upgraded. This field is
- // mandatory if the "desired_node_version" or "desired_image_family" is
- // specified and there is more than one node pool on the cluster.
+ // mandatory if "desired_node_version", "desired_image_family" or
+ // "desired_node_pool_autoscaling" is specified and there is more than
+ // one node pool on the cluster.
DesiredNodePoolId string `json:"desiredNodePoolId,omitempty"`
// DesiredNodeVersion: The Kubernetes version to change the nodes to
@@ -458,6 +537,18 @@ func (s *CreateNodePoolRequest) MarshalJSON() ([]byte, error) {
return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields)
}
+// Empty: A generic empty message that you can re-use to avoid defining
+// duplicated empty messages in your APIs. A typical example is to use
+// it as the request or the response type of an API method. For
+// instance: service Foo { rpc Bar(google.protobuf.Empty) returns
+// (google.protobuf.Empty); } The JSON representation for `Empty` is an
+// empty JSON object `{}`.
+type Empty struct {
+ // ServerResponse contains the HTTP response code and headers from the
+ // server.
+ googleapi.ServerResponse `json:"-"`
+}
+
// HorizontalPodAutoscaling: Configuration options for the horizontal
// pod autoscaling feature, which increases or decreases the number of
// replica pods a replication controller has based on the resource usage
@@ -689,6 +780,26 @@ type NodeConfig struct {
// disk size is 100GB.
DiskSizeGb int64 `json:"diskSizeGb,omitempty"`
+ // ImageType: The image type to use for this node. Note that for a given
+ // image type, the latest version of it will be used.
+ ImageType string `json:"imageType,omitempty"`
+
+ // Labels: The map of Kubernetes labels (key/value pairs) to be applied
+ // to each node. These will be added in addition to any default label(s)
+ // that Kubernetes may apply to the node. In case of conflict in label
+ // keys, the applied set may differ depending on the Kubernetes version
+ // -- it's best to assume the behavior is undefined and conflicts should
+ // be avoided. For more information, including usage and the valid
+ // values, see: http://kubernetes.io/v1.1/docs/user-guide/labels.html
+ Labels map[string]string `json:"labels,omitempty"`
+
+ // LocalSsdCount: The number of local SSD disks to be attached to the
+ // node. The limit for this value is dependent upon the maximum number
+ // of disks available on a machine per zone. See:
+ // https://cloud.google.com/compute/docs/disks/local-ssd#local_ssd_limits for more
+ // information.
+ LocalSsdCount int64 `json:"localSsdCount,omitempty"`
+
// MachineType: The name of a Google Compute Engine [machine
// type](/compute/docs/machine-types) (e.g. `n1-standard-1`). If
// unspecified, the default machine type is `n1-standard-1`.
@@ -719,6 +830,23 @@ type NodeConfig struct {
// case their required scopes will be added.
OauthScopes []string `json:"oauthScopes,omitempty"`
+ // Preemptible: Whether the nodes are created as preemptible VM
+ // instances. See:
+ // https://cloud.google.com/compute/docs/instances/preemptible for more
+ // information about preemptible VM instances.
+ Preemptible bool `json:"preemptible,omitempty"`
+
+ // ServiceAccount: The Google Cloud Platform Service Account to be used
+ // by the node VMs. If no Service Account is specified, the "default"
+ // service account is used.
+ ServiceAccount string `json:"serviceAccount,omitempty"`
+
+ // Tags: The list of instance tags applied to all nodes. Tags are used
+ // to identify valid sources or targets for network firewalls and are
+ // specified by the client during cluster or node pool creation. Each
+ // tag within the list must comply with RFC1035.
+ Tags []string `json:"tags,omitempty"`
+
// ForceSendFields is a list of field names (e.g. "DiskSizeGb") to
// unconditionally include in API requests. By default, fields with
// empty values are omitted from API requests. However, any non-pointer,
@@ -742,6 +870,38 @@ func (s *NodeConfig) MarshalJSON() ([]byte, error) {
return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields)
}
+// NodeManagement: NodeManagement defines the set of node management
+// services turned on for the node pool.
+type NodeManagement struct {
+ // AutoUpgrade: Whether the nodes will be automatically upgraded.
+ AutoUpgrade bool `json:"autoUpgrade,omitempty"`
+
+ // UpgradeOptions: Specifies the Auto Upgrade knobs for the node pool.
+ UpgradeOptions *AutoUpgradeOptions `json:"upgradeOptions,omitempty"`
+
+ // ForceSendFields is a list of field names (e.g. "AutoUpgrade") to
+ // unconditionally include in API requests. By default, fields with
+ // empty values are omitted from API requests. However, any non-pointer,
+ // non-interface field appearing in ForceSendFields will be sent to the
+ // server regardless of whether the field is empty or not. This may be
+ // used to include empty fields in Patch requests.
+ ForceSendFields []string `json:"-"`
+
+ // NullFields is a list of field names (e.g. "AutoUpgrade") to include
+ // in API requests with the JSON null value. By default, fields with
+ // empty values are omitted from API requests. However, any field with
+ // an empty value appearing in NullFields will be sent to the server as
+ // null. It is an error if a field in this list has a non-empty value.
+ // This may be used to include null fields in Patch requests.
+ NullFields []string `json:"-"`
+}
+
+func (s *NodeManagement) MarshalJSON() ([]byte, error) {
+ type noMethod NodeManagement
+ raw := noMethod(*s)
+ return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields)
+}
+
// NodePool: NodePool contains the name and configuration for a
// cluster's node pool. Node pools are a set of nodes (i.e. VM's), with
// a common configuration and specification, under the control of the
@@ -749,6 +909,10 @@ func (s *NodeConfig) MarshalJSON() ([]byte, error) {
// them, which may be used to reference them during pod scheduling. They
// may also be resized up or down, to accommodate the workload.
type NodePool struct {
+ // Autoscaling: Autoscaler configuration for this NodePool. Autoscaler
+ // is enabled only if a valid configuration is present.
+ Autoscaling *NodePoolAutoscaling `json:"autoscaling,omitempty"`
+
// Config: The node configuration of the pool.
Config *NodeConfig `json:"config,omitempty"`
@@ -763,13 +927,16 @@ type NodePool struct {
// pool.
InstanceGroupUrls []string `json:"instanceGroupUrls,omitempty"`
+ // Management: NodeManagement configuration for this NodePool.
+ Management *NodeManagement `json:"management,omitempty"`
+
// Name: The name of the node pool.
Name string `json:"name,omitempty"`
- // SelfLink: Server-defined URL for the resource.
+ // SelfLink: [Output only] Server-defined URL for the resource.
SelfLink string `json:"selfLink,omitempty"`
- // Status: The status of the nodes in this pool instance.
+ // Status: [Output only] The status of the nodes in this pool instance.
//
// Possible values:
// "STATUS_UNSPECIFIED"
@@ -785,14 +952,14 @@ type NodePool struct {
// status of this node pool instance, if available.
StatusMessage string `json:"statusMessage,omitempty"`
- // Version: The version of the Kubernetes of this node.
+ // Version: [Output only] The Kubernetes version of this node.
Version string `json:"version,omitempty"`
// ServerResponse contains the HTTP response code and headers from the
// server.
googleapi.ServerResponse `json:"-"`
- // ForceSendFields is a list of field names (e.g. "Config") to
+ // ForceSendFields is a list of field names (e.g. "Autoscaling") to
// unconditionally include in API requests. By default, fields with
// empty values are omitted from API requests. However, any non-pointer,
// non-interface field appearing in ForceSendFields will be sent to the
@@ -800,10 +967,10 @@ type NodePool struct {
// used to include empty fields in Patch requests.
ForceSendFields []string `json:"-"`
- // NullFields is a list of field names (e.g. "Config") to include in API
- // requests with the JSON null value. By default, fields with empty
- // values are omitted from API requests. However, any field with an
- // empty value appearing in NullFields will be sent to the server as
+ // NullFields is a list of field names (e.g. "Autoscaling") to include
+ // in API requests with the JSON null value. By default, fields with
+ // empty values are omitted from API requests. However, any field with
+ // an empty value appearing in NullFields will be sent to the server as
// null. It is an error if a field in this list has a non-empty value.
// This may be used to include null fields in Patch requests.
NullFields []string `json:"-"`
@@ -815,6 +982,44 @@ func (s *NodePool) MarshalJSON() ([]byte, error) {
return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields)
}
+// NodePoolAutoscaling: NodePoolAutoscaling contains information
+// required by cluster autoscaler to adjust the size of the node pool to
+// the current cluster usage.
+type NodePoolAutoscaling struct {
+ // Enabled: Is autoscaling enabled for this node pool.
+ Enabled bool `json:"enabled,omitempty"`
+
+ // MaxNodeCount: Maximum number of nodes in the NodePool. Must be >=
+ // min_node_count. There has to be enough quota to scale up the cluster.
+ MaxNodeCount int64 `json:"maxNodeCount,omitempty"`
+
+ // MinNodeCount: Minimum number of nodes in the NodePool. Must be >= 1
+ // and <= max_node_count.
+ MinNodeCount int64 `json:"minNodeCount,omitempty"`
+
+ // ForceSendFields is a list of field names (e.g. "Enabled") to
+ // unconditionally include in API requests. By default, fields with
+ // empty values are omitted from API requests. However, any non-pointer,
+ // non-interface field appearing in ForceSendFields will be sent to the
+ // server regardless of whether the field is empty or not. This may be
+ // used to include empty fields in Patch requests.
+ ForceSendFields []string `json:"-"`
+
+ // NullFields is a list of field names (e.g. "Enabled") to include in
+ // API requests with the JSON null value. By default, fields with empty
+ // values are omitted from API requests. However, any field with an
+ // empty value appearing in NullFields will be sent to the server as
+ // null. It is an error if a field in this list has a non-empty value.
+ // This may be used to include null fields in Patch requests.
+ NullFields []string `json:"-"`
+}
+
+func (s *NodePoolAutoscaling) MarshalJSON() ([]byte, error) {
+ type noMethod NodePoolAutoscaling
+ raw := noMethod(*s)
+ return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields)
+}
+
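A minimal sketch, not part of the vendored diff, of constructing the new NodePoolAutoscaling type. The node count bounds are placeholder values; note the interaction with ForceSendFields documented above, since `Enabled` carries `omitempty` and an explicit false would otherwise be dropped from the request body:

package main

import (
	"fmt"

	"google.golang.org/api/container/v1"
)

func main() {
	// Scale a pool between 1 and 5 nodes (placeholder bounds);
	// MaxNodeCount must be >= MinNodeCount and within quota.
	on := &container.NodePoolAutoscaling{
		Enabled:      true,
		MinNodeCount: 1,
		MaxNodeCount: 5,
	}

	// Disabling autoscaling: Enabled is the zero value, so it must be
	// listed in ForceSendFields to be serialized at all.
	off := &container.NodePoolAutoscaling{
		Enabled:         false,
		ForceSendFields: []string{"Enabled"},
	}

	onJSON, _ := on.MarshalJSON()
	offJSON, _ := off.MarshalJSON()
	fmt.Printf("%s\n%s\n", onJSON, offJSON)
}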
// Operation: This operation resource represents operations that may
// have happened or are happening on the cluster. All fields are output
// only.
@@ -837,6 +1042,7 @@ type Operation struct {
// "UPDATE_CLUSTER"
// "CREATE_NODE_POOL"
// "DELETE_NODE_POOL"
+ // "SET_NODE_POOL_MANAGEMENT"
OperationType string `json:"operationType,omitempty"`
// SelfLink: Server-defined URL for the resource.
@@ -849,6 +1055,7 @@ type Operation struct {
// "PENDING"
// "RUNNING"
// "DONE"
+ // "ABORTING"
Status string `json:"status,omitempty"`
// StatusMessage: If an error has occurred, a textual description of the
@@ -890,17 +1097,26 @@ func (s *Operation) MarshalJSON() ([]byte, error) {
return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields)
}
+// RollbackNodePoolUpgradeRequest: RollbackNodePoolUpgradeRequest
+// rolls back the previously Aborted or Failed NodePool upgrade. This
+// will be a no-op if the last upgrade successfully completed.
+type RollbackNodePoolUpgradeRequest struct {
+}
+
// ServerConfig: Container Engine service configuration.
type ServerConfig struct {
// DefaultClusterVersion: Version of Kubernetes the service deploys by
// default.
DefaultClusterVersion string `json:"defaultClusterVersion,omitempty"`
- // DefaultImageFamily: Default image family.
- DefaultImageFamily string `json:"defaultImageFamily,omitempty"`
+ // DefaultImageType: Default image type.
+ DefaultImageType string `json:"defaultImageType,omitempty"`
- // ValidImageFamilies: List of valid image families.
- ValidImageFamilies []string `json:"validImageFamilies,omitempty"`
+ // ValidImageTypes: List of valid image types.
+ ValidImageTypes []string `json:"validImageTypes,omitempty"`
+
+ // ValidMasterVersions: List of valid master versions.
+ ValidMasterVersions []string `json:"validMasterVersions,omitempty"`
// ValidNodeVersions: List of valid node upgrade target versions.
ValidNodeVersions []string `json:"validNodeVersions,omitempty"`
@@ -934,6 +1150,35 @@ func (s *ServerConfig) MarshalJSON() ([]byte, error) {
return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields)
}
+// SetNodePoolManagementRequest: SetNodePoolManagementRequest sets the
+// node management properties of a node pool.
+type SetNodePoolManagementRequest struct {
+ // Management: NodeManagement configuration for the node pool.
+ Management *NodeManagement `json:"management,omitempty"`
+
+ // ForceSendFields is a list of field names (e.g. "Management") to
+ // unconditionally include in API requests. By default, fields with
+ // empty values are omitted from API requests. However, any non-pointer,
+ // non-interface field appearing in ForceSendFields will be sent to the
+ // server regardless of whether the field is empty or not. This may be
+ // used to include empty fields in Patch requests.
+ ForceSendFields []string `json:"-"`
+
+ // NullFields is a list of field names (e.g. "Management") to include in
+ // API requests with the JSON null value. By default, fields with empty
+ // values are omitted from API requests. However, any field with an
+ // empty value appearing in NullFields will be sent to the server as
+ // null. It is an error if a field in this list has a non-empty value.
+ // This may be used to include null fields in Patch requests.
+ NullFields []string `json:"-"`
+}
+
+func (s *SetNodePoolManagementRequest) MarshalJSON() ([]byte, error) {
+ type noMethod SetNodePoolManagementRequest
+ raw := noMethod(*s)
+ return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields)
+}
+
// UpdateClusterRequest: UpdateClusterRequest updates the settings of a
// cluster.
type UpdateClusterRequest struct { @@ -1025,6 +1270,7 @@ func (c *ProjectsZonesGetServerconfigCall) doRequest(alt string) (*http.Response reqHeaders[k] = v } reqHeaders.Set("User-Agent", c.s.userAgent()) + reqHeaders.Set("x-goog-api-client", c.s.clientHeader()) if c.ifNoneMatch_ != "" { reqHeaders.Set("If-None-Match", c.ifNoneMatch_) } @@ -1171,6 +1417,7 @@ func (c *ProjectsZonesClustersCreateCall) doRequest(alt string) (*http.Response, reqHeaders[k] = v } reqHeaders.Set("User-Agent", c.s.userAgent()) + reqHeaders.Set("x-goog-api-client", c.s.clientHeader()) var body io.Reader = nil body, err := googleapi.WithoutDataWrapper.JSONReader(c.createclusterrequest) if err != nil { @@ -1319,6 +1566,7 @@ func (c *ProjectsZonesClustersDeleteCall) doRequest(alt string) (*http.Response, reqHeaders[k] = v } reqHeaders.Set("User-Agent", c.s.userAgent()) + reqHeaders.Set("x-goog-api-client", c.s.clientHeader()) var body io.Reader = nil c.urlParams_.Set("alt", alt) urls := googleapi.ResolveRelative(c.s.BasePath, "v1/projects/{projectId}/zones/{zone}/clusters/{clusterId}") @@ -1473,6 +1721,7 @@ func (c *ProjectsZonesClustersGetCall) doRequest(alt string) (*http.Response, er reqHeaders[k] = v } reqHeaders.Set("User-Agent", c.s.userAgent()) + reqHeaders.Set("x-goog-api-client", c.s.clientHeader()) if c.ifNoneMatch_ != "" { reqHeaders.Set("If-None-Match", c.ifNoneMatch_) } @@ -1629,6 +1878,7 @@ func (c *ProjectsZonesClustersListCall) doRequest(alt string) (*http.Response, e reqHeaders[k] = v } reqHeaders.Set("User-Agent", c.s.userAgent()) + reqHeaders.Set("x-goog-api-client", c.s.clientHeader()) if c.ifNoneMatch_ != "" { reqHeaders.Set("If-None-Match", c.ifNoneMatch_) } @@ -1769,6 +2019,7 @@ func (c *ProjectsZonesClustersUpdateCall) doRequest(alt string) (*http.Response, reqHeaders[k] = v } reqHeaders.Set("User-Agent", c.s.userAgent()) + reqHeaders.Set("x-goog-api-client", c.s.clientHeader()) var body io.Reader = nil body, err := googleapi.WithoutDataWrapper.JSONReader(c.updateclusterrequest) if err != nil { @@ -1922,6 +2173,7 @@ func (c *ProjectsZonesClustersNodePoolsCreateCall) doRequest(alt string) (*http. reqHeaders[k] = v } reqHeaders.Set("User-Agent", c.s.userAgent()) + reqHeaders.Set("x-goog-api-client", c.s.clientHeader()) var body io.Reader = nil body, err := googleapi.WithoutDataWrapper.JSONReader(c.createnodepoolrequest) if err != nil { @@ -2075,6 +2327,7 @@ func (c *ProjectsZonesClustersNodePoolsDeleteCall) doRequest(alt string) (*http. 
reqHeaders[k] = v
}
reqHeaders.Set("User-Agent", c.s.userAgent())
+ reqHeaders.Set("x-goog-api-client", c.s.clientHeader())
var body io.Reader = nil
c.urlParams_.Set("alt", alt)
urls := googleapi.ResolveRelative(c.s.BasePath, "v1/projects/{projectId}/zones/{zone}/clusters/{clusterId}/nodePools/{nodePoolId}")
@@ -2239,6 +2492,7 @@ func (c *ProjectsZonesClustersNodePoolsGetCall) doRequest(alt string) (*http.Res
reqHeaders[k] = v
}
reqHeaders.Set("User-Agent", c.s.userAgent())
+ reqHeaders.Set("x-goog-api-client", c.s.clientHeader())
if c.ifNoneMatch_ != "" {
reqHeaders.Set("If-None-Match", c.ifNoneMatch_)
}
@@ -2404,6 +2658,7 @@ func (c *ProjectsZonesClustersNodePoolsListCall) doRequest(alt string) (*http.Re
reqHeaders[k] = v
}
reqHeaders.Set("User-Agent", c.s.userAgent())
+ reqHeaders.Set("x-goog-api-client", c.s.clientHeader())
if c.ifNoneMatch_ != "" {
reqHeaders.Set("If-None-Match", c.ifNoneMatch_)
}
@@ -2498,6 +2753,490 @@ func (c *ProjectsZonesClustersNodePoolsListCall) Do(opts ...googleapi.CallOption
}
+// method id "container.projects.zones.clusters.nodePools.rollback":
+
+type ProjectsZonesClustersNodePoolsRollbackCall struct {
+ s *Service
+ projectId string
+ zone string
+ clusterId string
+ nodePoolId string
+ rollbacknodepoolupgraderequest *RollbackNodePoolUpgradeRequest
+ urlParams_ gensupport.URLParams
+ ctx_ context.Context
+ header_ http.Header
+}
+
+// Rollback: Roll back the previously Aborted or Failed NodePool
+// upgrade. This will be a no-op if the last upgrade successfully
+// completed.
+func (r *ProjectsZonesClustersNodePoolsService) Rollback(projectId string, zone string, clusterId string, nodePoolId string, rollbacknodepoolupgraderequest *RollbackNodePoolUpgradeRequest) *ProjectsZonesClustersNodePoolsRollbackCall {
+ c := &ProjectsZonesClustersNodePoolsRollbackCall{s: r.s, urlParams_: make(gensupport.URLParams)}
+ c.projectId = projectId
+ c.zone = zone
+ c.clusterId = clusterId
+ c.nodePoolId = nodePoolId
+ c.rollbacknodepoolupgraderequest = rollbacknodepoolupgraderequest
+ return c
+}
+
+// Fields allows partial responses to be retrieved. See
+// https://developers.google.com/gdata/docs/2.0/basics#PartialResponse
+// for more information.
+func (c *ProjectsZonesClustersNodePoolsRollbackCall) Fields(s ...googleapi.Field) *ProjectsZonesClustersNodePoolsRollbackCall {
+ c.urlParams_.Set("fields", googleapi.CombineFields(s))
+ return c
+}
+
+// Context sets the context to be used in this call's Do method. Any
+// pending HTTP request will be aborted if the provided context is
+// canceled.
+func (c *ProjectsZonesClustersNodePoolsRollbackCall) Context(ctx context.Context) *ProjectsZonesClustersNodePoolsRollbackCall {
+ c.ctx_ = ctx
+ return c
+}
+
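A minimal usage sketch, not part of the vendored diff, of invoking the new rollback call. It assumes default application credentials via the usual golang.org/x/oauth2/google helper; the project, zone, cluster, and pool names are placeholders:

package main

import (
	"log"

	"golang.org/x/net/context"
	"golang.org/x/oauth2/google"
	"google.golang.org/api/container/v1"
)

func main() {
	ctx := context.Background()
	hc, err := google.DefaultClient(ctx, container.CloudPlatformScope)
	if err != nil {
		log.Fatal(err)
	}
	svc, err := container.New(hc)
	if err != nil {
		log.Fatal(err)
	}
	// Roll back an aborted or failed node pool upgrade; the server
	// treats this as a no-op if the last upgrade completed.
	op, err := svc.Projects.Zones.Clusters.NodePools.Rollback(
		"my-project", "us-central1-a", "my-cluster", "default-pool",
		&container.RollbackNodePoolUpgradeRequest{}).Context(ctx).Do()
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("%s operation is %s", op.OperationType, op.Status)
}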
+// Header returns an http.Header that can be modified by the caller to
+// add HTTP headers to the request.
+func (c *ProjectsZonesClustersNodePoolsRollbackCall) Header() http.Header {
+ if c.header_ == nil {
+ c.header_ = make(http.Header)
+ }
+ return c.header_
+}
+
+func (c *ProjectsZonesClustersNodePoolsRollbackCall) doRequest(alt string) (*http.Response, error) {
+ reqHeaders := make(http.Header)
+ for k, v := range c.header_ {
+ reqHeaders[k] = v
+ }
+ reqHeaders.Set("User-Agent", c.s.userAgent())
+ reqHeaders.Set("x-goog-api-client", c.s.clientHeader())
+ var body io.Reader = nil
+ body, err := googleapi.WithoutDataWrapper.JSONReader(c.rollbacknodepoolupgraderequest)
+ if err != nil {
+ return nil, err
+ }
+ reqHeaders.Set("Content-Type", "application/json")
+ c.urlParams_.Set("alt", alt)
+ urls := googleapi.ResolveRelative(c.s.BasePath, "v1/projects/{projectId}/zones/{zone}/clusters/{clusterId}/nodePools/{nodePoolId}:rollback")
+ urls += "?" + c.urlParams_.Encode()
+ req, _ := http.NewRequest("POST", urls, body)
+ req.Header = reqHeaders
+ googleapi.Expand(req.URL, map[string]string{
+ "projectId": c.projectId,
+ "zone": c.zone,
+ "clusterId": c.clusterId,
+ "nodePoolId": c.nodePoolId,
+ })
+ return gensupport.SendRequest(c.ctx_, c.s.client, req)
+}
+
+// Do executes the "container.projects.zones.clusters.nodePools.rollback" call.
+// Exactly one of *Operation or error will be non-nil. Any non-2xx
+// status code is an error. Response headers are in either
+// *Operation.ServerResponse.Header or (if a response was returned at
+// all) in error.(*googleapi.Error).Header. Use googleapi.IsNotModified
+// to check whether the returned error was because
+// http.StatusNotModified was returned.
+func (c *ProjectsZonesClustersNodePoolsRollbackCall) Do(opts ...googleapi.CallOption) (*Operation, error) {
+ gensupport.SetOptions(c.urlParams_, opts...)
+ res, err := c.doRequest("json")
+ if res != nil && res.StatusCode == http.StatusNotModified {
+ if res.Body != nil {
+ res.Body.Close()
+ }
+ return nil, &googleapi.Error{
+ Code: res.StatusCode,
+ Header: res.Header,
+ }
+ }
+ if err != nil {
+ return nil, err
+ }
+ defer googleapi.CloseBody(res)
+ if err := googleapi.CheckResponse(res); err != nil {
+ return nil, err
+ }
+ ret := &Operation{
+ ServerResponse: googleapi.ServerResponse{
+ Header: res.Header,
+ HTTPStatusCode: res.StatusCode,
+ },
+ }
+ target := &ret
+ if err := json.NewDecoder(res.Body).Decode(target); err != nil {
+ return nil, err
+ }
+ return ret, nil
+ // {
+ // "description": "Roll back the previously Aborted or Failed NodePool upgrade. This will be a no-op if the last upgrade successfully completed.",
+ // "httpMethod": "POST",
+ // "id": "container.projects.zones.clusters.nodePools.rollback",
+ // "parameterOrder": [
+ // "projectId",
+ // "zone",
+ // "clusterId",
+ // "nodePoolId"
+ // ],
+ // "parameters": {
+ // "clusterId": {
+ // "description": "The name of the cluster to rollback.",
+ // "location": "path",
+ // "required": true,
+ // "type": "string"
+ // },
+ // "nodePoolId": {
+ // "description": "The name of the node pool to rollback.",
+ // "location": "path",
+ // "required": true,
+ // "type": "string"
+ // },
+ // "projectId": {
+ // "description": "The Google Developers Console [project ID or project number](https://support.google.com/cloud/answer/6158840).",
+ // "location": "path",
+ // "required": true,
+ // "type": "string"
+ // },
+ // "zone": {
+ // "description": "The name of the Google Compute Engine [zone](/compute/docs/zones#available) in which the cluster resides.",
+ // "location": "path",
+ // "required": true,
+ // "type": "string"
+ // }
+ // },
+ // "path": "v1/projects/{projectId}/zones/{zone}/clusters/{clusterId}/nodePools/{nodePoolId}:rollback",
+ // "request": {
+ // "$ref": "RollbackNodePoolUpgradeRequest"
+ // },
+ // "response": {
+ // "$ref": "Operation"
+ // },
+ // "scopes": [
+ // "https://www.googleapis.com/auth/cloud-platform"
+ // ]
+ // }
+
+}
+
+// method id "container.projects.zones.clusters.nodePools.setManagement":
+
+type ProjectsZonesClustersNodePoolsSetManagementCall struct {
+ s *Service
+ projectId string
+ zone string
+ clusterId string
+ nodePoolId string
+ setnodepoolmanagementrequest *SetNodePoolManagementRequest
+ urlParams_ gensupport.URLParams
+ ctx_ context.Context
+ header_ http.Header
+}
+
+// SetManagement: Sets the NodeManagement options for a node pool.
+func (r *ProjectsZonesClustersNodePoolsService) SetManagement(projectId string, zone string, clusterId string, nodePoolId string, setnodepoolmanagementrequest *SetNodePoolManagementRequest) *ProjectsZonesClustersNodePoolsSetManagementCall {
+ c := &ProjectsZonesClustersNodePoolsSetManagementCall{s: r.s, urlParams_: make(gensupport.URLParams)}
+ c.projectId = projectId
+ c.zone = zone
+ c.clusterId = clusterId
+ c.nodePoolId = nodePoolId
+ c.setnodepoolmanagementrequest = setnodepoolmanagementrequest
+ return c
+}
+
+// Fields allows partial responses to be retrieved. See
+// https://developers.google.com/gdata/docs/2.0/basics#PartialResponse
+// for more information.
+func (c *ProjectsZonesClustersNodePoolsSetManagementCall) Fields(s ...googleapi.Field) *ProjectsZonesClustersNodePoolsSetManagementCall {
+ c.urlParams_.Set("fields", googleapi.CombineFields(s))
+ return c
+}
+
+// Context sets the context to be used in this call's Do method. Any
+// pending HTTP request will be aborted if the provided context is
+// canceled.
+func (c *ProjectsZonesClustersNodePoolsSetManagementCall) Context(ctx context.Context) *ProjectsZonesClustersNodePoolsSetManagementCall {
+ c.ctx_ = ctx
+ return c
+}
+
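A minimal sketch, not part of the vendored diff, of enabling automatic node upgrades through the new setManagement call. As above, credentials come from the standard golang.org/x/oauth2/google helper and the identifiers are placeholders:

package main

import (
	"log"

	"golang.org/x/net/context"
	"golang.org/x/oauth2/google"
	"google.golang.org/api/container/v1"
)

func main() {
	ctx := context.Background()
	hc, err := google.DefaultClient(ctx, container.CloudPlatformScope)
	if err != nil {
		log.Fatal(err)
	}
	svc, err := container.New(hc)
	if err != nil {
		log.Fatal(err)
	}
	// Turn on automatic node upgrades for a single node pool.
	req := &container.SetNodePoolManagementRequest{
		Management: &container.NodeManagement{AutoUpgrade: true},
	}
	op, err := svc.Projects.Zones.Clusters.NodePools.SetManagement(
		"my-project", "us-central1-a", "my-cluster", "default-pool", req).
		Context(ctx).Do()
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("%s operation is %s", op.OperationType, op.Status)
}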
+// Header returns an http.Header that can be modified by the caller to
+// add HTTP headers to the request.
+func (c *ProjectsZonesClustersNodePoolsSetManagementCall) Header() http.Header {
+ if c.header_ == nil {
+ c.header_ = make(http.Header)
+ }
+ return c.header_
+}
+
+func (c *ProjectsZonesClustersNodePoolsSetManagementCall) doRequest(alt string) (*http.Response, error) {
+ reqHeaders := make(http.Header)
+ for k, v := range c.header_ {
+ reqHeaders[k] = v
+ }
+ reqHeaders.Set("User-Agent", c.s.userAgent())
+ reqHeaders.Set("x-goog-api-client", c.s.clientHeader())
+ var body io.Reader = nil
+ body, err := googleapi.WithoutDataWrapper.JSONReader(c.setnodepoolmanagementrequest)
+ if err != nil {
+ return nil, err
+ }
+ reqHeaders.Set("Content-Type", "application/json")
+ c.urlParams_.Set("alt", alt)
+ urls := googleapi.ResolveRelative(c.s.BasePath, "v1/projects/{projectId}/zones/{zone}/clusters/{clusterId}/nodePools/{nodePoolId}/setManagement")
+ urls += "?" + c.urlParams_.Encode()
+ req, _ := http.NewRequest("POST", urls, body)
+ req.Header = reqHeaders
+ googleapi.Expand(req.URL, map[string]string{
+ "projectId": c.projectId,
+ "zone": c.zone,
+ "clusterId": c.clusterId,
+ "nodePoolId": c.nodePoolId,
+ })
+ return gensupport.SendRequest(c.ctx_, c.s.client, req)
+}
+
+// Do executes the "container.projects.zones.clusters.nodePools.setManagement" call.
+// Exactly one of *Operation or error will be non-nil. Any non-2xx
+// status code is an error. Response headers are in either
+// *Operation.ServerResponse.Header or (if a response was returned at
+// all) in error.(*googleapi.Error).Header. Use googleapi.IsNotModified
+// to check whether the returned error was because
+// http.StatusNotModified was returned.
+func (c *ProjectsZonesClustersNodePoolsSetManagementCall) Do(opts ...googleapi.CallOption) (*Operation, error) {
+ gensupport.SetOptions(c.urlParams_, opts...)
+ res, err := c.doRequest("json") + if res != nil && res.StatusCode == http.StatusNotModified { + if res.Body != nil { + res.Body.Close() + } + return nil, &googleapi.Error{ + Code: res.StatusCode, + Header: res.Header, + } + } + if err != nil { + return nil, err + } + defer googleapi.CloseBody(res) + if err := googleapi.CheckResponse(res); err != nil { + return nil, err + } + ret := &Operation{ + ServerResponse: googleapi.ServerResponse{ + Header: res.Header, + HTTPStatusCode: res.StatusCode, + }, + } + target := &ret + if err := json.NewDecoder(res.Body).Decode(target); err != nil { + return nil, err + } + return ret, nil + // { + // "description": "Sets the NodeManagement options for a node pool.", + // "httpMethod": "POST", + // "id": "container.projects.zones.clusters.nodePools.setManagement", + // "parameterOrder": [ + // "projectId", + // "zone", + // "clusterId", + // "nodePoolId" + // ], + // "parameters": { + // "clusterId": { + // "description": "The name of the cluster to update.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "nodePoolId": { + // "description": "The name of the node pool to update.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "projectId": { + // "description": "The Google Developers Console [project ID or project number](https://support.google.com/cloud/answer/6158840).", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "zone": { + // "description": "The name of the Google Compute Engine [zone](/compute/docs/zones#available) in which the cluster resides.", + // "location": "path", + // "required": true, + // "type": "string" + // } + // }, + // "path": "v1/projects/{projectId}/zones/{zone}/clusters/{clusterId}/nodePools/{nodePoolId}/setManagement", + // "request": { + // "$ref": "SetNodePoolManagementRequest" + // }, + // "response": { + // "$ref": "Operation" + // }, + // "scopes": [ + // "https://www.googleapis.com/auth/cloud-platform" + // ] + // } + +} + +// method id "container.projects.zones.operations.cancel": + +type ProjectsZonesOperationsCancelCall struct { + s *Service + projectId string + zone string + operationId string + canceloperationrequest *CancelOperationRequest + urlParams_ gensupport.URLParams + ctx_ context.Context + header_ http.Header +} + +// Cancel: Cancels the specified operation. +func (r *ProjectsZonesOperationsService) Cancel(projectId string, zone string, operationId string, canceloperationrequest *CancelOperationRequest) *ProjectsZonesOperationsCancelCall { + c := &ProjectsZonesOperationsCancelCall{s: r.s, urlParams_: make(gensupport.URLParams)} + c.projectId = projectId + c.zone = zone + c.operationId = operationId + c.canceloperationrequest = canceloperationrequest + return c +} + +// Fields allows partial responses to be retrieved. See +// https://developers.google.com/gdata/docs/2.0/basics#PartialResponse +// for more information. +func (c *ProjectsZonesOperationsCancelCall) Fields(s ...googleapi.Field) *ProjectsZonesOperationsCancelCall { + c.urlParams_.Set("fields", googleapi.CombineFields(s)) + return c +} + +// Context sets the context to be used in this call's Do method. Any +// pending HTTP request will be aborted if the provided context is +// canceled. +func (c *ProjectsZonesOperationsCancelCall) Context(ctx context.Context) *ProjectsZonesOperationsCancelCall { + c.ctx_ = ctx + return c +} + +// Header returns an http.Header that can be modified by the caller to +// add HTTP headers to the request. 
+func (c *ProjectsZonesOperationsCancelCall) Header() http.Header { + if c.header_ == nil { + c.header_ = make(http.Header) + } + return c.header_ +} + +func (c *ProjectsZonesOperationsCancelCall) doRequest(alt string) (*http.Response, error) { + reqHeaders := make(http.Header) + for k, v := range c.header_ { + reqHeaders[k] = v + } + reqHeaders.Set("User-Agent", c.s.userAgent()) + reqHeaders.Set("x-goog-api-client", c.s.clientHeader()) + var body io.Reader = nil + body, err := googleapi.WithoutDataWrapper.JSONReader(c.canceloperationrequest) + if err != nil { + return nil, err + } + reqHeaders.Set("Content-Type", "application/json") + c.urlParams_.Set("alt", alt) + urls := googleapi.ResolveRelative(c.s.BasePath, "v1/projects/{projectId}/zones/{zone}/operations/{operationId}:cancel") + urls += "?" + c.urlParams_.Encode() + req, _ := http.NewRequest("POST", urls, body) + req.Header = reqHeaders + googleapi.Expand(req.URL, map[string]string{ + "projectId": c.projectId, + "zone": c.zone, + "operationId": c.operationId, + }) + return gensupport.SendRequest(c.ctx_, c.s.client, req) +} + +// Do executes the "container.projects.zones.operations.cancel" call. +// Exactly one of *Empty or error will be non-nil. Any non-2xx status +// code is an error. Response headers are in either +// *Empty.ServerResponse.Header or (if a response was returned at all) +// in error.(*googleapi.Error).Header. Use googleapi.IsNotModified to +// check whether the returned error was because http.StatusNotModified +// was returned. +func (c *ProjectsZonesOperationsCancelCall) Do(opts ...googleapi.CallOption) (*Empty, error) { + gensupport.SetOptions(c.urlParams_, opts...) + res, err := c.doRequest("json") + if res != nil && res.StatusCode == http.StatusNotModified { + if res.Body != nil { + res.Body.Close() + } + return nil, &googleapi.Error{ + Code: res.StatusCode, + Header: res.Header, + } + } + if err != nil { + return nil, err + } + defer googleapi.CloseBody(res) + if err := googleapi.CheckResponse(res); err != nil { + return nil, err + } + ret := &Empty{ + ServerResponse: googleapi.ServerResponse{ + Header: res.Header, + HTTPStatusCode: res.StatusCode, + }, + } + target := &ret + if err := json.NewDecoder(res.Body).Decode(target); err != nil { + return nil, err + } + return ret, nil + // { + // "description": "Cancels the specified operation.", + // "httpMethod": "POST", + // "id": "container.projects.zones.operations.cancel", + // "parameterOrder": [ + // "projectId", + // "zone", + // "operationId" + // ], + // "parameters": { + // "operationId": { + // "description": "The server-assigned `name` of the operation.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "projectId": { + // "description": "The Google Developers Console [project ID or project number](https://support.google.com/cloud/answer/6158840).", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "zone": { + // "description": "The name of the Google Compute Engine [zone](/compute/docs/zones#available) in which the operation resides.", + // "location": "path", + // "required": true, + // "type": "string" + // } + // }, + // "path": "v1/projects/{projectId}/zones/{zone}/operations/{operationId}:cancel", + // "request": { + // "$ref": "CancelOperationRequest" + // }, + // "response": { + // "$ref": "Empty" + // }, + // "scopes": [ + // "https://www.googleapis.com/auth/cloud-platform" + // ] + // } + +} + // method id "container.projects.zones.operations.get": type 
ProjectsZonesOperationsGetCall struct { @@ -2561,6 +3300,7 @@ func (c *ProjectsZonesOperationsGetCall) doRequest(alt string) (*http.Response, reqHeaders[k] = v } reqHeaders.Set("User-Agent", c.s.userAgent()) + reqHeaders.Set("x-goog-api-client", c.s.clientHeader()) if c.ifNoneMatch_ != "" { reqHeaders.Set("If-None-Match", c.ifNoneMatch_) } @@ -2717,6 +3457,7 @@ func (c *ProjectsZonesOperationsListCall) doRequest(alt string) (*http.Response, reqHeaders[k] = v } reqHeaders.Set("User-Agent", c.s.userAgent()) + reqHeaders.Set("x-goog-api-client", c.s.clientHeader()) if c.ifNoneMatch_ != "" { reqHeaders.Set("If-None-Match", c.ifNoneMatch_) } diff --git a/vendor/google.golang.org/api/gensupport/header.go b/vendor/google.golang.org/api/gensupport/header.go new file mode 100644 index 0000000000..cb5e67c77a --- /dev/null +++ b/vendor/google.golang.org/api/gensupport/header.go @@ -0,0 +1,22 @@ +// Copyright 2017 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package gensupport + +import ( + "fmt" + "runtime" + "strings" +) + +// GoogleClientHeader returns the value to use for the x-goog-api-client +// header, which is used internally by Google. +func GoogleClientHeader(generatorVersion, clientElement string) string { + elts := []string{"gl-go/" + strings.Replace(runtime.Version(), " ", "_", -1)} + if clientElement != "" { + elts = append(elts, clientElement) + } + elts = append(elts, fmt.Sprintf("gdcl/%s", generatorVersion)) + return strings.Join(elts, " ") +} diff --git a/vendor/gopkg.in/ns1/ns1-go.v2/rest/account_apikey.go b/vendor/gopkg.in/ns1/ns1-go.v2/rest/account_apikey.go index 7343a11555..dee4aa9fa8 100644 --- a/vendor/gopkg.in/ns1/ns1-go.v2/rest/account_apikey.go +++ b/vendor/gopkg.in/ns1/ns1-go.v2/rest/account_apikey.go @@ -49,9 +49,9 @@ func (s *APIKeysService) Get(keyID string) (*account.APIKey, *http.Response, err if err.(*Error).Message == "unknown api key" { return nil, resp, ErrKeyMissing } - default: - return nil, resp, err + } + return nil, resp, err } return &a, resp, nil @@ -74,9 +74,8 @@ func (s *APIKeysService) Create(a *account.APIKey) (*http.Response, error) { if err.(*Error).Message == fmt.Sprintf("api key with name \"%s\" exists", a.Name) { return resp, ErrKeyExists } - default: - return resp, err } + return resp, err } return resp, nil @@ -101,9 +100,8 @@ func (s *APIKeysService) Update(a *account.APIKey) (*http.Response, error) { if err.(*Error).Message == "unknown api key" { return resp, ErrKeyMissing } - default: - return resp, err } + return resp, err } return resp, nil @@ -127,9 +125,8 @@ func (s *APIKeysService) Delete(keyID string) (*http.Response, error) { if err.(*Error).Message == "unknown api key" { return resp, ErrKeyMissing } - default: - return resp, err } + return resp, err } return resp, nil diff --git a/vendor/gopkg.in/ns1/ns1-go.v2/rest/account_team.go b/vendor/gopkg.in/ns1/ns1-go.v2/rest/account_team.go index b307b412ad..1f4a98b431 100644 --- a/vendor/gopkg.in/ns1/ns1-go.v2/rest/account_team.go +++ b/vendor/gopkg.in/ns1/ns1-go.v2/rest/account_team.go @@ -48,9 +48,8 @@ func (s *TeamsService) Get(id string) (*account.Team, *http.Response, error) { if err.(*Error).Message == "Unknown team id" { return nil, resp, ErrTeamMissing } - default: - return nil, resp, err } + return nil, resp, err } return &t, resp, nil @@ -73,9 +72,8 @@ func (s *TeamsService) Create(t *account.Team) (*http.Response, error) { if err.(*Error).Message == fmt.Sprintf("team with name 
\"%s\" exists", t.Name) { return resp, ErrTeamExists } - default: - return resp, err } + return resp, err } return resp, nil @@ -100,9 +98,8 @@ func (s *TeamsService) Update(t *account.Team) (*http.Response, error) { if err.(*Error).Message == "unknown team id" { return resp, ErrTeamMissing } - default: - return resp, err } + return resp, err } return resp, nil @@ -126,9 +123,8 @@ func (s *TeamsService) Delete(id string) (*http.Response, error) { if err.(*Error).Message == "unknown team id" { return resp, ErrTeamMissing } - default: - return resp, err } + return resp, err } return resp, nil diff --git a/vendor/gopkg.in/ns1/ns1-go.v2/rest/account_user.go b/vendor/gopkg.in/ns1/ns1-go.v2/rest/account_user.go index 2f6699f9aa..0ad35dc25f 100644 --- a/vendor/gopkg.in/ns1/ns1-go.v2/rest/account_user.go +++ b/vendor/gopkg.in/ns1/ns1-go.v2/rest/account_user.go @@ -48,9 +48,8 @@ func (s *UsersService) Get(username string) (*account.User, *http.Response, erro if err.(*Error).Message == "Unknown user" { return nil, resp, ErrUserMissing } - default: - return nil, resp, err } + return nil, resp, err } return &u, resp, nil @@ -73,9 +72,8 @@ func (s *UsersService) Create(u *account.User) (*http.Response, error) { if err.(*Error).Message == "request failed:Login Name is already in use." { return resp, ErrUserExists } - default: - return resp, err } + return resp, err } return resp, nil @@ -100,9 +98,8 @@ func (s *UsersService) Update(u *account.User) (*http.Response, error) { if err.(*Error).Message == "Unknown user" { return resp, ErrUserMissing } - default: - return resp, err } + return resp, err } return resp, nil @@ -126,9 +123,8 @@ func (s *UsersService) Delete(username string) (*http.Response, error) { if err.(*Error).Message == "Unknown user" { return resp, ErrUserMissing } - default: - return resp, err } + return resp, err } return resp, nil diff --git a/vendor/gopkg.in/ns1/ns1-go.v2/rest/monitor_notify.go b/vendor/gopkg.in/ns1/ns1-go.v2/rest/monitor_notify.go index c8fea014bf..e1ddc36bdb 100644 --- a/vendor/gopkg.in/ns1/ns1-go.v2/rest/monitor_notify.go +++ b/vendor/gopkg.in/ns1/ns1-go.v2/rest/monitor_notify.go @@ -48,9 +48,8 @@ func (s *NotificationsService) Get(listID string) (*monitor.NotifyList, *http.Re if err.(*Error).Message == "unknown notification list" { return nil, resp, ErrListMissing } - default: - return nil, resp, err } + return nil, resp, err } return &nl, resp, nil @@ -73,9 +72,8 @@ func (s *NotificationsService) Create(nl *monitor.NotifyList) (*http.Response, e if err.(*Error).Message == fmt.Sprintf("notification list with name \"%s\" exists", nl.Name) { return resp, ErrListExists } - default: - return resp, err } + return resp, err } return resp, nil diff --git a/vendor/gopkg.in/ns1/ns1-go.v2/rest/record.go b/vendor/gopkg.in/ns1/ns1-go.v2/rest/record.go index f24dc43f66..382b5ccf31 100644 --- a/vendor/gopkg.in/ns1/ns1-go.v2/rest/record.go +++ b/vendor/gopkg.in/ns1/ns1-go.v2/rest/record.go @@ -30,9 +30,8 @@ func (s *RecordsService) Get(zone, domain, t string) (*dns.Record, *http.Respons if err.(*Error).Message == "record not found" { return nil, resp, ErrRecordMissing } - default: - return nil, resp, err } + return nil, resp, err } return &r, resp, nil @@ -61,9 +60,8 @@ func (s *RecordsService) Create(r *dns.Record) (*http.Response, error) { case "record already exists": return resp, ErrRecordExists } - default: - return resp, err } + return resp, err } return resp, nil @@ -92,9 +90,8 @@ func (s *RecordsService) Update(r *dns.Record) (*http.Response, error) { case "record already 
exists": return resp, ErrRecordExists } - default: - return resp, err } + return resp, err } return resp, nil @@ -118,9 +115,8 @@ func (s *RecordsService) Delete(zone string, domain string, t string) (*http.Res if err.(*Error).Message == "record not found" { return resp, ErrRecordMissing } - default: - return resp, err } + return resp, err } return resp, nil diff --git a/vendor/gopkg.in/ns1/ns1-go.v2/rest/zone.go b/vendor/gopkg.in/ns1/ns1-go.v2/rest/zone.go index ff21650ac5..87b768fdf0 100644 --- a/vendor/gopkg.in/ns1/ns1-go.v2/rest/zone.go +++ b/vendor/gopkg.in/ns1/ns1-go.v2/rest/zone.go @@ -48,9 +48,8 @@ func (s *ZonesService) Get(zone string) (*dns.Zone, *http.Response, error) { if err.(*Error).Message == "zone not found" { return nil, resp, ErrZoneMissing } - default: - return nil, resp, err } + return nil, resp, err } return &z, resp, nil @@ -75,9 +74,8 @@ func (s *ZonesService) Create(z *dns.Zone) (*http.Response, error) { if err.(*Error).Message == "zone already exists" { return resp, ErrZoneExists } - default: - return resp, err } + return resp, err } return resp, nil @@ -102,9 +100,8 @@ func (s *ZonesService) Update(z *dns.Zone) (*http.Response, error) { if err.(*Error).Message == "zone not found" { return resp, ErrZoneMissing } - default: - return resp, err } + return resp, err } return resp, nil @@ -128,9 +125,8 @@ func (s *ZonesService) Delete(zone string) (*http.Response, error) { if err.(*Error).Message == "zone not found" { return resp, ErrZoneMissing } - default: - return resp, err } + return resp, err } return resp, nil diff --git a/vendor/vendor.json b/vendor/vendor.json index f343fe45c2..9ff83c8187 100644 --- a/vendor/vendor.json +++ b/vendor/vendor.json @@ -393,10 +393,10 @@ "revisionTime": "2016-11-03T18:56:17Z" }, { - "checksumSHA1": "ibs+ylGiQibNx5GeZlCKx7A/zH8=", + "checksumSHA1": "iysTPYhDNP3x2reGLTMgNHw+iL0=", "path": "github.com/PagerDuty/go-pagerduty", - "revision": "fcea06d066d768bf90755d1071dd444c6abff524", - "revisionTime": "2017-03-01T18:59:23Z" + "revision": "cc53abe550274c2dec666e7f44cefc9ee10e429d", + "revisionTime": "2017-03-09T00:45:39Z" }, { "checksumSHA1": "NX4v3cbkXAJxFlrncqT9yEUBuoA=", @@ -475,6 +475,12 @@ "revision": "bbbad097214e2918d8543d5201d12bfd7bca254d", "revisionTime": "2015-08-27T00:49:46Z" }, + { + "checksumSHA1": "YfhpW3cu1CHWX7lUCRparOJ6Vy4=", + "path": "github.com/armon/go-metrics", + "revision": "93f237eba9b0602f3e73710416558854a81d9337", + "revisionTime": "2017-01-14T13:47:37Z" + }, { "checksumSHA1": "gNO0JNpLzYOdInGeq7HqMZUzx9M=", "path": "github.com/armon/go-radix", @@ -488,636 +494,636 @@ "revisionTime": "2017-01-23T00:46:44Z" }, { - "checksumSHA1": "iqZtcuXvBhnOSc9oSK706rUQBGg=", + "checksumSHA1": "vgQ6NEtijFyvN0+Ulc48KPhRLQ8=", "path": "github.com/aws/aws-sdk-go", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { - "checksumSHA1": "FN20dHo+g6B2zQC/ETGW/J+RNxw=", + "checksumSHA1": "TZ18dAT4T7uCQT1XESgmvLuyG9I=", "path": "github.com/aws/aws-sdk-go/aws", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": 
"Y9W+4GimK4Fuxq+vyIskVYFRnX4=", "path": "github.com/aws/aws-sdk-go/aws/awserr", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "yyYr41HZ1Aq0hWc3J5ijXwYEcac=", "path": "github.com/aws/aws-sdk-go/aws/awsutil", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "iThCyNRL/oQFD9CF2SYgBGl+aww=", "path": "github.com/aws/aws-sdk-go/aws/client", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "ieAJ+Cvp/PKv1LpUEnUXpc3OI6E=", "path": "github.com/aws/aws-sdk-go/aws/client/metadata", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "Fl8vRSCY0MbM04cmiz/0MID+goA=", "path": "github.com/aws/aws-sdk-go/aws/corehandlers", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "zu5C95rmCZff6NYZb62lEaT5ibE=", "path": "github.com/aws/aws-sdk-go/aws/credentials", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "u3GOAJLmdvbuNUeUEcZSEAOeL/0=", "path": "github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "NUJUTWlc1sV8b7WjfiYc4JZbXl0=", "path": "github.com/aws/aws-sdk-go/aws/credentials/endpointcreds", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "6cj/zsRmcxkE1TLS+v910GbQYg0=", "path": "github.com/aws/aws-sdk-go/aws/credentials/stscreds", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": 
"2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "lqh3fG7wCochvB4iHAZJuhhEJW0=", "path": "github.com/aws/aws-sdk-go/aws/defaults", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "/EXbk/z2TWjWc1Hvb4QYs3Wmhb8=", "path": "github.com/aws/aws-sdk-go/aws/ec2metadata", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { - "checksumSHA1": "9LFtC2yggJvQfZ6NKVQTkW2WQJ8=", + "checksumSHA1": "Y/H3JXynvwx55rAbQg6g2hCouB8=", "path": "github.com/aws/aws-sdk-go/aws/endpoints", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "M78rTxU55Qagqr3MYj91im2031E=", "path": "github.com/aws/aws-sdk-go/aws/request", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { - "checksumSHA1": "u6tKvFGcRQ1xtby1ONjgyUTgcpg=", + "checksumSHA1": "5pzA5afgeU1alfACFh8z2CDUMao=", "path": "github.com/aws/aws-sdk-go/aws/session", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "0FvPLvkBUpTElfUc/FZtPsJfuV0=", "path": "github.com/aws/aws-sdk-go/aws/signer/v4", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "wk7EyvDaHwb5qqoOP/4d3cV0708=", "path": "github.com/aws/aws-sdk-go/private/protocol", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "1QmQ3FqV37w0Zi44qv8pA1GeR0A=", "path": "github.com/aws/aws-sdk-go/private/protocol/ec2query", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "O6hcK24yI6w7FA+g4Pbr+eQ7pys=", "path": "github.com/aws/aws-sdk-go/private/protocol/json/jsonutil", - "revision": 
"fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "R00RL5jJXRYq1iiK1+PGvMfvXyM=", "path": "github.com/aws/aws-sdk-go/private/protocol/jsonrpc", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "ZqY5RWavBLWTo6j9xqdyBEaNFRk=", "path": "github.com/aws/aws-sdk-go/private/protocol/query", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "Drt1JfLMa0DQEZLWrnMlTWaIcC8=", "path": "github.com/aws/aws-sdk-go/private/protocol/query/queryutil", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "szZSLm3BlYkL3vqlZhNAlYk8iwM=", "path": "github.com/aws/aws-sdk-go/private/protocol/rest", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "Rpu8KBtHZgvhkwHxUfaky+qW+G4=", "path": "github.com/aws/aws-sdk-go/private/protocol/restjson", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "ODo+ko8D6unAxZuN1jGzMcN4QCc=", "path": "github.com/aws/aws-sdk-go/private/protocol/restxml", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "lZ1z4xAbT8euCzKoAsnEYic60VE=", "path": "github.com/aws/aws-sdk-go/private/protocol/xml/xmlutil", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "F6mth+G7dXN1GI+nktaGo8Lx8aE=", "path": "github.com/aws/aws-sdk-go/private/signer/v2", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + 
"versionExact": "v1.7.9" }, { "checksumSHA1": "Eo9yODN5U99BK0pMzoqnBm7PCrY=", "path": "github.com/aws/aws-sdk-go/private/waiter", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "9n/Gdm1mNIxB7eXRZR+LP2pLjr8=", "path": "github.com/aws/aws-sdk-go/service/acm", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { - "checksumSHA1": "ygS1AtvAaYa1JHsccugtZUlxnxo=", + "checksumSHA1": "Ykf7vcT+gAM+nsZ2vfRbWR51iqM=", "path": "github.com/aws/aws-sdk-go/service/apigateway", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "vywzqp8jtu1rUKkb/4LEld2yOgQ=", "path": "github.com/aws/aws-sdk-go/service/applicationautoscaling", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "0/2niio3ok72EAFl/s3S/E/yabc=", "path": "github.com/aws/aws-sdk-go/service/autoscaling", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "rKlCBX8p5aFkljRSWug8chDKOsU=", "path": "github.com/aws/aws-sdk-go/service/cloudformation", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "FKms6qE/E3ZLLV90G877CrXJwpk=", "path": "github.com/aws/aws-sdk-go/service/cloudfront", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "JkCPEbRbVHODZ8hw8fRRB0ow0+s=", "path": "github.com/aws/aws-sdk-go/service/cloudtrail", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "ZnIZiTYeRgS2393kOcYxNL0qAUQ=", "path": "github.com/aws/aws-sdk-go/service/cloudwatch", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", 
- "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { - "checksumSHA1": "wlq1vQbXSJ4NK6fzlVrPDZwyw8A=", + "checksumSHA1": "eil1c4KFMkqPN+ng7GsMlBV8TFc=", "path": "github.com/aws/aws-sdk-go/service/cloudwatchevents", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "TMRiIJYbg0/5naYSnYk3DQnaDkk=", "path": "github.com/aws/aws-sdk-go/service/cloudwatchlogs", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "8T0+kiovp+vGclOMZMajizGsG54=", "path": "github.com/aws/aws-sdk-go/service/codebuild", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "JKGhzZ6hg3myUEnNndjUyamloN4=", "path": "github.com/aws/aws-sdk-go/service/codecommit", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { - "checksumSHA1": "Lzj28Igm2Nazp9iY1qt3nJQ8vv4=", + "checksumSHA1": "Lw5wzTslFwdkfXupmArobCYb6G8=", "path": "github.com/aws/aws-sdk-go/service/codedeploy", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "LXjLQyMAadcANG0UURWuw4di2YE=", "path": "github.com/aws/aws-sdk-go/service/codepipeline", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "NYRd4lqocAcZdkEvLHAZYyXz8Bs=", "path": "github.com/aws/aws-sdk-go/service/configservice", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "fcYSy6jPQjLB7mtOfxsMqWnjobU=", "path": "github.com/aws/aws-sdk-go/service/databasemigrationservice", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { 
"checksumSHA1": "efnIi8bx7cQJ46T9mtzg/SFRqLI=", "path": "github.com/aws/aws-sdk-go/service/directoryservice", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "D5tbr+FKR8BUU0HxxGB9pS9Dlrc=", "path": "github.com/aws/aws-sdk-go/service/dynamodb", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "ecCVL8+SptmQlojrGtL8mQdaJ6E=", "path": "github.com/aws/aws-sdk-go/service/ec2", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "B6qHy1+Rrp9lQCBR/JDRT72kuCI=", "path": "github.com/aws/aws-sdk-go/service/ecr", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "UFpKfwRxhzQk3pCbBrBa2RsPL24=", "path": "github.com/aws/aws-sdk-go/service/ecs", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "jTTOfudaj/nYDyLCig9SKlDFFHk=", "path": "github.com/aws/aws-sdk-go/service/efs", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "5ZYWoEnb0SID/9cKRb1oGPrrhsA=", "path": "github.com/aws/aws-sdk-go/service/elasticache", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "oVV/BlLfwPI+iycKd9PIQ7oLm/4=", "path": "github.com/aws/aws-sdk-go/service/elasticbeanstalk", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "yvQhmYq5ZKkKooTgkZ+M6032Vr0=", "path": "github.com/aws/aws-sdk-go/service/elasticsearchservice", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + 
"revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "M1+iJ/A2Ml8bxSJFrBr/jWsv9w0=", "path": "github.com/aws/aws-sdk-go/service/elastictranscoder", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "BjzlDfZp1UvDoFfFnkwBxJxtylg=", "path": "github.com/aws/aws-sdk-go/service/elb", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "42TACCjZnJKGuF4ijfLpKUpw4/I=", "path": "github.com/aws/aws-sdk-go/service/elbv2", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { - "checksumSHA1": "x+ykEiXwI53Wm6Ypb4XgFf/6HaI=", + "checksumSHA1": "lJcieoov9dRhwpuEBasKweL7Mzo=", "path": "github.com/aws/aws-sdk-go/service/emr", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "1O87s9AddHMbwCu6ooNULcW9iE8=", "path": "github.com/aws/aws-sdk-go/service/firehose", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "g5xmBO7nAUGV2yT8SAL2tfP8DUU=", "path": "github.com/aws/aws-sdk-go/service/glacier", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "7JybKGBdRMLcnHP+126VLsnVghM=", "path": "github.com/aws/aws-sdk-go/service/iam", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "Bk6ExT97T4NMOyXthMr6Avm34mg=", "path": "github.com/aws/aws-sdk-go/service/inspector", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "lUmFKbtBQn9S4qrD5GOd57PIU1c=", "path": "github.com/aws/aws-sdk-go/service/kinesis", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": 
"2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "l1NpLkHXS+eDybfk4Al9Afhyf/4=", "path": "github.com/aws/aws-sdk-go/service/kms", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "8kUY3AExG/gcAJ2I2a5RCSoxx5I=", "path": "github.com/aws/aws-sdk-go/service/lambda", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "Ab4YFGFLtEBEIpr8kHkLjB7ydGY=", "path": "github.com/aws/aws-sdk-go/service/lightsail", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "c3N3uwWuXjwio6NNDAlDr0oUUXk=", "path": "github.com/aws/aws-sdk-go/service/opsworks", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { - "checksumSHA1": "jlUKUEyZw9qh+qLaPaRzWS5bxEk=", + "checksumSHA1": "ra0UNwqr9Ic/fsEGk41dvl5jqbs=", "path": "github.com/aws/aws-sdk-go/service/rds", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "09fncNHyk8Tcw9Ailvi0pi9F1Xc=", "path": "github.com/aws/aws-sdk-go/service/redshift", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "VWVMEqjfDDgB14lgsv0Zq3dQclU=", "path": "github.com/aws/aws-sdk-go/service/route53", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "eEWM4wKzVbRqAwIy3MdMCDUGs2s=", "path": "github.com/aws/aws-sdk-go/service/s3", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "4NNi2Ab0iPu/MRGo/kn20mTNxg4=", "path": 
"github.com/aws/aws-sdk-go/service/ses", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "KpqdFUB/0gBsouCqZmflQ4YPXB0=", "path": "github.com/aws/aws-sdk-go/service/sfn", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "cRGam+7Yt9Ys4WQH6TNYg+Fjf20=", "path": "github.com/aws/aws-sdk-go/service/simpledb", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "3wN8qn+1be7xe/0zXrOM502s+8M=", "path": "github.com/aws/aws-sdk-go/service/sns", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "pMyhp8ffTMnHDoF+Wu0rcvhVoNE=", "path": "github.com/aws/aws-sdk-go/service/sqs", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "UEVVPCLpzuLRBIZI7X1A8mIpSuA=", "path": "github.com/aws/aws-sdk-go/service/ssm", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "Knj17ZMPWkGYTm2hZxEgnuboMM4=", "path": "github.com/aws/aws-sdk-go/service/sts", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "C99KOCRh6qMcFwKFZy3r8we9NNE=", "path": "github.com/aws/aws-sdk-go/service/waf", - "revision": "fa1a4bc634fffa6ac468d8fb217e05475e063440", - "revisionTime": "2017-03-08T00:44:25Z", - "version": "v1.7.5", - "versionExact": "v1.7.5" + "revision": "695fe24acaf9afe80b0ce261d4637f42ba0b4c7d", + "revisionTime": "2017-03-13T22:48:26Z", + "version": "v1", + "versionExact": "v1.7.9" }, { "checksumSHA1": "nqw2Qn5xUklssHTubS5HDvEL9L4=", @@ -1179,6 +1185,18 @@ "revision": "7bfb7937d106522a9c6d659864dca47cddcccc8a", "revisionTime": "2017-01-10T09:44:45Z" }, + { + "checksumSHA1": "6fUPaqXabil0m2nqKONt9lOmo4c=", + "path": "github.com/circonus-labs/circonus-gometrics/api", + "revision": "55add91cfb689b0fd6e9fa67c58c7a948310a80e", + "revisionTime": "2017-03-17T00:26:31Z" + }, + { + "checksumSHA1": 
"bQhz/fcyZPmuHSH2qwC4ZtATy5c=", + "path": "github.com/circonus-labs/circonus-gometrics/api/config", + "revision": "55add91cfb689b0fd6e9fa67c58c7a948310a80e", + "revisionTime": "2017-03-17T00:26:31Z" + }, { "checksumSHA1": "QhYMdplKQJAMptRaHZBB8CF6HdM=", "path": "github.com/cloudflare/cloudflare-go", @@ -1270,9 +1288,10 @@ "revisionTime": "2016-07-14T17:28:59Z" }, { - "checksumSHA1": "+LNqBN6tG7kPK8t4EIoNxk+VTvg=", + "checksumSHA1": "cSJrzeVJLa9x2xoVqrJLz2Y+l0Y=", "path": "github.com/cyberdelia/heroku-go/v3", - "revision": "81c5afa1abcf69cc18ccc24fa3716b5a455c9208" + "revision": "58deda4c1fb0b4803387b29dc916c21887b81954", + "revisionTime": "2017-03-06T18:52:00Z" }, { "checksumSHA1": "/5cvgU+J4l7EhMXTK76KaCAfOuU=", @@ -1336,6 +1355,42 @@ "revision": "50133d63723f8fa376e632a853739990a133be16", "revisionTime": "2017-02-21T19:08:14Z" }, + { + "checksumSHA1": "VTxWyFud/RedrpllGdQonVtGM/A=", + "path": "github.com/docker/docker/api/types/strslice", + "revision": "b248de7e332b6e67b08a8981f68060e6ae629ccf", + "revisionTime": "2016-09-15T05:15:42Z" + }, + { + "checksumSHA1": "28zvWJsE4skyLANiN3Png632NLM=", + "path": "github.com/docker/docker/pkg/urlutil", + "revision": "b248de7e332b6e67b08a8981f68060e6ae629ccf", + "revisionTime": "2016-09-15T05:15:42Z" + }, + { + "checksumSHA1": "lEqVJt7+iIa0nMKYWIUoQhh9VTM=", + "path": "github.com/docker/go-connections/nat", + "revision": "990a1a1a70b0da4c4cb70e117971a4f0babfbf1a", + "revisionTime": "2016-06-08T02:44:54Z" + }, + { + "checksumSHA1": "P03iBfOzJJIhNWtskEWCoaEamBs=", + "path": "github.com/docker/libcompose/config", + "revision": "f5739a73c53493ebd1ff76d6ec95f3fc1c478c38", + "revisionTime": "2017-02-24T10:46:12Z" + }, + { + "checksumSHA1": "VCGi4eudukyWy7ulSjKeEHCyfks=", + "path": "github.com/docker/libcompose/utils", + "revision": "f5739a73c53493ebd1ff76d6ec95f3fc1c478c38", + "revisionTime": "2017-02-24T10:46:12Z" + }, + { + "checksumSHA1": "qUasAZHQeUb/16j062vNLffMhlA=", + "path": "github.com/docker/libcompose/yaml", + "revision": "f5739a73c53493ebd1ff76d6ec95f3fc1c478c38", + "revisionTime": "2017-02-24T10:46:12Z" + }, { "checksumSHA1": "zkENTbOfU8YoxPfFwVAhTz516Dg=", "path": "github.com/dustin/go-humanize", @@ -1386,6 +1441,12 @@ "revision": "a720dfa8df582c51dee1b36feabb906bde1588bd", "revisionTime": "2017-01-03T08:10:50Z" }, + { + "checksumSHA1": "VdXZPcDRHK1T7XjBu2KW8Mb8S6w=", + "path": "github.com/flynn/go-shlex", + "revision": "3f9db97f856818214da2e1057f8ad84803971cff", + "revisionTime": "2015-05-15T14:53:56Z" + }, { "checksumSHA1": "n25vuAkZbpXDMGrutoefN+b4g+M=", "path": "github.com/franela/goreq", @@ -1474,9 +1535,10 @@ "revisionTime": "2016-11-17T03:31:26Z" }, { - "checksumSHA1": "ov+6gzPH5YDff0pSit5Zolkh2gQ=", + "checksumSHA1": "nu3W9toub02L8S239VzXF+pevWM=", "path": "github.com/google/go-github/github", - "revision": "ac4445ca1c9dfacf5c0bbf34b712b23a3bb59b6c" + "revision": "c1bdf188056730d883ce163c5f7400f25ba766d6", + "revisionTime": "2017-03-11T05:09:05Z" }, { "checksumSHA1": "Evpv9y6iPdy+8FeAVDmKrqV1sqo=", @@ -1498,278 +1560,284 @@ { "checksumSHA1": "xKB/9qxVhWxAERkjZLYfuUBR4P8=", "path": "github.com/gophercloud/gophercloud", - "revision": "b06120d13e262ceaf890ef38ee30898813696af0", - "revisionTime": "2017-02-14T04:36:15Z" + "revision": "0f64da0e36de86a0ca1a8f2fc1b0570a0d3f7504", + "revisionTime": "2017-03-10T01:59:53Z" }, { - "checksumSHA1": "S3zTth9INyj1RfyHkQEvJAvRWvw=", + "checksumSHA1": "0KdIjTH5IO8hlIl8kdfI6313GiY=", "path": "github.com/gophercloud/gophercloud/openstack", - "revision": "b06120d13e262ceaf890ef38ee30898813696af0", - 
"revisionTime": "2017-02-14T04:36:15Z" + "revision": "0f64da0e36de86a0ca1a8f2fc1b0570a0d3f7504", + "revisionTime": "2017-03-10T01:59:53Z" }, { - "checksumSHA1": "XAKLUSwXSMGtbp+U874qU4MzT/A=", + "checksumSHA1": "f2hdkOhYmmO2ljNtr+OThK8VAEI=", "path": "github.com/gophercloud/gophercloud/openstack/blockstorage/extensions/volumeactions", - "revision": "b06120d13e262ceaf890ef38ee30898813696af0", - "revisionTime": "2017-02-14T04:36:15Z" + "revision": "0f64da0e36de86a0ca1a8f2fc1b0570a0d3f7504", + "revisionTime": "2017-03-10T01:59:53Z" }, { "checksumSHA1": "9DTfNt/B4aZWXEmTNqXU5rNrrDc=", "path": "github.com/gophercloud/gophercloud/openstack/blockstorage/v1/volumes", - "revision": "b06120d13e262ceaf890ef38ee30898813696af0", - "revisionTime": "2017-02-14T04:36:15Z" + "revision": "0f64da0e36de86a0ca1a8f2fc1b0570a0d3f7504", + "revisionTime": "2017-03-10T01:59:53Z" }, { "checksumSHA1": "B4IXSmq364HcBruvvV0QjDFxZgc=", "path": "github.com/gophercloud/gophercloud/openstack/blockstorage/v2/volumes", - "revision": "b06120d13e262ceaf890ef38ee30898813696af0", - "revisionTime": "2017-02-14T04:36:15Z" + "revision": "0f64da0e36de86a0ca1a8f2fc1b0570a0d3f7504", + "revisionTime": "2017-03-10T01:59:53Z" + }, + { + "checksumSHA1": "y49Ur726Juznj85+23ZgqMvehgg=", + "path": "github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/availabilityzones", + "revision": "0f64da0e36de86a0ca1a8f2fc1b0570a0d3f7504", + "revisionTime": "2017-03-10T01:59:53Z" }, { "checksumSHA1": "w2wHF5eEBE89ZYlkS9GAJsSIq9U=", "path": "github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/bootfromvolume", - "revision": "b06120d13e262ceaf890ef38ee30898813696af0", - "revisionTime": "2017-02-14T04:36:15Z" + "revision": "0f64da0e36de86a0ca1a8f2fc1b0570a0d3f7504", + "revisionTime": "2017-03-10T01:59:53Z" }, { "checksumSHA1": "e7AW3YDVYJPKUjpqsB4AL9RRlTw=", "path": "github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/floatingips", - "revision": "b06120d13e262ceaf890ef38ee30898813696af0", - "revisionTime": "2017-02-14T04:36:15Z" + "revision": "0f64da0e36de86a0ca1a8f2fc1b0570a0d3f7504", + "revisionTime": "2017-03-10T01:59:53Z" }, { "checksumSHA1": "RWwUliHD65cWApdEo4ckOcPSArg=", "path": "github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/keypairs", - "revision": "b06120d13e262ceaf890ef38ee30898813696af0", - "revisionTime": "2017-02-14T04:36:15Z" + "revision": "0f64da0e36de86a0ca1a8f2fc1b0570a0d3f7504", + "revisionTime": "2017-03-10T01:59:53Z" }, { "checksumSHA1": "tOmntqlmZ/r8aObUChNloddLhwk=", "path": "github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/schedulerhints", - "revision": "b06120d13e262ceaf890ef38ee30898813696af0", - "revisionTime": "2017-02-14T04:36:15Z" + "revision": "0f64da0e36de86a0ca1a8f2fc1b0570a0d3f7504", + "revisionTime": "2017-03-10T01:59:53Z" }, { "checksumSHA1": "jNrUTQf+9dYfaD7YqvKwC+kGvyY=", "path": "github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/secgroups", - "revision": "b06120d13e262ceaf890ef38ee30898813696af0", - "revisionTime": "2017-02-14T04:36:15Z" + "revision": "0f64da0e36de86a0ca1a8f2fc1b0570a0d3f7504", + "revisionTime": "2017-03-10T01:59:53Z" }, { "checksumSHA1": "ci4gzd7Uy9JC4NcQ2ms19pjtW6s=", "path": "github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/servergroups", - "revision": "b06120d13e262ceaf890ef38ee30898813696af0", - "revisionTime": "2017-02-14T04:36:15Z" + "revision": "0f64da0e36de86a0ca1a8f2fc1b0570a0d3f7504", + "revisionTime": "2017-03-10T01:59:53Z" }, { "checksumSHA1": "qBpGbX7LQMPATdO8XyQmU7IXDiI=", 
"path": "github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/startstop", - "revision": "b06120d13e262ceaf890ef38ee30898813696af0", - "revisionTime": "2017-02-14T04:36:15Z" + "revision": "0f64da0e36de86a0ca1a8f2fc1b0570a0d3f7504", + "revisionTime": "2017-03-10T01:59:53Z" }, { "checksumSHA1": "5JuziAp9BSRA/z+8pTjVLTWeTw4=", "path": "github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/tenantnetworks", - "revision": "b06120d13e262ceaf890ef38ee30898813696af0", - "revisionTime": "2017-02-14T04:36:15Z" + "revision": "0f64da0e36de86a0ca1a8f2fc1b0570a0d3f7504", + "revisionTime": "2017-03-10T01:59:53Z" }, { "checksumSHA1": "2VNgU0F9PDax5VKClvMLmbzuksw=", "path": "github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/volumeattach", - "revision": "b06120d13e262ceaf890ef38ee30898813696af0", - "revisionTime": "2017-02-14T04:36:15Z" + "revision": "0f64da0e36de86a0ca1a8f2fc1b0570a0d3f7504", + "revisionTime": "2017-03-10T01:59:53Z" }, { - "checksumSHA1": "a9xDFPigDjHlPlthknKlBduGvKY=", + "checksumSHA1": "S1BV3o8Pa0aM5RaUuRYXY7LnPIc=", "path": "github.com/gophercloud/gophercloud/openstack/compute/v2/flavors", - "revision": "b06120d13e262ceaf890ef38ee30898813696af0", - "revisionTime": "2017-02-14T04:36:15Z" + "revision": "0f64da0e36de86a0ca1a8f2fc1b0570a0d3f7504", + "revisionTime": "2017-03-10T01:59:53Z" }, { - "checksumSHA1": "UGeqrw3KdPNRwDxl315MAYyy/uY=", + "checksumSHA1": "Rnzx2YgOD41k8KoPA08tR992PxQ=", "path": "github.com/gophercloud/gophercloud/openstack/compute/v2/images", - "revision": "b06120d13e262ceaf890ef38ee30898813696af0", - "revisionTime": "2017-02-14T04:36:15Z" + "revision": "0f64da0e36de86a0ca1a8f2fc1b0570a0d3f7504", + "revisionTime": "2017-03-10T01:59:53Z" }, { - "checksumSHA1": "S8zR7Y8Yf6dz5+m5jyWYu5ar+vk=", + "checksumSHA1": "IjCvcaNnRW++hclt21WUkMYinaA=", "path": "github.com/gophercloud/gophercloud/openstack/compute/v2/servers", - "revision": "b06120d13e262ceaf890ef38ee30898813696af0", - "revisionTime": "2017-02-14T04:36:15Z" + "revision": "0f64da0e36de86a0ca1a8f2fc1b0570a0d3f7504", + "revisionTime": "2017-03-10T01:59:53Z" }, { "checksumSHA1": "1sVqsZBZBNhDXLY9XzjMkcOkcbg=", "path": "github.com/gophercloud/gophercloud/openstack/identity/v2/tenants", - "revision": "b06120d13e262ceaf890ef38ee30898813696af0", - "revisionTime": "2017-02-14T04:36:15Z" + "revision": "0f64da0e36de86a0ca1a8f2fc1b0570a0d3f7504", + "revisionTime": "2017-03-10T01:59:53Z" }, { "checksumSHA1": "AvUU5En9YpG25iLlcAPDgcQODjI=", "path": "github.com/gophercloud/gophercloud/openstack/identity/v2/tokens", - "revision": "b06120d13e262ceaf890ef38ee30898813696af0", - "revisionTime": "2017-02-14T04:36:15Z" + "revision": "0f64da0e36de86a0ca1a8f2fc1b0570a0d3f7504", + "revisionTime": "2017-03-10T01:59:53Z" }, { "checksumSHA1": "ZKyEbJuIlvuZ9aUushINCXJHF4w=", "path": "github.com/gophercloud/gophercloud/openstack/identity/v3/tokens", - "revision": "b06120d13e262ceaf890ef38ee30898813696af0", - "revisionTime": "2017-02-14T04:36:15Z" + "revision": "0f64da0e36de86a0ca1a8f2fc1b0570a0d3f7504", + "revisionTime": "2017-03-10T01:59:53Z" }, { "checksumSHA1": "5+wNKnxGvSGV8lHS+7km0ZiNEts=", "path": "github.com/gophercloud/gophercloud/openstack/imageservice/v2/imagedata", - "revision": "f47ca3a2d457dd4601b823eb17ecc3094baf5fab", - "revisionTime": "2017-02-17T17:23:12Z" + "revision": "0f64da0e36de86a0ca1a8f2fc1b0570a0d3f7504", + "revisionTime": "2017-03-10T01:59:53Z" }, { - "checksumSHA1": "TG1z1hNllqjUgBpNnqZTxHqXBTs=", + "checksumSHA1": "fyXTcJg3obtp3n+99WOGtUiMelg=", "path": 
"github.com/gophercloud/gophercloud/openstack/imageservice/v2/images", - "revision": "f47ca3a2d457dd4601b823eb17ecc3094baf5fab", - "revisionTime": "2017-02-17T17:23:12Z" + "revision": "0f64da0e36de86a0ca1a8f2fc1b0570a0d3f7504", + "revisionTime": "2017-03-10T01:59:53Z" }, { "checksumSHA1": "aTHxjMlfNXFJ3l2TZyvIwqt/3kM=", "path": "github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/fwaas/firewalls", - "revision": "b06120d13e262ceaf890ef38ee30898813696af0", - "revisionTime": "2017-02-14T04:36:15Z" + "revision": "0f64da0e36de86a0ca1a8f2fc1b0570a0d3f7504", + "revisionTime": "2017-03-10T01:59:53Z" }, { "checksumSHA1": "14ZhP0wE/WCL/6oujcML755AaH4=", "path": "github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/fwaas/policies", - "revision": "b06120d13e262ceaf890ef38ee30898813696af0", - "revisionTime": "2017-02-14T04:36:15Z" + "revision": "0f64da0e36de86a0ca1a8f2fc1b0570a0d3f7504", + "revisionTime": "2017-03-10T01:59:53Z" }, { "checksumSHA1": "sYET5A7WTyJ7dpuxR/VXYoReldw=", "path": "github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/fwaas/rules", - "revision": "b06120d13e262ceaf890ef38ee30898813696af0", - "revisionTime": "2017-02-14T04:36:15Z" + "revision": "0f64da0e36de86a0ca1a8f2fc1b0570a0d3f7504", + "revisionTime": "2017-03-10T01:59:53Z" }, { - "checksumSHA1": "0UcU/7oQbhlnYKoT+I+T403U8MQ=", + "checksumSHA1": "CHmnyRSFPivC+b/ojgfeEIY5ReM=", "path": "github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/layer3/floatingips", - "revision": "b06120d13e262ceaf890ef38ee30898813696af0", - "revisionTime": "2017-02-14T04:36:15Z" + "revision": "0f64da0e36de86a0ca1a8f2fc1b0570a0d3f7504", + "revisionTime": "2017-03-10T01:59:53Z" }, { "checksumSHA1": "Mjt7GwFygyqPxygY8xZZnUasHmk=", "path": "github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/layer3/routers", - "revision": "b06120d13e262ceaf890ef38ee30898813696af0", - "revisionTime": "2017-02-14T04:36:15Z" + "revision": "0f64da0e36de86a0ca1a8f2fc1b0570a0d3f7504", + "revisionTime": "2017-03-10T01:59:53Z" }, { "checksumSHA1": "mCTz2rnyVfhjJ+AD/WihCNcYWiY=", "path": "github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/lbaas/members", - "revision": "b06120d13e262ceaf890ef38ee30898813696af0", - "revisionTime": "2017-02-14T04:36:15Z" + "revision": "0f64da0e36de86a0ca1a8f2fc1b0570a0d3f7504", + "revisionTime": "2017-03-10T01:59:53Z" }, { "checksumSHA1": "B2mtHvADREtFLam72wyijyQh/Ds=", "path": "github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/lbaas/monitors", - "revision": "b06120d13e262ceaf890ef38ee30898813696af0", - "revisionTime": "2017-02-14T04:36:15Z" + "revision": "0f64da0e36de86a0ca1a8f2fc1b0570a0d3f7504", + "revisionTime": "2017-03-10T01:59:53Z" }, { "checksumSHA1": "pTr22CKKJ26yvhgd0SRxFF4jkEs=", "path": "github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/lbaas/pools", - "revision": "b06120d13e262ceaf890ef38ee30898813696af0", - "revisionTime": "2017-02-14T04:36:15Z" + "revision": "0f64da0e36de86a0ca1a8f2fc1b0570a0d3f7504", + "revisionTime": "2017-03-10T01:59:53Z" }, { "checksumSHA1": "E7/Z7g5O9o+ge+8YklheTpKgWNw=", "path": "github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/lbaas/vips", - "revision": "b06120d13e262ceaf890ef38ee30898813696af0", - "revisionTime": "2017-02-14T04:36:15Z" + "revision": "0f64da0e36de86a0ca1a8f2fc1b0570a0d3f7504", + "revisionTime": "2017-03-10T01:59:53Z" }, { "checksumSHA1": "mhpwj5tPv7Uw5aUfC55fhLPBcKo=", "path": 
"github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/lbaas_v2/listeners", - "revision": "b06120d13e262ceaf890ef38ee30898813696af0", - "revisionTime": "2017-02-14T04:36:15Z" + "revision": "0f64da0e36de86a0ca1a8f2fc1b0570a0d3f7504", + "revisionTime": "2017-03-10T01:59:53Z" }, { "checksumSHA1": "5efJz6UH7JCFeav5ZCCzicXCFTU=", "path": "github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/lbaas_v2/loadbalancers", - "revision": "b06120d13e262ceaf890ef38ee30898813696af0", - "revisionTime": "2017-02-14T04:36:15Z" + "revision": "0f64da0e36de86a0ca1a8f2fc1b0570a0d3f7504", + "revisionTime": "2017-03-10T01:59:53Z" }, { "checksumSHA1": "TVFgBTz7B6bb1R4TWdgAkbE1/fk=", "path": "github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/lbaas_v2/monitors", - "revision": "b06120d13e262ceaf890ef38ee30898813696af0", - "revisionTime": "2017-02-14T04:36:15Z" + "revision": "0f64da0e36de86a0ca1a8f2fc1b0570a0d3f7504", + "revisionTime": "2017-03-10T01:59:53Z" }, { "checksumSHA1": "xirjw9vJIN6rmkT3T56bfPfOLUM=", "path": "github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/lbaas_v2/pools", - "revision": "b06120d13e262ceaf890ef38ee30898813696af0", - "revisionTime": "2017-02-14T04:36:15Z" + "revision": "0f64da0e36de86a0ca1a8f2fc1b0570a0d3f7504", + "revisionTime": "2017-03-10T01:59:53Z" }, { "checksumSHA1": "FKwSMrpQf7b3TcCOQfh+ovoBShA=", "path": "github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/security/groups", - "revision": "b06120d13e262ceaf890ef38ee30898813696af0", - "revisionTime": "2017-02-14T04:36:15Z" + "revision": "0f64da0e36de86a0ca1a8f2fc1b0570a0d3f7504", + "revisionTime": "2017-03-10T01:59:53Z" }, { "checksumSHA1": "CsS/kI3VeLcSHzMKviFVDwqwgvk=", "path": "github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/security/rules", - "revision": "b06120d13e262ceaf890ef38ee30898813696af0", - "revisionTime": "2017-02-14T04:36:15Z" + "revision": "0f64da0e36de86a0ca1a8f2fc1b0570a0d3f7504", + "revisionTime": "2017-03-10T01:59:53Z" }, { "checksumSHA1": "zKOhFTL5BDZPMC58ZzZkryjskno=", "path": "github.com/gophercloud/gophercloud/openstack/networking/v2/networks", - "revision": "b06120d13e262ceaf890ef38ee30898813696af0", - "revisionTime": "2017-02-14T04:36:15Z" + "revision": "0f64da0e36de86a0ca1a8f2fc1b0570a0d3f7504", + "revisionTime": "2017-03-10T01:59:53Z" }, { "checksumSHA1": "Lx257Qaf6y2weNwHTx6lm3OY7a8=", "path": "github.com/gophercloud/gophercloud/openstack/networking/v2/ports", - "revision": "b06120d13e262ceaf890ef38ee30898813696af0", - "revisionTime": "2017-02-14T04:36:15Z" + "revision": "0f64da0e36de86a0ca1a8f2fc1b0570a0d3f7504", + "revisionTime": "2017-03-10T01:59:53Z" }, { "checksumSHA1": "wY0MY7RpX0Z2Y0rMmrAuYS6cHYA=", "path": "github.com/gophercloud/gophercloud/openstack/networking/v2/subnets", - "revision": "b06120d13e262ceaf890ef38ee30898813696af0", - "revisionTime": "2017-02-14T04:36:15Z" + "revision": "0f64da0e36de86a0ca1a8f2fc1b0570a0d3f7504", + "revisionTime": "2017-03-10T01:59:53Z" }, { "checksumSHA1": "LtdQKIKKRKe6FOGdBvrBz/bg1Gc=", "path": "github.com/gophercloud/gophercloud/openstack/objectstorage/v1/accounts", - "revision": "b06120d13e262ceaf890ef38ee30898813696af0", - "revisionTime": "2017-02-14T04:36:15Z" + "revision": "0f64da0e36de86a0ca1a8f2fc1b0570a0d3f7504", + "revisionTime": "2017-03-10T01:59:53Z" }, { "checksumSHA1": "1lwXcRrM5A7iCfekbn3bpfNLe3g=", "path": "github.com/gophercloud/gophercloud/openstack/objectstorage/v1/containers", - "revision": "b06120d13e262ceaf890ef38ee30898813696af0", - 
"revisionTime": "2017-02-14T04:36:15Z" + "revision": "0f64da0e36de86a0ca1a8f2fc1b0570a0d3f7504", + "revisionTime": "2017-03-10T01:59:53Z" }, { "checksumSHA1": "dotTh+ZsNiyv8e9Z4e0chPEZDKE=", "path": "github.com/gophercloud/gophercloud/openstack/objectstorage/v1/objects", - "revision": "b06120d13e262ceaf890ef38ee30898813696af0", - "revisionTime": "2017-02-14T04:36:15Z" + "revision": "0f64da0e36de86a0ca1a8f2fc1b0570a0d3f7504", + "revisionTime": "2017-03-10T01:59:53Z" }, { "checksumSHA1": "roxPPVwS2CjJhf0CApHNQxAX7EA=", "path": "github.com/gophercloud/gophercloud/openstack/objectstorage/v1/swauth", - "revision": "b06120d13e262ceaf890ef38ee30898813696af0", - "revisionTime": "2017-02-14T04:36:15Z" + "revision": "0f64da0e36de86a0ca1a8f2fc1b0570a0d3f7504", + "revisionTime": "2017-03-10T01:59:53Z" }, { "checksumSHA1": "TDOZnaS0TO0NirpxV1QwPerAQTY=", "path": "github.com/gophercloud/gophercloud/openstack/utils", - "revision": "b06120d13e262ceaf890ef38ee30898813696af0", - "revisionTime": "2017-02-14T04:36:15Z" + "revision": "0f64da0e36de86a0ca1a8f2fc1b0570a0d3f7504", + "revisionTime": "2017-03-10T01:59:53Z" }, { - "checksumSHA1": "pmpLcbUZ+EgLUmTbzMtGRq3haOU=", + "checksumSHA1": "FNy075ydQZXvnL2bNNIOCmy/ghs=", "path": "github.com/gophercloud/gophercloud/pagination", - "revision": "b06120d13e262ceaf890ef38ee30898813696af0", - "revisionTime": "2017-02-14T04:36:15Z" + "revision": "0f64da0e36de86a0ca1a8f2fc1b0570a0d3f7504", + "revisionTime": "2017-03-10T01:59:53Z" }, { "checksumSHA1": "6tvhO5ieOvX9R6o0vtl19s0lr8E=", @@ -1827,6 +1895,12 @@ "revision": "1792bd8de119ba49b17fd8d3c3c1f488ec613e62", "revisionTime": "2016-11-07T20:49:10Z" }, + { + "checksumSHA1": "jfELEMRhiTcppZmRH+ZwtkVS5Uw=", + "path": "github.com/hashicorp/consul/acl", + "revision": "144a5e5340893a5e726e831c648f26dc19fef1e7", + "revisionTime": "2017-03-10T23:35:18Z" + }, { "checksumSHA1": "ygEjA1d52B1RDmZu8+1WTwkrYDQ=", "comment": "v0.6.3-28-g3215b87", @@ -1834,6 +1908,24 @@ "revision": "48d7b069ad443a48ffa93364048ff8909b5d1fa2", "revisionTime": "2017-02-07T15:38:46Z" }, + { + "checksumSHA1": "nomqbPd9j3XelMMcv7+vTEPsdr4=", + "path": "github.com/hashicorp/consul/consul/structs", + "revision": "48d7b069ad443a48ffa93364048ff8909b5d1fa2", + "revisionTime": "2017-02-07T15:38:46Z" + }, + { + "checksumSHA1": "dgYoWTG7nIL9CUBuktDvMZqYDR8=", + "path": "github.com/hashicorp/consul/testutil", + "revision": "48d7b069ad443a48ffa93364048ff8909b5d1fa2", + "revisionTime": "2017-02-07T15:38:46Z" + }, + { + "checksumSHA1": "ZPDLNuKJGZJFV9HlJ/V0O4/c/Ko=", + "path": "github.com/hashicorp/consul/types", + "revision": "48d7b069ad443a48ffa93364048ff8909b5d1fa2", + "revisionTime": "2017-02-07T15:38:46Z" + }, { "checksumSHA1": "cdOCt0Yb+hdErz8NAQqayxPmRsY=", "path": "github.com/hashicorp/errwrap", @@ -1891,6 +1983,12 @@ "revision": "6bb64b370b90e7ef1fa532be9e591a81c3493e00", "revisionTime": "2016-05-03T14:34:40Z" }, + { + "checksumSHA1": "BGODc7juQbdG3vNXHZG07kt+lKI=", + "path": "github.com/hashicorp/go-sockaddr", + "revision": "f910dd83c2052566cad78352c33af714358d1372", + "revisionTime": "2017-02-08T07:30:35Z" + }, { "checksumSHA1": "85XUnluYJL7F55ptcwdmN8eSOsk=", "path": "github.com/hashicorp/go-uuid", @@ -1902,6 +2000,18 @@ "revision": "e96d3840402619007766590ecea8dd7af1292276", "revisionTime": "2016-10-31T18:26:05Z" }, + { + "checksumSHA1": "d9PxF1XQGLMJZRct2R8qVM/eYlE=", + "path": "github.com/hashicorp/golang-lru", + "revision": "0a025b7e63adc15a622f29b0b2c4c3848243bbf6", + "revisionTime": "2016-08-13T22:13:03Z" + }, + { + "checksumSHA1": 
"9hffs0bAIU6CquiRhKQdzjHnKt0=", + "path": "github.com/hashicorp/golang-lru/simplelru", + "revision": "0a025b7e63adc15a622f29b0b2c4c3848243bbf6", + "revisionTime": "2016-08-13T22:13:03Z" + }, { "checksumSHA1": "Ok3Csn6Voou7pQT6Dv2mkwpqFtw=", "path": "github.com/hashicorp/hcl", @@ -1998,12 +2108,30 @@ "revision": "0dc08b1671f34c4250ce212759ebd880f743d883", "revisionTime": "2015-06-09T07:04:31Z" }, + { + "checksumSHA1": "1zk7IeGClUqBo+Phsx89p7fQ/rQ=", + "path": "github.com/hashicorp/memberlist", + "revision": "23ad4b7d7b38496cd64c241dfd4c60b7794c254a", + "revisionTime": "2017-02-08T21:15:06Z" + }, + { + "checksumSHA1": "wpirHJV/6VEbbD+HyAP2/6Xc0ek=", + "path": "github.com/hashicorp/raft", + "revision": "aaad9f10266e089bd401e7a6487651a69275641b", + "revisionTime": "2016-11-10T00:52:40Z" + }, { "checksumSHA1": "o8In5byYGDCY/mnTuV4Tfmci+3w=", "comment": "v0.7.0-12-ge4ec8cc", "path": "github.com/hashicorp/serf/coordinate", "revision": "e4ec8cc423bbe20d26584b96efbeb9102e16d05f" }, + { + "checksumSHA1": "/0bDLkmtbWMm06hjM7HnHaw0QBo=", + "path": "github.com/hashicorp/serf/serf", + "revision": "19f2c401e122352c047a84d6584dd51e2fb8fcc4", + "revisionTime": "2017-03-08T19:39:51Z" + }, { "checksumSHA1": "2fkVZIzvxIGBLhSiVnkTgGiqpQ4=", "path": "github.com/hashicorp/vault/api", @@ -2712,6 +2840,12 @@ "revision": "d10489e5d217ebe9c23470c4d0ba7081a6d1e799", "revisionTime": "2016-12-25T12:04:19Z" }, + { + "checksumSHA1": "tnMZLo/kR9Kqx6GtmWwowtTLlA8=", + "path": "github.com/sean-/seed", + "revision": "e2103e2c35297fb7e17febb81e49b312087a2372", + "revisionTime": "2017-03-13T16:33:22Z" + }, { "checksumSHA1": "ySSmShoczI/i/5PzurH8Uhi/dbA=", "path": "github.com/sethvargo/go-fastly", @@ -2889,6 +3023,24 @@ "revision": "ba9c9e33906f58169366275e3450db66139a31a9", "revisionTime": "2015-12-15T15:34:51Z" }, + { + "checksumSHA1": "B9K+5clCq0PU8n8/utbKT0QjQyU=", + "path": "github.com/xeipuuv/gojsonpointer", + "revision": "6fe8760cad3569743d51ddbb243b26f8456742dc", + "revisionTime": "2017-02-25T23:34:18Z" + }, + { + "checksumSHA1": "pSoUW+qY6LwIJ5lFwGohPU5HUpg=", + "path": "github.com/xeipuuv/gojsonreference", + "revision": "e02fc20de94c78484cd5ffb007f8af96be030a45", + "revisionTime": "2015-08-08T06:50:54Z" + }, + { + "checksumSHA1": "yuSXFgYAa6NfA3+8Kv1s7pYdFZE=", + "path": "github.com/xeipuuv/gojsonschema", + "revision": "ff0417f4272e480246b4507459b3f6ae721a87ac", + "revisionTime": "2017-02-25T17:21:24Z" + }, { "checksumSHA1": "eXEiPlpDRaamJQ4vPX/9t333kQc=", "comment": "v1.5.4-13-g75ce5fb", @@ -3099,10 +3251,10 @@ "revisionTime": "2017-01-13T00:03:17Z" }, { - "checksumSHA1": "qt8Mg1hYm0ApdGODreQxBh30FDU=", + "checksumSHA1": "lAMqZyc46cU5WaRuw4mVHFXpvps=", "path": "google.golang.org/api/container/v1", - "revision": "3cc2e591b550923a2c5f0ab5a803feda924d5823", - "revisionTime": "2016-11-27T23:54:21Z" + "revision": "64485db7e8c8be51e572801d06cdbcfadd3546c1", + "revisionTime": "2017-02-23T23:41:36Z" }, { "checksumSHA1": "JYl35km48fLrIx7YUtzcgd4J7Rk=", @@ -3111,10 +3263,10 @@ "revisionTime": "2016-11-27T23:54:21Z" }, { - "checksumSHA1": "a1NkriuA/uk+Wv6yCFzxz4LIaDg=", + "checksumSHA1": "C7k1pbU/WU4CBoBwA4EBUnV/iek=", "path": "google.golang.org/api/gensupport", - "revision": "8840436417f044055c16fc7e4018f08484f52839", - "revisionTime": "2017-01-13T00:03:17Z" + "revision": "64485db7e8c8be51e572801d06cdbcfadd3546c1", + "revisionTime": "2017-02-23T23:41:36Z" }, { "checksumSHA1": "yQREK/OWrz9PLljbr127+xFk6J0=", @@ -3280,50 +3432,50 @@ { "checksumSHA1": "IOhjrvLMN5Mw8PeiRF/xAfSxvew=", "path": "gopkg.in/ns1/ns1-go.v2", - "revision": 
"49e3a8a0b594e847e01cdac77810ba49f9564ccf", - "revisionTime": "2017-03-02T13:56:36Z" + "revision": "5bff869d22e76e3699281eaa61d9d285216f321a", + "revisionTime": "2017-03-21T12:56:04Z" }, { - "checksumSHA1": "t20/HSVruhTb/TVwgc9mpw/oMTA=", + "checksumSHA1": "e7eKqt/2RnmGPYJtcJd4IY2M/DU=", "path": "gopkg.in/ns1/ns1-go.v2/rest", - "revision": "49e3a8a0b594e847e01cdac77810ba49f9564ccf", - "revisionTime": "2017-03-02T13:56:36Z" + "revision": "5bff869d22e76e3699281eaa61d9d285216f321a", + "revisionTime": "2017-03-21T12:56:04Z" }, { "checksumSHA1": "euh1cYwe0t2erigdvOMueyniPH0=", "path": "gopkg.in/ns1/ns1-go.v2/rest/model", - "revision": "49e3a8a0b594e847e01cdac77810ba49f9564ccf", - "revisionTime": "2017-03-02T13:56:36Z" + "revision": "5bff869d22e76e3699281eaa61d9d285216f321a", + "revisionTime": "2017-03-21T12:56:04Z" }, { "checksumSHA1": "tdMxXKsUHn3yZpur14ZNLMVyQJM=", "path": "gopkg.in/ns1/ns1-go.v2/rest/model/account", - "revision": "49e3a8a0b594e847e01cdac77810ba49f9564ccf", - "revisionTime": "2017-03-02T13:56:36Z" + "revision": "5bff869d22e76e3699281eaa61d9d285216f321a", + "revisionTime": "2017-03-21T12:56:04Z" }, { "checksumSHA1": "gBVND8veklEQV0gxF3lERV6mSZk=", "path": "gopkg.in/ns1/ns1-go.v2/rest/model/data", - "revision": "49e3a8a0b594e847e01cdac77810ba49f9564ccf", - "revisionTime": "2017-03-02T13:56:36Z" + "revision": "5bff869d22e76e3699281eaa61d9d285216f321a", + "revisionTime": "2017-03-21T12:56:04Z" }, { "checksumSHA1": "GbL7ThrBZfKs1lhzguxzscIynac=", "path": "gopkg.in/ns1/ns1-go.v2/rest/model/dns", - "revision": "49e3a8a0b594e847e01cdac77810ba49f9564ccf", - "revisionTime": "2017-03-02T13:56:36Z" + "revision": "5bff869d22e76e3699281eaa61d9d285216f321a", + "revisionTime": "2017-03-21T12:56:04Z" }, { "checksumSHA1": "CuurmNep8iMdYFodxRxAeewowsQ=", "path": "gopkg.in/ns1/ns1-go.v2/rest/model/filter", - "revision": "49e3a8a0b594e847e01cdac77810ba49f9564ccf", - "revisionTime": "2017-03-02T13:56:36Z" + "revision": "5bff869d22e76e3699281eaa61d9d285216f321a", + "revisionTime": "2017-03-21T12:56:04Z" }, { "checksumSHA1": "B0C8F5th11AHl1fo8k0I8+DvrjE=", "path": "gopkg.in/ns1/ns1-go.v2/rest/model/monitor", - "revision": "49e3a8a0b594e847e01cdac77810ba49f9564ccf", - "revisionTime": "2017-03-02T13:56:36Z" + "revision": "5bff869d22e76e3699281eaa61d9d285216f321a", + "revisionTime": "2017-03-21T12:56:04Z" }, { "checksumSHA1": "mkLQOQwQwoUc9Kr9+PaVGrKUzI4=", diff --git a/website/Gemfile b/website/Gemfile index f9b604b3c6..08e6fe65e5 100644 --- a/website/Gemfile +++ b/website/Gemfile @@ -1,3 +1,3 @@ source "https://rubygems.org" -gem "middleman-hashicorp", "0.3.12" +gem "middleman-hashicorp", "0.3.13" diff --git a/website/Gemfile.lock b/website/Gemfile.lock index ff3b5b7b0b..0811f6d62e 100644 --- a/website/Gemfile.lock +++ b/website/Gemfile.lock @@ -77,7 +77,7 @@ GEM rack (>= 1.4.5, < 2.0) thor (>= 0.15.2, < 2.0) tilt (~> 1.4.1, < 2.0) - middleman-hashicorp (0.3.12) + middleman-hashicorp (0.3.13) bootstrap-sass (~> 3.3) builder (~> 3.2) middleman (~> 3.4) @@ -151,7 +151,7 @@ PLATFORMS ruby DEPENDENCIES - middleman-hashicorp (= 0.3.12) + middleman-hashicorp (= 0.3.13) BUNDLED WITH 1.14.6 diff --git a/website/Makefile b/website/Makefile index 91a898c3a7..41fcf114ed 100644 --- a/website/Makefile +++ b/website/Makefile @@ -1,4 +1,4 @@ -VERSION?="0.3.12" +VERSION?="0.3.13" website: @echo "==> Starting website in Docker..." 
diff --git a/website/config.rb b/website/config.rb
index 88673e08d9..6810a2abfc 100644
--- a/website/config.rb
+++ b/website/config.rb
@@ -2,7 +2,7 @@ set :base_url, "https://www.terraform.io/"
 activate :hashicorp do |h|
   h.name = "terraform"
-  h.version = "0.8.5"
+  h.version = "0.9.1"
   h.github_slug = "hashicorp/terraform"
 end
diff --git a/website/packer.json b/website/packer.json
index b8068d9f4e..b51f638015 100644
--- a/website/packer.json
+++ b/website/packer.json
@@ -8,7 +8,7 @@
   "builders": [
     {
       "type": "docker",
-      "image": "hashicorp/middleman-hashicorp:0.3.12",
+      "image": "hashicorp/middleman-hashicorp:0.3.13",
       "discard": "true",
       "run_command": ["-d", "-i", "-t", "{{ .Image }}", "/bin/sh"]
     }
diff --git a/website/source/assets/images/logo-header.svg b/website/source/assets/images/logo-header.svg
new file mode 100644
index 0000000000..7f9e690f4c
--- /dev/null
+++ b/website/source/assets/images/logo-header.svg
@@ -0,0 +1,30 @@
+ [SVG markup: "HashiCorp Terraform" logo header; 30 lines of vector data not preserved in this extraction]
diff --git a/website/source/assets/javascripts/application.js b/website/source/assets/javascripts/application.js
index 5e308bc94e..bc70be7fbb 100644
--- a/website/source/assets/javascripts/application.js
+++ b/website/source/assets/javascripts/application.js
@@ -1,3 +1,4 @@
+//= require turbolinks
 //= require jquery
 //= require bootstrap
@@ -23,3 +24,6 @@
 //= require app/_Engine.Typewriter
 //= require app/_Sidebar
 //= require app/_Init
+
+// assets/javascripts/application.js
+//= require hashicorp/mega-nav
diff --git a/website/source/assets/stylesheets/_announcement-bnr.scss b/website/source/assets/stylesheets/_announcement-bnr.scss
deleted file mode 100755
index 6e3253039f..0000000000
--- a/website/source/assets/stylesheets/_announcement-bnr.scss
+++ /dev/null
@@ -1,142 +0,0 @@
-//
-// announcement bnr
-// --------------------------------------------------
-
-$enterprise-bnr-font-weight: 300;
-$enterprise-bnr-consul-color: #B52A55;
-$enterprise-color-dark-white: #A9B1B5;
-
-body{
-  // when _announcment-bnr.erb (ie.
Consul Enterprise Announcment) is being used in layout we need to push down content to accommodate - // add this class to body - &.-displaying-bnr{ - #header{ - > .container{ - padding-top: 8px; - -webkit-transform: translateY(32px); - -ms-transform: translateY(32px); - transform: translateY(32px); - } - } - - #jumbotron { - .container{ - .jumbo-logo-wrap{ - margin-top: 160px; - } - } - } - - &.page-sub{ - #header{ - > .container{ - padding-bottom: 32px; - } - } - } - } -} - - -#announcement-bnr { - height: 40px; - flex-shrink: 0; - background-color: #000; - - &.-absolute{ - position: absolute; - top: 0; - left: 0; - width: 100%; - z-index: 9999; - } - - a,p{ - font-size: 14px; - color: $enterprise-color-dark-white; - font-family: $header-font-family; - font-weight: $enterprise-bnr-font-weight; - font-size: 13px; - line-height: 40px; - margin-bottom: 0; - } - - .link-highlight{ - display: inline-block; - margin-left: 3px; - color: lighten($purple, 10%); - font-weight: 400; - -webkit-transform: translateY(1px); - -ms-transform: translateY(1px); - transform: translateY(1px); - } - - .enterprise-logo{ - position: relative; - top: 4px; - - &:hover{ - text-decoration: none; - - svg{ - rect{ - fill: $enterprise-color-dark-white; - } - } - } - - svg{ - width: 156px; - height: 18px; - fill: $white; - margin-right: 4px; - margin-left: 3px; - - rect{ - @include transition(all .1s ease-in); - } - } - } -} - -.hcaret{ - display: inline-block; - -moz-transform: translate(0, -1px) rotate(135deg); - -webkit-transform: translate(0, -1px) rotate(135deg); - transform: translate(0, -1px) rotate(135deg); - width: 7px; - height: 7px; - border-top: 1px solid lighten($purple, 10%); - border-left: 1px solid lighten($purple, 10%); - @include transition(all .1s ease-in); -} - -@media (max-width: 768px) { - #announcement-bnr { - .tagline{ - display: none; - } - } -} - -@media (max-width: 320px) { - #announcement-bnr { - a,p{ - font-size: 12px; - } - - .link-highlight{ - display: inline-block; - margin-left: 1px; - } - - .enterprise-logo svg{ - width: 128px; - margin-left: 2px; - } - - .hcaret{ - display: none; - } - } -} diff --git a/website/source/assets/stylesheets/_docs.scss b/website/source/assets/stylesheets/_docs.scss index 76a7c80e06..35f16eb60d 100755 --- a/website/source/assets/stylesheets/_docs.scss +++ b/website/source/assets/stylesheets/_docs.scss @@ -18,6 +18,7 @@ body.layout-azure, body.layout-bitbucket, body.layout-chef, body.layout-azurerm, +body.layout-circonus, body.layout-clc, body.layout-cloudflare, body.layout-cloudstack, @@ -39,6 +40,7 @@ body.layout-heroku, body.layout-ignition, body.layout-icinga2, body.layout-influxdb, +body.layout-kubernetes, body.layout-librato, body.layout-logentries, body.layout-mailgun, diff --git a/website/source/assets/stylesheets/_global.scss b/website/source/assets/stylesheets/_global.scss index bfd2e5c901..f30bbff6db 100755 --- a/website/source/assets/stylesheets/_global.scss +++ b/website/source/assets/stylesheets/_global.scss @@ -45,7 +45,6 @@ h4 { p { margin-bottom: 30px; font-size: 16px; - font-family: $font-family-open-sans; font-weight: regular; line-height: 1.5; } diff --git a/website/source/assets/stylesheets/_header.scss b/website/source/assets/stylesheets/_header.scss index 4acd721693..3106642355 100755 --- a/website/source/assets/stylesheets/_header.scss +++ b/website/source/assets/stylesheets/_header.scss @@ -13,12 +13,11 @@ body.page-sub{ #header { .navbar-brand { .logo{ - font-size: 32px; + width: $project-logo-width; + height: $project-logo-height; + 
font-size: 0px;
      font-family: $font-family-klavika;
-     font-weight: 500;
-     background: image-url('logo-header.png') 0 0 no-repeat;
-     @include img-retina("../images/logo-header.png", "../images/logo-header@2x.png", $project-logo-width, $project-logo-height);
-     background-position: 0 40%;
+     background: image-url('logo-header.svg') 0 32% no-repeat;

      &:hover{
        opacity: .6;
@@ -56,27 +55,3 @@
     }
   }
 }
-
-@media (max-width: 414px) {
-  #header {
-    .navbar-brand {
-      .logo{
-        padding-left: 37px;
-        font-size: 18px;
-        @include img-retina("../images/logo-header.png", "../images/logo-header@2x.png", $project-logo-width * .75, $project-logo-height * .75);
-        //background-position: 0 45%;
-      }
-    }
-  }
-}
-
-
-@media (max-width: 320px) {
-  #header {
-    .navbar-brand {
-      .logo{
-        font-size: 0 !important; //hide terraform text
-      }
-    }
-  }
-}
diff --git a/website/source/assets/stylesheets/_home.scss b/website/source/assets/stylesheets/_home.scss
index d22edac97c..807a8b5421 100755
--- a/website/source/assets/stylesheets/_home.scss
+++ b/website/source/assets/stylesheets/_home.scss
@@ -31,6 +31,25 @@ body.page-home {
     z-index: 0;
   }

+  .announcement {
+    margin-top: 60px;
+    border: 1px solid rgba(255,255,255,.3);
+    padding: 25px 10px;
+
+    p {
+      color: $gray;
+      line-height: 1.2;
+      margin-bottom: 0;
+
+      a {
+        color: $purple;
+        text-decoration: underline;
+        // inline-block ensures links don't text-wrap
+        display: inline-block;
+      }
+    }
+  }
+
   #customer-logos{
     position: relative;
     width: 100%;
diff --git a/website/source/assets/stylesheets/_variables.scss b/website/source/assets/stylesheets/_variables.scss
index 768ff74ab4..1a1b0ff072 100755
--- a/website/source/assets/stylesheets/_variables.scss
+++ b/website/source/assets/stylesheets/_variables.scss
@@ -11,7 +11,7 @@ $header-height: 90px;
 $jumbotron-color: #fff;
 $btn-border-radius: 4px;
 $el-border-radius: 6px;
-$negative-hero-margin: -70px;
+$negative-hero-margin: -80px;

 // colors
 // -------------------------
diff --git a/website/source/assets/stylesheets/application.scss b/website/source/assets/stylesheets/application.scss
index e4c0ae7a89..165f2a01f3 100755
--- a/website/source/assets/stylesheets/application.scss
+++ b/website/source/assets/stylesheets/application.scss
@@ -3,6 +3,9 @@
 @import url("//fonts.googleapis.com/css?family=Open+Sans:300,400,600");

+// Mega Nav
+@import 'hashicorp/mega-nav';
+
 // Core variables and mixins
 @import '_variables';
@@ -19,7 +22,6 @@
 @import 'hashicorp-shared/_hashicorp-sidebar';

 // Components
-@import '_announcement-bnr';
 @import '_header';
 @import '_footer';
 @import '_jumbotron';
diff --git a/website/source/assets/stylesheets/hashicorp-shared/_hashicorp-header.scss b/website/source/assets/stylesheets/hashicorp-shared/_hashicorp-header.scss
index 699a2d073d..f549cfed26 100755
--- a/website/source/assets/stylesheets/hashicorp-shared/_hashicorp-header.scss
+++ b/website/source/assets/stylesheets/hashicorp-shared/_hashicorp-header.scss
@@ -310,32 +310,3 @@
     }
   }
 }
-
-@media (max-width: 414px) {
-  #header {
-    .navbar-toggle{
-      padding-top: 10px;
-      height: $header-mobile-height;
-    }
-
-    .navbar-brand {
-      height: $header-mobile-height;
-
-      .logo{
-        height: $header-mobile-height;
-        line-height: $header-mobile-height;
-      }
-      .by-hashicorp{
-        height: $header-mobile-height;
-        line-height: $header-mobile-height;
-        padding-top: 0;
-      }
-    }
-    .main-links,
-    .external-links {
-      li > a {
-        line-height: $header-mobile-height;
-      }
-    }
-  }
-}
diff --git a/website/source/assets/stylesheets/hashicorp-shared/_project-utility.scss
b/website/source/assets/stylesheets/hashicorp-shared/_project-utility.scss
index 570d6932c2..01f04a96b3 100755
--- a/website/source/assets/stylesheets/hashicorp-shared/_project-utility.scss
+++ b/website/source/assets/stylesheets/hashicorp-shared/_project-utility.scss
@@ -4,8 +4,8 @@
 // --------------------------------------------------

 // Variables
-$project-logo-width: 38px;
-$project-logo-height: 40px;
+$project-logo-width: 163px;
+$project-logo-height: 90px;
 $project-logo-pad-left: 8px;

 // Mixins
diff --git a/website/source/docs/backends/config.html.md b/website/source/docs/backends/config.html.md
index 9673f45167..36ae09cb1b 100644
--- a/website/source/docs/backends/config.html.md
+++ b/website/source/docs/backends/config.html.md
@@ -54,7 +54,7 @@ the configuration itself. We call this specifying only a _partial_ configuration

 With a partial configuration, the remaining configuration is expected as
 part of the [initialization](/docs/backends/init.html) process. There are
-two ways to supply the remaining configuration:
+a few ways to supply the remaining configuration:

   * **Interactively**: Terraform will interactively ask you for the required
     values. Terraform will not ask you for optional values.
@@ -63,13 +63,32 @@ two ways to supply the remaining configuration:
     This file can then be sourced via some secure means (such as
     [Vault](https://www.vaultproject.io)).

-In both cases, the final configuration is stored on disk in the
+  * **Command-line key/value pairs**: Key/value pairs in the format of
+    `key=value` can be specified as part of the init command. Note that
+    many shells retain command-line flags in a history file, so this isn't
+    recommended for secrets.
+
+In all cases, the final configuration is stored on disk in the
 ".terraform" directory, which should be ignored from version control.

 This means that sensitive information can be omitted from version
 control but it ultimately still lives on disk. In the future, Terraform
 may provide basic encryption on disk so that values are at least not
 plaintext.

+When using partial configuration, Terraform requires at a minimum that
+an empty backend configuration be present in the Terraform files. For example:
+
+```
+terraform {
+  backend "consul" {}
+}
+```
+
+This minimal requirement allows Terraform to detect _unsetting_ backends.
+We cannot accept the backend type on the command line because, while it is
+technically possible, Terraform would then be unable to detect whether you
+want to unset your backend (and move back to local state).
+
 ## Changing Configuration

 You can change your backend configuration at any time. You can change
diff --git a/website/source/docs/backends/legacy-0-8.html.md b/website/source/docs/backends/legacy-0-8.html.md
index 596cda0e43..c3197d23c8 100644
--- a/website/source/docs/backends/legacy-0-8.html.md
+++ b/website/source/docs/backends/legacy-0-8.html.md
@@ -118,12 +118,15 @@ and you will lose any changes that were in the remote location.

 The `terraform remote config` command has been replaced with
 `terraform init`. The new command is better in many ways by allowing file-based
-configuration, automatic state migration, and more. However, the new
-command doesn't support configuration via command-line flags.
+configuration, automatic state migration, and more.
+
+Migrating `terraform remote config` scripting to the new `terraform init`
+command should take very little effort.
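To make the migration concrete, here is a sketch of what an old invocation and its
replacement might look like. It reuses the Consul `address` and `path` values from the
examples elsewhere in these docs; the legacy flag spelling follows the 0.8-era
`terraform remote config` documentation, so treat it as illustrative rather than exact:

```
# Terraform 0.8 and earlier: remote state was configured entirely via flags
$ terraform remote config \
    -backend=consul \
    -backend-config="address=demo.consul.io" \
    -backend-config="path=newpath"

# Terraform 0.9 and later: declare the backend in configuration
# (an empty block is enough for partial configuration):
#
#   terraform {
#     backend "consul" {}
#   }
#
# ...then supply the remaining settings during initialization:
$ terraform init \
    -backend-config="address=demo.consul.io" \
    -backend-config="path=newpath"
```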
The new `terraform init` command takes a `-backend-config` flag which is -an HCL file that is merged with the backend configuration in your Terraform -files. This lets you keep secrets out of your actual configuration. +either an HCL file or a string in the format of `key=value`. This configuration +is merged with the backend configuration in your Terraform files. +This lets you keep secrets out of your actual configuration. We call this "partial configuration" and you can learn more in the docs on [configuring backends](/docs/backends/config.html). diff --git a/website/source/docs/backends/types/consul.html.md b/website/source/docs/backends/types/consul.html.md index 59c05724e8..f03e6dcc01 100644 --- a/website/source/docs/backends/types/consul.html.md +++ b/website/source/docs/backends/types/consul.html.md @@ -53,3 +53,5 @@ The following configuration options / environment variables are supported: * `datacenter` - (Optional) The datacenter to use. Defaults to that of the agent. * `http_auth` / `CONSUL_HTTP_AUTH` - (Optional) HTTP Basic Authentication credentials to be used when communicating with Consul, in the format of either `user` or `user:pass`. + * `gzip` - (Optional) `true` to compress the state data using gzip, or `false` (the default) to leave it uncompressed. + * `lock` - (Optional) `false` to disable locking. This defaults to `true`, but note that locking requires session permissions with Consul. diff --git a/website/source/docs/commands/apply.html.markdown b/website/source/docs/commands/apply.html.markdown index d1c46f1544..8224c8a14b 100644 --- a/website/source/docs/commands/apply.html.markdown +++ b/website/source/docs/commands/apply.html.markdown @@ -43,11 +43,11 @@ The command-line flags are all optional. The list of available flags are: apply. * `-state=path` - Path to the state file. Defaults to "terraform.tfstate". - Ignored when [remote state](/docs/state/remote/index.html) is used. + Ignored when [remote state](/docs/state/remote.html) is used. * `-state-out=path` - Path to write updated state file. By default, the `-state` path will be used. Ignored when - [remote state](/docs/state/remote/index.html) is used. + [remote state](/docs/state/remote.html) is used. * `-target=resource` - A [Resource Address](/docs/internals/resource-addressing.html) to target. Operation will diff --git a/website/source/docs/commands/console.html.markdown b/website/source/docs/commands/console.html.markdown index 9a70f1757f..62d8a5025b 100644 --- a/website/source/docs/commands/console.html.markdown +++ b/website/source/docs/commands/console.html.markdown @@ -53,7 +53,7 @@ $ echo "1 + 5" | terraform console ## Remote State The `terraform console` command will read configured state even if it -is [remote](/docs/state/remote/index.html). This is great for scripting +is [remote](/docs/state/remote.html). This is great for scripting state reading in CI environments or other remote scenarios. After configuring remote state, run a `terraform remote pull` command diff --git a/website/source/docs/commands/import.html.md b/website/source/docs/commands/import.html.md index 7c43e7c399..6502419726 100644 --- a/website/source/docs/commands/import.html.md +++ b/website/source/docs/commands/import.html.md @@ -43,10 +43,10 @@ The command-line flags are all optional. The list of available flags are: * `-input=true` - Whether to ask for input for provider configuration. * `-state=path` - The path to read and save state files (unless state-out is - specified). 
Ignored when [remote state](/docs/state/remote/index.html) is used. + specified). Ignored when [remote state](/docs/state/remote.html) is used. * `-state-out=path` - Path to write the final state file. By default, this is - the state path. Ignored when [remote state](/docs/state/remote/index.html) is + the state path. Ignored when [remote state](/docs/state/remote.html) is used. * `-provider=provider` - Specified provider to use for import. This is used for diff --git a/website/source/docs/commands/init.html.markdown b/website/source/docs/commands/init.html.markdown index 11047aaf82..a9c0863fe1 100644 --- a/website/source/docs/commands/init.html.markdown +++ b/website/source/docs/commands/init.html.markdown @@ -44,17 +44,19 @@ The command-line flags are all optional. The list of available flags are: * `-backend=true` - Initialize the [backend](/docs/backends) for this environment. -* `-backend-config=path` - Path to an HCL file with additional configuration - for the backend. This is merged with the backend in the Terraform configuration. +* `-backend-config=value` - Value can be a path to an HCL file or a string + in the format of 'key=value'. This specifies additional configuration to merge + for the backend. This can be specified multiple times. Flags specified + later on the command line override those specified earlier if they conflict. * `-get=true` - Download any modules for this configuration. * `-input=true` - Ask for input interactively if necessary. If this is false and input is required, `init` will error. -## Backend Config File +## Backend Config -The `-backend-config` path can be used to specify additional +The `-backend-config` flag can take a path or `key=value` pair to specify additional backend configuration when [initializing a backend](/docs/backends/init.html). This is particularly useful for @@ -62,7 +64,7 @@ This is particularly useful for configuration lets you keep sensitive information out of your Terraform configuration. -The backend configuration file is a basic HCL file with key/value pairs. +For path values, the backend configuration file is a basic HCL file with key/value pairs. The keys are configuration keys for your backend. You do not need to wrap it in a `terraform` block. For example, the following file is a valid backend configuration file for the Consul backend type: @@ -71,3 +73,17 @@ configuration file for the Consul backend type: address = "demo.consul.io" path = "newpath" ``` + +If the value contains an equal sign (`=`), it is parsed as a `key=value` pair. +The format of this flag is identical to the `-var` flag for plan, apply, +etc. but applies to configuration keys for backends. For example: + +``` +$ terraform init \ + -backend-config 'address=demo.consul.io' \ + -backend-config 'path=newpath' +``` + +These two formats can be mixed. In this case, the values will be merged by +key with keys specified later on the command line overriding conflicting +keys specified earlier. diff --git a/website/source/docs/commands/output.html.markdown b/website/source/docs/commands/output.html.markdown index c8aade6636..9e6a602523 100644 --- a/website/source/docs/commands/output.html.markdown +++ b/website/source/docs/commands/output.html.markdown @@ -25,7 +25,7 @@ The command-line flags are all optional. The list of available flags are: a key per output. If `NAME` is specified, only the output specified will be returned. This can be piped into tools such as `jq` for further processing. * `-state=path` - Path to the state file. Defaults to "terraform.tfstate". 
- Ignored when [remote state](/docs/state/remote/index.html) is used. + Ignored when [remote state](/docs/state/remote.html) is used. * `-module=module_name` - The module path which has needed output. By default this is the root path. Other modules can be specified by a period-separated list. Example: "foo" would reference the module diff --git a/website/source/docs/commands/plan.html.markdown b/website/source/docs/commands/plan.html.markdown index 48d5374736..fb0b506b8a 100644 --- a/website/source/docs/commands/plan.html.markdown +++ b/website/source/docs/commands/plan.html.markdown @@ -56,7 +56,7 @@ The command-line flags are all optional. The list of available flags are: * `-refresh=true` - Update the state prior to checking for differences. * `-state=path` - Path to the state file. Defaults to "terraform.tfstate". - Ignored when [remote state](/docs/state/remote/index.html) is used. + Ignored when [remote state](/docs/state/remote.html) is used. * `-target=resource` - A [Resource Address](/docs/internals/resource-addressing.html) to target. Operation will diff --git a/website/source/docs/commands/push.html.markdown b/website/source/docs/commands/push.html.markdown index 4714d99266..8fc5ed3c74 100644 --- a/website/source/docs/commands/push.html.markdown +++ b/website/source/docs/commands/push.html.markdown @@ -117,7 +117,7 @@ or plan), and the `-overwrite` flag tells the push command to update Atlas. ## Remote State Requirement `terraform push` requires that -[remote state](/docs/commands/remote-config.html) +[remote state](/docs/state/remote.html) is enabled. The reasoning for this is simple: `terraform push` sends your configuration to be managed remotely. For it to keep the state in sync and for you to be able to easily access that state, remote state must diff --git a/website/source/docs/commands/refresh.html.markdown b/website/source/docs/commands/refresh.html.markdown index 67d6c65bd7..faaff08467 100644 --- a/website/source/docs/commands/refresh.html.markdown +++ b/website/source/docs/commands/refresh.html.markdown @@ -32,11 +32,11 @@ The command-line flags are all optional. The list of available flags are: * `-no-color` - Disables output with coloring * `-state=path` - Path to read and write the state file to. Defaults to "terraform.tfstate". - Ignored when [remote state](/docs/state/remote/index.html) is used. + Ignored when [remote state](/docs/state/remote.html) is used. * `-state-out=path` - Path to write updated state file. By default, the `-state` path will be used. Ignored when - [remote state](/docs/state/remote/index.html) is used. + [remote state](/docs/state/remote.html) is used. * `-target=resource` - A [Resource Address](/docs/internals/resource-addressing.html) to target. Operation will diff --git a/website/source/docs/commands/state/list.html.md b/website/source/docs/commands/state/list.html.md index 4970028d02..9b8a6dadcb 100644 --- a/website/source/docs/commands/state/list.html.md +++ b/website/source/docs/commands/state/list.html.md @@ -30,7 +30,7 @@ in [resource addressing format](/docs/commands/state/addressing.html). The command-line flags are all optional. The list of available flags are: * `-state=path` - Path to the state file. Defaults to "terraform.tfstate". - Ignored when [remote state](/docs/state/remote/index.html) is used. + Ignored when [remote state](/docs/state/remote.html) is used. 
## Example: All Resources diff --git a/website/source/docs/commands/state/mv.html.md b/website/source/docs/commands/state/mv.html.md index 089bd1d913..ee58bcf948 100644 --- a/website/source/docs/commands/state/mv.html.md +++ b/website/source/docs/commands/state/mv.html.md @@ -47,12 +47,12 @@ The command-line flags are all optional. The list of available flags are: This is only necessary if `-state-out` is specified. * `-state=path` - Path to the state file. Defaults to "terraform.tfstate". - Ignored when [remote state](/docs/state/remote/index.html) is used. + Ignored when [remote state](/docs/state/remote.html) is used. * `-state-out=path` - Path to the state file to write to. If this isn't specified the state specified by `-state` will be used. This can be a new or existing path. Ignored when - [remote state](/docs/state/remote/index.html) is used. + [remote state](/docs/state/remote.html) is used. ## Example: Rename a Resource diff --git a/website/source/docs/commands/state/show.html.md b/website/source/docs/commands/state/show.html.md index cda7ef3781..6f553d634d 100644 --- a/website/source/docs/commands/state/show.html.md +++ b/website/source/docs/commands/state/show.html.md @@ -30,7 +30,7 @@ in [resource addressing format](/docs/commands/state/addressing.html). The command-line flags are all optional. The list of available flags are: * `-state=path` - Path to the state file. Defaults to "terraform.tfstate". - Ignored when [remote state](/docs/state/remote/index.html) is used. + Ignored when [remote state](/docs/state/remote.html) is used. ## Example: Show a Resource diff --git a/website/source/docs/commands/taint.html.markdown b/website/source/docs/commands/taint.html.markdown index 32ae60b4b9..6ec9011366 100644 --- a/website/source/docs/commands/taint.html.markdown +++ b/website/source/docs/commands/taint.html.markdown @@ -56,8 +56,8 @@ The command-line flags are all optional. The list of available flags are: * `-no-color` - Disables output with coloring * `-state=path` - Path to read and write the state file to. Defaults to "terraform.tfstate". - Ignored when [remote state](/docs/state/remote/index.html) is used. + Ignored when [remote state](/docs/state/remote.html) is used. * `-state-out=path` - Path to write updated state file. By default, the `-state` path will be used. Ignored when - [remote state](/docs/state/remote/index.html) is used. + [remote state](/docs/state/remote.html) is used. diff --git a/website/source/docs/commands/untaint.html.markdown b/website/source/docs/commands/untaint.html.markdown index 7850c8714f..b077098c5e 100644 --- a/website/source/docs/commands/untaint.html.markdown +++ b/website/source/docs/commands/untaint.html.markdown @@ -56,8 +56,8 @@ certain cases, see above note). The list of available flags are: * `-no-color` - Disables output with coloring * `-state=path` - Path to read and write the state file to. Defaults to "terraform.tfstate". - Ignored when [remote state](/docs/state/remote/index.html) is used. + Ignored when [remote state](/docs/state/remote.html) is used. * `-state-out=path` - Path to write updated state file. By default, the `-state` path will be used. Ignored when - [remote state](/docs/state/remote/index.html) is used. + [remote state](/docs/state/remote.html) is used. 
diff --git a/website/source/docs/configuration/interpolation.html.md b/website/source/docs/configuration/interpolation.html.md index 2d1e3052bc..f6bc3e3c8b 100644 --- a/website/source/docs/configuration/interpolation.html.md +++ b/website/source/docs/configuration/interpolation.html.md @@ -84,6 +84,12 @@ interpolate the path to the current module. `root` will interpolate the path of the root module. In general, you probably want the `path.module` variable. +#### Terraform meta information + +The syntax is `terraform.FIELD`. This variable type contains metadata about +the currently executing Terraform run. FIELD can currently only be `env` to +reference the currently active [state environment](/docs/state/environments.html). + ## Conditionals @@ -273,7 +279,7 @@ The supported built-in functions are: * `pathexpand(string)` - Returns a filepath string with `~` expanded to the home directory. Note: This will create a plan diff between two different hosts, unless the filepaths are the same. - + * `replace(string, search, replace)` - Does a search and replace on the given string. All instances of `search` are replaced with the value of `replace`. If `search` is wrapped in forward slashes, it is treated @@ -312,6 +318,8 @@ The supported built-in functions are: `a_resource_param = ["${split(",", var.CSV_STRING)}"]`. Example: `split(",", module.amod.server_ids)` + * `substr(string, offset, length)` - Extracts a substring from the input string. A negative offset is interpreted as being equivalent to a positive offset measured backwards from the end of the string. A length of `-1` is interpreted as meaning "until the end of the string". + * `timestamp()` - Returns a UTC timestamp string in RFC 3339 format. This string will change with every invocation of the function, so in order to prevent diffs on every plan & apply, it must be used with the [`ignore_changes`](/docs/configuration/resources.html#ignore-changes) lifecycle attribute. diff --git a/website/source/docs/configuration/resources.html.md b/website/source/docs/configuration/resources.html.md index ab15b67442..1cc79bda4a 100644 --- a/website/source/docs/configuration/resources.html.md +++ b/website/source/docs/configuration/resources.html.md @@ -102,7 +102,7 @@ wildcard (e.g. `"rout*"`) is **not** supported. ### Timeouts -Individual Resources may provide a `timeout` block to enable users to configure the +Individual Resources may provide a `timeouts` block to enable users to configure the amount of time a specific operation is allowed to take before being considered an error. For example, the [aws_db_instance](/docs/providers/aws/r/db_instance.html#timeouts) @@ -122,7 +122,7 @@ resource "aws_db_instance" "timeout_example" { name = "mydb" [...] - timeout { + timeouts { create = "60m" delete = "2h" } diff --git a/website/source/docs/configuration/variables.html.md b/website/source/docs/configuration/variables.html.md index 223dcb1c01..c98b0bfaa8 100644 --- a/website/source/docs/configuration/variables.html.md +++ b/website/source/docs/configuration/variables.html.md @@ -296,7 +296,7 @@ the last value specified is effective. ### Variable Merging -When variables are conflicting, map values are merged and all are values are +When variables are conflicting, map values are merged and all other values are overridden. Map values are always merged. 
For example, if you set a variable twice on the command line: diff --git a/website/source/docs/import/importability.html.md b/website/source/docs/import/importability.html.md index 39e6908e1a..e1949f83f0 100644 --- a/website/source/docs/import/importability.html.md +++ b/website/source/docs/import/importability.html.md @@ -119,6 +119,11 @@ To make a resource importable, please see the * azurerm_storage_account * azurerm_virtual_network +### Circonus + +* circonus_check +* circonus_contact_group + ### DigitalOcean * digitalocean_domain diff --git a/website/source/docs/providers/aws/d/db_instance.html.markdown b/website/source/docs/providers/aws/d/db_instance.html.markdown index 25eff50ee1..a1dc380ed3 100644 --- a/website/source/docs/providers/aws/d/db_instance.html.markdown +++ b/website/source/docs/providers/aws/d/db_instance.html.markdown @@ -28,6 +28,7 @@ The following arguments are supported: The following attributes are exported: +* `address` - The address of the RDS instance. * `allocated_storage` - Specifies the allocated storage size specified in gigabytes. * `auto_minor_version_upgrade` - Indicates that minor version patches are applied automatically. * `availability_zone` - Specifies the name of the Availability Zone the DB instance is located in. @@ -40,8 +41,10 @@ The following attributes are exported: * `db_security_groups` - Provides List of DB security groups associated to this DB instance. * `db_subnet_group` - Specifies the name of the subnet group associated with the DB instance. * `db_instance_port` - Specifies the port that the DB instance listens on. +* `endpoint` - The connection endpoint. * `engine` - Provides the name of the database engine to be used for this DB instance. * `engine_version` - Indicates the database engine version. +* `hosted_zone_id` - The canonical hosted zone ID of the DB instance (to be used in a Route 53 Alias record). * `iops` - Specifies the Provisioned IOPS (I/O operations per second) value. * `kms_key_id` - If StorageEncrypted is true, the KMS key identifier for the encrypted DB instance. * `license_model` - License model information for this DB instance. @@ -50,6 +53,7 @@ The following attributes are exported: * `monitoring_role_arn` - The ARN for the IAM role that permits RDS to send Enhanced Monitoring metrics to CloudWatch Logs. * `multi_az` - Specifies if the DB instance is a Multi-AZ deployment. * `option_group_memberships` - Provides the list of option group memberships for this DB instance. +* `port` - The database port. * `preferred_backup_window` - Specifies the daily time range during which automated backups are created. * `preferred_maintenance_window` - Specifies the weekly time range during which system maintenance can occur in UTC. * `publicly_accessible` - Specifies the accessibility options for the DB instance. diff --git a/website/source/docs/providers/aws/d/route_table.html.markdown b/website/source/docs/providers/aws/d/route_table.html.markdown index 09ed6aa287..e8e17e4fe4 100644 --- a/website/source/docs/providers/aws/d/route_table.html.markdown +++ b/website/source/docs/providers/aws/d/route_table.html.markdown @@ -71,6 +71,8 @@ the selected Route Table. Each route supports the following: * `cidr_block` - The CIDR block of the route. +* `ipv6_cidr_block` - The IPv6 CIDR block of the route. +* `egress_only_gateway_id` - The ID of the Egress Only Internet Gateway. * `gateway_id` - The Internet Gateway ID. * `nat_gateway_id` - The NAT Gateway ID. * `instance_id` - The EC2 instance ID. 
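As an illustration of the new `aws_db_instance` data source attributes above, a minimal sketch that feeds `address` and `hosted_zone_id` into a Route 53 alias record; the instance identifier, zone ID, and domain here are hypothetical:

```
data "aws_db_instance" "database" {
  db_instance_identifier = "my-database" # hypothetical RDS identifier
}

resource "aws_route53_record" "db" {
  zone_id = "Z3EXAMPLE" # hypothetical Route 53 zone
  name    = "db.example.com"
  type    = "A"

  alias {
    name                   = "${data.aws_db_instance.database.address}"
    zone_id                = "${data.aws_db_instance.database.hosted_zone_id}"
    evaluate_target_health = false
  }
}
```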
diff --git a/website/source/docs/providers/aws/d/sns_topic.html.markdown b/website/source/docs/providers/aws/d/sns_topic.html.markdown index cc0e9346bd..0e1e6a4c27 100644 --- a/website/source/docs/providers/aws/d/sns_topic.html.markdown +++ b/website/source/docs/providers/aws/d/sns_topic.html.markdown @@ -1,6 +1,6 @@ --- layout: "aws" -page_title: "AWS: aws_sns_topic +page_title: "AWS: aws_sns_topic" sidebar_current: "docs-aws-datasource-sns-topic" description: |- Get information on an Amazon Simple Notification Service (SNS) Topic diff --git a/website/source/docs/providers/aws/d/vpn_gateway.html.markdown b/website/source/docs/providers/aws/d/vpn_gateway.html.markdown index 1505785525..96b2bf340a 100644 --- a/website/source/docs/providers/aws/d/vpn_gateway.html.markdown +++ b/website/source/docs/providers/aws/d/vpn_gateway.html.markdown @@ -14,7 +14,16 @@ a specific VPN gateway. ## Example Usage ``` +data "aws_vpn_gateway" "selected" { + filter { + name = "tag:Name" + values = ["vpn-gw"] + } +} +output "vpn_gateway_id" { + value = "${data.aws_vpn_gateway.selected.id}" +} ``` ## Argument Reference diff --git a/website/source/docs/providers/aws/r/api_gateway_api_key.html.markdown b/website/source/docs/providers/aws/r/api_gateway_api_key.html.markdown index ad537cfd6f..ad7e8413f2 100644 --- a/website/source/docs/providers/aws/r/api_gateway_api_key.html.markdown +++ b/website/source/docs/providers/aws/r/api_gateway_api_key.html.markdown @@ -10,6 +10,8 @@ description: |- Provides an API Gateway API Key. +~> **Warning:** Since the API Gateway usage plans feature was launched on August 11, 2016, usage plans are now **required** to associate an API key with an API stage. + ## Example Usage ``` @@ -39,6 +41,7 @@ The following arguments are supported: * `name` - (Required) The name of the API key * `description` - (Optional) The API key description. Defaults to "Managed by Terraform". * `enabled` - (Optional) Specifies whether the API key can be used by callers. Defaults to `true`. +* `value` - (Optional) The value of the API key. If not specified, it will be automatically generated by AWS on creation. * `stage_key` - (Optional) A list of stage keys associated with the API key - see below `stage_key` block supports the following: @@ -53,6 +56,7 @@ The following attributes are exported: * `id` - The ID of the API key * `created_date` - The creation date of the API key * `last_updated_date` - The last update date of the API key +* `value` - The value of the API key ## Import diff --git a/website/source/docs/providers/aws/r/api_gateway_deployment.html.markdown b/website/source/docs/providers/aws/r/api_gateway_deployment.html.markdown index 2f95fecc1a..4ad0a59509 100644 --- a/website/source/docs/providers/aws/r/api_gateway_deployment.html.markdown +++ b/website/source/docs/providers/aws/r/api_gateway_deployment.html.markdown @@ -10,8 +10,8 @@ description: |- Provides an API Gateway Deployment. --> **Note:** Depends on having `aws_api_gateway_method` inside your rest api. To ensure this -you might need to add an explicit `depends_on` for clean runs. +-> **Note:** Depends on having `aws_api_gateway_integration` inside your rest api (which in turn depends on `aws_api_gateway_method`). To avoid race conditions +you might need to add an explicit `depends_on = ["aws_api_gateway_integration.name"]`. 
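A minimal sketch of that `depends_on` pattern; the resource names here are hypothetical:

```
resource "aws_api_gateway_deployment" "example" {
  # Force the deployment to wait for the integration to exist first.
  depends_on = ["aws_api_gateway_integration.example"]

  rest_api_id = "${aws_api_gateway_rest_api.example.id}"
  stage_name  = "prod"
}
```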
## Example Usage diff --git a/website/source/docs/providers/aws/r/api_gateway_domain_name.html.markdown b/website/source/docs/providers/aws/r/api_gateway_domain_name.html.markdown index ed0a5ead46..b58f4e6882 100644 --- a/website/source/docs/providers/aws/r/api_gateway_domain_name.html.markdown +++ b/website/source/docs/providers/aws/r/api_gateway_domain_name.html.markdown @@ -55,15 +55,16 @@ resource "aws_route53_record" "example" { The following arguments are supported: * `domain_name` - (Required) The fully-qualified domain name to register -* `certificate_name` - (Required) The unique name to use when registering this - cert as an IAM server certificate -* `certificate_body` - (Required) The certificate issued for the domain name - being registered, in PEM format -* `certificate_chain` - (Required) The certificate for the CA that issued the +* `certificate_name` - (Optional) The unique name to use when registering this + cert as an IAM server certificate. Conflicts with `certificate_arn`. +* `certificate_body` - (Optional) The certificate issued for the domain name + being registered, in PEM format. Conflicts with `certificate_arn`. +* `certificate_chain` - (Optional) The certificate for the CA that issued the certificate, along with any intermediate CA certificates required to - create an unbroken chain to a certificate trusted by the intended API clients. -* `certificate_private_key` - (Required) The private key associated with the - domain certificate given in `certificate_body`. + create an unbroken chain to a certificate trusted by the intended API clients. Conflicts with `certificate_arn`. +* `certificate_private_key` - (Optional) The private key associated with the + domain certificate given in `certificate_body`. Conflicts with `certificate_arn`. +* `certificate_arn` - (Optional) The ARN for an AWS-managed certificate. Conflicts with `certificate_name`, `certificate_body`, `certificate_chain`, and `certificate_private_key`. ## Attributes Reference diff --git a/website/source/docs/providers/aws/r/api_gateway_usage_plan.html.markdown b/website/source/docs/providers/aws/r/api_gateway_usage_plan.html.markdown new file mode 100644 index 0000000000..46fa85d8e9 --- /dev/null +++ b/website/source/docs/providers/aws/r/api_gateway_usage_plan.html.markdown @@ -0,0 +1,107 @@ +--- +layout: "aws" +page_title: "AWS: aws_api_gateway_usage_plan" +sidebar_current: "docs-aws-resource-api-gateway-usage-plan" +description: |- + Provides an API Gateway Usage Plan. +--- + +# aws\_api\_gateway\_usage\_plan + +Provides an API Gateway Usage Plan. + +## Example Usage + +``` +resource "aws_api_gateway_rest_api" "myapi" { + name = "MyDemoAPI" +} + +... 
+ +resource "aws_api_gateway_deployment" "dev" { + rest_api_id = "${aws_api_gateway_rest_api.myapi.id}" + stage_name = "dev" +} + +resource "aws_api_gateway_deployment" "prod" { + rest_api_id = "${aws_api_gateway_rest_api.myapi.id}" + stage_name = "prod" +} + +resource "aws_api_gateway_usage_plan" "MyUsagePlan" { + name = "my-usage-plan" + description = "my description" + product_code = "MYCODE" + + api_stages { + api_id = "${aws_api_gateway_rest_api.myapi.id}" + stage = "${aws_api_gateway_deployment.dev.stage_name}" + } + + api_stages { + api_id = "${aws_api_gateway_rest_api.myapi.id}" + stage = "${aws_api_gateway_deployment.prod.stage_name}" + } + + quota_settings { + limit = 20 + offset = 2 + period = "WEEK" + } + + throttle_settings { + burst_limit = 5 + rate_limit = 10 + } +} +``` + +## Argument Reference + +The API Gateway Usage Plan argument layout is a structure composed of several sub-resources - these resources are laid out below. + +### Top-Level Arguments + +* `name` - (Required) The name of the usage plan. +* `description` - (Required) The description of a usage plan. +* `api_stages` - (Optional) The associated [API stages](#api-stages-arguments) of the usage plan. +* `quota_settings` - (Optional) The [quota settings](#quota-settings-arguments) of the usage plan. +* `throttle_settings` - (Optional) The [throttling limits](#throttling-settings-arguments) of the usage plan. +* `product_code` - (Optional) The AWS Marketplace product identifier to associate with the usage plan as a SaaS product on AWS Marketplace. + +#### API Stages Arguments + + * `api_id` (Optional) - API Id of the associated API stage in a usage plan. + * `stage` (Optional) - API stage name of the associated API stage in a usage plan. + +#### Quota Settings Arguments + + * `limit` (Optional) - The maximum number of requests that can be made in a given time period. + * `offset` (Optional) - The number of requests subtracted from the given limit in the initial time period. + * `period` (Optional) - The time period in which the limit applies. Valid values are "DAY", "WEEK" or "MONTH". + +#### Throttling Settings Arguments + + * `burst_limit` (Optional) - The API request burst limit, the maximum rate limit over a time ranging from one to a few seconds, depending upon whether the underlying token bucket is at its full capacity. + * `rate_limit` (Optional) - The API request steady-state rate limit. + +## Attributes Reference + +The following attributes are exported: + +* `id` - The ID of the usage plan +* `name` - The name of the usage plan. +* `description` - The description of a usage plan. +* `api_stages` - The associated API stages of the usage plan. +* `quota_settings` - The quota of the usage plan. +* `throttle_settings` - The throttling limits of the usage plan. +* `product_code` - The AWS Marketplace product identifier to associate with the usage plan as a SaaS product on AWS Marketplace. + +## Import + +AWS API Gateway Usage Plan can be imported using the `id`, e.g. 
+ +``` +$ terraform import aws_api_gateway_usage_plan.myusageplan <usage-plan-id> +``` diff --git a/website/source/docs/providers/aws/r/api_gateway_usage_plan_key.html.markdown b/website/source/docs/providers/aws/r/api_gateway_usage_plan_key.html.markdown new file mode 100644 index 0000000000..0a4293eea0 --- /dev/null +++ b/website/source/docs/providers/aws/r/api_gateway_usage_plan_key.html.markdown @@ -0,0 +1,59 @@ +--- +layout: "aws" +page_title: "AWS: aws_api_gateway_usage_plan_key" +sidebar_current: "docs-aws-resource-api-gateway-usage-plan-key" +description: |- + Provides an API Gateway Usage Plan Key. +--- + +# aws\_api\_gateway\_usage\_plan\_key + +Provides an API Gateway Usage Plan Key. + +## Example Usage + +``` +resource "aws_api_gateway_rest_api" "test" { + name = "MyDemoAPI" +} + +... + +resource "aws_api_gateway_usage_plan" "myusageplan" { + name = "my_usage_plan" +} + +resource "aws_api_gateway_api_key" "mykey" { + name = "my_key" + + stage_key { + rest_api_id = "${aws_api_gateway_rest_api.test.id}" + stage_name = "${aws_api_gateway_deployment.foo.stage_name}" + } +} + +resource "aws_api_gateway_usage_plan_key" "main" { + key_id = "${aws_api_gateway_api_key.mykey.id}" + key_type = "API_KEY" + usage_plan_id = "${aws_api_gateway_usage_plan.myusageplan.id}" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `key_id` - (Required) The identifier of the API key resource. +* `key_type` - (Required) The type of the API key resource. Currently, the valid key type is API_KEY. +* `usage_plan_id` - (Required) The ID of the usage plan to associate the key with. + +## Attributes Reference + +The following attributes are exported: + +* `id` - The ID of a usage plan key. +* `key_id` - The identifier of the API key resource. +* `key_type` - The type of the API key resource. Currently, the valid key type is API_KEY. +* `usage_plan_id` - The ID of the usage plan. +* `name` - The name of a usage plan key. +* `value` - The value of a usage plan key. diff --git a/website/source/docs/providers/aws/r/autoscaling_attachment.html.markdown b/website/source/docs/providers/aws/r/autoscaling_attachment.html.markdown index 6e966bade7..d12560da25 100644 --- a/website/source/docs/providers/aws/r/autoscaling_attachment.html.markdown +++ b/website/source/docs/providers/aws/r/autoscaling_attachment.html.markdown @@ -16,6 +16,7 @@ an ELB), and an [AutoScaling Group resource](autoscaling_group.html) with `load_balancers` defined in-line. At this time you cannot use an ASG with in-line load balancers in conjunction with an ASG Attachment resource. Doing so will cause a conflict and will overwrite attachments. + ## Example Usage ``` @@ -26,10 +27,19 @@ resource "aws_autoscaling_attachment" "asg_attachment_bar" { } ``` +``` +# Create a new ALB Target Group attachment +resource "aws_autoscaling_attachment" "asg_attachment_bar" { + autoscaling_group_name = "${aws_autoscaling_group.asg.id}" + alb_target_group_arn = "${aws_alb_target_group.test.arn}" +} +``` + ## Argument Reference + The following arguments are supported: * `autoscaling_group_name` - (Required) Name of ASG to associate with the ELB. -* `elb` - (Required) The name of the ELB. +* `elb` - (Optional) The name of the ELB. +* `alb_target_group_arn` - (Optional) The ARN of an ALB Target Group. 
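The ALB attachment example above assumes a target group and ASG defined elsewhere; a fuller sketch, with hypothetical names and AMI, might look like:

```
resource "aws_alb_target_group" "test" {
  name     = "asg-example"
  port     = 80
  protocol = "HTTP"
  vpc_id   = "${aws_vpc.main.id}" # assumes a VPC defined elsewhere
}

resource "aws_launch_configuration" "lc" {
  image_id      = "ami-123456" # hypothetical AMI
  instance_type = "t2.micro"
}

resource "aws_autoscaling_group" "asg" {
  # No in-line load_balancers here, per the conflict note above.
  availability_zones   = ["us-east-1a"]
  launch_configuration = "${aws_launch_configuration.lc.name}"
  min_size             = 1
  max_size             = 2
}

resource "aws_autoscaling_attachment" "asg_attachment_bar" {
  autoscaling_group_name = "${aws_autoscaling_group.asg.id}"
  alb_target_group_arn   = "${aws_alb_target_group.test.arn}"
}
```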
diff --git a/website/source/docs/providers/aws/r/autoscaling_group.html.markdown b/website/source/docs/providers/aws/r/autoscaling_group.html.markdown index 5aad4c8abe..0ceacc2ba6 100644 --- a/website/source/docs/providers/aws/r/autoscaling_group.html.markdown +++ b/website/source/docs/providers/aws/r/autoscaling_group.html.markdown @@ -64,7 +64,9 @@ EOF The following arguments are supported: -* `name` - (Optional) The name of the auto scale group. By default generated by terraform. +* `name` - (Optional) The name of the auto scaling group. By default generated by Terraform. +* `name_prefix` - (Optional) Creates a unique name beginning with the specified + prefix. Conflicts with `name`. * `max_size` - (Required) The maximum size of the auto scale group. * `min_size` - (Required) The minimum size of the auto scale group. (See also [Waiting for Capacity](#waiting-for-capacity) below.) diff --git a/website/source/docs/providers/aws/r/db_instance.html.markdown b/website/source/docs/providers/aws/r/db_instance.html.markdown index eafba74177..cc957a9ea5 100644 --- a/website/source/docs/providers/aws/r/db_instance.html.markdown +++ b/website/source/docs/providers/aws/r/db_instance.html.markdown @@ -31,6 +31,7 @@ for more information. ``` resource "aws_db_instance" "default" { allocated_storage = 10 + storage_type = "gp2" engine = "mysql" engine_version = "5.6.17" instance_class = "db.t1.micro" @@ -56,7 +57,7 @@ The following arguments are supported: * `instance_class` - (Required) The instance type of the RDS instance. * `storage_type` - (Optional) One of "standard" (magnetic), "gp2" (general purpose SSD), or "io1" (provisioned IOPS SSD). The default is "io1" if - `iops` is specified, "standard" if not. + `iops` is specified, "standard" if not. Note that this behaviour is different from the AWS web console, where the default is "gp2". * `final_snapshot_identifier` - (Optional) The name of your final DB snapshot when this DB instance is deleted. If omitted, no final snapshot will be made. diff --git a/website/source/docs/providers/aws/r/default_route_table.html.markdown b/website/source/docs/providers/aws/r/default_route_table.html.markdown index 8efb284469..4ef364ea2c 100644 --- a/website/source/docs/providers/aws/r/default_route_table.html.markdown +++ b/website/source/docs/providers/aws/r/default_route_table.html.markdown @@ -68,6 +68,8 @@ The following arguments are supported: Each route supports the following: * `cidr_block` - (Required) The CIDR block of the route. +* `ipv6_cidr_block` - (Optional) The IPv6 CIDR block of the route. +* `egress_only_gateway_id` - (Optional) The Egress Only Internet Gateway ID. * `gateway_id` - (Optional) The Internet Gateway ID. * `nat_gateway_id` - (Optional) The NAT Gateway ID. * `instance_id` - (Optional) The EC2 instance ID. 
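The new IPv6 route keys on `aws_default_route_table` mirror those on `aws_route_table`; a minimal sketch, with hypothetical resource names, routing IPv6 traffic through an egress-only gateway on the VPC's default route table:

```
resource "aws_vpc" "foo" {
  cidr_block                       = "10.1.0.0/16"
  assign_generated_ipv6_cidr_block = true
}

resource "aws_egress_only_internet_gateway" "foo" {
  vpc_id = "${aws_vpc.foo.id}"
}

resource "aws_default_route_table" "r" {
  # Adopt the VPC's automatically-created default route table.
  default_route_table_id = "${aws_vpc.foo.default_route_table_id}"

  route {
    ipv6_cidr_block        = "::/0"
    egress_only_gateway_id = "${aws_egress_only_internet_gateway.foo.id}"
  }
}
```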
diff --git a/website/source/docs/providers/aws/r/ecs_service.html.markdown b/website/source/docs/providers/aws/r/ecs_service.html.markdown index c141b219a4..8e8c9716f4 100644 --- a/website/source/docs/providers/aws/r/ecs_service.html.markdown +++ b/website/source/docs/providers/aws/r/ecs_service.html.markdown @@ -27,7 +27,7 @@ resource "aws_ecs_service" "mongo" { placement_strategy { type = "binpack" - field = "CPU" + field = "cpu" } load_balancer { diff --git a/website/source/docs/providers/aws/r/ecs_task_definition.html.markdown b/website/source/docs/providers/aws/r/ecs_task_definition.html.markdown index 899a2d1e30..0db84067aa 100644 --- a/website/source/docs/providers/aws/r/ecs_task_definition.html.markdown +++ b/website/source/docs/providers/aws/r/ecs_task_definition.html.markdown @@ -30,7 +30,7 @@ resource "aws_ecs_task_definition" "service" { ``` The referenced `task-definitions/service.json` file contains a valid JSON document, -which is show below, and its content is going to be passed directly into the +which is shown below, and its content is going to be passed directly into the `container_definitions` attribute as a string. Please note that this example contains only a small subset of the available parameters. @@ -69,7 +69,7 @@ contains only a small subset of the available parameters. The following arguments are supported: -* `family` - (Required) An unique name for your task definition. +* `family` - (Required) A unique name for your task definition. * `container_definitions` - (Required) A list of valid [container definitions] (http://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_ContainerDefinition.html) provided as a single valid JSON document. Please note that you should only provide values that are part of the container diff --git a/website/source/docs/providers/aws/r/elb.html.markdown b/website/source/docs/providers/aws/r/elb.html.markdown index ee0bbdd4b3..f682817d0c 100644 --- a/website/source/docs/providers/aws/r/elb.html.markdown +++ b/website/source/docs/providers/aws/r/elb.html.markdown @@ -72,7 +72,9 @@ resource "aws_elb" "bar" { The following arguments are supported: -* `name` - (Optional) The name of the ELB. By default generated by terraform. +* `name` - (Optional) The name of the ELB. By default generated by Terraform. +* `name_prefix` - (Optional, Forces new resource) Creates a unique name beginning with the specified + prefix. Conflicts with `name`. * `access_logs` - (Optional) An Access Logs block. Access Logs documented below. * `availability_zones` - (Required for an EC2-classic ELB) The AZ's to serve traffic in. * `security_groups` - (Optional) A list of security group IDs to assign to the ELB. diff --git a/website/source/docs/providers/aws/r/emr_cluster.html.md b/website/source/docs/providers/aws/r/emr_cluster.html.md index 9568cb54bc..e2c19172d4 100644 --- a/website/source/docs/providers/aws/r/emr_cluster.html.md +++ b/website/source/docs/providers/aws/r/emr_cluster.html.md @@ -80,10 +80,10 @@ flow. Defined below the cluster nodes. Defined below * `configurations` - (Optional) List of configurations supplied for the EMR cluster you are creating * `visible_to_all_users` - (Optional) Whether the job flow is visible to all IAM users of the AWS account associated with the job flow. Default `true` +* `autoscaling_role` - (Optional) An IAM role for automatic scaling policies. The IAM role provides permissions that the automatic scaling feature requires to launch and terminate EC2 instances in an instance group. 
* `tags` - (Optional) list of tags to apply to the EMR Cluster - ## ec2\_attributes Attributes for the Amazon EC2 instances running the job flow diff --git a/website/source/docs/providers/aws/r/iam_account_alias.html.markdown b/website/source/docs/providers/aws/r/iam_account_alias.html.markdown new file mode 100644 index 0000000000..7acd088347 --- /dev/null +++ b/website/source/docs/providers/aws/r/iam_account_alias.html.markdown @@ -0,0 +1,35 @@ +--- +layout: "aws" +page_title: "AWS: aws_iam_account_alias" +sidebar_current: "docs-aws-resource-iam-account-alias" +description: |- + Manages the account alias for the AWS Account. +--- + +# aws\_iam\_account\_alias + +-> **Note:** There is only a single account alias per AWS account. + +Manages the account alias for the AWS Account. + +## Example Usage + +``` +resource "aws_iam_account_alias" "alias" { + account_alias = "my-account-alias" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `account_alias` - (Required) The account alias + +## Import + +The current Account Alias can be imported using the `account_alias`, e.g. + +``` +$ terraform import aws_iam_account_alias.alias my-account-alias +``` diff --git a/website/source/docs/providers/aws/r/iam_group_policy.html.markdown b/website/source/docs/providers/aws/r/iam_group_policy.html.markdown index c752a98fb8..09b6f963bc 100644 --- a/website/source/docs/providers/aws/r/iam_group_policy.html.markdown +++ b/website/source/docs/providers/aws/r/iam_group_policy.html.markdown @@ -45,7 +45,10 @@ The following arguments are supported: * `policy` - (Required) The policy document. This is a JSON formatted string. The heredoc syntax or `file` function is helpful here. -* `name` - (Required) Name of the policy. +* `name` - (Optional) The name of the policy. If omitted, Terraform will +assign a random, unique name. +* `name_prefix` - (Optional) Creates a unique name beginning with the specified + prefix. Conflicts with `name`. * `group` - (Required) The IAM group to attach to the policy. ## Attributes Reference diff --git a/website/source/docs/providers/aws/r/iam_instance_profile.html.markdown b/website/source/docs/providers/aws/r/iam_instance_profile.html.markdown index 9089dcf53f..73b66e6dfd 100644 --- a/website/source/docs/providers/aws/r/iam_instance_profile.html.markdown +++ b/website/source/docs/providers/aws/r/iam_instance_profile.html.markdown @@ -44,7 +44,7 @@ EOF The following arguments are supported: -* `name` - (Optional, Forces new resource) The profile's name. +* `name` - (Optional, Forces new resource) The profile's name. If omitted, Terraform will assign a random, unique name. * `name_prefix` - (Optional, Forces new resource) Creates a unique name beginning with the specified prefix. Conflicts with `name`. * `path` - (Optional, default "/") Path in which to create the profile. * `roles` - (Required) A list of role names to include in the profile. The current default is 1. If you see an error message similar to `Cannot exceed quota for InstanceSessionsPerInstanceProfile: 1`, then you must contact AWS support and ask for a limit increase. diff --git a/website/source/docs/providers/aws/r/iam_policy.html.markdown b/website/source/docs/providers/aws/r/iam_policy.html.markdown index 2d0e37036c..3f2fdc7a59 100644 --- a/website/source/docs/providers/aws/r/iam_policy.html.markdown +++ b/website/source/docs/providers/aws/r/iam_policy.html.markdown @@ -38,7 +38,7 @@ EOF The following arguments are supported: * `description` - (Optional) Description of the IAM policy. 
-* `name` - (Optional, Forces new resource) The name of the policy. +* `name` - (Optional, Forces new resource) The name of the policy. If omitted, Terraform will assign a random, unique name. * `name_prefix` - (Optional, Forces new resource) Creates a unique name beginning with the specified prefix. Conflicts with `name`. * `path` - (Optional, default "/") Path in which to create the policy. See [IAM Identifiers](https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html) for more information. @@ -64,4 +64,4 @@ IAM Policies can be imported using the `arn`, e.g. ``` $ terraform import aws_iam_policy.administrator arn:aws:iam::123456789012:policy/UsersManageOwnCredentials -``` \ No newline at end of file +``` diff --git a/website/source/docs/providers/aws/r/iam_role.html.markdown b/website/source/docs/providers/aws/r/iam_role.html.markdown index 4064a73517..7abee66b03 100644 --- a/website/source/docs/providers/aws/r/iam_role.html.markdown +++ b/website/source/docs/providers/aws/r/iam_role.html.markdown @@ -38,7 +38,7 @@ EOF The following arguments are supported: -* `name` - (Optional, Forces new resource) The name of the role. +* `name` - (Optional, Forces new resource) The name of the role. If omitted, Terraform will assign a random, unique name. * `name_prefix` - (Optional, Forces new resource) Creates a unique name beginning with the specified prefix. Conflicts with `name`. * `assume_role_policy` - (Required) The policy that grants an entity permission to assume the role. diff --git a/website/source/docs/providers/aws/r/iam_role_policy.html.markdown b/website/source/docs/providers/aws/r/iam_role_policy.html.markdown index adf96c3ad9..c05db5e515 100644 --- a/website/source/docs/providers/aws/r/iam_role_policy.html.markdown +++ b/website/source/docs/providers/aws/r/iam_role_policy.html.markdown @@ -58,7 +58,10 @@ EOF The following arguments are supported: -* `name` - (Required) The name of the role policy. +* `name` - (Optional) The name of the role policy. If omitted, Terraform will +assign a random, unique name. +* `name_prefix` - (Optional) Creates a unique name beginning with the specified + prefix. Conflicts with `name`. * `policy` - (Required) The policy document. This is a JSON formatted string. The heredoc syntax or `file` function is helpful here. * `role` - (Required) The IAM role to attach to the policy. diff --git a/website/source/docs/providers/aws/r/iam_user.html.markdown b/website/source/docs/providers/aws/r/iam_user.html.markdown index b054002a2a..4a7bd77dbc 100644 --- a/website/source/docs/providers/aws/r/iam_user.html.markdown +++ b/website/source/docs/providers/aws/r/iam_user.html.markdown @@ -57,8 +57,9 @@ The following arguments are supported: The following attributes are exported: -* `unique_id` - The [unique ID][1] assigned by AWS. * `arn` - The ARN assigned by AWS for this user. +* `name` - The user's name. +* `unique_id` - The [unique ID][1] assigned by AWS. [1]: https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html#GUIDs diff --git a/website/source/docs/providers/aws/r/iam_user_policy.html.markdown b/website/source/docs/providers/aws/r/iam_user_policy.html.markdown index bc9b0d42a9..4f3caaead0 100644 --- a/website/source/docs/providers/aws/r/iam_user_policy.html.markdown +++ b/website/source/docs/providers/aws/r/iam_user_policy.html.markdown @@ -49,7 +49,8 @@ The following arguments are supported: * `policy` - (Required) The policy document. This is a JSON formatted string. The heredoc syntax or `file` function is helpful here. 
-* `name` - (Required) Name of the policy. +* `name` - (Optional) The name of the policy. If omitted, Terraform will assign a random, unique name. +* `name_prefix` - (Optional, Forces new resource) Creates a unique name beginning with the specified prefix. Conflicts with `name`. * `user` - (Required) IAM user to which to attach this policy. ## Attributes Reference diff --git a/website/source/docs/providers/aws/r/key_pair.html.markdown b/website/source/docs/providers/aws/r/key_pair.html.markdown index 154615e143..acb49999e9 100644 --- a/website/source/docs/providers/aws/r/key_pair.html.markdown +++ b/website/source/docs/providers/aws/r/key_pair.html.markdown @@ -10,7 +10,7 @@ description: |- Provides an [EC2 key pair](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html) resource. A key pair is used to control login access to EC2 instances. -Currently this resource only supports importing a user-supplied key pair, not the creation of a new key pair. +Currently this resource requires an existing user-supplied key pair. This key pair's public key will be registered with AWS to allow logging in to EC2 instances. When importing an existing key pair, the public key material may be in any format supported by AWS. Supported formats (per the [AWS documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html#how-to-generate-your-own-key-and-import-it-to-aws)) are: diff --git a/website/source/docs/providers/aws/r/kinesis_firehose_delivery_stream.html.markdown b/website/source/docs/providers/aws/r/kinesis_firehose_delivery_stream.html.markdown index 137cb81434..1c2cde2d4f 100644 --- a/website/source/docs/providers/aws/r/kinesis_firehose_delivery_stream.html.markdown +++ b/website/source/docs/providers/aws/r/kinesis_firehose_delivery_stream.html.markdown @@ -82,7 +82,7 @@ resource "aws_kinesis_firehose_delivery_stream" "test_stream" { username = "testuser" password = "T3stPass" data_table_name = "test-table" - copy_options = "GZIP" + copy_options = "delimiter '|'" # the default delimiter data_table_columns = "test-col" } } @@ -152,7 +152,7 @@ The `redshift_configuration` object supports the following: * `retry_duration` - (Optional) The length of time during which Firehose retries delivery after a failure, starting from the initial request and including the first attempt. The default value is 3600 seconds (60 minutes). Firehose does not retry if the value of DurationInSeconds is 0 (zero) or if the first delivery attempt takes longer than the current value. * `role_arn` - (Required) The arn of the role the stream assumes. * `data_table_name` - (Required) The name of the table in the redshift cluster that the s3 bucket will copy to. -* `copy_options` - (Optional) Copy options for copying the data from the s3 intermediate bucket into redshift. +* `copy_options` - (Optional) Copy options for copying the data from the s3 intermediate bucket into redshift, for example to change the default delimiter. For valid values, see the [AWS documentation](http://docs.aws.amazon.com/firehose/latest/APIReference/API_CopyCommand.html). * `data_table_columns` - (Optional) The data table columns that will be targeted by the copy command. * `cloudwatch_logging_options` - (Optional) The CloudWatch Logging Options for the delivery stream. 
More details are given below diff --git a/website/source/docs/providers/aws/r/kms_key.html.markdown b/website/source/docs/providers/aws/r/kms_key.html.markdown index d0c89be02a..9a494977f2 100644 --- a/website/source/docs/providers/aws/r/kms_key.html.markdown +++ b/website/source/docs/providers/aws/r/kms_key.html.markdown @@ -32,6 +32,7 @@ The following arguments are supported: * `is_enabled` - (Optional) Specifies whether the key is enabled. Defaults to true. * `enable_key_rotation` - (Optional) Specifies whether [key rotation](http://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.html) is enabled. Defaults to false. +* `tags` - (Optional) A mapping of tags to assign to the object. ## Attributes Reference diff --git a/website/source/docs/providers/aws/r/network_acl.html.markdown b/website/source/docs/providers/aws/r/network_acl.html.markdown index b9505c5dcc..28143f0ccf 100644 --- a/website/source/docs/providers/aws/r/network_acl.html.markdown +++ b/website/source/docs/providers/aws/r/network_acl.html.markdown @@ -63,6 +63,7 @@ Both `egress` and `ingress` support the following keys: protocol, you must specify a from and to port of 0. * `cidr_block` - (Optional) The CIDR block to match. This must be a valid network mask. +* `ipv6_cidr_block` - (Optional) The IPv6 CIDR block. * `icmp_type` - (Optional) The ICMP type to be used. Default 0. * `icmp_code` - (Optional) The ICMP type code to be used. Default 0. diff --git a/website/source/docs/providers/aws/r/network_acl_rule.html.markdown b/website/source/docs/providers/aws/r/network_acl_rule.html.markdown index 85a12afdb6..87912249a8 100644 --- a/website/source/docs/providers/aws/r/network_acl_rule.html.markdown +++ b/website/source/docs/providers/aws/r/network_acl_rule.html.markdown @@ -29,6 +29,8 @@ resource "aws_network_acl_rule" "bar" { } ``` +~> **Note:** One of either `cidr_block` or `ipv6_cidr_block` is required. + ## Argument Reference The following arguments are supported: @@ -38,7 +40,8 @@ The following arguments are supported: * `egress` - (Optional, bool) Indicates whether this is an egress rule (rule is applied to traffic leaving the subnet). Default `false`. * `protocol` - (Required) The protocol. A value of -1 means all protocols. * `rule_action` - (Required) Indicates whether to allow or deny the traffic that matches the rule. Accepted values: `allow` | `deny` -* `cidr_block` - (Required) The network range to allow or deny, in CIDR notation (for example 172.16.0.0/24 ). +* `cidr_block` - (Optional) The network range to allow or deny, in CIDR notation (for example 172.16.0.0/24 ). +* `ipv6_cidr_block` - (Optional) The IPv6 CIDR block to allow or deny. * `from_port` - (Optional) The from port to match. * `to_port` - (Optional) The to port to match. * `icmp_type` - (Optional) ICMP protocol: The ICMP type. Required if specifying ICMP for the protocol. e.g. 
-1 diff --git a/website/source/docs/providers/aws/r/route.html.markdown b/website/source/docs/providers/aws/r/route.html.markdown index c33c1c6bea..b6589298ff 100644 --- a/website/source/docs/providers/aws/r/route.html.markdown +++ b/website/source/docs/providers/aws/r/route.html.markdown @@ -27,19 +27,40 @@ resource "aws_route" "r" { } ``` +## Example IPv6 Usage + +``` +resource "aws_vpc" "vpc" { + cidr_block = "10.1.0.0/16" + assign_generated_ipv6_cidr_block = true +} + +resource "aws_egress_only_internet_gateway" "egress" { + vpc_id = "${aws_vpc.vpc.id}" +} + +resource "aws_route" "r" { + route_table_id = "rtb-4fbb3ac4" + destination_ipv6_cidr_block = "::/0" + egress_only_gateway_id = "${aws_egress_only_internet_gateway.egress.id}" +} +``` + ## Argument Reference The following arguments are supported: * `route_table_id` - (Required) The ID of the routing table. -* `destination_cidr_block` - (Required) The destination CIDR block. +* `destination_cidr_block` - (Optional) The destination CIDR block. +* `destination_ipv6_cidr_block` - (Optional) The destination IPv6 CIDR block. * `vpc_peering_connection_id` - (Optional) An ID of a VPC peering connection. +* `egress_only_gateway_id` - (Optional) An ID of a VPC Egress Only Internet Gateway. * `gateway_id` - (Optional) An ID of a VPC internet gateway or a virtual private gateway. * `nat_gateway_id` - (Optional) An ID of a VPC NAT gateway. * `instance_id` - (Optional) An ID of an EC2 instance. * `network_interface_id` - (Optional) An ID of a network interface. -Each route must contain either a `gateway_id`, a `nat_gateway_id`, an +Each route must contain either a `gateway_id`, an `egress_only_gateway_id`, a `nat_gateway_id`, an `instance_id`, a `vpc_peering_connection_id`, or a `network_interface_id`. Note that the default route, mapping the VPC's CIDR block to "local", is created implicitly and cannot be specified. @@ -53,7 +74,9 @@ will be exported as an attribute once the resource is created. * `route_table_id` - The ID of the routing table. * `destination_cidr_block` - The destination CIDR block. +* `destination_ipv6_cidr_block` - The destination IPv6 CIDR block. * `vpc_peering_connection_id` - An ID of a VPC peering connection. +* `egress_only_gateway_id` - An ID of a VPC Egress Only Internet Gateway. * `gateway_id` - An ID of a VPC internet gateway or a virtual private gateway. * `nat_gateway_id` - An ID of a VPC NAT gateway. * `instance_id` - An ID of a NAT instance. diff --git a/website/source/docs/providers/aws/r/route_table.html.markdown b/website/source/docs/providers/aws/r/route_table.html.markdown index 679dbe2da7..f8e79d6fa5 100644 --- a/website/source/docs/providers/aws/r/route_table.html.markdown +++ b/website/source/docs/providers/aws/r/route_table.html.markdown @@ -27,6 +27,11 @@ resource "aws_route_table" "r" { gateway_id = "${aws_internet_gateway.main.id}" } + route { + ipv6_cidr_block = "::/0" + egress_only_gateway_id = "${aws_egress_only_internet_gateway.foo.id}" + } + tags { Name = "main" } @@ -44,7 +49,9 @@ The following arguments are supported: Each route supports the following: -* `cidr_block` - (Required) The CIDR block of the route. +* `cidr_block` - (Optional) The CIDR block of the route. +* `ipv6_cidr_block` - (Optional) The IPv6 CIDR block of the route. +* `egress_only_gateway_id` - (Optional) The Egress Only Internet Gateway ID. * `gateway_id` - (Optional) The Internet Gateway ID. * `nat_gateway_id` - (Optional) The NAT Gateway ID. * `instance_id` - (Optional) The EC2 instance ID. 
diff --git a/website/source/docs/providers/aws/r/security_group.html.markdown b/website/source/docs/providers/aws/r/security_group.html.markdown index 50aa380756..7404de5885 100644 --- a/website/source/docs/providers/aws/r/security_group.html.markdown +++ b/website/source/docs/providers/aws/r/security_group.html.markdown @@ -85,6 +85,7 @@ assign a random, unique name The `ingress` block supports: * `cidr_blocks` - (Optional) List of CIDR blocks. +* `ipv6_cidr_blocks` - (Optional) List of IPv6 CIDR blocks. * `from_port` - (Required) The start port (or ICMP type number if protocol is "icmp") * `protocol` - (Required) The protocol. If you select a protocol of "-1" (semantically equivalent to `"all"`, which is not a valid value here), you must specify a "from_port" and "to_port" equal to 0. If not icmp, tcp, udp, or "-1" use the [protocol number](https://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml) @@ -97,6 +98,7 @@ The `ingress` block supports: The `egress` block supports: * `cidr_blocks` - (Optional) List of CIDR blocks. +* `ipv6_cidr_blocks` - (Optional) List of IPv6 CIDR blocks. * `prefix_list_ids` - (Optional) List of prefix list IDs (for allowing access to VPC endpoints) * `from_port` - (Required) The start port (or ICMP type number if protocol is "icmp") * `protocol` - (Required) The protocol. If you select a protocol of diff --git a/website/source/docs/providers/aws/r/security_group_rule.html.markdown b/website/source/docs/providers/aws/r/security_group_rule.html.markdown index db123effe2..56deb5b47b 100644 --- a/website/source/docs/providers/aws/r/security_group_rule.html.markdown +++ b/website/source/docs/providers/aws/r/security_group_rule.html.markdown @@ -42,6 +42,7 @@ The following arguments are supported: * `type` - (Required) The type of rule being created. Valid options are `ingress` (inbound) or `egress` (outbound). * `cidr_blocks` - (Optional) List of CIDR blocks. Cannot be specified with `source_security_group_id`. +* `ipv6_cidr_blocks` - (Optional) List of IPv6 CIDR blocks. * `prefix_list_ids` - (Optional) List of prefix list IDs (for allowing access to VPC endpoints). Only valid with `egress`. * `from_port` - (Required) The start port (or ICMP type number if protocol is "icmp"). diff --git a/website/source/docs/providers/aws/r/spot_fleet_request.html.markdown b/website/source/docs/providers/aws/r/spot_fleet_request.html.markdown index 829c69a635..b4ac14a99d 100644 --- a/website/source/docs/providers/aws/r/spot_fleet_request.html.markdown +++ b/website/source/docs/providers/aws/r/spot_fleet_request.html.markdown @@ -82,6 +82,7 @@ Most of these arguments directly correspond to the Spot instances on your behalf when you cancel its Spot fleet request using CancelSpotFleetRequests or when the Spot fleet request expires, if you set terminateInstancesWithExpiration. +* `replace_unhealthy_instances` - (Optional) Indicates whether Spot fleet should replace unhealthy instances. Default `false`. * `launch_specification` - Used to define the launch configuration of the spot-fleet request. Can be specified multiple times to define different bids across different markets and instance types. 
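Tying the new `ipv6_cidr_blocks` argument above together, a minimal standalone-rule sketch; the referenced security group is hypothetical:

```
resource "aws_security_group_rule" "allow_https_v6" {
  type              = "ingress"
  from_port         = 443
  to_port           = 443
  protocol          = "tcp"
  ipv6_cidr_blocks  = ["::/0"] # all IPv6 sources
  security_group_id = "${aws_security_group.web.id}" # assumes an SG defined elsewhere
}
```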
diff --git a/website/source/docs/providers/aws/r/sqs_queue_policy.html.markdown b/website/source/docs/providers/aws/r/sqs_queue_policy.html.markdown index 32a4ae52ce..4d96e00e0f 100644 --- a/website/source/docs/providers/aws/r/sqs_queue_policy.html.markdown +++ b/website/source/docs/providers/aws/r/sqs_queue_policy.html.markdown @@ -48,5 +48,5 @@ POLICY The following arguments are supported: -* `queue_url` - (Required) The URL of the SNS Queue to which to attach the policy +* `queue_url` - (Required) The URL of the SQS Queue to which to attach the policy * `policy` - (Required) The JSON policy for the SQS queue diff --git a/website/source/docs/providers/azurerm/r/virtual_machine.html.markdown b/website/source/docs/providers/azurerm/r/virtual_machine.html.markdown index 53763254f8..4633ee966e 100644 --- a/website/source/docs/providers/azurerm/r/virtual_machine.html.markdown +++ b/website/source/docs/providers/azurerm/r/virtual_machine.html.markdown @@ -316,6 +316,7 @@ The following arguments are supported: * `os_profile_linux_config` - (Required, when a linux machine) A Linux config block as documented below. * `os_profile_secrets` - (Optional) A collection of Secret blocks as documented below. * `network_interface_ids` - (Required) Specifies the list of resource IDs for the network interfaces associated with the virtual machine. +* `primary_network_interface_id` - (Optional) Specifies the resource ID for the primary network interface associated with the virtual machine. * `tags` - (Optional) A mapping of tags to assign to the resource. For more information on the different example configurations, please check out the [azure documentation](https://msdn.microsoft.com/en-us/library/mt163591.aspx#Anchor_2) @@ -398,7 +399,7 @@ For more information on the different example configurations, please check out t * `disable_password_authentication` - (Required) Specifies whether password authentication should be disabled. * `ssh_keys` - (Optional) Specifies a collection of `path` and `key_data` to be placed on the virtual machine. -~> **Note:** Please note that the only allowed `path` is `/home//.ssh/authorized_keys` due to a limitation of Azure_ +~> **Note:** Please note that the only allowed `path` is `/home//.ssh/authorized_keys` due to a limitation of Azure. `os_profile_secrets` supports the following: diff --git a/website/source/docs/providers/azurerm/r/virtual_machine_scale_sets.html.markdown b/website/source/docs/providers/azurerm/r/virtual_machine_scale_sets.html.markdown index fe30e7b90a..adfe9dd769 100644 --- a/website/source/docs/providers/azurerm/r/virtual_machine_scale_sets.html.markdown +++ b/website/source/docs/providers/azurerm/r/virtual_machine_scale_sets.html.markdown @@ -122,6 +122,7 @@ The following arguments are supported: * `network_profile` - (Required) A collection of network profile block as documented below. * `storage_profile_os_disk` - (Required) A storage profile os disk block as documented below * `storage_profile_image_reference` - (Optional) A storage profile image reference block as documented below. +* `extension` - (Optional) Can be specified multiple times to add extension profiles to the scale set. Each `extension` block supports the fields documented below. * `tags` - (Optional) A mapping of tags to assign to the resource. @@ -206,6 +207,16 @@ The following arguments are supported: * `sku` - (Required) Specifies the SKU of the image used to create the virtual machines. * `version` - (Optional) Specifies the version of the image used to create the virtual machines. 
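For orientation, a typical `storage_profile_os_disk` companion, the `storage_profile_image_reference` block, pairs these fields as shown in this illustrative sketch (the Canonical Ubuntu image is an example; `publisher` and `offer` are assumed from the fuller attribute list above):

```
storage_profile_image_reference {
  publisher = "Canonical"
  offer     = "UbuntuServer"
  sku       = "16.04-LTS"
  version   = "latest"
}
```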
+`extension` supports the following: + +* `name` - (Required) Specifies the name of the extension. +* `publisher` - (Required) The publisher of the extension; available publishers can be found using the Azure CLI. +* `type` - (Required) The type of extension; available types for a publisher can be found using the Azure CLI. +* `type_handler_version` - (Required) Specifies the version of the extension to use; available versions can be found using the Azure CLI. +* `auto_upgrade_minor_version` - (Optional) Specifies whether or not to use the latest minor version available. +* `settings` - (Required) The settings passed to the extension; these are specified as a JSON object in a string. +* `protected_settings` - (Optional) The protected_settings passed to the extension; like `settings`, these are specified as a JSON object in a string. + ## Attributes Reference The following attributes are exported: diff --git a/website/source/docs/providers/circonus/d/account.html.markdown b/website/source/docs/providers/circonus/d/account.html.markdown new file mode 100644 index 0000000000..e1560c3aa4 --- /dev/null +++ b/website/source/docs/providers/circonus/d/account.html.markdown @@ -0,0 +1,82 @@ +--- +layout: "circonus" +page_title: "Circonus: account" +sidebar_current: "docs-circonus-datasource-account" +description: |- + Provides details about a specific Circonus Account. +--- + +# circonus_account + +`circonus_account` provides +[details](https://login.circonus.com/resources/api/calls/account) about a specific +[Circonus Account](https://login.circonus.com/user/docs/Administration/Account). + +The `circonus_account` data source can be used for pulling various attributes +about a specific Circonus Account. + +## Example Usage + +The following example shows how the data source might be used to obtain the metrics +usage and limit of a given Circonus Account. + +``` +data "circonus_account" "current" { + current = true +} +``` + +## Argument Reference + +The arguments of this data source act as filters for querying the available +accounts. The given filters must match exactly one account whose data will be +exported as attributes. + +* `id` - (Optional) The Circonus ID of a given account. +* `current` - (Optional) Automatically use the current Circonus Account attached + to the API token making the request. + +At least one of the above attributes should be provided when searching for an +account. + +## Attributes Reference + +The following attributes are exported: + +* `address1` - The first line of the address associated with the account. + +* `address2` - The second line of the address associated with the account. + +* `cc_email` - An optionally specified email address used in the CC line of invoices. + +* `id` - The Circonus ID of the selected Account. + +* `city` - The city part of the address associated with the account. + +* `contact_groups` - A list of IDs for each contact group in the account. + +* `country` - The country of the user's address. + +* `description` - Description of the account. + +* `invites` - A list of users invited to use the platform. Each element in the + list has both an `email` and `role` attribute. + +* `name` - The name of the account. + +* `owner` - The Circonus ID of the user who owns this account. + +* `state_prov` - The state or province of the address associated with the account. + +* `timezone` - The timezone in which events will be displayed in the web interface + for this account. + +* `ui_base_url` - The base URL of this account.
+ +* `usage` - A list of account usage limits. Each element in the list will have + a `limit` attribute, a limit `type`, and a `used` attribute. + +* `users` - A list of users who have access to this account. Each element in + the list has both an `id` and a `role`. The `id` is a Circonus ID referencing + the user. + diff --git a/website/source/docs/providers/circonus/d/collector.html.markdown b/website/source/docs/providers/circonus/d/collector.html.markdown new file mode 100644 index 0000000000..3f7be59199 --- /dev/null +++ b/website/source/docs/providers/circonus/d/collector.html.markdown @@ -0,0 +1,98 @@ +--- +layout: "circonus" +page_title: "Circonus: collector" +sidebar_current: "docs-circonus-datasource-collector" +description: |- + Provides details about a specific Circonus Collector. +--- + +# circonus_collector + +`circonus_collector` provides +[details](https://login.circonus.com/resources/api/calls/broker) about a specific +[Circonus Collector](https://login.circonus.com/user/docs/Administration/Brokers). + +As well as validating a given Circonus ID, this data source can be used to discover +additional details about a collector configured within the provider. The +results of a `circonus_collector` API call can return more than one collector +per Circonus ID. Details of each individual collector in the group of +collectors can be found via the `details` attribute described below. + +~> **NOTE regarding `circonus_collector`:** The `circonus_collector` data source +actually queries and operates on Circonus "brokers" at the broker group level. +The `circonus_collector` name is simply a rename of the Circonus "broker", chosen +to make clear what a "broker" actually does: it acts as a fan-in agent that +either pulls metrics or has metrics pushed into it, funneling them back through Circonus. + +## Example Usage + +The following example shows how the data source might be used to obtain +the name of the Circonus Collector configured on the provider. + +``` +data "circonus_collector" "ashburn" { + id = "/broker/1" +} +``` + +## Argument Reference + +The arguments of this data source act as filters for querying the available +collectors. The given filters must match exactly one collector whose data will be +exported as attributes. + +* `id` - (Optional) The Circonus ID of a given collector. + +At least one of the above attributes should be provided when searching for a +collector. + +## Attributes Reference + +The following attributes are exported: + +* `id` - The Circonus ID of the selected Collector. + +* `details` - A list of details about the individual Collector instances that + make up the group of collectors. See below for a list of attributes within + each collector. + +* `latitude` - The latitude of the selected Collector. + +* `longitude` - The longitude of the selected Collector. + +* `name` - The name of the selected Collector. + +* `tags` - A list of tags assigned to the selected Collector. + +* `type` - The type of the selected Collector. This value is either `circonus` for a + Circonus-managed, public Collector, or `enterprise` for a collector that is + private to an account. + +## Collector Details + +* `cn` - The CN of an individual Collector in the Collector Group. + +* `external_host` - The external host information for an individual Collector in + the Collector Group. This is useful when communicating with a Collector + through a NATing firewall. + +* `external_port` - The external port number for an individual Collector in the + Collector Group.
This is useful when communicating with a Collector + through a NATing firewall. + +* `ip` - The IP address of an individual Collector in the Collector Group. This is + the IP address of the interface listening on the network. + +* `min_version` - ?? + +* `modules` - A list of what modules (types of checks) this collector supports. + +* `port` - The port on which the collector responds to the Circonus HTTPS REST wire + protocol. + +* `skew` - The clock drift between this collector and the Circonus server. + +* `status` - The status of this particular collector. A string containing either + `active`, `unprovisioned`, `pending`, `provisioned`, or `retired`. + +* `version` - The version of the collector software the collector is running. diff --git a/website/source/docs/providers/circonus/index.html.markdown b/website/source/docs/providers/circonus/index.html.markdown new file mode 100644 index 0000000000..6652443d83 --- /dev/null +++ b/website/source/docs/providers/circonus/index.html.markdown @@ -0,0 +1,28 @@ +--- +layout: "circonus" +page_title: "Provider: Circonus" +sidebar_current: "docs-circonus-index" +description: |- + A provider for Circonus. +--- + +# Circonus Provider + +The Circonus provider gives the ability to manage a Circonus account. + +Use the navigation to the left to read about the available resources. + +## Usage + +``` +provider "circonus" { + key = "b8fec159-f9e5-4fe6-ad2c-dc1ec6751586" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `key` - (Required) The Circonus API Key. +* `api_url` - (Optional) The base API URL to use. The default is `https://api.circonus.com/v2`. diff --git a/website/source/docs/providers/circonus/r/check.html.markdown b/website/source/docs/providers/circonus/r/check.html.markdown new file mode 100644 index 0000000000..9c1787cc91 --- /dev/null +++ b/website/source/docs/providers/circonus/r/check.html.markdown @@ -0,0 +1,549 @@ +--- +layout: "circonus" +page_title: "Circonus: circonus_check" +sidebar_current: "docs-circonus-resource-circonus_check" +description: |- + Manages a Circonus check. +--- + +# circonus\_check + +The ``circonus_check`` resource creates and manages a +[Circonus Check](https://login.circonus.com/resources/api/calls/check_bundle). + +~> **NOTE regarding `circonus_check` vs a Circonus Check Bundle:** The +`circonus_check` resource is implemented in terms of a +[Circonus Check Bundle](https://login.circonus.com/resources/api/calls/check_bundle). +The `circonus_check` resource creates a higher-level abstraction over the implementation +of a Check Bundle. As such, the naming and structure do not map 1:1 with the +underlying Circonus API. + +## Usage + +``` +variable api_token { + default = "my-token" +} + +resource "circonus_check" "usage" { + name = "Circonus Usage Check" + + notes = <<-EOF +A check to extract a usage metric.
+EOF + + collector { + id = "/broker/1" + } + + metric { + name = "${circonus_metric.used.name}" + tags = "${circonus_metric.used.tags}" + type = "${circonus_metric.used.type}" + unit = "${circonus_metric.used.unit}" + } + + json { + url = "https://api.circonus.com/v2" + + http_headers = { + "Accept" = "application/json" + "X-Circonus-App-Name" = "TerraformCheck" + "X-Circonus-Auth-Token" = "${var.api_token}" + } + } + + period = 60 + tags = ["source:circonus", "author:terraform"] + timeout = 10 +} + +resource "circonus_metric" "used" { + name = "_usage`0`_used" + type = "numeric" + unit = "qty" + + tags = { + source = "circonus" + } +} +``` + +## Argument Reference + +* `active` - (Optional) Whether or not the check is enabled (default + `true`). + +* `caql` - (Optional) A [Circonus Analytics Query Language + (CAQL)](https://login.circonus.com/user/docs/CAQL) check. See below for + details on how to configure a `caql` check. + +* `cloudwatch` - (Optional) A [CloudWatch + check](https://login.circonus.com/user/docs/Data/CheckTypes/CloudWatch). + See below for details on how to configure a `cloudwatch` check. + +* `collector` - (Required) A collector ID. The collector(s) that are + responsible for running a `circonus_check`. The `id` can be the Circonus ID + for a Circonus collector (a.k.a. "broker") running in the cloud or an + enterprise collector running in your datacenter. One collection of metrics + will be automatically created for each `collector` specified. + +* `http` - (Optional) A poll-based HTTP check. See below for details on how to configure + the `http` check. + +* `httptrap` - (Optional) A push-based HTTP check. This check method expects + clients to send a specially crafted HTTP JSON payload. See below for details + on how to configure the `httptrap` check. + +* `icmp_ping` - (Optional) An ICMP ping check. See below for details on how to + configure the `icmp_ping` check. + +* `json` - (Optional) A JSON check. See below for details on how to configure + the `json` check. + +* `metric` - (Required) A list of one or more `metric` configurations. All + metrics obtained from this check instance will be available as individual + metric streams. See below for a list of supported `metric` attributes. + +* `metric_limit` - (Optional) Setting a metric limit will tell the Circonus + backend to periodically look at the check to see if there are additional + metrics the collector has seen that we should collect. It will not reactivate + metrics previously collected and then marked as inactive. Values are `0` to + disable, `-1` to enable all metrics, or `N+` to collect up to the value `N` + (both `-1` and `N+` cannot exceed other account restrictions). + +* `mysql` - (Optional) A MySQL check. See below for details on how to configure + the `mysql` check. + +* `name` - (Optional) The name of the check that will be displayed in the web + interface. + +* `notes` - (Optional) Notes about this check. + +* `period` - (Optional) The period between each time the check is made in + seconds. + +* `postgresql` - (Optional) A PostgreSQL check. See below for details on how to + configure the `postgresql` check. + +* `statsd` - (Optional) A statsd check. See below for details on how to + configure the `statsd` check. + +* `tags` - (Optional) A list of tags assigned to this check. + +* `target` - (Required) A string containing the location of the thing being + checked. This value changes based on the check type. For example, for an + `http` check type this would be the URL you're checking.
For a DNS check it + would be the hostname you want to look up. + +* `tcp` - (Optional) A TCP check. See below for details on how to configure the + `tcp` check (includes TLS support). + +* `timeout` - (Optional) A floating point number representing the maximum number + of seconds this check should wait for a result. Defaults to `10.0`. + +## Supported `metric` Attributes + +The following attributes are available within a `metric`. + +* `active` - (Optional) Whether or not the metric is active. Defaults to `true`. +* `name` - (Optional) The name of the metric. A string containing freeform text. +* `tags` - (Optional) A list of tags assigned to the metric. +* `type` - (Required) A string containing either `numeric`, `text`, `histogram`, `composite`, or `caql`. +* `units` - (Optional) The unit of measurement the metric represents (e.g., bytes, seconds, milliseconds). A string containing freeform text. + +## Supported Check Types + +Circonus supports a variety of different checks. Each check type has its own +set of options that must be configured. Each check type conflicts with every +other check type (i.e. a `circonus_check` configured for a `json` check will +conflict with all other check types; therefore, a `postgresql` check must be a +different `circonus_check` resource). + +### `caql` Check Type Attributes + +* `query` - (Required) The [CAQL + Query](https://login.circonus.com/user/docs/caql_reference) to run. + +Available metrics depend on the payload returned in the `caql` check. See the +[`caql` check type](https://login.circonus.com/resources/api/calls/check_bundle) for +additional details. + +### `cloudwatch` Check Type Attributes + +* `api_key` - (Required) The AWS access key. If not explicitly + set, this value is populated from the environment variable `AWS_ACCESS_KEY_ID`. + +* `api_secret` - (Required) The AWS secret key. If not explicitly + set, this value is populated from the environment variable `AWS_SECRET_ACCESS_KEY`. + +* `dimmensions` - (Required) A map of the CloudWatch dimensions to include in + the check. + +* `metric` - (Required) A list of metric names to collect in this check. + +* `namespace` - (Required) The namespace to pull parameters from. + +* `url` - (Required) The AWS URL to pull from. This should be set to the + region-specific endpoint (e.g. prefer + `https://monitoring.us-east-1.amazonaws.com` over + `https://monitoring.amazonaws.com`). + +* `version` - (Optional) The version of the CloudWatch API to use. Defaults to + `2010-08-01`. + +Available metrics depend on the payload returned in the `cloudwatch` check. See the +[`cloudwatch` check type](https://login.circonus.com/resources/api/calls/check_bundle) for +additional details. The `circonus_check` `period` attribute must be set to +either `60s` or `300s` for CloudWatch metrics.
+ +Example CloudWatch check (partial metrics collection): + +``` +variable "cloudwatch_rds_tags" { + type = "list" + default = [ + "app:postgresql", + "app:rds", + "source:cloudwatch", + ] +} + +resource "circonus_check" "rds_metrics" { + active = true + name = "Terraform test: RDS Metrics via CloudWatch" + notes = "Collect RDS metrics" + period = "60s" + + collector { + id = "/broker/1" + } + + cloudwatch { + dimmensions = { + DBInstanceIdentifier = "my-db-name", + } + + metric = [ + "CPUUtilization", + "DatabaseConnections", + ] + + namespace = "AWS/RDS" + url = "https://monitoring.us-east-1.amazonaws.com" + } + + metric { + name = "CPUUtilization" + tags = [ "${var.cloudwatch_rds_tags}" ] + type = "numeric" + unit = "%" + } + + metric { + name = "DatabaseConnections" + tags = [ "${var.cloudwatch_rds_tags}" ] + type = "numeric" + unit = "connections" + } +} +``` + +### `http` Check Type Attributes + +* `auth_method` - (Optional) HTTP Authentication method to use. When set, it must + be one of `Basic`, `Digest`, or `Auto`. + +* `auth_password` - (Optional) The password to use during authentication. + +* `auth_user` - (Optional) The user to authenticate as. + +* `body_regexp` - (Optional) This regular expression is matched against the body + of the response. If a match is not found, the check will be marked as "bad." + +* `ca_chain` - (Optional) A path to a file containing all the certificate + authorities that should be loaded to validate the remote certificate (for TLS + checks). + +* `certificate_file` - (Optional) A path to a file containing the client + certificate that will be presented to the remote server (for TLS checks). + +* `ciphers` - (Optional) A list of ciphers to be used in the TLS protocol (for + HTTPS checks). + +* `code` - (Optional) The HTTP code that is expected. If the code received does + not match this regular expression, the check is marked as "bad." + +* `extract` - (Optional) This regular expression is matched against the body of + the response globally. The first capturing match is the key and the second + capturing match is the value. Each key/value extracted is registered as a + metric for the check. + +* `headers` - (Optional) A map of the HTTP headers to be sent when executing the + check. + +* `key_file` - (Optional) A path to a file containing the key to be used in + conjunction with the client certificate (for TLS checks). + +* `method` - (Optional) The HTTP Method to use. Defaults to `GET`. + +* `payload` - (Optional) The information transferred as the payload of an HTTP + request. + +* `read_limit` - (Optional) Sets an approximate limit on the data read (`0` + means no limit). Default `0`. + +* `redirects` - (Optional) The maximum number of HTTP `Location` header + redirects to follow. Default `0`. + +* `url` - (Required) The target for this `http` check. The `url` must include + the scheme, host, port (optional), and path to use + (e.g. `https://app1.example.org/healthz`). + +* `version` - (Optional) The HTTP version to use. Defaults to `1.1`. + +Available metrics include: `body_match`, `bytes`, `cert_end`, `cert_end_in`, +`cert_error`, `cert_issuer`, `cert_start`, `cert_subject`, `code`, `duration`, +`truncated`, `tt_connect`, and `tt_firstbyte`. See the +[`http` check type](https://login.circonus.com/resources/api/calls/check_bundle) for +additional details.
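A minimal sketch of an `http` check using these attributes (the URL, check name, and metric choice are illustrative, not prescribed by the provider):

```
resource "circonus_check" "web" {
  name   = "Example HTTP health check" # illustrative check
  period = 60

  collector {
    id = "/broker/1"
  }

  http {
    url  = "https://www.example.com/healthz" # placeholder endpoint
    code = "^200$"
  }

  metric {
    name = "duration"
    type = "numeric"
  }
}
```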
+ +### `httptrap` Check Type Attributes + +* `async_metrics` - (Optional) A boolean value specifying whether httptrap + metrics are logged immediately or held until the status message is to be + emitted. Default `false`. + +* `secret` - (Optional) Specify the secret with which metrics may be + submitted. + +Available metrics depend on the payload returned in the `httptrap` doc. See +the [`httptrap` check type](https://login.circonus.com/resources/api/calls/check_bundle) +for additional details. + +### `json` Check Type Attributes + +* `auth_method` - (Optional) HTTP Authentication method to use. When set, it must + be one of `Basic`, `Digest`, or `Auto`. + +* `auth_password` - (Optional) The password to use during authentication. + +* `auth_user` - (Optional) The user to authenticate as. + +* `ca_chain` - (Optional) A path to a file containing all the certificate + authorities that should be loaded to validate the remote certificate (for TLS + checks). + +* `certificate_file` - (Optional) A path to a file containing the client + certificate that will be presented to the remote server (for TLS checks). + +* `ciphers` - (Optional) A list of ciphers to be used in the TLS protocol (for + HTTPS checks). + +* `headers` - (Optional) A map of the HTTP headers to be sent when executing the + check. + +* `key_file` - (Optional) A path to a file containing the key to be used in + conjunction with the client certificate (for TLS checks). + +* `method` - (Optional) The HTTP Method to use. Defaults to `GET`. + +* `port` - (Optional) The TCP Port number to use. Defaults to `81`. + +* `read_limit` - (Optional) Sets an approximate limit on the data read (`0` + means no limit). Default `0`. + +* `redirects` - (Optional) The maximum number of HTTP `Location` header + redirects to follow. Default `0`. + +* `url` - (Required) The target for this `json` check. The `url` must include + the scheme, host, port (optional), and path to use + (e.g. `https://app1.example.org/healthz`). + +* `version` - (Optional) The HTTP version to use. Defaults to `1.1`. + +Available metrics depend on the payload returned in the `json` doc. See the +[`json` check type](https://login.circonus.com/resources/api/calls/check_bundle) for +additional details. + +### `icmp_ping` Check Type Attributes + +The `icmp_ping` check requires the `target` top-level attribute to be set. + +* `availability` - (Optional) The percentage of ping packets that must be + returned for this measurement to be considered successful. Defaults to + `100.0`. +* `count` - (Optional) The number of ICMP ping packets to send. Defaults to + `5`. +* `interval` - (Optional) Interval between packets. Defaults to `2s`. + +Available metrics include: `available`, `average`, `count`, `maximum`, and +`minimum`. See the +[`ping_icmp` check type](https://login.circonus.com/resources/api/calls/check_bundle) +for additional details. + +### `mysql` Check Type Attributes + +The `mysql` check requires the `target` top-level attribute to be set. + +* `dsn` - (Required) The [MySQL DSN/connect + string](https://github.com/go-sql-driver/mysql/blob/master/README.md) to + use to talk to MySQL. +* `query` - (Required) The SQL query to execute. + +### `postgresql` Check Type Attributes + +The `postgresql` check requires the `target` top-level attribute to be set. + +* `dsn` - (Required) The [PostgreSQL DSN/connect + string](https://www.postgresql.org/docs/current/static/libpq-connect.html) to + use to talk to PostgreSQL. +* `query` - (Required) The SQL query to execute.
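A hedged sketch of a `postgresql` check (the target host, DSN, and query are placeholders; the resulting metric name depends on the query output, as noted below):

```
resource "circonus_check" "pg_connections" {
  name   = "PostgreSQL connection count" # illustrative check
  target = "db1.example.com"             # placeholder database host

  collector {
    id = "/broker/1"
  }

  postgresql {
    dsn   = "host=db1.example.com user=monitor dbname=postgres" # placeholder DSN
    query = "SELECT 'connections', count(*) FROM pg_stat_activity"
  }

  metric {
    name = "connections"
    type = "numeric"
  }
}
```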
+ +Available metric names are dependent on the output of the `query` being run. + +### `statsd` Check Type Attributes + +* `source_ip` - (Required) Any statsd messages from this IP address (IPv4 or + IPv6) will be associated with this check. + +Available metrics depend on the metrics sent to the `statsd` check. + +### `tcp` Check Type Attributes + +* `banner_regexp` - (Optional) This regular expression is matched against the + response banner. If a match is not found, the check will be marked as bad. + +* `ca_chain` - (Optional) A path to a file containing all the certificate + authorities that should be loaded to validate the remote certificate (for TLS + checks). + +* `certificate_file` - (Optional) A path to a file containing the client + certificate that will be presented to the remote server (for TLS checks). + +* `ciphers` - (Optional) A list of ciphers to be used in the TLS protocol (for + HTTPS checks). + +* `host` - (Required) Hostname or IP address of the host to connect to. + +* `key_file` - (Optional) A path to a file containing the key to be used in + conjunction with the client certificate (for TLS checks). + +* `port` - (Required) Integer specifying the port on which the management + interface can be reached. + +* `tls` - (Optional) When enabled, establish a TLS connection. + +Available metrics include: `banner`, `banner_match`, `cert_end`, `cert_end_in`, +`cert_error`, `cert_issuer`, `cert_start`, `cert_subject`, `duration`, +`tt_connect`, `tt_firstbyte`. See the +[`tcp` check type](https://login.circonus.com/resources/api/calls/check_bundle) +for additional details. + +Sample `tcp` check: + +``` +resource "circonus_check" "tcp_check" { + name = "TCP and TLS check" + notes = "Obtains the connect time and TTL for the TLS cert" + period = "60s" + + collector { + id = "/broker/1" + } + + tcp { + host = "127.0.0.1" + port = 443 + tls = true + } + + metric { + name = "cert_end_in" + tags = [ "${var.tcp_check_tags}" ] + type = "numeric" + unit = "seconds" + } + + metric { + name = "tt_connect" + tags = [ "${var.tcp_check_tags}" ] + type = "numeric" + unit = "milliseconds" + } + + tags = [ "${var.tcp_check_tags}" ] +} +``` + +## Out Parameters + +* `check_by_collector` - Maps the ID of the collector (`collector_id`, the map + key) to the `check_id` (value) that is registered to a collector. + +* `check_id` - If there is only one `collector` specified for the check, this + value will be populated with the `check_id`. If more than one `collector` is + specified in the check, then this value will be an empty string. + `check_by_collector` will always be populated. + +* `checks` - List of `check_id`s created by this `circonus_check`. There is one + element in this list per collector specified in the check. + +* `created` - UNIX time at which this check was created. + +* `last_modified` - UNIX time at which this check was last modified. + +* `last_modified_by` - User ID in Circonus who modified this check last. + +* `reverse_connect_urls` - Only relevant to Circonus support. + +* `uuids` - List of Check `uuid`s created by this `circonus_check`. There is + one element in this list per collector specified in the check. + +## Import Example + +`circonus_check` supports importing resources.
Supposing the following +Terraform (and that the referenced [`circonus_metric`](metric.html) has already +been imported): + +``` +provider "circonus" { + alias = "b8fec159-f9e5-4fe6-ad2c-dc1ec6751586" +} + +resource "circonus_metric" "used" { + name = "_usage`0`_used" + type = "numeric" +} + +resource "circonus_check" "usage" { + collector { + id = "/broker/1" + } + + json { + url = "https://api.circonus.com/account/current" + + http_headers = { + "Accept" = "application/json" + "X-Circonus-App-Name" = "TerraformCheck" + "X-Circonus-Auth-Token" = "${var.api_token}" + } + } + + metric { + name = "${circonus_metric.used.name}" + type = "${circonus_metric.used.type}" + } +} +``` + +It is possible to import a `circonus_check` resource with the following command: + +``` +$ terraform import circonus_check.usage ID +``` + +Where `ID` is the `_cid` or Circonus ID of the Check Bundle +(e.g. `/check_bundle/12345`) and `circonus_check.usage` is the name of the +resource whose state will be populated as a result of the command. diff --git a/website/source/docs/providers/circonus/r/contact_group.html.markdown b/website/source/docs/providers/circonus/r/contact_group.html.markdown new file mode 100644 index 0000000000..df27c0c1b1 --- /dev/null +++ b/website/source/docs/providers/circonus/r/contact_group.html.markdown @@ -0,0 +1,289 @@ +--- +layout: "circonus" +page_title: "Circonus: circonus_contact_group" +sidebar_current: "docs-circonus-resource-circonus_contact_group" +description: |- + Manages a Circonus Contact Group. +--- + +# circonus\_contact_group + +The ``circonus_contact_group`` resource creates and manages a +[Circonus Contact Group](https://login.circonus.com/user/docs/Alerting/ContactGroups). + + +## Usage + +``` +resource "circonus_contact_group" "myteam-alerts" { + name = "MyTeam Alerts" + + email { + user = "/user/1234" + } + + email { + user = "/user/5678" + } + + email { + address = "user@example.com" + } + + http { + address = "https://www.example.org/post/endpoint" + format = "json" + method = "POST" + } + + irc { + user = "/user/6331" + } + + slack { + channel = "#myteam" + team = "T038UT13D" + } + + sms { + user = "/user/1234" + } + + sms { + address = "8005551212" + } + + victorops { + api_key = "xxxx" + critical = 2 + info = 5 + team = "myteam" + warning = 3 + } + + xmpp { + user = "/user/9876" + } + + aggregation_window = "5m" + + alert_option { + severity = 1 + reminder = "5m" + escalate_to = "/contact_group/4444" + } + + alert_option { + severity = 2 + reminder = "15m" + escalate_after = "2h" + escalate_to = "/contact_group/4444" + } + + alert_option { + severity = 3 + reminder = "24m" + escalate_after = "3d" + escalate_to = "/contact_group/4444" + } +} +``` + +## Argument Reference + +* `aggregation_window` - (Optional) The aggregation window for batching up alert + notifications. + +* `alert_option` - (Optional) There is one `alert_option` per severity, where + severity can be any number between 1 (high) and 5 (low). If configured, the + alerting system will remind or escalate alerts to further contact groups if an + alert sent to this contact group is not acknowledged or resolved. See below + for details. + +* `email` - (Optional) Zero or more `email` attributes may be present to + dispatch email to Circonus users by referencing their user ID, or by + specifying an email address. See below for details on supported attributes. 
+ +* `http` - (Optional) Zero or more `http` attributes may be present to dispatch + [Webhook/HTTP requests](https://login.circonus.com/user/docs/Alerting/ContactGroups#WebhookNotifications) + by Circonus. See below for details on supported attributes. + +* `irc` - (Optional) Zero or more `irc` attributes may be present to dispatch + IRC notifications to users. See below for details on supported attributes. + +* `long_message` - (Optional) The bulk of the message used in long form alert + messages. + +* `long_subject` - (Optional) The subject used in long form alert messages. + +* `long_summary` - (Optional) The brief summary used in long form alert messages. + +* `name` - (Required) The name of the contact group. + +* `pager_duty` - (Optional) Zero or more `pager_duty` attributes may be present + to dispatch to + [Pager Duty teams](https://login.circonus.com/user/docs/Alerting/ContactGroups#PagerDutyOptions). + See below for details on supported attributes. + +* `short_message` - (Optional) The subject used in short form alert messages. + +* `short_summary` - (Optional) The brief summary used in short form alert + messages. + +* `slack` - (Optional) Zero or more `slack` attributes may be present to + dispatch to Slack teams. See below for details on supported attributes. + +* `sms` - (Optional) Zero or more `sms` attributes may be present to dispatch + SMS messages to Circonus users by referencing their user ID, or by specifying + an SMS Phone Number. See below for details on supported attributes. + +* `tags` - (Optional) A list of tags attached to the Contact Group. + +* `victorops` - (Optional) Zero or more `victorops` attributes may be present + to dispatch to + [VictorOps teams](https://login.circonus.com/user/docs/Alerting/ContactGroups#VictorOps). + See below for details on supported attributes. + +## Supported Contact Group `alert_option` Attributes + +* `escalate_after` - (Optional) How long to wait before escalating an alert that + is received at a given severity. + +* `escalate_to` - (Optional) The Contact Group ID who will receive the + escalation. + +* `reminder` - (Optional) If specified, reminders will be sent after a user + configurable number of minutes for open alerts. + +* `severity` - (Required) An `alert_option` must be assigned to a given severity + level. Valid severity levels range from 1 (highest severity) to 5 (lowest + severity). + +## Supported Contact Group `email` Attributes + +Either an `address` or `user` attribute is required. + +* `address` - (Optional) A well formed email address. + +* `user` - (Optional) An email will be sent to the email address of record for + the corresponding user ID (e.g. `/user/1234`). + +A `user`'s email address is automatically maintained and kept up to date by the +recipient, whereas an `address` provides no automatic layer of indirection for +keeping the information accurate (including LDAP and SAML-based authentication +mechanisms). + +## Supported Contact Group `http` Attributes + +* `address` - (Required) URL to send a webhook request to. + +* `format` - (Optional) The payload of the request is a JSON-encoded payload + when the `format` is set to `json` (the default). The alternate payload + encoding is `params`. + +* `method` - (Optional) The HTTP verb to use when making a request. Either + `GET` or `POST` may be specified. The default verb is `POST`. + +## Supported Contact Group `irc` Attributes + +* `user` - (Required) When a user has configured IRC on their user account, they + will receive an IRC notification. 
+ +## Supported Contact Group `pager_duty` Attributes + +* `contact_group_fallback` - (Optional) If there is a problem contacting + PagerDuty, relay the notification automatically to the specified Contact Group + (e.g. `/contact_group/1234`). + +* `service_key` - (Required) The PagerDuty Service Key. + +* `webhook_url` - (Required) The PagerDuty webhook URL that PagerDuty uses to + notify Circonus of acknowledged actions. + +## Supported Contact Group `slack` Attributes + +* `contact_group_fallback` - (Optional) If there is a problem contacting Slack, + relay the notification automatically to the specified Contact Group + (e.g. `/contact_group/1234`). + +* `buttons` - (Optional) Slack notifications can have acknowledgement buttons + built into the notification message itself when enabled. Defaults to `true`. + +* `channel` - (Required) Specify what Slack channel Circonus should send alerts + to. + +* `team` - (Required) Specify what Slack team Circonus should look in for the + aforementioned `channel`. + +* `username` - (Optional) Specify the username Circonus should advertise itself + as in Slack. Defaults to `Circonus`. + +## Supported Contact Group `sms` Attributes + +Either an `address` or `user` attribute is required. + +* `address` - (Optional) SMS Phone Number to send a short notification to. + +* `user` - (Optional) An SMS page will be sent to the phone number of record for + the corresponding user ID (e.g. `/user/1234`). + +A `user`'s phone number is automatically maintained and kept up to date by the +recipient, whereas an `address` provides no automatic layer of indirection for +keeping the information accurate (including LDAP and SAML-based authentication +mechanisms). + +## Supported Contact Group `victorops` Attributes + +* `contact_group_fallback` - (Optional) If there is a problem contacting + VictorOps, relay the notification automatically to the specified Contact Group + (e.g. `/contact_group/1234`). + +* `api_key` - (Required) The API Key for talking with VictorOps. + +* `critical` - (Required) +* `info` - (Required) +* `team` - (Required) +* `warning` - (Required) + +## Supported Contact Group `xmpp` Attributes + +Either an `address` or `user` attribute is required. + +* `address` - (Optional) XMPP address to send a short notification to. + +* `user` - (Optional) An XMPP notification will be sent to the XMPP address of + record for the corresponding user ID (e.g. `/user/1234`). + +## Import Example + +`circonus_contact_group` supports importing resources. Supposing the following +Terraform: + +``` +provider "circonus" { + alias = "b8fec159-f9e5-4fe6-ad2c-dc1ec6751586" +} + +resource "circonus_contact_group" "myteam" { + name = "My Team's Contact Group" + + email { + address = "myteam@example.com" + } + + slack { + channel = "#myteam" + team = "T024UT03C" + } +} +``` + +It is possible to import a `circonus_contact_group` resource with the following command: + +``` +$ terraform import circonus_contact_group.myteam ID +``` + +Where `ID` is the `_cid` or Circonus ID of the Contact Group +(e.g. `/contact_group/12345`) and `circonus_contact_group.myteam` is the name of +the resource whose state will be populated as a result of the command. 
diff --git a/website/source/docs/providers/circonus/r/graph.html.markdown b/website/source/docs/providers/circonus/r/graph.html.markdown new file mode 100644 index 0000000000..65169b335a --- /dev/null +++ b/website/source/docs/providers/circonus/r/graph.html.markdown @@ -0,0 +1,179 @@ +--- +layout: "circonus" +page_title: "Circonus: circonus_graph" +sidebar_current: "docs-circonus-resource-circonus_graph" +description: |- + Manages a Circonus graph. +--- + +# circonus\_graph + +The ``circonus_graph`` resource creates and manages a +[Circonus Graph](https://login.circonus.com/user/docs/Visualization/Graph/Create) +(see the [graph API documentation](https://login.circonus.com/resources/api/calls/graph) for details). + +## Usage + +``` +variable "myapp-tags" { + type = "list" + default = [ "app:myapp", "owner:myteam" ] +} + +resource "circonus_graph" "latency-graph" { + name = "Latency Graph" + description = "A sample graph showing off two data points" + notes = "Misc notes about this graph" + graph_style = "line" + line_style = "stepped" + + metric { + check = "${circonus_check.api_latency.checks[0]}" + metric_name = "maximum" + metric_type = "numeric" + name = "Maximum Latency" + axis = "left" + color = "#657aa6" + } + + metric { + check = "${circonus_check.api_latency.checks[0]}" + metric_name = "minimum" + metric_type = "numeric" + name = "Minimum Latency" + axis = "right" + color = "#0000ff" + } + + tags = [ "${var.myapp-tags}" ] +} +``` + +## Argument Reference + +* `description` - (Optional) Description of what the graph is for. + +* `graph_style` - (Optional) How the graph should be rendered. Valid options + are `area` or `line` (default). + +* `left` - (Optional) A map of graph left axis options. Valid values in `left` + include: `logarithmic`, which can be set to `0` (default) or `1`; `min`, the + minimum Y axis value on the left; and `max`, the maximum Y axis value on the left. + +* `line_style` - (Optional) How the line should change between points. Can be + either `stepped` (default) or `interpolated`. + +* `name` - (Required) The title of the graph. + +* `notes` - (Optional) A place for storing notes about this graph. + +* `right` - (Optional) A map of graph right axis options. Valid values in + `right` include: `logarithmic`, which can be set to `0` (default) or `1`; `min`, + the minimum Y axis value on the right; and `max`, the maximum Y axis value on the + right. + +* `metric` - (Optional) A list of metric streams to graph. See below for + options. + +* `metric_cluster` - (Optional) A metric cluster to graph. See below for options. + +* `tags` - (Optional) A list of tags assigned to this graph. + +## `metric` Configuration + +An individual metric stream is the underlying source of data points used for +visualization in a graph. Either a `caql` attribute is required or a `check` and +`metric` must be set. The `metric` attribute can have the following options +set. + +* `active` - (Optional) A boolean indicating whether the metric stream is enabled. + +* `alpha` - (Optional) A floating point number between 0 and 1. + +* `axis` - (Optional) The axis that the metric stream will use. Valid options + are `left` (default) or `right`. + +* `caql` - (Optional) A CAQL formula. Conflicts with the `check` and `metric` + attributes. + +* `check` - (Optional) The check that this metric stream belongs to. + +* `color` - (Optional) A hex-encoded color of the line / area on the graph. + +* `formula` - (Optional) Formula that should be applied to both the values in the + graph and the legend.
+ +* `legend_formula` - (Optional) Formula that should be applied to values in the + legend. + +* `function` - (Optional) What derivative value, if any, should be used. Valid + values are: `gauge` (default), `derive`, and `counter (_stddev)`. + +* `metric_type` - (Required) The type of the metric. Valid values are: + `numeric`, `text`, `histogram`, `composite`, or `caql`. + +* `name` - (Optional) A name which will appear in the graph legend. + +* `metric_name` - (Optional) The name of the metric stream within the check to + graph. + +* `stack` - (Optional) If this metric is to be stacked, which stack set does it + belong to (starting at `0`). + +## `metric_cluster` Configuration + +A metric cluster selects multiple metric streams together dynamically using a +query language and returns the set of matching metric streams as a single result +set to the graph rendering engine. + +* `active` - (Optional) A boolean indicating whether the metric cluster is enabled. + +* `aggregate` - (Optional) The aggregate function to apply across this metric + cluster to create a single value. Valid values are: `none` (default), `min`, + `max`, `sum`, `mean`, or `geometric_mean`. + +* `axis` - (Optional) The axis that the metric cluster will use. Valid options + are `left` (default) or `right`. + +* `color` - (Optional) A hex-encoded color of the line / area on the graph. + This is a required attribute when `aggregate` is specified. + +* `group` - (Optional) The `metric_cluster` that will provide datapoints for this + graph. + +* `name` - (Optional) A name which will appear in the graph legend for this + metric cluster. + +## Import Example + +`circonus_graph` supports importing resources. Supposing the following +Terraform (and that the referenced [`circonus_metric`](metric.html) +and [`circonus_check`](check.html) have already been imported): + +``` +resource "circonus_graph" "icmp-graph" { + name = "Test graph" + graph_style = "line" + line_style = "stepped" + + metric { + check = "${circonus_check.api_latency.checks[0]}" + metric_name = "maximum" + metric_type = "numeric" + name = "Maximum Latency" + axis = "left" + } +} +``` + +It is possible to import a `circonus_graph` resource with the following command: + +``` +$ terraform import circonus_graph.icmp-graph ID +``` + +Where `ID` is the `_cid` or Circonus ID of the graph +(e.g. `/graph/bd72aabc-90b9-4039-cc30-c9ab838c18f5`) and +`circonus_graph.icmp-graph` is the name of the resource whose state will be +populated as a result of the command. diff --git a/website/source/docs/providers/circonus/r/metric.html.markdown b/website/source/docs/providers/circonus/r/metric.html.markdown new file mode 100644 index 0000000000..0070ea987a --- /dev/null +++ b/website/source/docs/providers/circonus/r/metric.html.markdown @@ -0,0 +1,73 @@ +--- +layout: "circonus" +page_title: "Circonus: circonus_metric" +sidebar_current: "docs-circonus-resource-circonus_metric" +description: |- + Manages a Circonus metric. +--- + +# circonus\_metric + +The ``circonus_metric`` resource creates and manages a +single [metric resource](https://login.circonus.com/resources/api/calls/metric) +that will be instantiated only once a referencing `circonus_check` has been +created.
+ +## Usage + +``` +resource "circonus_metric" "used" { + name = "_usage`0`_used" + type = "numeric" + unit = "qty" + + tags = { + author = "terraform" + source = "circonus" + } +} +``` + +## Argument Reference + +* `active` - (Optional) A boolean indicating whether the metric is being filtered out + at the `circonus_check`'s collector(s). + +* `name` - (Required) The name of the metric. A `name` must be unique within a + `circonus_check` and its meaning is `circonus_check.type` specific. + +* `tags` - (Optional) A list of tags assigned to the metric. + +* `type` - (Required) The type of metric. This value must be present and can be + one of the following values: `numeric`, `text`, `histogram`, `composite`, or + `caql`. + +* `unit` - (Optional) The unit of measurement for this `circonus_metric`. + +## Import Example + +`circonus_metric` supports importing resources. Supposing the following +Terraform: + +``` +provider "circonus" { + alias = "b8fec159-f9e5-4fe6-ad2c-dc1ec6751586" +} + +resource "circonus_metric" "usage" { + name = "_usage`0`_used" + type = "numeric" + unit = "qty" + tags = { source = "circonus" } +} +``` + +It is possible to import a `circonus_metric` resource with the following command: + +``` +$ terraform import circonus_metric.usage ID +``` + +Where `ID` is a random, never before used UUID and `circonus_metric.usage` is +the name of the resource whose state will be populated as a result of the +command. diff --git a/website/source/docs/providers/circonus/r/metric_cluster.html.markdown b/website/source/docs/providers/circonus/r/metric_cluster.html.markdown new file mode 100644 index 0000000000..87dbd09d4d --- /dev/null +++ b/website/source/docs/providers/circonus/r/metric_cluster.html.markdown @@ -0,0 +1,86 @@ +--- +layout: "circonus" +page_title: "Circonus: circonus_metric_cluster" +sidebar_current: "docs-circonus-resource-circonus_metric_cluster" +description: |- + Manages a Circonus Metric Cluster. +--- + +# circonus\_metric\_cluster + +The ``circonus_metric_cluster`` resource creates and manages a +[Circonus Metric Cluster](https://login.circonus.com/user/docs/Data/View/MetricClusters). + +## Usage + +``` +resource "circonus_metric_cluster" "nomad-job-memory-rss" { + name = "My Job's Resident Memory" + description = <<-EOF +An aggregation of all resident memory metric streams across allocations in a Nomad job. +EOF + + query { + definition = "*`nomad-jobname`memory`rss" + type = "average" + } + tags = ["source:nomad", "resource:memory"] +} +``` + +## Argument Reference + +* `description` - (Optional) A long-form description of the metric cluster. + +* `name` - (Required) The name of the metric cluster. This name must be unique + across all metric clusters in a given Circonus Account. + +* `query` - (Required) One or more `query` attributes must be present. Each + `query` must contain both a `definition` and a `type`. See below for details + on supported attributes. + +* `tags` - (Optional) A list of tags attached to the metric cluster. + +## Supported Metric Cluster `query` Attributes + +* `definition` - (Required) The definition of a metric cluster [query](https://login.circonus.com/resources/api/calls/metric_cluster). + +* `type` - (Required) The query type to execute per metric cluster. Valid query + types are: `average`, `count`, `counter`, `counter2`, `counter2_stddev`, + `counter_stddev`, `derive`, `derive2`, `derive2_stddev`, `derive_stddev`, + `histogram`, `stddev`, `text`. + +## Out Parameters + +* `id` - ID of the Metric Cluster.
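As a hedged sketch of how the exported `id` might be consumed, a [`circonus_graph`](graph.html) can reference the cluster through its `metric_cluster` block (the graph name, legend name, and color are illustrative):

```
resource "circonus_graph" "job-memory" {
  name        = "Nomad Job Memory" # illustrative graph
  graph_style = "line"
  line_style  = "stepped"

  metric_cluster {
    group     = "${circonus_metric_cluster.nomad-job-memory-rss.id}"
    aggregate = "mean"
    color     = "#657aa6" # color is required when aggregate is set
    name      = "Mean RSS"
  }
}
```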
+ +## Import Example + +`circonus_metric_cluster` supports importing resources. Supposing the following +Terraform: + +``` +provider "circonus" { + alias = "b8fec159-f9e5-4fe6-ad2c-dc1ec6751586" +} + +resource "circonus_metric_cluster" "mymetriccluster" { + name = "Metric Cluster for a particular metric in a job" + + query { + definition = "*`nomad-jobname`memory`rss" + type = "average" + } +} +``` + +It is possible to import a `circonus_metric_cluster` resource with the following +command: + +``` +$ terraform import circonus_metric_cluster.mymetriccluster ID +``` + +Where `ID` is the `_cid` or Circonus ID of the Metric Cluster +(e.g. `/metric_cluster/12345`) and `circonus_metric_cluster.mymetriccluster` is the +name of the resource whose state will be populated as a result of the command. diff --git a/website/source/docs/providers/circonus/r/rule_set.html.markdown b/website/source/docs/providers/circonus/r/rule_set.html.markdown new file mode 100644 index 0000000000..4c1c481b35 --- /dev/null +++ b/website/source/docs/providers/circonus/r/rule_set.html.markdown @@ -0,0 +1,377 @@ +--- +layout: "circonus" +page_title: "Circonus: circonus_rule_set" +sidebar_current: "docs-circonus-resource-circonus_rule_set" +description: |- + Manages a Circonus rule set. +--- + +# circonus\_rule_set + +The ``circonus_rule_set`` resource creates and manages a +[Circonus Rule Set](https://login.circonus.com/resources/api/calls/rule_set). + +## Usage + +``` +variable "myapp-tags" { + type = "list" + default = [ "app:myapp", "owner:myteam" ] +} + +resource "circonus_rule_set" "myapp-cert-ttl-alert" { + check = "${circonus_check.myapp-https.checks[0]}" + metric_name = "cert_end_in" + link = "https://wiki.example.org/playbook/how-to-renew-cert" + + if { + value { + min_value = "${2 * 24 * 3600}" + } + + then { + notify = [ "${circonus_contact_group.myapp-owners.id}" ] + severity = 1 + } + } + + if { + value { + min_value = "${7 * 24 * 3600}" + } + + then { + notify = [ "${circonus_contact_group.myapp-owners.id}" ] + severity = 2 + } + } + + if { + value { + min_value = "${21 * 24 * 3600}" + } + + then { + notify = [ "${circonus_contact_group.myapp-owners.id}" ] + severity = 3 + } + } + + if { + value { + absent = "24h" + } + + then { + notify = [ "${circonus_contact_group.myapp-owners.id}" ] + severity = 1 + } + } + + tags = [ "${var.myapp-tags}" ] +} + +resource "circonus_rule_set" "myapp-healthy-alert" { + check = "${circonus_check.myapp-https.checks[0]}" + metric_name = "duration" + link = "https://wiki.example.org/playbook/debug-down-app" + + if { + value { + # SEV1 if it takes more than 9.5s for us to complete an HTTP request + max_value = "${9.5 * 1000}" + } + + then { + notify = [ "${circonus_contact_group.myapp-owners.id}" ] + severity = 1 + } + } + + if { + value { + # SEV2 if it takes more than 5s for us to complete an HTTP request + max_value = "${5 * 1000}" + } + + then { + notify = [ "${circonus_contact_group.myapp-owners.id}" ] + severity = 2 + } + } + + if { + value { + # SEV3 if the average response time is more than 500ms using a moving + # average over the last 10min. Any transient problems should have + # resolved themselves by now. Something's wrong, need to page someone. + over { + last = "10m" + using = "average" + } + max_value = "500" + } + + then { + notify = [ "${circonus_contact_group.myapp-owners.id}" ] + severity = 3 + } + } + + if { + value { + # SEV4 if it takes more than 500ms for us to complete an HTTP request. 
We + # want to record that things were slow, but not wake anyone up if it + # momentarily pops above 500ms. + min_value = "500" + } + + then { + notify = [ "${circonus_contact_group.myapp-owners.id}" ] + severity = 3 + } + } + + if { + value { + # If for whatever reason we're not recording any values for the last + # 24hrs, fire off a SEV1. + absent = "24h" + } + + then { + notify = [ "${circonus_contact_group.myapp-owners.id}" ] + severity = 1 + } + } + + tags = [ "${var.myapp-tags}" ] +} + +resource "circonus_contact_group" "myapp-owners" { + name = "My App Owners" + tags = [ "${var.myapp-tags}" ] +} + +resource "circonus_check" "myapp-https" { + name = "My App's HTTPS Check" + + notes = <<-EOF +A check to create metric streams for Time to First Byte, HTTP transaction +duration, and the TTL of a TLS cert. +EOF + + collector { + id = "/broker/1" + } + + http { + code = "^200$" + headers = { + "X-Request-Type" = "health-check", + } + url = "https://www.example.com/myapp/healthz" + } + + metric { + name = "${circonus_metric.myapp-cert-ttl.name}" + tags = "${circonus_metric.myapp-cert-ttl.tags}" + type = "${circonus_metric.myapp-cert-ttl.type}" + unit = "${circonus_metric.myapp-cert-ttl.unit}" + } + + metric { + name = "${circonus_metric.myapp-duration.name}" + tags = "${circonus_metric.myapp-duration.tags}" + type = "${circonus_metric.myapp-duration.type}" + unit = "${circonus_metric.myapp-duration.unit}" + } + + period = 60 + tags = ["source:circonus", "author:terraform"] + timeout = 10 +} + +resource "circonus_metric" "myapp-cert-ttl" { + name = "cert_end_in" + type = "numeric" + unit = "seconds" + tags = [ "${var.myapp-tags}", "resource:tls" ] +} + +resource "circonus_metric" "myapp-duration" { + name = "duration" + type = "numeric" + unit = "milliseconds" + tags = [ "${var.myapp-tags}" ] +} +``` + +## Argument Reference + +* `check` - (Required) The Circonus ID that this Rule Set will use to search for + a metric stream to alert on. + +* `if` - (Required) One or more ordered predicate clauses that describe when + Circonus should generate a notification. See below for details on the + structure of an `if` configuration clause. + +* `link` - (Optional) A link to external documentation (or anything else you + feel is important) when a notification is sent. This value will show up in + email alerts and the Circonus UI. + +* `metric_type` - (Optional) The type of metric this rule set will operate on. + Valid values are `numeric` (the default) and `text`. + +* `notes` - (Optional) Notes about this rule set. + +* `parent` - (Optional) A Circonus Metric ID that, if specified and active with + a severity 1 alert, will silence this rule set until all of the severity 1 + alerts on the parent clear. This value must match the format + `${check_id}_${metric_name}`. + +* `metric_name` - (Required) The name of the metric stream within a given check + that this rule set is active on. + +* `tags` - (Optional) A list of tags assigned to this rule set. + +## `if` Configuration + +The `if` configuration block is an +[ordered list of rules](https://login.circonus.com/user/docs/Alerting/Rules/Configure) that +are evaluated in order, first to last. The first `if` condition to evaluate +true short-circuits all other `if` blocks in this rule set. An `if` block is also +referred to as a "rule." It is advised that all high-severity rules are ordered +before low-severity rules; otherwise, low-severity rules will mask notifications +that should be delivered at a higher severity.
+
+`if` blocks are made up of two configuration blocks: `value` and `then`. The
+`value` configuration block specifies the criteria under which the metric
+streams are evaluated. The optional `then` configuration block specifies what
+action to take.
+
+### `value` Configuration
+
+A `value` block can have only one of several "predicate" attributes specified
+because they conflict with each other. The list of mutually exclusive
+predicates depends on the `metric_type`. To evaluate multiple predicates,
+create multiple `if` configuration blocks in the proper order.
+
+#### `numeric` Predicates
+
+Metrics of type `numeric` support the following predicates. Only one of the
+following predicates may be specified at a time.
+
+* `absent` - (Optional) If a metric has not been observed in this duration the
+  rule will fire. When present, this duration is evaluated in terms of seconds.
+
+* `changed` - (Optional) A boolean indicating this rule should fire when the
+  value changes (e.g. `n != n-1`).
+
+* `min_value` - (Optional) When the value is less than this value, this rule
+  will fire (e.g. `n < ${min_value}`).
+
+* `max_value` - (Optional) When the value is greater than this value, this rule
+  will fire (e.g. `n > ${max_value}`).
+
+Additionally, a `numeric` check can also evaluate data based on a windowing
+function instead of the last measured value in the metric stream. In order to
+have a rule evaluate on a value derived from a window, include a nested `over`
+attribute inside of the `value` configuration block. An `over` attribute
+accepts two attributes:
+
+* `last` - (Optional) A duration for the sliding window. Default `300s`.
+
+* `using` - (Optional) The window function to use over the `last` interval.
+  Valid window functions include: `average` (the default), `stddev`, `derive`,
+  `derive_stddev`, `counter`, `counter_stddev`, `derive_2`, `derive_2_stddev`,
+  `counter_2`, and `counter_2_stddev`.
+
+#### `text` Predicates
+
+Metrics of type `text` support the following predicates:
+
+* `absent` - (Optional) If a metric has not been observed in this duration the
+  rule will fire. When present, this duration is evaluated in terms of seconds.
+
+* `changed` - (Optional) A boolean indicating this rule should fire when the
+  last value in the metric stream changed from its previous value (e.g. `n !=
+  n-1`).
+
+* `contains` - (Optional) When the last value in the metric stream contains
+  this configured value, this rule will fire (e.g. `strstr(n, ${contains}) !=
+  NULL`).
+
+* `match` - (Optional) When the last value in the metric stream exactly matches
+  this configured value, this rule will fire (e.g. `strcmp(n, ${match}) == 0`).
+
+* `not_contain` - (Optional) When the last value in the metric stream does not
+  contain this configured value, this rule will fire (e.g. `strstr(n,
+  ${not_contain}) == NULL`).
+
+* `not_match` - (Optional) When the last value in the metric stream does not
+  exactly match this configured value, this rule will fire (e.g. `strcmp(n,
+  ${not_match}) != 0`).
+
+### `then` Configuration
+
+A `then` block can have the following attributes:
+
+* `after` - (Optional) Only execute this notification after waiting for this
+  number of minutes. Defaults to immediately, or `0m`.
+* `notify` - (Optional) A list of contact group IDs to notify when this rule
+  sends off a notification.
+* `severity` - (Optional) The severity level of the notification. This can be
+  set to any value between `1` and `5`. Defaults to `1`.
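+
+As an illustrative sketch (the thresholds, durations, and the referenced check
+and contact group below are placeholders carried over from the example above,
+not defaults), a rule combining a windowed `value` predicate with a delayed
+`then` action might look like:
+
+```
+resource "circonus_rule_set" "slow-requests" {
+  check       = "${circonus_check.myapp-https.checks[0]}"
+  metric_name = "duration"
+
+  if {
+    value {
+      # Fire only when the 5-minute average exceeds 750ms.
+      over {
+        last  = "300s"
+        using = "average"
+      }
+
+      max_value = "750"
+    }
+
+    then {
+      # Wait 10 minutes before notifying to ride out transient spikes.
+      after    = "10m"
+      notify   = [ "${circonus_contact_group.myapp-owners.id}" ]
+      severity = 2
+    }
+  }
+}
+```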
+
+## Import Example
+
+`circonus_rule_set` supports importing resources. Supposing the following
+Terraform (and that the referenced [`circonus_metric`](metric.html)
+and [`circonus_check`](check.html) have already been imported):
+
+```
+resource "circonus_rule_set" "icmp-latency-alert" {
+  check       = "${circonus_check.api_latency.checks[0]}"
+  metric_name = "maximum"
+
+  if {
+    value {
+      absent = "600s"
+    }
+
+    then {
+      notify   = [ "${circonus_contact_group.test-trigger.id}" ]
+      severity = 1
+    }
+  }
+
+  if {
+    value {
+      over {
+        last  = "120s"
+        using = "average"
+      }
+
+      max_value = 0.5 # units are in milliseconds
+    }
+
+    then {
+      notify   = [ "${circonus_contact_group.test-trigger.id}" ]
+      severity = 2
+    }
+  }
+}
+```
+
+It is possible to import a `circonus_rule_set` resource with the following command:
+
+```
+$ terraform import circonus_rule_set.icmp-latency-alert ID
+```
+
+Where `ID` is the `_cid` or Circonus ID of the Rule Set
+(e.g. `/rule_set/201285_maximum`) and `circonus_rule_set.icmp-latency-alert` is
+the name of the resource whose state will be populated as a result of the
+command.
diff --git a/website/source/docs/providers/cloudflare/index.html.markdown b/website/source/docs/providers/cloudflare/index.html.markdown
index c36a3b219f..646285077e 100644
--- a/website/source/docs/providers/cloudflare/index.html.markdown
+++ b/website/source/docs/providers/cloudflare/index.html.markdown
@@ -3,13 +3,13 @@ layout: "cloudflare"
 page_title: "Provider: Cloudflare"
 sidebar_current: "docs-cloudflare-index"
 description: |-
-  The CloudFlare provider is used to interact with the DNS resources supported by CloudFlare. The provider needs to be configured with the proper credentials before it can be used.
+  The Cloudflare provider is used to interact with the DNS resources supported by Cloudflare. The provider needs to be configured with the proper credentials before it can be used.
 ---
 
-# CloudFlare Provider
+# Cloudflare Provider
 
-The CloudFlare provider is used to interact with the
-DNS resources supported by CloudFlare. The provider needs to be configured
+The Cloudflare provider is used to interact with the
+DNS resources supported by Cloudflare. The provider needs to be configured
 with the proper credentials before it can be used.
 
 Use the navigation to the left to read about the available resources.
@@ -17,7 +17,7 @@ Use the navigation to the left to read about the available resources.
 ## Example Usage
 
 ```
-# Configure the CloudFlare provider
+# Configure the Cloudflare provider
 provider "cloudflare" {
   email = "${var.cloudflare_email}"
   token = "${var.cloudflare_token}"
diff --git a/website/source/docs/providers/cloudflare/r/record.html.markdown b/website/source/docs/providers/cloudflare/r/record.html.markdown
index f7abeb1fd8..f01a4d5c3b 100644
--- a/website/source/docs/providers/cloudflare/r/record.html.markdown
+++ b/website/source/docs/providers/cloudflare/r/record.html.markdown
@@ -1,6 +1,6 @@
 ---
 layout: "cloudflare"
-page_title: "CloudFlare: cloudflare_record"
+page_title: "Cloudflare: cloudflare_record"
 sidebar_current: "docs-cloudflare-resource-record"
 description: |-
   Provides a Cloudflare record resource.
@@ -33,7 +33,7 @@ The following arguments are supported:
 * `type` - (Required) The type of the record
 * `ttl` - (Optional) The TTL of the record
 * `priority` - (Optional) The priority of the record
-* `proxied` - (Optional) Whether the record gets CloudFlares origin protection.
+* `proxied` - (Optional) Whether the record gets Cloudflare's origin protection.
## Attributes Reference @@ -46,5 +46,5 @@ The following attributes are exported: * `ttl` - The TTL of the record * `priority` - The priority of the record * `hostname` - The FQDN of the record -* `proxied` - (Optional) Whether the record gets CloudFlares origin protection. +* `proxied` - (Optional) Whether the record gets Cloudflare's origin protection. diff --git a/website/source/docs/providers/consul/index.html.markdown b/website/source/docs/providers/consul/index.html.markdown index 139ca8e291..0ab3a9f153 100644 --- a/website/source/docs/providers/consul/index.html.markdown +++ b/website/source/docs/providers/consul/index.html.markdown @@ -45,6 +45,7 @@ The following arguments are supported: * `address` - (Optional) The HTTP(S) API address of the agent to use. Defaults to "127.0.0.1:8500". * `scheme` - (Optional) The URL scheme of the agent to use ("http" or "https"). Defaults to "http". +* `http_auth` - (Optional) HTTP Basic Authentication credentials to be used when communicating with Consul, in the format of either `user` or `user:pass`. This may also be specified using the `CONSUL_HTTP_AUTH` environment variable. * `datacenter` - (Optional) The datacenter to use. Defaults to that of the agent. * `token` - (Optional) The ACL token to use by default when making requests to the agent. * `ca_file` - (Optional) A path to a PEM-encoded certificate authority used to verify the remote agent's certificate. diff --git a/website/source/docs/providers/datadog/r/downtime.html.markdown b/website/source/docs/providers/datadog/r/downtime.html.markdown new file mode 100644 index 0000000000..848b876082 --- /dev/null +++ b/website/source/docs/providers/datadog/r/downtime.html.markdown @@ -0,0 +1,56 @@ +--- +layout: "datadog" +page_title: "Datadog: datadog_downtime" +sidebar_current: "docs-datadog-resource-downtime" +description: |- + Provides a Datadog downtime resource. This can be used to create and manage downtimes. +--- + +# datadog\_downtime + +Provides a Datadog downtime resource. This can be used to create and manage Datadog downtimes. + +## Example Usage + +``` +# Create a new daily 1700-0900 Datadog downtime +resource "datadog_downtime" "foo" { + scope = ["*"] + start = 1483308000 + end = 1483365600 + + recurrence { + type = "days" + period = 1 + } +} +``` + +## Argument Reference + +The following arguments are supported: + +* `scope` - (Required) A list of items to apply the downtime to, e.g. host:X +* `start` - (Optional) POSIX timestamp to start the downtime. +* `end` - (Optional) POSIX timestamp to end the downtime. +* `recurrence` - (Optional) A dictionary to configure the downtime to be recurring. + * `type` - days, weeks, months, or years + * `period` - How often to repeat as an integer. For example to repeat every 3 days, select a type of days and a period of 3. + * `week_days` - (Optional) A list of week days to repeat on. Choose from: Mon, Tue, Wed, Thu, Fri, Sat or Sun. Only applicable when type is weeks. First letter must be capitalized. + * `until_occurrences` - (Optional) How many times the downtime will be rescheduled. `until_occurrences` and `until_date` are mutually exclusive. + * `until_date` - (Optional) The date at which the recurrence should end as a POSIX timestamp. `until_occurrences` and `until_date` are mutually exclusive. +* `message` - (Optional) A message to include with notifications for this downtime. 
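+
+As an illustrative sketch (the timestamps, scope, and resource name below are
+placeholders, not defaults), a downtime recurring on weekday mornings until a
+fixed date might look like:
+
+```
+resource "datadog_downtime" "weekday_maintenance" {
+  scope = ["env:staging"]
+  start = 1483308000
+  end   = 1483315200
+
+  # Repeat weekly on weekdays until the given POSIX timestamp.
+  recurrence {
+    type       = "weeks"
+    period     = 1
+    week_days  = ["Mon", "Tue", "Wed", "Thu", "Fri"]
+    until_date = 1488326400
+  }
+
+  message = "Scheduled weekday maintenance window"
+}
+```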
+
+## Attributes Reference
+
+The following attributes are exported:
+
+* `id` - ID of the Datadog downtime
+
+## Import
+
+Downtimes can be imported using their numeric ID, e.g.
+
+```
+$ terraform import datadog_downtime.bytes_received_localhost 2081
+```
diff --git a/website/source/docs/providers/dns/d/dns_a_record_set.html.markdown b/website/source/docs/providers/dns/d/dns_a_record_set.html.markdown
new file mode 100644
index 0000000000..b4432197cb
--- /dev/null
+++ b/website/source/docs/providers/dns/d/dns_a_record_set.html.markdown
@@ -0,0 +1,37 @@
+---
+layout: "dns"
+page_title: "DNS: dns_a_record_set"
+sidebar_current: "docs-dns-datasource-a-record-set"
+description: |-
+  Get DNS A record set.
+---
+
+# dns\_a\_record\_set
+
+Use this data source to get the DNS A records of the host.
+
+## Example Usage
+
+```
+data "dns_a_record_set" "google" {
+  host = "google.com"
+}
+
+output "google_addrs" {
+  value = "${join(",", data.dns_a_record_set.google.addrs)}"
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+ * `host` - (Required) Host to look up
+
+## Attributes Reference
+
+The following attributes are exported:
+
+ * `id` - Set to `host`.
+
+ * `addrs` - A list of IP addresses. IP addresses are always sorted to avoid constantly changing plans.
\ No newline at end of file
diff --git a/website/source/docs/providers/dns/d/dns_cname_record_set.html.markdown b/website/source/docs/providers/dns/d/dns_cname_record_set.html.markdown
new file mode 100644
index 0000000000..b230fbbe96
--- /dev/null
+++ b/website/source/docs/providers/dns/d/dns_cname_record_set.html.markdown
@@ -0,0 +1,37 @@
+---
+layout: "dns"
+page_title: "DNS: dns_cname_record_set"
+sidebar_current: "docs-dns-datasource-cname-record-set"
+description: |-
+  Get DNS CNAME record set.
+---
+
+# dns\_cname\_record\_set
+
+Use this data source to get the DNS CNAME record set of the host.
+
+## Example Usage
+
+```
+data "dns_cname_record_set" "hashi" {
+  host = "www.hashicorp.com"
+}
+
+output "hashi_cname" {
+  value = "${data.dns_cname_record_set.hashi.cname}"
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+ * `host` - (Required) Host to look up
+
+## Attributes Reference
+
+The following attributes are exported:
+
+ * `id` - Set to `host`.
+
+ * `cname` - A CNAME record associated with host.
\ No newline at end of file
diff --git a/website/source/docs/providers/dns/d/dns_txt_record_set.html.markdown b/website/source/docs/providers/dns/d/dns_txt_record_set.html.markdown
new file mode 100644
index 0000000000..ab50c7ea23
--- /dev/null
+++ b/website/source/docs/providers/dns/d/dns_txt_record_set.html.markdown
@@ -0,0 +1,43 @@
+---
+layout: "dns"
+page_title: "DNS: dns_txt_record_set"
+sidebar_current: "docs-dns-datasource-txt-record-set"
+description: |-
+  Get DNS TXT record set.
+---
+
+# dns\_txt\_record\_set
+
+Use this data source to get the DNS TXT record set of the host.
+
+## Example Usage
+
+```
+data "dns_txt_record_set" "hashi" {
+  host = "www.hashicorp.com"
+}
+
+output "hashi_txt" {
+  value = "${data.dns_txt_record_set.hashi.record}"
+}
+
+output "hashi_txts" {
+  value = "${join(",", data.dns_txt_record_set.hashi.records)}"
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+ * `host` - (Required) Host to look up
+
+## Attributes Reference
+
+The following attributes are exported:
+
+ * `id` - Set to `host`.
+
+ * `record` - The first TXT record.
+
+ * `records` - A list of TXT records.
\ No newline at end of file diff --git a/website/source/docs/providers/dnsimple/r/record.html.markdown b/website/source/docs/providers/dnsimple/r/record.html.markdown index 18a94d1390..addf1be1e5 100644 --- a/website/source/docs/providers/dnsimple/r/record.html.markdown +++ b/website/source/docs/providers/dnsimple/r/record.html.markdown @@ -43,6 +43,7 @@ The following arguments are supported: * `value` - (Required) The value of the record * `type` - (Required) The type of the record * `ttl` - (Optional) The TTL of the record +* `priority` - (Optional) The priority of the record - only useful for some record types ## Attributes Reference diff --git a/website/source/docs/providers/docker/index.html.markdown b/website/source/docs/providers/docker/index.html.markdown index 50f4071a19..bff8f6140c 100644 --- a/website/source/docs/providers/docker/index.html.markdown +++ b/website/source/docs/providers/docker/index.html.markdown @@ -16,12 +16,6 @@ API hosts. Use the navigation to the left to read about the available resources. -
-Note: The Docker provider is new as of Terraform 0.4. -It is ready to be used but many features are still being added. If there -is a Docker feature missing, please report it in the GitHub repo. -
-
-
 ## Example Usage
 
 ```
diff --git a/website/source/docs/providers/external/data_source.html.md b/website/source/docs/providers/external/data_source.html.md
index ca983d41ea..9241d02159 100644
--- a/website/source/docs/providers/external/data_source.html.md
+++ b/website/source/docs/providers/external/data_source.html.md
@@ -75,7 +75,7 @@ The following arguments are supported:
   arguments containing spaces.
 
 * `query` - (Optional) A map of string values to pass to the external program
-  as the query arguments. If not supplied, the program will recieve an empty
+  as the query arguments. If not supplied, the program will receive an empty
   object as its input.
 
 ## Attributes Reference
diff --git a/website/source/docs/providers/fastly/r/service_v1.html.markdown b/website/source/docs/providers/fastly/r/service_v1.html.markdown
index 994ad7f826..fa995c367a 100644
--- a/website/source/docs/providers/fastly/r/service_v1.html.markdown
+++ b/website/source/docs/providers/fastly/r/service_v1.html.markdown
@@ -180,7 +180,9 @@ Default `200`.
 * `port` - (Optional) The port number on which the Backend responds. Default `80`.
 * `request_condition` - (Optional, string) Name of already defined `condition`, which if met, will select this backend during a request.
 * `ssl_check_cert` - (Optional) Be strict about checking SSL certs. Default `true`.
-* `ssl_hostname` - (Optional) Used for both SNI during the TLS handshake and to validate the cert.
+* `ssl_hostname` - (Optional, deprecated by Fastly) Used for both SNI during the TLS handshake and to validate the cert.
+* `ssl_cert_hostname` - (Optional) Overrides ssl_hostname, but only for cert verification. Does not affect SNI at all.
+* `ssl_sni_hostname` - (Optional) Overrides ssl_hostname, but only for SNI in the handshake. Does not affect cert validation at all.
 * `shield` - (Optional) The POP of the shield designated to reduce inbound load.
 * `weight` - (Optional) The [portion of traffic](https://docs.fastly.com/guides/performance-tuning/load-balancing-configuration.html#how-weight-affects-load-balancing) to send to this Backend. Each Backend receives `weight / total` of the traffic. Default `100`.
diff --git a/website/source/docs/providers/github/r/organization_webhook.html.markdown b/website/source/docs/providers/github/r/organization_webhook.html.markdown
new file mode 100644
index 0000000000..1062d1fbca
--- /dev/null
+++ b/website/source/docs/providers/github/r/organization_webhook.html.markdown
@@ -0,0 +1,45 @@
+---
+layout: "github"
+page_title: "GitHub: github_organization_webhook"
+sidebar_current: "docs-github-resource-organization-webhook"
+description: |-
+  Creates and manages webhooks for GitHub organizations
+---
+
+# github\_organization\_webhook
+
+This resource allows you to create and manage webhooks for GitHub organizations.
+
+## Example Usage
+
+```
+resource "github_organization_webhook" "foo" {
+  name = "web"
+  configuration {
+    url          = "https://google.de/"
+    content_type = "form"
+    insecure_ssl = false
+  }
+  active = false
+
+  events = ["issues"]
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `name` - (Required) The type of the webhook. See a list of [available hooks](https://api.github.com/hooks).
+
+* `events` - (Required) A list of events which should trigger the webhook. Defaults to `["push"]`. See a list of [available events](https://developer.github.com/v3/activity/events/types/).
+
+* `configuration` - (Required) Key/value pairs of configuration for this webhook.
+  Available keys are `url`, `content_type`, `secret` and `insecure_ssl`.
+
+* `active` - (Optional) Indicates whether the webhook should receive events. Defaults to `true`.
+
+## Attributes Reference
+
+The following additional attributes are exported:
+
+* `url` - URL of the webhook
diff --git a/website/source/docs/providers/github/r/repository_webhook.html.markdown b/website/source/docs/providers/github/r/repository_webhook.html.markdown
new file mode 100644
index 0000000000..ab57d5f5c2
--- /dev/null
+++ b/website/source/docs/providers/github/r/repository_webhook.html.markdown
@@ -0,0 +1,61 @@
+---
+layout: "github"
+page_title: "GitHub: github_repository_webhook"
+sidebar_current: "docs-github-resource-repository-webhook"
+description: |-
+  Creates and manages repository webhooks within GitHub organizations
+---
+
+# github\_repository\_webhook
+
+This resource allows you to create and manage webhooks for repositories within
+your GitHub organization.
+
+This resource cannot currently be used to manage webhooks for *personal*
+repositories outside of organizations.
+
+## Example Usage
+
+```
+resource "github_repository" "repo" {
+  name         = "foo"
+  description  = "Terraform acceptance tests"
+  homepage_url = "http://example.com/"
+
+  private = false
+}
+
+resource "github_repository_webhook" "foo" {
+  repository = "${github_repository.repo.name}"
+
+  name = "web"
+  configuration {
+    url          = "https://google.de/"
+    content_type = "form"
+    insecure_ssl = false
+  }
+  active = false
+
+  events = ["issues"]
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `name` - (Required) The type of the webhook. See a list of [available hooks](https://api.github.com/hooks).
+
+* `repository` - (Required) The repository of the webhook.
+
+* `events` - (Required) A list of events which should trigger the webhook. Defaults to `["push"]`. See a list of [available events](https://developer.github.com/v3/activity/events/types/).
+
+* `configuration` - (Required) Key/value pairs of configuration for this webhook. Available keys are `url`, `content_type`, `secret` and `insecure_ssl`.
+
+* `active` - (Optional) Indicates whether the webhook should receive events. Defaults to `true`.
+
+## Attributes Reference
+
+The following additional attributes are exported:
+
+* `url` - URL of the webhook
diff --git a/website/source/docs/providers/google/index.html.markdown b/website/source/docs/providers/google/index.html.markdown
index fc125477a8..e503887e90 100644
--- a/website/source/docs/providers/google/index.html.markdown
+++ b/website/source/docs/providers/google/index.html.markdown
@@ -62,19 +62,6 @@ The following keys can be used to configure the provider.
 * `GCLOUD_REGION`
 * `CLOUDSDK_COMPUTE_REGION`
 
-The following keys are supported for backwards compatibility, and may be
-removed in a future version:
-
-* `account_file` - __Deprecated: please use `credentials` instead.__
-  Path to or contents of the JSON file used to describe your
-  account credentials, downloaded from Google Cloud Console. More details on
-  retrieving this file are below. The `account file` can be "" if you are running
-  terraform from a GCE instance with a properly-configured [Compute Engine
-  Service Account](https://cloud.google.com/compute/docs/authentication). This
-  can also be specified with the `GOOGLE_ACCOUNT_FILE` shell environment
-  variable.
- - ## Authentication JSON File Authenticating with Google Cloud services requires a JSON diff --git a/website/source/docs/providers/google/r/compute_disk.html.markdown b/website/source/docs/providers/google/r/compute_disk.html.markdown index faeb20cc7a..6544707dd0 100644 --- a/website/source/docs/providers/google/r/compute_disk.html.markdown +++ b/website/source/docs/providers/google/r/compute_disk.html.markdown @@ -17,7 +17,7 @@ resource "google_compute_disk" "default" { name = "test-disk" type = "pd-ssd" zone = "us-central1-a" - image = "debian7-wheezy" + image = "debian-cloud/debian-8" } ``` @@ -37,9 +37,11 @@ The following arguments are supported: encoded in [RFC 4648 base64](https://tools.ietf.org/html/rfc4648#section-4) to encrypt this disk. -* `image` - (Optional) The image from which to initialize this disk. Either the - full URL, a contraction of the form "project/name", or just a name (in which - case the current project is used). +* `image` - (Optional) The image from which to initialize this disk. This can be + one of: the image's `self_link`, `projects/{project}/global/images/{image}`, + `projects/{project}/global/images/family/{family}`, `global/images/{image}`, + `global/images/family/{family}`, `family/{family}`, `{project}/{family}`, + `{project}/{image}`, `{family}`, or `{image}`. * `project` - (Optional) The project in which the resource belongs. If it is not provided, the provider project is used. diff --git a/website/source/docs/providers/google/r/compute_instance.html.markdown b/website/source/docs/providers/google/r/compute_instance.html.markdown index 174feee9d9..3481d4009b 100644 --- a/website/source/docs/providers/google/r/compute_instance.html.markdown +++ b/website/source/docs/providers/google/r/compute_instance.html.markdown @@ -117,12 +117,11 @@ the type is "local-ssd", in which case scratch must be true). * `disk` - The name of the existing disk (such as those managed by `google_compute_disk`) to attach. -* `image` - The image from which to initialize this - disk. Either the full URL, a contraction of the form "project/name", the - name of a Google-supported - [image family](https://cloud.google.com/compute/docs/images#image_families), - or simple the name of an image or image family (in which case the current - project is used). +* `image` - The image from which to initialize this disk. This can be + one of: the image's `self_link`, `projects/{project}/global/images/{image}`, + `projects/{project}/global/images/family/{family}`, `global/images/{image}`, + `global/images/family/{family}`, `family/{family}`, `{project}/{family}`, + `{project}/{image}`, `{family}`, or `{image}`. * `auto_delete` - (Optional) Whether or not the disk should be auto-deleted. This defaults to true. Leave true for local SSDs. diff --git a/website/source/docs/providers/google/r/compute_instance_template.html.markdown b/website/source/docs/providers/google/r/compute_instance_template.html.markdown index 0774f0e25c..5670409b7e 100644 --- a/website/source/docs/providers/google/r/compute_instance_template.html.markdown +++ b/website/source/docs/providers/google/r/compute_instance_template.html.markdown @@ -176,8 +176,12 @@ The `disk` block supports: * `disk_name` - (Optional) Name of the disk. When not provided, this defaults to the name of the instance. -* `source_image` - (Required if source not set) The name of the image to base - this disk off of. Accepts same arguments as a [google_compute_instance image](https://www.terraform.io/docs/providers/google/r/compute_instance.html#image). 
+* `source_image` - (Required if source not set) The image from which to
+  initialize this disk. This can be one of: the image's `self_link`,
+  `projects/{project}/global/images/{image}`,
+  `projects/{project}/global/images/family/{family}`, `global/images/{image}`,
+  `global/images/family/{family}`, `family/{family}`, `{project}/{family}`,
+  `{project}/{image}`, `{family}`, or `{image}`.
 
 * `interface` - (Optional) Specifies the disk interface to use for attaching
   this disk.
diff --git a/website/source/docs/providers/google/r/compute_region_backend_service.html.markdown b/website/source/docs/providers/google/r/compute_region_backend_service.html.markdown
index 583fa17d01..227f80568a 100644
--- a/website/source/docs/providers/google/r/compute_region_backend_service.html.markdown
+++ b/website/source/docs/providers/google/r/compute_region_backend_service.html.markdown
@@ -56,7 +56,6 @@ resource "google_compute_health_check" "default" {
   name               = "test"
   check_interval_sec = 1
   timeout_sec        = 1
-  type               = "TCP"
 
   tcp_health_check {
     port = "80"
diff --git a/website/source/docs/providers/google/r/container_cluster.html.markdown b/website/source/docs/providers/google/r/container_cluster.html.markdown
index 9375319a67..1962ac2440 100644
--- a/website/source/docs/providers/google/r/container_cluster.html.markdown
+++ b/website/source/docs/providers/google/r/container_cluster.html.markdown
@@ -109,6 +109,9 @@ which the cluster's instances are launched
 * `disk_size_gb` - (Optional) Size of the disk attached to each node, specified
   in GB. The smallest allowed disk size is 10GB. Defaults to 100GB.
 
+* `local_ssd_count` - (Optional) The number of local SSD disks that will be
+  attached to each cluster node. Defaults to 0.
+
 * `oauth_scopes` - (Optional) The set of Google API scopes to be made available
   on all of the node VMs under the "default" service account. These can be
   either FQDNs, or scope aliases. The following scopes are necessary to ensure
@@ -121,6 +124,14 @@ which the cluster's instances are launched
   * `monitoring` (`https://www.googleapis.com/auth/monitoring`),
     if `monitoring_service` points to Google
 
+* `service_account` - (Optional) The service account to be used by the Node VMs.
+  If not specified, the "default" service account is used.
+
+* `metadata` - (Optional) The metadata key/value pairs assigned to instances in
+  the cluster.
+
+* `image_type` - (Optional) The image type to use for this node.
+
 **Addons Config** supports the following addons:
 
 * `http_load_balancing` - (Optional) The status of the HTTP Load Balancing
diff --git a/website/source/docs/providers/google/r/container_node_pool.html.markdown b/website/source/docs/providers/google/r/container_node_pool.html.markdown
new file mode 100644
index 0000000000..12a24cbc78
--- /dev/null
+++ b/website/source/docs/providers/google/r/container_node_pool.html.markdown
@@ -0,0 +1,69 @@
+---
+layout: "google"
+page_title: "Google: google_container_node_pool"
+sidebar_current: "docs-google-container-node-pool"
+description: |-
+  Manages a GKE NodePool resource.
+---
+
+# google\_container\_node\_pool
+
+Manages a Node Pool resource within GKE. For more information see
+[the official documentation](https://cloud.google.com/container-engine/docs/node-pools)
+and
+[API](https://cloud.google.com/container-engine/reference/rest/v1/projects.zones.clusters.nodePools).
+ +## Example usage + +```tf +resource "google_container_node_pool" "np" { + name = "my-node-pool" + zone = "us-central1-a" + cluster = "${google_container_cluster.primary.name}" + initial_node_count = 3 +} + +resource "google_container_cluster" "primary" { + name = "marcellus-wallace" + zone = "us-central1-a" + initial_node_count = 3 + + additional_zones = [ + "us-central1-b", + "us-central1-c", + ] + + master_auth { + username = "mr.yoda" + password = "adoy.rm" + } + + node_config { + oauth_scopes = [ + "https://www.googleapis.com/auth/compute", + "https://www.googleapis.com/auth/devstorage.read_only", + "https://www.googleapis.com/auth/logging.write", + "https://www.googleapis.com/auth/monitoring", + ] + } +} +``` + +## Argument Reference + +* `zone` - (Required) The zone in which the cluster resides. + +* `cluster` - (Required) The cluster to create the node pool for. + +* `initial_node_count` - (Required) The initial node count for the pool. + +- - - + +* `project` - (Optional) The project in which to create the node pool. If blank, + the provider-configured project will be used. + +* `name` - (Optional) The name of the node pool. If left blank, Terraform will + auto-generate a unique name. + +* `name_prefix` - (Optional) Creates a unique name for the node pool beginning + with the specified prefix. Conflicts with `name`. diff --git a/website/source/docs/providers/google/r/google_project.html.markdown b/website/source/docs/providers/google/r/google_project.html.markdown index d84d38cc28..ad668b6725 100755 --- a/website/source/docs/providers/google/r/google_project.html.markdown +++ b/website/source/docs/providers/google/r/google_project.html.markdown @@ -19,6 +19,24 @@ resource must have `roles/resourcemanager.projectCreator`. See the [Access Control for Organizations Using IAM](https://cloud.google.com/resource-manager/docs/access-control-org) doc for more information. +Note that prior to 0.8.5, `google_project` functioned like a data source, +meaning any project referenced by it had to be created and managed outside +Terraform. As of 0.8.5, `google_project` functions like any other Terraform +resource, with Terraform creating and managing the project. To replicate the old +behavior, either: + +* Use the project ID directly in whatever is referencing the project, using the + [google_project_iam_policy](/docs/providers/google/r/google_project_iam_policy.html) + to replace the old `policy_data` property. +* Use the [import](/docs/import/usage.html) functionality + to import your pre-existing project into Terraform, where it can be referenced and + used just like always, keeping in mind that Terraform will attempt to undo any changes + made outside Terraform. + +~> It's important to note that any project resources that were added to your Terraform config +prior to 0.8.5 will continue to function as they always have, and will not be managed by +Terraform. Only newly added projects are affected. + ## Example Usage ```js @@ -58,9 +76,6 @@ The following arguments are supported: * `name` - (Optional) The display name of the project. This is required if you are creating a new project. -* `services` - (Optional) The services/APIs that are enabled for this project. - For a list of available services, run `gcloud beta service-management list` - * `skip_delete` - (Optional) If true, the Terraform resource can be deleted without deleting the Project via the Google API. 
@@ -81,7 +96,7 @@ exported: ## ID Field -In previous versions of Terraform, `google_project` resources used an `id` field in +In versions of Terraform prior to 0.8.5, `google_project` resources used an `id` field in config files to specify the project ID. Unfortunately, due to limitations in Terraform, this field always looked empty to Terraform. Terraform fell back on using the project the Google Cloud provider is configured with. If you're using the `id` field in your diff --git a/website/source/docs/providers/google/r/google_project_iam_policy.html.markdown b/website/source/docs/providers/google/r/google_project_iam_policy.html.markdown index 94a991f975..dcc9d87b75 100644 --- a/website/source/docs/providers/google/r/google_project_iam_policy.html.markdown +++ b/website/source/docs/providers/google/r/google_project_iam_policy.html.markdown @@ -11,6 +11,9 @@ description: |- Allows creation and management of an IAM policy for an existing Google Cloud Platform project. +~> **Be careful!** You can accidentally lock yourself out of your project + using this resource. Proceed with caution. + ## Example Usage ```js diff --git a/website/source/docs/providers/google/r/google_project_services.html.markdown b/website/source/docs/providers/google/r/google_project_services.html.markdown index 98bd048115..d6d2eff133 100644 --- a/website/source/docs/providers/google/r/google_project_services.html.markdown +++ b/website/source/docs/providers/google/r/google_project_services.html.markdown @@ -16,7 +16,7 @@ in the config will be removed. ```js resource "google_project_services" "project" { - project_id = "your-project-id" + project = "your-project-id" services = ["iam.googleapis.com", "cloudresourcemanager.googleapis.com"] } ``` @@ -25,7 +25,7 @@ resource "google_project_services" "project" { The following arguments are supported: -* `project_id` - (Required) The project ID. +* `project` - (Required) The project ID. Changing this forces a new project to be created. * `services` - (Required) The list of services that are enabled. Supports diff --git a/website/source/docs/providers/ignition/d/config.html.md b/website/source/docs/providers/ignition/d/config.html.md index 16f5c7c44f..d565758ff9 100644 --- a/website/source/docs/providers/ignition/d/config.html.md +++ b/website/source/docs/providers/ignition/d/config.html.md @@ -15,7 +15,7 @@ Renders an ignition configuration as JSON. It contains all the disks, partition ``` data "ignition_config" "example" { systemd = [ - "${ignition_systemd_unit.example.id}", + "${data.ignition_systemd_unit.example.id}", ] } ``` @@ -55,4 +55,4 @@ The `append` and `replace` blocks supports: The following attributes are exported: -* `rendered` - The final rendered template. \ No newline at end of file +* `rendered` - The final rendered template. diff --git a/website/source/docs/providers/ignition/d/filesystem.html.md b/website/source/docs/providers/ignition/d/filesystem.html.md index 25c2b4fd48..b8ed0eaf46 100644 --- a/website/source/docs/providers/ignition/d/filesystem.html.md +++ b/website/source/docs/providers/ignition/d/filesystem.html.md @@ -18,7 +18,7 @@ data "ignition_filesystem" "foo" { mount { device = "/dev/disk/by-label/ROOT" format = "xfs" - force = true + create = true options = ["-L", "ROOT"] } } @@ -36,14 +36,16 @@ The following arguments are supported: The `mount` block supports: - + * `device` - (Required) The absolute path to the device. Devices are typically referenced by the _/dev/disk/by-*_ symlinks. 
 * `format` - (Required) The filesystem format (ext4, btrfs, or xfs).
 
-* `force` - (Optional) Whether or not the create operation shall overwrite an existing filesystem.
+* `create` - (Optional) Indicates if the filesystem shall be created.
 
-* `options` - (Optional) Any additional options to be passed to the format-specific mkfs utility.
+* `force` - (Optional) Whether or not the create operation shall overwrite an existing filesystem. Only allowed if the filesystem is being created.
+
+* `options` - (Optional) Any additional options to be passed to the format-specific mkfs utility. Only allowed if the filesystem is being created.
 
 ## Attributes Reference
diff --git a/website/source/docs/providers/index.html.markdown b/website/source/docs/providers/index.html.markdown
index b0c8c9de53..53b778a01a 100644
--- a/website/source/docs/providers/index.html.markdown
+++ b/website/source/docs/providers/index.html.markdown
@@ -16,6 +16,6 @@ Terraform is agnostic to the underlying platforms by supporting providers. A
 provider is responsible for understanding API interactions and exposing
 resources. Providers generally are an IaaS (e.g. AWS, GCP, Microsoft Azure,
 OpenStack), PaaS (e.g. Heroku), or SaaS services (e.g. Atlas, DNSimple,
-CloudFlare).
+Cloudflare).
 
 Use the navigation to the left to read about the available providers.
diff --git a/website/source/docs/providers/kubernetes/index.html.markdown b/website/source/docs/providers/kubernetes/index.html.markdown
new file mode 100644
index 0000000000..4a136de281
--- /dev/null
+++ b/website/source/docs/providers/kubernetes/index.html.markdown
@@ -0,0 +1,68 @@
+---
+layout: "kubernetes"
+page_title: "Provider: Kubernetes"
+sidebar_current: "docs-kubernetes-index"
+description: |-
+  The Kubernetes (K8s) provider is used to interact with the resources supported by Kubernetes. The provider needs to be configured with the proper credentials before it can be used.
+---
+
+# Kubernetes Provider
+
+The Kubernetes (K8s) provider is used to interact with the resources supported by Kubernetes. The provider needs to be configured with the proper credentials before it can be used.
+
+Use the navigation to the left to read about the available resources.
+
+-> **Note:** The Kubernetes provider is new as of Terraform 0.9. It is ready to be used but many features are still being added. If there is a Kubernetes feature missing, please report it in the GitHub repo.
+
+## Example Usage
+
+```
+provider "kubernetes" {
+  config_context_auth_info = "ops"
+  config_context_cluster   = "mycluster"
+}
+
+resource "kubernetes_namespace" "example" {
+  metadata {
+    name = "my-first-namespace"
+  }
+}
+```
+
+## Authentication
+
+There are generally two ways to configure the Kubernetes provider.
+
+The provider always first tries to load **a config file** from a given
+(or default) location - this requires a valid `config_context_auth_info` &
+`config_context_cluster`.
+
+The other way is to **statically** define all the credentials:
+
+```
+provider "kubernetes" {
+  host                   = "https://104.196.242.174"
+  username               = "ClusterMaster"
+  password               = "MindTheGap"
+  client_certificate     = "${file("~/.kube/client-cert.pem")}"
+  client_key             = "${file("~/.kube/client-key.pem")}"
+  cluster_ca_certificate = "${file("~/.kube/cluster-ca-cert.pem")}"
+}
+```
+
+If you have **both** a valid configuration in a config file and static
+configuration, the static one takes precedence: any static field overrides its
+counterpart loaded from the config file.
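+
+For example, a minimal sketch of the config-file approach (the kubeconfig path
+and context names below are placeholders, not defaults):
+
+```
+provider "kubernetes" {
+  config_path              = "/path/to/kubeconfig"
+  config_context_auth_info = "ops"
+  config_context_cluster   = "mycluster"
+}
+```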
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `host` - (Optional) The hostname (in the form of a URI) of the Kubernetes master. Can be sourced from `KUBE_HOST`. Defaults to `https://localhost`.
+* `username` - (Optional) The username to use for HTTP basic authentication when accessing the Kubernetes master endpoint. Can be sourced from `KUBE_USER`.
+* `password` - (Optional) The password to use for HTTP basic authentication when accessing the Kubernetes master endpoint. Can be sourced from `KUBE_PASSWORD`.
+* `insecure` - (Optional) Whether the server should be accessed without verifying the TLS certificate. Can be sourced from `KUBE_INSECURE`. Defaults to `false`.
+* `client_certificate` - (Optional) PEM-encoded client certificate for TLS authentication. Can be sourced from `KUBE_CLIENT_CERT_DATA`.
+* `client_key` - (Optional) PEM-encoded client certificate key for TLS authentication. Can be sourced from `KUBE_CLIENT_KEY_DATA`.
+* `cluster_ca_certificate` - (Optional) PEM-encoded root certificates bundle for TLS authentication. Can be sourced from `KUBE_CLUSTER_CA_CERT_DATA`.
+* `config_path` - (Optional) Path to the kube config file. Can be sourced from `KUBE_CONFIG`. Defaults to `~/.kube/config`.
+* `config_context_auth_info` - (Optional) Authentication info context of the kube config (name of the kubeconfig user, `--user` flag in `kubectl`). Can be sourced from `KUBE_CTX_AUTH_INFO`.
+* `config_context_cluster` - (Optional) Cluster context of the kube config (name of the kubeconfig cluster, `--cluster` flag in `kubectl`). Can be sourced from `KUBE_CTX_CLUSTER`.
diff --git a/website/source/docs/providers/kubernetes/r/config_map.html.markdown b/website/source/docs/providers/kubernetes/r/config_map.html.markdown
new file mode 100644
index 0000000000..f8ca26c3f4
--- /dev/null
+++ b/website/source/docs/providers/kubernetes/r/config_map.html.markdown
@@ -0,0 +1,60 @@
+---
+layout: "kubernetes"
+page_title: "Kubernetes: kubernetes_config_map"
+sidebar_current: "docs-kubernetes-resource-config-map"
+description: |-
+  The resource provides mechanisms to inject containers with configuration data while keeping containers agnostic of Kubernetes.
+---
+
+# kubernetes_config_map
+
+The resource provides mechanisms to inject containers with configuration data while keeping containers agnostic of Kubernetes.
+A Config Map can be used to store fine-grained information like individual properties or coarse-grained information like entire config files or JSON blobs.
+
+## Example Usage
+
+```
+resource "kubernetes_config_map" "example" {
+  metadata {
+    name = "my-config"
+  }
+
+  data {
+    api_host = "myhost:443"
+    db_host  = "dbhost:5432"
+  }
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `data` - (Optional) A map of the configuration data.
+* `metadata` - (Required) Standard config map's metadata. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md#metadata
+
+## Nested Blocks
+
+### `metadata`
+
+#### Arguments
+
+* `annotations` - (Optional) An unstructured key value map stored with the config map that may be used to store arbitrary metadata. More info: http://kubernetes.io/docs/user-guide/annotations
+* `generate_name` - (Optional) Prefix, used by the server, to generate a unique name ONLY IF the `name` field has not been provided. This value will also be combined with a unique suffix.
+  Read more: https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md#idempotency
+* `labels` - (Optional) Map of string keys and values that can be used to organize and categorize (scope and select) the config map. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels
+* `name` - (Optional) Name of the config map, must be unique. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names
+* `namespace` - (Optional) Namespace defines the space within which the name of the config map must be unique.
+
+#### Attributes
+
+* `generation` - A sequence number representing a specific generation of the desired state.
+* `resource_version` - An opaque value that represents the internal version of this config map that can be used by clients to determine when the config map has changed. Read more: https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md#concurrency-control-and-consistency
+* `self_link` - A URL representing this config map.
+* `uid` - The unique in time and space value for this config map. More info: http://kubernetes.io/docs/user-guide/identifiers#uids
+
+## Import
+
+Config Map can be imported using its name, e.g.
+
+```
+$ terraform import kubernetes_config_map.example my-config
+```
diff --git a/website/source/docs/providers/kubernetes/r/namespace.html.markdown b/website/source/docs/providers/kubernetes/r/namespace.html.markdown
new file mode 100644
index 0000000000..c24d3e7747
--- /dev/null
+++ b/website/source/docs/providers/kubernetes/r/namespace.html.markdown
@@ -0,0 +1,61 @@
+---
+layout: "kubernetes"
+page_title: "Kubernetes: kubernetes_namespace"
+sidebar_current: "docs-kubernetes-resource-namespace"
+description: |-
+  Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called namespaces.
+---
+
+# kubernetes_namespace
+
+Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called namespaces.
+Read more about namespaces at https://kubernetes.io/docs/user-guide/namespaces/
+
+## Example Usage
+
+```
+resource "kubernetes_namespace" "example" {
+  metadata {
+    annotations {
+      name = "example-annotation"
+    }
+
+    labels {
+      mylabel = "label-value"
+    }
+
+    name = "TerraformExampleNamespace"
+  }
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `metadata` - (Required) Standard namespace's [metadata](https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md#metadata).
+
+## Nested Blocks
+
+### `metadata`
+
+#### Arguments
+
+* `annotations` - (Optional) An unstructured key value map stored with the namespace that may be used to store arbitrary metadata. More info: http://kubernetes.io/docs/user-guide/annotations
+* `generate_name` - (Optional) Prefix, used by the server, to generate a unique name ONLY IF the `name` field has not been provided. This value will also be combined with a unique suffix. Read more about [name idempotency](https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md#idempotency).
+* `labels` - (Optional) Map of string keys and values that can be used to organize and categorize (scope and select) namespaces. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels
+* `name` - (Optional) Name of the namespace, must be unique. Cannot be updated.
+  More info: http://kubernetes.io/docs/user-guide/identifiers#names
+
+#### Attributes
+
+* `generation` - A sequence number representing a specific generation of the desired state.
+* `resource_version` - An opaque value that represents the internal version of this namespace that can be used by clients to determine when namespaces have changed. Read more about [concurrency control and consistency](https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md#concurrency-control-and-consistency).
+* `self_link` - A URL representing this namespace.
+* `uid` - The unique in time and space value for this namespace. More info: http://kubernetes.io/docs/user-guide/identifiers#uids
+
+## Import
+
+Namespaces can be imported using their name, e.g.
+
+```
+$ terraform import kubernetes_namespace.n TerraformExampleNamespace
+```
diff --git a/website/source/docs/providers/openstack/d/networking_network_v2.html.markdown b/website/source/docs/providers/openstack/d/networking_network_v2.html.markdown
index c0bdc33d0f..571a698f11 100644
--- a/website/source/docs/providers/openstack/d/networking_network_v2.html.markdown
+++ b/website/source/docs/providers/openstack/d/networking_network_v2.html.markdown
@@ -24,6 +24,8 @@ data "openstack_networking_network_v2" "network" {
     A Neutron client is needed to retrieve networks ids. If omitted, the
     `OS_REGION_NAME` environment variable is used.
 
+* `network_id` - (Optional) The ID of the network.
+
 * `name` - (Optional) The name of the network.
 
 * `matching_subnet_cidr` - (Optional) The CIDR of a subnet within the network.
diff --git a/website/source/docs/providers/openstack/r/compute_instance_v2.html.markdown b/website/source/docs/providers/openstack/r/compute_instance_v2.html.markdown
index fc1cd2dfe6..1337902377 100644
--- a/website/source/docs/providers/openstack/r/compute_instance_v2.html.markdown
+++ b/website/source/docs/providers/openstack/r/compute_instance_v2.html.markdown
@@ -337,6 +337,10 @@ The following arguments are supported:
   before destroying it, thus giving chance for guest OS daemons to stop correctly.
   If instance doesn't stop within timeout, it will be destroyed anyway.
 
+* `force_delete` - (Optional) Whether the OpenStack instance should be
+  forcefully deleted. This is useful for environments that have reclaim / soft
+  deletion enabled.
+
 The `network` block supports:
 
diff --git a/website/source/docs/providers/pagerduty/index.html.markdown b/website/source/docs/providers/pagerduty/index.html.markdown
index ca834d6e07..5b34c7c2fd 100644
--- a/website/source/docs/providers/pagerduty/index.html.markdown
+++ b/website/source/docs/providers/pagerduty/index.html.markdown
@@ -39,3 +39,4 @@ resource "pagerduty_user" "earline" {
 The following arguments are supported:
 
 * `token` - (Required) The v2 authorization token. See [API Documentation](https://v2.developer.pagerduty.com/docs/authentication) for more information.
+* `skip_credentials_validation` - (Optional) Skip validation of the token against the PagerDuty API.
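+
+As a brief sketch (the variable name below is a placeholder), skipping
+validation can be useful when the token is only injected at apply time:
+
+```
+provider "pagerduty" {
+  token                       = "${var.pagerduty_token}"
+  skip_credentials_validation = true
+}
+```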
diff --git a/website/source/docs/providers/pagerduty/r/schedule.html.markdown b/website/source/docs/providers/pagerduty/r/schedule.html.markdown
index 0f11dcbf8c..6f1d0e7bd8 100644
--- a/website/source/docs/providers/pagerduty/r/schedule.html.markdown
+++ b/website/source/docs/providers/pagerduty/r/schedule.html.markdown
@@ -64,8 +64,8 @@ Schedule layers (`layer`) supports the following:
 Restriction blocks (`restriction`) supports the following:
 
 * `type` - (Required) Can be `daily_restriction` or `weekly_restriction`
-* `start_time_of_day` - (Required) The duration of the restriction in `seconds`.
-* `duration_seconds` - (Required) The start time in `HH:mm:ss` format.
+* `start_time_of_day` - (Required) The start time in `HH:mm:ss` format.
+* `duration_seconds` - (Required) The duration of the restriction in `seconds`.
 
 ## Attributes Reference
 
diff --git a/website/source/docs/providers/rancher/r/certificate.html.markdown b/website/source/docs/providers/rancher/r/certificate.html.markdown
new file mode 100644
index 0000000000..bc0d445ded
--- /dev/null
+++ b/website/source/docs/providers/rancher/r/certificate.html.markdown
@@ -0,0 +1,66 @@
+---
+layout: "rancher"
+page_title: "Rancher: rancher_certificate"
+sidebar_current: "docs-rancher-resource-certificate"
+description: |-
+  Provides a Rancher Certificate resource. This can be used to create certificates for Rancher environments and retrieve their information.
+---
+
+# rancher\_certificate
+
+Provides a Rancher Certificate resource. This can be used to create certificates for Rancher environments and retrieve their information.
+
+## Example Usage
+
+```hcl
+# Create a new Rancher Certificate
+resource rancher_certificate "foo" {
+  name           = "foo"
+  description    = "my foo certificate"
+  environment_id = "${rancher_environment.test.id}"
+  cert           = "${file("server.crt")}"
+  key            = "${file("server.key")}"
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `name` - (Required) The name of the certificate.
+* `description` - (Optional) A certificate description.
+* `environment_id` - (Required) The ID of the environment to create the certificate for.
+* `cert` - (Required) The certificate content.
+* `cert_chain` - (Optional) The certificate chain.
+* `key` - (Required) The certificate key.
+
+## Attributes Reference
+
+The following attributes are exported:
+
+* `cn` - The certificate CN.
+* `algorithm` - The certificate algorithm.
+* `cert_fingerprint` - The certificate fingerprint.
+* `expires_at` - The certificate expiration date.
+* `issued_at` - The certificate creation date.
+* `issuer` - The certificate issuer.
+* `key_size` - The certificate key size.
+* `serial_number` - The certificate serial number.
+* `subject_alternative_names` - The list of certificate Subject Alternative Names.
+* `version` - The certificate version.
+
+## Import
+
+Certificates can be imported using the environment and certificate
+IDs in the format `<environment_id>/<certificate_id>`
+
+```
+$ terraform import rancher_certificate.mycert 1sp31/1c605
+```
+
+If the credentials for the Rancher provider have access to the global API,
+then `environment_id` can be omitted e.g.
+
+```
+$ terraform import rancher_certificate.mycert 1c605
+```
diff --git a/website/source/docs/providers/rancher/r/host.html.markdown b/website/source/docs/providers/rancher/r/host.html.markdown
new file mode 100644
index 0000000000..382d8ac867
--- /dev/null
+++ b/website/source/docs/providers/rancher/r/host.html.markdown
@@ -0,0 +1,36 @@
+---
+layout: "rancher"
+page_title: "Rancher: rancher_host"
+sidebar_current: "docs-rancher-resource-host"
+description: |-
+  Provides a Rancher Host resource. This can be used to manage and delete hosts on Rancher.
+---
+
+# rancher\_host
+
+Provides a Rancher Host resource. This can be used to manage and delete hosts on Rancher.
+
+## Example usage
+
+```hcl
+# Manage an existing Rancher host
+resource rancher_host "foo" {
+  name           = "foo"
+  description    = "The foo node"
+  environment_id = "1a5"
+  hostname       = "foo.example.com"
+
+  labels {
+    role = "database"
+  }
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `name` - (Required) The name of the host.
+* `description` - (Optional) A host description.
+* `environment_id` - (Required) The ID of the environment the host is associated with.
+* `hostname` - (Required) The host name. Used as the primary key to detect the host ID.
+* `labels` - (Optional) A dictionary of labels to apply to the host. Computed internal labels are excluded from that list.
diff --git a/website/source/docs/providers/terraform/d/remote_state.html.md b/website/source/docs/providers/terraform/d/remote_state.html.md
index d5ec49fa08..b2e6a3f088 100644
--- a/website/source/docs/providers/terraform/d/remote_state.html.md
+++ b/website/source/docs/providers/terraform/d/remote_state.html.md
@@ -32,7 +32,7 @@ The following arguments are supported:
 * `backend` - (Required) The remote backend to use.
 * `config` - (Optional) The configuration of the remote backend.
-  * Remote state config docs can be found [here](https://www.terraform.io/docs/state/remote/atlas.html)
+  * Remote state config docs can be found [here](/docs/backends/types/atlas.html)
 
 ## Attributes Reference
diff --git a/website/source/docs/providers/vsphere/r/virtual_disk.html.markdown b/website/source/docs/providers/vsphere/r/virtual_disk.html.markdown
index 39b07e4e8b..b6f97374ce 100644
--- a/website/source/docs/providers/vsphere/r/virtual_disk.html.markdown
+++ b/website/source/docs/providers/vsphere/r/virtual_disk.html.markdown
@@ -14,11 +14,12 @@ Provides a VMware virtual disk resource. This can be used to create and delete
 
 ```
 resource "vsphere_virtual_disk" "myDisk" {
-  size       = 2
-  vmdk_path  = "myDisk.vmdk"
-  datacenter = "Datacenter"
-  datastore  = "local"
-  type       = "thin"
+  size         = 2
+  vmdk_path    = "myDisk.vmdk"
+  datacenter   = "Datacenter"
+  datastore    = "local"
+  type         = "thin"
+  adapter_type = "lsiLogic"
 }
 ```
 
@@ -28,6 +29,7 @@ The following arguments are supported:
 
 * `size` - (Required) Size of the disk (in GB).
 * `vmdk_path` - (Required) The path, including filename, of the virtual disk to be created. This should end with '.vmdk'.
-* `type` - (Optional) 'eagerZeroedThick' (the default), or 'thin' are supported options.
+* `type` - (Optional) 'eagerZeroedThick' (the default), 'lazy', or 'thin' are supported options.
+* `adapter_type` - (Optional) Set the adapter type; 'ide' (the default), 'lsiLogic', or 'busLogic' are supported options.
 * `datacenter` - (Optional) The name of a Datacenter in which to create the disk.
-* `datastore` - (Required) The name of the Datastore in which to create the disk.
\ No newline at end of file +* `datastore` - (Required) The name of the Datastore in which to create the disk. diff --git a/website/source/docs/state/environments.html.md b/website/source/docs/state/environments.html.md index d2323d914d..205a9ac08a 100644 --- a/website/source/docs/state/environments.html.md +++ b/website/source/docs/state/environments.html.md @@ -34,7 +34,7 @@ to switch environments you can use `terraform env select`, etc. For example, creating an environment: ``` -$ terraform env create bar +$ terraform env new bar Created and switched to environment "bar"! You're now on a new, empty environment. Environments isolate their state, @@ -47,14 +47,46 @@ any existing resources that existed on the default (or any other) environment. **These resources still physically exist,** but are managed by another Terraform environment. +## Current Environment Interpolation + +Within your Terraform configuration, you may reference the current environment +using the `${terraform.env}` interpolation variable. This can be used anywhere +interpolations are allowed. + +Referencing the current environment is useful for changing behavior based +on the environment. For example, for non-default environments, it may be useful +to spin up smaller cluster sizes. You can do this: + +``` +resource "aws_instance" "example" { + count = "${terraform.env == "default" ? 5 : 1}" + + # ... other fields +} +``` + +Another popular use case is using the environment as part of naming or +tagging behavior: + +``` +resource "aws_instance" "example" { + tags { Name = "web - ${terraform.env}" } + + # ... other fields +} +``` + ## Best Practices An environment alone **should not** be used to manage the difference between -development, staging, and production. While it is technically possible, it is -much more manageable and safe to use multiple independently managed Terraform -configurations linked together with -[terraform_remote_state](/docs/providers/terraform/d/remote_state.html) -data sources. +development, staging, and production. As Terraform configurations get larger, +it's much more manageable and safer to split one large configuration into many +smaller ones linked together with terraform_remote_state data sources. This +allows teams to delegate ownership and reduce the blast radius of changes. +For each smaller configuration, you can use environments to model the +differences between development, staging, and production. However, if you have +one large Terraform configuration, it is riskier and not recommended to use +environments to model those differences. The [terraform_remote_state](/docs/providers/terraform/d/remote_state.html) resource accepts an `environment` name to target. Therefore, you can link diff --git a/website/source/docs/state/purpose.html.md b/website/source/docs/state/purpose.html.md index ba3eef4b3d..d6e9beb58f 100644 --- a/website/source/docs/state/purpose.html.md +++ b/website/source/docs/state/purpose.html.md @@ -92,7 +92,7 @@ The primary motivation people have to remove state files is in an attempt to improve using Terraform with teams. State files can easily result in conflicts when two people modify infrastructure at the same time. -[Remote state](/docs/state/remote/index.html) is the recommended solution +[Remote state](/docs/state/remote.html) is the recommended solution to this problem. At the time of writing, remote state works well but there are still scenarios that can result in state conflicts. A priority for future versions of Terraform is to improve this. 
diff --git a/website/source/downloads.html.erb b/website/source/downloads.html.erb index 8a97c3f3b7..ab88622742 100644 --- a/website/source/downloads.html.erb +++ b/website/source/downloads.html.erb @@ -32,7 +32,7 @@ description: |- Checkout the v<%= latest_version %> CHANGELOG for information on the latest release.

- Note: if you are upgrading to 0.8 please see the upgrade guide. + Note: if you are upgrading to 0.9, please see the upgrade guide.

diff --git a/website/source/index.html.erb b/website/source/index.html.erb index 151c9b8f28..34f99a2f40 100644 --- a/website/source/index.html.erb +++ b/website/source/index.html.erb @@ -27,6 +27,13 @@ Get Started

+
+
+

+ Announcing Terraform Enterprise, collaboration for teams. Learn more. +

+
+
@@ -162,23 +169,14 @@

Latest

-
-
- -

Join the live webinar to learn about provisioning Microsoft Azure with HashiCorp Terraform and see a demo

-

- Register Now -

-
-
-
+
-

Terraform 0.8 Released

+

Terraform 0.9 Released

- Terraform continues to be HashiCorp's fastest growing project. Read the highlights from the 0.8 release + Terraform 0.9 adds major new functionality. Read the highlights from the 0.9 release

- Read more + Read more

diff --git a/website/source/intro/getting-started/build.html.md b/website/source/intro/getting-started/build.html.md index f770b2228d..ac278cb71a 100644 --- a/website/source/intro/getting-started/build.html.md +++ b/website/source/intro/getting-started/build.html.md @@ -74,6 +74,16 @@ AWS access key and secret key, available from We're hardcoding them for now, but will extract these into variables later in the getting started guide. +~> **Note**: If you simply leave out AWS credentials, Terraform will +automatically search for saved API credentials (for example, +in `~/.aws/credentials`) or IAM instance profile credentials. +This option is much cleaner for situations where `.tf` files are checked into +source control or where there is more than one admin user. +See details [here](https://aws.amazon.com/blogs/apn/terraform-beyond-the-basics-with-aws/). +Keeping IAM credentials out of the Terraform configs also keeps them out of +source control, and lets each user supply their own IAM credentials +without modifying the configuration files. + This is a complete configuration that Terraform is ready to apply. The general structure should be intuitive and straightforward. @@ -180,7 +190,7 @@ by default. This state file is extremely important; it maps various resource metadata to actual resource IDs so that Terraform knows what it is managing. This file must be saved and distributed to anyone who might run Terraform. It is generally recommended to -[setup remote state](https://www.terraform.io/docs/state/remote/index.html) +[setup remote state](https://www.terraform.io/docs/state/remote.html) when working with Terraform. This will mean that any potential secrets stored in the state file, will not be checked into version control diff --git a/website/source/intro/getting-started/destroy.html.md b/website/source/intro/getting-started/destroy.html.md index 76e2a4547e..c986e489af 100644 --- a/website/source/intro/getting-started/destroy.html.md +++ b/website/source/intro/getting-started/destroy.html.md @@ -63,4 +63,6 @@ resources, Terraform will destroy in the proper order. You now know how to create, modify, and destroy infrastructure from a local machine. -Next, we learn how to [use Terraform remotely and the associated benefits](/intro/getting-started/remote.html). +Next, we move on to features that make Terraform configurations +slightly more useful: [variables, resource dependencies, provisioning, +and more](/intro/getting-started/dependencies.html). diff --git a/website/source/intro/getting-started/modules.html.md b/website/source/intro/getting-started/modules.html.md index 92d828f7fa..e7ce4c706f 100644 --- a/website/source/intro/getting-started/modules.html.md +++ b/website/source/intro/getting-started/modules.html.md @@ -83,7 +83,7 @@ $ terraform get This command will download the modules if they haven't been already. By default, the command will not check for updates, so it is safe (and fast) -to run multiple times. You can use the `-u` flag to check and download +to run multiple times. You can use the `-update` flag to check and download updates. ## Planning and Apply Modules @@ -160,6 +160,4 @@ For more information on modules, the types of sources supported, how to write modules, and more, read the in depth [module documentation](/docs/modules/index.html). -We've now concluded the getting started guide, however -there are a number of [next steps](/intro/getting-started/next-steps.html) -to get started with Terraform.
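To make the credentials note added to `build.html.md` above concrete, here is a minimal sketch of a provider block with no hardcoded keys, assuming credentials live in `~/.aws/credentials` or an instance profile (the region value is an illustrative placeholder):

```hcl
# No access_key or secret_key here: Terraform falls back to saved API
# credentials (for example, ~/.aws/credentials) or, on an EC2 instance,
# the IAM instance profile. The region is a placeholder assumption.
provider "aws" {
  region = "us-east-1"
}
```

This keeps secrets out of any `.tf` files checked into source control, and each admin user can supply their own credentials outside the configuration.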
+Next, we learn how to [use Terraform remotely and the associated benefits](/intro/getting-started/remote.html). diff --git a/website/source/intro/getting-started/remote.html.markdown b/website/source/intro/getting-started/remote.html.markdown index 125eb00dc6..7a1bc13aca 100644 --- a/website/source/intro/getting-started/remote.html.markdown +++ b/website/source/intro/getting-started/remote.html.markdown @@ -6,68 +6,152 @@ description: |- We've now seen how to build, change, and destroy infrastructure from a local machine. However, you can use Atlas by HashiCorp to run Terraform remotely to version and audit the history of your infrastructure. --- -# Why Use Terraform Remotely? +# Remote Backends + We've now seen how to build, change, and destroy infrastructure from a local machine. This is great for testing and development, however in production environments it is more responsible to run Terraform remotely and store a master Terraform state remotely. -[Atlas](https://atlas.hashicorp.com/?utm_source=oss&utm_medium=getting-started&utm_campaign=terraform), -HashiCorp's solution for Terraform remote, runs an -infrastructure version control. Running Terraform -in Atlas allows teams to easily version, audit, and collaborate +Terraform has a feature known as [remote backends](/docs/backends) +to support this. Backends are the recommended way to use Terraform in +a team environment. + +Depending on the features you wish to use, Terraform has multiple remote +backend options. You could use Consul for state storage, locking, and +environments. This is a free and open source option. You can use S3, which +only supports state storage, for a low-cost and minimally featured solution. + +[Terraform Enterprise](https://www.hashicorp.com/products/terraform/?utm_source=oss&utm_medium=getting-started&utm_campaign=terraform) +is HashiCorp's commercial solution and also acts as a remote backend. +Terraform Enterprise allows teams to easily version, audit, and collaborate on infrastructure changes. Each proposed change generates a Terraform plan which can be reviewed and collaborated on as a team. -When a proposed change is accepted, the Terraform logs are stored -in Atlas, resulting in a linear history of infrastructure states to +When a proposed change is accepted, the Terraform logs are stored, +resulting in a linear history of infrastructure states to help with auditing and policy enforcement. Additional benefits to running Terraform remotely include moving access credentials off of developer machines and releasing local machines from long-running Terraform processes. -# How to Use Terraform Remotely -You can learn how to use Terraform remotely with our [interactive tutorial](https://atlas.hashicorp.com/tutorial/terraform/?utm_source=oss&utm_medium=getting-started&utm_campaign=terraform) -or you can follow the outlined steps below. +## How to Store State Remotely -First, If you don't have an Atlas account, you can [create an account here](https://atlas.hashicorp.com/account/new?utm_source=oss&utm_medium=getting-started&utm_campaign=terraform). +First, we'll use [Consul](https://www.consul.io) as our backend. Consul +is a free and open source solution that provides state storage, locking, and +environments. It is a great way to get started with Terraform backends. -The Terraform CLI uses your `Atlas Token` to securely communicate with your Atlas account. To generate a token: from the main menu, select your username in the left side navigation menu to access your profile.
Under `Personal`, click on the `Tokens` tab and hit `Generate`. +We'll use the [demo Consul server](https://demo.consul.io) for this guide. +This should not be used for real data. Additionally, the demo server doesn't +permit locking. If you want to play with [state locking](/docs/state/locking.html), +you'll have to run your own Consul server or use a backend that supports locking. -For the purposes of this tutorial you can use this token by exporting it to your local shell session: +First, configure the backend in your configuration: + +``` +terraform { + backend "consul" { + address = "demo.consul.io" + path = "getting-started-RANDOMSTRING" + lock = false + } +} +``` + +Please replace "RANDOMSTRING" with some random text. The demo server is +public and we want to avoid colliding with someone else running +through the getting started guide. + +The `backend` section configures the backend you want to use. After +configuring a backend, run `terraform init` to set up Terraform. It should +ask if you want to migrate your state to Consul. Say "yes" and Terraform +will copy your state. + +Now, if you run `terraform plan`, Terraform should state that there are +no changes: + +``` +$ terraform plan +... + +No changes. Infrastructure is up-to-date. + +This means that Terraform did not detect any differences between your +configuration and real physical resources that exist. As a result, Terraform +doesn't need to do anything. +``` + +Terraform is now storing your state remotely in Consul. Remote state +storage makes collaboration easier and keeps state and secret information +off your local disk. Remote state is loaded only in memory when it is used. + +If you want to move back to local state, you can remove the backend configuration +block from your configuration and run `terraform init` again. Terraform will +once again ask if you want to migrate your state back to local. + +## Terraform Enterprise + +HashiCorp (the makers of Terraform) also provides a commercial solution which +functions as a Terraform backend as well as enabling many other features such +as remote apply, run history, state history, state diffing, and more. + +This section will guide you through a demo of Terraform Enterprise. Note that +this is commercial software. If you are not interested at this time, you may +skip this section. + +First, [create an account here](https://atlas.hashicorp.com/account/new?utm_source=oss&utm_medium=getting-started&utm_campaign=terraform) unless you already have one. + +Terraform uses your access token to securely communicate with Terraform +Enterprise. To generate a token: select your username in the left side +navigation menu, click "Accounts Settings", click "Tokens", then click +"Generate". + +For the purposes of this tutorial you can use this token by exporting it to +your local shell session: ``` $ export ATLAS_TOKEN=ATLAS_ACCESS_TOKEN ``` -Replace `ATLAS_ACCESS_TOKEN` with the token generated earlier -Then configure [Terraform remote state storage](/docs/commands/remote.html) with the command: +Replace `ATLAS_ACCESS_TOKEN` with the token generated earlier. Next, +configure the Terraform Enterprise backend: ``` -$ terraform remote config -backend-config="name=ATLAS_USERNAME/getting-started" +terraform { + backend "atlas" { + name = "USERNAME/getting-started" + } +} ``` -Replace `ATLAS_USERNAME` with your Atlas username. +Replace `USERNAME` with your Terraform Enterprise username. Note that the +backend name is "atlas" for legacy reasons and will be renamed soon.
-Before you [push](/docs/commands/push.html) your Terraform configuration to Atlas you'll need to start a local version control system with at least one commit. Here is an example using `git`. +Remember to run `terraform init`. At this point, Terraform is using Terraform +Enterprise for everything we showed earlier with Consul. Next, we'll show you some +additional functionality Terraform Enterprise enables. + +Before you [push](/docs/commands/push.html) your Terraform configuration to +Terraform Enterprise, you'll need to start a local version control system with +at least one commit. Here is an example using `git`. ``` $ git init $ git add example.tf $ git commit -m "init commit" ``` -Next, [push](/docs/commands/push.html) your Terraform configuration to Atlas with: + +Next, [push](/docs/commands/push.html) your Terraform configuration: ``` -$ terraform push -name="ATLAS_USERNAME/getting-started" +$ terraform push ``` This will automatically trigger a `terraform plan`, which you can -review in the [Environments tab in Atlas](https://atlas.hashicorp.com/environments). +review on the [Terraform page](https://atlas.hashicorp.com/terraform). If the plan looks correct, hit "Confirm & Apply" to execute the infrastructure changes. -# Version Control for Infrastructure -Running Terraform in Atlas creates a complete history of +Running Terraform in Terraform Enterprise creates a complete history of infrastructure changes, a sort of version control for infrastructure. Similar to application version control systems such as Git or Subversion, this makes changes to @@ -81,6 +165,6 @@ You now know how to create, modify, destroy, version, and collaborate on infrastructure. With these building blocks, you can effectively experiment with any part of Terraform. -Next, we move on to features that make Terraform configurations -slightly more useful: [variables, resource dependencies, provisioning, -and more](/intro/getting-started/dependencies.html). +We've now concluded the getting started guide; however, +there are a number of [next steps](/intro/getting-started/next-steps.html) +to get started with Terraform. diff --git a/website/source/intro/use-cases.html.markdown b/website/source/intro/use-cases.html.markdown index 41357e7b00..212bf0e09a 100644 --- a/website/source/intro/use-cases.html.markdown +++ b/website/source/intro/use-cases.html.markdown @@ -22,7 +22,7 @@ non-trivial applications quickly need many add-ons and external services. Terraform can be used to codify the setup required for a Heroku application, ensuring that all the required add-ons are available, but it can go even further: configuring -DNSimple to set a CNAME, or setting up CloudFlare as a CDN for the +DNSimple to set a CNAME, or setting up Cloudflare as a CDN for the app. Best of all, Terraform can do all of this in under 30 seconds without using a web interface. diff --git a/website/source/intro/vs/cloudformation.html.markdown b/website/source/intro/vs/cloudformation.html.markdown index 382a76582d..abb3128432 100644 --- a/website/source/intro/vs/cloudformation.html.markdown +++ b/website/source/intro/vs/cloudformation.html.markdown @@ -17,7 +17,7 @@ Terraform similarly uses configuration files to detail the infrastructure setup, but it goes further by being both cloud-agnostic and enabling multiple providers and services to be combined and composed.
For example, Terraform can be used to orchestrate an AWS and OpenStack cluster simultaneously, -while enabling 3rd-party providers like CloudFlare and DNSimple to be integrated +while enabling 3rd-party providers like Cloudflare and DNSimple to be integrated to provide CDN and DNS services. This enables Terraform to represent and manage the entire infrastructure with its supporting services, instead of only the subset that exists within a single provider. It provides a single diff --git a/website/source/layouts/_announcement-bnr.erb b/website/source/layouts/_announcement-bnr.erb deleted file mode 100644 index 4773605a68..0000000000 --- a/website/source/layouts/_announcement-bnr.erb +++ /dev/null @@ -1,18 +0,0 @@ -
-
-
-
-

- Announcing - - Collaborative Infrastructure Automation - - Find out more - -

-
-
-
-
diff --git a/website/source/layouts/_header.erb b/website/source/layouts/_header.erb index 8d98f7095d..29b428dc02 100644 --- a/website/source/layouts/_header.erb +++ b/website/source/layouts/_header.erb @@ -1,12 +1,10 @@